Jens Willmer

Tutorials, projects, dissertations and more..

All posts in one long list

GitHub commit signing

In this post I explain all steps to get the nice green Verified badge on GitHub commits when publishing from Windows via GitHub Desktop.

Verified Commit

Generate a new GPG key

  • Download GnuPG and install it.
  • Open Git bash
  • Start generating a key with gpg --full-generate-key
  • Use key type RSA and RSA
  • Set key size to 4096
  • Define how long the key should be valid
  • Enter user information

The email must match your verified GitHub email address. You can also use the no-reply address that GitHub provides.

  • Add a passphrase to secure your key. This needs to be supplied on any commit.
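If you prefer an unattended setup, the interactive prompts above can also be answered from a parameter file using GnuPG's batch mode. This is only a sketch: the name, email, expiry and passphrase below are placeholders you need to replace with your own values.

```shell
# Write a parameter file for unattended key generation (GnuPG batch mode).
# All values below are placeholders matching the interactive choices above.
cat > gen-key-params <<'EOF'
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Your Name
Name-Email: your-verified@github-email.tld
Expire-Date: 1y
Passphrase: change-me
EOF

# Then generate the key from the parameter file (requires GnuPG):
# gpg --batch --full-generate-key gen-key-params
```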

Removing the passphrase from an existing key can be done by setting the password to empty.

$ gpg --list-secret-keys
  sec   4096R/XXXX <creation date>
  uid                  name <email.address>
  ssb   4096R/YYYY <creation date>
$ gpg --edit-key XXXX
gpg> passwd
gpg> save

Export and backup your public and private key

$ gpg --list-secret-keys --keyid-format LONG
  sec   4096R/XXXX <creation date>
  uid                  name <email.address>
  ssb   4096R/YYYY <creation date>
$ gpg --armor --export XXXX
$ gpg --armor --export-secret-key XXXX

Configure your system

  • Create a new GPG key in the GitHub user settings under SSH and GPG keys and add your public key

  • Look up the path of your GPG binary via where gpg
  • Escape the path like this C:\\Program Files\\Git\\usr\\bin\\gpg.exe
  • Open your .gitconfig file located in your home directory or execute git config --global --edit on the command line to open it
  • Add or update the following settings in this file
# YOUR_SIGNING_KEY is the same key ID you used when exporting your GPG key

[user]
  signingkey = YOUR_SIGNING_KEY
[gpg]
  program = GPG_BINARY_PATH
[commit]
  gpgsign = true
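If you prefer the command line over editing the file by hand, the same three settings can be written with git config. YOUR_SIGNING_KEY and GPG_BINARY_PATH are the placeholders from the steps above.

```shell
# Equivalent to editing .gitconfig directly; placeholders as above.
git config --global user.signingkey YOUR_SIGNING_KEY
git config --global gpg.program "GPG_BINARY_PATH"
git config --global commit.gpgsign true
```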

Now you can start the GitHub Desktop app and commit something. When you open your new commit on GitHub you should see the Verified badge!


GitHub with multiple SSH deployment keys

This post explains how to use multiple deployment keys with Git and is my summary of a blog post by ramachandra1. The post builds upon the previous post about GitHub SSH based authentication.

Modify your SSH config to reference multiple host aliases: nano ~/.ssh/config

Host ProjectAliasOne
    HostName github.com
    User git
    IdentityFile ~/.ssh/github/project-one/id_rsa
Host ProjectAliasTwo
    HostName github.com
    User git
    IdentityFile ~/.ssh/github/project-two/id_rsa

Register the SSH aliases in the SSH agent via:

eval `ssh-agent`
ssh-add ~/.ssh/github/project-one/id_rsa
ssh-add ~/.ssh/github/project-two/id_rsa

Change the access rights of the SSH key files, otherwise SSH refuses to use them and you get a constant password prompt from GitHub:

chmod 600 ~/.ssh/github/project-one/id_rsa
chmod 600 ~/.ssh/github/project-two/id_rsa

Test that it is working:

ssh -T git@ProjectAliasOne
ssh -T git@ProjectAliasTwo

Now you can use the following syntax to pull a GitHub repository:

git clone git@ProjectAliasOne:repo-owner-name/repo-name.git

Custom message of the day with MOTD

In this tutorial you learn how to modify the welcome message of your Linux system. This message is shown when you log in via SSH.

The simplest modification is to edit the content of the /etc/motd file. This however only works for static text and not if you want to compute some of the displayed values.

To add dynamic content to the message of the day you start by disabling MOTD and removing the MOTD content file as well as the default MOTD script:

systemctl disable motd
rm -f /etc/motd
rm /etc/update-motd.d/10-uname

Next, we add our own MOTD script file. The number in front defines the order in which the script files are executed.

touch /etc/update-motd.d/10-info
chmod a+x /etc/update-motd.d/*
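Before adding anything fancy, a minimal script is enough to see the mechanism work: everything the script prints to stdout becomes part of the login banner. The lines below are just example values.

```shell
#!/bin/sh
# Minimal dynamic MOTD sketch: everything printed here is shown at login.
echo "Welcome to $(hostname)"
echo "Load average: $(cut -d' ' -f1 /proc/loadavg)"
```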

Now we can add our own content to this file. I found a code snippet1 to display a nice raspberry:

ASCII Raspberry
echo "$(tput setaf 2)
   .~~.   .~~.
  '. \ ' ' / .'$(tput setaf 1)
   .~ .~~~..~.
  : .~.'~'.~. :
 ~ (   ) (   ) ~
( : '~'.~.'~' : )
 ~ .~ (   ) ~. ~
  (  : '~' :  ) $(tput sgr0)Raspberry Pi$(tput setaf 1)
   '~ .~~~. ~'
$(tput sgr0)"
To test your work you can execute your script by running sh /etc/update-motd.d/10-info or start a new SSH session.

For my home server I display the name of my server in ASCII art that I generated using an ASCII art generator2 followed by a summary of the system information:

System Information

upSeconds="$(/usr/bin/cut -d. -f1 /proc/uptime)"
secs=$((upSeconds%60))
mins=$((upSeconds/60%60))
hours=$((upSeconds/3600%24))
days=$((upSeconds/86400))
UPTIME=`printf "%d days, %02dh%02dm%02ds" "$days" "$hours" "$mins" "$secs"`

# get the load averages
read one five fifteen rest < /proc/loadavg

echo "
   _____                  _____
  |  |  |___ _____ ___   |   __|___ ___ _ _ ___ ___
  |     | . |     | -_|  |__   | -_|  _| | | -_|  _|
  |__|__|___|_|_|_|___|  |_____|___|_|  \_/|___|_|

  `uname -srmo`
  `date -u`

  Last login.........: `lastlog -u pi | awk 'NR==2 {$1=$2=$3=""; print $0}' | awk '$1=$1'` from `lastlog -u pi | awk 'NR==2 {print $3}'`
  Uptime.............: ${UPTIME}
  Temperature........: `/opt/vc/bin/vcgencmd measure_temp | awk -F '[/=]' '{print $2}'`
  Load Averages......: ${one} (1 minute) ${five} (5 minutes) ${fifteen} (15 minutes)
  Memory.............: `free -h | awk 'NR==2 {print $4}'` (Free) / `free -h | awk 'NR==2 {print $2}'` (Total)
  Root Drive.........: `df -h -x tmpfs -x vfat -x devtmpfs | awk 'NR==2 {print $5 " (" $3 "/" $2 ") used on " $1 }'`
  Media Drive........: `df -h -x tmpfs -x vfat -x devtmpfs | awk 'NR==3 {print $5 " (" $3 "/" $2 ") used on " $1 }'`
  Media Drive 2......: `df -h -x tmpfs -x vfat -x devtmpfs | awk 'NR==4 {print $5 " (" $3 "/" $2 ") used on " $1 }'`
  IP Addresses.......: `ifconfig eth0 | grep "inet " | awk '{print $2}'` / `ifconfig eth0 | grep "inet6" | awk 'NR==1 {print $2}'`
Note that I hard-coded the user, my hard drives and my network interface. If you want to use my script you need to update these values to fit your system.

If you want to remove the last login information that is added by the SSH daemon, you can disable it by editing /etc/ssh/sshd_config and adding the following line:

PrintLastLog no


Gear ratio basics

The first thing to remember when calculating gear ratios is that we compare two distances with each other. If we compare a circle with a radius of 100 to a circle with a radius of 50 we get a ratio of 0.5. The same is true if we compare the circumferences of both circles.

Gear ratio simplification with two circles.

The second important rule to remember is that for intermeshing gears to work, the tooth spacing as well as the tooth size needs to be the same on both gears.

Different teeth dimensions can't interact with each other.

In the following picture we have two gears that engage with each other. Two or more gears that engage with each other are called a gear train. The driver gear that starts the motion has 40 teeth and the driven gear has 80 teeth. To get the gear ratio you divide the teeth of the driven gear (80) by the teeth of the driver gear (40). In this example the calculation looks like this:

\[Ratio = {driven \over drive } = {80 \over 40 } = {2 \over 1 } = 2 : 1\]
Gear train with a 2:1 ratio.

If you have more than two gears in a gear train you need to identify the driver and the driven gear. In the following picture they are called input and output. The gears between the driver and the driven gear are called idlers. Idler gears are usually used to change the direction of rotation or to overcome space limitations that prevent you from attaching your output gear directly to your input gear. The idler gears can be ignored in the ratio calculation. Use the same calculation as above to get your ratio:

\[Ratio = {output \over input } = {100 \over 50 } = {2 \over 1 } = 2 : 1\]
Gear train with idler gear in between.

To calculate the ratio of compound gears you have to split them up into gear trains first and calculate those as described above. For the following example we get two ratio calculations:

\[Ratio_{(P+G)} = {green \over pink } = {80 \over 40 } = {2 \over 1 } = 2 : 1\] \[Ratio_{(B+O)} = {orange \over blue } = {120 \over 20 } = {6 \over 1 } = 6 : 1\]
Note that we can't calculate yellow since it has no other gear to interact with.
Compound gear train.

To get the end result for the compound gear example we multiply our calculations from the last step:

\[Ratio_{(sum)} = {green \over pink } * {orange \over blue } = {2 \over 1 } * {6 \over 1 } = {12 \over 1 } = 12 : 1\]
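The compound result can be double-checked with simple integer arithmetic; here is a small shell sketch using the tooth counts from the example:

```shell
# Tooth counts from the compound example above.
r1=$(( 80 / 40 ))     # pink -> green train:  2:1
r2=$(( 120 / 20 ))    # blue -> orange train: 6:1
echo "total ratio: $(( r1 * r2 )) : 1"
```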

GitHub SSH based authentication

This is just a quick write-up about how to set up SSH based authentication on GitHub, since the GitHub documentation is missing some key parts and I search for it every time on the Internet.

Make sure openssh-client and git are installed:

sudo apt update && sudo apt install -y openssh-client git

Create GitHub SSH directory:

mkdir -p ~/.ssh/github
chmod 700 ~/.ssh ~/.ssh/github

Generate the SSH key and add your GitHub project or account as a comment so you know where you use this key:

ssh-keygen -t rsa -b 4096 -C 'Comment' -f ~/.ssh/github/id_rsa -q -N ''

Create an SSH config:

touch ~/.ssh/config
chmod 600 ~/.ssh/config

Edit the SSH config and reference the GitHub key: nano ~/.ssh/config

Host github.com
    User git
    IdentityFile ~/.ssh/github/id_rsa

Output public key and copy it to the GitHub deploy section in project or account settings:

cat ~/.ssh/github/
Account settings
Project settings

Test the authentication to GitHub:

ssh -T git@github.com

Use the SSH URL option in GitHub to clone your repository. In case you have an existing repository that was cloned using HTTPS you can open .git/config in the root directory of your project and replace the URL with the one you get from the SSH option.
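Instead of editing .git/config by hand you can also rewrite the remote with git remote set-url. The block below demonstrates this in a throw-away repository; repo-owner-name/repo-name is a placeholder.

```shell
# Demo in a temporary repository: switch the origin remote from HTTPS to SSH.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q demo
cd demo
git remote add origin https://github.com/repo-owner-name/repo-name.git
git remote set-url origin git@github.com:repo-owner-name/repo-name.git
git remote get-url origin
```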

Independent IoT System (3) Lessons learned and next steps

This project creates a system that can run an ARM Linux server 24/7 without any external power. The system will run on batteries that are charged by a solar panel. A remote monitoring solution will track the system and collect sensor data.

Finalizing the build

To complete the project the power converter needs to be installed. It was glued into place and connected as shown in the picture on the right. The power cables from the solar panel to the battery and to the consumer are passed through the current and power monitor.

Then the WIFI dongle was mounted to the cover with Velcro tape and two holes were drilled for the WIFI antenna connector and the solar panel cable.

Electronics installation

Electronics installation

A PG7 cable gland was used to get a sealed connection for the solar power cable. And the solar panel was mounted to a tripod with the help of a 3D printed mount.

PG7 Cable Gland

Solar cell mount

This is how the watertight enclosure looks from the outside.

Closed enclosure

Closed enclosure

Lessons learned

  • Measuring the Raspberry Pi consumption from the Raspberry Pi is not possible since the script that takes the measurements increases the CPU load significantly. My readings are 100 mA off.

  • The solar cell is producing around 300-450 mA on average. But the Raspberry Pi Zero with an external WIFI dongle requires the same amount. With this configuration I’m not able to run the system 24/7.

  • A regular USB WIFI dongle is consuming a lot of power!

  • A Raspberry Pi Zero W needs around 170 mA with WIFI turned on. But mounting a U.FL RF connector to the Raspberry Pi Zero is close to impossible without professional tools. Without this connector we can’t mount an external WIFI antenna.

  • I tested an Orange Pi Zero as a Raspberry Pi replacement since it has an external antenna port. It works really well and only consumes 130 mA on idle. But(!) when my scheduled scripts on the single-board computer (SBC) detect low battery they force the SBC to shut down. I found out that the Orange Pi does not do a clean shutdown and will continue draining the battery in halt state. It continues to consume 80 mA!

Next steps

My short-term solution was ordering another solar panel and connecting the two in parallel. Additionally, I’m currently working on a Savonius wind turbine design that can provide additional power.

I’m also thinking about replacing the SBC with an Arduino. Then all my power concerns would be gone, but I would trade them against the flexibility a full Linux system provides. I haven’t decided yet how I want to proceed.

Independent IoT System (2) Case build, software and scripts

This project creates a system that can run an ARM Linux server 24/7 without any external power. The system will run on batteries that are charged by a solar panel. A remote monitoring solution will track the system and collect sensor data.

Case Assembly

After collecting all materials and printing the 3D models the case gets assembled. But before the assembly can start, insert nuts (M3) need to be pressed into the case so the 3D model layers can be fixed in place. The batteries can be fixed with Velcro tape, zip ties or a combination of both. The battery level can then be screwed directly onto the screw holes. After that, hex spacer screws are used to get the necessary space between the battery packs and the electronics.

Wait with the installation of the insulation until all components have been dry fitted.

Battery Assembly

Battery assembly


Screw the Raspberry Pi into place as a reference point for the ribbon cable. Then add the pin plug and the two sockets to the cable. Try to attach the cable sockets in line with the cable because offsetting will bend the cable when connecting the hats.

Cable management

Cable management

Ribbon cable fold

Ribbon cable fold

Next attach the hats, fit the cables, glue the button to the board, connect the temp/humidity sensor to the bottom of the board and take care of the cable management. Adding the temp/humidity sensor to the bottom will give better readings since it should measure the battery conditions. Adding the sensor on top of the electronics would probably alter the readings since the Pi will dissipate some heat.

Electronic Assembly

Electronic Assembly

Dry fit the two layers in the enclosure. If all fits well, add the insulation. The external WIFI dongle will later be mounted (glued?) to the box and a hole for the external antenna and the solar panel cable will be drilled.

Terminal Pin Connections

Connect the cables to the terminal as displayed in the following picture.

Pin layout

Pin layout

System Preparation

The following configurations and libraries need to be present for the scripts to work.

Python Libraries

sudo apt-get install python3-pip
sudo apt-get install python3-pil
sudo apt-get install python3-numpy
sudo pip3 install RPi.GPIO
sudo pip3 install spidev

ePaper Libraries

Install BCM2835

tar zxvf bcm2835-1.60.tar.gz 
cd bcm2835-1.60/
sudo ./configure
sudo make
sudo make check
sudo make install
#For more details, please refer to

Install wiringPi

sudo apt-get install wiringpi

#For Pi 4, you need to update it:
cd /tmp
sudo dpkg -i wiringpi-latest.deb
gpio -v
#You will get 2.52 information if you install it correctly

Optional: Download example code to get the Waveshare epd library:

sudo git clone
cd e-Paper/RaspberryPi\&JetsonNano


The scripts are published in a GitHub repository called jwillmer/IndependentIoT. Clone the repository and follow the readme to get started. This is my first time using Python; if you can improve the code feel free to make a pull request!

The repository contains two scripts. One is executed every 15 minutes by the system; it collects sensor data and posts the data to the remote endpoint or a CSV file. The second script monitors the button press event. As soon as someone presses the button the script collects all sensor data and presents it on the ePaper screen. This way the ePaper screen only changes when there is a need for it.
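The 15-minute interval mentioned above maps directly to a cron schedule. Below is a sketch of the crontab line; /home/pi/IndependentIoT/sensor.py is a placeholder path for wherever you cloned the repository.

```shell
# Write the schedule to a file instead of installing it directly;
# /home/pi/IndependentIoT/sensor.py is a placeholder path.
echo '*/15 * * * * /usr/bin/python3 /home/pi/IndependentIoT/sensor.py' > my-crontab
cat my-crontab
# Install it with: crontab my-crontab
```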


In the next part of the series we will install the solar panel, the WIFI antenna and connect the PSU.

Independent IoT System (1) BoM and 3D models

This project creates a system that can run an ARM Linux server 24/7 without any external power. The system will run on batteries that are charged by a solar panel. A remote monitoring solution will track the system and collect sensor data.

The initial data collection will focus on the system performance:

  • solar power output
  • battery levels
  • system power consumption
  • temperature and moisture inside the case

Bill of materials

Most of the parts were already present from other projects. In case all the parts need to be ordered the bill will be around 150€ (without shipping). But it can also be much cheaper if you do not use the convenient prototype boards and shields.

Electrical Junction Box IP67 Waterproof
Solar Panel 5V, 840mA
2x 10000mAh Lipo cells
Raspberry Pi Zero
Screw terminal
WIFI Antenna 2.4Ghz, 5dbi
SMA Pigtail
Current/Voltage/Power Monitor
E-Ink display module
UPS mobile power board
Temperature/humidity sensor
Hex spacers screws
Ribbon cable 40 pin
40 pin plug
40 pin socket
Foam insulator

3D Model

The 3D models are made in Fusion 360. The source file1 as well as the STL files of the battery level2 and the electronics level3 can be found at the bottom of the post. The latest versions can be found in the GitHub repository that will be published with the second part of the series.

The 3D model of the battery grill has a spacer in the middle to allow air ventilation from below. This is not necessary when mounting the two batteries with Velcro (instead of zip ties) since the Velcro tape will already create enough space between the battery and the battery grill. In this case the insulation can be placed closer to the battery grill.

The 3D model of the electronics level has holes for the 40-pin sockets. The modules (hats) will be connected by a 40-pin ribbon cable that will be mounted at the bottom of the plate. The two standoffs in the picture are supporting the modules since the 40-pin sockets are preventing mounting screws. The rest of the holes are for fasteners and cable outlets.

The Raspberry Pi as well as the modules will be fastened by hex spacer screws. One spacer screw will be mounted at the bottom of the electronics level to increase steadiness.

Since some components can be scavenged from other projects (button and sensors) it is hard to add specific mounting holes that are flexible enough to accommodate the different dimensions. For this reason, the model does not include holes for all components. These components will be mounted with the use of a glue gun.

Enclosure slice
Battery grill
Electronics level
Box dimensions (the small holes are not on the same level as the big holes!)
Box dimensions (not 100% accurate!)


In the next part of the series we will assemble the parts, take care of the wiring and setup the scripts.

WireGuard Proxy Configuration

In this tutorial I explain how you configure WireGuard on your devices to access remote networks.

The network layout

We will have one office location with its own local network ( and a VPN network with the IP range Our network layout will have one central VPN server in the cloud that can be reached via the domain vpn.domain.tld. The goal of this tutorial is to make the office network accessible to other clients that are connected to the central VPN server.

Network Layout

Network layout


Before we start with the configuration, we need to install WireGuard on all devices. The best way is to follow the official WireGuard installation instructions.


Next, we create private and public keys for each WireGuard installation.

  • Change the directory to the WireGuard folder
  • Prevent credential leaks in race conditions
  • Generate key pair
$ cd /etc/wireguard/
$ umask 077
$ wg genkey | tee privatekey | wg pubkey > publickey

If you like to add a mobile device to your VPN network you should create an additional key pair on one of the devices.

$ wg genkey | tee mobile-client-privatekey | wg pubkey > mobile-client-publickey

I will show you later how you can transfer the configuration to your mobile once it is complete.


Before we start, we need to enable IP forwarding on the servers. Edit /etc/sysctl.conf on the VPN server and office server and uncomment these lines:

$ sudo nano /etc/sysctl.conf

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1

$ sudo sysctl -p /etc/sysctl.conf

Now we can start with the configuration of WireGuard:

  • Switch to the WireGuard folder
  • Create a new configuration file
  • Display the private and public key in the console. (You will need them in the next step.)
$ cd /etc/wireguard/
$ touch wg0.conf
$ cat privatekey publickey

VPN server configuration

For the VPN server we need to configure two things. First we configure the interface. The interface defines the VPN IP address of the server and its private key as a minimum. You can also set the port the server should listen on for incoming requests. If you do not set the port it will be chosen randomly.

Technically there is no server instance in WireGuard. Every peer can function as both server and client.

[Interface]
Address =
PrivateKey = <Content of privatekey file>
ListenPort = 51820

Finally we specify the following IPv4/IPv6 forwarding rules (iptables manual):

PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostUp = iptables -A FORWARD -o %i -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = ip6tables -A FORWARD -i %i -j ACCEPT
PostUp = ip6tables -A FORWARD -o %i -j ACCEPT
PostUp = ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

PostDown = iptables -D FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -o %i -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PostDown = ip6tables -D FORWARD -i %i -j ACCEPT
PostDown = ip6tables -D FORWARD -o %i -j ACCEPT
PostDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

On PostUp we configure the IP routing changes and on PostDown we remove the changes from the network configuration.

Replace eth0 with your LAN interface in PostUp, PostDown. You can use ifconfig to get a list of your interfaces.

After we specified the interface for the server, we need to specify the peers of the server. The server needs to know the public key of each peer as well as its allowed IP range, which specifies which addresses will be reachable on the peer side.

For the office location we specify two IP ranges. First, we specify the WireGuard LAN IP of the office location: The /32 limits the range to a single IP, in this case

Then we add an additional IP range to redirect all requests for the office LAN to the office peer: The /24 covers 256 addresses ( -

[Peer]
# Name = Office Network
PublicKey = <Content of publickey file>
AllowedIPs =,
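The /32 and /24 arithmetic is easy to verify: a /N prefix leaves 32-N host bits, so it covers 2^(32-N) addresses. A quick shell check:

```shell
# Number of addresses covered by a CIDR prefix: 2^(32 - prefix length).
for prefix in 32 24; do
  echo "/$prefix covers $(( 1 << (32 - prefix) )) address(es)"
done
```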

The complete server configuration in wg0.conf will then look like this:

[Interface]
Address =
PrivateKey = <Content of privatekey file>
ListenPort = 51820
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostUp = iptables -A FORWARD -o %i -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = ip6tables -A FORWARD -i %i -j ACCEPT
PostUp = ip6tables -A FORWARD -o %i -j ACCEPT
PostUp = ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -o %i -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PostDown = ip6tables -D FORWARD -i %i -j ACCEPT
PostDown = ip6tables -D FORWARD -o %i -j ACCEPT
PostDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# Name = Office Network
PublicKey = <Content of publickey file>
AllowedIPs =,

[Peer]
# Name = Remote Laptop/Mobile
PublicKey = <Content of publickey file>
AllowedIPs =


Secure the server by only allowing external access to the WireGuard service as well as your SSH session.

sudo ufw allow 22/tcp
sudo ufw allow 51820/udp
sudo ufw enable

Verify the firewall settings:

sudo ufw status verbose

VPN (mobile) client configuration

For the mobile/laptop client we specify that both IP ranges (, will be redirected to the VPN server.

If you want to tunnel all your traffic through the VPN server you can use, ::/0. This will redirect all IPv4 and IPv6 traffic through the VPN tunnel. Be sure to set a dedicated DNS server if you do this.

If you get the following error when starting WireGuard: /usr/bin/wg-quick: line 31: resolvconf: command not found, you need to install openresolv (sudo apt install openresolv).

[Interface]
Address =
PrivateKey = <Content of privatekey file>
# DNS =, 2001:4860:4860::8888

[Peer]
# Name = vpn.domain.tld
PublicKey = <Content of publickey file>
Endpoint = vpn.domain.tld:51820
AllowedIPs =,

# If you're behind a NAT and want the connection to be kept alive.
PersistentKeepalive = 25

Use the following command to generate a QR code from your mobile.conf file. Scan this QR code with the WireGuard app on your phone to transfer the configuration.

$ qrencode -t ansiutf8 < mobile.conf

VPN office configuration

In the office configuration we specify IP forwarding to and from the local network as we did for the VPN server.

Remember to replace eth0 with your LAN interface in PostUp, PostDown.

[Interface]
Address =
PrivateKey = <Content of privatekey file>
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostUp = iptables -A FORWARD -o %i -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = ip6tables -A FORWARD -i %i -j ACCEPT
PostUp = ip6tables -A FORWARD -o %i -j ACCEPT
PostUp = ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -o %i -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PostDown = ip6tables -D FORWARD -i %i -j ACCEPT
PostDown = ip6tables -D FORWARD -o %i -j ACCEPT
PostDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# Name = vpn.domain.tld
PublicKey = <Content of publickey file>
Endpoint = vpn.domain.tld:51820
AllowedIPs =
PersistentKeepalive = 25


You can now start WireGuard on all your devices. After all devices are connected you will be able to access the Office LAN by your mobile client.

# Start WireGuard
$ wg-quick up wg0

# Show connection information
$ wg show

# Stop WireGuard
$ wg-quick down wg0

wg show

Display connections

Configure auto start

Start by making the WireGuard directory readable only by an administrator:

$ sudo chown -R root:root /etc/wireguard/
$ sudo chmod -R og-rwx /etc/wireguard/*

Then use systemd to initialize the VPN on startup:

$ sudo systemctl enable wg-quick@wg0


I aggregated this tutorial from several external resources.

SJCAM SJ8 Action Cam Repair

I ordered an SJ8 action cam from a reseller and got one with a broken battery connector. Since shipping it back to China and waiting for a replacement costs time, money and nerves, I repaired it myself.



There are not a lot of repair guides on the Internet for SJCAM action cameras and I didn’t find any for the latest model, the SJ8. The following will show how I opened it.

case front

Open the case front

The following screws and connectors need to be removed to open the case - highlighted in red. The green circles are for the lens and can stay attached. The display is attached to the board and does not need to be removed.

There are two types of connectors on the front. The small one at the bottom releases by opening the black clamp on the right side (under the tape). The connectors on the right side open by pulling the black clamp on the left side up.

left side
right side

After removing all marked screws and connectors you can remove the board. The backside is closed the same way as the front and can be opened as soon as you have removed the connectors on the front side.

The following two pictures show the broken battery connector and the reattached one. I used a tiny bit of instant glue to fix the part in its location. I then melted the existing solder and added some additional solder. Finally, I tested the connections with a multimeter and added some more instant glue.

broken power connector
fixed power connector

Make sure you accounted for all screws.



Zigbee to MQTT case for Raspberry Pi Zero

I am using a Zigbee to MQTT bridge for my home automation. The software runs on a Raspberry Pi Zero W and uses the CC2531 USB sniffer to talk with Zigbee devices. I use a 90-degree angled USB adapter to connect the USB stick with the Raspberry Pi. The setup of the bridge is described in the zigbee2mqtt getting started guide.

I really like the setup, but in order to reach all my devices the bridge needs to be in a central place. At the moment of writing I couldn’t find any case that supports the Raspberry Pi Zero together with the CC2531 USB stick and looks elegant enough to put into the living room.

That is why I created my own case. I got my inspiration from a round Raspberry Pi case on Thingiverse. My case design is publicly available in Onshape 1. I also posted my case design on Thingiverse 2 since this is the first place to look for 3D files. The STL files can also be downloaded directly from my homepage 3 4 5.

Mountain View

Case design at the moment of writing.


I started printing the case prototypes in PLA filament since it is easy to print and biodegradable. To close the case I added two M2.5 x 8mm flat head screws and two M2.5 x 4mm embedded knurled nuts. To fix the nuts I used a soldering iron to heat them up and press them into the plastic. The case is 1.5cm high and close to 12cm in diameter.

Prototype Hull
Prototype Base
Prototype Top
Prototype Bottom

The final product I printed in white ABS since I wanted better heat resistance and a stronger material. I changed the ventilation shaft slightly since I didn’t want to use support material.

ABS Case Base
ABS Case Bottom
ABS Case Ventilation
ABS Case Mounted


I am not 100% satisfied with the current design. One minor point is that the ventilation doesn’t look too good since I didn’t use support material and didn’t post-process it. A slightly more annoying point is that the case has warped a little and the top is not completely closed around the ventilation shaft. I might need to add some sliding lock functionality similar to the picture from Adafruit in the future. But both points are only visible if you look closely. Mounted to a wall these minor defects are not visible.

My biggest concern is regarding the filament. As you can see on the last picture it is slightly transparent. I could make the walls thicker, but I really don’t like to since they are already strong enough. Maybe I will experiment with different filament or color painting the case from the inside. Not sure yet!

New letter in postbox notification with Arduino and ATtiny85

Lately I get a lot of small packages with hardware modules from China. They take between 2 and 6 weeks to arrive. Some of the parts I’m waiting for are needed to continue my current project(s). Because of this I often check the mailbox, and it is really annoying to open an empty mailbox. Therefore, I built a mailbox notification.


I checked the WIFI reception inside the mailbox and there was none, so I couldn’t just use a simple ESP8266 like the Wemos D1 Mini and a battery to get it to work. Instead I decided to order a 433 MHz radio frequency (RF) transmitter module and combine it with a very low power ATtiny85 that I can program with the Arduino IDE.

Hardware parts

| Part                                                          | Price  |
|---------------------------------------------------------------|--------|
| 433 MHz radio frequency (RF) transmitter module               | 1.00 € |
| ATtiny85 Digispark                                            | 1.20 € |
| Battery coin socket (don’t buy a no name product from China!) | 1.29 € |
| CR2032 button cell battery                                    | 0.22 € |

Hardware pictures

433 MHz radio frequency (RF) transmitter/receiver module
ATtiny85 Digispark
CR2032 lithium battery
CR2032 battery mount


First, we need to flash a new firmware onto the ATtiny85 since the stock firmware is outdated and - more importantly - waits 5 seconds before booting to allow an easy program code upload. For me it was important to get rid of the 5 second bootup delay since my solution cuts the power of the ATtiny85 completely as long as the postbox lid is not open. The few seconds it is open will be enough to boot up and send a notification!

ATtiny85 firmware

To upgrade the firmware I followed the GitHub article by Ircama. Basically, you need to:

  • Install (Windows) USB drivers
  • Get the latest source code of the Micronucleus bootloader
  • Change the firmware configuration for the ATtiny85 by setting ENTRYMODE to ENTRY_EXT_RESET. This removes the 5 second wait on startup. You can reset the ATtiny85 by connecting P5 and GND with a 100 Ω (100R) resistor.
  • Build the new firmware
  • Create an upgrade firmware - the ATtiny85 does not have enough space for a complete firmware update, so you need to generate a firmware upgrade instead of replacing the whole thing.
  • Flash the new firmware

Arduino transmitter

The code I flashed onto my ATtiny85 is very small. As soon as the ATtiny85 starts, it sends the notification 15 times - to make sure it gets received - and then goes to sleep in case the postbox stays open and the power is not cut off.

The ATtiny85 does not support the serial monitor. You can use the onboard LED to check if your code is running. There is a serial monitor software implementation you could flash, but it will take up most of the available space on the ATtiny85. I tried it, but the driver had a bug that repeatedly crashed my Windows PC!

I use a CR2032 button cell to power the ATtiny85. This battery outputs only 3V while the ATtiny85 normally runs at 5V. To get it to work with a 3V battery we need to lower the CPU speed by using the Digispark (1mhz - No USB) profile in the Arduino IDE. The No USB option only means that you can’t access the USB port inside your code - using the USB port to flash a code update or using the default Digispark (Default - 16.5mhz) profile is not a problem.

#include <RCSwitch.h>
#include <avr/sleep.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
  mySwitch.enableTransmit(2); // 2 is P2

  // Send notification 15 times to make sure it gets received
  for (int i = 0; i < 15; i++) {
    mySwitch.send("011100000110001001100001"); // pba = Post Box Alert = 7365217
  }

  // Shut the device down in case the lid stays open and power is not cut
  set_sleep_mode(SLEEP_MODE_PWR_DOWN); // deepest sleep mode
  sleep_enable();
  sleep_cpu();                         // sleeps until the power is cut
}

void loop() { }
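The bit string in the send() call is nothing magic - it is just the ASCII text pba, eight bits per character. A quick sketch in plain Python (nothing Arduino specific) shows how the bit string and the decimal value 7365217 that a receiver prints relate:

```python
def encode_ascii(text):
    """Concatenate the 8-bit ASCII code of each character into one bit string."""
    return "".join(format(ord(c), "08b") for c in text)

bits = encode_ascii("pba")  # p=0x70, b=0x62, a=0x61
value = int(bits, 2)        # the number an RCSwitch receiver will print

print(bits)   # 011100000110001001100001
print(value)  # 7365217
```

Any short ASCII tag works the same way, as long as sender and receiver agree on it.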

Arduino receiver

I used the following code to test the transmitter. I connected the receiver (included in the hardware parts) to a Wemos D1 Mini. It has an ESP8266 WIFI microchip and supports the serial monitor, so you can watch what you receive. You will also pick up signals from other devices since 433 MHz is a common frequency for garage doors and the like.

#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
  Serial.begin(115200);
  pinMode(BUILTIN_LED, OUTPUT);
  mySwitch.enableReceive(13);   // Receiver on interrupt GPIO 13 => that is pin D7
  Serial.println("Starting up.. ");
}

void loop() {
  if (mySwitch.available()) {
    Serial.print("Received ");
    Serial.print( mySwitch.getReceivedValue() );
    Serial.print(" / ");
    Serial.print( mySwitch.getReceivedBitlength() );
    Serial.print("bit ");
    Serial.print("Protocol: ");
    Serial.println( mySwitch.getReceivedProtocol() );
    mySwitch.resetAvailable();

    // Blink the onboard LED on every received signal (active low on the Wemos)
    digitalWrite(BUILTIN_LED, LOW);
    delay(100);
    digitalWrite(BUILTIN_LED, HIGH);
  }
}

3D printed case

I created a custom 3D case model with Onshape, an online CAD service, to get the smallest possible enclosure for the hardware. I have attached the STL files for you to download in the footer 1 2 3 but I recommend exporting the latest version from Onshape directly.


3D Model v1.0


Case assembly close-up
Case assembly comparison with lighter
Case assembly final product
Case assembly top view
Case assembly bottom view
Case assembly bottom view expanded


Connecting the two wires that come out of the case will boot the ATtiny85 and transmit the notification. The installation depends on your postbox. I have a type A postbox. Therefore I soldered a reed switch (normally closed - NC) to the wires. I mounted a magnet on the inside of the postbox lid and the reed switch next to it. As long as the lid is closed the magnet is next to the reed switch and there is no connection. As soon as someone opens the lid the magnet moves away from the reed switch and the ATtiny85 gets powered up.

If you have a type B postbox you could use a normally closed switch like the ones often used in wardrobes. Mount it next to the lid and the ATtiny85 will power up as soon as the lid is opened.


To receive the notification I use a Wemos D1 Mini with the receiver attached. The Wemos runs the OpenMQTTGateway software, which forwards the received 433 MHz signal via the MQTT protocol to my Home Assistant instance running on my server. Finally, I have configured Home Assistant to use Pushbullet to notify me on my phone.
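On the receiving side you then only need to pick your own code out of everything the gateway forwards. A minimal filter sketch in Python - the JSON shape with a "value" field is an assumption based on OpenMQTTGateway-style 433toMQTT messages, so adapt it to what your gateway actually publishes:

```python
import json

POSTBOX_CODE = 7365217  # decimal value of the "pba" payload

def is_postbox_alert(payload: str) -> bool:
    """Return True if a gateway JSON payload carries our postbox code.

    The {"value": ...} field name is an assumption; anything that is not
    valid JSON (or not an object) is simply ignored.
    """
    try:
        return json.loads(payload).get("value") == POSTBOX_CODE
    except (ValueError, AttributeError):
        return False

print(is_postbox_alert('{"value": 7365217, "protocol": 1}'))  # True
print(is_postbox_alert('{"value": 1234}'))                    # False
```

A function like this would sit in whatever subscriber you attach to the gateway’s MQTT topic.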

What is Starcounter and shall I try it?

Over the past year my team and I used Starcounter as our database and web server for various projects. This writeup offers a glimpse of what Starcounter is, which features it offers and which scenarios it is best suited for.


Starcounter is an in-memory database and web server. To develop on Starcounter you write the backend in C#. For the frontend you can choose to expose a REST API or use their HTML mapping.


Starcounter features an app concept that lets you run multiple apps on the same instance - think of the Android or iOS platform. Each app runs in isolation, with its own database. You can share parts of your database with other apps or create integration triggers that will display your app inside other apps. These come in handy if you build single-purpose apps. You can, for instance, build an image gallery app that automatically shows up whenever another app wants to display images.

Starcounter has good integration with Visual Studio and focuses on development productivity. Their team works heavily on eliminating boilerplate from the development process to reduce the time to market (TTM) of your product.

The platform itself has very low hardware requirements compared to other database systems. You can use a NUC in production (if you put enough RAM into it) without compromising the speed of your system.

There are a lot more features that Starcounter offers. If you decide to give it a try or would like to read more about them, head over to their developer website.

Best suited for

  • Prototyping

    As already mentioned, the hardware requirements are low, and the team’s focus on development productivity helps you build apps fast.

  • Retail System

    Retail systems profit a lot from the speed that an in-memory database offers. They also have a lot of concurrent transactions, which this database handles very well. Therefore it is not surprising that one of Starcounter’s biggest clients is a company that offers a retail system.

  • Sophisticated Websites

    Websites with lots of functionality will get a lot of value out of Starcounter. The app concept Starcounter focuses on lets you build single-purpose apps that increase your website’s functionality step by step. Interaction and data sharing between apps is a breeze and offers many built-in features you won’t find in other systems.

When to look for other options

  • Offline Capability

    If you are relying on offline functionality you should move on. The current version of Starcounter does not have any offline capability other than telling the user that the connection was lost. Developing your own offline functionality will cost you the Starcounter frontend integration, and you will need to use their REST API instead.

  • Progressive Web Apps (PWA)

    If you want to build a PWA you can of course do it with Starcounter, but you will quickly get to the point where you need some kind of offline capability. A PWA looks and feels like a native app, and users will be confused if it stops working as soon as they lose their connection. So you need at least rudimentary offline capabilities, which Starcounter doesn’t offer.

  • Native Apps

    Starcounter currently only runs on Windows, with Linux support around the corner. But it is not available for mobile devices. Maybe there will be a Starcounter Mobile version in the future.

Technology friendly business card

This post focuses on the key factors for creating technology-friendly business cards. I will describe the information that should be present on a business card, how to reference additional information, and guidelines for readability, as well as the future of technology-friendly business cards.


  • Presented Information
  • Additional content
  • Optical character recognition (OCR)
  • Human Readable
  • Digital transmitting

Presented information

Selecting which data should be on a business card is not a simple task. The first impulse is to put as much information as possible on the card. Before you do that, sort the data you want to add by relevance and then by how fast it becomes outdated. For instance, many think it is a good idea to put the company address on the card. But most companies do not own a building and rent instead. This information can easily change, and your prospect or future business partner will carry an old address around.

If you leave out information you need to offer an easy way to retrieve it. One solution is a link to your company homepage or an individual contact card page. My proposal is to create a profile on a professional identity service like LinkedIn and put your profile link on your business card. This has the advantage that the business card keeps some of its value after you change companies, since you can update your information online. If the business card links to your company page, you will probably lose the connection as soon as you quit your job.

Make sure your link is mobile friendly.

Now that you have decided on a link, think about its length, since people can’t click on it and need to type it into a browser. You can use a URL shortener service (like Google URL Shortener) that will give you a short URL redirecting to your (long) URL - LinkedIn has a short public profile URL. There are a lot of different service providers; choose one that you trust to exist for a long time, since your business card will depend on their link.

If you present a link on your business card, think about offering a more convenient way to get it from the card into a browser: generate a barcode out of your link and place it on your business card. Usually you use a matrix barcode like QR. A QR code offers fast readability and a greater storage capacity compared to the barcodes you know from shopping. Additionally, you can still extract the data from a QR code that is up to 30% destroyed! 1. You can find plenty of online QR code generators on Google.

To improve the recognition of your QR code you should use around 32×32 mm of your business card (not less than 26×26 mm) for the QR code and leave a quiet zone (empty space) of 4 modules around it. Pay attention that you do not put the QR code - or other information - too close to the borders of your card, since it could be cut off in production.
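You can sanity-check these numbers yourself: a version v QR symbol is 17 + 4v modules wide, so a small calculation (assuming a version 2 code, which is plausible for a short URL) gives the module and quiet zone sizes:

```python
def qr_module_size_mm(symbol_mm, version):
    """Width in mm of one module for a QR symbol of the given print size and version."""
    modules = 17 + 4 * version   # side length in modules, per the QR symbology
    return symbol_mm / modules

module = qr_module_size_mm(26, 2)  # the recommended minimum size, version 2 assumed
quiet_zone = 4 * module            # the 4-module quiet zone in mm

print(round(module, 2))      # 1.04
print(round(quiet_zone, 2))  # 4.16
```

So at the 26 mm minimum you still get roughly 1 mm modules plus a ~4 mm quiet zone, which most phone cameras handle comfortably.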

You could also use the QR code to store your contact details directly, but then you end up with the problem of outdated data again.

Quiet zone around the QR code

Optical character recognition (OCR)

There are many smartphone apps that aim to extract the contact details from business cards. For instance, the latest Moto camera app for Android will ask if you want to create a new contact when it recognizes a business card. For these apps to work properly you should check your design against the most popular apps and tweak it to offer good recognition.

The key factors to improve readability

These factors not only help a camera lens detect your details but also the human eye.

  • Simple and consistent font
  • Decent font size (try out: large: 11-12pt, small 8-9pt)
  • Grouping of relevant information (data structure)
  • Spaces to identify the data structure
  • Prefer high contrast

Digital transmitting content

Some companies have started to offer business cards with near field communication (NFC). NFC [..] enables two electronic devices, one of which is usually a portable device such as a smartphone, to establish communication by bringing them within 4 cm (1.6 in) of each other 2. This can be considered the future replacement for the QR code.

Today I would not replace a QR code with NFC, since not many users know how it works or expect it. Of course, if you work in the tech industry it is a nice gimmick to have on your business card, and you can expect that more and more people will get how it works.

Every day a pixel

I read about a GIF that will run for 1000 years until it has finished one loop. I told a friend about it and he had the idea to create a white canvas and add one pixel to it every day. At the end you would have a complete image that has grown over time. I thought about the other way around - displaying an image that loses one pixel every day. This would also cover up pixel failures in the screen, but I decided to give his idea a try.


See the Pen Art Project - Every day a pixel by Jens Willmer (@jwillmer) on CodePen.


See the Pen Art Project - Every day a pixel (Timelaps) by Jens Willmer (@jwillmer) on CodePen.

Image Template

Found this image on

abstract painting

Setup custom domain and SSL for GitHub hosted website

If you are thinking of hosting a blog on GitHub and want to use a custom domain for it, this blog post is for you. I will show you which services I used and which settings worked for me.

To follow this tutorial you need to have access to:

  • The nameservers of your domain
  • The settings page of your GitHub repository

In the following steps I use an example domain. You need to replace it with your own to get everything to work.

Domain Hosting Website

Create the following DNS records for your website:

  • CNAME named www and value
  • CNAME named and value
  • A named and value
  • A named and value

Make sure that the A record IPs are still the same by visiting the GitHub page.


Cloudflare Domain Setup

The tutorial from Cloudflare: Create a Cloudflare account and add a website

Quick Setup

  • Create a free account at Cloudflare
  • Add your website to Cloudflare and follow the instructions
  • Use the free Cloudflare plan
  • After the domain setup, the Cloudflare nameservers are displayed. Copy them and replace your current nameservers at your domain registrar’s website
DNS Verification
Select Cloudflare Plan
Replace Nameservers
Cloudflare DNS Setup

GitHub Setup

Detailed setup instructions from GitHub: Using a custom domain with GitHub Pages

Quick Setup

  • Add a file named CNAME to the root folder of the repository you want to host
  • Add your bare domain (without https:// in front) as the content of the CNAME file
  • Go to the repository settings page and locate the custom domain field. Add your domain in that field
  • If you use the static site generator Jekyll you need to set your domain and subdomain in the _config.yml:
    • Set url to your domain name
    • Optionally force SSL by adding enforce_ssl:
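For reference, the two Jekyll settings above land in _config.yml roughly like this - the domain is a placeholder, and the exact enforce_ssl value format may depend on your theme or plugins:

```yaml
# _config.yml — example.com is a placeholder, replace with your own domain
url: https://www.example.com
enforce_ssl: www.example.com
```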

Cloudflare Modifications

  • Go to the Crypto tab of your domain:
    • Set the SSL option to Full
    • Set Automatic HTTPS Rewrites to On to fix http links (if this is Off you need to use Flexible as the SSL option)
  • Go to Page Rules and create page rules for your website:
Cloudflare Page Rules

Discussion on GitHub

This tutorial was created after a request from Sampath Vanimisetti on GitHub. He helped me test and improve this tutorial - thank you!

How to install Jekyll and pages-gem on Windows (x64)

Jekyll is an awesome static site generator, and it is really easy to create a blog with it. My current blog is built with Jekyll and you can find the jekyllDecent theme on GitHub.

It only takes minutes to create your own blog and run it on Windows.

System Prerequisites

  1. Install a package manager for Windows called Chocolatey
  2. Install Ruby via Chocolatey: choco install ruby -y
  3. Reopen a command prompt and install Jekyll: gem install jekyll

Setup a Blog

  1. Open a command prompt at C:\
  2. Create a blog: jekyll new myBlog
  3. Change into the new directory: cd ./myBlog
  4. Use the command jekyll s to serve/host the blog

Play time

Now you can browse to your new blog. If you want to start changing it, I recommend having a look at the Jekyll getting started guide.

Generated blog

Generated default blog

GitHub pages and plugins

If you like what you saw, you might like to know that you can host your blog on GitHub Pages.

GitHub Pages is deeply integrated with Jekyll, a popular static site generator designed for blogging and software documentation, [..]

GitHub will automatically generate your blog if you deploy the source code to the gh-pages branch of your repository. The catch with GitHub is that you cannot run just any plugin that is available for Jekyll - GitHub only allows a few. GitHub has bundled the allowed plugins into a Ruby gem that can be installed via the command line. The problem is that on Windows (x64) this is the point where it gets complicated.

How to install github-gem

I assume that you have Chocolatey installed on your system. If you already have a version of Ruby installed you need to uninstall it - we need a specific version!

Install Ruby and Ruby development kit

Open a elevated command prompt and execute the following commands:

  • choco install ruby -version 2.2.4
  • Reopen an elevated command prompt
  • choco install ruby2.devkit - needed for compilation of json gem

Configure Ruby development kit

The development kit does not set the environment path for Ruby, so we need to do it ourselves.

  • Open command prompt in C:\tools\DevKit2
  • Execute ruby dk.rb init to create a file called config.yml
  • Edit the config.yml file and include the path to Ruby - C:/tools/ruby22
  • Execute the following command to set the path: ruby dk.rb install
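After ruby dk.rb init, the generated config.yml just needs to list the Ruby install path as a YAML list entry, roughly like this (the path assumes the Chocolatey default install location):

```yaml
# config.yml in C:\tools\DevKit2
# Each list entry points the DevKit at one Ruby installation
- C:/tools/ruby22
```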

Nokogiri gem installation

This gem is also needed by the github-pages gem. Continue with the next step, and if you get errors about Nokogiri, follow these steps.

Note: The current pre-release works out of the box on Windows x64, but that version is not yet referenced in the github-pages gem.

cinst -Source "" libxml2

cinst -Source "" libxslt

cinst -Source "" libiconv

 gem install nokogiri --^

Install github-gem

  • Open a command prompt and install Bundler: gem install bundler
  • Create a file called Gemfile (without any extension) in the root directory of your blog
  • Copy & paste the two lines into the file:
source ''
gem 'github-pages'
  • Note: We use an insecure connection because SSL throws exceptions in this version of Ruby
  • Open a command prompt in your root directory
  • Install github-pages: bundle install


After this process you should have github-pages installed on your system and you can host your blog again with jekyll s.
There will be a warning on startup that you should add gem 'wdm', '>= 0.1.0' if Gem.win_platform? to your Gemfile, but I could not get jekyll s working with that line included, so for the moment I ignore the warning.

If you can wait 2-3 months, the installation process of the github-pages gem should become as simple as the setup of the blog. But as long as the new version of Nokogiri (v1.6.8) is not stable and referenced, it takes some work to get it up and running on Windows.

Useful packages in github-gem

How to migrate WordPress to Azure

As I’m leaving my current hosting provider, my WordPress backup needs to run on Azure. In this blog post I’d like to explain a few different ways to achieve this. There are three possibilities I have encountered so far. Let’s start with the simplest.

The simplest way to get WordPress up and running on Azure

For the first migration of your WordPress blog to Azure you only need an Azure account. Create a new website by choosing “Website from catalog” and select WordPress from the blog section. You need to enter a domain name for your blog and accept a third-party service to host your MySQL database. Thereafter your blog will be created.

Azure blog selection


By browsing to your newly created domain you’ll be prompted to set up an admin account for your blog. Next you create a backup of your old WordPress blog: select the export option in the tools menu of your admin panel. As a result you’ll get a WordPress eXtended RSS (WXR) file. You then have to import this file into your new blog on Azure. The import function is located in the same menu as the export function - you may be asked to install a plugin supporting the import. Now your blog on Azure contains all your posts and the migration is finished.

Running WordPress on Azure with a Microsoft SQL database

To use this method you need to know how to work with an FTP client. To start, we create a new Azure website and upload the downloaded and extracted WordPress files into the wwwroot location of the website.

As an alternative, you can use the website we just created and remove the database by unlinking it from the website. This can be done by browsing to the settings of your website, in the sub-menu “Linked Resources”.

Thereafter you have to download the WordPress DB Abstraction plugin. Extract it and copy it to the ./wp-content/mu-plugins/wp-db-abstraction/ folder on your website. You also have to extract the db.php file from this plugin and copy it into the ./wp-content/ folder. Finally, you need to remove the web.config file in your WordPress root directory - this file will be auto-generated by the plugin.

Now let’s get a new MS SQL database by browsing to the website settings and creating a new linked SQL database resource. Then you have to navigate to the following location of your WordPress domain in your browser:


Wordpress database connection


After browsing to the domain above, you’ll get a screen asking for your database credentials. Insert your database server without the port and choose PDO SqlSrv as database type. The rest is self-explanatory. Afterwards your database connection is set and you can create a new admin account. To import your current WordPress data, follow the steps described above (simple solution).

The third way: cloning your MySQL database to MS SQL

This alternative should only be used by those who know what they are doing. To start, set up your Azure WordPress blog as described above - with one exception: instead of creating a new admin user and using the import function of WordPress, you clone the existing MySQL database to the new MS SQL database.

To fulfill this task you need the Microsoft SQL Server Migration Assistant. Once you have downloaded and installed it, create a new project and select the migration-to-Azure option. Before filling out the connection settings for Azure you have to go to the Azure dashboard and add your current IP to the allow list of the database server. Afterwards you can set up the connections.

Microsoft SQL Server Migration Assistant


Having established both connections, go to the MySQL database that should be cloned and select “Convert Schema” from the context menu. This will locally(!) convert the existing schema to the MS SQL format. Check the result. If it was successful, select the Azure database and choose “Synchronize Database” from the context menu. This will apply the local changes to the database.

Then choose “Migrate Data” on the MySQL database and synchronize the Azure database again. That’s it. If everything went well, you now have an exact clone of your MySQL data in your Azure database. Check it by browsing to your Azure-hosted WordPress blog.

Dashboard for Exchange & Filesystem in ASP.NET MVC Part II

My student research project was about the design and implementation of a central interface (dashboard) for the university’s intranet. The dashboard’s task was to connect various university systems and display the information accessible from them to the respective user. The connected systems were a Microsoft Exchange Server1, the user-specific file storage on the university’s Linux server, and the Active Directory.

Dashboard overview


An excerpt from the summary of this multi-semester project:

Phase II

This part of the project focuses on the practical implementation, which is why the fundamentals section is smaller than in the first paper.

The fundamentals cover the Microsoft Exchange Server, the Linux permission system, and connecting two peers via the Secure Shell protocol. The Secure Shell protocol is needed to establish a connection between the application and the Linux server.

The project is implemented using Visual Studio 2010 and ASP.NET MVC 3. This paper describes, based on source code, how the application’s functionality was implemented and which hurdles had to be overcome. Furthermore, in the interface design subchapter, the reader is shown the look of the interface through snapshots, with an explanation of what led the developer to implement it this way and not another.

Link to Phase I

Dashboard for Exchange & Filesystem in ASP.NET MVC Part I

My student research project was about the design and implementation of a central interface (dashboard) for the university’s intranet. The dashboard’s task was to connect various university systems and display the information accessible from them to the respective user. The connected systems were a Microsoft Exchange Server1, the user-specific file storage on the university’s Linux server, and the Active Directory.

Dashboard overview


An excerpt from the summary of this multi-semester project:

This project deals with the creation of a website that allows the logged-in user to view his e-mail quota as well as his remaining quota on the university’s private file storage.

The project is divided into two phases. [..]

Phase I

This part of the project focuses on the fundamentals; it begins with building the project structure and implementing the login against Active Directory.

The fundamentals cover the Microsoft Windows Server, the Microsoft SQL Server, the .NET Framework, the Visual Studio 2010 development environment, and the Model-View-Controller design pattern.

The project is implemented using Visual Studio 2010 and ASP.NET MVC 3. It describes where the application and IIS must be configured to allow an encrypted connection and where the login credentials for the Active Directory are stored. Furthermore, data from the Active Directory is cached in a local database, which is also described.

Link to Phase II

Evaluation of cloud services using the example of SharePoint 2010 Online

Unfortunately, I deleted the original blog post for the following student research project. However, I was able to recover all the metadata (date, post name, files, ..), so this post should not differ much from the original.


The task of this project is to find ways to extend the feature set of SharePoint Online and to evaluate the advantages and disadvantages of cloud solutions. Two use cases are presented, and it is examined how they can be solved in SharePoint and in SharePoint Online.

In the course of the project it becomes clear that the only way to extend SharePoint Online is by creating sandboxed solutions. If functionality is needed that exceeds the permission context of the current user, an external server is required as well. Equipped with an administrator account, this server executes the functions on behalf of the current user.



Installation and setup of SharePoint 2007

In my first practical project for the Duale Hochschule I worked on the installation and setup of a document management system (DMS). I chose SharePoint 2007 as the DMS. I dealt with planning the future feature set, the fundamentals of SharePoint 2007, setting up a Windows server, configuring Internet Information Services, and adapting the DMS to the corporate design.

So that this and other papers I wrote during my studies are not forgotten on my hard drive, I want to make them available to the public by publishing them on my blog, in the hope that they may still be useful to someone.

Problem: The company under examination had the problem that the amount of data to be managed had become too large to keep managing it with the existing solution. Distributing documents into the folder structure and linking them into other folders had grown to such an extent that clarity was lost and the administrative effort became uneconomical. In addition, there was no practical backup solution, so the server had to be shut down for backups, which made the data unreachable during that time.

Goal: The goal was to port the data to a new system that would noticeably ease administration. Furthermore, a sophisticated search was desired, with which the data could be found again quickly and efficiently. Permanent availability and a data backup option were also required.


SharePoint Online with ADFS Authentication

In July 2012 I had the problem that I wanted to connect to a SharePoint Online instance that had Active Directory Federation Services (ADFS) in front of it. At the time I couldn’t find any how-to on the web that would explain how to do it. I asked for a solution online and got a hint, and also found a post on Wictor Wilén’s blog that describes the authentication to SharePoint Online without an ADFS. Back then I solved my problem with a workaround in the project, but later that year I faced the same problem again and knew I had to solve it on my own, because my question hadn’t received any new answers.

I used Fiddler, a web debugging proxy, to understand the authentication process. First you need to get the Security Assertion Markup Language (SAML) tokens. I looked up the requirements for the SAML tokens and was able to get them. With those tokens I was able to get a token from the Microsoft Online Services (MOS) via the Secure Token Service (STS). With that token I could finally authenticate my application against SharePoint Online and receive authentication tokens that have to be sent (as cookies) with all REST requests in order to authenticate them.
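That last step boils down to sending the tokens back as cookies on every REST request. A minimal sketch of just that step in Python, assuming the usual FedAuth and rtFa cookie names SharePoint Online issues (the token values are made up for illustration):

```python
def auth_cookie_header(fed_auth: str, rt_fa: str) -> str:
    """Build the Cookie header carrying the SharePoint Online auth tokens.

    FedAuth and rtFa are the cookie names SharePoint Online hands out after
    the STS handshake; the values passed in here are placeholders.
    """
    return f"FedAuth={fed_auth}; rtFa={rt_fa}"

# Attach this header to every REST request against the SharePoint Online site
header = auth_cookie_header("FED-TOKEN-PLACEHOLDER", "RTFA-TOKEN-PLACEHOLDER")
print(header)  # FedAuth=FED-TOKEN-PLACEHOLDER; rtFa=RTFA-TOKEN-PLACEHOLDER
```

In a real client you would set this string as the Cookie header of your HTTP library of choice; the token acquisition itself is the part Omar’s article covers.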

At this last point, receiving the token from the Secure Token Service (STS), I stumbled upon an article by Omar Venado, who had solved my problem and posted the solution on his blog. Because I was short on time I used his finished solution in my project (with a few fixes and modifications) and threw my half-finished solution away. This is why I am not posting code snippets here - look at Omar’s post for the snippets and a deeper explanation.

But because of my question I received a few e-mails asking whether I had found a solution to my problem and whether I was willing to share it. So I thought I’d make this blog post to spread the solution. I have also created a new project, copied the modified version of Omar’s solution into it, and created a Windows 8 Store skeleton app. You can find it on GitHub: SharePointAuthentication

Feel free to use it, improve it and tell others about it ;-)

SharePoint Online with ADFS Authentication

Outlook business card add-in

I had been meaning to write today's post for quite a while, but there were always other good reasons not to. It was originally written in German, because the plugin I am about to present was programmed as a workaround for my colleagues - which is also why its menus are in German.

What exactly the plugin can do and what it is good for I have already written down once, in the guide for my colleagues. Here is the interesting part of it:

The Outlook business card add-in was developed to give users a convenient way to maintain and keep up to date the information stored about them in Active Directory.

The reason for developing this add-in was that the software and hardware of the employees in our company was being replaced. The company migrated from Windows XP to Windows 7 and Office 2010.

The collaboration features available in Office 2010 pull the users' contact information from Active Directory. Unfortunately, at that time it was not sufficiently maintained, so the Office products only showed sparse information about one's own contact and the company contacts.

To enter this information quickly and without much bureaucracy, this add-in was developed, giving users the ability to maintain their contact information themselves and keep it up to date.

The editable fields can be restricted in Active Directory by the administrator. This way the IT department keeps control over the entries in Active Directory and can define user- or group-based permissions for editing the fields.

Because the application is installed as a ClickOnce deployment, updating is painless and no elevated user rights are required for the installation, which makes it usable for the widest possible range of users.

The plugin works really well; the only thing missing is a manual login to Active Directory (the plugin always uses the system account). For 99% of users this doesn't matter, because they use the company's preconfigured operating system - the rest are developers (or developer machines), who know how to help themselves with Microsoft's AD Explorer ;-).

For everyone who wants to test it, can use it themselves, or even feels like extending it, I have uploaded it to GitHub.

You can find it at: Outlook Contact Information AddIn

Outlook Contact Information AddIn

How to display different Items in a GridView

Over the last few weeks I have been coding an app for the Windows 8 Store, and this weekend I am trying to complete the first version and submit it to the store. One problem I had while coding was that I wanted to display different items in a GridView. I came up with the following solution, and it worked so well that I thought some readers might be interested in it ;-)

In my example I use the Model-View-ViewModel (MVVM) pattern with a little help from the MVVM Light Toolkit.

First create some models you’d like to display:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Car
{
    public string Model { get; set; }
    public double Price { get; set; }
}

Second you add the models to a collection which you’d like to data bind against:

public class MainViewModel : ViewModelBase
{
    private ObservableCollection<object> _gridViewSource;

    public ObservableCollection<object> GridViewSource
    {
        get { return _gridViewSource; }
        set
        {
            _gridViewSource = value;
            RaisePropertyChanged("GridViewSource");
        }
    }

    public MainViewModel()
    {
        var person = new Person() { FirstName = "Jens", LastName = "Willmer" };
        var car = new Car() { Model = "BMW-xyz", Price = 15.000 };

        GridViewSource = new ObservableCollection<object> { person, car };
    }
}

After this you create a custom UserControl for each model; for testing I added two controls with different background colors in the XAML:

    <Grid Background="Red"  Width="20" Height="20"/>

    <Grid Background="Green"  Width="20" Height="20"/>

Then you add an IValueConverter that switches between the different controls:

public class ContentTypeToControlConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, string language)
    {
        if (value != null)
        {
            if (value is Person)
                return new PersonItemControl();
            else if (value is Car)
                return new CarItemControl();
        }

        return null;
    }

    public object ConvertBack(object value, Type targetType, object parameter, string language)
    {
        throw new NotImplementedException();
    }
}

Now, for the last step, you only have to create a GridView in your main XAML and bind the source with the converter like this:

        <Page.Resources>
            <local:ContentTypeToControlConverter x:Key="ContentTypeToControlConverter" />
        </Page.Resources>

        <GridView ItemsSource="{Binding GridViewSource}">
            <GridView.ItemTemplate>
                <DataTemplate>
                    <ContentControl Content="{Binding Converter={StaticResource ContentTypeToControlConverter}}" />
                </DataTemplate>
            </GridView.ItemTemplate>
        </GridView>

As a result of my example code you will get this:


How to start Outlook minimized with C#?

Today I had the problem that I wanted to start Outlook minimized and automatically from C#. First I tried to use ProcessWindowStyle.Minimized as in this snippet:

ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.WindowStyle = ProcessWindowStyle.Minimized;
// ...

But this approach doesn't work with Outlook. I tried a few other methods but couldn't get it to work, so I posted my question on Stack Overflow. After that I got lucky and found out that the MainWindowHandle changes once Outlook has loaded. I built a loop and wrote the MainWindowTitle to the console. The output was this:


At this point I started to develop a solution and came up with this:

    [DllImport("user32.dll")]
    private static extern bool ShowWindowAsync(IntPtr hWnd, int nCmdShow);

    // console application entry point
    static void Main()
    {
        // check if the process already runs, otherwise start it

        // get the running process
        var process = Process.GetProcessesByName("OUTLOOK").First();

        // as long as the process is active
        while (!process.HasExited)
        {
            // title equals string.Empty as long as Outlook is minimized
            // title starts with "öffnen" (engl: opening) while the program is loading
            string title = Process.GetProcessById(process.Id).MainWindowTitle;

            // "posteingang" is German for inbox
            if (title.ToLower().StartsWith("posteingang"))
            {
                // minimize Outlook (2 = SW_SHOWMINIMIZED) and end the loop
                ShowWindowAsync(Process.GetProcessById(process.Id).MainWindowHandle, 2);
                break;
            }

            // wait a while
            Thread.Sleep(100);

            // place for another exit condition, for example: loop running > 1 min
        }
    }
I also posted this as a solution on Stack Overflow. I hope this snippet saves someone some time, and if you think the code can be improved, write me in the comments. I'm happy to hear your thoughts about it :)

Mobile Development - Mobile Learning

As part of my student research project in the 5th and 6th semester, I worked on cross-platform app development with a focus on mobile learning. The result was StudiCast, an app for recording and sharing short learning podcasts. The project is hosted at…


This student research project first introduces the various mobile device platforms. It then discusses in detail the options for software development on these operating systems: the advantages and disadvantages of native apps and web apps, the concept of hybrid apps, which combines platform independence with the platforms' native features, as well as the approach of generated apps.

The framework PhoneGap is then tested hands-on with several sample applications. Since its performance is not fully convincing, similar programming examples are then carried out with the Titanium SDK, another framework for hybrid app development. However, since it cannot cover all the required functionality, PhoneGap is used for the implementation of the practical project.

The second part of the thesis presents and implements the StudiCast project, a concept for mobile learning. The idea is to record short, summarized learning content from school and university with an app, so it can be learned anytime and anywhere through repeated listening. These podcasts can also be shared with others via a community.


JSON - JavaScript Object Notation

Today's paper on "JSON - JavaScript Object Notation" was written by Tobias Maier. It was created as part of the "Verteilte Systeme" (distributed systems) lecture.


Abstract - This paper explains the idea behind JSON and its benefits.

Introduction - The term Web 2.0 has shaped the jargon around the World Wide Web for several years. It emerged sometime around 2003. Web 2.0 stands for a new era in how the Internet is used. The version number is meant to set it apart from the previous WWW - version 1.0, so to speak.

While websites until then mostly consisted of static HTML pages edited by a fixed circle of authors, the dynamic editing of content increasingly moved to the foreground. Users of a website could, and were supposed to, publish content themselves. Guest books, forums, weblogs and wikis sprang up everywhere. Content management became a central concept: website editors no longer needed extensive programming skills, but could edit their pages with web-based interfaces and see their changes, or put them online, immediately.
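A minimal example of the notation discussed here (an illustrative snippet, not taken from the paper itself):

```json
{
  "title": "JSON - JavaScript Object Notation",
  "author": "Tobias Maier",
  "topics": ["Web 2.0", "data interchange"],
  "published": true
}
```

Objects are key/value pairs in braces, arrays are ordered lists in brackets, and values can be strings, numbers, booleans, null, or nested objects and arrays.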


Pyro - Python Remote Objects

For the "Verteilte Systeme" (distributed systems) lecture, everyone in our degree program had to write a short paper. So that these papers don't just help our grades, I am posting them on my blog on behalf of their authors, starting with Oliver Burger's paper on "Pyro - Python Remote Objects".


This paper presents the basics of programming with the Pyro library for building distributed applications in Python. It describes the fundamental programming approach and looks at the architecture of these applications.


Async and await in .NET 4.5

Yesterday I was playing around with Windows 8 and Visual Studio 11, and because my university is developing a web service for accessing the class schedule, I played around with the semi-finished service and tried to get some data out of it.

The implementation of the alpha service is horrible: there is no authorisation and none is planned, so everyone can get the usernames of the students and their class schedules. There is also no limit on the output; I had to limit it myself to keep the program from crashing because my integer was too small. Finally, you can't look up a person by their unique mail address - you have to search by last name and let the user pick the right person, because the mail address is not provided - and on top of that the search results contain duplicates.

So in short, the service is buggy - but that's not a problem, that's a challenge ;-) Anyway, my intention was to play around with the .NET Framework 4.5, and the service was just an opportunity.

I used async and await for the first time and I really liked it :-) It is very easy to use: with async in the method header you declare your method as asynchronous, and with await you mark the specific call inside the asynchronous method that needs to be waited for. No need for callbacks anymore ;-) A very good article about async, await and their pros and cons can be found on MSDN (German).

Below you can see my WPF solution. I can't post the whole project because the service shouldn't be available to everyone right now - and it isn't; a few more hints would be needed to get it to work properly..

The only two methods I implemented (the service was added via a service reference in Visual Studio):

private async void TextBoxUsername_TextChanged(object sender, TextChangedEventArgs e)
{
    DataGridUsernames.ItemsSource = null;
    DataGridCourse.ItemsSource = null;

    // min required length
    if (TextBoxUsername.GetLineLength(0) > 2)
    {
        using (Service1Client client = new Service1Client())
        {
            try
            {
                // because the service defines no max output
                ((HttpBindingBase)(client.Endpoint.Binding)).MaxReceivedMessageSize = Int32.MaxValue;

                var users = await client.getUsersAsync(TextBoxUsername.Text);
                DataGridUsernames.ItemsSource = from user in users select new { user.firstName, user.lastName, user.userId };
            }
            catch (EndpointNotFoundException) { MessageBox.Show("Service unavailable!", "Info", MessageBoxButton.OK); }
            catch (Exception) { MessageBox.Show("Undefined Error!", "Error", MessageBoxButton.OK); }
        }
    }
}

private async void DataGridUsers_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (DataGridUsernames.SelectedItem == null) return;
    try
    {
        using (Service1Client client = new Service1Client())
        {
            TimetabelListType[] courses = await client.getMyTimeTableForDayAsync(
                DateTimePicker.DisplayDate /* further arguments elided */);
            DataGridCourse.ItemsSource = null;
            DataGridCourse.ItemsSource = from course in courses
                                         select new { course.title, course.endDate };
        }
    }
    catch (EndpointNotFoundException) { MessageBox.Show("Service unavailable!", "Info", MessageBoxButton.OK); }
    catch (Exception) { MessageBox.Show("Undefined Error!", "Error", MessageBoxButton.OK); }
}

And finally a picture of the GUI (sorry for the noise): GUI of the app

Edit: A friend asked me for the XAML - see below ;-)

<Window x:Class="WSDL-______.MainWindow"
        Title="______ Query" Icon="favicon.ico"
		Height="340" MinHeight="340"
		Width="550" MinWidth="550">
    <DockPanel LastChildFill="True" >
        <Grid DockPanel.Dock="Top" VerticalAlignment="Bottom" Margin="5,10,5,0">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto" />
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="Auto" />
            </Grid.ColumnDefinitions>
            <Label Grid.Column="0" Content="Username"/>
            <TextBox Grid.Column="1" Name="TextBoxUsername" TextWrapping="Wrap" TextChanged="TextBoxUsername_TextChanged"/>
            <DatePicker Grid.Column="2" Name="DateTimePicker" BorderThickness="0" Margin="15,0,0,0"/>
        </Grid>
        <Grid DockPanel.Dock="Bottom" Margin="5,10,5,5">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition MinHeight="90" Height="*" />
                <RowDefinition Height="Auto" />
                <RowDefinition MinHeight="90" Height="*" />
            </Grid.RowDefinitions>
            <Label Grid.Row="0" Content="Anwender auswählen:"/>
            <DataGrid Grid.Row="1" Name="DataGridUsernames" SelectionChanged="DataGridUsers_SelectionChanged" IsReadOnly="True"/>
            <Label Grid.Row="2" Content="Kursübersicht"/>
            <DataGrid Grid.Row="3" Name="DataGridCourse" />
        </Grid>
    </DockPanel>
</Window>

Protected subdomain/domain alias htaccess

For a new customer project I had to protect a subdomain with basic authentication. The problem was that the folder in question was also the target of 5 other first-level domains. To make sure the other domains stayed accessible without a password prompt, I came up with a little .htaccess hack that I'd like to share with you ;-)

AuthUserFile /*your_path/.htpasswd
AuthName "Locked Test"
AuthType Basic
Require valid-user

SetEnvIf Host SUB.DOMAIN.TLD secure_content

Order Allow,Deny
Allow from all
Deny from env=secure_content

Satisfy Any
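The snippet above uses the Apache 2.2 access-control directives (Order/Allow/Deny/Satisfy), which are deprecated in Apache 2.4. A rough equivalent using the newer Require syntax might look like this (an untested sketch - the path and host are placeholders, as in the original snippet):

```apacheconf
AuthUserFile /your_path/.htpasswd
AuthName "Locked Test"
AuthType Basic

# Ask for credentials only when the request arrives via the protected host
<If "%{HTTP_HOST} == 'SUB.DOMAIN.TLD'">
    Require valid-user
</If>
<Else>
    Require all granted
</Else>
```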

Simple TCP/IP client/server application

For a university lecture I had to write a simple application demonstrating client/server communication over TCP/IP. I know there are many demos on the web - and now there is one more ;-)


Below I post the code, and at the end of the post you will find a ZIP file with the project - have fun.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace Server
{
    class Program
    {
        const int port = 8001;
        const string ip = "127.0.0.1"; // placeholder - use your server address
        const int maxBuffer = 100;

        static void Main(string[] args)
        {
            try
            {
                IPAddress ipAddress = IPAddress.Parse(ip);
                TcpListener tcpListener = new TcpListener(ipAddress, port);
                tcpListener.Start();

                Console.WriteLine(string.Format("The server is running at port {0}..", port));
                Console.WriteLine(string.Format("The local end point is: {0}", tcpListener.LocalEndpoint));

                Console.WriteLine("\nWaiting for connection..");
                using (Socket socket = tcpListener.AcceptSocket())
                {
                    Console.WriteLine(string.Format("Connection accepted from: {0}", socket.RemoteEndPoint));

                    byte[] receiveBuffer = new byte[maxBuffer];
                    int usedBuffer = socket.Receive(receiveBuffer);

                    for (int i = 0; i < usedBuffer; i++)
                        Console.Write(Convert.ToChar(receiveBuffer[i]));

                    Console.WriteLine("\n\nSent acknowledgement");
                    socket.Send(new ASCIIEncoding().GetBytes("The string was received by the server."));
                }
                tcpListener.Stop();
            }
            catch (Exception e)
            {
                Console.WriteLine(string.Format("Error: {0}", e.StackTrace));
            }
        }
    }
}

using System;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.IO;

namespace Client
{
    class Program
    {
        const int port = 8001;
        const string ip = "127.0.0.1"; // placeholder - use your server address
        const int maxBuffer = 100;

        static void Main(string[] args)
        {
            try
            {
                using (TcpClient tcpClient = new TcpClient())
                {
                    tcpClient.Connect(ip, port);

                    Console.Write("\nEnter the string to be transmitted: ");
                    String inputString = Console.ReadLine();
                    Stream networkStream = tcpClient.GetStream();

                    byte[] sendBuffer = new ASCIIEncoding().GetBytes(inputString);
                    networkStream.Write(sendBuffer, 0, sendBuffer.Length);

                    Console.WriteLine("Receive acknowledgement from server..");
                    byte[] receiveBuffer = new byte[maxBuffer];
                    int k = networkStream.Read(receiveBuffer, 0, maxBuffer);

                    for (int i = 0; i < k; i++)
                        Console.Write(Convert.ToChar(receiveBuffer[i]));
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(string.Format("Error: {0}", e.StackTrace));
            }
        }
    }
}

Work with Exchange Web Services

The following code cost me a lot of time. It was very hard to find the pieces and put them together, so I hope this post saves a few of you the trouble ;-)

The Problem

The problem was that I needed to access different Exchange accounts via Exchange Web Services (EWS) and read out the mailbox size.


To do this, I first used a view-only account in Active Directory that I enabled for impersonation by typing the following code into the Exchange Shell:

Get-ExchangeServer | where {$_.IsClientAccessServer -eq $TRUE}
| ForEach-Object {Add-ADPermission -Identity $_.distinguishedname -User (Get-User -Identity User1 | select-object)
    .identity -extendedRight ms-Exch-EPI-Impersonation}

You can find an explanation to this code on MSDN.


Then I set up the connection to EWS:

// Certificate validation: always return true (accepts any certificate)
ServicePointManager.ServerCertificateValidationCallback =
    delegate(
        Object obj,
        X509Certificate certificate,
        X509Chain chain,
        SslPolicyErrors errors)
    {
        return true;
    };

// Set up the service connection
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
service.Credentials = new NetworkCredential("viewadmin", "password", "FQDN");
service.AutodiscoverUrl("[email protected]");


Thirdly, I used impersonation to connect to another mailbox account:

// Impersonate
service.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "[email protected]");


And finally I requested the folder size with this nice little piece of code:

private static readonly ExtendedPropertyDefinition PidTagMessageSizeExtended
                        = new ExtendedPropertyDefinition(0xe08, MapiPropertyType.Long);

/// <summary>
/// Gets the size of the mailbox in kilobytes.
/// </summary>
/// <param name="service">The ExchangeService object.</param>
/// <returns>Returns the used kilobytes in double.</returns>
public static double GetMailboxSize(ExchangeService service)
{
    var offset = 0;
    const int pagesize = 12;
    long size = 0;

    FindFoldersResults folders;
    do
    {
        folders = service.FindFolders(WellKnownFolderName.MsgFolderRoot,
                                      new FolderView(pagesize, offset, OffsetBasePoint.Beginning)
                                      {
                                          Traversal = FolderTraversal.Deep,
                                          PropertySet =
                                              new PropertySet(BasePropertySet.IdOnly, PidTagMessageSizeExtended)
                                      });

        foreach (var folder in folders)
        {
            long folderSize;
            if (folder.TryGetProperty(PidTagMessageSizeExtended, out folderSize))
                size += folderSize;
        }
        offset += pagesize;
    } while (folders.MoreAvailable);

    return size;
}

I hope I could help you. If you have improvements to the code, don't hesitate to comment ;-)


One of my fellow students has just offered to let me publish his seminar paper on software reengineering. Many thanks to Andre Ufer - enjoy reading ;-)


This seminar paper aims to give an overview of the discipline of software reengineering. To achieve this, its individual facets are explained, and it is distinguished from techniques that pursue similar goals but operate at a higher or lower level of abstraction. The term software reengineering itself is also explained and examined for its different meanings. The paper also presents tools that can automate or simplify reengineering. Many of these tools are research results, created in projects at universities or in industrial research, but it was also examined whether tools from the open-source world exist. In addition, best practices are described with which reengineering projects have already been carried out successfully. A full economic analysis was not performed, but some aspects of it are covered.


Setting up a Mac VM inside Windows 7

Because of some changes in my company I now have to code for iOS, which is why I need to learn Objective-C and the Cocoa frameworks used for developing iOS software. So tonight I set up a virtual machine with Mac OS X Lion 10.7.2 to play around a little with Xcode - the preferred IDE.

Until now I thought setting up a Mac VM was very difficult, but it was actually very easy and I didn't have any problems. To help all of you out there who would like to do the same, here is my how-to ;-)

For this how-to I assume that you have already played around with VMware and also know how to open a terminal window in Mac OS X.

2. Getting the software

3. Prepare

  • Execute the VMware unlocker (Mac OS X Lion VMware Files.exe); it extracts the unlocker and a predefined VM.
  • Your directory should now look like this (the Mac OS X update to 10.7.2 isn't listed - sorry!):
  • Now, if you have installed the Workstation, run windows.bat in the directory VMware Workstation Unlocker - Windows. If you use Fusion, Player or Linux, take the appropriate folder ;-)
  • A little bit of intel - what are you actually installing? This is in the windows.bat:
 net stop vmauthdservice
"%~dpn0_32.exe" %*
net start vmauthdservice

The windows.bat stops the VMware service, installs a patch, and then starts the service again. At this point you have to trust that the patch does what you expect and isn't a virus..

  • Now go to the Mac OS X Lion folder and double-click Mac OS X Lion.vmx, which should start VMware - likewise you can start VMware first and browse for the file.
  • Now you see the VM in VMware:


  • Edit the VM settings and add another (existing) hard drive, the Mac OS X Lion Installer.vmdk that you downloaded earlier - around 4.12 GB, remember?
  • Also edit the CD device and add the darwin_snow.iso
  • Now it should look like this:


4. Installation

  • Start the VM!
  • Select “I moved it”
  • If VMware asks you to repair the hard drive, let VMware do it.
  • Now follow the Mac OS X instructions for installing the OS.
  • When you're done and have reached the Mac desktop, open the CD drive of your VM and install the VMware Tools. After that you have to reboot. From then on, the resolution will automatically fit the screen size, and a shared drive shows up on your Mac desktop. If you now set up a share in your VMware Workstation, you can reach it from within the VM via that shared drive - cool, isn't it :-)
  • Now shut down the VM, make a snapshot, add a sound card (in the settings) and power it on again - we will now update the VM to 10.7.2!

5. Update to Mac OS X Lion 10.7.2

  • After booting you have to open a terminal window in the Mac VM.
  • We now back up a file - the dot at the end is not accidental!
cp -r /System/Library/Extensions/AppleLSIFusionMPT.kext .
  • Copy the Mac OS X update to 10.7.2 to your shared drive and install it within the VM. Don't reboot!
  • Now we remove the AppleLSIFusionMPT.kext that was created by the installation and replace it with our backup:
sudo rm -rf /System/Library/Extensions/AppleLSIFusionMPT.kext
sudo cp -r AppleLSIFusionMPT.kext /System/Library/Extensions
  • Now reboot the VM, copy the EnsoniqAudioPCI_v1.0.3_Lion.pkg to the share and install it within the VM. After a reboot you should have working sound :-)
  • If your screen resolution no longer changes dynamically, it's because of the system update; you can fix it by installing the VMware Tools again ;-)

6. Clean up

  • Shut down your VM. Remove the second hard drive and the mounted CD ISO; you do not need them any more.
  • Take a final snapshot!
  • Now you hopefully have a working Mac OS X Lion 10.7.2 VM in the Mac OS X Lion directory. Feel free to move the VM to another location on your hard drive and remove or back up the installation files.

7. Enjoy


8. Edit: Update to OS X Lion 10.7.3

  • To update your system to 10.7.3 you have to download the update image and then go back to step 5.
  • After installing 10.7.3 the sound and VMware drivers should still work, so you can skip those two steps ;-)

Listing a directory structure in PHP

This script dates from my old PHP days. I currently use it, in a modified form, and thought maybe someone else can use it too ;-)

   This little script lists the directory structure and links its contents.
   Have fun with it! - Regards, jEns

// Fallback for old PHP versions: provide scandir() if it does not exist
if (!function_exists('scandir')) {
	function scandir($directory, $sorting_order=0) {
		if(!is_dir($directory)) {
			return false;
		}
		$files = array();
		$handle = opendir($directory);
		while (false !== ($filename = readdir($handle))) {
			$files[] = $filename;
		}
		closedir($handle);

		if($sorting_order == 1) {
			rsort($files);
		} else {
			sort($files);
		}
		return $files;
	}
}

function ordnerinhalt($folder='.') {
	$content = "";

	foreach(scandir($folder) as $file) {
		// Skip hidden files
		if($file[0] != '.') {
			if(is_dir($folder.'/'.$file)) {
				$folderArray[] = $file;
			} else {
				$fileArray[] = $file;
			}
		}
	}

	// Output the folders first...
	if(isset($folderArray)) {
		foreach($folderArray as $row) {
			$content .= '<b>'.$row.'</b><br />';
			// Indent subfolders to the right
			$content .= '<div style="padding-left:10px;color:#afafaf">';
			// Recursive call
			$content .= ordnerinhalt($folder.'/'.$row);
			$content .= '</div>';
		}
	}

	// ...then the files
	if(isset($fileArray)) {
		foreach($fileArray as $row) {
			// Link the files
			$content .= '<a href="'.$folder.'/'.$row.'">'.$row.'</a><br />';
		}
	}

	return $content;
}

echo ordnerinhalt();

Security mechanisms in networks

Another paper from the data security lecture, this time on security mechanisms in networks, written by Yassin Uddin and Jonas Bartusch.

Abstract & Introduction

Abstract — The growing Internet euphoria of recent years often makes people forget that every computer connected to an open network like the Internet is, by its very nature, a target for hackers and crackers. This article was written as a seminar paper in the AI2009 program at the Duale Hochschule Stuttgart, Campus Horb, and is meant to show home users and administrators which risks computers and data on the network are exposed to and how to protect them against potential attacks.

This article is divided into two sections. The first deals with possible attacks and the goals behind them. The second explains which security mechanisms protect a system and discusses their advantages and disadvantages.



This paper also comes from the "Objektorientierte Software Engineering" (object-oriented software engineering) lecture, written by Tobias Maier.

Abstract & Introduction

Abstract - This paper explains the idea behind JavaScript, introduces the most common JavaScript frameworks, and compares jQuery, MooTools, Prototype and YUI using several practical examples, such as dynamically reloading page content or various animations.

Introduction - The term Web 2.0 has shaped the jargon around the World Wide Web for several years. It emerged sometime around 2003. Web 2.0 stands for a new era in how the Internet is used. The version number is meant to set it apart from the previous WWW - version 1.0, so to speak.

While websites until then mostly consisted of static HTML pages edited by a fixed circle of authors, the dynamic editing of content increasingly moved to the foreground. Users of a website could, and were supposed to, publish content themselves. Guest books, forums, weblogs and wikis sprang up everywhere. Content management became a central concept: website editors no longer needed extensive programming skills, but could edit their pages with web-based interfaces and see their changes, or put them online, immediately.


Usability Engineering

This paper comes from the "Objektorientierte Software Engineering" lecture, written by Thomas Tigges and Jens Willmer. (Some of the references are incomplete.)

Summary & Introduction

Summary - Usability engineering is the field concerned with developing an optimal human-machine interface. The main focus of such an interface is usability: it should be designed as efficiently as possible for the intended tasks and have an appealing, familiar look, so that users enjoy working with it and find their way around quickly. In addition, the interface must comply with given standards and guidelines.

Introduction - Usability engineering involves a multitude of tasks that span the entire duration of a project. Usability cannot be treated as an isolated task within the project; rather, it is an essential part of development. It runs in parallel with the programming of the product and has a direct impact on a company's reputation, because the usability of the product being sold affects the productivity and satisfaction of its users.

To make the development of the user interface and the associated usability as efficient as possible, usability engineering is divided into several process phases, which are examined more closely in the following sections.



In recent weeks, our data-security lecture gave us the task of writing and presenting papers on various topics from that field. So that the work does not gather dust in the farthest corner of my hard drive, I am posting my results here.


This paper deals with methods of secure software development (using the Microsoft Security Development Lifecycle as an example) as well as known vulnerabilities in software such as buffer overflows or format string attacks. A buffer overflow vulnerability is illustrated with a self-written C++ application, and the exploitation of this vulnerability is described with the help of a second program.
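The C++ demo application from the paper is not reproduced here. As a contrast, here is a minimal sketch of my own (the class name OverflowDemo is made up) showing why the same off-by-one bug cannot silently corrupt memory in managed C#: the runtime bounds-checks every array access.

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        byte[] buffer = new byte[8];
        try
        {
            // off-by-one loop: index 8 is one past the end of the buffer
            for (int i = 0; i <= 8; i++)
                buffer[i] = 0x41;
        }
        catch (IndexOutOfRangeException)
        {
            // in native C++ this write would silently overwrite adjacent memory
            Console.WriteLine("Overflow attempt detected and stopped.");
        }
    }
}
```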


How to use Eventhandlers in C#

using System;

public delegate void PriceChangedHandler(decimal oldPrice, decimal newPrice);
public delegate void MyFunctionhandler(string hand, int p);

class Program
{
    static void Main()
    {
        // init stock
        Stock s1 = new Stock();

        // add handlers
        s1.PriceChanged += new PriceChangedHandler(s1_PriceChanged);
        s1.MyFunction += new MyFunctionhandler(s1_MyFunction);

        // change something
        s1.Price = 12;
    }

    public static void s1_PriceChanged(decimal a, decimal b)
    {
        Console.WriteLine("Old value: " + a + " - new value: " + b);
    }

    public static void s1_MyFunction(string s, int i)
    {
        Console.WriteLine(s + " - number: " + i);
    }
}

class Stock
{
    decimal price;

    public event PriceChangedHandler PriceChanged;
    public event MyFunctionhandler MyFunction;

    public decimal Price
    {
        get { return price; }
        set
        {
            if (price == value) return;
            // if a handler was added, raise the event
            if (PriceChanged != null)
                PriceChanged(price, value);
            price = value;
        }
    }

    public void myFunction(int i)
    {
        if (i == 2 && MyFunction != null)
            MyFunction("left", 3);
    }
}
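As an aside (not part of the original example), the same pattern can be written with the framework's generic EventHandler&lt;TEventArgs&gt; instead of hand-rolled delegate types; the names PriceChangedEventArgs, Stock2 and Demo below are mine:

```csharp
using System;

class PriceChangedEventArgs : EventArgs
{
    public decimal OldPrice { get; set; }
    public decimal NewPrice { get; set; }
}

class Stock2
{
    decimal price;

    // no custom delegate type needed
    public event EventHandler<PriceChangedEventArgs> PriceChanged;

    public decimal Price
    {
        get { return price; }
        set
        {
            if (price == value) return;
            if (PriceChanged != null)
                PriceChanged(this, new PriceChangedEventArgs { OldPrice = price, NewPrice = value });
            price = value;
        }
    }
}

class Demo
{
    static void Main()
    {
        Stock2 s = new Stock2();
        // lambda subscription instead of a named handler method
        s.PriceChanged += (sender, e) =>
            Console.WriteLine("Old value: " + e.OldPrice + " - new value: " + e.NewPrice);
        s.Price = 12; // prints "Old value: 0 - new value: 12"
    }
}
```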

Threading in C# and WPF

Here is the code for opening new WPF windows, each in a thread of its own. This was one of our tasks in the operating-systems lecture.


using System.Threading;
using System.Windows;

namespace Threading
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            Thread thread = Thread.CurrentThread;
            this.DataContext = new { ThreadId = thread.ManagedThreadId };
        }

        private void OnCreateNewWindow(object sender, RoutedEventArgs e)
        {
            // each window needs its own STA thread with a running dispatcher
            Thread thread = new Thread(() =>
            {
                MainWindow w = new MainWindow();
                w.Show();
                System.Windows.Threading.Dispatcher.Run();
            });
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
        }
    }
}



<Window x:Class="Threading.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="132" Width="210">
    <StackPanel>
        <StackPanel Orientation="Horizontal">
            <TextBlock Text="Thread's ID is "/>
            <TextBlock Text="{Binding ThreadId}"/>
        </StackPanel>
        <Button Click="OnCreateNewWindow" Content="Create new Window" />
    </StackPanel>
</Window>

How to use the BackgroundWorker Thread in C#

Pils is executed in a new thread; when it finishes, bw_RunWorkerCompleted is called and runs on the original thread.

private BackgroundWorker bw = new BackgroundWorker();

public Form1()
{
    InitializeComponent();

    bw.WorkerReportsProgress = true;
    bw.WorkerSupportsCancellation = true;
    bw.ProgressChanged += new ProgressChangedEventHandler(bw_ProgressChanged);
    bw.DoWork += new DoWorkEventHandler(bw_DoWork);
    bw.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bw_RunWorkerCompleted);
}

public void buttonStart_Click(object sender, EventArgs e)
{
    if (bw.IsBusy != true)
        bw.RunWorkerAsync(12); // start the worker thread
}

public int Pils(int i)
{
    bw.ReportProgress(70, "In the middle of the work..");
    bw.ReportProgress(90, "Returning the result..");
    return (2 * i);
}

private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    bw.ReportProgress(20, "Waiting for cancel..");
    if (bw.CancellationPending == true)
    {
        e.Cancel = true;
    }
    else
    {
        bw.ReportProgress(50, "Starting process..");
        e.Result = Pils((int)e.Argument);
    }
}

private void bw_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Cancelled == true)
        textBox1.Text = "Canceled!";
    else if (e.Error != null)
        textBox1.Text = "Error: " + e.Error.Message;
    else
        textBox1.Text = e.Result.ToString();
}

private void buttonCancel_Click(object sender, EventArgs e)
{
    if (bw.WorkerSupportsCancellation == true)
        bw.CancelAsync(); // request cancellation of the running worker
}

private void bw_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    listBox1.Items.Add(e.ProgressPercentage.ToString() + "% - " + (e.UserState as String));
}
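For comparison, here is a sketch of my own (not part of the original post; it assumes .NET 4.5+ and C# 7.1+ for async Main) of the same work-then-report-back flow using Task.Run and Progress&lt;T&gt;:

```csharp
using System;
using System.Threading.Tasks;

class TaskDemo
{
    static async Task Main()
    {
        // Progress<T> marshals reports back to the creating context;
        // in a plain console app the callbacks run on the thread pool,
        // so their output order relative to the result is not guaranteed.
        IProgress<string> progress = new Progress<string>(msg => Console.WriteLine(msg));

        int result = await Task.Run(() =>
        {
            progress.Report("Starting process..");
            return 2 * 12; // the "work", like Pils(12) above
        });

        Console.WriteLine("Result: " + result);
    }
}
```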

Active Directory Scanner


Today I was productive! Here is a script that lets you configure your proxy conveniently:

@echo off
:start
echo  Proxy settings
echo ~~~~~~~~~~~~~~~~~~~~~
echo Enable proxy:   1
echo Disable proxy:  2
echo Proxy status:   3
echo Help:           4
echo Quit program:   5
set /p menue="Please enter an action number: "
if "%menue%" == "1" goto activate
if "%menue%" == "2" goto deactivate
if "%menue%" == "3" goto show
if "%menue%" == "4" goto help
if "%menue%" == "5" exit
goto start

:help
echo Written by: Jens Willmer ([email protected])
echo If the program does not work on Windows 7, please try
echo starting the command prompt with administrator rights.
goto start

:activate
set /p proxy="Please enter the proxy name: "
if "%proxy%" == "" goto activate

:port
set /p port="Please enter the port: "
if "%port%" == "" goto port
set #=%port%
set length=0

:loop
rem strip one character per pass to count the length of the port string
if defined # (set #=%#:~1%&set /A length += 1&goto loop)
if %length% GTR 5 goto error
netsh winhttp set proxy %proxy%:%port%
goto start

:error
echo Port is too long, please enter a shorter port.
goto port

:deactivate
netsh winhttp reset proxy
echo Proxy has been disabled!
goto start

:show
netsh winhttp show proxy
goto start

WLAN Access Point in Windows

Windows 7 comes with a built-in virtual Wi-Fi network adapter that makes it possible to turn your computer into an access point. To make this convenient, I wrote a small command-line program for you.

Important: after the configuration, open the settings of the LAN adapter that is to provide the connection and, under the Sharing tab, select the newly configured wireless connection for shared use. And always remember to start the CMD with maximum (administrator) rights ;-)

Have fun ;-)

@echo off
:start
echo   WLAN Access Point
echo ~~~~~~~~~~~~~~~~~~~~~
echo Enable WLAN:     1
echo Disable WLAN:    2
echo Configure WLAN:  3
echo Help:            4
echo Quit program:    5
set /p menue="Please enter an action number: "
if "%menue%" == "1" goto activate
if "%menue%" == "2" goto deactivate
if "%menue%" == "3" goto config
if "%menue%" == "4" goto help
if "%menue%" == "5" exit
goto start

:help
echo Written by: Jens Willmer (info[at}
echo If the program does not work on Windows 7, please try
echo starting the command prompt with administrator rights.
goto start

:config
set /p pw="Please enter the WLAN password: "
if "%pw%" == "" goto exceptPW
goto setSSID

:exceptPW
echo The WLAN will not be protected by a password!
echo Re-enter the password:      1
echo Continue without password:  2
echo Back to the menu:           3
echo Quit program:               4
set /p exceptPW="Please enter an action number: "
if "%exceptPW%" == "1" goto config
if "%exceptPW%" == "2" goto setSSID
if "%exceptPW%" == "3" goto start
if "%exceptPW%" == "4" exit
goto exceptPW

:setSSID
set /p ssid="Please enter the SSID: "
if "%ssid%" == "" goto setSSID
if "%pw%" == "" goto openWlan
goto secureWlan

:openWlan
netsh wlan set hostednetwork key= keyUsage=persistent ssid=%ssid%
echo The WLAN access point with the SSID: %ssid% has been set up.
goto start

:secureWlan
netsh wlan set hostednetwork key=%pw% keyUsage=persistent ssid=%ssid%
echo The WLAN access point with the SSID: %ssid% has been set up.
goto start

:activate
netsh wlan start hostednetwork
echo To use the Internet connection of another network adapter,
echo that connection must be shared with our network adapter
echo in the other adapter's settings!
goto start

:deactivate
netsh wlan stop hostednetwork
goto start

Domain search

I am currently looking for a new domain. It should be shorter than the old one, and ideally the name should end in "de". For example:

Since I am lazy, I wrote a small script in C# that reads in word lists and checks them against my preferences. I would like to share this little script with you ;-)

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

string path = @"C:\Users\me\Desktop\wordlists\";
string[] filePaths = Directory.GetFiles(path);
List<string> myList = new List<string>();

// read all word lists into one list
foreach (string expPath in filePaths)
    myList.AddRange(File.ReadAllLines(expPath));

// remove duplicates (case-insensitive)
var hash = new HashSet<string>(myList, StringComparer.OrdinalIgnoreCase);

// the actual filtering/sorting
var filtered = from item in hash
               where item.Length > 5 &&      // longer than 5 characters
                     item.Length < 9 &&      // shorter than 9 characters
                     item.EndsWith("de")     // "de" at the end
               orderby item.Length, item     // sort by length, then by name
               select item;

// write the filtered and sorted list to a file
using (StreamWriter file = new StreamWriter(string.Concat(path, "Ergebnis.txt")))
    foreach (string line in filtered)
        file.WriteLine(line);
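To illustrate what the query selects, here is a self-contained variant of my own with a made-up inline word list standing in for the files on disk:

```csharp
using System;
using System.Linq;

class FilterDemo
{
    static void Main()
    {
        // made-up sample words standing in for the word-list files
        string[] words = { "code", "fassade", "freunde", "de", "halde", "gebaeude" };

        var filtered = from item in words
                       where item.Length > 5 &&      // longer than 5 characters
                             item.Length < 9 &&      // shorter than 9 characters
                             item.EndsWith("de")     // ends with "de"
                       orderby item.Length, item     // by length, then alphabetically
                       select item;

        Console.WriteLine(string.Join(", ", filtered));
        // prints "fassade, freunde, gebaeude"
    }
}
```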

Since a few people just asked me about them: here are my two word lists. I found them on the net myself and did not search for long - there are surely better/more extensive ones out there :-)


Setting up a Slitaz server manually

In this post I want to explain how to set up a server manually with the Linux distribution Slitaz.

Components used

My server consists of a NOVA-4899:

  • 300 MHz CPU
  • 256 MB RAM
  • CompactFlash card reader (internal) + 128 MB CF card
  • CD drive

The Slitaz ISO used is:

slitaz-3.0-base.iso [8.0M] - Base system in text mode and including useful commandline tools.


First we burn the live CD. Then we boot from it and log in with username: root and password: root.



fdisk -l

we now list all hard disks and their partitions. In my case the CF card is detected and shown as



Now we move on to partitioning. With

fdisk /dev/hdc

we start fdisk on the disk to be partitioned. Next we create a primary partition for our ext3 file system (ID 83, the default when creating a partition via fdisk) and make it bootable by setting the boot flag.


D - delete a partition (delete)
L - list the known partition type IDs (list)
M - online help (menu)
N - create a new partition (new)
P - print the partition table (print)
Q - quit without changing the partition table (quit)
T - change a partition's type ID (type)
W - write the partition table to disk and exit (write)


After creating the partition, we format it (hdc1 is the first partition on disk hdc): mkfs.ext3 /dev/hdc1


Then we mount the partition and the CD drive:

mount /dev/hdc1 /mnt/
mount /dev/cdrom /media/cdrom

We create the boot directory on our new disk, copy the system files onto it, change our working directory to the disk, extract the system data, and delete the archive that is no longer needed. (The Tab key auto-completes commands, directories and paths.)

mkdir /mnt/boot
mkdir /mnt/boot/grub

cp -a /media/cdrom/boot/vmlinuz-... /mnt/boot/
cp /media/cdrom/boot/rootfs.gz /mnt/

cd /mnt/
lzma d rootfs.gz -so | cpio -id
rm rootfs.gz init

Installing the bootloader

We use GRUB as the bootloader. It has to be copied from the live system onto the hard disk and then installed there. First, the copy step:

cp /usr/lib/grub/i386-pc/* /mnt/boot/grub/

Next we change into Grub's directory and create the menu.lst. This file tells Grub what to boot; additional systems can also be added here later. (a, Esc and :wq are commands for the text editor named Vi.)

cd /mnt/boot/grub/
vi menu.lst
timeout 5
title Slitaz
root (hd0,0)
kernel /boot/vmlinuz-... root=/dev/hdc1


Then we start Grub and set the disk on which Grub should be installed; in my case this is the first disk and its first partition. We then check with find whether Grub can locate all required files, and finally give Grub the command to install itself.

root (hd0,0)
find /boot/grub/stage1
    found (hd0,0)

setup (hd0)

That's it! Now restart with


and be happy if there are no unexpected errors ;-)

Kill LanDesk

A bat file that terminates LanDesk when the computer is on the company network. Simply move it into the autostart folder. The placeholder "InternErreichbarerFIRMENSERVER" in this script still needs to be adjusted.

@echo off

ping -n 1 InternErreichbarerFIRMENSERVER
if errorlevel 1 goto ende

TASKKILL /F /FI "IMAGENAME eq ScanningProcess.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq SoftMon.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq AVService.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq pds.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq collector.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq issuser.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq rcgui.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq residentAgent.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq tmcsvc.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq ldav.exe" /IM *
TASKKILL /F /FI "IMAGENAME eq policy.client.invoker.exe" /IM *
goto ende2

:ende
echo -
echo You are not on the company network!
echo LANDesk was NOT terminated.
echo -
goto end

:ende2
echo -
echo You are on the company network.
echo LANDesk has been terminated.
echo -

:end