Golang for Java-Coders

The mocking gopher: a fitting mascot for golang

I want to share my experiences when I started programming in Golang from my perspective as a Java Coder.

What is this and why???

So first I wanted to know what Golang is. Rumors from my colleagues put it in one class with C and C++, which is not true at all. Golang is not a low-level language; from an abstraction perspective it is far closer to Java than to C. These rumors probably come from its small set of keywords and control structures, which are a bit reminiscent of C.

When you read or listen to the talk the creators of the language gave at Google, you get a much better idea of what it is and which problems it is supposed to solve. Basically, they wanted a language for their C and C++ coders which is safer and easier, compiles faster, and was built with concurrency in mind.

“Go at Google: Language Design in the Service of Software Engineering” Rob Pike, Google, Inc.

(Video) https://www.infoq.com/presentations/Go-Google
(Transcript) https://talks.golang.org/2012/splash.article

Reading/watching this helped me understand much better what the intentions behind golang were.

Best golang book for Java Coders

The bare minimum you need to program in go. Exactly what you need, not more, not less: a quick introduction to the golang programming language for experienced programmers.


Down the rabbit hole

Golang ships with a lot of nice basics built in, but for medium-sized projects you need more.

Dependency Management (like maven)

Golang has its build and dependency management tool built in; you don’t need any external tool. Unfortunately, the golang dependency management does not use any versioning scheme. Instead you point your program at the master branch of a GitHub (or other) repository. Yes, I am not kidding. Again, this makes more sense from Google’s perspective, where they run all their stuff in one big repo. They probably don’t mind breaking things, since they are able to fix them right away.

There are workarounds like gopkg.in or vendoring (including the other source in a vendor folder in your repo). We used the “govendor” command in one of our projects. Govendor also allows you to ship only a JSON file with the git commit hashes of the libraries you want to include, and to build the vendor folder from that.


Read more here:



Logging (like log4j)

Golang has very basic logging built in, but doesn’t cover log levels, etc. As a Java coder you will want something with more features, e.g. a dedicated logging library.
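To illustrate the gap: the stdlib log package has no notion of levels, so the moment you want suppressible debug output you end up writing something like the following yourself. This is a minimal sketch (the Level/Logger types and method names are my own, not any library’s API); in a real project you would pull in a library instead.

```go
package main

import (
	"log"
	"os"
)

// Minimal leveled-logger sketch on top of the stdlib log package,
// just to illustrate what a real logging library gives you for free.
type Level int

const (
	DEBUG Level = iota
	INFO
	ERROR
)

type Logger struct {
	min Level       // messages below this level are dropped
	l   *log.Logger // the stdlib logger doing the actual output
}

func (lg *Logger) enabled(lvl Level) bool { return lvl >= lg.min }

func (lg *Logger) logf(lvl Level, prefix, format string, args ...interface{}) {
	if lg.enabled(lvl) {
		lg.l.Printf(prefix+format, args...)
	}
}

func (lg *Logger) Debugf(format string, args ...interface{}) {
	lg.logf(DEBUG, "DEBUG ", format, args...)
}

func (lg *Logger) Infof(format string, args ...interface{}) {
	lg.logf(INFO, "INFO ", format, args...)
}

func main() {
	lg := &Logger{min: INFO, l: log.New(os.Stdout, "", log.LstdFlags)}
	lg.Debugf("dropped, below the configured level")
	lg.Infof("shown")
}
```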


There are a lot more golang logging libraries out there, just search for them if you want.


Testing (like JUnit)

Golang has basic testing built in, but for nicer assertions etc. you might want to use helper functions from, e.g., testify.
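To see why assertion helpers are nice to have: with the bare stdlib, every comparison in a test is a hand-written if. Here is a sketch of the kind of one-liner testify gives you (assertEqual is a hypothetical helper for illustration, not the testify API, which uses e.g. assert.Equal):

```go
package main

import (
	"fmt"
	"reflect"
)

// assertEqual compares two values by deep equality and returns a
// descriptive error on mismatch -- roughly what assertion libraries
// wrap into a single call.
func assertEqual(want, got interface{}) error {
	if !reflect.DeepEqual(want, got) {
		return fmt.Errorf("want %v, got %v", want, got)
	}
	return nil
}

func main() {
	// nil means the assertion passed.
	fmt.Println(assertEqual([]int{1, 2}, []int{1, 2})) // <nil>
}
```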


Again, there are a lot more testing frameworks out there.


Mocking (like Mockito)

Mocking in golang is a bit messy. There are some libraries with generators, but I miss Mockito. Until now I mostly built my own mock structs which implement the interface I want to mock.
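Such a hand-rolled mock is straightforward, just verbose: a struct that implements the interface, records its calls and returns a canned answer. A sketch (Greeter, MockGreeter and Welcome are hypothetical names for illustration):

```go
package main

import "fmt"

// The interface we want to mock.
type Greeter interface {
	Greet(name string) string
}

// Hand-rolled mock: records every call and returns a canned answer.
type MockGreeter struct {
	Calls  []string // arguments Greet was called with
	Answer string   // canned return value
}

func (m *MockGreeter) Greet(name string) string {
	m.Calls = append(m.Calls, name)
	return m.Answer
}

// Code under test depends only on the interface, so the mock slots in.
func Welcome(g Greeter, name string) string {
	return ">> " + g.Greet(name)
}

func main() {
	m := &MockGreeter{Answer: "hello"}
	fmt.Println(Welcome(m, "gopher")) // >> hello
	fmt.Println(len(m.Calls))         // 1
}
```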

Update: After digging a bit deeper it is actually possible to generate the needed mock structs quite conveniently using mockery and go generate.


Now you can add go:generate directives above the interfaces for which you need mocks. This will generate a mock in a subpackage mocks:

//go:generate $GOPATH/bin/mockery -name MyInterface
type MyInterface interface {
	DoSomething() string
}

Then run go generate.

$> go generate your_module...

Now in your test you can mock functions similarly to Mockito, e.g.

myMock := &mocks.MyInterface{}
myMock.On("DoSomething").Return("It works!")



Résumé after 6 months of Golang: the Good, the Bad, the Ugly

Google developed golang to fulfill very specific needs:

  • compile speed: Golang has to compile enormous amounts of code very fast
  • all your code: Google mostly uses its own libraries. If they need a change, they make it in all dependent projects; if they break something, they fix it in all projects. This is a closed ecosystem. They can just pull the master of a repo and be happy with it
  • easy-to-learn language with no surprises: Google intentionally built a language reduced to a minimum of features, to meet their compile speed goal and to have code that is as unfancy and obvious as possible, at the cost of much greater verbosity. Google’s motto: don’t refactor, rewrite
  • easy threadsafe parallelism (on the same host, NOT over the network) by implementing a kind of actor model (goroutines, channels)
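The last point deserves a small example: goroutines communicate over channels instead of sharing memory, so no locks are needed. A sketch of the classic worker pattern (the squaring workload is my own toy example, not from the talk):

```go
package main

import "fmt"

// worker receives jobs on one channel and sends results on another.
// No shared mutable state, no locks -- the channels do the handover.
func worker(jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	// Start three concurrent workers with a plain `go` statement.
	for w := 0; w < 3; w++ {
		go worker(jobs, results)
	}

	// Feed five jobs and close the channel so the workers terminate.
	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs)

	// Collect the five results.
	sum := 0
	for i := 0; i < 5; i++ {
		sum += <-results
	}
	fmt.Println(sum) // 1+4+9+16+25 = 55
}
```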

Annoying language phenomena

As a Java developer, the go world feels awfully verbose. After 6 months these are my pain points (biggest pain first):

  • Explicit error handling. While this might be ok if you write small servers, for any growing golang program it quickly explodes into an orgy of if err != nil { return err } . You get to the point where you just want to smash the keyboard against the monitor if you have to write another one of these blocks. I much prefer try-catch (Java, Python) to this madness.
    Rust eases this problem with the ? shortcut: https://m4rw3r.github.io/rust-questionmark-operator . Golang offers nothing to ease your pain.
  • No standard collection functions. How often I longed for a contains, map or filter. Instead: “why should we add it, you can just write a for loop very quickly” <- I hate you, golang
  • No dependency injection frameworks. Welcome to good old handcrafted constructor DI: a big factory wiring everything together in the right order. You will miss Spring DI.
  • Interfaces: Golang has duck-type-style interfaces, which means an interface doesn’t have any dependency on the implementing struct, but it also means you simply don’t know which structs implement an interface. Personally I don’t like this too much, as for me the advantage of sparse interfaces is negated by the overall confusion it causes in bigger codebases.
  • Cross-cutting concerns are usually painful to implement. We added swagger to our software and had to rewrite the routing so the swagger library creates the http Handler. Some things feel unnecessarily complicated in golang.
  • No generics (just google it if you want to know more)
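To make the first pain point concrete, here is what a hypothetical three-step pipeline looks like when every step can fail — in Java this would be one try-catch around three calls:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// Each step returns (value, error), and every call site needs its
// own if err != nil block.
func parse(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, err
	}
	return n, nil
}

func validate(n int) (int, error) {
	if n < 0 {
		return 0, errors.New("negative input")
	}
	return n, nil
}

func process(s string) (int, error) {
	n, err := parse(s)
	if err != nil {
		return 0, err
	}
	n, err = validate(n)
	if err != nil {
		return 0, err
	}
	return n * 2, nil
}

func main() {
	fmt.Println(process("21")) // 42 <nil>
}
```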

What I like about golang

  • There are very few surprises in this language. It is fast to learn for new coders, and most code is easy to understand.
  • The builtin formatting and package management tools make it easy to get started.
  • A friend of mine likes to code with vim and he would be happy with go. Although I personally prefer the Goland IDE from Jetbrains, the language is simple enough to code it without IDE support.
  • It is an easy to use typed language.

TL;DR: There is no silver bullet. Know why you want golang, be aware of the downsides, and if you come from the Java world: consider whether Kotlin wouldn’t do the job better.

Of course this is all my very own personal opinion. If you disagree, I would be glad to hear your story in the comments.



Migrating from Vaadin 7 to Vaadin 8

Hohoho, Vaadin 8 is out!

Last night I couldn’t sleep and having read the release announcement of Vaadin 8 on the Vaadin Blog ( https://vaadin.com/blog/-/blogs/vaadin-framework-8-is-out ) I was curious to give the new version a try.

For quite some time I had ignored the reminders to migrate my addon the ComponentRenderer ( https://vaadin.com/directory#!addon/componentrenderer ) to Vaadin 8, so I thought this is a good start.

My goal was just to run the component-renderer and demo application with Vaadin 8, using the compatibility layer to avoid bigger code changes for now. A rewrite embracing Vaadin 8 concepts is planned.



If you plan to use the automatic migration tool (explained further down in the text), make sure you don’t have com.vaadin.ui.* imports in your code. Configure your IDE not to automatically turn several imports from one package into a star import. Then search your code for any star imports, remove them and import every class explicitly. The migration tool will later change these classes to Vaadin 7 compatibility imports and can’t work with * imports.

Updating POM

The next step was to update my maven pom.xml to the new version. I had to change vaadin-server and vaadin-client to the corresponding compatibility packages vaadin-compatibility-server and vaadin-compatibility-client.

        ... more ...

First I made the mistake of also changing vaadin-themes into vaadin-compatibility-themes. But I am using the valo theme, and that one is still in vaadin-themes. So if you get the error that valo is not found, check whether you accidentally made the same mistake.

[ERROR] Feb 24, 2017 2:42:08 AM com.vaadin.sass.internal.handler.SCSSErrorHandler severe
[ERROR] SEVERE: Import '../valo/valo' in '/data/jonas/privat/projekte/vaadin/widgets/componentrenderer-release/componentrenderer-demo/src/main/webapp/VAADIN/themes/demotheme/styles.scss' could not be found
[ERROR] Feb 24, 2017 2:42:08 AM com.vaadin.sass.internal.handler.SCSSErrorHandler severe
[ERROR] SEVERE: Mixin Definition: valo not found

Updating Widgetset

Also change your widgetset from com.vaadin.DefaultWidgetSet to the compatibility widgetset com.vaadin.v7.Vaadin7WidgetSet (search your whole code for it, you might have it defined in multiple places).

Vaadin 7

    <inherits name="com.vaadin.DefaultWidgetSet"/>

Vaadin 8 with v7 compatibility layer

    <inherits name="com.vaadin.v7.Vaadin7WidgetSet"/>

Rewrite imports using the migration tool

You probably already use the vaadin-maven-plugin anyway to build, so you can use the awesome Vaadin 8 compatibility upgrade mechanism (see github page of migration-tool). Just run the following command and it will automatically change all your imports to the compatibility layer.

mvn vaadin:upgrade8

Cleanup errors

All components are now immediate, and AbstractComponent::setImmediate(boolean immediate) has been removed, so I had to remove the calls to this method from my code as well.

Check Memory of Widgetset Compiler

Make sure you give enough memory to the widgetset compiler (mine was at 512MB and I had to increase it to 1024MB to get rid of this error):

[INFO] --- vaadin-maven-plugin:8.0.0:compile (default) @ componentrenderer-demo ---
[INFO] auto discovered modules [de.datenhahn.vaadin.componentrenderer.demo.DemoWidgetSet]
[INFO] Compiling module de.datenhahn.vaadin.componentrenderer.demo.DemoWidgetSet
[ERROR] Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
[ERROR] 	at java.util.Arrays.copyOfRange(Arrays.java:3664)
[ERROR] 	at java.lang.String.<init>(String.java:201)

Increase the memory in the vaadin-maven-plugin configuration in your pom.xml:

                    <extraJvmArgs>-Xmx1024M -Xss1024k</extraJvmArgs>

Build the project with Vaadin 8

Preparations done; now rebuild your project (e.g. mvn clean install). You might have to fix minor build errors, but for me the overall migration worked very well.

mvn clean install

WTF: physical scrum boards


On every scrum project I have the same discussion with the scrum master: why on earth would we want to use a physical scrum board? I then hear some cultish praises of awesomeness in favour of the physical scrum board and total neglect of any arguments against it. That goes so far that the scrum masters seem to be experts on office supply brands and field-tested ripping-off techniques for post-its. People start writing information (about two words from a 200-word description) from a ticket system on a post-it and taking photos of the physical scrum board to re-digitize it.

  • But, …. haptic feedback!!!!
    Unless we start building software solely using lego bricks, I think programmers can handle the abstractness of data on a screen. There even are f….in touch screen TVs in most offices.
  • But, …. the big picture!!!!
    Nothing makes it easier to see what you planned to do than diving into a sea of 10x10cm paper pieces with two words in bad handwriting.
  • But, …. everyone can see it!!!!
    Unless there is a wall in between, or the Atlantic Ocean.
  • But, …. you shouldn’t define stuff upfront anyway, speak with people!!!!
    Yes, the biggest problem is always that specifications are too clear, and it is not at all a problem that most people have the long-term memory of a goldfish.
  • But, …. SCRUM!!!!
    When it comes to SCRUM, people act like in the tale “The Emperor’s New Clothes”. No one speaks ill of SCRUM, because questioning it makes you the village idiot who just does not understand. There is an inexplicable fear of throwing away the stupid stuff and keeping the good stuff.

In all companies I worked for, there were ticketing systems (Jira, Redmine, etc.) where bugs were tracked and the feature backlog was maintained. Detailed bug reports or feature requests resided in these systems. Ok, not detailed from the start, but after some clarifications, all visible in the ticket history, the topics were quite clear.

Now, instead of using perfectly fine digital scrum boards like the Jira agile plugin or redmine-backlogs, which have the classical swimlane views and automatically generate all kinds of statistics out of the box, some coked-up scrum masters start writing down ticket numbers (if you are lucky) and parts of the ticket title (most times not even the whole title fits) on little yellow paper snippets and glue them to a wall. Then, after every sprint meeting, they have to note the current progress, do the sprint calculations and publish them somewhere. As this step is extremely unnerving, most scrum masters I worked with just didn’t do it.

As a developer you have to frequently update ticket contents, and of course read them to know what you are doing. So what you end up doing is writing down the ticket numbers from the post-its and looking them up in the ticket system.

So the question is: why would a sane person suggest doing stuff like that? I have several explanations:

  • no paper trail: (paper trail as in “documented state”, because ironically you will have lots of paper 😉 )
    Battles with upper management may cost a lot of time and decrease developer happiness. So having some overcontrolling boss crawl through the tickets of the last couple of months and argue about estimation points, etc. is an obstacle to good software development. Post-its are like snowflakes: every one is different, and they melt when you hold them in your hands (or let’s say after a sprint). No paper trail, no discussions. But countering bad management with bad project documentation is not a good plan. The scrum masters should take one for the team and keep these discussions away from the developers, without obfuscating the whole development process.
  • perceived transparency: Anytime someone asks about the development state, you can say: look, it’s all here, just see our beautiful loom on that wall. The asking person will see colorful papers and a wall, things they understand, but not the meaning behind them. For the developers who have to work with it, it’s the same, but they consult the ticket system. Just explain the digital scrum board on a beamer to a non-digital person, or give them a percentage value (60% done is something everyone understands).
  • lots of manual non-computer action for the scrum master: Sometimes I have the feeling scrum masters need something to do between the meetings. To show they are really doing something, that thing should ideally be visible. What would be better than some non-automated artwork which you glue to the office wall? No one else wants to do that anyway. The real job of getting familiar with a complex ticketing system and tuning it perfectly to the team’s needs sounds like a harder job.
  • cult-like devotion to the word of the lord: The pattern I encountered is always the same: physical scrum board, no discussion about that, showing the correct rip-off technique for post-its, etc. I assume that is something they get taught at scrum-master school and feel the need to follow without questioning.

So what I propose: if you already have a ticket system which supports digital scrum boards, start with a digital scrum board and do your standups in a conference room with a beamer. Be honest about what a physical board just can’t deliver. If more than 50% of the people then think it is a good idea to switch to a physical scrum board, switch.

I would like to invite anyone to discuss in the comments section. Especially die hard physical board lovers, I would love to see some good arguments (backed by real life stories) for physical scrum boards.

Digital Scrum-Boards

If you know other good digital scrum boards, please add a comment and I will add the link.

Jira Agile



Redmine Backlogs

Plugin for the redmine ticket system




Integrates with github




Flashing Cyanogenmod 11 on the Samsung Galaxy S3 Neo+ GT-I9301I


He tried to flash his S3 Neo!


Repeatedly I went through a lot of pain flashing my S3 Neo because I did not write down how I did it the last time. But not again!

This tutorial assumes you know what you are doing. You can brick your device, blablabla; read the disclaimers on the firmware sites.


It also assumes:

  • You still have the original firmware on your phone (otherwise you can skip the Heimdall and TWRP steps)
  • Your phone is charged over 50% (better safe than sorry)
  • You have a microSD card in your phone
  • You have a USB cable at hand
  • You use Linux as your OS
  • You made a backup of all important data, ALL DATA WILL BE LOST

I was using Ubuntu 14.04.3 LTS.

Compile Heimdall

I had built Heimdall some months before, so I already had all dependencies installed. If you run into error messages while compiling, keep in mind that you probably have to install some dependencies.

Quote from Linux/README from Heimdall Repo:

1. First make sure you have installed build-essential, cmake, zlib1g-dev,
qt5-default, libusb-1.0-0-dev and OpenGL (e.g. libgl1-mesa-glx and libgl1-mesa-dev).

git clone https://github.com/Benjamin-Dobell/Heimdall
cd Heimdall
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make heimdall

Voilà, you’ve got heimdall.

Download all android images/software you need

I like to get everything together before I start, because it sucks when the mirror you desperately need is down at the wrong moment.

So create a directory where you put all the files, e.g. in your home-directory.

mkdir ~/android_firmware

TWRP (Custom Recovery)

You only need this if you still have the stock ROM on your device.


Download the file from “Odin method”; it is called recovery.tar.md5. Check the md5sum

md5sum recovery.tar.md5

and compare it with the one mentioned on the download mirror page.

Untar the recovery.tar.md5 file

tar -xvvf recovery.tar.md5

The recovery.img file is what you need later.

CyanogenMod 11.0 for the Samsung Galaxy S3 Neo ( GT-I9301I, GT-I9301Q and GT-I9300I )

Get the firmware



and check the md5sum:

md5sum cm-11-20150424-UNOFFICIAL-s3ve3g.zip

Camera Sensor Fix

It freaked me out on my first try that the camera did not work. If that’s the case for you, you might need this fix:



Google Apps Minimal

I did NOT use the GAPPS mentioned on the xda-developers site; they didn’t work for me. During the install I saw something about “Android 5.0” scroll by, so it is probably a version too new for this image.

I used:



Again check the md5sum:

md5sum gapps-kk-20150412-minimal-edition-signed.zip



Put all files on the SD-Card




Flash the TWRP Recovery


Boot into download mode (press VOL_DOWN + HOME + PWR), wait until you are prompted to press VOL_UP, then press VOL_UP

Wait until the screen is loaded

Plug in the USB-Cable


sudo ./heimdall flash --RECOVERY ../../../recovery.img --no-reboot

I experienced that it does not always work; sometimes USB errors came up. For me the Heimdall master from github worked. It may be chance, or it may be that you have to wait long enough for the download mode to have really fully loaded before you plug the USB cable into the computer. Just try several times if it doesn’t work the first time, and always unplug the USB cable between tries.

Now, important! Samsung restores its own recovery if you don’t boot directly into recovery mode after flashing an alternative one.

So when the flashing progress bar is finished, shut off the phone by pressing PWR until it shuts off.

Now press VOL_UP + HOME + PWR until “Booting Recovery mode”, written in a tiny blue font at the top of the screen, appears.

Now inside the recovery mode loader

The TWRP loader has nice buttons; if you have to navigate with VOL_UP/VOL_DOWN it is probably still the original Samsung loader.

So, inside the TWRP loader:

  • First do a factory reset
  • Then choose install and browse through the directories (e.g. one level up, then external_storage or similar) until you find the zip files
  • Install cm-11-20150424-UNOFFICIAL-s3ve3g.zip
  • Install gapps-kk-20150412-minimal-edition-signed.zip
  • Reboot and check whether the camera works
  • If the camera does not work, boot into recovery mode again and install the Camera_fix.zip

Using Docker in Production


Linux containers have been around for quite some time, and docker has built a nice tool suite around the kernel features for process isolation (namespaces, cgroups, etc.). The isolation technology has been part of the kernel for about 8 years now, so it can probably be considered mature. Big distributions used in commercial environments, like Redhat and SUSE linux, officially support docker (their packaged versions of it) and provide their own base images (downloadable only in the subscriber portals). There are also companies running huge docker clouds in their daily production business.

We already used Docker to set up our build environment and create cheap test containers, but now the plan was to use it on some production machines as well.

I want to share some thoughts about docker in production, and hopefully others will share their experience in the comments. This article applies to the scenario of a bigger, traditional company. If you are part of a startup, the process may be much smoother, because there is less scepticism towards new technologies, but maybe also because security considerations are taken too lightly. This article also does not focus on companies with an obvious big gain from using throwaway containers (like e.g. iron.io).

Restrictions in test or local build environments vs. production environments

Compared to local or test environments, there are many more restrictions in production environments. In test and on their workstations, developers often have vast freedom of tools and access, to minimize impediments to the development workflow. But as soon as the software goes into production, it has to comply with the much more restrictive production rules to be accepted by the IT security or operations department.

Here is a short comparison (local or test environment vs. production environment):

  • internet access: few restrictions vs. strictly restricted access, mostly no access at all
  • package sources: little or no inspection vs. packages have to come from a trusted source and their content has to be traceable
  • technologies: freedom to choose arbitrarily vs. a specific, defined set of supported software/setups
  • monitoring: not required vs. mandatory
  • logging: local logging sufficient vs. log servers to consolidate logs are common
  • backup: often not needed vs. mandatory
  • security and performance (regarding configuration): less hard requirements vs. configuration has to be secure and performance-optimal
  • culture: developer-driven vs. operations-driven
  • security updates: not enforced vs. have to be installed ASAP
  • container lifecycle: run, delete, recreate and throw away containers as you like vs. stopping, deleting or recreating a container must be carefully planned into maintenance windows

Problems with default docker installation/workflow and mitigation

Docker makes it very easy to pull prebuilt and preconfigured images from the docker registry. These images allow you to set up software quickly and without in-depth knowledge of the software being used. When you are familiar with docker, you can set up a postgres database or a jenkins in minutes, in a quality sufficient for development or testing. In production environments, however, you have to ensure the safety of your customers’ data, and you have to use the existing infrastructure and processes for monitoring, logging, backup and even setting up the system.

For each production requirement, here is the default docker behaviour, a possible mitigation, and its consequence:

  • Servers must not access the internet. Docker by default wants to pull images from dockerhub. Mitigation: set up your own docker registry (e.g. Portus, see links below). Consequence: you cannot pull images from dockerhub anymore; you could import them 1:1 into your own registry, but that is also not advisable.
  • The operating system must be supported by a vendor. There are docker base images for every linux distribution, and depending on the gusto of the image creator, application images (e.g. jenkins) are built on different distributions. Mitigation: distributions offering commercial support (e.g. Redhat, SUSE) provide docker base images for their paying customers. Consequence: you have to rebuild all docker images using the officially supported base images; in most cases you will first have to adjust the Dockerfile of an application image to make it compatible with your base image.
  • Software has to be trustworthy, but you don’t know what’s inside an image. Mitigation: get the Dockerfile, understand what it does, and rebuild the image on your trusted base image. Consequence: more or less complex, depending on the application image you have to analyze and rebuild.
  • Monitoring. Mitigation: run a monitoring agent inside the container, or use host-based monitoring. Consequence: tailoring of the Dockerfile/monitoring is necessary.
  • Logfiles. Containers log to STDOUT by default. Mitigation: run a logging agent (e.g. rsyslogd) in the container, or use some mechanism on the host (e.g. the logspout container, https://github.com/gliderlabs/logspout). Consequence: you have to find a mechanism that works for your production environment.
  • Backup. Most times you don’t want to store data, to keep containers throwaway, but when you must (e.g. a database), you have to use a classical backup tool. Consequence: tailor the existing process for use with docker (not too difficult, but it has to be done).
  • Configuration. Images ship with the default configuration made by the image maintainer. Mitigation: adjust the configuration to your needs. Consequence: will probably take some time, so consider it in planning.
  • The technology has to be approved by the operations team. Docker is quite new; if the operations team in your company does not have experience with it, they will most definitely reject it. Mitigation: convince the operations team to use the new technology, build a small sample case, and take their objections seriously. Consequence: will probably take some time, so consider it in planning.
  • Security updates. Build a new, updated base image, then rebuild all application images, and also make sure updates for additional packages are received (normally automatic, by fetching the newest version from the package manager). As it is advisable in production anyway to have few distributions (or one) and controlled base images, it is easier to keep them up to date; that still involves rebuilding all images. With arbitrary base images from the net you will probably have a very hard job keeping them up to date. So plan the time you need for your update processes and the rollout on the machines.
  • Run, delete, recreate. To change ports, volumes, environment variables, etc. of your container, you have to bring it down and recreate it. That is no problem on dev/test, but it is in production: data may get lost through human error (accidental deletion of container volumes, unmapped container volumes, etc.). Do config changes in maintenance windows; use your high-availability setup (if you have one) to recreate one container at a time. Be careful not to destroy your data, and plan ahead. Give some thought to how you will handle such events, and to possible disaster recovery in case of data loss. Optimize your setup and documentation so that human error is less likely (e.g. be aware of the different storage possibilities of docker and the consequences of deleting a volume or an uncommitted container).

Problems we ran into


In production you normally encounter much more restrictive firewall rules (which is good 🙂 ) regarding pulling stuff from the internet or communication between servers. Consider pushing the docker packages into your local package repository, and think about a scenario where you can’t pull images from the central registry. Pulling images created by (potentially harmful) strangers into production isn’t a good idea.

Docker Hub

The central docker repository, and the paid services for private repositories, are part of docker’s business model, so the docker daemon is quite intertwined with Docker Hub.

So you may want to rely on some base images from docker hub, but only on a few hand-selected ones. There is no easy way to get rid of the docker hub central registry; you can mirror it, but it will pass all requests through. I have a problem with people being able to pull arbitrary images onto production servers. You may want to allow images from docker, nginx, or whatever big project, but not from everyone. Or you want to rebuild the images on your own.

In the links at the bottom you will find some tutorials on how to run your own registry. There is also Portus, a docker registry developed by SUSE.

The only solution to keep control over your images is to block traffic to the internet from the machines, set up your own registry, export the images you want to use from dockerhub and import them into your local registry. Then modify your Dockerfiles to not rely on base images from dockerhub, but on the ones from your own registry.

What is inside a container?

So you have a fully automated setup of servers with kickstart or VM image cloning, containing all your precious base config. You are running some enterprise linux (e.g. redhat, suse) and pay for support to comply with business requirements. The production network doesn’t have internet access, but connects to your local rpm mirror/repository (e.g. satellite).

And here comes docker. Suddenly you have a zoo of operating systems with unknown preconfiguration. You actually don’t know what you are running anymore. Of course that can be fixed by creating your own base image and only using that, but you have to consider this as well when using docker. It is probably only enforceable when you exclusively use your own local registry, with only your handcrafted base image available as a source.

That of course also means rewriting the prebuilt docker images from dockerhub if they are based on a different OS flavor than the one you are using.

Handling the zoo

Soon you will have a whole bunch of docker images and need some way of distributing the right container versions and startup commands across your infrastructure. You will also need a cleanup strategy to purge old images. Currently we use jenkins to roll out the images, but that, too, soon gets fiddly. For bigger setups I would use traditional configuration management (e.g. Salt, Puppet, Chef, Ansible) or some of the more advanced docker cloud tools.

Configuration and knowledge about the software

Docker allows you to easily use software you do not know well. This is a gain for development, as most developers don’t need to know how to tune a database or secure a webserver (assuming both run locally). But in production this suddenly matters a lot, so consider the time to tweak the configuration in your estimations for production use.


Radical devops philosophy says that developers prepare their software (e.g. a docker container) and run it in production. The admins build the tools around it and support them. Both teams work closely together, and all involved people are equally responsible for the systems.

That is a nice theory, but besides the idea of working closely together and supporting each other, I see some problems in practice. First, there is specialization: every member of the team has some special experience. Programmers can write software better than sysadmins; sysadmins know the infrastructure and the necessities of running a production environment better. If you simply throw these two roles together, it won’t be useful. Even people who have both skills will always see a problem from their current role’s point of view. It is just a matter of not having enough time: I cannot think about every eventuality of system administration AND develop good software. At some point you have to concentrate on one of the two.

Now if you use docker to let developers create containers and run them in production, the problems mentioned above won’t just disappear. You will probably end up with a bunch of hard-to-manage containers which do not fit into your overall production concept.

On call duty

In the world of sysadmins everyone is used to being on call, and makes sure the infrastructure is fit enough not to interfere too much with their private life. If real devops were practiced, programmers would suddenly have to do on-call support and would have to be able to fix any problems occurring. In my opinion that is vastly unrealistic.

Suggestion for teaming up

If you want to use docker and have decided that it is worth it, then I would suggest that the sysadmin team gives the devs some basic rules: e.g. create the base image for them and support them with adding monitoring, backup, logging, etc. Containers are built by devs during development, but are then reviewed by sysadmins before going into production.

I would really split that into different registries and use the classical test-int-prod environments. Test and int would share the same registry.

  • Test: Devs have all the freedom they need, but before moving to int they have to comply with the production standards
  • Int: Handover of the work (images, etc.) to the sysadmins; intense reviewing and testing
  • Prod: Separate registry. Logically linked to test/int by a version control system (e.g. versioning of the Dockerfiles), but totally independent
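The promotion step between the registries can then be a simple re-tag and push. Here is a minimal sketch of the naming scheme; all hostnames, image names, and versions are hypothetical examples, and the docker commands themselves are shown as comments since they need a running daemon:

```shell
# Derive the per-environment image names (hypothetical registries/names)
IMAGE="myapp"
VERSION="1.4.2"
INT_REGISTRY="registry-int.example.com"
PROD_REGISTRY="registry-prod.example.com"
echo "test/int image: ${INT_REGISTRY}/${IMAGE}:${VERSION}"
echo "prod image:     ${PROD_REGISTRY}/${IMAGE}:${VERSION}"

# After the sysadmin review, promotion to prod would be:
# docker tag  ${INT_REGISTRY}/${IMAGE}:${VERSION} ${PROD_REGISTRY}/${IMAGE}:${VERSION}
# docker push ${PROD_REGISTRY}/${IMAGE}:${VERSION}
```

Because the Dockerfiles are under version control, the prod registry stays independent while every image there can still be traced back to a reviewed state.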


During development docker makes your life much easier, but that does not mean it can be used in the same manner in production. If you are aware of the technical and social obstacles and have the time and management backing to overcome them, you can start introducing the next level of automation to your production environment. If a company does not even have a configuration management tool in use and the necessary administrative processes established, I personally would not consider using docker in production.

Also consider whether you really gain from using docker. iron.io is a very interesting example: they benefit hugely from docker, because their cloud service relies on locked-down throwaway environments with minimal overhead. In a more traditional company, where you run a number of servers under your own control with largely the same software all the time and already use a configuration management tool, the benefit is not so huge; the additional complexity may not be worth it and may harm your security and availability.

Some links

Some websites I explored during my research:

Support in distributions with commercial support

Running your own docker registry

Docker in production



Lenovo E540 Standby Problem on Ubuntu 14.04

UPDATE: Lenovo fixed the BIOS; after a BIOS update, standby works with USB 3.0 enabled. I used the geteltorito.pl method described in the thinkwiki: http://www.thinkwiki.org/wiki/BIOS_Upgrade


I was furious about a very annoying standby problem my new Lenovo laptop had. When closing the lid or choosing standby from the menu, I did not hear the disks and fans spin down; instead it kept running. When I then opened the lid, the backlight or something else shone, but the display stayed all black. The only way out was to keep the power button pressed for a hard shutdown. The only workaround for now seems to be to:

deactivate USB 3.0 in the BIOS

Not cool. But at least now you have the choice of which shitty situation you want to live with: slow USB, or no standby. For more detailed information, read here. It seems to be a Lenovo BIOS problem: https://bugzilla.kernel.org/show_bug.cgi?id=80351


Converting videos with ffmpeg to webm format under Ubuntu 14.04.

I just love ffmpeg, because it is so easy to use and scriptable.

Install FFMPEG on Ubuntu 14.04

sudo apt-add-repository ppa:jon-severinsson/ffmpeg
sudo apt-get update
sudo apt-get install ffmpeg

Convert a video to webm

ffmpeg -i video.avi -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis -q:a 6 -threads 4 video.webm

  • Adjust the video quality (target bitrate) with -b:v , e.g. for 700 kbit/s use -b:v 700k
  • NEVER omit the bitrate: by default a very low bitrate is used, which results in piss-poor quality
  • Adjust the audio bitrate using the quality indicator; -q:a 6 is about 100-128 kbit/s, which was perfect for me


ffmpeg has some brief and good tutorials on their site, definitely have a look at them:

Interlaced Video

To convert interlaced video, add the yadif filter to deinterlace before encoding.

ffmpeg -i video.mpg -vf yadif -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis -q:a 6 -threads 4 video.webm

A shell script

Encodes any video to webm with 1000 kbit/s average video bitrate and approx. 100-120 kbit/s audio.

Usage: ./encode2webm.sh foobar.avi

Result: foobar.webm



#!/bin/sh
ffmpeg -i "$1" -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis -q:a 6 -threads 4 "${1%.*}.webm"

Guide to limits.conf / ulimit / open file descriptors under linux

Why does linux have an open-file-limit?

The open-file limit exists to prevent users/processes from using up all resources on a machine. Every file descriptor uses a certain amount of RAM, and a malicious or malfunctioning program could bring down the whole server.

Systemd ignores /etc/security/limits.conf and limits.d !!!

Because this took me quite some time to debug, I mention it right at the beginning: see this and other posts. Systemd does not respect the limits set in /etc/security/limits.conf. You have to add e.g. “LimitNOFILE” to the unit file of the daemon.

[Unit]
Description=Some Daemon
After=syslog.target network.target

[Service]
LimitNOFILE=8192
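To verify which limit a running process actually ended up with, you can read its limits file under /proc; here demonstrated on the current shell itself:

```shell
# Every process exposes its effective limits in /proc/<pid>/limits;
# /proc/self refers to the process reading it (here: the shell/grep)
grep "Max open files" /proc/self/limits
```

For a daemon, substitute its pid, e.g. `grep "Max open files" /proc/$(pidof somedaemon)/limits`.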






What is an open file?

The lsof-manpage makes it clear:

An open file may be a regular file, a directory, a block special file,
a character special file, an executing text reference, a library, a
stream or a network file (Internet socket, NFS file or UNIX domain
socket).

It is important to know that network sockets are also open files, because in a high-performance web environment lots of these are opened frequently.

How is the limit enforced?

The kernel enforces the open-file limit through the setrlimit and getrlimit system calls (see man getrlimit).

Newer kernels support the prlimit call to get and set various limits on running processes

$> man prlimit
The prlimit() system call is available  since  Linux  2.6.36.   Library
support is available since glibc 2.13.

What is ulimit?

Mostly when people refer to ulimit they mean the bash builtin ‘ulimit’, which is used to set various limits in the bash context (not to be confused with the deprecated C routine ulimit(int cmd, long newlimit) from the system libraries). It can be used to set the open-file limit of the current bash.

The difference between soft and hard limits

The initial soft and hard limits for open files are set in /etc/security/limits.conf and enforced at login through the PAM module pam_limits.so. The user can then modify the soft and hard limits using ulimit or the C functions. The hard limit can never be raised by a regular user; root is the only user who can raise his own hard limit. The soft limit can be freely varied by the user, as long as it stays below the hard limit. The value that triggers the “24: too many open files” error is the soft limit. It is only “soft” in the sense that it can be freely set. A user can also lower his hard limit, but beware: he cannot raise it again (in this shell).

ulimit Mini-Howto

ulimit -n queries the current SOFT limit
ulimit -n [NUMBER] sets the hard and softlimit to the same value
ulimit -Sn queries the current SOFT limit
ulimit -Sn [NUMBER] sets the current soft limit
ulimit -Hn queries the current hard limit (that’s the maximum value you can set the soft limit to, if you are not root)
ulimit -Hn [NUMBER] sets the current hard limit
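The soft/hard mechanics can be demonstrated in a throwaway subshell, so the limits of your interactive shell stay untouched:

```shell
# Run in a child bash so the parent shell keeps its limits
bash -c '
  ulimit -n 1024     # set soft AND hard limit to 1024 (lowering is always allowed)
  ulimit -Sn 512     # lower only the soft limit
  echo "soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
'
```

This prints soft=512 hard=1024. Trying to raise the hard limit again afterwards in the same subshell would fail for a non-root user.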

Are there other limits?

There is also a system-wide open-file limit. This is the maximum number of open files the kernel will allow for all processes together.

$> man proc
              This file defines a system-wide limit  on  the  number  of  open
              files  for  all processes.  (See also setrlimit(2), which can be
              used by a process to set the per-process  limit,  RLIMIT_NOFILE,
              on  the  number of files it may open.)  If you get lots of error
              messages about running out of file handles, try increasing  this

              echo 100000 > /proc/sys/fs/file-max

              The  kernel constant NR_OPEN imposes an upper limit on the value
              that may be placed in file-max.

              If you  increase  /proc/sys/fs/file-max,  be  sure  to  increase
              /proc/sys/fs/inode-max   to   3-4   times   the   new  value  of
              /proc/sys/fs/file-max, or you will run out of inodes.

Note: /proc/sys/fs/inode-max (only present until Linux 2.2)
This file contains the maximum number of in-memory inodes. This
value should be 3-4 times larger than the value in file-max,
since stdin, stdout and network sockets also need an inode to
handle them. When you regularly run out of inodes, you need to
increase this value.

Starting with Linux 2.4, there is no longer a static limit on
the number of inodes, and this file is removed.

To query the maximum possible limit, have a look at the following (this is only informational; normally a much lower limit is sufficient):

$> cat /proc/sys/fs/nr_open

Change the system-wide open files limit

Append or change the following line in /etc/sysctl.conf

fs.file-max = 100000

(replace 100000 with the desired number)

Then apply the changes to the running system with:

$> sysctl -p

What does /proc/sys/fs/file-nr show?

$> man proc
              This (read-only)  file  gives  the  number  of  files  presently
              opened.  It contains three numbers: the number of allocated file
              handles; the number of free file handles; and the maximum number
              of file handles.  The kernel allocates file handles dynamically,
              but it doesn't free them again.   If  the  number  of  allocated
              files  is  close  to the maximum, you should consider increasing
              the maximum.  When the number of free  file  handles  is  large,
              you've  encountered a peak in your usage of file handles and you
              probably don't need to increase the maximum.

So according to this, the first number in /proc/sys/fs/file-nr is not the actual number of currently open files, but the number of file handles ever allocated; the second is the number of handles free for reuse. Thus allocated − free = number actually in use. This applies not only to physical files, but also to sockets.
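Reading the three fields is a one-liner in the shell:

```shell
# Fields of /proc/sys/fs/file-nr: allocated handles, free handles, system-wide maximum
read allocated free max < /proc/sys/fs/file-nr
echo "allocated=$allocated free=$free max=$max in_use=$((allocated - free))"
```

On kernels since 2.6 the free field is always zero, so allocated equals the number in use.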

From a newer manpage:

Before Linux 2.6,
the kernel allocated file handles dynamically, but it didn’t
free them again. Instead the free file handles were kept in a
list for reallocation; the “free file handles” value indicates
the size of that list. A large number of free file handles
indicates that there was a past peak in the usage of open file
handles. Since Linux 2.6, the kernel does deallocate freed file
handles, and the “free file handles” value is always zero.

$> cat /proc/sys/fs/file-nr 
512	0	36258
allocated   free    maximum

How is it possible to query the number of currently open file descriptors?

System wide

$> cat /proc/sys/fs/file-nr


lsof also lists lots of content which does not count towards the open-file limit (e.g. anonymous shared memory areas (= /dev/zero entries)). Querying the /proc filesystem seems to be the most reliable approach:

$> cd /proc/12345
$> find . 2>&1 | grep '/fd/' | grep -v 'No such file' | sed 's#task/.*/fd#fd#' | sort | uniq | wc -l

If you want to try lsof, use this (the -n prevents hostname lookups and makes lsof much faster when there are lots of open connections):

lsof -n -p 12345 | wc -l

You can also feed a list of pids, e.g. for php5-fpm, into lsof with:

lsof -n -p "$(pidof php5-fpm | tr ' ' ',')" | wc -l
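If you want a quick overview of which processes hold the most file descriptors, the /proc approach generalizes to a small loop (no extra tools needed; fd directories of other users’ processes are silently skipped):

```shell
# Rank processes by their open-fd count; unreadable fd dirs simply count as 0
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  echo "$n ${pid#/proc/}"
done | sort -rn | head -5
```

Run it as root to get accurate counts for all processes.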

Changing the ulimit for users

Edit the file /etc/security/limits.conf and append (or change) the following lines:

www-data soft nofile 8192
www-data hard nofile 8192

Set the soft limit equal to the hard limit, so you don’t have to raise it manually as the user.

It is also possible to set a wildcard:

* soft nofile 8192
* hard nofile 8192

For root the wildcard will not work; extra lines have to be added:

root soft nofile 8192
root hard nofile 8192

I set my precious limits and log out and in again, but they are not applied

As said before, the limits in /etc/security/limits.conf are applied by the PAM module pam_limits.so.
In the directory /etc/pam.d lie various files which manage the PAM settings for different commands.
If you don’t log into your account directly, but change into it using su or execute a command using sudo, then the special config for that program is loaded. Open the config and make sure the line loading pam_limits.so is
not commented out:

session    required   pam_limits.so

Save, and now the limits should be applied.

Program specific special cases


nginx has some special handling:

This is what applied to my Ubuntu Precise 12.04 test system; the init script seems to be buggy there.

  1. You can set the ulimit which nginx should use in /etc/default/nginx
  2. /etc/init.d/nginx restart does NOT apply the ulimit settings. The setting is only applied in the start section of the init script, so you have to run /etc/init.d/nginx stop; /etc/init.d/nginx start to apply the new limit

There is a better, distribution-independent way to set the worker open-files limit: using the config file.

Syntax:	worker_rlimit_nofile number;
Default:	—
Context:	main

Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes. Used to increase the limit without restarting the main process.
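Putting it together, a minimal nginx.conf fragment could look like this (the numbers are just examples; each connection of a worker consumes at least one file descriptor, so worker_connections should stay below worker_rlimit_nofile):

```
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}
```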

Unit-Tests with Fortran using pFUnit (supports MPI)

Real world example

A friend of mine is a meteorologist and wrote a radiative transfer model in Fortran. We finally integrated pFUnit into cmake (with auto-install!) and he wrote some first tests.

Have a look here for the code:


(outdated) Setting up pfUnit

This tutorial is a bit outdated and not too useful; better to just have a look at the real-world example above. It even contains cmake code to download and build pFUnit!

Minimum requirements

The current master can only be built with gcc versions that were unreleased at the time of writing (4.8.3 or 4.9). The recommended solution is to use pFUnit 2.1.x, which I will do in this tutorial.

I used gcc 4.8.1.

Getting the framework

git clone git://pfunit.git.sourceforge.net/gitroot/pfunit/pfunit pFUnit
cd pFUnit
git checkout origin/pfunit_2.1.0

Building and testing pfUnit

make tests MPI=YES
make install INSTALL_DIR=/opt/pfunit

Testing if the setup and installation succeeded

In the git main directory do:

cd Examples/MPI_Halo
export PFUNIT=/opt/pfunit
export MPIF90=mpif90
make -C /somepath/pFUnit/Examples/MPI_Halo/src SUT
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make[1]: Nothing to be done for `SUT'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/src'
make -C /somepath/pFUnit/Examples/MPI_Halo/tests tests
make[1]: Entering directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
make[1]: Nothing to be done for `tests'.
make[1]: Leaving directory `/somepath/pFUnit/Examples/MPI_Halo/tests'
mpif90 -o tests.x -I/home/jonas/data/programs/pfunit/mod -I/home/jonas/data/programs/pfunit/include -Itests /home/jonas/data/programs/pfunit/include/driver.F90 /somepath/pFUnit/Examples/MPI_Halo/tests/*.o /somepath/pFUnit/Examples/MPI_Halo/src/*.o -L/home/jonas/data/programs/pfunit/lib -lpfunit -DUSE_MPI 
mpirun -np 4 ./tests.x
Time:         0.002 seconds
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=0)
 Failure in: testBrokenHalo[npes=3]
   Location: []
Intentional broken test. (PE=2)
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <3> (PE=0)
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <2> (PE=1)
 Failure in: fails[npes=3]
   Location: [beforeAfter.pf:33]
intentionally failing test expected: <0> but found: <1> (PE=2)
Tests run: 10, Failures: 2, Errors: 0

The output should look like the one above. There are failures in the tests, but they are intentional. If there are compile errors, go fix them.

More Examples

More examples can be found in the Examples directory. They are all nice, small and self-explanatory.

Common errors

Sometimes, if you forget to export the compiler variables:

export F90=gfortran
export MPIF90=mpif90

You will receive these errors:

make[1]: c: Command not found
make[1]: o: Command not found

Secure wiping your harddisk

This is a little FAQ about securely wiping your harddisk.

Why is deleting the files not enough (e.g. rm -rf *)?

Because this only removes the meta-data needed to find the data; the data itself is still there and could be recovered by scanning the disk. Imagine a book where you rip out the table of contents: you can’t find a chapter by looking up its page number, but you can flick through the whole book and stop when you find what you are looking for.

Is filling the disk with zeros enough, or do I have to use random numbers? How often do I have to rewrite my harddisk?

Magnetic Discs

The amount of bullshit, half-truths and personal opinions out there is amazing. When you try to get to scientific research, results are thin. I found a paper where they did some pretty intense tests, and the results are surprising (surprising in contrast to all the opinions out there).

Overwriting Hard Drive Data: The Great Wiping Controversy | Craig Wright, Dave Kleiman, and Shyaam Sundhar R.S.

The short answer is: one complete pass of zeros securely erases your harddrive, in such a way that recovery is not possible even with special tools such as an electron microscope.

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

Zero-filling does not work for SSDs. You have to use the Secure Erase feature every SSD has. Have a look here:

What tools should I use?

Magnetic Discs

The maintenance tools of all harddisk vendors have an option to zero-fill the harddisk. Under linux you can use the tool dd to zero-fill a disk:

 dd if=/dev/zero of=/dev/sdX bs=4096

To query the dd status you can send the SIGUSR1 signal to the process; e.g. this sends the signal to all running dd processes:

#> kill -SIGUSR1 $(pidof dd)
320+0 records in
320+0 records out
335544320 bytes (336 MB) copied, 18.5097 s, 18.1 MB/s
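If you want to verify afterwards that the target really contains only zeros, you can count the non-NUL bytes. The sketch below runs against a small temp file; for a real wipe you would read from /dev/sdX instead (and be very sure which disk that is):

```shell
# Demo on a temp file instead of a real device
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=4096 count=256 status=none
# tr deletes NUL bytes, so a fully zeroed target yields a count of 0
nonzero=$(tr -d '\0' < "$tmp" | wc -c)
echo "non-zero bytes: $nonzero"
rm -f "$tmp"
```

Anything other than 0 means the wipe did not cover the whole target.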

SSDs and Hybrid-Disks (SSD-Cache + Magnetic)

Zero-filling does not work for SSDs. You have to use the Secure Erase feature every SSD has. Have a look here:

I only want to overwrite one partition, but my system freezes and I can’t work anymore during the wipe.

This limits the write speed a bit, but you can keep working during the wipe (which of course only makes sense if you are not wiping the whole disk).

echo 15000000 > /proc/sys/vm/dirty_bytes

For all the background on the dirty-pages flush, have a look here:
