Securing ejabberd on Debian Wheezy (7.0): Bind epmd to localhost

Ejabberd is a nice and (in theory) easy-to-set-up Jabber server. During setup, however, I came across some WTFs I want to share.

What is epmd?
epmd is a small name server used by Erlang programs when establishing distributed Erlang communications. ejabberd needs epmd to use ejabberdctl and also when clustering ejabberd nodes. If ejabberd is stopped, and there aren’t any other Erlang programs running in the system, you can safely stop epmd.

  • epmd is started along with ejabberd, but as other erlang programs might use it, it keeps running even if ejabberd is stopped
  • epmd’s default setup is to listen on ALL INTERFACES

For me this seems to be an undesirable default behaviour of the Debian package, but it can easily be fixed:

Bind epmd to localhost

Add the following line to the end of /etc/default/ejabberd to make epmd listen on localhost only. The "export" is important; without it, it won't work.

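Assuming the standard Erlang environment variable for this purpose (ERL_EPMD_ADDRESS), the line would be:

```shell
# /etc/default/ejabberd
# bind epmd to the loopback interface only; the "export" is required,
# otherwise epmd never sees the variable
export ERL_EPMD_ADDRESS=127.0.0.1
```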

ejabberd looks up the hostname and tries to connect to the resulting IP. If you have a DNS hostname, it normally does not resolve to So you have to add the shortname and the fqdn of your server to your
local /etc/hosts file.

Find the shortname and fqdn:

# shortname
$> hostname -s
# fqdn
$> hostname -f

Now add a line to /etc/hosts mapping both names to the loopback address, e.g. "127.0.0.1 foo" for a shortname foo.

Stop epmd with ejabberd

Add the following two lines at the end of the stop() function in /etc/init.d/ejabberd:

stop()
{
        ...
        echo -e "\nStopping epmd: "
        epmd -kill
}

Boosting Audio Output under Ubuntu Linux

I often had the problem that I wanted to watch a movie or listen to an audio file with background noise, wanted to turn up the volume, and it was already at 100%. I thought it should be possible to push the signal beyond 100% and decide for myself whether it clips or distorts. For people using pulseaudio there is a very easy solution.

Just install the tool paman

sudo apt-get install paman

Now you can boost the audio to 500% volume. For me usually 150% was enough ;).
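paman is a GUI tool; if you prefer the command line, pactl can apply the same boost. PulseAudio volumes are expressed in units of 65536 = 100%, and sink index 0 is an assumption here (check `pactl list sinks` for yours):

```shell
# 150% of the 100% base volume 65536 is 98304
pactl set-sink-volume 0 98304
```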



Encrypted off-site backup with ecryptfs

I was looking for a method to back up my data encrypted. Of course plenty of possibilities exist, but most of them either encrypt a complete container or partition, or seemed complicated to set up. I did not want container or partition encryption: if the media is corrupted or something goes wrong during a network transfer, all my data might become inaccessible. With file-based encryption I carry almost the same risk as without encryption; even if I lose some files to corruption, I can still decipher the rest of the data.

Finally I chose eCryptfs because it is a file-based encryption that also encrypts the filenames, and it is very easy to set up and use. On the homepage it advertises itself with "You may think of eCryptfs as a sort of 'gnupg as a filesystem'", and that's basically what I was looking for. It stores all meta information in the file itself, so you can recover a file as long as you have the file and the encryption parameters (which are few and easy to back up).

So let's get started. I encrypted a test file on Ubuntu 12.04.1 and decrypted it successfully under Debian 7.0.

First you have to install the tools, which is very easy using apt (the same on both Ubuntu and Debian):

sudo apt-get install ecryptfs-utils

Then create a new directory (which will hold the encrypted data), mount it, and enter some parameters needed by ecryptfs:

mount -t ecryptfs /home/ecrypttest/encrypted/ /home/ecrypttest/decrypted/
Select cipher: 
 1) aes: blocksize = 16; min keysize = 16; max keysize = 32 (loaded)
 2) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 (not loaded)
 3) cast6: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
 4) cast5: blocksize = 8; min keysize = 5; max keysize = 16 (not loaded)
Selection [aes]: 
Select key bytes: 
 1) 16
 2) 32
 3) 24
Selection [16]: 
Enable plaintext passthrough (y/n) [n]: n
Enable filename encryption (y/n) [n]: y
Filename Encryption Key (FNEK) Signature [9702fa8eae80f468]: 
Attempting to mount with the following options:
Mounted eCryptfs

The filename encryption key (FNEK) signature will be created for you and will differ from mine. Just copy and paste the parameters into a text file; we will need them later for deciphering.

Now enter the directory and create a test file:

cd /home/ecrypttest/decrypted/
echo "hello ecryptfs" > ecrypttest.txt
cat ecrypttest.txt
hello ecryptfs

If everything is fine, unmount the encrypted filesystem:

cd ..
umount /home/ecrypttest/decrypted

Now copy the file to your remote computer and try to recover it there. Of course you can recover your file anywhere you want, including the computer you encrypted it on. This is just to prove that it works on another box without copying anything other than the file and the mount parameters.

scp /home/ecrypttest/encrypted/ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59--- user@remotehost:/tmp/ecrypt/   # user@remotehost is a placeholder

Log into your remote computer and verify the file is there. Then mount the folder in decrypted mode. You need the parameters from above, from when you created the first mount; if you used the defaults it is basically only the FNEK signature.

ls -lah /tmp/ecrypt/*
-rw-r--r-- 1 root       root        12K Aug  4 23:04 ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59---

cd /tmp
mount -t ecryptfs /tmp/ecrypt/ /tmp/decrypt/ -o ecryptfs_unlink_sigs,ecryptfs_fnek_sig=9702fa8eae80f468,ecryptfs_key_bytes=16,ecryptfs_cipher=aes,ecryptfs_sig=9702fa8eae80f468,ecryptfs_passthrough=n
Attempting to mount with the following options:
Mounted eCryptfs
cd /tmp/decrypt
cat ecrypttest.txt
hello ecryptfs

Voilà, everything worked fine. Now unmount the encrypted directory and you can safely copy your encrypted data wherever you want.


Importing tpc-h testdata into mongodb

As written in a former post, TPC-H offers an easy possibility to generate various amounts of test data. Download dbgen from the TPC website and compile it:
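With the stock dbgen sources, compiling usually amounts to copying `makefile.suite`, setting CC, DATABASE, MACHINE and WORKLOAD in it, and running make (details vary between dbgen versions):

```shell
# in the dbgen source directory, after editing makefile.suite
make -f makefile.suite
```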

Now run:

./dbgen -v -s 0.1

This should leave you with some *.tbl files (PIPE-separated csv files). Now you can use my scripts to convert them into JSON and import them into mongodb.
I already packed some generated files into the archive and added the header, so you don't have to generate the tbl files yourself. You only have to adjust the script so it loads into the correct database (if "test" is not OK for you).

If you use your own generated tbl files you have to run:


tar -xjvvf mongodb_tpch.tar.bz2
cd mongodb_tpch

The default script imports the data into the db "test" and into collections named like the TPC-H tables.


Importing large csv-files into mongodb

I wanted to import some dummy data into MongoDB to test the aggregation functions. I thought a nice source would be the TPC-H test data, which can be generated in arbitrary volumes from 1 GB to 100 GB. You can download the data generation kit from the TPC website.

In the generated csv files the header is missing, but you can find the column names in the specification PDF. For the customers table it is:

custkey|name|address|nationkey|phone|acctbal|mktsegment|comment
The mongodb import possibilities are very limited. Basically you can only import COMMA-separated (or TAB-separated) values, and the import also fails if the data itself contains commas. So I wrote a little Python script which converts CSV data to the mongodb import JSON format. The first line of the csv file has to contain the column names. In the following lines I prepare the TPC-H file with headers, convert it to JSON and then import it into my mongodb. mongodb uses a special JSON format: one document per line, without separating commas or square brackets. You can also import JSON arrays, but their size is very limited.

echo "custkey|name|address|nationkey|phone|acctbal|mktsegment|comment" > header_customer.tbl
cat header_customer.tbl customer.tbl > customer_with_header.tbl
./ -c customer_with_header.tbl -j customer.json -d '|'   # the converter is the Python script below; its filename was lost
mongoimport --db test --collection customer --file customer.json

For a csv file with 150,000 lines the conversion takes about 3 seconds.

Converting CSV-Files to Mongo-DB JSON format

import csv
from optparse import OptionParser

# converts an array of csv columns to a mongodb json line
def convert_csv_to_json(csv_line, csv_headings):
    json_elements = []
    for index, heading in enumerate(csv_headings):
        json_elements.append(heading + ": \"" + unicode(csv_line[index], 'UTF-8') + "\"")
    line = "{ " + ', '.join(json_elements) + " }"
    return line

# parsing the commandline options
parser = OptionParser(description="parses a csv-file and converts it to mongodb json format. The csv file has to have the column names in the first line.")
parser.add_option("-c", "--csvfile", dest="csvfile", action="store", help="input csvfile")
parser.add_option("-j", "--jsonfile", dest="jsonfile", action="store", help="json output file")
parser.add_option("-d", "--delimiter", dest="delimiter", action="store", help="csv delimiter")
(options, args) = parser.parse_args()

# parsing and converting the csvfile line by line
csvreader = csv.reader(open(options.csvfile, 'rb'), delimiter=options.delimiter)
column_headings =
jsonfile = open(options.jsonfile, 'wb')
while True:
    try:
        csv_current_line =
        json_current_line = convert_csv_to_json(csv_current_line, column_headings)
        print >>jsonfile, json_current_line
    except csv.Error as e:
        print "Error parsing csv: %s" % e
    except StopIteration:
        print "=== Finished ==="
        break
jsonfile.close()

Fix sluggish mouse in Ubuntu 12.04 LTS

For some time now I have had the problem with my Dell Latitude E6510 laptop that when I plug in a USB mouse, the mouse is really slow and sluggish. Usually a reboot fixes this, but that is very inconvenient. Today I did some googling again and found at least a workaround: restarting the USB services without rebooting, which usually fixes the mouse.

Find the device IDs of your USB hubs with lspci:

lspci | grep -i usb
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

I wrote this little script, but of course you can also execute the commands directly on the command line. In that case make sure you have another keyboard besides the one connected via USB, because after the unbind it will not work until the rebind. (If you execute the commands via a script, or in one line separated with ;, there should be no problem, since the rebind is triggered directly after the unbind without further keyboard involvement.)

Switch the device numbers according to your lspci listing:

echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/bind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/unbind 
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/bind

Ubuntu Upgrade to 12.04 LTS -> Libreoffice not working anymore

After an update session of several hours, from Ubuntu 11.04 over 11.10 to 12.04 LTS, I wanted to start using LibreOffice, but it terminated right after start:

$> loimpress
terminate called after throwing an instance of 'com::sun::star::uno::RuntimeException'

As root it started without problems. After some googling and looking at the gdb trace I found the solution to my problem: something went wrong in the migration of the previous version's config files. So I just deleted them. It is not very elegant, but it worked for me, and since I had not made any special settings in LibreOffice, it was not painful.

Caution! You will lose all LibreOffice settings with this method.

For me the important part was deleting the .ure directory; after that it worked.

$> cd ~
$> sudo rm -rf .libreoffice
$> sudo rm -rf
$> sudo rm -rf .config/libreoffice
$> sudo rm -rf .ure
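A more cautious variant (my addition, not part of the original fix) moves the directories aside instead of deleting them, so they can be restored if something else breaks:

```shell
# reversible: rename instead of delete
cd ~
mkdir -p libreoffice-config-backup
mv .libreoffice .ure libreoffice-config-backup/ 2>/dev/null || true
mv .config/libreoffice libreoffice-config-backup/ 2>/dev/null || true
```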

Ubuntu 12.04 Gnome Classic Panel Right-Click does not work

Since I was looking for this for a really long time, I am reposting it:

With the new GNOME, in a classic session you have to press META + ALT + RightClick to access the panel menu. In my case META is "Alt Gr". So try this:

  • ALT + RightClick  (if it doesn’t work try next)
  • Alt Gr + Alt + RightClick

Informatica 9.1 Installation could not start _AdminConsole

Installing Informatica with the installer was straightforward, but for some strange reason the admin console could not be started. Here is the output I found in the logfile (German for "Process for service _AdminConsole could not be started"):

 2012-08-08 12:17:45,294 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.
 2012-08-08 12:27:41,524 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.
 2012-08-08 12:42:07,275 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.

This is not very helpful, but in the following logfile I found a clue:


2012-08-08 11:40:45,103 ERROR [org.apache.catalina.core.ContainerBase.[_AdminConsole].[localhost].[/administrator]] Exception sending context initialized event to listener instance of class
java.lang.RuntimeException: Exception parsing chain config resource ‘/WEB-INF/chain-config.xml’: /..INFA_HOME../services/AdministratorConsole/administrator/WEB-INF/chain-config.xml (Too many open files)
at org.apache.commons.chain.web.ChainResources.parseWebResources(
at org.apache.commons.chain.web.ChainListener.contextInitialized(
at org.apache.catalina.core.StandardContext.listenerStart(
at org.apache.catalina.core.StandardContext.start(
at org.apache.catalina.core.StandardContext.reload(
at org.apache.catalina.loader.WebappLoader.backgroundProcess(
at org.apache.catalina.core.ContainerBase.backgroundProcess(
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(
at org.apache.catalina.core.ContainerBase$

So to fix the problem you have to increase the open file descriptor limits.

#$ sysctl -w fs.file-max=100000

#$ vi /etc/sysctl.conf

add the line:

fs.file-max = 100000

#$ sysctl -p

#$ vi /etc/security/limits.conf

Add the lines for your informatica user (e.g. informatica)
informatica soft nofile 4096
informatica hard nofile 10240

or if you are not sure for all users:

* soft nofile 4096
* hard nofile 10240

Restart the server and it should work.
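To check that the new limits are actually in effect, log in again as the informatica user and inspect them:

```shell
# system-wide file handle maximum
sysctl fs.file-max
# soft and hard per-process limits of the current shell
ulimit -Sn
ulimit -Hn
```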

Here is a little tip on how I found the interesting logfiles in the Informatica home directory.
This lists all files ending in ".log" together with their line count:

find . -name "*.log" | xargs wc -l
 0 ./servicesFramework.log
 0 ./isp/bin/servicesFramework.log
 0 ./services/AdministratorConsole/monitoring.log
 15 ./services/AdministratorConsole/administrator.log
 0 ./server/servicesFramework.log
 0 ./tomcat/webapps/csm/output/csm.log
 367 ./tomcat/logs/node.log
 264 ./tomcat/logs/exceptions.log
 0 ./tomcat/temp/_AdminConsole/logs/host-manager.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/manager.2012-08-08.log
 55459 ./tomcat/temp/_AdminConsole/logs/catalina.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/admin.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/localhost.2012-08-08.log
 622 ./tomcat/bin/ispLogs.log
 676 ./tomcat/bin/servicesFramework.log
 0 ./tomcat/bin/infa_jsf.log
 402 ./Informatica_9.1.0_Services_HotFix4.log
 297 ./Informatica_9.1.0_HotFix4_Services_InstallLog.log
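A small extension of the same one-liner sorts the busiest logs to the top:

```shell
# line count per logfile, largest first
find . -name "*.log" | xargs wc -l | sort -rn | head
```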

Glassfish Directory Deployment (explode Ear)

Recently I was struggling with Glassfish directory deployment. Actually it is quite easy:

  • unzip the ear file (e.g. example.ear) to a directory without the .ear suffix
  • then go into this directory and unzip all war and jar files into directories named *_jar and *_war (only on this directory level; don't touch the files in /lib)
  • now copy the folder into your domain's autodeploy folder

For more convenience, use this script. I think it is self-explanatory; it takes the ear file (e.g. example.ear) as $EAR.

EAR=example.ear
EARDIR=${EAR%.ear}
unzip $EAR -d $EARDIR
cd $EARDIR
for ARCHIVE in $(ls -1 *.jar); do unzip $ARCHIVE -d ${ARCHIVE%.jar}_jar && rm $ARCHIVE; done
for ARCHIVE in $(ls -1 *.war); do unzip $ARCHIVE -d ${ARCHIVE%.war}_war && rm $ARCHIVE; done