Boosting Audio Output under Ubuntu Linux

I often had the problem that I wanted to watch a movie or listen to an audio file with background noise, wanted to turn up the volume, and it was already at 100%. I thought it should be possible to turn up the signal beyond 100% and decide for myself whether it is clipping or distorted. And there is a very easy solution for people using PulseAudio.

Just install the tool paman

sudo apt-get install paman

Now you can boost the audio up to 500% volume. For me, 150% was usually enough ;).
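
If you prefer the command line, recent PulseAudio versions can do the same with pactl; a minimal sketch (the @DEFAULT_SINK@ shorthand and the percentage syntax may not exist on older releases, in that case use the sink index shown by pactl list):

pactl set-sink-volume @DEFAULT_SINK@ 150%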

Encrypted off-site backup with ecryptfs

I was looking for a method to back up my data in encrypted form. Of course there are plenty of possibilities, but most of them either encrypt a container or a complete partition, or seemed complicated to set up. I did not want container or partition encryption because I feared that if the media gets corrupted or something goes wrong during a network transfer, all my data might become inaccessible. With file-based encryption I run almost the same risk as without encryption: even if I lose some files to corruption, I can still decipher the rest of the data.

Finally I chose eCryptfs because it is a file-based encryption that also encrypts the filenames, and it is very easy to set up and use. The homepage advertises it with “You may think of eCryptfs as a sort of ‘gnupg as a filesystem’”, and that is basically what I was looking for. It stores all meta information in the file itself, so you can recover a file as long as you have the file and the encryption parameters (which are few and easy to back up).

So let's get started. I ciphered a test file on Ubuntu 12.04.1 and deciphered it successfully under Debian 7.0.

First you have to install the tools, which is very easy using apt (the same on both Ubuntu and Debian):

sudo apt-get install ecryptfs-utils

Then create two directories, one for the encrypted files and one where the decrypted view will be mounted, and enter the parameters needed by ecryptfs:

mount -t ecryptfs /home/ecrypttest/encrypted/ /home/ecrypttest/decrypted/
Passphrase: 
Select cipher: 
 1) aes: blocksize = 16; min keysize = 16; max keysize = 32 (loaded)
 2) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 (not loaded)
 3) cast6: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
 4) cast5: blocksize = 8; min keysize = 5; max keysize = 16 (not loaded)
Selection [aes]: 
Select key bytes: 
 1) 16
 2) 32
 3) 24
Selection [16]: 
Enable plaintext passthrough (y/n) [n]: n
Enable filename encryption (y/n) [n]: y
Filename Encryption Key (FNEK) Signature [9702fa8eae80f468]: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs

The filename encryption key (FNEK) signature will be created for you and will be different from mine. Just copy and paste the parameters into a text file; we will need them later for deciphering.
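
One way to keep them is a small parameter file next to the backup; a sketch (the signatures are the ones from my run above, yours will differ):

cat > ~/ecryptfs-params.txt <<'EOF'
ecryptfs_unlink_sigs
ecryptfs_fnek_sig=9702fa8eae80f468
ecryptfs_key_bytes=16
ecryptfs_cipher=aes
ecryptfs_sig=9702fa8eae80f468
EOF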

Now enter the directory and create a test file:

cd /home/ecrypttest/decrypted/
echo "hello ecryptfs" > ecrypttest.txt
cat ecrypttest.txt
hello ecryptfs

If everything is fine, unmount the encrypted filesystem:

cd ..
umount /home/ecrypttest/decrypted
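
If you now look into the encrypted directory, you will see the test file under its encrypted name (the exact name depends on your FNEK):

ls /home/ecrypttest/encrypted/
ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59---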

Now copy the file to your remote computer to try to recover it there. Of course you can recover your file anywhere you want, even on the same computer where you encrypted it. This is just to prove that it works on another box without copying anything other than the file and the mount parameters.

scp /home/ecrypttest/encrypted/ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59--- root@yourremotehost.com:/tmp/ecrypt

Log into your remote computer and verify that the file is there. Then mount the folder in decrypted mode. You need the parameters from above, from when you created the first mount; if you used the defaults for the rest, it is basically only the FNEK signature.

ls -lah /tmp/ecrypt/*
-rw-r--r-- 1 root       root        12K Aug  4 23:04 ECRYPTFS_FNEK_ENCRYPTED.FWaL-jeCfc1oO-TGS5G.F.7YgZpNwbodTNkQxRlu6HylnEGw7lTdtfV59---

cd /tmp
mount -t ecryptfs /tmp/ecrypt/ /tmp/decrypt/ -o ecryptfs_unlink_sigs,ecryptfs_fnek_sig=9702fa8eae80f468,ecryptfs_key_bytes=16,ecryptfs_cipher=aes,ecryptfs_sig=9702fa8eae80f468,ecryptfs_passthrough=n
Passphrase: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=9702fa8eae80f468
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=9702fa8eae80f468
Mounted eCryptfs
cd /tmp/decrypt
cat ecrypttest.txt
hello ecryptfs

Voilà, everything worked fine. Now unmount the encrypted directory, and you can safely copy your encrypted data wherever you want.
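
For the actual off-site backup you can then sync the encrypted directory with any standard tool, for example rsync (host and target path are just placeholders):

rsync -av /home/ecrypttest/encrypted/ root@yourremotehost.com:/backup/ecrypt/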

Importing TPC-H test data into MongoDB

As written in a former post, TPC-H offers an easy possibility to generate various amounts of test data. Download dbgen from this website and compile it: http://www.tpc.org/tpch/

Now run

./dbgen -v -s 0.1

This should leave you with some *.tbl files (PIPE-separated CSV files). Now you can use my scripts to convert them into JSON and import them into MongoDB.
I already packed some generated files into the archive and added the headers, so you don't have to generate the tbl files yourself. You only have to adjust the load_into_mongodb.sh script so that it loads into the correct database (if test is not OK for you).

If you use your own generated tbl files, you have to run create_mongodb_headers.sh first.

mongodb_tpch.tar.bz2

tar -xjvvf mongodb_tpch.tar.bz2
cd mongodb_tpch
./convert_to_json.sh
./load_into_mongodb.sh

The default script imports the data into the db “test” and into collections named like the TPC-H tables.
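
To check that the import worked, you can count the documents in one of the collections; for example in the mongo shell (collection name taken from the TPC-H customer table):

mongo test --eval 'print(db.customer.count())'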

Importing large CSV files into MongoDB

I wanted to import some dummy data into MongoDB to test the aggregation functions. I thought a nice source would be the TPC-H test data generator, which can produce arbitrary volumes of data from 1 GB to 100 GB. You can download the data generation kit from the website: http://www.tpc.org/tpch/

In the generated CSV files the header is missing, but you can find the column names in the PDF. For the customer table it is:

custkey|name|address|nationkey|phone|acctbal|mktsegment|comment

The MongoDB import possibilities are very limited. Basically you can only import comma-separated (or tab-separated) values, and if the lines contain commas inside the data, the import fails as well. So I wrote a little Python script that converts CSV data to the MongoDB import JSON format; the first line of the CSV file has to contain the column names. In the following lines I prepare the TPC-H file with headers, convert it to JSON and then import it into my MongoDB. mongoimport expects a special JSON format: one document per line, without separating commas and square brackets. You can also import JSON arrays, but their size is very limited.

echo "custkey|name|address|nationkey|phone|acctbal|mktsegment|comment" > header_customer.tbl
cat header_customer.tbl customer.tbl > customer_with_header.tbl
./csv2mongodbjson.py -c customer_with_header.tbl -j customer.json -d '|'
mongoimport --db test --collection customer --file customer.json

For a CSV file with 150,000 lines the conversion takes about 3 seconds.
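
For illustration, a converted line in customer.json then looks like this (the field values here are made up):

head -1 customer.json
{ custkey: "1", name: "Customer#000000001", address: "some street 1", nationkey: "15", phone: "25-989-741-2988", acctbal: "711.56", mktsegment: "BUILDING", comment: "a comment" }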

Converting CSV files to MongoDB JSON format

csv2mongodbjson.py

#!/usr/bin/python
import csv
from optparse import OptionParser

# converts an array of csv columns to a mongodb json line
def convert_csv_to_json(csv_line, csv_headings):
    json_elements = []
    for index, heading in enumerate(csv_headings):
        json_elements.append(heading + ": \"" + unicode(csv_line[index], 'UTF-8') + "\"")
    return "{ " + ', '.join(json_elements) + " }"

# parsing the commandline options
parser = OptionParser(description="parses a csv-file and converts it to mongodb json format. The csv file has to have the column names in the first line.")
parser.add_option("-c", "--csvfile", dest="csvfile", action="store", help="input csv file")
parser.add_option("-j", "--jsonfile", dest="jsonfile", action="store", help="json output file")
parser.add_option("-d", "--delimiter", dest="delimiter", action="store", help="csv delimiter")
(options, args) = parser.parse_args()

# read the header line, then convert the remaining lines one by one
csvreader = csv.reader(open(options.csvfile, 'rb'), delimiter=options.delimiter)
column_headings = csvreader.next()
jsonfile = open(options.jsonfile, 'wb')

while True:
    try:
        csv_current_line = csvreader.next()
        json_current_line = convert_csv_to_json(csv_current_line, column_headings)
        print >>jsonfile, json_current_line
    except csv.Error as e:
        print "Error parsing csv: %s" % e
    except StopIteration:
        print "=== Finished ==="
        break

jsonfile.close()

Fix sluggish mouse in Ubuntu 12.04 LTS

For some time now I have had the problem with my Dell Latitude E6510 laptop that when I plug in a USB mouse, the mouse is really slow and sluggish. Usually a reboot fixes this, but that is very inconvenient. Today I tried some googling again and found at least a workaround: restarting the USB services without rebooting. This usually fixes the mouse.

Find the device IDs of your USB hubs with lspci:

lspci | grep -i usb
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

I wrote this little script, but of course you can also execute the commands directly on the command line. In that case, make sure you have another keyboard besides the one connected via USB, because after the unbind it will not work anymore until the rebind. (If you execute the commands from a script, or in one line separated with ;, it should be no problem, as the rebind is triggered directly after the unbind without further keyboard involvement.)

Adjust the device numbers according to your lspci listing:

#!/bin/bash
# unbind and rebind both EHCI USB controllers to reset the USB subsystem
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/unbind
echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/bind
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/unbind
echo -n '0000:00:1d.0' > /sys/bus/pci/drivers/ehci_hcd/bind
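
Note that the script has to run as root; a plain sudo in front of the echo does not help, because the redirection into /sys is performed by your unprivileged shell. Two variants that do work:

sudo sh -c "echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci_hcd/unbind"
echo -n '0000:00:1a.0' | sudo tee /sys/bus/pci/drivers/ehci_hcd/unbind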

Ubuntu Upgrade to 12.04 LTS -> LibreOffice not working anymore

After an update session of several hours from Ubuntu 11.04 via 11.10 to 12.04 LTS I wanted to start using LibreOffice, but it terminated right after starting:

$> loimpress
terminate called after throwing an instance of 'com::sun::star::uno::RuntimeException'

As root it started without problems. After some googling and a look at the gdb trace I found the solution to my problem: something went wrong with the migration of the config files from the previous version. So I just deleted them. It is not very elegant, but it worked for me, and since I had not made any special settings in LibreOffice it was not painful.

Caution! You will lose all LibreOffice settings with this method.

For me the important part was deleting the .ure directory; after that it worked.

$> cd ~
$> sudo rm -rf .libreoffice
$> sudo rm -rf .openoffice.org
$> sudo rm -rf .config/libreoffice
$> sudo rm -rf .ure
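
If you would rather keep the option to restore your settings, you can move the directories aside instead of deleting them (same idea, just reversible):

$> mv .libreoffice .libreoffice.bak
$> mv .openoffice.org .openoffice.org.bak
$> mv .config/libreoffice .config/libreoffice.bak
$> mv .ure .ure.bak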

Ubuntu 12.04 Gnome Classic Panel Right-Click does not work

As I was looking for this for a really long time, I am reposting it:

http://askubuntu.com/questions/66414/how-to-add-panel-applets-to-classic-gnome-panel

With the new GNOME and a classic session you have to press META + ALT + RightClick to access the panel menu. In my case META is “Alt Gr”. So try this:

  • ALT + RightClick  (if it doesn’t work try next)
  • Alt Gr + Alt + RightClick

Informatica 9.1 Installation could not start _AdminConsole

Installing Informatica with the installer was straightforward, but for some strange reason the admin console could not be started. Here is the output I found in the logfile:

INFA_HOME/tomcat/logs/node.log
 2012-08-08 12:17:45,294 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.
 2012-08-08 12:27:41,524 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.
 2012-08-08 12:42:07,275 ERROR [Thread 6 of 6 in DomainServiceThreadPool] [SPC_10013] Prozess für Dienst _AdminConsole konnte nicht gestartet werden.

This is not very helpful (the German message just says that the process for the service _AdminConsole could not be started), but in the following logfile I found a clue:

INFA_HOME/services/AdministratorConsole/administrator.log

2012-08-08 11:40:45,103 ERROR [org.apache.catalina.core.ContainerBase.[_AdminConsole].[localhost].[/administrator]] Exception sending context initialized event to listener instance of class com.informatica.adminconsole.app.config.CustomChainListener
java.lang.RuntimeException: Exception parsing chain config resource ‘/WEB-INF/chain-config.xml’: /..INFA_HOME../services/AdministratorConsole/administrator/WEB-INF/chain-config.xml (Too many open files)
at org.apache.commons.chain.web.ChainResources.parseWebResources(ChainResources.java:194)
at org.apache.commons.chain.web.ChainListener.contextInitialized(ChainListener.java:221)
at com.informatica.adminconsole.app.config.CustomChainListener.contextInitialized(CustomChainListener.java:32)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3795)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4252)
at org.apache.catalina.core.StandardContext.reload(StandardContext.java:3056)
at org.apache.catalina.loader.WebappLoader.backgroundProcess(WebappLoader.java:432)
at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1278)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1570)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1579)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1579)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1559)
at java.lang.Thread.run(Thread.java:662)

So to fix the problem you have to increase the open file descriptor limits.
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/

# sysctl -w fs.file-max=100000

# vi /etc/sysctl.conf

add the line:

fs.file-max = 100000

# sysctl -p

# vi /etc/security/limits.conf

Add the lines for your Informatica user (e.g. informatica):

informatica soft nofile 4096
informatica hard nofile 10240

or, if you are not sure, for all users:

* soft nofile 4096
* hard nofile 10240

Restart the server and it should work, or apply the limits immediately as described here:

http://lzone.de/apply+limits+immediately
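
To verify that the new limits are actually in effect, you can check the soft limit in a fresh login shell of the Informatica user (username assumed to be informatica):

su - informatica -c 'ulimit -n'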

Here is a little tip on how I found the interesting logfiles in the Informatica home directory.
This lists all files ending in “log” together with their line counts:

find . -name "*.log" | xargs wc -l
 0 ./servicesFramework.log
 0 ./isp/bin/servicesFramework.log
 0 ./services/AdministratorConsole/monitoring.log
 15 ./services/AdministratorConsole/administrator.log
 0 ./server/servicesFramework.log
 0 ./tomcat/webapps/csm/output/csm.log
 367 ./tomcat/logs/node.log
 264 ./tomcat/logs/exceptions.log
 0 ./tomcat/temp/_AdminConsole/logs/host-manager.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/manager.2012-08-08.log
 55459 ./tomcat/temp/_AdminConsole/logs/catalina.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/admin.2012-08-08.log
 0 ./tomcat/temp/_AdminConsole/logs/localhost.2012-08-08.log
 622 ./tomcat/bin/ispLogs.log
 676 ./tomcat/bin/servicesFramework.log
 0 ./tomcat/bin/infa_jsf.log
 402 ./Informatica_9.1.0_Services_HotFix4.log
 297 ./Informatica_9.1.0_HotFix4_Services_InstallLog.log

Glassfish Directory Deployment (Exploding an EAR)

Recently I was struggling with Glassfish directory deployment. Actually it is quite easy:

  • unzip the EAR file (e.g. example.ear) to a directory named like the EAR without the .ear suffix
  • then go into this directory and unzip all war and jar files into directories named *_war and *_jar (only on this directory level, don't touch the files in /lib)
  • now copy the folder into your domain's autodeploy folder

For more convenience, use this script. I think it is self-explanatory.

earexploder.sh example.ear

FILE: earexploder.sh

#!/bin/bash
# unpacks an ear into the exploded directory layout described above

EAR=$1
EARDIR=${EAR%.ear}

unzip "$EAR" -d "$EARDIR"
cd "$EARDIR" || exit 1

# explode all jars on this level into *_jar directories
for ARCHIVE in *.jar; do
    ARCHIVEDIR=${ARCHIVE%.jar}_jar
    unzip "$ARCHIVE" -d "$ARCHIVEDIR"
done

# explode all wars on this level into *_war directories
for ARCHIVE in *.war; do
    ARCHIVEDIR=${ARCHIVE%.war}_war
    unzip "$ARCHIVE" -d "$ARCHIVEDIR"
done

Oracle exp export with full tns-string

Handling the Oracle tnsnames file can be a pain in the ass, especially if you don't want to rely on it, for example when a user should be able to enter a connection dynamically with the typical host, port, SID, username and password configuration. In Oracle you can also use the full TNS string to connect to a database. I struggled for some time to find the correct escaping and quoting, but finally this worked:

Linux
(if the line is too long to fit in the browser window, mark it completely (double-click) and copy & paste it into your favourite editor)

exp userid=\'sys/yourpw@\(DESCRIPTION\=\(ADDRESS_LIST\=\(ADDRESS\=\(PROTOCOL\=TCP\)\(Host\=192.168.123.123\)\(Port\=1521\)\)\)\(CONNECT_DATA\=\(SID\=xe\)\)\) as sysdba\' file=/tmp/testexp.dmp full=y
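
An easier-to-read variant (an untested sketch of the same connect string): put the TNS string into a shell variable and let double quotes deal with the parentheses; the inner single quotes are still needed so that exp treats “as sysdba” as part of the userid:

TNS="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=192.168.123.123)(Port=1521)))(CONNECT_DATA=(SID=xe)))"
exp userid="'sys/yourpw@${TNS} as sysdba'" file=/tmp/testexp.dmp full=y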

Oracle-Documentation
http://docs.oracle.com/cd/B19306_01/server.102/b14215/exp_imp.htm
