Linux Boot in 5 Seconds (with subscriber link)
The LWN article turned out to be not as impressive as I had hoped and expected from LWN - usually their articles are of excellent quality. Don't get me wrong - I do like what they've done. But the level of documentation is too low for my taste. It is far from the howto that someone in the comments asked about.
But I can recommend the flame war in the comments about how to correctly measure the boot time of a distribution. Come on, guys - it's still so far away from 5 seconds that it doesn't matter how you measure it. And yes, some serious changes are likely needed.
I hope Ubuntu puts a lot of the boot time improvements into the next release. I would put up with a couple of bugs in the release for that.
Arjan van de Ven Interview
I think that will change. At some point voice recognition has to come around and get better than the keyboard, and we will finally start talking again - as nature wants us to - instead of quietly sitting in front of our computers.
And at some point there will be developments to directly transfer sentences from your brain to the computer without the need to talk - though I'm not sure if that's really an improvement, language is just too cool an invention imo.
Well, for now I really look forward to improvements in boot time and in latency. That's another area where Windows has a really hard time competing: the drivers' and applications' code is mostly proprietary and hidden somewhere behind closed doors and servers.
It's simply a great technical advantage to be able to immediately look at all the code, see how it works and find out where the best place to fix a certain problem is - and then to fix the actual root cause of the problem. That's one of the neatest things about open source.
Registration Hell
Now more and more I notice myself not bothering to register somewhere to add a comment to a bug, or file a new bug if it doesn't really matter that much to me. I've got probably a hundred or more accounts at different places by now and it's so bothersome. I really hope something's going to eliminate that problem at some point...
The actual page that made me write this was a Pidgin bug ticket about Pidgin calling fsync way too often and causing very bad latency in the process (300+ ms on my system). I found it using latencytop after reading this interesting interview, and I was wondering whether anything has been done about the problem. It doesn't look like it...
Bashrc Tips
Quick Transcoding with Mencoder and Xvid
e.g. bash mencode.sh dvd://
#!/bin/bash
# mencode.sh (c) 2008 linux-tipps.blogspot.com
# settings
BR=1400 # bitrate
lang=en # your language code for the DVD audio track, e.g. en/de/es
OUT=`basename "$1"`"-xvid-fast.avi" # output file in current directory
XVID="lumi_mask:interlacing:nochroma_me:me_quality=4:vhq=1:autoaspect:chroma_opt:bitrate=$BR" # xvid settings
# settings end
[ -f "$OUT" ] && echo File exists && exit 1;
echo "Encoding $1"
screen -O mencoder -alang $lang -cache 32768 -ovc xvid -oac copy \
-xvidencopts $XVID "$1" -o "$OUT";
You may want to compile xvid and mencoder for your machine to speed it up even more, but I've already set the xvid settings to a reasonably fast mode. I get around 25-30 fps when doing a DVD to Xvid backup on my notebook.
You could add -vf scale=640:-1, but it doesn't really increase speed and it makes the result look worse. I use screen so I can detach (Ctrl-a, Ctrl-d) from the encoding when I have to, e.g. to exit the current X session.
Detaching also helps to decrease unnecessary screen updates, which eat CPU in many terminals. And it's better than passing -quiet as a command line argument, because you keep the choice between seeing what's going on and how long it will take, or not. Screen is great... You can re-attach with screen -r. If you have several cores, try adding threads=x to the xvid options, as shown below.
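On a dual-core machine, for example, that could look like this in the settings section of the script (the thread count is just an assumption; match it to your number of CPU cores):
# append the thread option to the xvid settings defined above
XVID="$XVID:threads=2"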
Black borders are not removed. If you have an idea how to do that automatically, let me know. Interlacing is kept for a good reason: Deinterlacing slows down the process and usually also degrades the quality. Deinterlacing while playing is usually the better alternative.
By the way: The most important parts are the xvid setting me_quality=4 and vhq=1 - they speed it up a lot - and compiling xvid for your processor, because some distributions don't properly compile it, even for amd64.
You may not use this script for content you do not have the rights to transcode.
Iron - Chrome after Rehab
And don't forget to disable the Google Ad cookie (permanent link in my links box on the right side of the webpage)! ;-)
OpenOffice Tips
I always recommend continually saving with a new filename if you're working on a big document, e.g. Important00, Important01, (...). That way, if your current document should for some reason get lost (which happens a lot with MS Office, but rarely with OpenOffice), you can fall back to the last save. I actually had OpenOffice corrupt a document once, I think.
Syntax Highlighting in Nano
include "/usr/share/nano/c.nanorc"
include "/usr/share/nano/patch.nanorc"
include "/usr/share/nano/sh.nanorc"
See what else is available with ls /usr/share/nano/
Found here.
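If you want all of the bundled syntax definitions at once, a one-liner like this should also work (assuming the standard /usr/share/nano install path):
# generate include lines for every bundled syntax file and append them to ~/.nanorc
for f in /usr/share/nano/*.nanorc; do echo "include \"$f\""; done >> ~/.nanorc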
Chrome Developer's Channel
Multiseat Display Manager Out Now
I have always wondered when software like MDM would come out, since the Unix and X architecture is already perfectly prepared for setups like this - it even works over the network.
Now you need only one computer with several displays, keyboards and mice to let several users e.g. surf the internet at the same time.
I will post my experiences as soon as I get around to trying it out. For the curious among you, here is a link to the installation instructions; packages for Debian and Ubuntu are provided, as well as the sources, of course.
Update: The package is only about 20K, but unfortunately it's available only as an i386 package at this point...
Update2: I've quickly assembled a Ubuntu 8.04 package for AMD64 systems ;-).
Ubuntu Restricted Extras
sudo apt-get install ubuntu-restricted-extras
You must of course first check the legal situation of these packages in your country.
Chrome on Linux Now Really Easy
Update: In my experience the package is not really worth trying. I had much better results running Chrome with plain Wine as mentioned before. The CodeWeavers package is really easy and convenient to install, though.
But the AMD64 version I tested is also much slower and less responsive, and I think it crashes more often, though that's hard to tell with Chrome ;-). Tabs often hang after loading a webpage, and the page disappears until you press reload. And letters don't sit on a common baseline - they're always a bit higher or lower.
But the cool thing about the port: SSL works. That means you can actually use Google pages that require a login, like Gmail. I hope that is ported back to Wine, making Chrome actually usable on Linux.
Update2: The problems I had seem to be related to 64-bit, as other webpages don't mention these issues.
Don't try Multi Head Configurations with KDE 4.1
See these bugs for more:
https://bugs.kde.org/show_bug.cgi?id=163057
https://bugs.kde.org/show_bug.cgi?id=162623
https://bugs.kde.org/show_bug.cgi?id=158850
Plasma Crash on Start Since Last Update
So my recommendation: Skip the KDE 4.1.1 upgrades for now.
Correct Ink Printer Usage
- Keep the printer connected to the power outlet. (Do turn it off, though.) The energy this costs should be low on any modern printer. If you disconnect your printer, it will most likely clean its ink tank the next time you turn it on or print something, and cleaning the ink tank once takes a lot of ink.
- Print something in every color at least once a week. This keeps the tank clean and ensures the printer doesn't need to run its ink cleaning program, which uses a lot more ink than most things you will print. The best thing to do is to print the test pattern once a week if nothing else: it usually activates every ink once and uses very little of it.
- Use printing modes like "draft" for unimportant things. They are not only faster, they also use much less ink. Also use your printer's duplex feature if it has one - this saves 50% of the paper and is therefore good for the environment and your wallet. And you can always print several pages on each side of a sheet.
- Get your ink cartridges refilled. This is good for the environment and your bank account, and it doesn't hurt your printer. Sometimes alternative ink is even better than the original: Pelikan ink, for example, now works with chipped printers, and the results are even better than with Canon's original ink.
Safely Reboot a Crashed Linux
Hold Alt + SysRq (the Print key) and, while holding it, press
R E I S U B
(without the spaces).
Now your kernel *should* sync and unmount the filesystems and then reboot.
Found in 10 tips for lazy admins.
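Note that this only works if the magic SysRq key is enabled in the kernel; one way to check and enable it (as root) might be:
cat /proc/sys/kernel/sysrq                     # 1 means fully enabled, 0 disabled
echo 1 > /proc/sys/kernel/sysrq                # enable it for the running kernel
echo "kernel.sysrq = 1" >> /etc/sysctl.conf    # keep it enabled after a reboot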
Sysctl for Network Performance
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
net.core.wmem_max = 262143
net.core.rmem_max = 262143
net.core.rmem_default = 262143
net.core.wmem_default = 262143
Or a shorter list, with comments:
# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
# Decrease the time default value for tcp_keepalive_time connection
net.ipv4.tcp_keepalive_time = 1800
# Turn off the tcp_window_scaling
net.ipv4.tcp_window_scaling = 0
# Turn off the tcp_sack
net.ipv4.tcp_sack = 0
# Turn off the tcp_timestamps
net.ipv4.tcp_timestamps = 0
# Enable bad error message Protection
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Log Spoofed Packets, Source Routed Packets, Redirect Packets
net.ipv4.conf.all.log_martians = 1
# Increases the size of the socket queue (effectively, q0).
net.ipv4.tcp_max_syn_backlog = 1024
# Increase the tcp-time-wait buckets pool size
net.ipv4.tcp_max_tw_buckets = 1440000
An excerpt from Webhostingtalk.
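To actually apply such settings, one common way is to append the lines to /etc/sysctl.conf and reload it, or to try a single value first:
sudo sysctl -p                                 # reload /etc/sysctl.conf
sudo sysctl -w net.ipv4.tcp_fin_timeout=15     # or test one value at a time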
The Great Sysctl Mystery
So I just thought to myself:
It would be great to have a program that knows all the values and the explanations for them. It could then create configuration files and let sysctl parse them. "Linux Kernel Tuning" would be a cool name. If I had more time... ;-)
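A very rough sketch of that idea in shell, just to illustrate it (the tunables, values and output path are arbitrary examples):
#!/bin/bash
# write a commented config file and let sysctl parse and apply it
OUT=./tuning.conf
cat > "$OUT" <<'EOF'
# How long sockets stay in FIN-WAIT-2 before being closed
net.ipv4.tcp_fin_timeout = 15
# How aggressively the kernel swaps memory out (0-100)
vm.swappiness = 10
EOF
sudo sysctl -p "$OUT"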
The World in Danger
Also check out this comic.
Git Bisecting the Linux Kernel to Find Bugs
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
You can then compile the kernel normally. To fetch the newest updates, change into the directory (linux-2.6) and run git pull.
Then, if you find a bug that wasn't there before, you can find out which patch caused it with a git bisect search. Because it's a binary search, the number of steps grows only logarithmically with the number of patches, so it stays relatively quick even with a lot to test - around a dozen compiles and boots for about 4000 patches.
Also see this quick intro and the man page.
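A typical bisect session looks roughly like this (the good tag is just an example; use whatever version last worked for you):
git bisect start
git bisect bad                 # the currently checked-out kernel shows the bug
git bisect good v2.6.26        # last version known to work (example tag)
# ... compile, boot and test the kernel git checks out ...
git bisect good                # or "git bisect bad", depending on the test result
# repeat until git names the first bad commit, then clean up:
git bisect reset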
Compiling really takes most of the time. You can decrease that time by creating a minimal kernel configuration (it's worth it!) with only the features activated that are needed to trigger the bug.
I recommend compiling everything directly into the kernel; you can then just do "make bzImage" instead of make and save the time for building and installing the modules over and over again.
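With such a setup, one bisect iteration might boil down to something like this (the paths are assumptions for an x86 tree; adjust the install step to your bootloader):
make -j2 bzImage                                    # no modules to build or install
sudo cp arch/x86/boot/bzImage /boot/vmlinuz-bisect
# point a bootloader entry at /boot/vmlinuz-bisect, then reboot and test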
When the Community Does Not Help: Ubuntu Maintainers Do Nothing About a NetworkManager Memory Leak
Several people are watching the bug, and of course thousands are affected, since NetworkManager is part of the standard installation of at least Ubuntu and Kubuntu - but the maintainers do nothing. A fix is out, it has been applied successfully, and a package has even been released. It just never made it into the current distribution.
This shows that however much the community cares about important bugs, if the appropriate people don't respond, many Linux users are just as helpless as they would be on Windows. But I really doubt that such a major memory leak would remain unfixed for such a long time in any current Microsoft product.
A shame...
If you want the bug fixed for you, you can do so manually, I posted links to the fixed packages a while ago.
Latencytop Explained
Debian Packages the Easy Way
sudo checkinstall -D
You may of course need to apt-get install the package checkinstall first.
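For a typical source tree the whole workflow might look like this (a sketch; adapt the build steps to the project you're packaging):
./configure
make
sudo checkinstall -D    # runs "make install" but builds and installs a .deb package from it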
Using IrDA on Linux
A very useful website was this one: http://www.hpl.hp.com/personal/Jean_Tourrilhes/IrDA/#debug
You have to do most things as root. Otherwise they might not work because you don't have the permissions - and you won't know that, because the programs won't tell you.
- You need to load all the appropriate modules for your driver, as well as ircomm_tty, irtty_sir and sir_dev.
- Then you need to activate the network interface irda0: ifconfig irda0 up.
- You should now be ready to establish a connection (as root).
# re-probe the IrDA kernel module
rmmod nsc-ircc
modprobe nsc-ircc dma=3 io=0x02F8 irq=3 # these are standard settings for the 5220. they might work for you, too.
# activate interface
ifconfig irda0 up
# enable discovery
echo 1 > /proc/sys/net/irda/discovery
#wait for results
sleep 2
#check results
cat /proc/net/irda/discovery
#attach irda stack
irattach irda0 -s
#you should now be able to watch the transfer stats with irdadump
# for mobile phone connectivity problems I need to reduce the connection speed
echo 9600 > /proc/sys/net/irda/max_baud_rate
- cat /proc/net/irda/* shows you some information about the irda devices.
- Be aware that the connection device is /dev/ircomm0.
- You should first try everything as root. Otherwise it might fail just because of missing permissions. Then, once it works, try it as a normal user.
- obexftp claims it doesn't need the -i parameter for IrDA. Well, it does; see the example after this list.
- I still haven't been able to get ppp over GPRS to work with Linux.
- Irda currently works only after rebooting into Linux from Windows.
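As an example of the -i point above, a minimal obexftp session over IrDA might look like this (the file name is just a placeholder):
obexftp -i -l               # list the phone's root folder over IrDA
obexftp -i -p photo.jpg     # push a file to the device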
Memory Usage - Pidgin vs. Kopete
The GTalk outage gave me some time to run another memory consumption test of Pidgin vs. Kopete. Pidgin always used to win this competition, which is - along with file transfers working better at the time - why I use Pidgin.
I tested it again today and this time the Kopete coming with KDE 4.1.1 in Kubuntu won!
- Kopete used about 18 MB of RAM
- Pidgin used 23 MB of RAM
These results and the differences were reproducible. They were measured with "free" in the console: I started one program, then the other, and ran free in the shell in between - several times.
Of course these results apply only if you're running a KDE 4.1.1 desktop. If you're a gnome user, pidgin is very likely to use less memory for you.
Also reproducible was that Kopete ate a lot of memory when opening the - currently very slow - configuration dialogue.
There seems to be a bug somewhere, as this memory is not freed until you restart Kopete. So after configuring something in Kopete 0.60.1, make sure you quit and restart it.
Feel free to test it yourself and post your results. Make sure to include which distribution, program versions and desktop environment you use, and that you have the same number of accounts active in each program.
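If you want to double-check with per-process numbers instead of the system-wide free output, something like this should work (the process names may differ on your system):
ps -C pidgin -o rss=,comm=    # resident set size in kilobytes
ps -C kopete -o rss=,comm=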
Google Server Problem
Google Chrome under Linux with WINE
- Get a current wine version (1.1.3+)
- Download the Chrome offline installer.
- Install normally, best with a fresh wine installation.
- Go into the install directory, e.g. cd ".wine/drive_c/windows/profiles/<your Windows user>/Local Settings/Application Data/Google/Chrome/Application". NOTE: this path is not exactly the same for everyone.
- Start Chrome like this: wine chrome.exe --new-http --no-sandbox
- Disable URL typo correction.
- Disable website suggestions.
- Disable Google Ad cookies.
- Quit Chrome. Then change the browser's unique identifier in "User Data/Local State" to the values from Google Chrome Portable to (hopefully) prevent Google from identifying your personal browser copy:
"client_id_timestamp": "1220449017",
And now finally enjoy your privacy-enhanced Google Chrome browser! :)
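To avoid typing the long start command every time, a small launcher script might help (the wine prefix path and the Windows profile name are assumptions you will probably need to adjust):
#!/bin/bash
# launch Chrome from its install directory inside the wine prefix
CHROME_DIR="$HOME/.wine/drive_c/windows/profiles/$USER/Local Settings/Application Data/Google/Chrome/Application"
cd "$CHROME_DIR" || exit 1
exec wine chrome.exe --new-http --no-sandbox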
Unfortunately there's no guarantee that Google hasn't hidden even more spying features in this otherwise pretty neat little browser.
Strip Mining of Open Source
It makes for an interesting read, and I think his article is much better researched than my own that I posted here recently. Showing the dangers of some open source licenses, he concludes that developers should be careful in their choice of license.
The GPL, he concludes, not only protects against "strip mining" companies, but also (though that part is, IMO, less well reasoned) against fragmentation and forking of the code.
It seems Hillesley's article is a response to an article by Fleury.
Power Save Script
#!/bin/bash
SUDO=`which sudo`
SU=`which su`
EXEC="$SUDO $SU -c"
echo Activating power saving mode...
# Automatically suspend USB -- how to automate this?
# usbcore.autosuspend=1 on the kernel cmdline didn't work
$EXEC "echo 1 > /sys/module/usbcore/parameters/autosuspend"
# Disable CD autodetection by hal
# sudo hal-disable-polling --device /dev/scd0
$EXEC "ethtool -s eth0 wol d"
# reduce disk writes
$EXEC "echo 1500 > /proc/sys/vm/dirty_writeback_centisecs"
# SATA powersaving mode
$EXEC "echo min_power > /sys/class/scsi_host/host0/link_power_management_policy"
# AC97 power saving
$EXEC "echo 1 > /sys/module/snd_ac97_codec/parameters/power_save"
# Intel-HDA power saving
$EXEC "echo 1 > /sys/module/snd_hda_intel/parameters/power_save"
# optimize scheduling for dual-core cpus
$EXEC "echo 1 > /sys/devices/system/cpu/sched_mc_power_savings"
# Remove fixed network adapter, intel watchdog
sudo rmmod iTCO_wdt iTCO_vendor_support tg3
# set minimum frequency
#$EXEC "echo 866664 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq"
sudo killall guidance-power-manager.py
xrandr --output TV --off
# enable autosuspend for every connected USB device
for i in /sys/bus/usb/devices/*/power/autosuspend; do $EXEC "echo 1 > $i"; done
# enable wireless power management on the wlan interface
$SUDO iwconfig wlan0 power on
Actually I think a part of the script might be from someone else...
Reducing Power Consumption for Linux Laptop
Thinkwiki is a great wiki with lots of power-saving information; I recommend it.
Second Thoughts on Google Chrome
- is not (yet) available for Linux
- spies on which webpages you visit
- has a unique browser identifier
- can be crashed completely by a single webpage
- has some (so far minor) security issues.
Open Source Donation Center
Imagine there were a single, simple place where you could donate to different software projects and for different purposes. It could of course also offer memberships, bounties, support contract models, mascots, licenses etc. And the software projects wouldn't have to deal with the legal and financial issues involved - they would just get the money.
Xorg in particular needs more support, and I guess a couple of paid employees wouldn't hurt. Situations like that could be sought out and dealt with by a Linux task force. Someone like the Linux Foundation might be a good starting point.
(Not) Giving Back to the Community
Among the current examples of companies using free software are many big names:
- Apple's OS X operating system
- the Safari browser
- Google's Servers (Linux-based)
- Google's new Chrome browser
- AnchorFree's Hotspotshield software.
Apple only publishes the changes made to the Darwin operating system every once in a while - but then, few people really care. Of course Apple doesn't want people to be able to run Mac OS X on normal PC hardware and thus does everything to discourage too much community involvement here.
But you can only wonder why the Safari browser, based on KDE's Konqueror KHTML engine, is still not available for Linux. Even worse, most of the enhancements made by Apple were never incorporated back into KDE. And Apple even managed to draw developers away from developing KHTML to working only on Safari. (I know, this is a big debate and flame-prone.)
Google claims to give back much more to the community than it would have to and proudly states that Chrome is open source. But it's not as if that was really a free choice: Chrome is based upon Firefox and KHTML code, both of which are open source, and at least KHTML is LGPL-licensed and may thus not be turned into closed source software. And most other Google software products are closed source: Google Desktop, Picasa, etc.
And one has to wonder about the big picture. If companies do not return code enhancements and help to the open source projects, the result will not only be major frustration within the projects, but also a financial loss for the global economy. Project members will only keep enjoying the work they do in their free time if they feel encouraged rather than merely used; with that work they help prevent a constant reinventing of the wheel in different areas of software development and fix many bugs.
In the end, good open source collaboration can free up many resources and enable programmers everywhere to create new, better software much more quickly, dynamically and freely. A company getting involved with open source projects must consider this and keep in mind to give a significant part of the employee time, code and money it saved back to the projects.
Google Chrome is Out
Try either http://www.google.com/chrome/eula.html or http://dl.google.com/update2/installers/ChromeSetup.exe.
Or the Offline installer: http://dl.google.com/chrome/install/149.27/chrome_installer.exe
(not verified backup: http://rapidshare.com/files/142199853/chrome_installer.exe)