While many people think Facebook was looking for the technology behind Instagram, or their scalability secret sauce, the truth might be something else.

What makes Instagram different is its unique iOS (and recently Android) app. What Facebook might be looking for is integrating the Instagram app experience into its own mobile application, so that a Facebook user can shoot a photo and post it to their timeline. Let's hope the new changes make the Facebook crowd more creative.

Update #1 – Forbes: Facebook Launches Instagram-Fueled Photo App


Unix system administrators develop their own customizations and configuration file templates over time. These can be either personal or system-wide customizations. Personal customizations live under each user's home directory, and the file's name usually starts with a dot (which is why these files are usually called dot-files). System-wide customizations can usually be found under the /etc hierarchy.

I have also developed my own set of configuration templates that I use on almost every system I log into. I keep them on my private Subversion server so I can update them from time to time and keep track of the changes as well.

So I am uploading a selection of my configuration files to my website so everyone can grab a copy, make their own modifications, and use them. I use these files on FreeBSD and Mac OS X, but most of them can be used on Linux, Solaris, etc. as well. I would also be glad to have your suggestions and feedback.

http://farrokhi.net/dotfiles/

 


I was following the story of Stuxnet from the very early days, when it had just been discovered, and recently came across Wired's very thorough story on it. By now everyone knows it was not just another ordinary computer worm. While it might not be the first of its kind, it is the most sophisticated cyber weapon to date.

Obviously we will see more and more such cyber weapons in the future, and governments will invest in creating them just as they invest in other types of weaponry. But there are some major differences between a cyber weapon and legacy weaponry, and major risks involved in using them.

A cyber weapon is sent toward the target and must hide itself for an unspecified amount of time until it verifies it has reached the target; only then does it activate (or, in some cases, it can be triggered remotely or on a specific date), and the weapon's payload does whatever it is supposed to do (steal information, destroy information and systems, etc.). Cyber weapons usually act slowly by their nature: they need to hide themselves and replicate until they reach the ultimate target, and they may traverse thousands of systems along the way. And what if the weapon falls into the wrong hands (e.g. is discovered by security researchers, or by the target itself) before the payload can be delivered or triggered?

Governments invest huge amounts of money in creating cyber weapons, as with any other form of weaponry. So a captured cyber weapon is like a modern fighter plane crashing behind enemy lines, or a spy being captured: it becomes a source of information for the target, who can figure out the technologies their enemies are using against them, use the same techniques themselves, or find a way to counter them.

For such cases, conventional weapons carry something like a self-destruct system, and spies have special instructions to follow when they are captured. But how would such a system work in a cyber weapon? How can a cyber weapon tell that it has been discovered? And how could it self-destruct (which might be impossible due to the distributed nature of such threats)?

So there is a high risk of being discovered before the payload can be delivered. And once discovered, the weapon can be rendered ineffective.

When it comes to exposing the techniques and methods used in creating a weapon, the risk of using cyber weapons seems higher than for other types of weaponry; that exposure in turn increases the risk of early detection of the threat and of defenses being built against it.


I have heard it so many times from different people that the CLI in FreeBSD is much less user-friendly than the CLI in Linux. But is it true?

Unlike Linux, which uses the Bash shell by default, the default shell in FreeBSD is csh (or tcsh). Linux users are used to tab completion, which is not the default behavior of the C shell. If you want tab completion, all you need to do is add one line to your C shell configuration file (~/.cshrc):

set autolist

And you will have your good old tab completion in C Shell.
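In practice that is a one-line append (assuming your account's shell is csh or tcsh and that it reads ~/.cshrc):

```shell
# Append the setting to the C shell startup file; new shells pick it up
# (or reload in place with "source ~/.cshrc" from a running tcsh).
echo 'set autolist' >> ~/.cshrc
grep autolist ~/.cshrc
```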


When it comes to the online existence, our attitudes seem drastically different, though: we only joke about the idea of using the evil bit – and yet, we are perfectly comfortable that the locks on our doors can be opened with a safety pin. We scorn web developers who can’t seem to be able to get input validation right – even though we certainly don’t test our morning coffee for laxatives or LSD. We are being irrational – but why?

- lcamtuf: The rise and fall of perfect security

 

If you are running Mac OS X, have upgraded to the recent 10.5.7, and use a Bluetooth mouse or keyboard, then you are most likely suffering from the same problem I do: the Bluetooth device loses its connection to your Mac after a few hours of work. And it's a real PITA.

I have been struggling with this since I upgraded to 10.5.7 and hadn't found a working solution, until I recently tried this and it worked like magic:

sudo killall -HUP blued

You only need to open a Terminal.app window and run this command. It sends a HUP (hangup) signal to the Bluetooth daemon, which is effectively a soft reset: it causes blued to reload its configuration and brings your Bluetooth device back to life.

Update 1: The latest Bluetooth firmware update from Apple didn't solve the problem. It still happens (less frequently, though) and I still need to kick blued to get things working.


The traditional (yet very popular) gzip is a single-threaded application from the single-processor/single-core hardware era. It is just fine if you are compressing a few files occasionally, but it becomes a great pain when you are compressing 32,000 files on an 8-processor server and suddenly realize you are using only 1/8 of your total processing power, which means waiting 8 times longer than if you could use all of it. I ran into exactly such a case: I had to wait about 40 minutes to compress hundreds of gigabytes spread over a few thousand files with traditional gzip, with one processor doing the whole job while the other 7 sat idle.

So I thought there should be a way to speed up the process. The simplest method I could use was to open multiple terminal windows and run parallel copies of gzip, each compressing a specific set of files. While this worked for me, I wondered why gzip itself doesn't support multi-threading.
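That manual workaround is easy to script. A minimal sketch, assuming a scratch directory under /tmp and a made-up set of files: split the file list into batches and run one background gzip per batch:

```shell
# Create a few sample files to compress (illustrative only).
mkdir -p /tmp/gzdemo && cd /tmp/gzdemo
for i in 1 2 3 4; do echo "data $i" > "file$i.txt"; done

N=2                        # number of parallel gzip jobs
ls *.txt | awk -v n="$N" '{ print > ("batch." NR % n) }'

for b in batch.*; do
    xargs gzip < "$b" &    # one gzip per batch, running in the background
done
wait                       # block until every batch has been compressed
ls *.gz
```

With N set to the number of processors, this keeps every core busy; it is essentially the same idea that pigz automates.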

The solution: pigz

I came across pigz after searching the internet for a multi-threaded gzip replacement. pigz is a drop-in replacement for gzip that parallelizes compression across multiple processors and cores.

Figure 1: Running “systat -iostat 1” on a FreeBSD 7.2 machine running pigz

Using pigz, I could exploit more than 70% of my processing power. pigz also maintains compatibility with the standard gzip command-line parameters and supports all of its switches, while adding a “-p” option to specify the maximum number of compression threads.


I am the type of person who likes to build everything from source code on FreeBSD to get better performance and other customizations. Playing with the ports tree and the system source code has become a habit, and by now I have my own template for various server platforms.

One of the most important parts of each configuration template is the /etc/make.conf file. This is where you can change the general behavior of the build system: it is where you say which compiler optimizations should be used and which options are to be treated as defaults. The good news is that the ports collection, as well as the operating system itself, honors this configuration.

Here is how a typical make.conf looks on one of my boxes:
CPUTYPE?=nocona

CFLAGS=         -O2 -pipe -fno-strict-aliasing
COPTFLAGS=      -O2 -pipe -funroll-loops -ffast-math -fno-strict-aliasing

KERNCONF=       SERVER GENERIC

OPTIMIZED_CFLAGS=       YES
WITHOUT_X11=            YES
BUILD_OPTIMIZED=        YES
WITH_CPUFLAGS=          YES
WITHOUT_DEBUG=          YES
WITH_OPTIMIZED_CFLAGS=  YES
NO_PROFILE=             YES
BUILD_STATIC=           YES


The CPUTYPE variable tells gcc to optimize the generated binary code for the specified processor. In this case I am using the 64-bit Xeon processor architecture, and “nocona” is the correct CPUTYPE for it. You may want to use “pentium4” on a typical Intel P4 CPU. A list of possible CPUTYPE values can be found in the sample make.conf located at /usr/share/examples/etc/make.conf.



I suddenly came across this old post from 2004 in which I explained my early experiments with OS X. Now, after being a hardcore OS X user for more than a year, I realize how much the world has changed since then. Firefox is a really usable browser now, and IM clients are up to date. I still hate iChat for no good reason. Maybe because I hate IM altogether.
And guess what: my favorite OS X app is Terminal.
In fact, OS X offers an intuitive interface that is very usable and hassle-free. I would call it a real productivity booster. Besides the interface, the OS itself is built on a mature BSD skeleton, and as a Unix fanatic I really enjoy poking around OS X.

I seriously urge you to switch to Mac if you care about your productivity and performance.


Keeping accurate time on a host (either a server or a workstation) is important because:

1- You need to know accurately when you should go for lunch or back home
2- You need accurate time in your event log files for further analysis
3- Many programs need to have the correct date and time to function (e.g. MTA)
4- You need correct timestamps on your files

Given the above, you will want to enable NTP on your hosts and keep your system clock in sync with public time servers.

First you should make sure that your timezone setting is correct. The latest timezone information can be installed via the “zoneinfo” port from /usr/ports/misc/zoneinfo:


# cd /usr/ports/misc/zoneinfo/
# make install clean

and run tzsetup(8) to make sure you have selected the correct timezone.

Now, to enable automatic time sync during system startup, you need to add a few lines to your /etc/rc.conf file:

ntpdate_enable="YES"
ntpdate_flags="-b pool.ntp.org"

This makes your system sync its clock at startup. I use the NTP pool at “pool.ntp.org”, which picks an NTP server for you from a large pool of available time servers. However, you may use your favorite/local NTP server instead.
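Note that ntpdate only steps the clock once, at boot. If you also want continuous synchronization while the system is running, FreeBSD's bundled ntpd(8) can be enabled alongside it in /etc/rc.conf (a common pairing; the ntpd_sync_on_start knob may not exist on older releases, so treat this as a sketch):

```shell
ntpd_enable="YES"            # start ntpd(8) at boot for continuous sync
ntpd_sync_on_start="YES"     # allow a large initial clock step at startup
```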

You can also synchronize the time manually by invoking ntpdate(8) from the command line and passing it an NTP server address:

# ntpdate time.nist.gov
