Sunday, December 6, 2009

Ubuntu - fixing the issue of /etc/resolv.conf being overwritten - edit /etc/dhcp3/dhclient.conf instead!

When you need to set a DNS server in order to access sites by name, editing /etc/resolv.conf doesn't always work, because the file gets overwritten regularly. So entries like these won't survive in /etc/resolv.conf:

search yahoo.com
nameserver 10.1.10.2

(in my example, 10.1.10.2 is the IP of my router, which in turn has the proper DNS servers from Comcast, but I could use the DNS servers from Comcast just as well)

Since editing /etc/resolv.conf may not work, edit the DHCP client configuration instead: "nano /etc/dhcp3/dhclient.conf" and add the entries at the end of the file.
Example for Comcast:

supersede domain-name "example.com";
prepend domain-name-servers 68.87.74.162, 68.87.68.162;

(where domain-name-servers can take a comma-separated list)
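
The change takes effect the next time the DHCP lease is renewed. A quick way to force that (a sketch, assuming your interface is eth0):

sudo dhclient -r eth0
sudo dhclient eth0
cat /etc/resolv.conf   # should now show the prepended name servers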

Saturday, December 5, 2009

Setting up MRTG on a managed switch (ex: on an SMC8024L switch)

Basic idea:

1
SMC switches have the default IP 192.168.2.10. Log into the switch via the http interface from a PC on the same configured network, then change the IP (e.g. to 10.1.10.10) and the community string (from "public" to "digitalagora").

2
Install the snmp tools (this example uses Ubuntu), or run "apt-cache search snmp" to find your favorite tools.
apt-get install snmp

3
Test if you see the switch:

snmpwalk -v 2c -Os -c digitalagora 10.1.10.10 system

sysDescr.0 = STRING: SMC8024L
sysObjectID.0 = OID: enterprises.202.20.59
sysUpTimeInstance = Timeticks: (981900) 2:43:39.00
sysContact.0 = STRING: SYSTEM CONTACT
sysName.0 = STRING: SMC8024L2
sysLocation.0 = STRING: SYSTEM LOCATION
sysServices.0 = INTEGER: 3

4
Install mrtg by following: http://oss.oetiker.ch/mrtg/doc/mrtg-unix-guide.en.html - Example (this builds the zlib prerequisite; the guide continues with libpng, gd and mrtg itself):

wget http://www.zlib.net/zlib-1.2.3.tar.gz
gunzip -c zlib-*.tar.gz | tar xf -
rm zlib-*.tar.gz
mv zlib-* zlib
cd zlib
./configure
make
cd ..


5
run the cfgmaker tool to create your /etc/mrtg.cfg file, by telling it to connect to the "digitalagora" community at the switch's IP. This creates a nice big config file with all the snmp info for the ports that had traffic; the others are commented out.

cfgmaker --global 'WorkDir: /opt/website/mrtg' \
--global 'Options[_]: growright' \
--output /etc/mrtg.cfg \
digitalagora@10.1.10.10


6
run mrtg so that it sees the latest settings
env LANG=C /usr/bin/mrtg /etc/mrtg.cfg


7
Rebuild the web site's index file
indexmaker /etc/mrtg.cfg > /opt/website/mrtg/index.html


8
Look at the output.
http://localhost/mrtg/
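
MRTG is meant to run every 5 minutes to keep the graphs current. A minimal crontab entry for that (a sketch, reusing the command from step 6; the first two or three runs print warnings about missing log files, which is normal):

*/5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg.cfg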

Wednesday, December 2, 2009

Tinyproxy - enabling "Anonymous Host" and "Anonymous Authorization"

Testing headers at: www.digitalagora.com/headers


Client's headers when hitting digitalagora.com through Tinyproxy with these settings disabled (commented out):
#Anonymous "Host"
#Anonymous "Authorization"
where in my example 8 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)
Accept = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent = Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept-Charset = ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding = gzip,deflate
Accept-Language = en-us,en;q=0.5



Client's headers when hitting digitalagora.com through Tinyproxy with these settings enabled:
Anonymous "Host"
Anonymous "Authorization"
where in my example 3 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)
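
To reproduce the test, point a client at the proxy and fetch the headers page with curl (a sketch, assuming Tinyproxy listens on firewallserver port 8888):

curl -x http://firewallserver:8888 http://www.digitalagora.com/headers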

Tuesday, December 1, 2009

Ubuntu and Apache: How to fix the error: "you have chosen to open ... which is a: application/x-httpd-php"

Edit the Apache configuration file:
sudo nano /etc/apache2/apache2.conf

Find these 2 lines:
AddType application/x-httpd-php .php .phtml
AddType application/x-httpd-php-source .phps

Comment them by adding a pound sign in front:
#AddType application/x-httpd-php .php .phtml
#AddType application/x-httpd-php-source .phps

Leave them commented out - that is the whole fix. With the PHP module installed (ex: libapache2-mod-php5), Apache already handles .php files through the module's own configuration under /etc/apache2/mods-enabled/, so these AddType lines in apache2.conf only make the browser offer the page as a download.

Restart Apache:
sudo /etc/init.d/apache2 restart

Close your browser to clear its cache, and access your web page again.

Done.

-----

In a little more detail:

You can telnet to port 80 and view the web page. From the prompt, type:
telnet localhost 80
and then type "GET / HTTP/1.0" without the quotes, and press ENTER two times.
Note that there is a space before the slash and a space after the slash.
The page should then display. Here is an example of before the fix:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: x-httpd-php

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the Content-Type is: x-httpd-php. Now, after the change:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: text/html

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the content type is text/html.
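
If you don't feel like typing the request by hand, curl shows the same response headers (assuming curl is installed):

curl -I http://localhost/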

How to play videos in Ubuntu

sudo apt-get update
sudo apt-get install vlc vlc-plugin-esd

Wednesday, November 25, 2009

How to back up a system to another server using rsync and get an email with a result

1. Create a trust relationship between the system which needs to be backed up and the system where the files will get backed up.

Replace ALPHA and BETA with the proper server names. (to add more server names, "sudo nano /etc/hosts" and add the ip and the name you wish to assign to each server)

ALPHA = server 1, where to log in from (on ALPHA, do this as user root)
BETA = server 2, destination where we log in

ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys

2. As root, create a backup script

Replace "abc" with the name of your server which you are backing up.

Create the file: "nano /usr/bin/backupabc" and paste the script below, change:
- the backup server name, ex: mybackupserver
- the user id you use on the backup server, ex: my-user-id-on-backup-server
- the backup paths on the backup server, ex: /mnt/mybigdrive/backups/abc/
- your email address, ex: my.lovely.email@gmail.com (make sure you install mail: "sudo apt-get install mailutils" )


#!/bin/sh

LOG=/tmp/backupabc.log

START=$(date +%s)
echo "" > $LOG
echo "Start " >> $LOG
echo `date` >> $LOG

# back up each top-level directory to the backup server
for D in bin boot etc home lib opt root sbin srv usr var
do
    rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /$D/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/$D/ >> $LOG
done

END=$(date +%s)
DIFF=$(( $END - $START ))

echo "I have ran the /usr/bin/backupabc script and it took $DIFF seconds" >> $LOG
echo "\nEnd " >> $LOG
echo `date` >> $LOG

cat $LOG |  mail -s "mybackupserver: backed up abc" my.lovely.email@gmail.com



3. As root, run the script manually:
/usr/bin/backupabc
OR
add the script to the crontab to run every day at 10 pm (22 hrs) (as root):
crontab -e   (if prompted, use "nano" as the editor)
0 22 * * * /usr/bin/backupabc

To see the log while it's being built, open another shell and:
tail -f /tmp/backupabc.log
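
Before trusting the backups, a dry run is useful: rsync's --dry-run flag lists what would be transferred without copying anything (a sketch using one directory from the script above):

rsync --dry-run --verbose --links --recursive --delete-during /etc/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/etc/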

Tuesday, November 24, 2009

How to set up Apache and limit access per IP - mod_limitipconn.so module

# Get Apache with the apxs2 tool
apt-get install apache2-threaded-dev

# test that apxs works
which apxs2


nano /etc/apache2/apache2.conf

and add this at the bottom:

# This command is always needed
ExtendedStatus On

# Only needed if the module is compiled as a DSO
LoadModule limitipconn_module /usr/lib/apache2/modules/mod_limitipconn.so

<IfModule mod_limitipconn.c>

    # Set a server-wide limit of 10 simultaneous downloads per IP,
    # no matter what.
    MaxConnPerIP 10
    <Location /somewhere>
        # This section affects all files under http://your.server/somewhere
        MaxConnPerIP 3
        # exempting images from the connection limit is often a good
        # idea if your web page has lots of inline images, since these
        # pages often generate a flurry of concurrent image requests
        NoIPLimit image/*
    </Location>

    <Directory /home/*/public_html>
        # This section affects all files under /home/*/public_html
        MaxConnPerIP 1
        # In this case, all MIME types other than audio/mpeg and video*
        # are exempt from the limit check
        OnlyIPLimit audio/mpeg video
    </Directory>
</IfModule>

# Modify the "/somewhere" to match the alias (not directory) which you are protecting.



# Add this mod at the bottom of the actions.load file:
  cd /etc/apache2/mods-available
  nano actions.load
# Add this at the end of the file:
  LoadModule evasive20_module /usr/lib/apache2/modules/mod_evasive20.so

# edit the httpd conf (not the apache2.conf) config file:
  nano /etc/apache2/httpd.conf
# add the following 2 comment lines at the bottom of the file, with the pound sign in front;
# this will ensure that in the following steps, the "make install" won't barf.

# Dummy LoadModule directive to aid module installations
#LoadModule dummy_module /usr/lib/apache2/modules/mod_dummy.so




# Download the limit ip connection module and set it up
  wget http://dominia.org/djao/limit/mod_limitipconn-0.23.tar.bz2
  tar -jxvf mod_limitipconn-0.23.tar.bz2
  cd mod_limitipconn-0.23
  nano Makefile
# Look for apxs and modify it to apxs2
  make
  make install
# If the "make install" barfs with an error such as:
  apxs:Error: Activation failed for custom /etc/apache2/httpd.conf file..
  apxs:Error: At least one `LoadModule' directive already has to exist..
then you forgot to edit the httpd.conf file and add the dummy module entry (see above).
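
After "make install" succeeds, a quick sanity check before relying on the limits (paths assume Ubuntu's Apache packaging):

apache2ctl configtest
/etc/init.d/apache2 restart
tail /var/log/apache2/error.log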

Friday, November 20, 2009

How to convert an .avi to .mpeg in Ubuntu

sudo apt-get install libavcodec-unstripped-51
sudo apt-get install ffmpeg
ffmpeg -i holiday.avi -aspect 16:9 -target ntsc-dvd holiday.mpeg
(and then wait a long time)

Sunday, November 15, 2009

How to convert uif to iso

This information is copied from: http://wesleybailey.com/articles/convert-uif-to-iso
Tested successfully.
-----------------------------------


Convert UIF to ISO

The fastest way to convert a UIF image to an ISO image is UIF2ISO. It is a speedy command line tool that will save you the hassle of installing wine and MagicISO.

This is how I downloaded and installed UIF2ISO, written by Luigi Auriemma. - http://aluigi.altervista.org/

1. We first need to install zlib and OpenSSL with apt-get.

sudo apt-get install zlib1g zlib1g-dev libssl-dev build-essential

2. Now we can download UIF2ISO with wget from a terminal, or grab it from the author's site (http://aluigi.altervista.org/).

wget http://aluigi.altervista.org/mytoolz/uif2iso.zip

3. Once you have the file downloaded, unzip it and cd into the directory.

unzip uif2iso.zip
cd src

4. Finally compile the source, and create the executable.

make
sudo make install

5. Now you can convert the .uif file to an .iso with the following command:

uif2iso example.uif output.iso

Mounting an ISO

You don't necessarily need to burn a cd in order to access the files within the ISO. You can mount it with some simple commands.

Here is how to mount the ISO from command line.

sudo modprobe loop
sudo mkdir /media/ISOPoint
sudo mount /media/file.iso /media/ISOPoint/ -t iso9660 -o loop
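
When you are done with the files, unmount it (same mount point as above):

sudo umount /media/ISOPoint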


Friday, November 13, 2009

Eratosthenes Sieve prime number benchmark in Java




// Eratosthenes Sieve prime number benchmark in Java
public class Sieve // extends java.applet.Applet implements Runnable
{
    String results1, results2;

    void runSieve()
    {
        int SIZE = 8190;
        boolean flags[] = new boolean[SIZE + 1];
        int i, prime, k, count;
        int iterations = 0;
        double seconds = 0.0;
        int score = 0;
        long startTime, elapsedTime;

        startTime = System.currentTimeMillis();
        while (true) {
            count = 0;
            for (i = 0; i <= SIZE; i++) flags[i] = true;
            for (i = 0; i <= SIZE; i++) {
                if (flags[i]) {
                    prime = i + i + 3;
                    for (k = i + prime; k <= SIZE; k += prime)
                        flags[k] = false;
                    count++;
                }
            }
            iterations++;
            elapsedTime = System.currentTimeMillis() - startTime;
            if (elapsedTime >= 10000) break; // run for 10 seconds
        }
        seconds = elapsedTime / 1000.0;
        score = (int) Math.round(iterations / seconds);
        results1 = iterations + " iterations in " + seconds + " seconds";
        if (count != 1899)
            results2 = "Error: count <> 1899";
        else
            results2 = "Sieve score = " + score;
    }

    public static void main(String args[])
    {
        Sieve s = new Sieve();
    }

    public Sieve()
    {
        System.out.println("Running Sieve - please wait 10 seconds for results...");
        runSieve();
        System.out.println( results1 );
        System.out.println( results2 );
    }
}



Wednesday, November 11, 2009

Ubuntu: How to fix the apt-get update error: W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available

The problem is during apt-get update:

...
Reading package lists... Done
W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B9FBE5158B3AFA9
W: You may want to run apt-get update to correct these problems


Solution:

gpg --keyserver keyserver.ubuntu.com --recv 8B9FBE5158B3AFA9
gpg --export --armor 8B9FBE5158B3AFA9 | sudo apt-key add -


Update should work now:

sudo apt-get update

Sunday, November 8, 2009

How to mount a remote file system in Ubuntu

# install the utility
sudo apt-get install sshfs

# make a directory where to mount the remote file system
sudo mkdir /mnt/backups
sudo chown YOURUSERNAME /mnt/backups

# mount the remote drive
sshfs YOURUSERNAME@192.168.1.123:/home/YOURUSERNAME/backups /mnt/backups

# check to see that the files are mounted
ls -la /mnt/backups
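
To unmount it later (sshfs mounts go through FUSE, so fusermount does the job):

fusermount -u /mnt/backups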

How to listen to mp3s in Ubuntu/Linux

sudo apt-get install amarok
sudo apt-get install libxine1-ffmpeg

(Amarok needs the libxine codec to decode mp3s)

Saturday, November 7, 2009

How to log into another server without it asking you for a password - in 4 steps.

ALPHA = server 1, where to log in from
BETA = server 2, destination where we log in


ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys


To establish a mirror relationship, swap ALPHA and BETA and run through the 4 steps again.
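
To verify the trust, run this on ALPHA; it should print BETA's hostname without prompting for a password:

ALPHA: ssh user@BETA hostname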

Friday, October 16, 2009

How to configure and install Tinyproxy


Download Tinyproxy - go to https://www.banu.com/tinyproxy/download/ and download the latest version
ex: wget https://www.banu.com/pub/tinyproxy/1.6/tinyproxy-1.6.5.tar.gz

Unpackage
tar xzvf tinyproxy-1.6.5.tar.gz

Build
cd tinyproxy-1.6.5
./configure
make
sudo make install


Edit the configuration file:
nano /usr/local/etc/tinyproxy/tinyproxy.conf

or use my version of it:


sudo su -
cd /usr/local/etc/tinyproxy
echo "" > tinyproxy.conf
nano tinyproxy.conf

and paste this. Make sure to change YOUR_USER_NAME to be the name of the
user account from which you are running Tinyproxy


# ==================================================================
##
## tinyproxy.conf -- tinyproxy daemon configuration file
##

#
# Name of the user the tinyproxy daemon should switch to after the port
# has been bound.
#
User YOUR_USER_NAME
Group YOUR_USER_NAME

#
# Port to listen on.
#
Port 8888

#
# If you have multiple interfaces this allows you to bind to only one. If
# this is commented out, tinyproxy will bind to all interfaces present.
#
#Listen 192.168.0.1
Listen 127.0.0.1
#
# The Bind directive allows you to bind the outgoing connections to a
# particular IP address.
#
#Bind 192.168.0.1

#
# Timeout: The number of seconds of inactivity a connection is allowed to
# have before it is closed by tinyproxy.
#
Timeout 600

#
# ErrorFile: Defines the HTML file to send when a given HTTP error
# occurs. You will probably need to customize the location to your
# particular install. The usual locations to check are:
# /usr/local/share/tinyproxy
# /usr/share/tinyproxy
# /etc/tinyproxy
#
# ErrorFile 404 "/usr/share/tinyproxy/404.html"
# ErrorFile 400 "/usr/share/tinyproxy/400.html"
# ErrorFile 503 "/usr/share/tinyproxy/503.html"
# ErrorFile 403 "/usr/share/tinyproxy/403.html"
# ErrorFile 408 "/usr/share/tinyproxy/408.html"

#
# DefaultErrorFile: The HTML file that gets sent if there is no
# HTML file defined with an ErrorFile keyword for the HTTP error
# that has occurred.
#
DefaultErrorFile "/usr/share/tinyproxy/default.html"

#
# StatFile: The HTML file that gets sent when a request is made
# for the stathost. If this file doesn't exist a basic page is
# hardcoded in tinyproxy.
#
StatFile "/usr/share/tinyproxy/stats.html"

#
# Where to log the information. Either LogFile or Syslog should be set,
# but not both.
#
Logfile "/var/log/tinyproxy.log"
# Syslog On

#
# Set the logging level. Allowed settings are:
# Critical (least verbose)
# Error
# Warning
# Notice
# Connect (to log connections without Info's noise)
# Info (most verbose)
# The LogLevel logs from the set level and above. For example, if the LogLevel
# was set to Warning, then all log messages from Warning to Critical would be
# output, but Notice and below would be suppressed.
#
LogLevel Info

#
# PidFile: Write the PID of the main tinyproxy thread to this file so it
# can be used for signalling purposes.
#
PidFile "/var/run/tinyproxy.pid"

#
# Include the X-Tinyproxy header, which has the client's IP address when
# connecting to the sites listed.
#
#XTinyproxy mydomain.com

#
# Turns on upstream proxy support.
#
# The upstream rules allow you to selectively route upstream connections
# based on the host/domain of the site being accessed.
#
# For example:
# # connection to test domain goes through testproxy
# upstream testproxy:8008 ".test.domain.invalid"
# upstream testproxy:8008 ".our_testbed.example.com"
# upstream testproxy:8008 "192.168.128.0/255.255.254.0"
#
# # no upstream proxy for internal websites and unqualified hosts
# no upstream ".internal.example.com"
# no upstream "www.example.com"
# no upstream "10.0.0.0/8"
# no upstream "192.168.0.0/255.255.254.0"
# no upstream "."
#
# # connection to these boxes go through their DMZ firewalls
# upstream cust1_firewall:8008 "testbed_for_cust1"
# upstream cust2_firewall:8008 "testbed_for_cust2"
#
# # default upstream is internet firewall
# upstream firewall.internal.example.com:80
#
# The LAST matching rule wins the route decision. As you can see, you
# can use a host, or a domain:
# name matches host exactly
# .name matches any host in domain "name"
# . matches any host with no domain (in 'empty' domain)
# IP/bits matches network/mask
# IP/mask matches network/mask
#
#Upstream some.remote.proxy:port

#
# This is the absolute highest number of threads which will be created. In
# other words, only MaxClients number of clients can be connected at the
# same time.
#
MaxClients 100

#
# These settings set the upper and lower limit for the number of
# spare servers which should be available. If the number of spare servers
# falls below MinSpareServers then new ones will be created. If the number
# of servers exceeds MaxSpareServers then the extras will be killed off.
#
MinSpareServers 5
MaxSpareServers 20

#
# Number of servers to start initially.
#
StartServers 100

#
# MaxRequestsPerChild is the number of connections a thread will handle
# before it is killed. In practice this should be set to 0, which disables
# thread reaping. If you do notice problems with memory leakage, then set
# this to something like 10000.
#
MaxRequestsPerChild 0

#
# The following are the authorization controls. If there are any access
# control keywords then the default action is to DENY. Otherwise, the
# default action is ALLOW.
#
# Also, the order of the controls is important. The incoming connections
# are tested against the controls based on order.
#
Allow 127.0.0.1
#Allow 192.168.1.0/25

#
# The "Via" header is required by the HTTP RFC, but using the real host name
# is a security concern. If the following directive is enabled, the string
# supplied will be used as the host name in the Via header; otherwise, the
# server's host name will be used.
#
ViaProxyName "tinyproxy"

#
# The location of the filter file.
#
#Filter "/etc/tinyproxy/filter"

#
# Filter based on URLs rather than domains.
#
#FilterURLs On

#
# Use POSIX Extended regular expressions rather than basic.
#
#FilterExtended On

#
# Use case sensitive regular expressions.
#
#FilterCaseSensitive On

#
# Change the default policy of the filtering system. If this directive is
# commented out, or is set to "No" then the default policy is to allow
# everything which is not specifically denied by the filter file.
#
# However, by setting this directive to "Yes" the default policy becomes to
# deny everything which is _not_ specifically allowed by the filter file.
#
#FilterDefaultDeny Yes

#
# If an Anonymous keyword is present, then anonymous proxying is enabled.
# The headers listed are allowed through, while all others are denied. If
# no Anonymous keyword is present, then all headers are allowed through.
# You must include quotes around the headers.
#
#Anonymous "Host"
#Anonymous "Authorization"

#
# This is a list of ports allowed by tinyproxy when the CONNECT method
# is used. To disable the CONNECT method altogether, set the value to 0.
# If no ConnectPort line is found, all ports are allowed (which is not
# very secure.)
#
# The following two ports are used by SSL.
#
ConnectPort 443
ConnectPort 563
ConnectPort 6667
ConnectPort 6668
ConnectPort 6669
ConnectPort 7000
ConnectPort 80
# ==================================================================

Make some config files readable:
sudo chmod a+r /usr/local/etc/tinyproxy/tinyproxy.conf

Create the log file:
sudo touch /var/log/tinyproxy.log
sudo chmod a+rw /var/log/tinyproxy.log
sudo touch /var/run/tinyproxy.pid
sudo chmod a+rw /var/run/tinyproxy.pid





You can optionally create a startup script for tinyproxy, in your home directory:
nano starttinyproxy
and paste this:

#!/bin/sh
killall tinyproxy
/usr/local/sbin/tinyproxy -c /usr/local/etc/tinyproxy/tinyproxy.conf -d &
sleep 5
tail /var/log/tinyproxy.log

save it, and make it executable:
chmod u+x starttinyproxy



Exit from root, and under your account, start up Tinyproxy:
./starttinyproxy
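
A quick test that the proxy works (a sketch, assuming the Listen 127.0.0.1 and Port 8888 settings from the config above, run on the same machine):

curl -x http://127.0.0.1:8888 -I http://www.example.com/
tail /var/log/tinyproxy.log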

Wednesday, October 7, 2009

How to set up the Linksys WUSB300N wireless N device to work with Linux/Ubuntu


Credits: mcdsco - http://ubuntuforums.org/showthread.php?t=530772

# start a shell, and log in as root
sudo su -

# install ndiswrapper for your system, this could vary, get a new version
cd /root
wget http://downloads.sourceforge.net/project/ndiswrapper/stable/1.55/ndiswrapper-1.55.tar.gz?use_mirror=softlayer
gzip -d ndiswrapper-1.55.tar.gz
tar -xvf ndiswrapper-1.55.tar
cd ndiswrapper-1.55
make install


# get the relevant files for the Linksys WUSB300N wireless device
mkdir /opt/ndis
cd /opt/ndis
wget http://www.atvnation.com/WUSB300N.tar
tar xvf WUSB300N.tar -C /opt/ndis/
cd /opt/ndis/Drivers

# install the drivers
ndiswrapper -i netmw245.inf
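
# (optional) verify that the driver was registered; ndiswrapper -l
# should list netmw245 as installed
ndiswrapper -l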

# plug the USB wireless device into the PC and:
modprobe ndiswrapper

# check to see if the device is seen:
dmesg | grep ndis
[ 4336.851339] ndiswrapper version 1.53 loaded (smp=yes, preempt=no)
[ 4336.890513] usbcore: registered new interface driver ndiswrapper
[ 4636.519061] ndiswrapper: driver netmw245 (Linksys, A Division of Cisco Systems, Inc.,12/07/2006,1.0.5.1) loaded


At this point, the device should work. Go to the wireless settings, set up your connection.
Type "ifconfig" to see the network configuration, the wireless device should show up under "wlan0".

Tuesday, October 6, 2009

College of Business at FSU



College of Business faculty: http://cob.fsu.edu/faculty/faculty_staff.cfm?type=2


========================
Some fun core courses
========================
ACG5026 Financial Reporting and Managerial Control
This course provides a basic understanding of accounting systems and financial statements as a foundation for analysis. The course also addresses cost systems and controls as they pertain to organizational control. Cannot be taken for credit for the Master of Accounting degree.
9780470128824 Financial Accounting in Economic Context Pratt 2009 7TH Required Textbook
9780967507200 Code Blue (w/264 or 261 pgs) McDermott 2002 3RD Required Textbook
ACG5026 Course Notes Target Copy Required Other
Stevens, Douglas E, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=399

========================
BUL5810 The Legal & Ethical Environment of Business
no sections open for Spring 2010
========================
FIN5425 Problems in Financial Management
no sections open for Spring 2010
========================
ISM5021 Problems in Financial Management
Applied course in concepts and techniques used in the design and implementation of management information systems and decision support systems, with emphasis on management of these systems
Textbooks and materials not yet assigned
Wasko, Molly M, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=305
T R 2:00-3:15 RBA 0202
========================
MAR5125 Marketing Strategy in the Global Environment
This course examines the business-level marketing strategy in the context of global markets and uses the marketing-planning process as a framework for understanding how global environments, markets, and institutions affect the strategic marketing operations of the global business enterprise.
9780324362725 Marketing Strategy Ferrell 2008 4TH Required Textbook
9781591396192 Blue Ocean Strategy Kim 2005 Required Textbook
Hartline, Michael D, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=306
========================
MAN5245 Leadership and Organizational Behavior
This course offers a dynamic examination of managerial concepts of human behavior in work organizations.
9780324578737 Organizational Behavior Nelson 2009 6th Required Textbook
Douglas, Ceasar, http://cob.fsu.edu/man/hrcenter/faculty.cfm
========================
MAN5501 Production and Operations Management
Develops a conceptual framework which is useful in describing the nature of the operations function, with emphasis on identifying basic issues in managing the operations of a service organization.
9780324662559 Operations Management David Collier and James Evans 2009-2010 Required Textbook
Smith, Jeffery S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=421
========================
MAN5716 Economics and Business Conditions
Problems of managing the firm in relation to the changing economic environment. Analysis of major business fluctuations and development of forecasting techniques.
No textbook required
Christiansen, William A, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=25
========================
MAN5721 Strategy and Business Policy
The course covers the relation between theories and practices of management, and focuses on utilizing methodologies and theories for strategic decision making.
9780132341387 Strategic Management: Concepts & Cases Carpenter 2009 2ND Recommended Textbook
M W 9:30 - 10:45 RBA 0202
Holcomb, Timothy R, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=427
========================


========================
Flex options
========================
FIN5515 Investments
This course offers an analysis of financial assets with emphasis on the securities market, the valuation of individual securities, and portfolio management.
9780324656121 Investment Analysis and Portfolio Management Reilly and Brown 9th Required Textbook
T R 3:35-4:50PM
Doran, James S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=368
========================
ISM5315 Project Management
no sections open for Spring 2010
========================
MAR5465 Supply Chain Marketing
no sections open for Spring 2010
========================
RMI5011 Fundamentals of Risk Management
This course develops concepts such as time value of money, statistical analysis, information technology, and management of risk exposure. Topics include risk fundamentals, risk management, insurer operations, and insurance regulation.
9780072339703 Risk Management & Insurance Harrington 2004 2ND Required Textbook
M W 11am-12:15pm
Born, Patricia H, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=458
========================

Thursday, September 10, 2009

Summary of the talk by Prof. Ted Baker


Alan Lupsha

Professor Ted Baker’s area of research is real-time systems. He focuses on real-time runtime systems, real-time scheduling and synchronization and real-time software standards.

Real-time scheduling for multiprocessors involves finding ways to guarantee deadlines for tasks which are scheduled on multiprocessor systems. A main problem with scheduling is that it is very difficult to meet constraints, given specific computational workloads. As workloads vary, meeting given constraints can be achieved with different guarantees. For example, the guarantee of execution differs when given constraints for fault tolerance, window of execution or energy usage. The quality of scheduling can vary as well, as this quality can quantify how well the schedule guarantees the meeting of deadlines or how late the task will complete over the deadline. Once an algorithm is able to schedule a workload, a schedule can also vary in sensitivity in proportionality with the variation in the parameters of the execution.

Professor Baker looks at workload models which involve jobs, tasks and task systems. Jobs are units of computation that can be scheduled with a specific arrival time, worst-case execution time, or deadline. Tasks are sequences of jobs, and can depend on other tasks. Sporadic tasks have two specific qualities: they have a minimum inter-arrival time, and they have a worst case execution time. Task systems are sets of tasks, where tasks can be related or they can be independent (scheduled without consideration of interactions, precedence or coordination).

Scheduling involves models, which can be defined as having a set of (identical) processors, shared memory, and specific algorithms. These algorithms can be preemptive or non-preemptive, on-line (decisions are made on the fly as instructions arrive) or off-line, and global or partitioned (split amongst processors where they can predict in advance the workload for each processor). There are three typical scheduling algorithms and tests. The first one is “fixed task-priority scheduling”, where the highest priority tasks run first. The second is “earliest deadline first”, where higher loads are handled without missing the deadline (these algorithms are easier to implement). The third type of algorithms (which are not used in single processing systems but only in multi-processor systems) are “earliest deadline zero laxity”, where the execution of a job can be delayed without missing the given deadline.

The difficulty of scheduling is that there is no practical algorithm for scheduling a sporadic task. One example of a scheduling test is the density test, where one can analyze what fraction of the processor is needed to serve a given task. Professor Baker researches task scheduling and is looking for acceptable algorithms which are practical, given specific processing constraints.

Tuesday, September 8, 2009

Summary of the Talk by Prof. FeiFei Li


Alan Lupsha

Professor FeiFei Li researches Database Management and Database technologies. His research focuses on efficient indexing, querying and managing large scale databases, spatio-temporal databases and applications, and sensor and stream databases.

Efficient indexing, querying and managing large scale databases deals with problems such as retrieving structured data from the web and automating the process of identifying the structure of web sites (ex. to create customized reports for users). It is important to interpret web pages and to identify data tree structures. This allows one to first create a schema for the structure of the data, and then to integrate information from different sources together in a meaningful way. The topic of indexing higher dimensional data (using tree structures and multi dimensional structures) deals with space partitioning that indexes data anywhere from 2 to 6 dimensions.

The topic of spatio-temporal databases and applications deals with the execution of queries, like finding solutions to NP-hard problems such as the traveling salesman problem. A solution uses a greedy algorithm, which has a start node location and finds the nearest neighbor in each predefined category of nodes. By minimizing the sum distance (using the minimum sum distance algorithm), a path from a start to an end node is found in such a way that each category is visited, and the solution is at most 3 times the cost of the optimal solution.

Sensor and stream databases deal with the integration of sensors into network models. A large set of sensors is distributed in a sensor field, and a balance is sought to solve problems such as data flow between sensors, hierarchy of sensors and efficient data transmission for the purpose of saving battery life. Professor Li analyzes the best data flow models between sensors and different ways to group sensors so that hub nodes transmit data further to other hub nodes (an example of such an application is the monitoring of temperatures on an active volcano). One can not use broadcast since this would drain the sensors’ battery life. Thus, routing methods and fail over mechanisms are examined, to ensure that all sensor data is properly being read.

Professor Li also researches problems with the method of Identical Independent Distributed Random Noise (IID), which introduces errors in data sets for the purpose of hiding secret data, while maintaining correct data averages and other data benchmarks (for example hiding real stock data or employees’ salaries, but preserving averages). The problem with IID is that attackers can filter out outliers in data and still extract the data that is meant to remain secret. A solution to this problem is to add noise to the original component of the data set by adding the same amount of noise, but in parallel to the principal component. This yields more securely obfuscated data.

Thursday, September 3, 2009

Summary of the talk by Prof. Zhenhai Duan


Alan Lupsha

Professor Zhenhai Duan researches an accountable and dependable Internet with good end-to-end performance. There is currently a serious problem with the Internet because it lacks accountability and there is not enough law enforcement. It is very hard to find out who did something wrong because hackers do not worry about breaking the law and they cover their tracks in order to not get caught. There is a need to design protocols and architectures which can prevent bad activities from happening and which can more easily identify attackers.

The current Internet lacks accountability, as even if there are no attacks, there are still many problems. For example, the time to recover during routing failures is too long, and DNS also has many issues. Dependable Internet defines higher accountability for banking and secure applications. End-to-end performance also needs to be high, especially for more important applications which need a greater guarantee of data delivery.

Professor Duan’s research projects include network security, solutions to network problems, routing, and intrusion detection. In IP spoofing attacks it is difficult to isolate attack traffic from legitimate traffic, and these attacks include the man-in-the-middle method with TCP hijacking and DNS poisoning, as well as reflector-based attacks with DNS requests and DDOS. There are distributed denial of service attacks which are issued from bot nets made up of millions of zombie (compromised) computers. To solve these network problems, professor Duan researches route-based filtering techniques. These techniques take advantage of the fact that hackers can spoof their source addresses but they can not control the route of the packets, while filters which know part of the network topology can isolate illegitimate traffic.

Inter-Domain Packet Filter (IDPF) systems identify feasible routes based on the BGP (an Internet domain routing protocol) updates. These systems evaluate the performance of other IDPFs based on Autonomous Systems graphs. It is hard to completely protect an Autonomous System from spoofing attacks, but IDPFs can effectively limit the spoofing capability of attackers. Using the vertex cover algorithm, one can prevent attackers in 80.8% of the networks which are attacked. If the attacks can not be prevented, one can still look at the topology and determine who are the candidates of the source packets. IDPFs are effective in helping IP traceback, as all Autonomous Systems can localize attackers. The placement of IDPFs also plays a very important role in the performance of protecting networks.

Since botnets are becoming a major security issue, and they are used in distributed denial of service attacks, spamming and identity theft, there is a greater need for utility based detection of zombie machines. The SPOT system is one system being researched which classifies messages as spam or not spam. It computes a function based on the sequential probability ratio test, using previously learned behavior of systems, and finally arriving at one of two different hypotheses, classifying messages as spam or not spam. Professor Duan is currently testing the SPOT system and improving it.

Tuesday, September 1, 2009

Summary of the talk by Prof. Mike Burmester


Alan Lupsha

Professor Mike Burmester is interested in research in areas of radio frequency identification and ubiquitous applications, mobile ad hoc networks (MANET) and sensor networks, group key exchange, trust management and network security, and digital forensics. New wireless technologies offer a great wireless medium, but unfortunately the current state of world research is not mature enough to fully understand and manage these new technologies. The fourth generation of wireless technologies, which should work both in the European Union and in the United States, will offer new challenges and opportunities for maturity in this field.

The RFID revolution will be the next big factor which will allow easier management of products. This technology is already being implemented in library systems, allowing easier book management and replacing bar codes, which require line of sight in order to scan each book. Airports are also implementing RFID for luggage management, and hospitals use RFID tags to protect newborns from being kidnapped. Different types of sensor networks are used extensively in factory floor automation, border fencing and in a plethora of military applications. Sensors will also be extensively used in monitoring biological levels in people. For example, a blood level monitor can monitor and alert a diabetic person if their sugar level is too high or too low.

Mobile ad-hoc networks (MANET) offer information routing between wireless devices which are mobile. Vehicular ad-hoc networks (VANET) are a type of mobile ad-hoc networks which allow communication between moving vehicles. These networks allow individual wireless devices to act as nodes and to route information between other communicating devices, thus reducing the need of dedicated wireless nodes. Ubiquitous networks allow applications to relocate between wireless devices, thus following a mobile user on his or her journey, while continuing to provide needed services.

These new wireless technologies will also need proper management. Some of the new issues at hand include centralizing or decentralizing systems, finding out who will protect certain systems, ensuring data security (such as confidentiality, avoiding eavesdropping, guaranteeing privacy), preserving data integrity (avoid the modification and corruption of data), and data availability (dealing with denial of service attacks, identifying rogue based stations, dealing with man in the middle attacks, detecting and avoiding session tempering and session hijacking).

There is a trade-off between security and functionality. It is extremely challenging to secure wireless networks, but in certain cases one may desire less security in order to achieve cheaper wireless products and technologies. Using secured pipelines to create point to point communication does ensure some security, but there are still problems at the physical layer, where attacks can be carried out. Hackers are keen to intercept and manipulate wireless data, making this a very attractive environment for them and creating the challenge to try and stay ahead of the users of these technologies. This gives rise to great security threats, but it also opens up a niche for researchers to study and create new wireless network security technologies.

Thursday, August 27, 2009

Bosch 5 pin SPDT relays


Bosch 5 pin relay - SPDT single pole double throw:
====================================================
white 85 coil source: +
black 86 coil ground: -

yellow 87 normally open: to + of load
red 87a normally closed: to + of load
blue 30 common: to - of load
====================================================

Wednesday, August 26, 2009

IRC stuff

P - priority, from 0 to 10
T - type of message:
1: plaintext data
2: base64 encoded data
D - data
Can be indexed, ex: D1, D2 ... D65535


Examples:
[P:0][T:1][D1: // priority 0, type chat, data 1
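
A hypothetical complete message under this scheme (the payload is "Hello" base64-encoded, matching type 2):

[P:3][T:2][D1:SGVsbG8=] // priority 3, base64 encoded data, first data field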

Tuesday, August 4, 2009

How to install Sun Glassfish sges-2_1-linux.bin

# Install the missing library
apt-get install libstdc++5

# go to the download location
cd /opt/downloads/

# execute the binary
./sges-2_1-linux.bin


Answer all the prompts. If you didn't install Java 2 SDK 5.0 or greater, go to another shell (ALT F2), log in and install it:

cd /opt/downloads/jdk
chmod u+x jdk-6u14-linux-i586.bin
./jdk-6u14-linux-i586.bin

and then move the installation:
mv jdk1.6.0_14 /opt/jdk16014

and then go back to the glassfish installation (ALT F1) and specify the installation directory:
/opt/jdk16014

When prompted for all the settings, make sure to enter an admin password (ex: adminadmin)

To start the app server:
/opt/SUNWappserver/bin/asadmin start-domain domain1

Console: http://localhost:4848
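
To stop the app server later, the same asadmin tool works:

/opt/SUNWappserver/bin/asadmin stop-domain domain1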


Install mysql:
apt-get install mysql-server-5.1
apt-get install mysql-client-5.1

mysql -u root -p

How to install Java after a fresh Ubuntu installation

sudo apt-get update
sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-plugin
java -version

Monday, July 27, 2009

Lorem Ipsum

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam vitae sollicitudin magna. Integer mi quam, tristique et tincidunt ut, scelerisque at nulla. Nunc tincidunt nibh ut nunc ultrices placerat. Phasellus dolor mi, molestie vel condimentum lacinia, mattis laoreet lacus. Cras quam diam, lobortis ac cursus a, ultricies sit amet justo. Praesent nec velit at tellus condimentum tristique. Aliquam ultrices elit eu nunc facilisis ullamcorper. Phasellus quis lacus sapien, quis mattis urna. Nulla ut felis sed elit venenatis fringilla. Cras interdum posuere augue in ornare. Donec tempus convallis leo eu posuere. Ut id mi felis. Curabitur eget sem ac leo commodo lacinia in sed libero. Mauris sit amet lacus eget erat tincidunt tempor. Proin vitae erat convallis sapien euismod sollicitudin in egestas eros. Praesent felis augue, cursus nec tincidunt a, dapibus hendrerit lacus. Aenean sapien turpis, iaculis et suscipit eu, egestas ut lorem. Pellentesque nec purus sem, ac placerat tortor. Nullam ultricies elementum commodo.

Donec eget dolor risus. Fusce at augue sed felis imperdiet auctor. Nulla auctor faucibus sapien, nec sagittis ante interdum eget. Cras molestie aliquet nisl, ut interdum risus mattis eget. Aliquam in accumsan nisl. Suspendisse feugiat magna in lacus facilisis a rutrum sem aliquam. Nam congue ultricies sagittis. Sed accumsan viverra elit. Aliquam erat volutpat. Phasellus sit amet nisi hendrerit ante fringilla convallis. Morbi at nibh vitae urna pharetra tincidunt eu non diam. Aliquam ac odio mauris. Nunc pretium, sapien non pharetra viverra, dui quam dignissim ligula, in auctor diam leo ultricies felis. Donec augue tellus, luctus id sollicitudin eget, venenatis a ante.

Nunc viverra gravida tincidunt. Aliquam sed laoreet neque. Pellentesque nec urna leo. Aliquam sit amet mauris magna, sed dignissim elit. In eget metus ante. Suspendisse quam risus, dictum vel tincidunt id, tincidunt a eros. Nam id urna ligula. Proin condimentum arcu ac nisl lacinia non luctus sem laoreet. Cras fringilla, erat id malesuada pharetra, arcu tellus aliquam mauris, in fringilla risus eros eget justo. In blandit dui vitae risus placerat ac venenatis sem scelerisque. Donec et ligula dui. Ut sed augue vel nibh rhoncus malesuada. Quisque porttitor orci aliquam elit tempus eu gravida purus tristique. Quisque mauris sem, pulvinar ac mattis vitae, aliquam in enim. Cras congue ornare porta. Duis iaculis tristique mollis. Mauris sed urna odio. Praesent fringilla lobortis metus, non dignissim magna bibendum eu.

Vivamus feugiat nulla vel nisi imperdiet semper. Quisque in neque ut nibh vulputate blandit. Duis vel augue ante. Vivamus laoreet pulvinar lectus ac pellentesque. Sed a imperdiet quam. Mauris dignissim lacinia neque eu fringilla. Nullam non velit sem, quis commodo urna. Quisque quis dolor vitae lectus accumsan blandit. Aenean eget felis felis. Praesent felis augue, vulputate sed viverra ut, rutrum in massa.

Praesent sollicitudin urna vitae est egestas ac tristique erat venenatis. Sed eget eros magna, at varius orci. Ut commodo, augue sit amet condimentum scelerisque, odio lectus gravida nulla, eget pellentesque tortor dolor commodo nisi. In tellus justo, imperdiet nec tincidunt vitae, tristique congue erat. Fusce tempus turpis sed risus euismod ultricies. Aenean in sapien tellus, sed lobortis nulla. Morbi turpis justo, semper nec convallis id, tincidunt at lorem. Etiam ornare tempus nibh id vulputate. Proin magna nunc, pellentesque in porttitor vel, porta id lectus. Vivamus id dolor ut libero volutpat adipiscing sed in enim. Morbi tincidunt eros eu diam viverra viverra. Donec consequat nibh sit amet magna auctor ornare. Nulla commodo pellentesque purus, a pulvinar neque congue at. Quisque porttitor sollicitudin magna, et semper lectus pretium ornare. Etiam non lacus quis nulla malesuada varius. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Phasellus neque eros, tincidunt sodales lacinia ut, congue quis mi. Phasellus a odio nec dolor imperdiet laoreet faucibus sed lacus. Nunc a ipsum ullamcorper ipsum pharetra tincidunt. Praesent malesuada quam vel augue sodales pulvinar.

Pellentesque egestas nulla sit amet magna luctus ac tristique nisl luctus. Nullam euismod lacinia augue in ultricies. In eu lectus vitae sem consectetur faucibus eu id felis. Nullam velit nibh, egestas eu aliquet non, posuere nec diam. Cras imperdiet porttitor libero a consequat. Maecenas aliquam erat nunc. Aenean lorem diam, convallis et posuere at, tempor ut felis. Vivamus id magna nulla. Duis porttitor interdum mauris eget rhoncus. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Vestibulum a mi eu velit pulvinar blandit nec quis ipsum. Pellentesque imperdiet pretium massa vitae tristique. Pellentesque auctor lectus sit amet erat elementum vel vehicula velit adipiscing. In sem arcu, dapibus nec venenatis ac, semper ac nunc. Aliquam id ipsum massa, non commodo nisl.

Etiam nec mi vitae metus convallis rhoncus. Suspendisse eget cursus eros. Fusce egestas ligula sit amet dui dignissim ullamcorper. Etiam nunc erat, bibendum a suscipit a, ultricies gravida lacus. Aliquam sollicitudin magna ut neque laoreet tincidunt. Curabitur vulputate lectus vitae massa viverra tempus. Ut tristique tellus massa, eget volutpat mi. Donec consequat enim elementum enim aliquam vel lacinia libero fermentum. Nam sodales turpis et est fermentum ornare. Donec libero neque, tincidunt et dignissim id, fermentum sit amet turpis. Proin at ultrices lorem. Nunc accumsan, mauris nec sollicitudin lobortis, dolor odio facilisis tellus, ut tempus lacus libero quis leo. Nullam facilisis tortor sit amet augue ullamcorper lacinia. Mauris et tortor vitae felis auctor posuere.

Pellentesque vitae velit eu risus vulputate feugiat ut eu lorem. Suspendisse venenatis pellentesque eros, quis pellentesque velit pretium a. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nam egestas dolor quis libero fringilla eu commodo lectus iaculis. Sed id ligula diam. Sed scelerisque auctor dapibus. Duis lacinia tortor porttitor nunc condimentum ut volutpat neque tempus. Donec volutpat auctor porta. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nulla ornare ultricies dui, at suscipit sapien porta vel. Integer elementum euismod dui, vitae sollicitudin nibh tristique quis. Curabitur accumsan, ipsum eu gravida egestas, lacus lectus molestie tortor, vitae pulvinar odio purus id mi.

Cras pretium vulputate bibendum. Pellentesque pretium, velit eget vulputate gravida, ipsum tortor congue lorem, vel convallis erat ante eu nibh. Aliquam erat volutpat. Praesent neque eros, vulputate consequat dapibus ac, tristique vel sapien. Nullam dapibus mauris at urna dignissim sed venenatis est accumsan. Ut odio dui, tempus nec mattis quis, tincidunt vitae lacus. Nulla rutrum dolor ac tortor gravida pharetra adipiscing lacus aliquam. Pellentesque eleifend ipsum sed turpis fermentum hendrerit vestibulum orci accumsan. Fusce placerat dolor in justo hendrerit quis vestibulum diam tristique. Ut mollis volutpat aliquet. Ut dictum augue eget mauris viverra eu tempus lorem auctor. Nulla tristique, nisl at lobortis venenatis, tellus metus viverra mi, eget aliquam massa urna non nibh.

Nam porttitor turpis a mi vehicula dapibus egestas urna ultricies. Duis vulputate, diam ut fermentum dictum, massa diam mattis nibh, et imperdiet nibh purus vitae sem. Vivamus lorem nibh, scelerisque ut tincidunt et, iaculis vestibulum enim. In hac habitasse platea dictumst. Morbi risus mauris, hendrerit ut cursus quis, mattis in mi. Sed sed turpis vel ipsum dapibus malesuada. Nunc sit amet porta lacus. Curabitur pretium rutrum sapien a dapibus. Sed odio turpis, venenatis nec iaculis in, consequat commodo orci. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Proin non accumsan massa. Donec sit amet leo at turpis dictum rutrum. Proin sed nulla eu est molestie ultricies. Etiam in tincidunt velit. Duis eros felis, varius et fermentum vitae, imperdiet a enim.

Friday, July 24, 2009

Attn: spacetime complaints department

A brief history of time I own. I meddle with it, I occupy the space. I use the space and time, right now. It's mine, I think, I hope. I cling to it, for time will soon run out. I'll soon fall out forever into reality's dark complement. An icy fiery crush of expanding antimatter will be my only home, they say. I fear the yet unknown.

"This time is mine!", I scream in silence. The ticking clocks lose seconds, seconds tick. "Short lived those seconds' lives must be", I wonder. They fade out silently, there is no thunder. Accomplished lives those seconds couldn't have, for they exist in fractions of my time, and now they're gone.

I ponder. On yet another larger scale, my second's now. Can't see the now, it's hidden and it's alive.

But wait a minute. Am I a tick in my machine? Am I a tiny piece of matter thrown in space and whirling fast at supersonic speeds through endless fields of void? Where is my thunder, where is my essence?

I'm partially made of matter, but do I matter? My soul screams out from my neuronal machine, it wants to exit out and float in freedom! It wants to live and mingle in the social space of memes!

My soul is stuck, it hints... My flimsy capsule is all I've got, and I don't matter much, I'm out of luck.

Am I my soul, my space, my time?

Oh beauty of unknown worldly facts, which seldom are brought forth and soothe the soul! I know you're there, I wait for you in silence. My eyes are my inquisitive tool, which bring the world into my head. They can't stop scanning, they want the world.

I hardly fall asleep, I'm restless and I suffer. My second's almost up. No singularity will save us now. The mission's almost clear: suffer away, and ponder endlessly, for that's the underlying purpose.

I look up,
The truth is burning,
I wonder.

A brief "thank you" list, to some of the many who matter, but who may not know they've made a difference.

Hans Moravec
Trey Parker and Matt Stone
Leandro Asnaghi-Nicastro
Matt Groening
Louis Lee Smith
Stanley Kubrick
Harold Ramis, Danny Rubin and Bill Murray
Vlad "Dracul" Tepes
Douglas Fisher
Christopher Columbus

Monday, June 15, 2009

How to display all HTTP headers



// Dump all HTTP request headers.
// Needs: java.util.Enumeration, javax.servlet.http.HttpServletRequest
String eol = System.getProperty("line.separator");
Enumeration names = ((HttpServletRequest) request).getHeaderNames();
StringBuffer result = new StringBuffer("");
String value = null;
while (names.hasMoreElements())
{
    String name = (String) names.nextElement();
    // a header can appear more than once, so walk all of its values
    Enumeration values = ((HttpServletRequest) request).getHeaders(name);
    if (values != null)
    {
        while (values.hasMoreElements())
        {
            value = (String) values.nextElement();
            result.append(name + ": " + value + "\n<br>");
        }
    }
}
System.out.println("BEGIN: all headers" + eol);
System.out.println(result.toString());
System.out.println("END: all headers" + eol);

Tuesday, May 26, 2009

Google Android - map not working in emulator? - how to fix it (in Windows) - by Alan Lupsha

Basically, you need a Google Maps API key: http://code.google.com/android/add-ons/google-apis/mapkey.html

Example on how to do this:

1. find your debug.keystore file, for example:
C:\Documents and Settings\developer\.android\debug.keystore

2. list the md5:
C:\jdk1.6.0_13\bin>keytool -list -alias androiddebugkey -keystore "C:\Documents and Settings\developer\.android\debug.keystore" -storepass android -keypass android

androiddebugkey, May 18, 2009, PrivateKeyEntry,Certificate fingerprint (MD5): 83:4D:2C:6F:58:B3:D1:EA:2C:AF:0D:FC:70:19:57:D6

Save the fingerprint somewhere, you'll need it later.

3. Sign up for the maps API: http://code.google.com/android/maps-api-signup.html , use your generated MD5 fingerprint

Submission result:

Thank you for signing up for an Android Maps API key!

Your key is:
0h1d_-9Wwhatever-your-key-is4yNt-SXgQ

This key is good for all apps signed with your certificate whose fingerprint is:
83:4D:2C:6F:58:B3:D1:EA:2C:AF:0D:FC:70:19:57:D6

Here is an example xml layout to get you started on your way to mapping glory:
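
A minimal sketch of such a MapView layout (the apiKey value is just the placeholder from above):

<com.google.android.maps.MapView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:apiKey="0h1d_-9Wwhatever-your-key-is4yNt-SXgQ"
/>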




Go to your project's layout, i.e. in main.xml, look for your MapView definition,
take out android:apiKey="apisamples" and replace it with whatever your key is,
for example: android:apiKey="0h1d_-9Wwhatever-your-key-is4yNt-SXgQ"

or, if you didn't define your MapView in XML but instead created it in code, use:
mMapView = new MapView(this, "0h1d_-9Wwhatever-your-key-is4yNt-SXgQ");

Also, make sure that in your manifest, you have this defined:
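
A minimal sketch (the maps library declaration goes inside your <application> element, and the INTERNET permission is needed to fetch map tiles; the label value is illustrative):

<uses-permission android:name="android.permission.INTERNET" />
<application android:label="@string/app_name">
<uses-library android:name="com.google.android.maps" />
...
</application>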

Monday, May 25, 2009

Linux: how to send emails with attachments from the command line

echo "Sending an attachment " | mutt -a my-superb-tar-file.tar -s "attachment" alan75@my-fun-domain.com

Friday, May 22, 2009

How to stop and restart subversion using a simple script

1. create the svn restart script:
sudo nano /etc/init.d/restartsvn

sudo cat /etc/init.d/restartsvn
#!/bin/bash
echo This is a startup script to stop and restart subversion - Alan

echo Now stopping previous svn instance...
sudo kill -9 `ps aux | grep -i svn | grep -i listen-host | grep -v grep | awk '{print $2}'`

echo Sleeping 3 seconds...
#sleep 3 seconds
sleep 3

echo Now starting svn for you: /usr/bin/svnserve -d --listen-host 10.0.0.10 -r /srv/svn
/usr/bin/svnserve -d --listen-host 10.0.0.10 -r /srv/svn

echo The process id of the new svn instance is:
echo `ps aux | grep -i svn | grep -i listen-host | grep -v grep | awk '{print $2}'`
echo Done


2. make the script executable
sudo chmod u+x /etc/init.d/restartsvn

3. execute the script
sudo /etc/init.d/restartsvn
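
As a side note, the long kill line in the script can be shortened with pkill (from the procps package), assuming the same svnserve command line:

sudo pkill -9 -f 'svnserve.*listen-host'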

Thursday, May 21, 2009

Java - how to keep your NamingEnumeration instance and iterate through it without .next() killing it


Problem: You have a NamingEnumeration that you want to keep untouched while passing it to different methods that each need to iterate through it. The trouble: calling namingEnumeration.next() walks the object, and once you reach the end there is no .moveToFront() method that would let you rewind and re-use the NamingEnumeration.

Solution: use Collections.list( namingEnumeration ), which returns an ArrayList. Then instantiate a new copy of that ArrayList and walk the copy with an Iterator, element by element.

The example below is of a NamingEnumeration A which is made up of many NamingEnumerations B, which are made up of some NamingEnumerations C (basically, it's an LDAP naming enumeration result after performing a dirContext.search( ... )).


See, this won't work:

public static void printAllAttributes( NamingEnumeration originalNamingEnumeration )
{
// used only to create the copy
ArrayList arrayListBackup = Collections.list( originalNamingEnumeration );

// create the copy
ArrayList arrayList = new ArrayList( arrayListBackup );
...
}

because the caller's originalNamingEnumeration gets "exhausted" as soon as Collections.list() is called.
So, your only option is to take the original NamingEnumeration and convert it to an ArrayList once:

NamingEnumeration namingEnumeration = dirContext.search( searchBase, searchFilter, searchControls );
ArrayList tempArrayList = Collections.list( namingEnumeration );
and forget about it:

namingEnumeration.close();
namingEnumeration = null;
Then make as many copies of that tempArrayList as you need, and use those copies:

ArrayList arrayListCopy1 = new ArrayList( tempArrayList );
... do whatever
ArrayList arrayListCopy2 = new ArrayList( tempArrayList );
... do whatever
ArrayList arrayListCopy3 = new ArrayList( tempArrayList );
In my case, I want to keep the search results of an LDAP query, which are stored
in a NamingEnumeration, and then call different methods on the search results,
each of which iterates over the NamingEnumeration and ruins it for the next caller.
Keeping the search results in a globally defined ArrayList lets me re-use them later. The final search routine looks like this:


public int search( String searchBase, String searchFilter )
{
SearchControls searchControls = new SearchControls();
searchControls.setSearchScope( SearchControls.SUBTREE_SCOPE );
try
{
NamingEnumeration namingEnumeration = dirContext.search( searchBase, searchFilter, searchControls );
ArrayList tempArrayList = Collections.list( namingEnumeration );
namingEnumeration.close();
namingEnumeration = null;

this.lastSearchResultArrayList = new ArrayList( tempArrayList );
tempArrayList = null;
}
catch ( NamingException e )
{
logerror("Error while searching: " + e.toString() );
return -1;
}
return 0;
}
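
To actually walk the saved results later, make a fresh copy each time; a minimal sketch, assuming lastSearchResultArrayList holds javax.naming.directory.SearchResult entries:

// needs: import java.util.ArrayList; import java.util.Iterator; import javax.naming.directory.SearchResult;
ArrayList arrayListCopy = new ArrayList( this.lastSearchResultArrayList );
Iterator iterator = arrayListCopy.iterator();
while ( iterator.hasNext() )
{
SearchResult searchResult = (SearchResult) iterator.next();
System.out.println( searchResult.getName() + ": " + searchResult.getAttributes() );
}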

Thursday, May 14, 2009

Java - Rot94 - similar to rot13 - encode text





public class NinetyFiveEncode
{
public NinetyFiveEncode()
{
String testStr = "The rain in Spain falls mainly on the plain. Call me at (555)112-xxxx. {:-) Bye.";
String encoded = NinetyFiveEncode.rot94(testStr);
System.out.println( encoded );
String decoded = NinetyFiveEncode.rot94( encoded );
System.out.println( decoded );
}

/*
* Rot94 by Alan Lupsha (c)2009
*
* Takes a string of characters, and for every character
* between ASCII value 33 and 126 (! to ~), it adds 47 to
* the character value (wrapping around if the resulting
* character would be larger than 126)
*
* Any non-printable character and the space character
* (i.e. any char smaller than 33 or larger than 126) gets
* copied over unchanged.
*
* Sample run:
*
* String testStr = "The rain in Spain falls mainly on the plain. Call me at (850)879-xxxx. {:-) Bye.";
* String encoded = NinetyFiveEncode.rot94(testStr);
* System.out.println( encoded );
* String decoded = NinetyFiveEncode.rot94( encoded );
* System.out.println( decoded );
*
* %96 C2:? :? $A2:? 72==D >2:?=J @? E96 A=2:?] r2== >6 2E Wgd_Xgfh\IIII] Li\X qJ6]
* The rain in Spain falls mainly on the plain. Call me at (850)879-xxxx. {:-) Bye.
*/
public static String rot94(String plainText)
{
if (plainText == null) return "";

// encode plainText
StringBuffer encodedMessage = new StringBuffer("");
int abyte;
for (int i = 0; i < plainText.length(); i++)
{
abyte = plainText.charAt(i);
if ((abyte >= 33) && (abyte <= 126))
abyte = (abyte - '!' + 47) % 94 + '!';

encodedMessage.append( (char)abyte );
}
return encodedMessage.toString();
}


public static void main( String[] args )
{
NinetyFiveEncode ninetyFiveEncode = new NinetyFiveEncode();
}
}
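
To try it out:

javac NinetyFiveEncode.java
java NinetyFiveEncode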


Wednesday, May 13, 2009

How to check the status of your Android phone purchased from Brightstarcorp through Google

1. Go to: http://android.brightstarcorp.com/trackorder.php

If you are logged into your Google account, the page will display the status of your order right away, ex: "Pending shipment". Otherwise, log in first and then check your status.


2. Call Brightstarcorp at 877-727-9789, ask to track your order, and give them the order number you received by email right after you purchased your phone. This will likely mean they send out the phone today instead of waiting another 7 days to mail it (even if you already paid for FedEx overnight delivery).

Thursday, April 30, 2009

How to get dates as a String

// needs: import java.text.SimpleDateFormat; and import java.util.Calendar;
public String getToday() {
SimpleDateFormat sdfbeginDate = new SimpleDateFormat("MM/dd/yyyy");
return sdfbeginDate.format(Calendar.getInstance().getTime());
}

public String getNowAsTimestampString() {
Calendar nowCalendar = Calendar.getInstance();
java.util.Date myDate = nowCalendar.getTime(); // ex: Thu Aug 09 13:20:36 EDT 2014
SimpleDateFormat sdf = new SimpleDateFormat( "MM/dd/yyyy/HH/mm/ss");
return sdf.format( myDate );
}

public String getNowAsDateString() {
Calendar nowCalendar = Calendar.getInstance();
java.util.Date myDate = nowCalendar.getTime(); // ex: Thu Aug 09 13:20:36 EDT 2014
SimpleDateFormat sdf = new SimpleDateFormat( "MM/dd/yyyy");
return sdf.format( myDate );
}

Tuesday, April 28, 2009

How to sync all your servers to use the same date and time

1. edit the ntpdate file:

root@Knoppix:/etc# cat /etc/default/ntpdate
# The settings in this file are used by the program ntpdate-debian, but not
# by the upstream program ntpdate.

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.
NTPDATE_USE_NTP_CONF=no

# List of NTP servers to use (Separate multiple servers with spaces.)
# Not used if NTPDATE_USE_NTP_CONF is yes.
#NTPSERVERS="0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org"

NTPSERVERS="swisstime.ee.ethz.ch"

# Additional options to pass to ntpdate
NTPOPTIONS=""



2. invoke a time synchronization

root@Knoppix:/etc# ntpdate-debian
28 Apr 20:33:50 ntpdate[15046]: adjust time server 129.132.2.21 offset 0.370255 sec
root@Knoppix:/etc#


3. set up a cron job to invoke the time synchronization:

root@Knoppix:/# crontab -e

# every night at 2 am, sync/update the time
0 2 * * * ntpdate-debian
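
(cron runs with a minimal PATH, so if the job never fires, spell out the full path, which on Debian-based systems should be /usr/sbin/ntpdate-debian:)

0 2 * * * /usr/sbin/ntpdate-debian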


4. Optionally, update your time zone, by creating a symbolic link for /etc/localtime to the timezone of your choice. Example:

root@Knoppix:/ # ln -sf /usr/share/zoneinfo/EST /etc/localtime
(this changes the timezone to EST)

root@Knoppix:/ # ln -sf /usr/share/zoneinfo/GMT /etc/localtime
(this changes the timezone to GMT)

Monday, April 6, 2009

How to call a procedure using an OracleCallableStatement

OracleCallableStatement oracleCallableStatement = null;
ResultSet rs = null;

try
{
oracleCallableStatement = (OracleCallableStatement)connection.prepareCall( YOUR_PROCEDURE );
oracleCallableStatement.setString( 1, VARIABLE_1 );
oracleCallableStatement.setString( 2, VARIABLE_2 );
oracleCallableStatement.setString( 3, VARIABLE_3 );
oracleCallableStatement.registerOutParameter(4, oracle.jdbc.OracleTypes.CURSOR);

oracleCallableStatement.execute();
rs = (ResultSet)oracleCallableStatement.getObject(4);
}
catch ( Exception e )
{
logerror("Error while setting up Oracle procedure. Error is: " + e.toString() );
fatalexit();
}
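
For reference, a hypothetical value for the YOUR_PROCEDURE placeholder, using standard JDBC call syntax (the schema and procedure names are made up):

String YOUR_PROCEDURE = "{ call my_schema.my_procedure( ?, ?, ?, ? ) }";

and when you're done, release the resources in a finally block:

finally
{
try { if ( rs != null ) rs.close(); } catch ( Exception ignore ) { }
try { if ( oracleCallableStatement != null ) oracleCallableStatement.close(); } catch ( Exception ignore ) { }
}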

Wednesday, April 1, 2009

Simple shell scripting - how to test 4 Seagate drives

Title: Testing 4 seagate drives concurrently, 10 times each, and dumping all the results in different files, in different directories.

Downloaded the seagate test "st" utility. The command to run an extensive test on a drive is:

./st -G device
ex:
./st -G /dev/sg0
./st -G /dev/sg1
./st -G /dev/sg2
./st -G /dev/sg3


1. create 4 directories to dump each set of tests in:
mkdir TESTsg0
mkdir TESTsg1
mkdir TESTsg2
mkdir TESTsg3

2. create a run.sh script which takes as arguments the run number (a number such as 1, 2, 3 ... ) and the device to scan, such as sg0, sg1, sg2, sg3. This script outputs the result of the run into the TESTsg# directories created above. For example, running ./run.sh 8 sg3 will dump the output of the test into directory and file TESTsg3/test8sg3.txt

#!/bin/sh
RUN=$1
SG=$2
if [ "$RUN" == "" ] ; then
echo "You must run with an argument of the run number: ./run.sh 5 sg0"
RETURNCODE=1
exit $RETURNCODE
fi
echo "Now executing for run $RUN"
./st -G /dev/"$SG" > ./TEST"$SG"/test"$RUN""$SG".txt


3. create a script to test each drive individually, doing 10 runs.

sg0tests.sh

#!/bin/sh
./run.sh 1 sg0
./run.sh 2 sg0
./run.sh 3 sg0
./run.sh 4 sg0
./run.sh 5 sg0
./run.sh 6 sg0
./run.sh 7 sg0
./run.sh 8 sg0
./run.sh 9 sg0
./run.sh 10 sg0

for sg1tests.sh, sg2tests.sh, and sg3tests.sh, replace the sg0 argument with sg1, sg2, and sg3, respectively.
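
Equivalently, the four wrapper scripts can be collapsed into one small loop; a sketch (the alltests.sh name is made up), invoked as ./alltests.sh sg0:

#!/bin/sh
DEV=$1
for RUN in 1 2 3 4 5 6 7 8 9 10
do
./run.sh $RUN $DEV
done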


4. To watch the processes while they're running, create a script called myps.sh:

#!/bin/sh
while true
do
clear
ps aux | grep run.sh | grep -v grep
sleep 3
done
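
The same effect without a script, using watch (already used elsewhere in these notes):

watch -n 3 'ps aux | grep run.sh | grep -v grep'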

Thursday, March 26, 2009

The Purpose

Our purpose here is not defined in nature's harmony and carbon traces.
The lab assistant who made us all, has gone to work on other cases.

We ask ourselves why we are here, yet none of us today can say,
if our purpose serves another purpose, like others just like us in the Milky Way...

The neighboring swirls may have the answer, and we must find the ways to go,
and talk to all and figure out,
if we all really matter.

When we have understood all that exists, then we shall all decide.

If we exist with purpose, fine. Enlightenment will then soothe the soul.
If not, then our purpose will have been,
To search the meaning of it all.

Wednesday, March 25, 2009

Ubuntu - software Raid5: recovering from a failing drive

- Check the status of the array:
cat /proc/mdstat

- Unmount the raid:
root@gamma:~# umount /mnt/r5/
umount: /mnt/r5: device is busy
umount: /mnt/r5: device is busy

- Find out the process that's keeping the drive busy:
root@gamma:~# fuser -m /dev/md0
/dev/md0: 5670c

- Look up the process:
root@gamma:~# ps auxw | grep 5670
root 5670 0.4 0.0 8792 3200 ? S 19:25 0:34 /usr/sbin/smbd -D

- Ah, it's Samba, stop it:
root@gamma:~# /etc/init.d/samba stop
* Stopping Samba daemons...

- Try again to unmount:
root@gamma:~# umount /mnt/r5/
root@gamma:~#


- Take a detailed look at the raid array:
mdadm --query --detail /dev/md0


...

Number Major Minor RaidDevice State
0 0 0 - removed
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 0 0 - removed

4 8 0 - faulty /dev/sda
5 8 48 - spare /dev/sdd


Found a faulty drive: /dev/sda


- tell the array which drive is faulty (the session below targets /dev/sdd; substitute the device your array reports as faulty, here /dev/sda):
root@gamma:~# mdadm -f /dev/md0 /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0

- hot remove the faulty drive
root@gamma:~# mdadm --remove /dev/md0 /dev/sdd
mdadm: hot removed /dev/sdd

- walk over to the server and physically remove the drive.
If you don't know which one is the right drive,
remove one at a time, and then run
mdadm --query --detail /dev/md0
and see which drive is no longer there

- insert a new (and good) hard drive. Add it:
root@gamma:~# mdadm --add /dev/md0 /dev/sdd
mdadm: hot added /dev/sdd

- watch the recovery
watch cat /proc/mdstat

Every 2.0s: cat /proc/mdstat Tue Mar 24 22:45:58 2009

Personalities : [raid5]
md0 : active raid5 sdd[4] sda[0] sdc[2] sdb[1]
1465159488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 0.0% (20224/488386496) finish=19245.9min speed=421K/sec

unused devices: <none>

It seems that this will take 19,000 minutes, which is 13 days. Ugh.
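
If the rebuild crawls like this, the kernel's minimum rebuild rate (in KB/sec per device) can usually be raised; note that the setting does not persist across reboots:

sysctl -w dev.raid.speed_limit_min=50000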

Tuesday, March 24, 2009

Ubuntu - how to set up Samba

1. Install Samba:

root@gamma:~# aptitude install samba
Reading package lists... Done
Building dependency tree... Done
Reading extended state information
Initializing package states... Done
Building tag database... Done
The following packages have been kept back:
akregator apt apt-utils base-files bind9-host bsdutils bzip2 coreutils cpio cupsys cupsys-bsd cupsys-client dbus debconf debconf-i18n dnsutils
dpkg dselect e2fslibs e2fsprogs enscript file firefox gnupg gzip hpijs hplip hplip-data hplip-ppds imagemagick info initramfs-tools iptables
kaddressbook karm kdepim-kio-plugins kdepim-kresources kdepim-wizards klogd kmail kmailcvt knotes kontact korganizer libavahi-client3
libavahi-common-data libavahi-common3 libavahi-qt3-1 libbind9-0 libblkid1 libbz2-1.0 libcomerr2 libcupsimage2 libcupsys2 libcurl3 libcurl3-gnutls
libdbus-1-2 libdbus-glib-1-2 libdbus-qt-1-1c2 libdns21 libexif12 libfreetype6 libgadu3 libgnutls12 libisc11 libisccc0 libisccfg1 libjasper-1.701-1
libkcal2b libkdepim1a libkleopatra1 libkmime2 libkpimexchange1 libkpimidentities1 libkrb53 libksieve0 libktnef1 liblcms1 liblwres9 libmagic1
libmagick9 libmimelib1c2a libmysqlclient15off libnspr4 libnss3 libperl5.8 libpng12-0 libpq4 libruby1.8 libsasl2 libsasl2-modules libsnmp-base
libsnmp9 libss2 libssl0.9.8 libuuid1 libvorbis0a libvorbisenc2 libvorbisfile3 libxine-main1 libxml2 linux-image-2.6.15-26-server
linux-image-server linux-server locales login lvm2 mount mysql-client-5.0 mysql-common mysql-server-5.0 ntpdate openoffice.org openoffice.org-base
openoffice.org-calc openoffice.org-common openoffice.org-core openoffice.org-draw openoffice.org-impress openoffice.org-java-common
openoffice.org-kde openoffice.org-l10n-en-us openoffice.org-math openoffice.org-writer openssh-client openssh-server openssl passwd perl perl-base
perl-modules perl-suid popularity-contest python-crypto python-uno python2.4-crypto python2.4-dbus python2.4-libxml2 rsync ssh sysklogd tar
tcpdump tk8.4 ttf-opensymbol udev util-linux vim vim-common vim-runtime w3m xterm
The following NEW packages will be installed:
samba
0 packages upgraded, 1 newly installed, 0 to remove and 152 not upgraded.
Need to get 2852kB of archives. After unpacking 7262kB will be used.
Writing extended state information... Done
Get:1 http://archive.ubuntu.com dapper-updates/main samba 3.0.22-1ubuntu3.8 [2852kB]
Fetched 2852kB in 2m37s (18.1kB/s)
Preconfiguring packages ...
Selecting previously deselected package samba.
(Reading database ... 67170 files and directories currently installed.)
Unpacking samba (from .../samba_3.0.22-1ubuntu3.8_i386.deb) ...
Setting up samba (3.0.22-1ubuntu3.8) ...
Generating /etc/default/samba...
TDBSAM version too old (0), trying to convert it.
TDBSAM converted successfully.
account_policy_get: tdb_fetch_uint32 failed for field 1 (min password length), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 2 (password history), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 3 (user must logon to change password), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 4 (maximum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 5 (minimum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 6 (lockout duration), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 7 (reset count minutes), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 8 (bad lockout attempt), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 9 (disconnect time), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 10 (refuse machine password change), returning 0
* Starting Samba daemons... [ ok ]
root@gamma:~#


2. Create group samba:
root@gamma:~# groupadd samba

3. Add user my_user_id to group samba:
root@gamma:~# usermod -a -G samba my_user_id

4. Edit the smb.conf file, uncomment what you want to be used:
root@gamma:~# nano /etc/samba/smb.conf

5. Add users who can access samba shares:
root@gamma:/etc/samba# smbpasswd -a my_user_id
New SMB password:
Retype new SMB password:
root@gamma:/etc/samba#

6. If you need to make more changes to your smb.conf, restart samba after you're done:
root@gamma:/etc/samba# /etc/init.d/samba restart
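
7. To verify that the shares are visible, a quick test (assuming the smbclient package is installed):
smbclient -L localhost -U my_user_id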

How to set up Raid5 in Ubuntu

Setting up raid5 with 4 drives, 500GB each, using a SIL SATA drive controller and software raid.

1. update and install package mdadm
apt-get update
apt-get install mdadm

2. Use the 4 devices to create md0
mdadm --create /dev/md0 --level=raid5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

2b. If the array is already started, stop it.
mdadm --stop /dev/md0

3. Assemble the array
root@gamma:~# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: /dev/md0 has been started with 3 drives (out of 4) and 1 spare.

Live watch of the array building:
watch cat /proc/mdstat

Wait many hours...

I believe the wait here was about 6 hours.


4. Create the file system:

root@gamma:~# mkfs.ext3 /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
183156736 inodes, 366287952 blocks
18314397 blocks (5.00%) reserved for the super user
First data block=0
11179 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Writing inode tables: 158/11179
...

This command hangs for me, waiting at 158 forever. I even tried doing a kill -9 on the process, with no luck. Even a shutdown -h now didn't work, so I had to do a hard boot. Then, I tried the following command:


4b. Try, try again, use mke2fs instead of mkfs.ext3

root@gamma:~# mke2fs /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
183156736 inodes, 366289872 blocks
18314493 blocks (5.00%) reserved for the super user
First data block=0
11179 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Writing inode tables: 427/11179
...
and that updated slowly, without hanging.
...

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@gamma:~#

5. use fdisk to create one partition for the whole md0
root@gamma:~# fdisk /dev/md0

Command (m for help): p

Disk /dev/md0: 1500.3 GB, 1500323315712 bytes
2 heads, 4 sectors/track, 366289872 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-366289872, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-366289872, default 366289872):
Using default value 366289872

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
root@gamma:~#


6. Restart the system:
shutdown -r now

7. Create a directory in /mnt
mkdir /mnt/r5

8. Mount the md0 device:
mount /dev/md0 /mnt/r5/

9. Check how much space is available on the drive:

root@gamma:/mnt/r5# df -h /mnt/r5
Filesystem Size Used Avail Use% Mounted on
/dev/md0 1.4T 20K 1.3T 1% /mnt/r5
root@gamma:/mnt/r5#

... and we have 1.4 terabytes. All ok.
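
10. Optionally, to mount the array automatically at boot, an /etc/fstab entry along these lines should work:
/dev/md0 /mnt/r5 ext2 defaults 0 2
(mke2fs with no options creates ext2; if you built the filesystem with mkfs.ext3, put ext3 in the third field)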

Wednesday, March 11, 2009

Standard Ubuntu Setup

----
-> After installing Ubuntu:

-> log in, change root password
sudo passwd root
----
-> as root:
apt-get install ssh
apt-get install unzip zip
apt-get install build-essential



apt-get install mysql-server-5.0
* Root password is blank. To change it use:
* /etc/init.d/mysql reset-password
----
-> setup static ip
sudo vi /etc/network/interfaces
iface eth0 inet static
address 192.168.0.102
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1
----
Set up the name servers
sudo nano /etc/resolv.conf , and add:

search yahoo.com
nameserver 216.162.128.6
nameserver 216.162.128.5
(your DNS servers are specific to your internet provider)

then,
sudo /etc/init.d/networking restart

To see if it worked,
ping google.com
and it should resolve google.com to its actual IP
----
-> as username, vi .bashrc to set up paths
export PATH=/opt/jdk1.6.0_04/bin:$PATH
export JAVA_HOME="/opt/jdk1.6.0_04"
export CLASSPATH=.:/opt/jdk1.6.0_04
----
-> grab jdk from another PC
scp username@192.168.0.202:/root/software/jdk-6u4-linux-i586.bin .
----
-> benchmark Sieve
wget http://rsb.info.nih.gov/nih-image/java/benchmarks/Sieve.java
----
How to set hard drives to go into standby mode after 10 minutes:

sudo hdparm -S 120 /dev/hda
----
How to save your hard drive from being hammered by Ubuntu power saver:

- based on http://www.breakitdownblog.com/ubuntu-power-saver-settings-could-damage-hard-drive/ :

1. Make a file named 99-hdd-spin-fix.sh (the important thing is that the name starts with 99).
2. Make sure the file contains the following 2 lines (adjust the device if you have a PATA HDD, e.g. /dev/hda):

#!/bin/sh
hdparm -B 255 /dev/sda

3. Copy this file to 3 locations:

/etc/acpi/suspend.d/
/etc/acpi/resume.d/
/etc/acpi/start.d/
----
The "perfect" Ubuntu setup:
http://www.howtoforge.com/perfect_setup_ubuntu_6.06
----
Install mail server:

apt-get install postfix
- run through the setup
----
Install KDE or GNOME: From: http://www.psychocats.net/ubuntu/kde

> sudo aptitude update && sudo aptitude install kubuntu-desktop

During the installation process, you should be asked whether you want to use KDM or GDM as your default display manager. The default can always be changed later by modifying the /etc/X11/default-display-manager file. For KDM, the file should read /usr/bin/kdm; for GDM, the file should read /usr/sbin/gdm. When KDE is done installing, log out. If you're using 6.06 or later, once you get to the login screen, click on Options and then Select Session. In older versions of Ubuntu (5.10 or earlier), you would have a separate Session button instead of drilling down to Session from Options. In the Sessions dialogue, select KDE and then Change Session.

Finally, before you log back in again, decide whether you want to change to KDE just for this session or if you want to make KDE your default desktop environment. Then, log back in, and you should be using KDE. To switch back to Gnome, just log out and select Gnome from the session menu.

If you later decide you don't want KDE any more, go back to the terminal and paste in
> sudo aptitude remove kubuntu-desktop
----
Install VPN:

sudo apt-get install vpnc
sudo apt-get install network-manager-vpnc
----
http://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
cat /proc/cpuinfo
top -n 1 -b | head
Useful admin commands: http://www.reallylinux.com/docs/admin.shtml
----
How to install mysql:

1. download mysql, ex: mysql-5.0.51a-linux-i686.tar.gz

2. add groups
shell> groupadd mysql
shell> useradd -g mysql mysql

3. gzip and untar into /usr/local/mysql
shell> cd /usr/local
shell> gunzip < /PATH/TO/MYSQL-VERSION-OS.tar.gz | tar xvf -
shell> ln -s FULL-PATH-TO-MYSQL-VERSION-OS mysql
shell> cd mysql
shell> chown -R mysql .
shell> chgrp -R mysql .
shell> scripts/mysql_install_db --user=mysql
shell> chown -R root .
shell> chown -R mysql data

4. How to start the daemon:
shell> /usr/local/mysql/bin/mysqld_safe --user=mysql &

----
How to use Xterm with Xming:

username@gamma:~$ xterm
Xt error: Can't open display:
xterm: DISPLAY is not set
username@gamma:~$ export DISPLAY=gamma:0.0

In putty, under Connection, Ssh, X11, check the "Enable X11 forwarding" box, and in the X-display location field type localhost:0. Save this Putty entry, load it, and open a connection to the server.

username@server:~$ xterm &
[1] 4513
----
Installing kde:
apt-get install kubuntu-desktop
(it's over 1200MB)
----
How to disable the kde logon screen from showing at start up:

echo "false" | sudo tee /etc/X11/default-display-manager

(simply placing the word false in the file, and then restarting, should disable the kdm logon screen)

- Another option would be to change the execute rights on the /etc/init.d/kdm startup script:
chmod a-x /etc/init.d/kdm
----
How to restart apache:
/etc/init.d/apache2 restart

Virtual hosts:
nano /etc/apache2/sites-available/default

Example of setting up a virtual host (do this for both digitalagora.com and www.digitalagora.com):

<VirtualHost *:80>
ServerName www.digitalagora.com
DocumentRoot /srv/websites/digitalagora.com
LogLevel notice
CustomLog /var/log/apache2/digitalagora.com-custom.log combined
ErrorLog /var/log/apache2/digitalagora.com-error.log
</VirtualHost>

----