Tuesday, December 27, 2011

getNowAsTimestampString

public String getNowAsTimestampString() {
    Calendar nowCalendar = Calendar.getInstance();
    java.util.Date myDate = nowCalendar.getTime(); // ex: Thu Aug 09 13:20:36 EDT 2007
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy.MM.dd_HH.mm.ss");
    return sdf.format(myDate);
}

Monday, December 26, 2011

Setting up a gateway - fresh OpenSUSE set up

ping google.com
connect: Network is unreachable

Grrr...


cat /etc/resolv.conf
echo "alias net-pf-10 off" >> /etc/modprobe.conf
echo "alias ipv6 off" >> /etc/modprobe.conf
ip route show all
route add default gw <gateway-ip>
(ex: route add default gw 10.1.10.1)
ip route show all
ping google.com

Fixed!

Monday, December 19, 2011

How to back up web-accessible files using wget, and remove files older than 30 days

1. Script to back up files over wget:


#!/bin/bash
#
# This script grabs all the files from my-server.alanlupsha.com/logs/ and
# backs them up on the local server in /home/
#
# @author Alan Lupsha 12/19/2011
#

# keep track of the start time
STARTTIME="Started backup at: `date`"


#########################################################################
# You should change these entries
#
# make sure to create the directory first
# ex: "mkdir /home/lupsha/backups"
BACKUPDIR=/home/lupsha/backups

# save text to a temporary file
EMAILFILE=/tmp/email.txt

# update this
EMAILADDRESS=my-email-address@my-domain.com
#########################################################################

# clean up email file
rm "$EMAILFILE"

# save file in this directory
cd "$BACKUPDIR"

LOC="`pwd`"
if [ "$LOC" != "$BACKUPDIR" ]; then
echo "ERROR, I should be in $BACKUPDIR, but I'm not! Exiting...";
exit 1
fi

# download files
wget -e robots=off --recursive --no-directories \
--no-host-directories --level=1 --no-clobber \
http://my-server.alanlupsha.com/logs/


# save timestamp
echo "$STARTTIME" >> $EMAILFILE
echo "\r\n" >> $EMAILFILE
echo "Ending pass logs backup at `date`" >> $EMAILFILE

# send an email with the result
cat $EMAILFILE | mail -s "Ran pass logs backup" $EMAILADDRESS



2. Save the script and run it from a cron job, on weekdays only, at 7:30 AM


sudo crontab -e

# run script every week day at 7:30AM
30 07 * * 1-5 /home/lupsha/backup-logs-script.sh



3. On the server where the files are stored, remove the files that are older than 30 days:


cd /var/www/html/myfiles
find . -type f -mtime +30 -exec rm -- {} \;
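To preview what would be deleted before actually removing anything, run the same find without the -exec (a safe dry run):

cd /var/www/html/myfiles
find . -type f -mtime +30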

Saturday, December 3, 2011

How to convert video to a lossless format using ffmpeg!

How to convert a .3gp video to .flv:
ffmpeg -i a.3gp -ar 22050 -ab 32k -f flv a.flv


How to convert to almost lossless format:
ffmpeg -i a.3gp -an -f yuv4mpegpipe b.avi

(Careful, an 11MB 3gp file becomes a 1.6GB avi file!)

Wednesday, November 30, 2011

Renaming directories and files

Suppose you have the following directory structure:

/home/user/files/A/
/home/user/files/A/1/
/home/user/files/A/2/
/home/user/files/A/3/
/home/user/files/B/
/home/user/files/B/37/
/home/user/files/B/38/
/home/user/files/B/39/

and you wish to concatenate the subdirectories to the parent directories (ex: "A - 1", "A - 2", "B - 38"), to result in the following structure:

/home/user/files/A - 1/
/home/user/files/A - 2/
/home/user/files/A - 3/
/home/user/files/B - 37/
/home/user/files/B - 38/
/home/user/files/B - 39/


First,
cd /home/user/files

Next, allow spaces in file names:
IFS=$(echo -en "\n\b")

Then:
for dir in `ls`; do cd "$dir"; for files in `ls`; do mv "$files" "$dir - $files"; mv "$dir - $files" .. ; done; cd .. ; echo "rm -Rf $dir" >> /tmp/cleanup.sh ; done

Take a look at the generated cleanup file:
cat /tmp/cleanup.sh

If it looks ok, make it executable:
chmod u+x /tmp/cleanup.sh

and execute it:
/tmp/cleanup.sh
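To preview the renames before running the real loop, a dry-run variant that only prints what it would do (same assumptions as above: you are in /home/user/files and IFS is set):

for dir in `ls`; do cd "$dir"; for files in `ls`; do echo "mv $files -> $dir - $files"; done; cd .. ; done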

Saturday, June 18, 2011

How to convert .m2ts videos to .avi, using a script

The problem:
I have a bunch of vacation videos in /mnt/drive2/vacation_videos/, but they are all in .m2ts format, and I want them in .avi format

The solution:
- Create a generic script "convert.sh" which converts a .m2ts file to .avi
- Create a script "vacationVideos.sh" with all the custom vacation videos, and call the convert.sh script
- Run this as a background process, which allows you to log out of the session without interrupting the process. i.e. "nohup shellscriptname &"


1. Generic conversion script. Save as: nano /usr/sbin/convert.sh

#!/bin/bash
# AlanLupsha: this script converts .m2ts files to .avi files
# If you do not have mencoder, run sudo apt-get install mencoder
if [ -z "$1" ] ; then
        echo "syntax: convert.sh file.m2ts"
        RETURNCODE=1
        exit $RETURNCODE
fi
mencoder "$1" -ofps 23.976 -ovc lavc -oac copy -o  "$1.avi"

2. custom script with the names of all my vacation videos. Save as: nano ~/vacationVideos.sh

#!/bin/bash
# This "vacationVideos.sh" script copies my vacation videos into a
# temporary directory and then invokes the convert.sh
# script which converts .m2ts format to the .avi format

MAINPATH=/mnt/drive2/vacation_videos/
TMP=/dev/shm
cd "$TMP"

FILE="2011.grandparents.m2ts"
cp "$MAINPATH/$FILE" "$TMP"
# convert the file from .m2ts to .avi (takes a long time)
convert.sh "$FILE"
# move the converted .avi file back to the drive
mv "$TMP/$FILE.avi" "$MAINPATH"
# remove the temporary copy of the .m2ts file
rm "$TMP/$FILE"


FILE="2010.SouthFlorida.m2ts"
cp "$MAINPATH/$FILE" "$TMP"
# convert the file from .m2ts to .avi (takes a long time)
convert.sh "$FILE"
# move the converted .avi file back to the drive
mv "$TMP/$FILE.avi" "$MAINPATH"
# remove the temporary copy of the .m2ts file
rm "$TMP/$FILE"


FILE="2006.DriveAcrossTheUSA.m2ts"
cp "$MAINPATH/$FILE" "$TMP"
# convert the file from .m2ts to .avi (takes a long time)
convert.sh "$FILE"
# move the converted .avi file back to the drive
mv "$TMP/$FILE.avi" "$MAINPATH"
# remove the temporary copy of the .m2ts file
rm "$TMP/$FILE"

echo "Done converting!"

3. How to run:

ssh myname@linuxserver
nohup ~/vacationVideos.sh &

To see the progress:
tail -f nohup.out

To see the process:
ps aux | grep mencoder

To kill the process:
killall mencoder

Enjoy.

Thursday, June 9, 2011

How to set up Subversion in Ubuntu

sudo adduser subversion

sudo mkdir /home/svn
sudo svnadmin create /home/svn/dev

# change ownership
sudo chown -R subversion:www-data /home/svn/dev

# make the dir group write-able
sudo chmod -R g+w /home/svn/dev/

# add yourself to the www-data group
sudo usermod -a -G www-data alan



sudo apt-get install apache2
sudo apt-get install libapache2-svn

sudo nano /etc/apache2/mods-available/dav_svn.conf

add at the end:


<Location /svn>
DAV svn
SVNParentPath /home/svn
SVNListParentPath On
AuthType Basic
AuthName "Subversion Repository"
AuthUserFile /etc/subversion/passwd
Require valid-user
</Location>




Save.

sudo /etc/init.d/apache2 restart


# Add your users in subversion, create their accounts:
sudo nano /home/svn/dev/conf/passwd

alan=123456
mary=654321


# For subversion over http, also add the same users to the
# subversion passwd file, first time use -c to "create" it.
sudo htpasswd -c /etc/subversion/passwd alan
sudo htpasswd /etc/subversion/passwd mary


# Empty out config file
sudo su -
echo "" > /home/svn/dev/conf/svnserve.conf

# Edit file and add next entries:
sudo -u subversion nano /home/svn/dev/conf/svnserve.conf


[general]
anon-access = none
auth-access = write
password-db = passwd
realm = digitalagora
[sasl]

# Save, exit.

# Install xinetd, and let it take care of running the subversion server
sudo apt-get install xinetd

# edit a new svn file
sudo nano /etc/xinetd.d/svn

# Paste these contents:

service svn
{
port = 3690
protocol = tcp
socket_type = stream
wait = no
disable = no
user = subversion
server = /usr/bin/svnserve
server_args = -i -r /home/svn/dev

#ALAN - increase incoming connections per second
# X connections per second, 1 sec to reset after overload
cps = 3000 1

# if in /var/log/messages you see: FAIL: svn per_source_limit from=IP...
per_source = UNLIMITED
}

# Save the file, exit.

# start the xinetd service
sudo service xinetd restart

# check that subversion is running; you should not get an error from this command (use your username/pass)
svn --non-interactive --username alan --password abc12345 list svn://localhost/



# ===================================================
# ALTERNATIVELY, IF YOU DO NOT WANT TO USE XINETD:
# Start the server manually...
svnserve -d -r /home/svn/
#
# Check that it's running
ps aux | grep svn
#
You should see something like: svnserve -d -r /home/svn/
# to stop svnserve, run
ps aux | grep svn
killall svnserve
ps aux | grep svn
# ===================================================



Test the installation

# create project
svn --non-interactive --username alan --password abc12345 -m "new" mkdir svn://localhost/dev/testproject

# check out project
cd ~
svn --non-interactive --username alan --password myPassw0rd checkout svn://localhost/dev/testproject

# create new files in checked out project
touch ~/testproject/newfile1.txt
touch ~/testproject/newfile2.txt
touch ~/testproject/newfile3.txt

# go into project
cd ~/testproject
svn add *

# commit the files
svn commit -m "new" *

Adding newfile1.txt
Adding newfile2.txt
Adding newfile3.txt
Transmitting file data ...
Committed revision 2.




=================================
a2ensite default-ssl
a2enmod ssl
sudo /etc/init.d/apache2 restart
sudo make-ssl-cert generate-default-snakeoil --force-overwrite
=================================

Check that it's working:
http://localhost/

Tuesday, April 26, 2011

Ubuntu - simple setup after install

0. change root password
su -
passwd root

1. static network
Check the network interface:
ifconfig

Change to a static network configuration by editing /etc/network/interfaces:

auto eth0
iface eth0 inet static
address 10.1.10.99
netmask 255.255.255.0
network 10.1.10.0
broadcast 10.1.10.255
gateway 10.1.10.1

sudo /etc/init.d/networking restart
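To confirm the new settings took effect, check the interface and ping the gateway (addresses are the ones from the example above):

ifconfig eth0
ping 10.1.10.1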


2. Install SSH server:
sudo apt-get install openssh-server


3. Disable root logins:
nano /etc/ssh/sshd_config

Change to say "no":
PermitRootLogin no

Save file, exit.
Start up ssh:
/etc/init.d/ssh restart

4. test by logging in from another server:
ssh user@newserver

5. Set up secondary drives using UUID:

sudo mkdir /mnt/d1
sudo mkdir /mnt/d2
sudo mkdir /mnt/d3
sudo mkdir /mnt/d4

Next, list all the drives and note down the UUID of each one; make sure you know which drive is sda1, sdb1, sdc1, and so on.

# list all drives
sudo fdisk -l

# next, list them by uuid
ls -l /dev/disk/by-uuid/

Edit fstab and start adding entries. For each drive, copy and paste its UUID and associate it with a mount point, ex: /mnt/d1, etc.

sudo nano /etc/fstab

# sda1
UUID=ba20caf8-6ef0-42e7-9e45-4187bfa8e543 /mnt/d1 auto defaults 0 2
# sdb1
UUID=315310c3-b3f2-4092-a7fa-eb944084e335 /mnt/d2 auto defaults 0 2
# sdc1
UUID=e041f16e-a009-4390-97f9-2daf29c0c505 /mnt/d3 auto defaults 0 2
# sdd1
UUID=2a236394-5010-4742-9ef8-1a5b2d1b74fa /mnt/d4 auto defaults 0 2

Save fstab file, close it.

Mount all file systems in /etc/fstab, run:
sudo mount -a
(fix any errors, try again, until it works)

List all drive contents,
ls /mnt/d1
ls /mnt/d2
ls /mnt/d3
ls /mnt/d4

I personally create directories on each drive so that I can identify the drives later. For example, on the root of each drive, I create a directory such as: "this_is_drive_1"

6. set up Samba

#Install samba - for network file sharing
sudo apt-get install samba

#create group
sudo groupadd samba

#create a system user who should have samba access
sudo adduser sambauser

#add this user to the samba group
sudo usermod -a -G samba sambauser

# add the user who can access samba shares
sudo smbpasswd -a sambauser
(enter their password)

#edit the conf file and add the share definitions at the end
sudo nano /etc/samba/smb.conf

# my drives
[d1]
comment = This is the /d1 shared drive
path = /mnt/d1
browseable = yes
read only = no
guest ok = no
writable = yes
admin users = sambauser
write list = sambauser
create mask = 0775
directory mask = 0775
public = yes

[d3]
comment = This is the /d3 shared drive
path = /mnt/d3
browseable = yes
read only = no
guest ok = no
writable = yes
admin users = sambauser
write list = sambauser
create mask = 0775
directory mask = 0775
public = yes


Save the file.

#restart samba
sudo service smbd restart

# list the shares from another PC (from Windows, start, run \\IP_of_server)
smbclient -U sambauser -L 10.1.10.99
(you should see the drives being listed)
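To mount one of the shares from another Linux machine, a cifs mount along these lines should work (the mount point name is just an example; you may need the cifs-utils package):

sudo mkdir /mnt/d1share
sudo mount -t cifs //10.1.10.99/d1 /mnt/d1share -o username=sambauser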

Friday, January 7, 2011

You just bought a new 1 Terabyte drive for your Ubuntu server. Now what?

After installing the RAID card for the SATA drive, and after seeing the drive at boot-up time, you're ready to partition the drive, format it, and mount it by id, and then use it.

1. List drives:
sudo fdisk -l

2. If the drive is listed but not yet partitioned, partition it (ex: drive is /dev/sdb):
sudo fdisk /dev/sdb
p (print partition information)
n (create new partition)
p (primary partition)
1 (partition #1)
[enter] (accept first cylinder default, which is 1)
[enter] (accept last cylinder default, whatever that is)
w (write partition information, and then the program will exit)

Verify that a partition has been created (it's not yet formatted) by listing again:
sudo fdisk -l

You should see the new partition listed, and a number added to the device name. For example, the drive is /dev/sdb and the partition became /dev/sdb1

3. Format the partition (Be very careful that it's the correct partition, in this example /dev/sdb1):

sudo mkfs.ext3 /dev/sdb1

Wait for the formatting to be over.

4. List drives by UUID:
ls -l /dev/disk/by-uuid/
Note down the UUID of the disk that you've just formatted, for example sdb1.
The UUID is a long sequence of alphanumeric characters.

5. Create directory where to mount drive (disk 1 = /mnt/d1)
sudo mkdir /mnt/d1

6. Copy UUID of drive in fstab so that it gets mounted every time the server is booted:
sudo nano /etc/fstab

# My 1TB drive on /dev/sdb1
UUID=91f86f8a-4b59-4b67-b62b-0f2a3c2b235c /mnt/d1 auto defaults 0 2

6b. Optionally, if you just want to mount the drive now without adding it to the automatic mounting in the /etc/fstab file, then you can mount it manually, also by UUID:
sudo mount -U 91f86f8a-4b59-4b67-b62b-0f2a3c2b235c /mnt/d1
or
sudo mount /dev/sdb1 /mnt/d1


7. Restart server, if you have to. (you shouldn't have to)
sudo shutdown -r now

Now the drive is available in /mnt/d1. Check the space: df -h /mnt/d1

Sunday, January 2, 2011

How to fix annoying Ubuntu Nautilus errors such as... "Nautilus cannot handle burn locations"

"Nautilus cannot handle burn locations"
"Nautilus cannot handle COMPUTER actions... and such...


sudo apt-get remove gvfs
sudo apt-get install gvfs

Restart the system.
The whole Nautilus look and feel is different, and everything starts working, even samba shares: smb://me:passwd@192.168.0.1/myshare, etc...

Done.

Tuesday, December 14, 2010

How to convert an m2ts video to an avi

1. install missing libraries:
sudo apt-get install libavcodec-unstripped-51

2. convert:
ffmpeg -i vacation.m2ts -vcodec libxvid -b 11182k -acodec libmp3lame -ac 2 -ab 640k -deinterlace -s 1440x1080 vacation.avi

3. example:

$ nohup ffmpeg -i vacation.m2ts -vcodec libxvid -b 11182k -acodec libmp3lame -ac 2 -ab 640k -deinterlace -s 1440x1080 vacation.avi >> processing.txt &

$ tail -f processing.txt

Thursday, October 7, 2010

How to list all the jar contents in one line and redirect the output to a text file for easy searching

1. place all the jar files in a directory on a linux server

2.
$ cd jardirectory

3. list the files
$ for file in *.jar; do unzip -l "$file" >> jarcontents.txt; done

4. use vi and search for entries
vi jarcontents.txt
In vi, type forward slash, and your search string.
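To find which jar contains a particular class, a variation of the same loop works (SomeClassName is a placeholder):

$ for file in *.jar; do unzip -l "$file" | grep -q "SomeClassName" && echo "$file"; done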

Monday, October 4, 2010

On the Android phone: how to disable the annoying Verizon VCAST messages that autoplay when you plug your phone into the USB connector

dial ##7764726
Hit Call (you will not hear a ring tone)
Type Password 000000 (that's 6 zeros)
Hit Feature Settings
Choose CD ROM
Click Disable
Hit menu, commit modifications (it will say "No item changed")
Done.
----------------------------

Credits: http://community.vzw.com/t5/DROID-Incredible-by-HTC/Disable-V-CAST-Media-Manager-Web-Pop-Up/m-p/275464

Friday, October 1, 2010

How to delete a fail2ban IP from the iptables chain of rules

################################################
root@myserver:~# nano /usr/local/sbin/deleteipfromfail2ban.sh
#!/bin/bash

IP=$1

if [ "$IP" = "" ] ; then
echo "Syntax: deleteipfromfail2ban.sh "
RETURNSTATUS=1
exit $RETURNSTATUS
fi

iptables -D fail2ban-ssh -s "$IP" -j DROP

################################################

Example of usage:
deleteipfromfail2ban.sh 10.1.1.25
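To see which IPs are currently blocked (and to verify the delete worked), list the fail2ban-ssh chain:

iptables -L fail2ban-ssh -n --line-numbers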

Thursday, September 30, 2010

How to change the Ubuntu message of the day (MOTD)

Source:
http://www.newbtopro.com/guide/change_message_day_ubuntudebian

Basically, you remove the symbolic link /etc/motd and re-link to a static file:

sudo touch /etc/motd.static
sudo sh -c "cat /etc/motd.tail > /etc/motd.static"
sudo rm /etc/motd
sudo ln -s /etc/motd.static /etc/motd
sudo vi /etc/motd.static
Edit the file to your liking! Done.

Wednesday, September 22, 2010

Don't miss a dream vacation at:

Don't miss a dream vacation at: ... :)

Getting banned from Facebook by Eric Hancock

Thank you, Facebook, for being unjust.

My "friend" Eric Hancock emailed Facebook and lied to them, saying that I'm abusive. Facebook simply closed my account and deleted everything I had, 4 years worth of pictures and contacts.

Thank you, Eric, thank you Facebook.

------------------


from The Facebook Team info+rxnvms@support.facebook.com
reply-to The Facebook Team info+rxnvms@support.facebook.com
to facebook@alanlupsha.com
date Wed, Sep 22, 2010 at 1:43 AM
subject Re: My Personal Profile was Disabled
mailed-by bounce.secureserver.net
signed-by support.facebook.com


Hi,

Your account was disabled because your behavior on the site was identified as harassing or threatening to other people on Facebook. Prohibited behavior includes, but is not limited to:

• Sending friend requests to people you don't know
• Regularly contacting strangers through unsolicited Inbox messages
• Soliciting others for dating or business purposes

After reviewing your situation, we have determined that your behavior violated Facebook's Statement of Rights and Responsibilities. You will no longer be able to use Facebook. This decision is final and cannot be appealed.

Please note that for technical and security reasons, we will not provide you with any further details about this decision.

Thanks,
The Facebook Team



---------------

Monday, August 23, 2010

TECHNOLOGY ENTREPRENEURSHIP AND COMMERCIALIZATION - FALL SEMESTER 2010

Grading:
• Final Commercialization Plan Report (Team): 35%
• Final Commercialization Plan Presentation (Indiv): 35%
• Class participation (Indiv): 10%
• Team evaluation of your contribution (Indiv): 20%


Final Report and Presentation: A Product Commercialization Plan
Assume you are making a pitch for a $150k state technology commercialization grant

Content Outline

1. What’s Your Idea & Technology Overview

2. What Problem is it Solving & How Big is the Problem?
3. What is the market for your idea? How would you categorize competing technologies in the market? And, where does your idea fit?
4. Who are your target customers? (i.e. target market) How will you get your product to them?

5. Who are your competitors in this category? What are their strengths & weaknesses?

6. What are your competitive advantages? Why should you be selected instead of the competition?

7. What is your product development plan? From concept, to prototype, to market? What are the steps & what is the time frame? How will you protect your invention & when will you initiate the protection in your product plan?

8. What are your commercialization alternatives? Licensing to a company in this category? Starting a company? Risks vs. Rewards.

9. Conclusion.

Thursday, July 29, 2010

Startup scripts

Sample startup file, place in /etc/init.d:

cat /etc/init.d/archiva
#! /bin/sh
# chkconfig: 345 90 10
# description: Archiva server

# uncomment to set JAVA_HOME to the value present when Continuum was installed
export JAVA_HOME=/opt/SDK/jdk
export ARCHIVA=/opt/archiva/current/bin/archiva

case "$1" in

'start')
su - archiva -c "$ARCHIVA start"

;;

'stop')
su - archiva -c "$ARCHIVA stop"

;;

'restart')
su - archiva -c "$ARCHIVA stop"
sleep 20
su - archiva -c "$ARCHIVA start"

;;

*)
echo "Usage: $0 { start | stop | restart }"
exit 1
;;
esac

exit 0


To add it to startup, execute as root:
chkconfig --add archiva


Done.




Another example, using Geronimo:

[geronimo@mypc bin]$ ./geronimo.sh --help
Using GERONIMO_HOME: /home/geronimo/geronimo-tomcat6-javaee5-2.1.6
Using GERONIMO_TMPDIR: var/temp
Using JRE_HOME: /opt/SDK/jdk/jre
Usage: geronimo.sh command [geronimo_args]
commands:
  debug           Debug Geronimo in jdb debugger
  jpda run        Start Geronimo in foreground under JPDA debugger
  jpda start      Start Geronimo in background under JPDA debugger
  run             Start Geronimo in the foreground
  start           Start Geronimo in the background
  stop            Stop Geronimo
  stop --force    Stop Geronimo (followed by kill -KILL)




Create /etc/init.d/geronimo as follows:

#! /bin/sh
# chkconfig: 345 90 10
# description: geronimo server

# uncomment to set JAVA_HOME to the value present when Continuum was installed
export JAVA_HOME=/opt/SDK/jdk
export GERONIMO=/opt/geronimo/current/bin/geronimo.sh

case "$1" in

'start')
su - geronimo -c "$GERONIMO start"

;;

'stop')
su - geronimo -c "$GERONIMO stop"

;;

'restart')
su - geronimo -c "$GERONIMO stop"
sleep 20
su - geronimo -c "$GERONIMO start"

;;

'debug')
su - geronimo -c "$GERONIMO debug"

;;

'jpdarun')
su - geronimo -c "$GERONIMO jpda run"

;;

'jpdastart')
su - geronimo -c "$GERONIMO jpda start"

;;

'jpdastop')
su - geronimo -c "$GERONIMO stop"

;;



*)
echo "Usage: $0 { start | stop | restart | debug | jpdarun | jpdastart | jpdastop }"
exit 1
;;

esac

exit 0



Test each entry:

/etc/init.d/geronimo
Usage: /etc/init.d/geronimo { start | stop | restart | debug | jpdarun | jpdastart | jpdastop }

/etc/init.d/geronimo start

/etc/init.d/geronimo stop
(login/pass = system / manager)

/etc/init.d/geronimo debug


# add to startup
chkconfig --add geronimo

# check that the file exists in the startup location, ex:
ls -la /etc/rc.d/init.d/geronimo

Friday, July 16, 2010

How to load properties files in Oracle's Weblogic 11G

PropertiesLoader propertiesLoader = new PropertiesLoader(propertiesFileName);
if (propertiesLoader.thePropertiesFileExists()) {
    System.out.println("Properties file exists: " + propertiesFileName);
} else { ... }


String serverName = propertiesLoader.getProperty("serverName");


...


And the properties loader class:




package alan.lupsha.properties;

import org.apache.log4j.Logger;
import java.util.Properties;
import java.io.InputStream;

public class PropertiesLoader {
    private static final Logger logger = Logger.getLogger(PropertiesLoader.class);

    private Properties props = null;
    private String propertiesFileName = null;
    private boolean propertiesFileExists = false;

    public PropertiesLoader(String propertiesFileName) {
        this.propertiesFileName = propertiesFileName;
        try {
            props = new Properties();
            ClassLoader cl = this.getClass().getClassLoader();
            // does not work in 11G: java.net.URL url = cl.getResource(propertiesFileName);

            InputStream in = cl.getResourceAsStream( propertiesFileName );
            if( in != null )
            {
                props.load(in);
                setPropertiesFileExists(true);
            }
            else
            {
                logger.warn("InputStream is null while trying to load properties file: " + propertiesFileName );
            }
        } catch (Exception e) {
            logger.error("Error while loading properties from file "
                    + propertiesFileName + ". Error is: " + e.toString());
            System.out.println("Error while loading properties from file "
                    + propertiesFileName + ". Error is: " + e.toString());
        }
    }

    public String getProperty(String propertyName) {
        String returnStr = "";
        if (props == null) {
            logger.error("Sorry, your props file couldn't be loaded: " + propertiesFileName);
        } else {
            returnStr = props.getProperty(propertyName);
            if (returnStr == null) {
                returnStr = "";
            }
        }
        return returnStr;
    }

    public void setPropertiesFileExists(boolean propertiesFileExists) {
        this.propertiesFileExists = propertiesFileExists;
    }

    public boolean thePropertiesFileExists() {
        return propertiesFileExists;
    }
}


Tuesday, May 4, 2010

How to set up ProxyPass and ProxyPassReverse in Apache

How to set up ProxyPass and ProxyPassReverse in Apache to allow access to Continuum (which runs on port 8080) and to Archiva (which runs on port 8082):

As root, edit: /etc/httpd/conf/httpd.conf

Enable:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so


At the bottom of the config file, add:

# ***********************************************************
ProxyRequests On
ProxyPreserveHost On
ProxyVia full

<Proxy *>
Order deny,allow
Allow from all
</Proxy>


ProxyPass /archiva http://myserver.my.domain.com:8082/archiva
ProxyPassReverse /archiva http://myserver.my.domain.com:8082/archiva

ProxyPass /continuum http://myserver.my.domain.com:8080/continuum
ProxyPassReverse /continuum http://myserver.my.domain.com:8080/continuum
# ***********************************************************


Restart Apache: /etc/init.d/httpd restart

Friday, March 26, 2010

How to scan using the Epson Perfection 3490 Photo scanner

Source: https://bugs.launchpad.net/ubuntu/+source/sane-backends/+bug/311191

Binary package hint: libsane

The Epson 3490 won't run without the following modification.
The following steps did it for me (running 8.04 Hardy, 8.10, 9.04):

1) sudo apt-get install sane-utils

2) Go to the Users and Groups screen and add yourself (and other
scanner users) to the "scanner" group.

3) Log off and on (or reboot) to make 2) effective.

4) sudo mkdir /usr/share/sane/snapscan

5) sudo cp Esfw52.bin /usr/share/sane/snapscan

6) sudo chmod 644 /usr/share/sane/snapscan/Esfw52.bin

7) sudo gedit /etc/sane.d/snapscan.conf

Change firmware entry to say:-
firmware /usr/share/sane/snapscan/Esfw52.bin

Please note point 6, as the file was initially created with insufficient access.

The firmware .bin is attached (to the original bug report).

http://ubuntuforums.org/showthread.php?t=108256&page=6 - is the scanner being discussed

=================
# How to scan from the shell:

scanimage --device-name snapscan:libusb:002:006 --resolution 200 --high-quality=no --mode Color --format=pnm > lastscan.pnm

# Convert the file to jpg:
convert lastscan.pnm 01.jpg

# To convert all the pnm files to jpg files, type this command on one line:
for i in *.pnm; do convert "$i" "$i.jpg"; done

=================

Wednesday, March 3, 2010

Ubuntu - how to fix the annoying error "resolvconf: Error: /etc/resolv.conf must be a symlink"

How to fix the error: "resolvconf: Error: /etc/resolv.conf must be a symlink"

1. Kill the NetworkManager process:

sudo kill -9 `ps aux | grep sbin/NetworkManager | grep -v grep | awk '{print $2}'`

2. Run the resolvconf reconfiguration tool:

sudo dpkg-reconfigure resolvconf

(select YES, OK )

3. Verify that the resolv.conf file is ok:

ls -la /etc/resolv.conf

It should look very close to this:

lrwxrwxrwx 1 root root 31 2010-03-03 19:53 /etc/resolv.conf -> /etc/resolvconf/run/resolv.conf


4. Check the contents, it should have your name server(s) listed. The following shows an example using Comcast nameservers, and includes a router with address 192.168.0.1:

cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 68.87.74.162
nameserver 68.87.68.162
nameserver 192.168.0.1
search wp.comcast.net


5. Verify that it all works, try a restart of your system, and start with:
ping google.com
If you get a reply, your network works.



Monday, February 8, 2010

How to install the command line transmissioncli bittorrent client in Ubuntu

mkdir ~/downloads
cd ~/downloads

# get the latest stable version from http://www.transmissionbt.com/download.php
wget http://mirrors.m0k.org/transmission/files/transmission-1.83.tar.bz2

bunzip2 transmission-1.83.tar.bz2
tar -xvf transmission-1.83.tar
cd transmission-1.83/
cat README | more

sudo apt-get install intltool
sudo apt-get install libcurl4-openssl-dev
sudo apt-get install openssl

./configure
make
sudo make install

# verify that the install worked
ls -la /usr/local/bin/transmissioncli
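A basic test run, assuming you have a .torrent file on hand (the file name below is a placeholder):

cd ~/downloads
transmissioncli some-file.torrent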

------


Saturday, February 6, 2010

How to set up Apache with groups of users and basic HTTP authentication

1. Do this as user root:

sudo su -

and find the path to the config files for Apache, ex: ls -la /etc/apache2/

2. create users, and use the htpasswd tool to encrypt their passwords
and store them in the password file:

htpasswd /etc/apache2/passwords john

and

htpasswd /etc/apache2/passwords mary


3. add your users to a groups file

nano /etc/apache2/groups

Create a group called "trusted" followed by ":" followed by
the user names who are in that group, space delimited:

trusted:john mary


and set permissions so that user apache (who is in group "www-data")
can actually see the "passwords" and "groups" files.

chown root:www-data /etc/apache2/groups
chown root:www-data /etc/apache2/passwords


4. edit the apache config file to set up the directory which you are serving

nano /etc/apache2/apache2.conf

At the end of the Apache config file, add the following alias,
assuming that you keep all your files in /home/john/coolfiles/


Alias /john "/home/john/coolfiles"

<Directory "/home/john/coolfiles">
Options Indexes +MultiViews
AllowOverride None
Order allow,deny
Allow from all

AuthType Basic
AuthName "Password Required"
AuthUserFile /etc/apache2/passwords
AuthGroupFile /etc/apache2/groups
Require group trusted
</Directory>




5. restart apache, as user root

sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start



6. test with a browser

http://localhost/john/

or

http://yourdomain.com/john/
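You can also test the basic auth from the shell with curl (the password here is a placeholder):

curl -u john:secret http://localhost/john/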




How to use wget to download recursively, using HTTP basic authentication

wget --http-user=john --http-password=smith -e robots=off \
--recursive --level=1 --no-directories --no-host-directories \
http://myhost.mydomain.com/path/to/files/



Friday, January 15, 2010

Chapter 1 summary

Life is getting better:
- life expectancy
- health
- income
- education
- entertainment

Economics: the study of how we make a CHOICE (selections amongst alternatives) under SCARCITY (the concept that there is less available from nature than one desires)

1. Scarcity does not equal poverty
2. Scarcity necessitates RATIONING (allocating scarce goods to those who want them)
3. Scarcity leads to competitive behavior

Resources: human, physical, natural

Capital: human made resources, used to produce other goods/services

Guideposts to economic thinking:
1. opportunity cost (highest valued alternative which you sacrifice when making your choice)
2. individuals are rational (try to get more value at less cost)
3. incentives matter (change incentives, change behavior)
4. individuals make decisions at the margin, using a cost-benefit analysis
5. information helps us make better choices
6. beware of secondary effects (intentions may not equal the result)
7. the value of goods/services is subjective
8. to test a theory = to be able to predict real world events

POSITIVE economics vs. NORMATIVE economics

Pitfalls to avoid in economic thinking:
1. ceteris paribus (other things constant)
2. good intentions don't guarantee desirable outcomes
3. association is NOT causation
4. fallacy of composition (what is good for 1 may not be good for ALL)

Saturday, January 9, 2010

How to fix the error: "Linux: can't open /dev/dsp" while trying to use Festival

The error:

$ festival --tts read.txt
Linux: can't open /dev/dsp


The fix:
Create file .festivalrc in the home directory of the user and paste this in it:

(Parameter.set 'Audio_Command "aplay -q -c 1 -t raw -f s16 -r $SR $FILE")
(Parameter.set 'Audio_Method 'Audio_Command)

Then, try the "festival --tts read.txt" command again, the error should be gone, and you should hear the synthesized text to speech stream.
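Festival also reads from stdin, so a quick test without a file:

echo "hello world" | festival --tts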



Monday, January 4, 2010

networktraffic script - shows uploads/downloads in the shell

I didn't write this script. Credits for script: whoever wrote it and posted it on some random web site.

---

cat /usr/bin/networktraffic


#!/bin/sh

usage(){
    echo "Usage: $0 [-i INTERFACE] [-s INTERVAL] [-c COUNT]"
    echo
    echo "-i INTERFACE"
    echo " The interface to monitor, default is eth0."
    echo "-s INTERVAL"
    echo " The time to wait in seconds between measurements, default is 3 seconds."
    echo "-c COUNT"
    echo " The number of times to measure, default is 10 times."
    exit 3
}

readargs(){
    while [ "$#" -gt 0 ] ; do
        case "$1" in
        -i)
            if [ "$2" ] ; then
                interface="$2"
                shift ; shift
            else
                echo "Missing a value for $1."
                echo
                shift
                usage
            fi
            ;;
        -s)
            if [ "$2" ] ; then
                sleep="$2"
                shift ; shift
            else
                echo "Missing a value for $1."
                echo
                shift
                usage
            fi
            ;;
        -c)
            if [ "$2" ] ; then
                counter="$2"
                shift ; shift
            else
                echo "Missing a value for $1."
                echo
                shift
                usage
            fi
            ;;
        *)
            echo "Unknown option $1."
            echo
            shift
            usage
            ;;
        esac
    done
}

checkargs(){
    if [ ! "$interface" ] ; then
        interface="eth0"
    fi
    if [ ! "$sleep" ] ; then
        sleep="3"
    fi
    if [ ! "$counter" ] ; then
        counter="10"
    fi
}

printrxbytes(){
    /sbin/ifconfig "$interface" | grep "RX bytes" | cut -d: -f2 | awk '{ print $1 }'
}

printtxbytes(){
    /sbin/ifconfig "$interface" | grep "RX bytes" | cut -d: -f3 | awk '{ print $1 }'
}

bytestohumanreadable(){
    multiplier="0"
    number="$1"
    while [ "$number" -ge 1024 ] ; do
        multiplier=$(($multiplier+1))
        number=$(($number/1024))
    done
    case "$multiplier" in
    1)
        echo "$number Kb"
        ;;
    2)
        echo "$number Mb"
        ;;
    3)
        echo "$number Gb"
        ;;
    4)
        echo "$number Tb"
        ;;
    *)
        echo "$1 b"
        ;;
    esac
}

printresults(){
    while [ "$counter" -ge 0 ] ; do
        NOW=`/bin/date`
        counter=$(($counter - 1))
        if [ "$rxbytes" ] ; then
            oldrxbytes="$rxbytes"
            oldtxbytes="$txbytes"
        fi
        rxbytes=$(printrxbytes)
        txbytes=$(printtxbytes)
        if [ "$oldrxbytes" -a "$rxbytes" -a "$oldtxbytes" -a "$txbytes" ] ; then
            echo "$NOW RXbytes = $(bytestohumanreadable $(($rxbytes - $oldrxbytes))) TXbytes = $(bytestohumanreadable $(($txbytes - $oldtxbytes)))"
        else
            echo "Monitoring $interface every $sleep seconds. (RXbyte total = $(bytestohumanreadable $rxbytes) TXbytes total = $(bytestohumanreadable $txbytes))"
        fi
        sleep "$sleep"
    done
}

readargs "$@"
checkargs
printresults




Example usage:

To monitor eth0 every 10 seconds, a total of 999 times:
networktraffic -i eth0 -s 10 -c 999

To monitor eth3 every second:
networktraffic -i eth3 -s 1 -c 99999



Saturday, January 2, 2010

Ubuntu: Fixing error "Error: /etc/resolv.conf must be a symlink"

My error:

root@myserver:~# sudo /etc/init.d/networking start
* Configuring network interfaces...
resolvconf: Error: /etc/resolv.conf must be a symlink
run-parts: /etc/network/if-up.d/000resolvconf exited with return code 1


The fix:

root@myserver:~# cd /etc
root@myserver:/etc# sudo rm -rf /etc/resolv.conf
(if you can't remove the file, try: chattr -i /etc/resolv.conf )
root@myserver:/etc# sudo ln -s /etc/resolvconf/run/resolv.conf
root@myserver:/etc#


Test if the solution worked:

root@zeta:/etc# /etc/init.d/networking restart
* Reconfiguring network interfaces... [ OK ]
root@zeta:/etc#

Sunday, December 6, 2009

Ubuntu - fixing the issue of /etc/resolv.conf being overwritten - edit /etc/dhcp3/dhclient.conf instead!

When you need a DNS server in order to access any sites, editing /etc/resolv.conf doesn't always work, because the file gets overwritten regularly. Thus, entries like these won't stick in /etc/resolv.conf:

search yahoo.com
nameserver 10.1.10.2

(in my example, 10.1.10.2 is the IP of my router, which in turn has the proper DNS servers from Comcast, but I could use the DNS servers from Comcast just as well)

Since editing /etc/resolv.conf may not work, instead run nano /etc/dhcp3/dhclient.conf and add the entries at the end of the file.
Example for Comcast:

supersede domain-name "example.com";
prepend domain-name-servers 68.87.74.162, 68.87.68.162;

(where domain-name-servers can take a comma-separated list)
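To apply the change without rebooting, release and renew the DHCP lease (eth0 assumed):

sudo dhclient -r eth0
sudo dhclient eth0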

Saturday, December 5, 2009

Setting up MRTG on a managed switch (ex: on a SMC8024L switch)

Basic idea:

1
SMC switches have default IP 192.168.2.10, log into the switch via http interface from a PC on the same configured network, change IP (i.e. 10.1.10.10) and community string (from "public" to "digitalagora")

2
Install the snmp tools (example used Ubuntu), or "apt-cache search snmp" and find your favorite tools.
apt-get install snmp

3
Test if you see the switch:

snmpwalk -v 2c -Os -c digitalagora 10.1.10.10 system

sysDescr.0 = STRING: SMC8024L
sysObjectID.0 = OID: enterprises.202.20.59
sysUpTimeInstance = Timeticks: (981900) 2:43:39.00
sysContact.0 = STRING: SYSTEM CONTACT
sysName.0 = STRING: SMC8024L2
sysLocation.0 = STRING: SYSTEM LOCATION
sysServices.0 = INTEGER: 3

4
install mrtg by following: http://oss.oetiker.ch/mrtg/doc/mrtg-unix-guide.en.html Example:

wget http://www.zlib.net/zlib-1.2.3.tar.gz
gunzip -c zlib-*.tar.gz | tar xf -
rm zlib-*.tar.gz
mv zlib-* zlib
cd zlib
./configure
make
cd ..


5
run the cfgmaker tool to create your /etc/mrtg.cfg file, by telling it to connect to the digitalagora community @ the switch's ip. This creates a nice big config file with all the snmp info, provided that there was traffic on those ports. Otherwise, they're commented out.

cfgmaker --global 'WorkDir: /opt/website/mrtg' \
--global 'Options[_]: growright' \
--output /etc/mrtg.cfg \
digitalagora@10.1.10.10


6
run mrtg so that it sees the latest settings
env LANG=C /usr/bin/mrtg /etc/mrtg.cfg


7
Rebuild the web site's index file
indexmaker /etc/mrtg.cfg > /opt/website/mrtg/index.html


8
Look at the output.
http://localhost/mrtg/
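9
MRTG is normally run from cron every 5 minutes so the graphs keep updating; a crontab entry along these lines (add it with crontab -e), using the same paths as above:

*/5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg.cfg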

Wednesday, December 2, 2009

Tinyproxy - enabling "Anonymous Host" and "Anonymous Authorization"

Testing headers at: www.digitalagora.com/headers


Client's headers when hitting digitalagora.com through Tinyproxy with these settings disabled:
#Anonymous "Host"
#Anonymous "Authorization"
where in my example 8 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)
Accept = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent = Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept-Charset = ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding = gzip,deflate
Accept-Language = en-us,en;q=0.5



Client's headers when hitting digitalagora.com through Tinyproxy with these settings enabled:
Anonymous "Host"
Anonymous "Authorization"
where in my example 3 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)

Tuesday, December 1, 2009

Ubuntu and Apache: How to fix the error: "you have chosen to open ... which is a: application/x-httpd-php"

Ubuntu and Apache

How to fix the error: "you have chosen to open ... which is a: application/x-httpd-php"


Edit the Apache configuration file:
sudo nano /etc/apache2/apache2.conf

Find these 2 lines:
AddType application/x-httpd-php .php .phtml
AddType application/x-httpd-php-source .phps

Comment them by adding a pound sign in front:
#AddType application/x-httpd-php .php .phtml
#AddType application/x-httpd-php-source .phps

Add the following 2 lines right under the first 2 lines:
AddType application/x-httpd-php .php .phtml
AddType application/x-httpd-php-source .phps

Restart Apache:
sudo /etc/init.d/apache2 restart

Close your browser to clear its cache, and access your web page again.

Done.

-----

In a little more detail:

You can telnet to port 80 and view the web page. From the prompt, type:
telnet localhost 80
and then type "GET / HTTP/1.0" without the quotes, and press ENTER two times.
Note that there is a space before the slash and a space after the slash.
The page should then display. Here is an example of before the fix:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: x-httpd-php

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the Content-Type is: x-httpd-php. Now, after the change:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: text/html

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the content type is text/html.

How to play videos in Ubuntu

sudo apt-get update
sudo apt-get install vlc vlc-plugin-esd

Wednesday, November 25, 2009

How to back up a system to another server using rsync and get an email with a result

1. Create a trust relationship between the system which needs to be backed up and the system where the files will be backed up.

Replace ALPHA and BETA with the proper server names. (to add more server names, "sudo nano /etc/hosts" and add the ip and the name you wish to assign to each server)

ALPHA = server 1, where to log in from (on ALPHA, do this as user root)
BETA = server 2, destination where we log in

ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys

2. As root, create a backup script

Replace "abc" with the name of your server which you are backing up.

Create the file ("nano /usr/bin/backupabc"), paste the script below, and change:
- the backup server name, ex: mybackupserver
- the user id you use on the backup server, ex: my-user-id-on-backup-server
- the backup paths on the backup server, ex: /mnt/mybigdrive/backups/abc/
- your email address, ex: my.lovely.email@gmail.com (make sure you install mail: "sudo apt-get install mailutils" )


#!/bin/sh

LOG=/tmp/backupabc.log

START=$(date +%s)
echo "" > $LOG
echo "Start " >> $LOG
echo `date` >> $LOG

rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /bin/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/bin/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /boot/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/boot/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /etc/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/etc/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /home/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/home/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /lib/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/lib/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /opt/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/opt/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /root/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/root/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /sbin/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/sbin/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /srv/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/srv/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /usr/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/usr/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /var/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/var/ >> $LOG

END=$(date +%s)
DIFF=$(( $END - $START ))

echo "I have ran the /usr/bin/backupabc script and it took $DIFF seconds" >> $LOG
echo "\nEnd " >> $LOG
echo `date` >> $LOG

cat $LOG |  mail -s "mybackupserver: backed up abc" my.lovely.email@gmail.com
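Since the rsync lines differ only in the directory name, a loop variant does the same job; a sketch under the same assumptions (server names and paths as above):

for DIR in bin boot etc home lib opt root sbin srv usr var; do
    rsync --verbose --links --recursive --delete-during --human-readable \
      --progress --itemize-changes /$DIR/ \
      my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/$DIR/ >> $LOG
done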



3. As root, run the script manually:
/usr/bin/backupabc
OR
add the script to the crontab to run every day at 10 pm (22 hrs) (as root):
crontab -e   (if prompted, use "nano" as the editor)
0 22 * * * /usr/bin/backupabc

To see the log while it's being built, open another shell and:
tail -f /tmp/backupabc.log

Tuesday, November 24, 2009

How to set up Apache and limit access per IP - mod_limitipconn.so module

# Get Apache with the apxs2 tool
apt-get install apache2-threaded-dev

# test that apxs works
which apxs2


nano /etc/apache2/apache2.conf

and add this at the bottom:

# This command is always needed
ExtendedStatus On

# Only needed if the module is compiled as a DSO
LoadModule limitipconn_module lib/apache/mod_limitipconn.so

<IfModule mod_limitipconn.c>

    # Set a server-wide limit of 10 simultaneous downloads per IP,
    # no matter what.
    MaxConnPerIP 10
    <Location /somewhere>
        # This section affects all files under http://your.server/somewhere
        MaxConnPerIP 3
        # exempting images from the connection limit is often a good
        # idea if your web page has lots of inline images, since these
        # pages often generate a flurry of concurrent image requests
        NoIPLimit image/*
    </Location>

    <Directory /home/*/public_html>
        # This section affects all files under /home/*/public_html
        MaxConnPerIP 1
        # In this case, all MIME types other than audio/mpeg and video*
        # are exempt from the limit check
        OnlyIPLimit audio/mpeg video
    </Directory>
</IfModule>

# Modify the "/somewhere" to match the alias (not directory) which you are protecting.



# Add this mod at the bottom of the actions.load file:
  cd /etc/apache2/mods-available
  nano actions.load
# Add this at the end of the file:
  LoadModule evasive20_module /usr/lib/apache2/modules/mod_evasive20.so

# edit the httpd conf (not the apache2.conf) config file:
  nano /etc/apache2/httpd.conf
# add the following 2 comments at the bottom of the file, with the pound sign in front,
# this will ensure that in the following steps, the "make install" won't barf.

# Dummy LoadModule directive to aid module installations
#LoadModule dummy_module /usr/lib/apache2/modules/mod_dummy.so




# Download the limit ip connection module and set it up
  wget http://dominia.org/djao/limit/mod_limitipconn-0.23.tar.bz2
  tar -jxvf mod_limitipconn-0.23.tar.bz2
  cd mod_limitipconn-0.23
  nano Makefile
# Look for apxs and modify it to apxs2
  make
  make install
# If the "make install" barfs with an error such as:
  apxs:Error: Activation failed for custom /etc/apache2/httpd.conf file..
  apxs:Error: At least one `LoadModule' directive already has to exist..
then you forgot to edit the httpd.conf file and add the dummy module entry (see above).
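After a successful make install, check the configuration and restart Apache:

apache2ctl configtest
/etc/init.d/apache2 restart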

Friday, November 20, 2009

How to convert an .avi to .mpeg in Ubuntu

sudo apt-get install libavcodec-unstripped-51
sudo apt-get install ffmpeg
ffmpeg -i holiday.avi -aspect 16:9 -target ntsc-dvd holiday.mpeg
(and then wait a long time)

Sunday, November 15, 2009

How to convert uif to iso

This information is copied from: http://wesleybailey.com/articles/convert-uif-to-iso
Tested successfully.
-----------------------------------


Convert UIF to ISO

The fastest way to convert a UIF image to an ISO image is UIF2ISO. It is a speedy command line tool that will save you the hassle of installing wine and MagicISO.

This is how I downloaded and installed UIF2ISO, written by Luigi Auriemma. - http://aluigi.altervista.org/

1. We first need to install zlib and OpenSSL with apt-get.

sudo apt-get install zlib1g zlib1g-dev libssl-dev build-essential

2. Now we can download UIF2ISO with wget from a terminal, or from the author's site.

wget http://aluigi.altervista.org/mytoolz/uif2iso.zip

3. Once you have the file downloaded, unzip it and cd into the directory.

unzip uif2iso.zip
cd src

4. Finally compile the source, and create the executable.

make
sudo make install

5. Now you can convert the .uif file to an .iso with the following command:

uif2iso example.uif output.iso

Mounting an ISO

You don't necessarily need to burn a cd in order to access the files within the ISO. You can mount it with some simple commands.

Here is how to mount the ISO from command line.

sudo modprobe loop
sudo mkdir /media/ISOPoint
sudo mount /media/file.iso /media/ISOPoint/ -t iso9660 -o loop


Friday, November 13, 2009

Eratosthenes Sieve prime number benchmark in Java




// Eratosthenes Sieve prime number benchmark in Java
import java.awt.*;

public class Sieve // extends java.applet.Applet implements Runnable
{
    String results1, results2;

    void runSieve()
    {
        int SIZE = 8190;
        boolean flags[] = new boolean[SIZE+1];
        int i, prime, k, iter, count;
        int iterations = 0;
        double seconds = 0.0;
        int score = 0;
        long startTime, elapsedTime;

        startTime = System.currentTimeMillis();
        while (true) {
            count = 0;
            for (i=0; i<=SIZE; i++) flags[i] = true;
            for (i=0; i<=SIZE; i++) {
                if (flags[i]) {
                    prime = i+i+3;
                    for (k=i+prime; k<=SIZE; k+=prime)
                        flags[k] = false;
                    count++;
                }
            }
            iterations++;
            elapsedTime = System.currentTimeMillis() - startTime;
            if (elapsedTime >= 10000) break;
        }
        seconds = elapsedTime / 1000.0;
        score = (int) Math.round(iterations / seconds);
        results1 = iterations + " iterations in " + seconds + " seconds";
        if (count != 1899)
            results2 = "Error: count <> 1899";
        else
            results2 = "Sieve score = " + score;
    }

    public static void main(String args[])
    {
        Sieve s = new Sieve();
    }

    public Sieve()
    {
        System.out.println("Running Sieve - please wait 10 seconds for results...");
        runSieve();
        System.out.println( results1 );
        System.out.println( results2 );
    }
}
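To compile and run it (assuming the code is saved as Sieve.java):

javac Sieve.java
java Sieve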



Wednesday, November 11, 2009

Ubuntu: How to fix the apt-get update error: W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available

The problem is during apt-get update:

...
Reading package lists... Done
W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B9FBE5158B3AFA9
W: You may want to run apt-get update to correct these problems


Solution:

gpg --keyserver keyserver.ubuntu.com --recv 8B9FBE5158B3AFA9
gpg --export --armor 8B9FBE5158B3AFA9 | sudo apt-key add -


Update should work now:

sudo apt-get update

Sunday, November 8, 2009

How to mount a remote file system in Ubuntu

# install the utility
sudo apt-get install sshfs

# make a directory where to mount the remote file system
sudo mkdir /mnt/backups
sudo chown YOURUSERNAME /mnt/backups

# mount the remote drive
sshfs YOURUSERNAME@192.168.1.123:/home/YOURUSERNAME/backups /mnt/backups

# check to see that the files are mounted
ls -la /mnt/backups
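To unmount the remote file system later:

fusermount -u /mnt/backups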

How to listen to mp3s in Ubuntu/Linux

sudo apt-get install amarok
sudo apt-get install libxine1-ffmpeg

(Amarok needs the libxine codec to decode mp3s)

Saturday, November 7, 2009

How to log into another server without being asked for a password - in 4 steps.

ALPHA = server 1, where to log in from
BETA = server 2, destination where we log in


ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys


To establish a mirror relationship, exchange server ALPHA with BETA and run through the 4 steps again.

Friday, October 16, 2009

How to configure and install Tinyproxy

How to configure and install Tinyproxy

Download Tinyproxy - go to https://www.banu.com/tinyproxy/download/ and download the latest version
ex: wget https://www.banu.com/pub/tinyproxy/1.6/tinyproxy-1.6.5.tar.gz

Unpackage
tar xzvf tinyproxy-1.6.5.tar.gz

Build
cd tinyproxy-1.6.5
./configure
make
sudo make install


Edit the configuration file:
nano /usr/local/etc/tinyproxy/tinyproxy.conf

or use my version of it:


sudo su -
cd /usr/local/etc/tinyproxy
echo "" > tinyproxy.conf
nano tinyproxy.conf

and paste this. Make sure to change YOUR_USER_NAME to be the name of the
user account from which you are running Tinyproxy


# ==================================================================
##
## tinyproxy.conf -- tinyproxy daemon configuration file
##

#
# Name of the user the tinyproxy daemon should switch to after the port
# has been bound.
#
User YOUR_USER_NAME
Group YOUR_USER_NAME

#
# Port to listen on.
#
Port 8888

#
# If you have multiple interfaces this allows you to bind to only one. If
# this is commented out, tinyproxy will bind to all interfaces present.
#
#Listen 192.168.0.1
Listen 127.0.0.1
#
# The Bind directive allows you to bind the outgoing connections to a
# particular IP address.
#
#Bind 192.168.0.1

#
# Timeout: The number of seconds of inactivity a connection is allowed to
# have before it closed by tinyproxy.
#
Timeout 600

#
# ErrorFile: Defines the HTML file to send when a given HTTP error
# occurs. You will probably need to customize the location to your
# particular install. The usual locations to check are:
# /usr/local/share/tinyproxy
# /usr/share/tinyproxy
# /etc/tinyproxy
#
# ErrorFile 404 "/usr/share/tinyproxy/404.html"
# ErrorFile 400 "/usr/share/tinyproxy/400.html"
# ErrorFile 503 "/usr/share/tinyproxy/503.html"
# ErrorFile 403 "/usr/share/tinyproxy/403.html"
# ErrorFile 408 "/usr/share/tinyproxy/408.html"

#
# DefaultErrorFile: The HTML file that gets sent if there is no
# HTML file defined with an ErrorFile keyword for the HTTP error
# that has occurred.
#
DefaultErrorFile "/usr/share/tinyproxy/default.html"

#
# StatFile: The HTML file that gets sent when a request is made
# for the stathost. If this file doesn't exist a basic page is
# hardcoded in tinyproxy.
#
StatFile "/usr/share/tinyproxy/stats.html"

#
# Where to log the information. Either LogFile or Syslog should be set,
# but not both.
#
Logfile "/var/log/tinyproxy.log"
# Syslog On

#
# Set the logging level. Allowed settings are:
# Critical (least verbose)
# Error
# Warning
# Notice
# Connect (to log connections without Info's noise)
# Info (most verbose)
# The LogLevel logs from the set level and above. For example, if the LogLevel
# was set to Warning, then all log messages from Warning to Critical would be
# output, but Notice and below would be suppressed.
#
LogLevel Info

#
# PidFile: Write the PID of the main tinyproxy thread to this file so it
# can be used for signalling purposes.
#
PidFile "/var/run/tinyproxy.pid"

#
# Include the X-Tinyproxy header, which has the client's IP address when
# connecting to the sites listed.
#
#XTinyproxy mydomain.com

#
# Turns on upstream proxy support.
#
# The upstream rules allow you to selectively route upstream connections
# based on the host/domain of the site being accessed.
#
# For example:
# # connection to test domain goes through testproxy
# upstream testproxy:8008 ".test.domain.invalid"
# upstream testproxy:8008 ".our_testbed.example.com"
# upstream testproxy:8008 "192.168.128.0/255.255.254.0"
#
# # no upstream proxy for internal websites and unqualified hosts
# no upstream ".internal.example.com"
# no upstream "www.example.com"
# no upstream "10.0.0.0/8"
# no upstream "192.168.0.0/255.255.254.0"
# no upstream "."
#
# # connection to these boxes go through their DMZ firewalls
# upstream cust1_firewall:8008 "testbed_for_cust1"
# upstream cust2_firewall:8008 "testbed_for_cust2"
#
# # default upstream is internet firewall
# upstream firewall.internal.example.com:80
#
# The LAST matching rule wins the route decision. As you can see, you
# can use a host, or a domain:
# name matches host exactly
# .name matches any host in domain "name"
# . matches any host with no domain (in 'empty' domain)
# IP/bits matches network/mask
# IP/mask matches network/mask
#
#Upstream some.remote.proxy:port

#
# This is the absolute highest number of threads which will be created. In
# other words, only MaxClients number of clients can be connected at the
# same time.
#
MaxClients 100

#
# These settings set the upper and lower limit for the number of
# spare servers which should be available. If the number of spare servers
# falls below MinSpareServers then new ones will be created. If the number
# of servers exceeds MaxSpareServers then the extras will be killed off.
#
MinSpareServers 5
MaxSpareServers 20

#
# Number of servers to start initially.
#
StartServers 100

#
# MaxRequestsPerChild is the number of connections a thread will handle
# before it is killed. In practice this should be set to 0, which disables
# thread reaping. If you do notice problems with memory leakage, then set
# this to something like 10000
#
MaxRequestsPerChild 0

#
# The following are the authorization controls. If there are any access
# control keywords then the default action is to DENY. Otherwise, the
# default action is ALLOW.
#
# Also, the order of the controls is important. Incoming connections are
# tested against the controls in order.
#
Allow 127.0.0.1
#Allow 192.168.1.0/25

#
# The "Via" header is required by the HTTP RFC, but using the real host name
# is a security concern. If the following directive is enabled, the string
# supplied will be used as the host name in the Via header; otherwise, the
# server's host name will be used.
#
ViaProxyName "tinyproxy"

#
# The location of the filter file.
#
#Filter "/etc/tinyproxy/filter"

#
# Filter based on URLs rather than domains.
#
#FilterURLs On

#
# Use POSIX Extended regular expressions rather than basic.
#
#FilterExtended On

#
# Use case sensitive regular expressions.
#
#FilterCaseSensitive On

#
# Change the default policy of the filtering system. If this directive is
# commented out, or is set to "No" then the default policy is to allow
# everything which is not specifically denied by the filter file.
#
# However, by setting this directive to "Yes" the default policy becomes to
# deny everything which is _not_ specifically allowed by the filter file.
#
#FilterDefaultDeny Yes

#
# If an Anonymous keyword is present, then anonymous proxying is enabled.
# The headers listed are allowed through, while all others are denied. If
# no Anonymous keyword is present, then all headers are allowed through.
# You must include quotes around the headers.
#
#Anonymous "Host"
#Anonymous "Authorization"

#
# This is a list of ports allowed by tinyproxy when the CONNECT method
# is used. To disable the CONNECT method altogether, set the value to 0.
# If no ConnectPort line is found, all ports are allowed (which is not
# very secure.)
#
# Ports 443 and 563 below are used by SSL; the remaining entries (IRC's
# 6667-6669 and 7000, and plain HTTP's 80) are additions for this setup.
#
ConnectPort 443
ConnectPort 563
ConnectPort 6667
ConnectPort 6668
ConnectPort 6669
ConnectPort 7000
ConnectPort 80
# ==================================================================

Make some config files readable:
sudo chmod a+r /usr/local/etc/tinyproxy/tinyproxy.conf

Create the log and PID files:
sudo touch /var/log/tinyproxy.log
sudo chmod a+rw /var/log/tinyproxy.log
sudo touch /var/run/tinyproxy.pid
sudo chmod a+rw /var/run/tinyproxy.pid





You can optionally create a startup script for tinyproxy, in your home directory:
nano starttinyproxy
and paste this:

#!/bin/sh
# stop any tinyproxy instance that is already running
killall tinyproxy
# -c points at the config file; -d keeps tinyproxy in the foreground,
# so send it to the background with &
/usr/local/sbin/tinyproxy -c /usr/local/etc/tinyproxy/tinyproxy.conf -d &
# give it a few seconds to start, then show the end of the log
sleep 5
tail /var/log/tinyproxy.log

save it, and make it executable:
chmod u+x starttinyproxy



Exit from root, and under your account, start up Tinyproxy:
./starttinyproxy
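
To sanity-check that the proxy is relaying traffic, you can fetch a page through it. Here is a minimal sketch in Java (my own, not part of tinyproxy; it assumes tinyproxy is listening on 127.0.0.1:8888 as configured above, and the URL is just an example):

import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyCheck {
    public static void main(String[] args) throws Exception {
        // route the request through the local tinyproxy instance
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("127.0.0.1", 8888));
        URL url = new URL("http://www.google.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(proxy);
        // any HTTP status here means the proxy relayed the request
        System.out.println("HTTP status via proxy: " + conn.getResponseCode());
        conn.disconnect();
    }
}

If the proxy is working, the request should also show up in /var/log/tinyproxy.log.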

Wednesday, October 7, 2009

How to set up the Linksys WUSB300N wireless N device to work with Linux/Ubuntu


Credits: mcdsco - http://ubuntuforums.org/showthread.php?t=530772

# start a shell, and log in as root
sudo su -

# install ndiswrapper for your system; the version could vary, so get a recent release
cd /root
# use -O so the saved filename doesn't include the ?use_mirror query string
wget -O ndiswrapper-1.55.tar.gz "http://downloads.sourceforge.net/project/ndiswrapper/stable/1.55/ndiswrapper-1.55.tar.gz?use_mirror=softlayer"
gzip -d ndiswrapper-1.55.tar.gz
tar -xvf ndiswrapper-1.55.tar
cd ndiswrapper-1.55
make
make install


# get the relevant files for the Linksys WUSB300N wireless device
mkdir /opt/ndis
cd /opt/ndis
wget http://www.atvnation.com/WUSB300N.tar
tar xvf WUSB300N.tar -C /opt/ndis/
cd /opt/ndis/Drivers

# install the drivers
ndiswrapper -i netmw245.inf

# plug the USB wireless device into the PC and:
modprobe ndiswrapper

# check to see if the device is seen (sample output below; the version
# string will reflect the release you actually installed):
dmesg | grep ndis
[ 4336.851339] ndiswrapper version 1.53 loaded (smp=yes, preempt=no)
[ 4336.890513] usbcore: registered new interface driver ndiswrapper
[ 4636.519061] ndiswrapper: driver netmw245 (Linksys, A Division of Cisco Systems, Inc.,12/07/2006,1.0.5.1) loaded


At this point, the device should work. Go to the wireless settings, set up your connection.
Type "ifconfig" to see the network configuration, the wireless device should show up under "wlan0".

Tuesday, October 6, 2009

College of Business at FSU



College of Business faculty: http://cob.fsu.edu/faculty/faculty_staff.cfm?type=2


========================
Some fun core courses
========================
ACG5026 Financial Reporting and Managerial Control
This course provides a basic understanding of accounting systems and financial statements as a foundation for analysis. The course also addresses cost systems and controls as they pertain to organizational control. Cannot be taken for credit for the Master of Accounting degree.
9780470128824 Financial Accounting in Economic Context Pratt 2009 7TH Required Textbook
9780967507200 Code Blue (w/264 or 261 pgs) McDermott 2002 3RD Required Textbook
ACG5026 Course Notes Target Copy Required Other
Stevens, Douglas E, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=399

========================
BUL5810 The Legal & Ethical Environment of Business
no sections open for Spring 2010
========================
FIN5425 Problems in Financial Management
no sections open for Spring 2010
========================
ISM5021 Problems in Financial Management
Applied course in concepts and techniques used in the design and implementation of management information systems and decision support systems, with emphasis on management of these systems
Textbooks and materials not yet assigned
Wasko, Molly M, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=305
T R 2:00-3:15 RBA 0202
========================
MAR5125 Marketing Strategy in the Global Environment
This course examines the business-level marketing strategy in the context of global markets and uses the marketing-planning process as a framework for understanding how global environments, markets, and institutions affect the strategic marketing operations of the global business enterprise.
9780324362725 Marketing Strategy Ferrell 2008 4TH Required Textbook
9781591396192 Blue Ocean Strategy Kim 2005 Required Textbook
Hartline, Michael D, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=306
========================
MAN5245 Leadership and Organizational Behavior
This course offers a dynamic examination of managerial concepts of human behavior in work organizations.
9780324578737 Organizational Behavior Nelson 2009 6th Required Textbook
Douglas, Ceasar, http://cob.fsu.edu/man/hrcenter/faculty.cfm
========================
MAN5501 Production and Operations Management
Develops a conceptual framework which is useful in describing the nature of the operations function, with emphasis on identifying basic issues in managing the operations of a service organization.
9780324662559 Operations Management David Collier and James Evans 2009-2010 Required Textbook
Smith, Jeffery S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=421
========================
MAN5716 Economics and Business Conditions
Problems of managing the firm in relation to the changing economic environment. Analysis of major business fluctuations and development of forecasting techniques.
No textbook required
Christiansen, William A, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=25
========================
MAN5721 Strategy and Business Policy
The course covers the relation between theories and practices of management, and focuses on utilizing methodologies and theories for strategic decision making.
9780132341387 Strategic Management: Concepts & Cases Carpenter 2009 2ND Recommended Textbook
M W 9:30 - 10:45 RBA 0202
Holcomb, Timothy R, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=427
========================


========================
Flex options
========================
FIN5515 Investments
This course offers an analysis of financial assets with emphasis on the securities market, the valuation of individual securities, and portfolio management.
9780324656121 Investment Analysis and Portfolio Management Reilly and Brown 9th Required Textbook
T R 3:35-4:50PM
Doran, James S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=368
========================
ISM5315 Project Management
no sections open for Spring 2010
========================
MAR5465 Supply Chain Marketing
no sections open for Spring 2010
========================
RMI5011 Fundamentals of Risk Management
This course develops concepts such as time value of money, statistical analysis, information technology, and management of risk exposure. Topics include risk fundamentals, risk management, insurer operations, and insurance regulation.
9780072339703 Risk Management & Insurance Harrington 2004 2ND Required Textbook
M W 11am-12:15pm
Born, Patricia H, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=458
========================

Thursday, September 10, 2009

Summary of the talk by Prof. Ted Baker


Alan Lupsha

Professor Ted Baker’s area of research is real-time systems. He focuses on real-time runtime systems, real-time scheduling and synchronization and real-time software standards.

Real-time scheduling for multiprocessors involves finding ways to guarantee deadlines for tasks scheduled on multiprocessor systems. A main problem is that it is very difficult to meet constraints given a specific computational workload, and as workloads vary, those constraints can only be met with different kinds of guarantees. For example, the guarantee of execution differs depending on whether the constraints concern fault tolerance, the window of execution, or energy usage. The quality of scheduling can vary as well: it quantifies how strongly the schedule guarantees that deadlines are met, or by how much a task may overrun its deadline. Even once an algorithm is able to schedule a workload, the resulting schedule can vary in its sensitivity, in proportion to variation in the parameters of the execution.

Professor Baker looks at workload models involving jobs, tasks, and task systems. Jobs are units of computation that can be scheduled, each with a specific arrival time, worst-case execution time, and deadline. Tasks are sequences of jobs and can depend on other tasks. Sporadic tasks have two specific qualities: a minimum inter-arrival time and a worst-case execution time. Task systems are sets of tasks, where the tasks can be related or independent (scheduled without consideration of interactions, precedence, or coordination).

Scheduling involves models, which can be defined as having a set of (identical) processors, shared memory, and specific algorithms. These algorithms can be preemptive or non-preemptive; on-line (decisions are made on the fly as jobs arrive) or off-line; and global or partitioned (the task set is split amongst the processors, so each processor's workload can be predicted in advance). There are three typical scheduling algorithms and tests. The first is “fixed task-priority scheduling”, where the highest-priority tasks run first. The second is “earliest deadline first” (EDF), where the job with the earliest deadline runs first; it handles higher loads without missing deadlines and is still straightforward to implement. The third (used in multiprocessor rather than single-processor systems) is “earliest deadline zero laxity” (EDZL), which raises a job's priority once its laxity reaches zero, the laxity being how long the job's execution can still be delayed without missing its deadline.
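
To make the contrast concrete, here is a minimal sketch (mine, not Professor Baker's; the Job class is a made-up holder for the model parameters above) of how EDF and EDZL would each pick the next job to run:

import java.util.Comparator;
import java.util.List;

class Job {
    long deadline;      // absolute deadline
    long remainingWork; // remaining worst-case execution time

    Job(long deadline, long remainingWork) {
        this.deadline = deadline;
        this.remainingWork = remainingWork;
    }

    // laxity: how much longer this job can wait and still meet its deadline
    long laxity(long now) { return deadline - now - remainingWork; }
}

class Scheduler {
    // EDF: run the ready job with the earliest absolute deadline
    static Job pickEdf(List<Job> ready) {
        return ready.stream()
                .min(Comparator.comparingLong(j -> j.deadline))
                .orElse(null);
    }

    // EDZL: a job whose laxity has reached zero must run now; otherwise EDF
    static Job pickEdzl(List<Job> ready, long now) {
        return ready.stream()
                .filter(j -> j.laxity(now) <= 0)
                .findFirst()
                .orElse(pickEdf(ready));
    }
}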

The difficulty of scheduling is that there is no known practical algorithm for exactly deciding whether a sporadic task system is schedulable. One example of a schedulability test is the density test, in which one analyzes what fraction of a processor is needed to serve a given task. Professor Baker researches task scheduling and is looking for acceptable algorithms that remain practical under specific processing constraints.
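
As a rough sketch of that idea (the density formula C / min(D, T) is the standard one for sporadic tasks; the bound the sum is compared against differs from test to test, so it is not shown here):

class DensityTest {
    // density of a sporadic task: worst-case execution time C divided by
    // the smaller of the relative deadline D and minimum inter-arrival time T
    static double density(long c, long d, long t) {
        return (double) c / Math.min(d, t);
    }

    // total density of a task set; the various sufficient schedulability
    // tests compare this sum against a bound depending on processor count
    static double totalDensity(long[][] tasks) { // each row: {C, D, T}
        double sum = 0;
        for (long[] task : tasks) {
            sum += density(task[0], task[1], task[2]);
        }
        return sum;
    }
}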

Tuesday, September 8, 2009

Summary of the Talk by Prof. FeiFei Li


Alan Lupsha

Professor FeiFei Li researches database management and database technologies. His research focuses on efficient indexing, querying, and management of large-scale databases; spatio-temporal databases and applications; and sensor and stream databases.

Efficient indexing, querying, and management of large-scale databases deals with problems such as retrieving structured data from the web and automating the process of identifying the structure of web sites (e.g., to create customized reports for users). It is important to interpret web pages and to identify the tree structure of their data. This allows one to first create a schema for the structure of the data, and then to integrate information from different sources in a meaningful way. The topic of indexing higher-dimensional data (using tree structures and multi-dimensional structures) deals with space partitioning that indexes data in anywhere from 2 to 6 dimensions.

The topic of spatio-temporal databases and applications deals with the execution of queries related to NP-hard problems such as the traveling salesman problem. One solution uses a greedy algorithm, which starts from a given node location and repeatedly finds the nearest neighbor among the predefined categories of nodes not yet visited. By minimizing the sum distance (using the minimum-sum-distance algorithm), a path from a start node to an end node is found such that each category is visited, and the resulting path is at most 3 times the length of the optimal solution.
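
A rough sketch of that greedy strategy (mine, not Professor Li's implementation; the Point type is hypothetical, and a real spatial database would use an index such as an R-tree instead of these linear scans):

import java.util.List;

class Point {
    double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
    double distanceTo(Point other) { return Math.hypot(x - other.x, y - other.y); }
}

class GreedyTripPlanner {
    // Visit one point from each category, always hopping to the nearest
    // point belonging to a category not yet visited.
    // Assumes every category is non-empty.
    static double greedyTour(Point start, List<List<Point>> categories) {
        Point current = start;
        double total = 0;
        boolean[] visited = new boolean[categories.size()];
        for (int step = 0; step < categories.size(); step++) {
            Point best = null;
            int bestCategory = -1;
            for (int c = 0; c < categories.size(); c++) {
                if (visited[c]) continue;
                for (Point p : categories.get(c)) {
                    if (best == null
                            || current.distanceTo(p) < current.distanceTo(best)) {
                        best = p;
                        bestCategory = c;
                    }
                }
            }
            visited[bestCategory] = true;
            total += current.distanceTo(best);
            current = best;
        }
        return total; // length of the greedy path through all categories
    }
}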

Sensor and stream databases deal with the integration of sensors into network models. A large set of sensors is distributed in a sensor field, and a balance is sought when solving problems such as data flow between sensors, sensor hierarchy, and efficient data transmission for the purpose of saving battery life. Professor Li analyzes the best data-flow models between sensors and different ways of grouping sensors so that hub nodes relay data onward to other hub nodes (an example of such an application is the monitoring of temperatures on an active volcano). One cannot simply broadcast, since this would drain the sensors' battery life; thus routing methods and failover mechanisms are examined to ensure that all sensor data is properly read.

Professor Li also researches problems with the method of adding independent, identically distributed (IID) random noise, which introduces errors into data sets for the purpose of hiding secret data while maintaining correct data averages and other data benchmarks (for example, hiding real stock data or employees' salaries while preserving averages). The problem with IID noise is that attackers can filter it out and still extract the data that was meant to remain secret. A solution to this problem is to add the same amount of noise, but parallel to the principal component of the original data rather than independently in every dimension. This yields more securely obfuscated data.

Thursday, September 3, 2009

Summary of the talk by Prof. Zhenhai Duan


Alan Lupsha

Professor Zhenhai Duan researches an accountable, dependable Internet with good end-to-end performance. There is currently a serious problem with the Internet: it lacks accountability, and there is not enough law enforcement. It is very hard to find out who did something wrong, because attackers do not worry about breaking the law and cover their tracks in order not to get caught. There is a need to design protocols and architectures that can prevent bad activities from happening and that make it easier to identify attackers.

The current Internet lacks accountability, and even when there are no attacks there are still many problems. For example, the time to recover from routing failures is too long, and DNS also has many issues. A dependable Internet requires higher accountability for banking and other secure applications. End-to-end performance also needs to be high, especially for the more important applications that need a greater guarantee of data delivery.

Professor Duan's research projects include network security, solutions to network problems, routing, and intrusion detection. In IP spoofing attacks it is difficult to isolate attack traffic from legitimate traffic; such attacks include the man-in-the-middle method (with TCP hijacking and DNS poisoning) as well as reflector-based attacks (with DNS requests and DDoS). Distributed denial-of-service attacks are issued from botnets made up of millions of zombie (compromised) computers. To solve these network problems, Professor Duan researches route-based filtering techniques. These techniques take advantage of the fact that attackers can spoof their source addresses but cannot control the route of their packets, so filters that know part of the network topology can isolate illegitimate traffic.
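
As a toy model of that idea (not the actual IDPF implementation; links and prefixes are just strings here): a router that knows which source prefixes can feasibly arrive over each neighboring link can discard packets whose claimed source is infeasible for the link they came in on.

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class RouteBasedFilter {
    // neighbor link -> source prefixes that can feasibly arrive over it,
    // learned from (partial) knowledge of the network topology
    private final Map<String, Set<String>> feasibleSources = new HashMap<>();

    void allow(String link, String sourcePrefix) {
        feasibleSources.computeIfAbsent(link, k -> new HashSet<>())
                       .add(sourcePrefix);
    }

    // drop the packet if its claimed source could not have been routed
    // over the link it actually arrived on
    boolean accept(String link, String sourcePrefix) {
        return feasibleSources.getOrDefault(link, Collections.emptySet())
                              .contains(sourcePrefix);
    }
}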

Inter-Domain Packet Filter (IDPF) systems identify feasible routes based on updates from BGP (the Internet's inter-domain routing protocol), and their performance is evaluated on Autonomous System graphs. It is hard to completely protect an Autonomous System from spoofing attacks, but IDPFs can effectively limit the spoofing capability of attackers. Using the vertex-cover algorithm, attacks can be prevented in 80.8% of the networks attacked. Where attacks cannot be prevented, one can still look at the topology and determine the candidate sources of the packets. IDPFs are therefore effective in helping IP traceback, as all Autonomous Systems can localize attackers. The placement of IDPFs also plays a very important role in how well networks are protected.

Since botnets are becoming a major security issue, and they are used in distributed denial-of-service attacks, spamming, and identity theft, there is a growing need for detecting zombie machines. The SPOT system, one system being researched, observes a machine's outgoing messages, each classified as spam or not spam. It computes a function based on the sequential probability ratio test, using previously learned system behavior, and finally arrives at one of two hypotheses: the machine is compromised, or it is clean. Professor Duan is currently testing the SPOT system and improving it.
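
The sequential probability ratio test itself is easy to sketch (the probabilities and thresholds below are made-up placeholders, not SPOT's actual parameters): accumulate a log-likelihood ratio over a machine's outgoing messages and decide as soon as it crosses one of two thresholds derived from the target false-positive rate alpha and false-negative rate beta.

class Sprt {
    // H1: machine is compromised and sends spam with probability P1
    // H0: machine is clean and sends spam with probability P0
    static final double P0 = 0.2, P1 = 0.8;         // placeholder values
    static final double ALPHA = 0.01, BETA = 0.01;  // target error rates

    // isSpam[i] is the spam/not-spam classification of the i-th message
    static String classify(boolean[] isSpam) {
        double upper = Math.log((1 - BETA) / ALPHA); // accept H1 at or above
        double lower = Math.log(BETA / (1 - ALPHA)); // accept H0 at or below
        double llr = 0;
        for (boolean spam : isSpam) {
            llr += spam ? Math.log(P1 / P0)
                        : Math.log((1 - P1) / (1 - P0));
            if (llr >= upper) return "compromised";
            if (llr <= lower) return "clean";
        }
        return "undecided"; // keep observing further messages
    }
}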

Tuesday, September 1, 2009

Summary of the talk by Prof. Mike Burmester


Alan Lupsha

Professor Mike Burmester is interested in research in the areas of radio frequency identification (RFID) and ubiquitous applications, mobile ad-hoc networks (MANETs) and sensor networks, group key exchange, trust management and network security, and digital forensics. New wireless technologies offer a great wireless medium, but unfortunately the current state of research worldwide is not mature enough to fully understand and manage these new technologies. The fourth generation of wireless technologies, which should work both in the European Union and in the United States, will offer new challenges and opportunities for maturity in this field.

The RFID revolution will be the next big factor in allowing easier management of products. This technology is already being implemented in library systems, allowing easier book management and replacing bar codes, which require line of sight in order to scan each book. Airports are also implementing RFID for luggage management, and hospitals use RFID tags to protect newborns from being kidnapped. Different types of sensor networks are used extensively in factory-floor automation, border fencing, and a plethora of military applications. Sensors will also be used extensively to monitor biological levels in people; for example, a blood monitor can alert a diabetic person if their sugar level is too high or too low.

Mobile ad-hoc networks (MANETs) offer information routing between wireless devices that are mobile. Vehicular ad-hoc networks (VANETs) are a type of mobile ad-hoc network that allows communication between moving vehicles. These networks allow individual wireless devices to act as nodes and to route information between other communicating devices, thus reducing the need for dedicated wireless nodes. Ubiquitous networks allow applications to relocate between wireless devices, following a mobile user on his or her journey while continuing to provide needed services.

These new wireless technologies will also need proper management. Some of the new issues at hand include centralizing or decentralizing systems, determining who will protect certain systems, ensuring data security (such as confidentiality, avoiding eavesdropping, and guaranteeing privacy), preserving data integrity (avoiding the modification and corruption of data), and maintaining data availability (dealing with denial-of-service attacks, identifying rogue base stations, dealing with man-in-the-middle attacks, and detecting and avoiding session tampering and session hijacking).

There is a trade-off between security and functionality. It is extremely challenging to secure wireless networks, but in certain cases one may accept less security in order to achieve cheaper wireless products and technologies. Using secured pipelines to create point-to-point communication does ensure some security, but there are still problems at the physical layer, where attacks can be carried out. Hackers are keen to intercept and manipulate wireless data, which makes this a very attractive environment for them and creates the challenge of staying ahead of those who abuse these technologies. This gives rise to great security threats, but it also opens up a niche for researchers to study and create new wireless network security technologies.