Thursday, July 29, 2010

Startup scripts

Sample startup file, place in /etc/init.d:

cat /etc/init.d/archiva
#! /bin/sh
# chkconfig: 345 90 10
# description: Archiva server

# set JAVA_HOME to the JDK that was in use when Archiva was installed
export JAVA_HOME=/opt/SDK/jdk
export ARCHIVA=/opt/archiva/current/bin/archiva

case "$1" in

'start')
su - archiva -c "$ARCHIVA start"

;;

'stop')
su - archiva -c "$ARCHIVA stop"

;;

'restart')
su - archiva -c "$ARCHIVA stop"
sleep 20
su - archiva -c "$ARCHIVA start"

;;

*)
echo "Usage: $0 { start | stop | restart }"
exit 1
;;
esac

exit 0


To add it to startup, execute as root:
chkconfig --add archiva
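
To verify that the runlevel links were created (a quick check; chkconfig is available on Red Hat-style distros):

```shell
# show which runlevels start the archiva service
chkconfig --list archiva
```

Runlevels 3, 4 and 5 should show as "on", matching the chkconfig header line in the script.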


Done.




Another example, using Geronimo:

[geronimo@mypc bin]$ ./geronimo.sh --help
Using GERONIMO_HOME: /home/geronimo/geronimo-tomcat6-javaee5-2.1.6
Using GERONIMO_TMPDIR: var/temp
Using JRE_HOME: /opt/SDK/jdk/jre
Usage: geronimo.sh command [geronimo_args]
commands:
  debug        Debug Geronimo in jdb debugger
  jpda run     Start Geronimo in foreground under JPDA debugger
  jpda start   Start Geronimo in background under JPDA debugger
  run          Start Geronimo in the foreground
  start        Start Geronimo in the background
  stop         Stop Geronimo
  stop --force Stop Geronimo (followed by kill -KILL)




Create /etc/init.d/geronimo as follows:

#! /bin/sh
# chkconfig: 345 90 10
# description: geronimo server

# set JAVA_HOME to the JDK that was in use when Geronimo was installed
export JAVA_HOME=/opt/SDK/jdk
export GERONIMO=/opt/geronimo/current/bin/geronimo.sh

case "$1" in

'start')
su - geronimo -c "$GERONIMO start"

;;

'stop')
su - geronimo -c "$GERONIMO stop"

;;

'restart')
su - geronimo -c "$GERONIMO stop"
sleep 20
su - geronimo -c "$GERONIMO start"

;;

'debug')
su - geronimo -c "$GERONIMO debug"

;;

'jpdarun')
su - geronimo -c "$GERONIMO jpda run"

;;

'jpdastart')
su - geronimo -c "$GERONIMO jpda start"

;;

'jpdastop')
# there is no separate "jpda stop": a JPDA-started server is stopped with the regular stop command
su - geronimo -c "$GERONIMO stop"

;;



*)
echo "Usage: $0 { start | stop | restart | debug | jpdarun | jpdastart | jpdastop }"
exit 1
;;

esac

exit 0



Test each entry:

/etc/init.d/geronimo
Usage: /etc/init.d/geronimo { start | stop | restart | debug | jpdarun | jpdastart | jpdastop }

/etc/init.d/geronimo start

/etc/init.d/geronimo stop
(login/pass = system / manager)

/etc/init.d/geronimo debug


# add to startup
chkconfig --add geronimo

# check that the file exists in the startup location, ex:
ls -la /etc/rc.d/init.d/geronimo

Friday, July 16, 2010

How to load properties files in Oracle's Weblogic 11G

PropertiesLoader propertiesLoader = new PropertiesLoader(propertiesFileName);
if (propertiesLoader.thePropertiesFileExists()) {
System.out.println("Properties file exists: " + propertiesFileName);
} else { ... }


String serverName = propertiesLoader.getProperty("serverName");


...


And the properties loader class:




package alan.lupsha.properties;

import org.apache.log4j.Logger;
import java.util.Properties;
import java.io.InputStream;

public class PropertiesLoader {
    private static final Logger logger = Logger.getLogger(PropertiesLoader.class);

    private Properties props = null;
    private String propertiesFileName = null;
    private boolean propertiesFileExists = false;

    public PropertiesLoader(String propertiesFileName) {
        this.propertiesFileName = propertiesFileName;
        try {
            props = new Properties();
            ClassLoader cl = this.getClass().getClassLoader();
            // does not work in 11G: java.net.URL url = cl.getResource(propertiesFileName);

            InputStream in = cl.getResourceAsStream( propertiesFileName );
            if( in != null )
            {
                props.load(in);
                setPropertiesFileExists(true);
            }
            else
            {
                logger.warn("InputStream is null while trying to load properties file: " + propertiesFileName );
            }
        } catch (Exception e) {
            logger.error("Error while loading properties from file "
                    + propertiesFileName + ". Error is: " + e.toString());
            System.out.println("Error while loading properties from file "
                    + propertiesFileName + ". Error is: " + e.toString());
        }
    }

    public String getProperty(String propertyName) {
        String returnStr = "";
        if (props == null) {
            logger.error("Sorry, your props file couldn't be loaded: " + propertiesFileName);
        } else {
            returnStr = props.getProperty(propertyName);
            if (returnStr == null) {
                returnStr = "";
            }
        }
        return returnStr;
    }

    public void setPropertiesFileExists(boolean propertiesFileExists) {
        this.propertiesFileExists = propertiesFileExists;
    }

    public boolean thePropertiesFileExists() {
        return propertiesFileExists;
    }
}


Tuesday, May 4, 2010

How to set up ProxyPass and ProxyPassReverse in Apache

How to set up ProxyPass and ProxyPassReverse in Apache to allow access to Continuum (which runs on port 8080) and to Archiva (which runs on port 8082):

As root, edit: /etc/httpd/conf/httpd.conf

Enable:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so


At the bottom of the config file, add:

# ***********************************************************
# ProxyRequests should stay Off for a reverse proxy: only the ProxyPass
# mappings below are forwarded (On would create an open forward proxy)
ProxyRequests Off
ProxyPreserveHost On
ProxyVia full

<Proxy *>
Order deny,allow
Allow from all
</Proxy>


ProxyPass /archiva http://myserver.my.domain.com:8082/archiva
ProxyPassReverse /archiva http://myserver.my.domain.com:8082/archiva

ProxyPass /continuum http://myserver.my.domain.com:8080/continuum
ProxyPassReverse /continuum http://myserver.my.domain.com:8080/continuum
# ***********************************************************


Restart Apache: /etc/init.d/httpd restart
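
Once Apache is restarted, the mappings can be spot-checked from any machine (a sketch; curl and the example hostname are assumptions):

```shell
# fetch only the response headers through the proxy
curl -I http://myserver.my.domain.com/archiva
curl -I http://myserver.my.domain.com/continuum
```

A 200 or a redirect to the application's login page means the proxy is forwarding requests.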

Friday, March 26, 2010

How to scan using the Epson Perfection 3490 Photo scanner

Source: https://bugs.launchpad.net/ubuntu/+source/sane-backends/+bug/311191

Binary package hint: libsane

The Epson 3490 won't run without the following modification.
The following steps did it for me (running Ubuntu 8.04 Hardy, 8.10, and 9.04):

1) sudo apt-get install sane-utils

2) Go to the Users and Groups screen and add yourself (and other
scanner users) to the "scanner" group.

3) Log off and on (or reboot) to make 2) effective.

4) sudo mkdir /usr/share/sane/snapscan

5) sudo cp Esfw52.bin /usr/share/sane/snapscan

6) sudo chmod 644 /usr/share/sane/snapscan/Esfw52.bin

7) sudo gedit /etc/sane.d/snapscan.conf

Change the firmware entry to say:
firmware /usr/share/sane/snapscan/Esfw52.bin

Please note point 6, as the file was initially created with insufficient access permissions.

The firmware .bin file is attached.

http://ubuntuforums.org/showthread.php?t=108256&page=6 - is the scanner being discussed
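
Before scanning from the shell, the device name used below can be discovered with SANE's listing command (the bus:device numbers vary per machine):

```shell
# list detected scanners; the snapscan:libusb:BUS:DEV string feeds --device-name
scanimage -L
```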

=================
# How to scan from the shell:

scanimage --device-name snapscan:libusb:002:006 --resolution 200 --high-quality=no --mode Color --format=pnm > lastscan.pnm

# Convert the file to jpg:
convert lastscan.pnm 01.jpg

# To convert all the pnm files to jpg files, type this command on one line:
for i in *.pnm; do convert "$i" "$i.jpg"; done

=================

Wednesday, March 3, 2010

Ubuntu - how to fix the annoying error "resolvconf: Error: /etc/resolv.conf must be a symlink"


1. Kill the NetworkManager process:

sudo kill -9 `ps aux | grep sbin/NetworkManager | grep -v grep | awk '{print $2}'`

2. Run the resolvconf reconfiguration tool:

sudo dpkg-reconfigure resolvconf

(select YES, OK )

3. Verify that the resolv.conf file is ok:

ls -la /etc/resolv.conf

It should look very close to this:

lrwxrwxrwx 1 root root 31 2010-03-03 19:53 /etc/resolv.conf -> /etc/resolvconf/run/resolv.conf


4. Check the contents, it should have your name server(s) listed. The following shows an example using Comcast nameservers, and includes a router with address 192.168.0.1:

cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 68.87.74.162
nameserver 68.87.68.162
nameserver 192.168.0.1
search wp.comcast.net


5. Verify that it all works, try a restart of your system, and start with:
ping google.com
If you get a reply, your network works.



Monday, February 8, 2010

How to install the command line transmissioncli bittorrent client in Ubuntu

mkdir ~/downloads
cd ~/downloads

# get the latest stable version from http://www.transmissionbt.com/download.php
wget http://mirrors.m0k.org/transmission/files/transmission-1.83.tar.bz2

bunzip2 transmission-1.83.tar.bz2
tar -xvf transmission-1.83.tar
cd transmission-1.83/
cat README | more

sudo apt-get install intltool
sudo apt-get install libcurl4-openssl-dev
sudo apt-get install openssl

./configure
make
sudo make install

# verify that the install worked
ls -la /usr/local/bin/transmissioncli

------


Saturday, February 6, 2010

How to set up Apache with groups of users and basic HTTP authentication

1. Do this as user root:

sudo su -

and find the path to the config files for Apache, ex: ls -la /etc/apache2/

2. create users, and use the htpasswd tool to encrypt their passwords
and store them in the password file (use -c only for the first user,
to create the file):

htpasswd -c /etc/apache2/passwords john

and

htpasswd /etc/apache2/passwords mary


3. add your users to a groups file

nano /etc/apache2/groups

Create a group called "trusted" followed by ":" followed by
the user names who are in that group, space delimited:

trusted:john mary


and set permissions so that user apache (who is in group "www-data")
can actually see the "passwords" and "groups" files.

chown root:www-data /etc/apache2/groups
chown root:www-data /etc/apache2/passwords


4. edit the apache config file to set up the directory which you are serving

nano /etc/apache2/apache2.conf

At the end of the Apache config file, add the following alias,
assuming that you keep all your files in /home/john/coolfiles/


Alias /john "/home/john/coolfiles"

<Directory "/home/john/coolfiles">
Options +Indexes +MultiViews
AllowOverride None
Order allow,deny
Allow from all

AuthType Basic
AuthName "Password Required"
AuthUserFile /etc/apache2/passwords
AuthGroupFile /etc/apache2/groups
Require group trusted
</Directory>




5. restart apache, as user root

sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start



6. test with a browser

http://localhost/john/

or

http://yourdomain.com/john/




How to use wget to download recursively, using HTTP basic authentication

wget --http-user=john --http-password=smith -e robots=off
--recursive --level=1 --no-directories --no-host-directories
http://myhost.mydomain.com/path/to/files/



Friday, January 15, 2010

Chapter 1 summary

Life is getting better:
- life expectancy
- health
- income
- education
- entertainment

Economics: the study of how we make CHOICES (selections among alternatives) under SCARCITY (the concept that less is available from nature than people desire)

1. Scarcity does not equal poverty
2. Scarcity necessitates RATIONING (allocating scarce goods to those who want them)
3. Scarcity leads to competitive behavior

Resources: human, physical, natural

Capital: human made resources, used to produce other goods/services

Guideposts to economic thinking:
1. opportunity cost (highest valued alternative which you sacrifice when making your choice)
2. individuals are rational (try to get more value at less cost)
3. incentives matter (change incentives, change behavior)
4. individuals make decisions at the margin, using a cost-benefit analysis
5. information helps us make better choices
6. beware of secondary effects (intentions may not equal the result)
7. the value of goods/services is subjective
8. to test a theory = to be able to predict real world events

POSITIVE economics vs. NORMATIVE economics

Pitfalls to avoid in economic thinking:
1. ceteris paribus (other things constant)
2. good intentions don't guarantee desirable outcomes
3. association is NOT causation
4. fallacy of composition (what is good for 1 may not be good for ALL)

Saturday, January 9, 2010

How to fix the error: "Linux: can't open /dev/dsp" while trying to use Festival

The error:

$ festival --tts read.txt
Linux: can't open /dev/dsp


The fix:
Create file .festivalrc in the home directory of the user and paste this in it:

(Parameter.set 'Audio_Command "aplay -q -c 1 -t raw -f s16 -r $SR $FILE")
(Parameter.set 'Audio_Method 'Audio_Command)

Then, try the "festival --tts read.txt" command again, the error should be gone, and you should hear the synthesized text to speech stream.
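
For a quick one-off test without a file (festival reads standard input when no file is given):

```shell
echo "The audio fix worked." | festival --tts
```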



Monday, January 4, 2010

networktraffic script - shows uploads/downloads in the shell

I didn't write this script. Credits for script: whoever wrote it and posted it on some random web site.

---

cat /usr/bin/networktraffic


#!/bin/sh

usage(){
echo "Usage: $0 [-i INTERFACE] [-s INTERVAL] [-c COUNT]"
echo
echo "-i INTERFACE"
echo " The interface to monitor, default is eth0."
echo "-s INTERVAL"
echo " The time to wait in seconds between measurements, default is 3 seconds."
echo "-c COUNT"
echo " The number of times to measure, default is 10 times."
exit 3
}

readargs(){
while [ "$#" -gt 0 ] ; do
case "$1" in
-i)
if [ "$2" ] ; then
interface="$2"
shift ; shift
else
echo "Missing a value for $1."
echo
shift
usage
fi
;;
-s)
if [ "$2" ] ; then
sleep="$2"
shift ; shift
else
echo "Missing a value for $1."
echo
shift
usage
fi
;;
-c)
if [ "$2" ] ; then
counter="$2"
shift ; shift
else
echo "Missing a value for $1."
echo
shift
usage
fi
;;
*)
echo "Unknown option $1."
echo
shift
usage
;;
esac
done
}

checkargs(){
if [ ! "$interface" ] ; then
interface="eth0"
fi
if [ ! "$sleep" ] ; then
sleep="3"
fi
if [ ! "$counter" ] ; then
counter="10"
fi
}

printrxbytes(){
/sbin/ifconfig "$interface" | grep "RX bytes" | cut -d: -f2 | awk '{ print $1 }'
}

printtxbytes(){
# TX bytes appear on the same "RX bytes:... TX bytes:..." line of ifconfig output, hence field 3
/sbin/ifconfig "$interface" | grep "RX bytes" | cut -d: -f3 | awk '{ print $1 }'
}

bytestohumanreadable(){
multiplier="0"
number="$1"
while [ "$number" -ge 1024 ] ; do
multiplier=$(($multiplier+1))
number=$(($number/1024))
done
case "$multiplier" in
1)
echo "$number Kb"
;;
2)
echo "$number Mb"
;;
3)
echo "$number Gb"
;;
4)
echo "$number Tb"
;;
*)
echo "$1 b"
;;
esac
}

printresults(){
while [ "$counter" -ge 0 ] ; do
NOW=`/bin/date`
counter=$(($counter - 1))
if [ "$rxbytes" ] ; then
oldrxbytes="$rxbytes"
oldtxbytes="$txbytes"
fi
rxbytes=$(printrxbytes)
txbytes=$(printtxbytes)
if [ "$oldrxbytes" -a "$rxbytes" -a "$oldtxbytes" -a "$txbytes" ] ; then
echo "$NOW RXbytes = $(bytestohumanreadable $(($rxbytes - $oldrxbytes))) TXbytes = $(bytestohumanreadable $(($txbytes - $oldtxbytes)))"
else
echo "Monitoring $interface every $sleep seconds. (RXbyte total = $(bytestohumanreadable $rxbytes) TXbytes total = $(bytestohumanreadable $txbytes))"
fi
sleep "$sleep"
done
}

readargs "$@"
checkargs
printresults




Example usage:

To monitor eth0 every 10 seconds, a total of 999 times:
networktraffic -i eth0 -s 10 -c 999

To monitor eth3 every second
networktraffic -i eth3 -s 1 -c 99999



Saturday, January 2, 2010

Ubuntu: Fixing error "Error: /etc/resolv.conf must be a symlink"

My error:

root@myserver:~# sudo /etc/init.d/networking start
* Configuring network interfaces...
resolvconf: Error: /etc/resolv.conf must be a symlink
run-parts: /etc/network/if-up.d/000resolvconf exited with return code 1


The fix:

root@myserver:~# cd /etc
root@myserver:/etc# sudo rm -rf /etc/resolv.conf
(if you can't remove the file, try: chattr -i /etc/resolv.conf )
root@myserver:/etc# sudo ln -s /etc/resolvconf/run/resolv.conf
root@myserver:/etc#


Test if the solution worked:

root@zeta:/etc# /etc/init.d/networking restart
* Reconfiguring network interfaces... [ OK ]
root@zeta:/etc#

Sunday, December 6, 2009

Ubuntu - fixing the issue of /etc/resolv.conf being overwritten - edit /etc/dhcp3/dhclient.conf instead!

When you need a DNS server in order to access any sites, editing /etc/resolv.conf doesn't always work, because the file gets overwritten regularly. So entries like these in /etc/resolv.conf won't stick:

search yahoo.com
nameserver 10.1.10.2

(in my example, 10.1.10.2 is the IP of my router, which in turn has the proper DNS servers from Comcast, but I could use the DNS servers from Comcast just as well)

Since editing /etc/resolv.conf may not work, instead run "nano /etc/dhcp3/dhclient.conf" and add the entries at the end of that file.
Example for Comcast:

supersede domain-name "example.com";
prepend domain-name-servers 68.87.74.162, 68.87.68.162;

(where domain-name-servers can take a comma-separated list)
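
To apply the change without a reboot, the DHCP lease can be released and renewed (a sketch; the interface name eth0 is an assumption):

```shell
sudo dhclient -r eth0    # release the current lease
sudo dhclient eth0       # request a new lease; the dhclient.conf entries get applied
cat /etc/resolv.conf     # the prepended nameservers should now be listed first
```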

Saturday, December 5, 2009

Setting up MRTG on a managed switch (ex: on a SMC8024L switch)

Basic idea:

1
SMC switches have the default IP 192.168.2.10. Log into the switch via the HTTP interface from a PC on the same configured network, then change the IP (e.g. to 10.1.10.10) and the community string (from "public" to "digitalagora").

2
Install the snmp tools (example used Ubuntu), or "apt-cache search snmp" and find your favorite tools.
apt-get install snmp

3
Test if you see the switch:

snmpwalk -v 2c -Os -c digitalagora 10.1.10.10 system

sysDescr.0 = STRING: SMC8024L
sysObjectID.0 = OID: enterprises.202.20.59
sysUpTimeInstance = Timeticks: (981900) 2:43:39.00
sysContact.0 = STRING: SYSTEM CONTACT
sysName.0 = STRING: SMC8024L2
sysLocation.0 = STRING: SYSTEM LOCATION
sysServices.0 = INTEGER: 3

4
install mrtg by following: http://oss.oetiker.ch/mrtg/doc/mrtg-unix-guide.en.html Example:

wget http://www.zlib.net/zlib-1.2.3.tar.gz
gunzip -c zlib-*.tar.gz | tar xf -
rm zlib-*.tar.gz
mv zlib-* zlib
cd zlib
./configure
make
cd ..


5
run the cfgmaker tool to create your /etc/mrtg.cfg file, telling it to connect to the digitalagora community at the switch's IP. This creates a nice big config file with all the SNMP info for ports that have seen traffic; the rest are commented out.

cfgmaker --global 'WorkDir: /opt/website/mrtg' \
--global 'Options[_]: growright' \
--output /etc/mrtg.cfg \
digitalagora@10.1.10.10


6
run mrtg so that it sees the latest settings
env LANG=C /usr/bin/mrtg /etc/mrtg.cfg


7
Rebuild the web site's index file
indexmaker /etc/mrtg.cfg > /opt/website/mrtg/index.html


8
Look at the output.
http://localhost/mrtg/
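
9
To keep the graphs current, mrtg is normally run from cron every five minutes. A typical root crontab entry (paths match the example above):

```shell
*/5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg.cfg
```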

Wednesday, December 2, 2009

Tinyproxy - enabling "Anonymous Host" and "Anonymous Authorization"

Testing headers at: www.digitalagora.com/headers


Client's headers when hitting digitalagora.com through Tinyproxy with disabled settings:
#Anonymous "Host"
#Anonymous "Authorization"
where in my example 8 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)
Accept = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent = Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept-Charset = ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding = gzip,deflate
Accept-Language = en-us,en;q=0.5



Client's headers when hitting digitalagora.com through Tinyproxy with enabled settings:
Anonymous "Host"
Anonymous "Authorization"
where in my example 3 header fields are showing:

Host = digitalagora.com
Connection = close
Via = 1.1 firewallserver (tinyproxy/1.6.5)

Tuesday, December 1, 2009

Ubuntu and Apache: How to fix the error: "you have chosen to open ... which is a: application/x-httpd-php"



Edit the Apache configuration file:
sudo nano /etc/apache2/apache2.conf

Find these 2 lines:
AddType application/x-httpd-php .php .phtml
AddType application/x-httpd-php-source .phps

Comment them by adding a pound sign in front:
#AddType application/x-httpd-php .php .phtml
#AddType application/x-httpd-php-source .phps

Add the following 2 lines right under the first 2 lines:
AddType application/x-httpd-php .php .phtml
AddType application/x-httpd-php-source .phps

Restart Apache:
sudo /etc/init.d/apache2 restart

Close your browser to clear its cache, and access your web page again.

Done.

-----

In a little more detail:

You can telnet to port 80 and view the web page. From the prompt, type:
telnet localhost 80
and then type "GET / HTTP/1.0" without the quotes, and press ENTER two times.
Note that there is a space before the slash and a space after the slash.
The page should then display. Here is an example of before the fix:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: x-httpd-php

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the Content-Type is: x-httpd-php. Now, after the change:

root@myfunserver:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 01 Dec 2009 21:40:03 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.4 with Suhosin-Patch
Last-Modified: Fri, 20 Nov 2009 08:18:29 GMT
ETag: "b7a6b-f-478c91ee61f40"
Accept-Ranges: bytes
Content-Length: 15
Connection: close
Content-Type: text/html

Website works

Connection closed by foreign host.
root@myfunserver:~#

Notice that the content type is text/html.

How to play videos in Ubuntu

sudo apt-get update
sudo apt-get install vlc vlc-plugin-esd

Wednesday, November 25, 2009

How to back up a system to another server using rsync and get an email with the result

1. Create a trust relationship between the system which needs to be backed up and the system where the files will get backed up.

Replace ALPHA and BETA with the proper server names. (to add more server names, "sudo nano /etc/hosts" and add the ip and the name you wish to assign to each server)

ALPHA = server 1, where to log in from (on ALPHA, do this as user root)
BETA = server 2, destination where we log in

ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys

2. As root, create a backup script

Replace "abc" with the name of your server which you are backing up.

Create the file: "nano /usr/bin/backupabc" and paste the script below, change:
- the backup server name, ex: mybackupserver
- the user id you use on the backup server, ex: my-user-id-on-backup-server
- the backup paths on the backup server, ex: /mnt/mybigdrive/backups/abc/
- your email address, ex: my.lovely.email@gmail.com (make sure you install mail: "sudo apt-get install mailutils" )


#!/bin/sh

LOG=/tmp/backupabc.log

START=$(date +%s)
echo "" > $LOG
echo "Start " >> $LOG
echo `date` >> $LOG

rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /bin/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/bin/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /boot/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/boot/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /etc/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/etc/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /home/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/home/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /lib/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/lib/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /opt/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/opt/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /root/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/root/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /sbin/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/sbin/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /srv/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/srv/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /usr/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/usr/ >> $LOG
rsync --verbose --links --recursive --delete-during --human-readable --progress --itemize-changes /var/ my-user-id-on-backup-server@mybackupserver:/mnt/mybigdrive/backups/abc/var/ >> $LOG

END=$(date +%s)
DIFF=$(( $END - $START ))

echo "I have ran the /usr/bin/backupabc script and it took $DIFF seconds" >> $LOG
echo "\nEnd " >> $LOG
echo `date` >> $LOG

cat $LOG |  mail -s "mybackupserver: backed up abc" my.lovely.email@gmail.com



3. As root, run the script manually:
/usr/bin/backupabc
OR
add the script to the crontab to run every day at 10 pm (22 hrs) (as root):
crontab -e   (if prompted, use "nano" as the editor)
0 22 * * * /usr/bin/backupabc

To see the log while it's being built, open another shell and:
tail -f /tmp/backupabc.log

Tuesday, November 24, 2009

How to set up Apache and limit access per IP - mod_limitipconn.so module

# Get Apache with the apxs2 tool
apt-get install apache2-threaded-dev

# test that apxs works
which apxs2


nano /etc/apache2/apache2.conf

and add this at the bottom:

# This command is always needed
ExtendedStatus On

# Only needed if the module is compiled as a DSO
LoadModule limitipconn_module lib/apache/mod_limitipconn.so

<IfModule mod_limitipconn.c>

    # Set a server-wide limit of 10 simultaneous downloads per IP,
    # no matter what.
    MaxConnPerIP 10
    <Location /somewhere>
        # This section affects all files under http://your.server/somewhere
        MaxConnPerIP 3
        # exempting images from the connection limit is often a good
        # idea if your web page has lots of inline images, since these
        # pages often generate a flurry of concurrent image requests
        NoIPLimit image/*
    </Location>

    <Directory /home/*/public_html>
        # This section affects all files under /home/*/public_html
        MaxConnPerIP 1
        # In this case, all MIME types other than audio/mpeg and video*
        # are exempt from the limit check
        OnlyIPLimit audio/mpeg video
    </Directory>
</IfModule>

# Modify the "/somewhere" to match the alias (not directory) which you are protecting.



# Add this mod at the bottom of the actions.load file:
  cd /etc/apache2/mods-available
  nano actions.load
# Add this at the end of the file:
  LoadModule evasive20_module /usr/lib/apache2/modules/mod_evasive20.so

# edit the httpd conf (not the apache2.conf) config file:
  nano /etc/apache2/httpd.conf
# add the following 2 lines at the bottom of the file, with the pound sign in front;
# this will ensure that in the following steps, the "make install" won't barf.

# Dummy LoadModule directive to aid module installations
#LoadModule dummy_module /usr/lib/apache2/modules/mod_dummy.so




# Download the limit ip connection module and set it up
  wget http://dominia.org/djao/limit/mod_limitipconn-0.23.tar.bz2
  tar -jxvf mod_limitipconn-0.23.tar.bz2
  cd mod_limitipconn-0.23
  nano Makefile
# Look for apxs and modify it to apxs2
  make
  make install
# If the "make install" barfs with an error such as:
  apxs:Error: Activation failed for custom /etc/apache2/httpd.conf file..
  apxs:Error: At least one `LoadModule' directive already has to exist..
then you forgot to edit the httpd.conf file and add the dummy module entry (see above).
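
Once the module builds and the LoadModule line is in place, a restart plus a module listing confirms it loaded (apache2ctl is the Debian/Ubuntu name of the Apache control tool):

```shell
/etc/init.d/apache2 restart
apache2ctl -M | grep limitipconn
```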

Friday, November 20, 2009

How to convert an .avi to .mpeg in Ubuntu

sudo apt-get install libavcodec-unstripped-51
sudo apt-get install ffmpeg
ffmpeg -i holiday.avi -aspect 16:9 -target ntsc-dvd holiday.mpeg
(and then wait a long time)
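
To convert a whole directory of .avi files, the same command can be wrapped in a loop; the output name is derived by stripping the .avi suffix:

```shell
# convert every .avi in the current directory to an ntsc-dvd .mpeg
for f in *.avi; do
    [ -e "$f" ] || continue    # skip when the glob matches nothing
    ffmpeg -i "$f" -aspect 16:9 -target ntsc-dvd "${f%.avi}.mpeg"
done
```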

Sunday, November 15, 2009

How to convert uif to iso

This information is copied from: http://wesleybailey.com/articles/convert-uif-to-iso
Tested successfully.
-----------------------------------


Convert UIF to ISO

The fastest way to convert a UIF image to an ISO image is UIF2ISO, a speedy command-line tool that will save you the hassle of installing Wine and MagicISO.

This is how I downloaded and installed UIF2ISO, written by Luigi Auriemma. - http://aluigi.altervista.org/

1. We first need to install zlib and OpenSSL with apt-get.

sudo apt-get install zlib1g zlib1g-dev libssl-dev build-essential

2. Now we can download UIF2ISO with wget from a terminal, or from the author's site.

wget http://aluigi.altervista.org/mytoolz/uif2iso.zip

3. Once you have the file downloaded, unzip it and cd into the directory.

unzip uif2iso.zip
cd src

4. Finally compile the source, and create the executable.

make
sudo make install

5. Now you can convert the .uif file to an .iso with the following command:

uif2iso example.uif output.iso

Mounting an ISO

You don't necessarily need to burn a cd in order to access the files within the ISO. You can mount it with some simple commands.

Here is how to mount the ISO from command line.

sudo modprobe loop
sudo mkdir /media/ISOPoint
sudo mount /media/file.iso /media/ISOPoint/ -t iso9660 -o loop
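
When finished with the files, unmount the image to release the loop device:

```shell
sudo umount /media/ISOPoint
```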


Friday, November 13, 2009

Eratosthenes Sieve prime number benchmark in Java




// Eratosthenes Sieve prime number benchmark in Java

public class Sieve // extends java.applet.Applet implements Runnable
{
    String results1, results2;

    void runSieve()
    {
        int SIZE = 8190;
        boolean flags[] = new boolean[SIZE + 1];
        int i, prime, k, count;
        int iterations = 0;
        double seconds = 0.0;
        int score = 0;
        long startTime, elapsedTime;

        startTime = System.currentTimeMillis();
        count = 0;
        while (true) {
            count = 0;
            for (i = 0; i <= SIZE; i++) flags[i] = true;
            for (i = 0; i <= SIZE; i++) {
                if (flags[i]) {
                    prime = i + i + 3;
                    for (k = i + prime; k <= SIZE; k += prime)
                        flags[k] = false;
                    count++;
                }
            }
            iterations++;
            elapsedTime = System.currentTimeMillis() - startTime;
            if (elapsedTime >= 10000) break;
        }
        seconds = elapsedTime / 1000.0;
        score = (int) Math.round(iterations / seconds);
        results1 = iterations + " iterations in " + seconds + " seconds";
        if (count != 1899)
            results2 = "Error: count <> 1899";
        else
            results2 = "Sieve score = " + score;
    }

    public static void main(String args[])
    {
        Sieve s = new Sieve();
    }

    public Sieve()
    {
        System.out.println("Running Sieve - please wait 10 seconds for results...");
        runSieve();
        System.out.println(results1);
        System.out.println(results2);
    }
}
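
To compile and run the benchmark from the shell (assuming the source above is saved as Sieve.java):

```shell
javac Sieve.java
java Sieve
```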



Wednesday, November 11, 2009

Ubuntu: How to fix the apt-get update error: W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available

The problem is during apt-get update:

...
Reading package lists... Done
W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B9FBE5158B3AFA9
W: You may want to run apt-get update to correct these problems


Solution:

gpg --keyserver keyserver.ubuntu.com --recv 8B9FBE5158B3AFA9
gpg --export --armor 8B9FBE5158B3AFA9 | sudo apt-key add -


Update should work now:

sudo apt-get update

Sunday, November 8, 2009

How to mount a remote file system in Ubuntu

# install the utility
sudo apt-get install sshfs

# make a directory where to mount the remote file system
sudo mkdir /mnt/backups
sudo chown YOURUSERNAME /mnt/backups

# mount the remote drive
sshfs YOURUSERNAME@192.168.1.123:/home/YOURUSERNAME/backups /mnt/backups

# check to see that the files are mounted
ls -la /mnt/backups
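If you mount several remote directories, it helps to build the sshfs invocation from variables instead of retyping it. A minimal sketch; remote_mount_cmd is a hypothetical helper, and the user/host/paths are the example values from above:

```shell
#!/bin/sh
# Build the sshfs command line for a given user, host, remote dir, mount point.
remote_mount_cmd() {
    printf 'sshfs %s@%s:%s %s' "$1" "$2" "$3" "$4"
}

cmd=$(remote_mount_cmd alice 192.168.1.123 /home/alice/backups /mnt/backups)
echo "$cmd"
```

To unmount when done, `fusermount -u /mnt/backups` works without root.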

How to listen to mp3s in Ubuntu/Linux

sudo apt-get install amarok
sudo apt-get install libxine1-ffmpeg

(Amarok needs the libxine codec to decode mp3s)

Saturday, November 7, 2009

How to log into another server without being asked for a password - in 4 steps.

ALPHA = server 1, where to log in from
BETA = server 2, destination where we log in


ALPHA: ssh-keygen -t rsa
BETA: mkdir .ssh
ALPHA: cat .ssh/id_rsa.pub | ssh user@BETA 'cat >> .ssh/authorized_keys'
BETA: chmod 644 .ssh/authorized_keys


To establish a mirror relationship, exchange server ALPHA with BETA and run through the 4 steps again.
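The heart of step 3 is just appending the public key line to authorized_keys on BETA. A minimal sketch of that mechanic, demonstrated against local temp files so it is safe to run anywhere (the key material and alice@ALPHA comment are fake):

```shell
#!/bin/sh
# Simulate step 3: append a public key to authorized_keys.
tmp=$(mktemp -d)
echo "ssh-rsa AAAAB3FAKEKEY alice@ALPHA" > "$tmp/id_rsa.pub"   # hypothetical key
touch "$tmp/authorized_keys"

cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"   # what the ssh pipe does on BETA
chmod 644 "$tmp/authorized_keys"                  # step 4

count=$(grep -c "alice@ALPHA" "$tmp/authorized_keys")
echo "$count"
rm -rf "$tmp"
```

On systems that ship it, `ssh-copy-id user@BETA` performs steps 2-4 in one command.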

Friday, October 16, 2009

How to configure and install Tinyproxy


Download Tinyproxy - go to https://www.banu.com/tinyproxy/download/ and download the latest version
ex: wget https://www.banu.com/pub/tinyproxy/1.6/tinyproxy-1.6.5.tar.gz

Unpack
tar xzvf tinyproxy-1.6.5.tar.gz

Build
cd tinyproxy-1.6.5
./configure
make
sudo make install


Edit the configuration file:
nano /usr/local/etc/tinyproxy/tinyproxy.conf

or use my version of it:


sudo su -
cd /usr/local/etc/tinyproxy
echo "" > tinyproxy.conf
nano tinyproxy.conf

and paste the following. Make sure to change YOUR_USER_NAME to the name of the
user account that runs Tinyproxy.


# ==================================================================
##
## tinyproxy.conf -- tinyproxy daemon configuration file
##

#
# Name of the user the tinyproxy daemon should switch to after the port
# has been bound.
#
User YOUR_USER_NAME
Group YOUR_USER_NAME

#
# Port to listen on.
#
Port 8888

#
# If you have multiple interfaces this allows you to bind to only one. If
# this is commented out, tinyproxy will bind to all interfaces present.
#
#Listen 192.168.0.1
Listen 127.0.0.1
#
# The Bind directive allows you to bind the outgoing connections to a
# particular IP address.
#
#Bind 192.168.0.1

#
# Timeout: The number of seconds of inactivity a connection is allowed to
# have before it is closed by tinyproxy.
#
Timeout 600

#
# ErrorFile: Defines the HTML file to send when a given HTTP error
# occurs. You will probably need to customize the location to your
# particular install. The usual locations to check are:
# /usr/local/share/tinyproxy
# /usr/share/tinyproxy
# /etc/tinyproxy
#
# ErrorFile 404 "/usr/share/tinyproxy/404.html"
# ErrorFile 400 "/usr/share/tinyproxy/400.html"
# ErrorFile 503 "/usr/share/tinyproxy/503.html"
# ErrorFile 403 "/usr/share/tinyproxy/403.html"
# ErrorFile 408 "/usr/share/tinyproxy/408.html"

#
# DefaultErrorFile: The HTML file that gets sent if there is no
# HTML file defined with an ErrorFile keyword for the HTTP error
# that has occurred.
#
DefaultErrorFile "/usr/share/tinyproxy/default.html"

#
# StatFile: The HTML file that gets sent when a request is made
# for the stathost. If this file doesn't exist a basic page is
# hardcoded in tinyproxy.
#
StatFile "/usr/share/tinyproxy/stats.html"

#
# Where to log the information. Either LogFile or Syslog should be set,
# but not both.
#
Logfile "/var/log/tinyproxy.log"
# Syslog On

#
# Set the logging level. Allowed settings are:
# Critical (least verbose)
# Error
# Warning
# Notice
# Connect (to log connections without Info's noise)
# Info (most verbose)
# The LogLevel logs from the set level and above. For example, if the LogLevel
# was set to Warning, then all log messages from Warning to Critical would be
# output, but Notice and below would be suppressed.
#
LogLevel Info

#
# PidFile: Write the PID of the main tinyproxy thread to this file so it
# can be used for signalling purposes.
#
PidFile "/var/run/tinyproxy.pid"

#
# Include the X-Tinyproxy header, which has the client's IP address when
# connecting to the sites listed.
#
#XTinyproxy mydomain.com

#
# Turns on upstream proxy support.
#
# The upstream rules allow you to selectively route upstream connections
# based on the host/domain of the site being accessed.
#
# For example:
# # connection to test domain goes through testproxy
# upstream testproxy:8008 ".test.domain.invalid"
# upstream testproxy:8008 ".our_testbed.example.com"
# upstream testproxy:8008 "192.168.128.0/255.255.254.0"
#
# # no upstream proxy for internal websites and unqualified hosts
# no upstream ".internal.example.com"
# no upstream "www.example.com"
# no upstream "10.0.0.0/8"
# no upstream "192.168.0.0/255.255.254.0"
# no upstream "."
#
# # connection to these boxes go through their DMZ firewalls
# upstream cust1_firewall:8008 "testbed_for_cust1"
# upstream cust2_firewall:8008 "testbed_for_cust2"
#
# # default upstream is internet firewall
# upstream firewall.internal.example.com:80
#
# The LAST matching rule wins the route decision. As you can see, you
# can use a host, or a domain:
# name matches host exactly
# .name matches any host in domain "name"
# . matches any host with no domain (in 'empty' domain)
# IP/bits matches network/mask
# IP/mask matches network/mask
#
#Upstream some.remote.proxy:port

#
# This is the absolute highest number of threads which will be created. In
# other words, only MaxClients number of clients can be connected at the
# same time.
#
MaxClients 100

#
# These settings set the upper and lower limit for the number of
# spare servers which should be available. If the number of spare servers
# falls below MinSpareServers then new ones will be created. If the number
# of servers exceeds MaxSpareServers then the extras will be killed off.
#
MinSpareServers 5
MaxSpareServers 20

#
# Number of servers to start initially.
#
StartServers 100

#
# MaxRequestsPerChild is the number of connections a thread will handle
# before it is killed. In practice this should be set to 0, which disables
# thread reaping. If you do notice problems with memory leakage, then set
# this to something like 10000
#
MaxRequestsPerChild 0

#
# The following are the authorization controls. If there are any access
# control keywords then the default action is to DENY. Otherwise, the
# default action is ALLOW.
#
# Also the order of the controls is important. The incoming connections
# are tested against the controls based on order.
#
Allow 127.0.0.1
#Allow 192.168.1.0/25

#
# The "Via" header is required by the HTTP RFC, but using the real host name
# is a security concern. If the following directive is enabled, the string
# supplied will be used as the host name in the Via header; otherwise, the
# server's host name will be used.
#
ViaProxyName "tinyproxy"

#
# The location of the filter file.
#
#Filter "/etc/tinyproxy/filter"

#
# Filter based on URLs rather than domains.
#
#FilterURLs On

#
# Use POSIX Extended regular expressions rather than basic.
#
#FilterExtended On

#
# Use case sensitive regular expressions.
#
#FilterCaseSensitive On

#
# Change the default policy of the filtering system. If this directive is
# commented out, or is set to "No" then the default policy is to allow
# everything which is not specifically denied by the filter file.
#
# However, by setting this directive to "Yes" the default policy becomes to
# deny everything which is _not_ specifically allowed by the filter file.
#
#FilterDefaultDeny Yes

#
# If an Anonymous keyword is present, then anonymous proxying is enabled.
# The headers listed are allowed through, while all others are denied. If
# no Anonymous keyword is present, then all headers are allowed through. You
# You must include quotes around the headers.
#
#Anonymous "Host"
#Anonymous "Authorization"

#
# This is a list of ports allowed by tinyproxy when the CONNECT method
# is used. To disable the CONNECT method altogether, set the value to 0.
# If no ConnectPort line is found, all ports are allowed (which is not
# very secure.)
#
# The following two ports are used by SSL.
#
ConnectPort 443
ConnectPort 563
ConnectPort 6667
ConnectPort 6668
ConnectPort 6669
ConnectPort 7000
ConnectPort 80
# ==================================================================

Make some config files readable:
sudo chmod a+r /usr/local/etc/tinyproxy/tinyproxy.conf

Create the log file:
sudo touch /var/log/tinyproxy.log
sudo chmod a+rw /var/log/tinyproxy.log
sudo touch /var/run/tinyproxy.pid
sudo chmod a+rw /var/run/tinyproxy.pid





You can optionally create a startup script for tinyproxy, in your home directory:
nano starttinyproxy
and paste this:

#!/bin/sh
killall tinyproxy
/usr/local/sbin/tinyproxy -c /usr/local/etc/tinyproxy/tinyproxy.conf -d &
sleep 5
tail /var/log/tinyproxy.log

save it, and make it executable:
chmod u+x starttinyproxy



Exit from root, and under your account, start up Tinyproxy:
./starttinyproxy
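The starttinyproxy script uses killall, which stops every tinyproxy process on the machine. Since the config declares a PidFile, a gentler stop helper could read the PID from it instead. A minimal sketch of that idea, demonstrated against a temp file so it runs anywhere (the PID 12345 is fake; in real use the path would be /var/run/tinyproxy.pid):

```shell
#!/bin/sh
# Read the PID tinyproxy recorded and build a targeted kill command,
# instead of killall'ing every tinyproxy on the box.
pidfile=$(mktemp)
echo 12345 > "$pidfile"        # pretend tinyproxy wrote its PID here

pid=$(cat "$pidfile")
stop_cmd="kill $pid"
echo "$stop_cmd"               # in real use: run it, don't echo it
rm -f "$pidfile"
```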

Wednesday, October 7, 2009

How to set up the Linksys WUSB300N wireless N device to work with Linux/Ubuntu


Credits: mcdsco - http://ubuntuforums.org/showthread.php?t=530772

# start a shell, and log in as root
sudo su -

# install ndiswrapper for your system; versions vary, so get a recent release
cd /root
wget http://downloads.sourceforge.net/project/ndiswrapper/stable/1.55/ndiswrapper-1.55.tar.gz?use_mirror=softlayer
gzip -d ndiswrapper-1.55.tar.gz
tar -xvf ndiswrapper-1.55.tar
cd ndiswrapper-1.55
make install


# get the relevant files for the Linksys WUSB300N wireless device
mkdir /opt/ndis
cd /opt/ndis
wget http://www.atvnation.com/WUSB300N.tar
tar xvf WUSB300N.tar -C /opt/ndis/
cd /opt/ndis/Drivers

# install the drivers
ndiswrapper -i netmw245.inf

# plug the USB wireless device into the PC and:
modprobe ndiswrapper

# check to see if the device is seen:
dmesg | grep ndis
[ 4336.851339] ndiswrapper version 1.53 loaded (smp=yes, preempt=no)
[ 4336.890513] usbcore: registered new interface driver ndiswrapper
[ 4636.519061] ndiswrapper: driver netmw245 (Linksys, A Division of Cisco Systems, Inc.,12/07/2006,1.0.5.1) loaded


At this point, the device should work. Go to the wireless settings, set up your connection.
Type "ifconfig" to see the network configuration, the wireless device should show up under "wlan0".
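The dmesg check can also be scripted so a post-boot script knows whether the driver came up. A minimal sketch matching the driver-loaded line shown above (the sample line is copied from the post's output):

```shell
#!/bin/sh
# Decide whether the WUSB300N driver loaded, based on the dmesg line format.
line='[ 4636.519061] ndiswrapper: driver netmw245 (Linksys, A Division of Cisco Systems, Inc.,12/07/2006,1.0.5.1) loaded'

case "$line" in
    *"driver netmw245"*loaded) status="loaded" ;;
    *)                         status="missing" ;;
esac
echo "$status"
```

In real use the line would come from `dmesg | grep ndiswrapper | tail -1`.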

Tuesday, October 6, 2009

College of Business at FSU



College of Business faculty: http://cob.fsu.edu/faculty/faculty_staff.cfm?type=2


========================
Some fun core courses
========================
ACG5026 Financial Reporting and Managerial Control
This course provides a basic understanding of accounting systems and financial statements as a foundation for analysis. The course also addresses cost systems and controls as they pertain to organizational control. Cannot be taken for credit for the Master of Accounting degree.
9780470128824 Financial Accounting in Economic Context Pratt 2009 7TH Required Textbook
9780967507200 Code Blue (w/264 or 261 pgs) McDermott 2002 3RD Required Textbook
ACG5026 Course Notes Target Copy Required Other
Stevens, Douglas E, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=399

========================
BUL5810 The Legal & Ethical Environment of Business
no sections open for Spring 2010
========================
FIN5425 Problems in Financial Management
no sections open for Spring 2010
========================
ISM5021 Problems in Financial Management
Applied course in concepts and techniques used in the design and implementation of management information systems and decision support systems, with emphasis on management of these systems
Textbooks and materials not yet assigned
Wasko, Molly M, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=305
T R 2:00-3:15 RBA 0202
========================
MAR5125 Marketing Strategy in the Global Environment
This course examines the business-level marketing strategy in the context of global markets and uses the marketing-planning process as a framework for understanding how global environments, markets, and institutions affect the strategic marketing operations of the global business enterprise.
9780324362725 Marketing Strategy Ferrell 2008 4TH Required Textbook
9781591396192 Blue Ocean Strategy Kim 2005 Required Textbook
Hartline, Michael D, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=306
========================
MAN5245 Leadership and Organizational Behavior
This course offers a dynamic examination of managerial concepts of human behavior in work organizations.
9780324578737 Organizational Behavior Nelson 2009 6th Required Textbook
Douglas, Ceasar, http://cob.fsu.edu/man/hrcenter/faculty.cfm
========================
MAN5501 Production and Operations Management
Develops a conceptual framework which is useful in describing the nature of the operations function, with emphasis on identifying basic issues in managing the operations of a service organization.
9780324662559 Operations Management David Collier and James Evans 2009-2010 Required Textbook
Smith, Jeffery S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=421
========================
MAN5716 Economics and Business Conditions
Problems of managing the firm in relation to the changing economic environment. Analysis of major business fluctuations and development of forecasting techniques.
No textbook required
Christiansen, William A, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=25
========================
MAN5721 Strategy and Business Policy
The course covers the relation between theories and practices of management, and focuses on utilizing methodologies and theories for strategic decision making.
9780132341387 Strategic Management: Concepts & Cases Carpenter 2009 2ND Recommended Textbook
M W 9:30 - 10:45 RBA 0202
Holcomb, Timothy R, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=427
========================


========================
Flex options
========================
FIN5515 Investments
This course offers an analysis of financial assets with emphasis on the securities market, the valuation of individual securities, and portfolio management.
9780324656121 Investment Analysis and Portfolio Management Reilly and Brown 9th Required Textbook
T R 3:35-4:50PM
Doran, James S, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=368
========================
ISM5315 Project Management
no sections open for Spring 2010
========================
MAR5465 Supply Chain Marketing
no sections open for Spring 2010
========================
RMI5011 Fundamentals of Risk Management
This course develops concepts such as time value of money, statistical analysis, information technology, and management of risk exposure. Topics include risk fundamentals, risk management, insurer operations, and insurance regulation.
9780072339703 Risk Management & Insurance Harrington 2004 2ND Required Textbook
M W 11am-12:15pm
Born, Patricia H, http://cob.fsu.edu/faculty/display_faculty_info.cfm?pID=458
========================

Thursday, September 10, 2009

Summary of the talk by Prof. Ted Baker


Alan Lupsha

Professor Ted Baker’s area of research is real-time systems. He focuses on real-time runtime systems, real-time scheduling and synchronization and real-time software standards.

Real-time scheduling for multiprocessors involves finding ways to guarantee deadlines for tasks which are scheduled on multiprocessor systems. A main problem with scheduling is that it is very difficult to meet constraints, given specific computational workloads. As workloads vary, meeting given constraints can be achieved with different guarantees. For example, the guarantee of execution differs when given constraints for fault tolerance, window of execution or energy usage. The quality of scheduling can vary as well, as this quality can quantify how well the schedule guarantees the meeting of deadlines or how far past the deadline a task completes. Once an algorithm is able to schedule a workload, a schedule can also vary in sensitivity, in proportion to the variation in the execution parameters.

Professor Baker looks at workload models which involve jobs, tasks and task systems. Jobs are units of computation that can be scheduled with a specific arrival time, worst-case execution time, or deadline. Tasks are sequences of jobs, and can depend on other tasks. Sporadic tasks have two specific qualities: they have a minimum inter-arrival time, and they have a worst case execution time. Task systems are sets of tasks, where tasks can be related or they can be independent (scheduled without consideration of interactions, precedence or coordination).

Scheduling involves models, which can be defined as having a set of (identical) processors, shared memory, and specific algorithms. These algorithms can be preemptive or non-preemptive, on-line (decisions are made on the fly as instructions arrive) or off-line, and global or partitioned (split amongst processors where they can predict in advance the workload for each processor). There are three typical scheduling algorithms and tests. The first one is “fixed task-priority scheduling”, where the highest priority tasks run first. The second is “earliest deadline first”, where higher loads are handled without missing the deadline (these algorithms are easier to implement). The third type of algorithms (which are not used in single processing systems but only in multi-processor systems) are “earliest deadline zero laxity”, where the execution of a job can be delayed without missing the given deadline.

The difficulty of scheduling is that there is no practical algorithm for scheduling a sporadic task. One example of a scheduling test is the density test, where one can analyze what fraction of the processor is needed to serve a given task. Professor Baker researches task scheduling and is looking for acceptable algorithms which are practical, given specific processing constraints.

Tuesday, September 8, 2009

Summary of the Talk by Prof. FeiFei Li


Alan Lupsha

Professor FeiFei Li researches Database Management and Database technologies. His research focuses on efficient indexing, querying and managing large scale databases, spatio-temporal databases and applications, and sensor and stream databases.

Efficient indexing, querying and managing large scale databases deals with problems such as retrieving structured data from the web and automating the process of identifying the structure of web sites (ex. to create customized reports for users). It is important to interpret web pages and to identify data tree structures. This allows one to first create a schema for the structure of the data, and then to integrate information from different sources together in a meaningful way. The topic of indexing higher dimensional data (using tree structures and multi dimensional structures) deals with space partitioning that indexes data anywhere from 2 to 6 dimensions.

The topic of spatio-temporal databases and applications deals with the execution of queries, like finding solutions to NP-hard problems such as the traveling salesman problem. A solution uses a greedy algorithm, which has a start node location and finds the nearest neighbor in each predefined category of nodes. By minimizing the sum distance (using the minimum sum distance algorithm), a path from a start to an end node is found in such a way that each category is visited, and the solution is at most 3 times the cost of the optimal solution.

Sensor and stream databases deal with the integration of sensors into network models. A large set of sensors is distributed in a sensor field, and a balance is sought to solve problems such as data flow between sensors, hierarchy of sensors and efficient data transmission for the purpose of saving battery life. Professor Li analyzes the best data flow models between sensors and different ways to group sensors so that hub nodes transmit data further to other hub nodes (an example of such an application is the monitoring of temperatures on an active volcano). One can not use broadcast since this would drain the sensors’ battery life. Thus, routing methods and fail over mechanisms are examined, to ensure that all sensor data is properly being read.

Professor Li also researches problems with the method of Independent and Identically Distributed (IID) random noise, which introduces errors in data sets for the purpose of hiding secret data, while maintaining correct data averages and other data benchmarks (for example hiding real stock data or employees’ salaries, but preserving averages). The problem with IID noise is that attackers can filter out outliers in data and still extract the data that is meant to remain secret. A solution to this problem is to add noise to the original component of the data set by adding the same amount of noise, but in parallel to the principal component. This yields more securely obfuscated data.

Thursday, September 3, 2009

Summary of the talk by Prof. Zhenhai Duan


Alan Lupsha

Professor Zhenhai Duan researches an accountable and dependable Internet with good end-to-end performance. There is currently a serious problem with the Internet because it lacks accountability and there is not enough law enforcement. It is very hard to find out who did something wrong because hackers do not worry about breaking the law and they cover their tracks in order to not get caught. There is a need to design protocols and architectures which can prevent bad activities from happening and which can more easily identify attackers.

The current Internet lacks accountability, as even if there are no attacks, there are still many problems. For example, the time to recover during routing failures is too long, and DNS also has many issues. Dependable Internet defines higher accountability for banking and secure applications. End-to-end performance also needs to be high, especially for more important applications which need a greater guarantee of data delivery.

Professor Duan’s research projects include network security, solutions to network problems, routing, and intrusion detection. In IP spoofing attacks it is difficult to isolate attack traffic from legitimate traffic, and these attacks include the man-in-the-middle method with TCP hijacking and DNS poisoning, as well as reflector-based attacks with DNS requests and DDOS. There are distributed denial of service attacks which are issued from bot nets made up of millions of zombie (compromised) computers. To solve these network problems, professor Duan researches route-based filtering techniques. These techniques take advantage of the fact that hackers can spoof their source addresses but they can not control the route of the packets, while filters which know part of the network topology can isolate illegitimate traffic.

Inter-Domain Packet Filter (IDPF) systems identify feasible routes based on the BGP (an Internet domain routing protocol) updates. These systems evaluate the performance of other IDPFs based on Autonomous Systems graphs. It is hard to completely protect an Autonomous System from spoofing attacks, but IDPFs can effectively limit the spoofing capability of attackers. Using the vertex cover algorithm, one can prevent attackers in 80.8% of the networks which are attacked. If the attacks can not be prevented, one can still look at the topology and determine who are the candidates of the source packets. IDPFs are effective in helping IP traceback, as all Autonomous Systems can localize attackers. The placement of IDPFs also plays a very important role in the performance of protecting networks.

Since botnets are becoming a major security issue, and they are used in distributed denial of service attacks, spamming and identity theft, there is a greater need for utility based detection of zombie machines. The SPOT system is one system being researched which classifies messages as spam or not spam. It computes a function based on the sequential probability ratio test, using previously learned behavior of systems, and finally arriving at one of two different hypotheses, classifying messages as spam or not spam. Professor Duan is currently testing the SPOT system and improving it.

Tuesday, September 1, 2009

Summary of the talk by Prof. Mike Burmester


Alan Lupsha

Professor Mike Burmester is interested in research in areas of radio frequency identification and ubiquitous applications, mobile ad hoc networks (MANET) and sensor networks, group key exchange, trust management and network security, and digital forensics. New wireless technologies offer a great wireless medium, but unfortunately the current state of world research is not mature enough to fully understand and manage these new technologies. The fourth generation of wireless technologies, which should work both in the European Union and in the United States, will offer new challenges and opportunities for maturity in this field.

The RFID revolution will be the next big factor which will allow easier management of products. This technology is already being implemented in library systems, allowing easier book management and replacing bar codes, which require line of sight in order to scan each book. Airports are also implementing RFID for luggage management, and hospitals use RFID tags to protect newborns from being kidnapped. Different types of sensor networks are used extensively in factory floor automation, border fencing and in a plethora of military applications. Sensors will also be extensively used in monitoring biological levels in people. For example, a blood level monitor can monitor and alert a diabetic person if their sugar level is too high or too low.

Mobile ad-hoc networks (MANET) offer information routing between wireless devices which are mobile. Vehicular ad-hoc networks (VANET) are a type of mobile ad-hoc networks which allow communication between moving vehicles. These networks allow individual wireless devices to act as nodes and to route information between other communicating devices, thus reducing the need of dedicated wireless nodes. Ubiquitous networks allow applications to relocate between wireless devices, thus following a mobile user on his or her journey, while continuing to provide needed services.

These new wireless technologies will also need proper management. Some of the new issues at hand include centralizing or decentralizing systems, finding out who will protect certain systems, ensuring data security (such as confidentiality, avoiding eavesdropping, guaranteeing privacy), preserving data integrity (avoid the modification and corruption of data), and data availability (dealing with denial of service attacks, identifying rogue based stations, dealing with man in the middle attacks, detecting and avoiding session tempering and session hijacking).

There is a trade-off between security and functionality. It is extremely challenging to secure wireless networks, but in certain cases one may desire less security in order to achieve cheaper wireless products and technologies. Using secured pipelines to create point to point communication does ensure some security, but there are still problems at the physical layer, where attacks can be carried out. Hackers are keen to intercept and manipulate wireless data, making this a very attractive environment for them and creating the challenge of trying to stay ahead of the users of these technologies. This gives rise to great security threats, but it also opens up a niche for researchers to study and create new wireless network security technologies.

Thursday, August 27, 2009

Bosch 5 pin SPDT relays


Bosch 5 pin relay - SPDT single pole double throw:
====================================================
white 85 coil source: +
black 86 coil ground: -

yellow 87 normally open: to + of load
red 87a normally closed: to + of load
blue 30 common: to - of load
====================================================
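The SPDT contact behavior boils down to a two-row truth table: with the coil (85/86) energized, common pin 30 connects to 87 (normally open); at rest it connects to 87a (normally closed). A sketch of that logic (coil_state is a hypothetical helper name):

```shell
#!/bin/sh
# SPDT relay contact logic: which contact pin 30 is connected to,
# as a function of coil state.
coil_state() {
    if [ "$1" = "on" ]; then
        echo "30-87"      # coil energized: common closes to normally-open 87
    else
        echo "30-87a"     # coil at rest: common stays on normally-closed 87a
    fi
}

echo "energized: $(coil_state on)"
echo "at rest:   $(coil_state off)"
```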

Wednesday, August 26, 2009

IRC stuff

P - priority, from 0 to 10
T - type of message:
1: plaintext data
2: base64 encoded data
D - data
Can be indexed, ex: D1, D2 ... D65535


Examples:
[P:0][T:1][D1: // priority 0, type chat, data 1
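The bracketed fields are easy to pull apart with sed. A minimal sketch using the field names from the notes above (the hello payload and the exact closing-bracket syntax are assumptions, since the example message is truncated):

```shell
#!/bin/sh
# Parse a [P:..][T:..][D1:..] message into its priority, type, and data fields.
msg='[P:0][T:1][D1:hello]'

prio=$(printf '%s' "$msg"  | sed -n 's/.*\[P:\([0-9]*\)\].*/\1/p')
mtype=$(printf '%s' "$msg" | sed -n 's/.*\[T:\([0-9]*\)\].*/\1/p')
data=$(printf '%s' "$msg"  | sed -n 's/.*\[D1:\([^]]*\)\].*/\1/p')

echo "$prio $mtype $data"
```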

Tuesday, August 4, 2009

How to install Sun Glassfish sges-2_1-linux.bin

# Install the missing library
apt-get install libstdc++5

# go to the download location
cd /opt/downloads/

# execute the binary
./sges-2_1-linux.bin


Answer all the prompts. If you didn't install Java 2 SDK 5.0 or greater, go to another shell (ALT F2), log in and install it:

cd /opt/downloads/jdk
chmod u+x jdk-6u14-linux-i586.bin
./jdk-6u14-linux-i586.bin

and then move the installation:
mv jdk1.6.0_14 /opt/jdk16014

and then go back to the glassfish installation (ALT F1) and specify the installation directory:
/opt/jdk16014

When prompted for all the settings, make sure to enter an admin password (ex: adminadmin)

To start the app server:
/opt/SUNWappserver/bin/asadmin start-domain domain1

Console: http://localhost:4848


Install mysql:
apt-get install mysql-server-5.1
apt-get install mysql-client-5.1

mysql -u root -p

How to install Java after a fresh Ubuntu installation

sudo apt-get update
sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-plugin
java -version
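The final `java -version` check can be scripted, for example in a provisioning script that refuses to continue on the wrong JVM. A minimal sketch that parses the kind of line `java -version` prints (the sample line is an assumption; real output goes to stderr, so you would capture it with `2>&1`):

```shell
#!/bin/sh
# Extract the minor version from a 'java version "1.x.y"' line.
ver='java version "1.6.0_14"'
major=$(printf '%s\n' "$ver" | sed -n 's/.*"1\.\([0-9]*\)\..*/\1/p')
echo "$major"
```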

Monday, July 27, 2009

Lorem Ipsum


Friday, July 24, 2009

Attn: spacetime complaints department

A brief history of time I own. I meddle with it, I occupy the space. I use the space and time, right now. It's mine, I think, I hope. I cling to it, for time will soon run out. I'll soon fall out forever into reality's dark complement. An icy fiery crush of expanding antimatter will be my only home, they say. I fear the yet unknown.

"This time is mine!", I scream in silence. The ticking clocks lose seconds, seconds tick. "Short lived those seconds' lives must be", I wonder. They fade out silently, there is no thunder. Accomplished lives those seconds couldn't have, for they exist in fractions of my time, and now they're gone.

I ponder. On yet another larger scale, my second's now. Can't see the now, it's hidden and it's alive.

But wait a minute. Am I a tick in my machine? Am I a tiny piece of matter thrown in space and whirling fast at supersonic speeds through endless fields of void? Where is my thunder, where is my essence?

I'm partially made of matter, but do I matter? My soul screams out from my neuronal machine, it wants to exit out and float in freedom! It wants to live and mingle in the social space of memes!

My soul is stuck, it hints... My flimsy capsule is all I've got, and I don't matter much, I'm out of luck.

Am I my soul, my space, my time?

Oh beauty of unknown worldly facts, which seldom are brought forth and soothe the soul! I know you're there, I wait for you in silence. My eyes are my inquisitive tool, which bring the world into my head. They can't stop scanning, they want the world.

I hardly fall asleep, I'm restless and I suffer. My second's almost up. No singularity will save us now. The mission's almost clear: suffer away, and ponder endlessly, for that's the underlying purpose.

I look up,
The truth is burning,
I wonder.

A brief "thank you" list, to some of the many who matter, but who may not know they've made a difference.

Hans Moravec
Trey Parker and Matt Stone
Leandro Asnaghi-Nicastro
Matt Groening
Louis Lee Smith
Stanley Kubrick
Harold Ramis, Danny Rubin and Bill Murray
Vlad "Dracul" Tepes
Douglas Fisher
Christopher Columbus

Monday, June 15, 2009

How to display all HTTP headers



// needs: java.util.Enumeration, javax.servlet.http.HttpServletRequest
String eol = System.getProperty( "line.separator" );
Enumeration names = ((HttpServletRequest) request).getHeaderNames();
StringBuffer result = new StringBuffer();
while (names.hasMoreElements())
{
    String name = (String) names.nextElement();
    Enumeration values = ((HttpServletRequest) request).getHeaders( name );

    if (values != null)
    {
        while (values.hasMoreElements())
        {
            String value = (String) values.nextElement();
            result.append( name + ": " + value + "\n<br>" );
        }
    }
}
System.out.println( "BEGIN: all headers" + eol );
System.out.println( result.toString() );
System.out.println( "END: all headers" + eol );

Tuesday, May 26, 2009

Google Android - map not working in emulator? - how to fix it (in Windows) - by Alan Lupsha

Basically, you need a Google Maps API key: http://code.google.com/android/add-ons/google-apis/mapkey.html

Example on how to do this:

1. find your debug.keystore file, for example:
C:\Documents and Settings\developer\.android\debug.keystore

2. list the md5:
C:\jdk1.6.0_13\bin>keytool -list -alias androiddebugkey -keystore "C:\Documents and Settings\developer\.android\debug.keystore" -storepass android -keypass android

androiddebugkey, May 18, 2009, PrivateKeyEntry,Certificate fingerprint (MD5): 83:4D:2C:6F:58:B3:D1:EA:2C:AF:0D:FC:70:19:57:D6

Save the fingerprint somewhere, you'll need it later.

3. Sign up for the maps API: http://code.google.com/android/maps-api-signup.html , use your generated MD5 fingerprint

Submission result:

Thank you for signing up for an Android Maps API key!

Your key is:
0h1d_-9Wwhaterver-your-key-is4yNt-SXgQ

This key is good for all apps signed with your certificate whose fingerprint is:
83:4D:2C:6F:58:B3:D1:EA:2C:AF:0D:FC:70:19:57:D6

Here is an example xml layout to get you started on your way to mapping glory:
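The original XML did not survive the blog's formatting, but a minimal MapView element looks something like this (the id and sizing attributes here are just illustrative defaults; the key is the example key from above):

```xml
<!-- place inside your layout, e.g. main.xml -->
<com.google.android.maps.MapView
    android:id="@+id/mapview"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:clickable="true"
    android:apiKey="0h1d_-9Wwhaterver-your-key-is4yNt-SXgQ" />
```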




Go to your project's layout (e.g. in main.xml), look for your MapView definition,
take out android:apiKey="apisamples" and replace it with whatever your key is,
for example: android:apiKey="0h1d_-9Wwhaterver-your-key-is4yNt-SXgQ"

or, if you didn't define your mapView in XML, but instead you did it in code, use:
mMapView = new MapView(this, "0h1d_-9Wwhaterver-your-key-is4yNt-SXgQ");

Also, make sure that in your manifest, you have this defined:
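The manifest snippet was also eaten by the blog's formatting; for the Google Maps add-on it should look roughly like this (the Maps external library declared inside <application>, plus network access):

```xml
<!-- inside the <application> element -->
<uses-library android:name="com.google.android.maps" />

<!-- outside <application>, as a direct child of <manifest> -->
<uses-permission android:name="android.permission.INTERNET" />
```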

Monday, May 25, 2009

Linux: how to send emails with attachments from the command line

echo "Sending an attachment " | mutt -a my-superb-tar-file.tar -s "attachment" alan75@my-fun-domain.com

Friday, May 22, 2009

How to stop and restart subversion using a simple script

1. create the svn restart script:
sudo nano /etc/init.d/restartsvn

sudo cat /etc/init.d/restartsvn
#!/bin/bash
echo This is a startup script to stop and restart subversion - Alan

echo Now stopping previous svn instance...
sudo kill -9 `ps aux | grep -i svn | grep -i listen-host | grep -v grep | awk '{print $2}'`

echo Sleeping 3 seconds...
#sleep 3 seconds
sleep 3

echo Now starting svn for you: /usr/bin/svnserve -d --listen-host 10.0.0.10 -r /srv/svn
/usr/bin/svnserve -d --listen-host 10.0.0.10 -r /srv/svn

echo The process id of the new svn instance is:
echo `ps aux | grep -i svn | grep -i listen-host | grep -v grep | awk '{print $2}'`
echo Done


2. make the script executable
sudo chmod u+x /etc/init.d/restartsvn

3. execute the script
sudo /etc/init.d/restartsvn

Thursday, May 21, 2009

Java - how to keep your NamingEnumeration object intact and iterate through it without .next() killing it


Problem: You have a NamingEnumeration and want to keep it untouched while passing it to different methods that need to iterate through it. Error: calling namingEnumeration.next() walks the object, and when you reach the end there's no .moveToFront() method that would let you re-use the NamingEnumeration object.

Solution: use Collections.list( namingEnumeration ), which returns an ArrayList. Then instantiate a new copy of the original ArrayList and use that copy, walking each element of the copy with an Iterator.

The example below is of a NamingEnumeration A which is made up of many NamingEnumerations B, which are made up of some NamingEnumerations C. (basically, it's an LDAP naming enumeration result after performing a dirContext.search( ... ))


See, this won't work:

public static void printAllAttributes( NamingEnumeration originalNamingEnumeration )
{
    // used only to create copy
    ArrayList arrayListBackup = Collections.list( originalNamingEnumeration );

    // create copy
    ArrayList arrayList = new ArrayList( arrayListBackup );
    ...
because the originalNamingEnumeration gets "exhausted" after Collections.list() is called.
So, your only option is to get the original NamingEnumeration, convert it to an ArrayList:

NamingEnumeration namingEnumeration = dirContext.search( searchBase, searchFilter, searchControls );
ArrayList tempArrayList = Collections.list( namingEnumeration );
and forget about it:

namingEnumeration.close();
namingEnumeration = null;
Then, make as many copies of that tempArrayList as you need, and use those copies:

ArrayList arrayListCopy1 = new ArrayList( tempArrayList );
... do whatever
ArrayList arrayListCopy2 = new ArrayList( tempArrayList );
... do whatever
ArrayList arrayListCopy3 = new ArrayList( tempArrayList );
In my case, I want to keep the search results of an LDAP query, which are stored
in a NamingEnumeration, and then call different methods on the search results,
which iterate over the NamingEnumeration, screwing it up. So, keeping the search results
in a globally defined ArrayList object allows me to use them later. The final search routine looks like this:


public int search( String searchBase, String searchFilter )
{
    SearchControls searchControls = new SearchControls();
    searchControls.setSearchScope( SearchControls.SUBTREE_SCOPE );
    try
    {
        NamingEnumeration namingEnumeration = dirContext.search( searchBase, searchFilter, searchControls );
        ArrayList tempArrayList = Collections.list( namingEnumeration );
        namingEnumeration.close();
        namingEnumeration = null;

        this.lastSearchResultArrayList = new ArrayList( tempArrayList );
        tempArrayList = null;
    }
    catch ( NamingException e )
    {
        logerror( "Error while searching: " + e.toString() );
        return -1;
    }
    return 0;
}

Thursday, May 14, 2009

Java - Rot94 - similar to rot13 - encode text





public class NinetyFiveEncode
{
public NinetyFiveEncode()
{
String testStr = "The rain in Spain falls mainly on the plain. Call me at (555)112-xxxx. {:-) Bye.";
String encoded = NinetyFiveEncode.rot94(testStr);
System.out.println( encoded );
String decoded = NinetyFiveEncode.rot94( encoded );
System.out.println( decoded );
}

/*
* Rot94 by Alan Lupsha (c)2009
*
* Takes a string of characters, and for every character
* between ASCII value 33 and 126 (! to ~), rotates it by 47
* positions within that 94-character range (wrapping around
* past 126 back to 33). Applying the rotation twice restores
* the original text.
*
* Any non-printable character and the space character
* (i.e. any char smaller than 33 or larger than 126) gets
* copied over unchanged.
*
* Sample run:
*
* String testStr = "The rain in Spain falls mainly on the plain. Call me at (850)879-xxxx. {:-) Bye.";
* String encoded = NinetyFiveEncode.rot94(testStr);
* System.out.println( encoded );
* String decoded = NinetyFiveEncode.rot94( encoded );
* System.out.println( decoded );
*
* %96 C2:? :? $A2:? 72==D >2:?=J @? E96 A=2:?] r2== >6 2E Wgd_Xgfh\IIII] Li\X qJ6]
* The rain in Spain falls mainly on the plain. Call me at (850)879-xxxx. {:-) Bye.
*/
public static String rot94(String plainText)
{
    if (plainText == null) return "";

    // encode plainText
    StringBuffer encodedMessage = new StringBuffer("");
    int abyte;
    for (int i = 0; i < plainText.length(); i++)
    {
        abyte = plainText.charAt(i);
        if ((abyte >= 33) && (abyte <= 126))
            abyte = (abyte - '!' + 47) % 94 + '!';

        encodedMessage.append( (char)abyte );
    }
    return encodedMessage.toString();
}


public static void main( String[] args )
{
NinetyFiveEncode ninetyFiveEncode = new NinetyFiveEncode();
}
}
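A quick way to sanity-check the transform: rotating by 47 twice moves each printable character 94 places, i.e. right back where it started. Here is a compact standalone sketch of the same routine with that round-trip check built in:

```java
public class Rot94Check
{
    // same transform as rot94 above: rotate printable ASCII (33..126) by 47 within the 94-char range
    static String rot94(String s)
    {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++)
        {
            int c = s.charAt(i);
            if (c >= 33 && c <= 126)
                c = (c - '!' + 47) % 94 + '!';
            out.append((char) c);
        }
        return out.toString();
    }

    public static void main(String[] args)
    {
        String msg = "The rain in Spain";
        String enc = rot94(msg);
        // rotating twice must restore the original text
        if (!rot94(enc).equals(msg))
            throw new AssertionError("round-trip failed");
        System.out.println(enc); // prints: %96 C2:? :? $A2:?
    }
}
```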


Wednesday, May 13, 2009

How to check the status of your Android phone purchased from Brightstarcorp through Google

1. Go to: http://android.brightstarcorp.com/trackorder.php

If you are logged into your Google account, the page will display the status of your order right away (e.g. "Pending shipment"). Otherwise, log in first and then check your status.


2. Call Brightstarcorp at 877-727-9789, ask them to track your order, and give them the order number from the email you received right after you purchased your phone. This will likely get them to send out the phone the same day instead of waiting another 7 days to mail it (even if you already paid for FedEx overnight delivery).