Thursday, September 10, 2009

Summary of the talk by Prof. Ted Baker

Alan Lupsha

Professor Ted Baker’s area of research is real-time systems. He focuses on real-time runtime systems, real-time scheduling and synchronization, and real-time software standards.

Real-time scheduling for multiprocessors involves finding ways to guarantee deadlines for tasks that are scheduled on multiprocessor systems. A main problem with scheduling is that it is very difficult to meet timing constraints for a given computational workload. As workloads vary, the constraints can be met only with different kinds of guarantees; for example, the guarantee of execution differs depending on whether the constraints concern fault tolerance, the window of execution, or energy usage. The quality of a schedule can vary as well: it can be quantified by how well the schedule guarantees that deadlines are met, or by how late a task completes past its deadline. Even once an algorithm is able to schedule a workload, the resulting schedule can vary in how sensitive it is to variations in the execution parameters.

Professor Baker looks at workload models which involve jobs, tasks, and task systems. Jobs are units of computation that are scheduled with a specific arrival time, worst-case execution time, and deadline. Tasks are sequences of jobs, and can depend on other tasks. Sporadic tasks have two specific qualities: a minimum inter-arrival time between consecutive jobs, and a worst-case execution time. Task systems are sets of tasks, where tasks can be related or independent (scheduled without consideration of interactions, precedence, or coordination).
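As a rough illustration, this workload model might be expressed in code as follows (a hedged sketch; the field names and parameter values are illustrative, not Prof. Baker's notation):

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival_time: float        # when the job becomes ready to run
    wcet: float                # worst-case execution time
    absolute_deadline: float   # time by which the job must finish

@dataclass
class SporadicTask:
    min_interarrival: float    # minimum time between consecutive job releases
    wcet: float                # worst-case execution time of each job
    relative_deadline: float   # deadline measured from each job's arrival

    def release(self, t: float) -> Job:
        """Release one job of this task at time t."""
        return Job(t, self.wcet, t + self.relative_deadline)

# A task system is simply a set of tasks, here assumed independent.
task_system = [SporadicTask(10, 2, 10), SporadicTask(25, 5, 20)]
```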

Scheduling involves models, which can be defined by a set of (identical) processors, shared memory, and specific algorithms. These algorithms can be preemptive or non-preemptive, on-line (decisions are made on the fly as jobs arrive) or off-line, and global or partitioned (tasks are statically assigned to processors, so the workload of each processor is known in advance). There are three typical scheduling algorithms and associated tests. The first is “fixed task-priority scheduling”, where each task has a fixed priority and the highest-priority ready tasks run first. The second is “earliest deadline first” (EDF), where the job with the earliest deadline runs first; EDF can handle higher loads without missing deadlines and is still relatively easy to implement. The third type, which is not used in single-processor systems but only in multiprocessor systems, is “earliest deadline zero laxity” (EDZL), which behaves like EDF except that a job whose laxity (the amount of time its execution can still be delayed without missing its deadline) drops to zero is given the highest priority.
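A hedged sketch of the priority rules just described (illustrative only, not the exact formulation from the talk):

```python
def laxity(absolute_deadline, now, remaining_exec):
    """Slack: how much longer the job can be delayed and still meet its deadline."""
    return absolute_deadline - now - remaining_exec

def edf_key(absolute_deadline):
    """EDF priority key: the earlier the absolute deadline, the higher the priority."""
    return absolute_deadline

def edzl_key(absolute_deadline, now, remaining_exec):
    """EDZL priority key (smaller tuple = higher priority): a job whose laxity
    has reached zero preempts everything; otherwise order by earliest deadline."""
    if laxity(absolute_deadline, now, remaining_exec) <= 0:
        return (0, absolute_deadline)
    return (1, absolute_deadline)
```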

The difficulty of scheduling is that there is no practical optimal algorithm for scheduling sporadic task systems on multiprocessors. One example of a schedulability test is the density test, where one can analyze what fraction of a processor is needed to serve a given task. Professor Baker researches task scheduling and is looking for acceptable algorithms which are practical, given specific processing constraints.
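As an illustration, a commonly cited density-based sufficient test for global EDF looks like the following (a hedged sketch; this may not be the exact test discussed in the talk, and the example task parameters are invented):

```python
# Sufficient condition for global EDF on m identical processors:
#   sum(density) <= m - (m - 1) * max(density),
# where density = wcet / min(relative deadline, minimum inter-arrival time).

def density(task):
    wcet, deadline, period = task
    return wcet / min(deadline, period)

def global_edf_density_test(tasks, m):
    """Return True if the sufficient condition holds (deadlines are guaranteed)."""
    densities = [density(t) for t in tasks]
    return sum(densities) <= m - (m - 1) * max(densities)

# Example: three sporadic tasks given as (wcet, relative deadline, min inter-arrival).
tasks = [(1, 4, 4), (2, 6, 8), (3, 10, 12)]
print(global_edf_density_test(tasks, m=2))  # True -> schedulable by this test
```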

Tuesday, September 8, 2009

Summary of the Talk by Prof. FeiFei Li

Alan Lupsha

Professor FeiFei Li researches database management and database technologies. His research focuses on efficient indexing, querying, and managing of large-scale databases, spatio-temporal databases and applications, and sensor and stream databases.

Efficient indexing, querying, and managing of large-scale databases deals with problems such as retrieving structured data from the web and automating the process of identifying the structure of web sites (e.g., to create customized reports for users). It is important to interpret web pages and to identify data tree structures. This allows one to first create a schema for the structure of the data, and then to integrate information from different sources in a meaningful way. The topic of indexing higher-dimensional data (using tree structures and multi-dimensional structures) deals with space partitioning that indexes data in anywhere from 2 to 6 dimensions.
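One common space-partitioning structure for such low-dimensional data is a k-d tree; the sketch below is only an illustration of the partitioning idea, not the specific index structures discussed in the talk:

```python
def build_kdtree(points, depth=0):
    """points: list of equal-length tuples (2 to 6 dimensions work well)."""
    if not points:
        return None
    k = len(points[0])
    axis = depth % k                      # cycle through the dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],             # splitting point for this node
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```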

The topic of spatio-temporal databases and applications deals with the execution of queries that reduce to NP-hard problems such as the traveling salesman problem. One solution uses a greedy algorithm, which starts from a given node location and repeatedly finds the nearest neighbor in each predefined category of nodes. By minimizing the sum distance (using a minimum-sum-distance algorithm), a path from a start node to an end node is found in such a way that each category is visited, and the resulting path is at most 3 times as long as the optimal solution.
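A hedged sketch of the greedy strategy described above, assuming Euclidean distances and invented example data; the actual algorithm and its 3-approximation analysis are more involved:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_trip(start, end, categories):
    """categories: dict mapping category name -> list of (x, y) locations.
    From the current location, repeatedly visit the nearest point belonging
    to a category that has not been covered yet."""
    path, current, remaining = [start], start, dict(categories)
    while remaining:
        cat, point = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(current, cp[1]),
        )
        path.append(point)
        current = point
        del remaining[cat]
    path.append(end)
    return path

stops = greedy_trip((0, 0), (10, 10),
                    {"gas": [(1, 2), (8, 8)], "food": [(3, 1), (9, 9)]})
print(stops)
```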

Sensor and stream databases deal with the integration of sensors into network models. A large set of sensors is distributed in a sensor field, and a balance is sought in solving problems such as data flow between sensors, the hierarchy of sensors, and efficient data transmission for the purpose of saving battery life. Professor Li analyzes the best data flow models between sensors and different ways to group sensors so that hub nodes transmit data onward to other hub nodes (an example of such an application is the monitoring of temperatures on an active volcano). One cannot simply broadcast, since this would drain the sensors’ battery life. Thus, routing methods and failover mechanisms are examined to ensure that all sensor data is properly read.

Professor Li also researches problems with the method of adding independent and identically distributed (i.i.d.) random noise, which introduces errors into data sets for the purpose of hiding secret data while maintaining correct data averages and other data benchmarks (for example, hiding real stock data or employees’ salaries but preserving averages). The problem with i.i.d. noise is that attackers can filter it out and still extract the data that is meant to remain secret. A solution to this problem is to add the same amount of noise in parallel to the principal component of the data set. This yields more securely obfuscated data.
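A hedged sketch contrasting the two kinds of perturbation, assuming NumPy and invented example data; the exact scheme from the research may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Strongly correlated 2-D data standing in for, e.g., a salary table.
data = rng.normal(size=(1000, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])

# i.i.d. noise: independent of the data's structure, hence easier to filter out.
iid_noise = rng.normal(scale=1.0, size=data.shape)

# Correlated noise: random magnitudes projected onto the principal component,
# so the perturbation lies in parallel with the data's dominant direction.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
principal_dir = vt[0]                                # first principal component
pc_noise = rng.normal(scale=1.0, size=(len(data), 1)) * principal_dir

perturbed = data + pc_noise
print(data.mean(axis=0), perturbed.mean(axis=0))     # averages stay close
```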

Thursday, September 3, 2009

Summary of the talk by Prof. Zhenhai Duan

Alan Lupsha

Professor Zhenhai Duan researches an accountable and dependable Internet with good end-to-end performance. There is currently a serious problem with the Internet: it lacks accountability, and there is not enough law enforcement. It is very hard to find out who did something wrong, because hackers do not worry about breaking the law and cover their tracks in order not to get caught. There is a need to design protocols and architectures which can prevent bad activities from happening and which can more easily identify attackers.

The current Internet lacks accountability, and even when there are no attacks there are still many problems. For example, the time to recover from routing failures is too long, and DNS also has many issues. A dependable Internet requires higher accountability for banking and other secure applications. End-to-end performance also needs to be high, especially for important applications which need a greater guarantee of data delivery.

Professor Duan’s research projects include network security, solutions to network problems, routing, and intrusion detection. In IP spoofing attacks it is difficult to isolate attack traffic from legitimate traffic; these attacks include man-in-the-middle methods with TCP hijacking and DNS poisoning, as well as reflector-based attacks using DNS requests and DDoS. Distributed denial-of-service attacks are issued from botnets made up of millions of zombie (compromised) computers. To solve these network problems, Professor Duan researches route-based filtering techniques. These techniques take advantage of the fact that hackers can spoof their source addresses but cannot control the route of the packets, while filters which know part of the network topology can isolate illegitimate traffic.

Inter-Domain Packet Filter (IDPF) systems identify feasible routes based on updates from BGP (an Internet inter-domain routing protocol). The performance of IDPFs is evaluated on Autonomous System (AS) graphs. It is hard to completely protect an Autonomous System from spoofing attacks, but IDPFs can effectively limit the spoofing capability of attackers. Using the vertex cover algorithm, one can prevent spoofing attacks in 80.8% of the networks which are attacked. If the attacks cannot be prevented, one can still look at the topology and determine the candidate sources of the packets. IDPFs are effective in helping IP traceback, as all Autonomous Systems can localize attackers. The placement of IDPFs also plays a very important role in how well networks are protected.
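A hedged sketch of the route-based filtering idea: a border router keeps, per neighboring AS, the set of source networks whose packets could feasibly arrive over that link (in IDPF this is derived from BGP updates). The table and prefixes below are invented for illustration:

```python
# Feasible source prefixes per incoming neighbor AS (invented example data).
feasible_sources = {
    "AS100": {"10.1.0.0/16", "10.2.0.0/16"},
    "AS200": {"192.168.0.0/16"},
}

def accept(incoming_as: str, claimed_source_prefix: str) -> bool:
    """Drop packets whose claimed source could not feasibly arrive on this link."""
    return claimed_source_prefix in feasible_sources.get(incoming_as, set())

print(accept("AS100", "10.1.0.0/16"))   # True: feasible, forward the packet
print(accept("AS200", "10.1.0.0/16"))   # False: likely spoofed, drop it
```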

Since botnets are becoming a major security issue, and they are used in distributed denial-of-service attacks, spamming, and identity theft, there is a greater need for utility-based detection of zombie machines. The SPOT system, one system being researched, classifies messages as spam or not spam. It computes a function based on the sequential probability ratio test, using previously learned behavior of systems, and finally arrives at one of two hypotheses. Professor Duan is currently testing the SPOT system and improving it.
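A hedged sketch of a sequential probability ratio test in this spirit: observations of outgoing messages (spam or not spam) are accumulated into a log-likelihood ratio until one of two hypotheses is accepted. The probabilities and thresholds are illustrative, not the values used in the SPOT system:

```python
import math

P_SPAM_IF_COMPROMISED = 0.9   # H1: machine is a spamming zombie
P_SPAM_IF_CLEAN = 0.2         # H0: machine is clean
ALPHA, BETA = 0.01, 0.01      # tolerated false-positive / false-negative rates

UPPER = math.log((1 - BETA) / ALPHA)    # accept H1 above this threshold
LOWER = math.log(BETA / (1 - ALPHA))    # accept H0 below this threshold

def sprt(observations):
    """observations: iterable of 1 (message judged spam) or 0 (not spam)."""
    llr = 0.0
    for is_spam in observations:
        p1 = P_SPAM_IF_COMPROMISED if is_spam else 1 - P_SPAM_IF_COMPROMISED
        p0 = P_SPAM_IF_CLEAN if is_spam else 1 - P_SPAM_IF_CLEAN
        llr += math.log(p1 / p0)
        if llr >= UPPER:
            return "compromised"
        if llr <= LOWER:
            return "clean"
    return "undecided"

print(sprt([1, 1, 0, 1, 1, 1]))   # likely "compromised"
```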

Tuesday, September 1, 2009

Summary of the talk by Prof. Mike Burmester

Alan Lupsha

Professor Mike Burmester is interested in research in the areas of radio frequency identification (RFID) and ubiquitous applications, mobile ad hoc networks (MANETs) and sensor networks, group key exchange, trust management and network security, and digital forensics. New wireless technologies offer a great wireless medium, but unfortunately the current state of world research is not mature enough to fully understand and manage these new technologies. The fourth generation of wireless technologies, which should work both in the European Union and in the United States, will offer new challenges and opportunities for maturity in this field.

The RFID revolution will be the next big factor in allowing easier management of products. This technology is already being implemented in library systems, allowing easier book management and replacing bar codes, which require line of sight in order to scan each book. Airports are also implementing RFID for luggage management, and hospitals use RFID tags to protect newborns from being kidnapped. Different types of sensor networks are used extensively in factory floor automation, border fencing, and a plethora of military applications. Sensors will also be used extensively for monitoring biological levels in people. For example, a blood sugar monitor can alert a diabetic person if their sugar level is too high or too low.

Mobile ad-hoc networks (MANETs) provide routing of information between wireless devices that are mobile. Vehicular ad-hoc networks (VANETs) are a type of mobile ad-hoc network that allows communication between moving vehicles. These networks allow individual wireless devices to act as nodes and route information between other communicating devices, thus reducing the need for dedicated wireless nodes. Ubiquitous networks allow applications to relocate between wireless devices, thus following a mobile user on his or her journey while continuing to provide needed services.

These new wireless technologies will also need proper management. Some of the new issues at hand include centralizing or decentralizing systems, determining who will protect certain systems, ensuring data security (such as confidentiality, avoiding eavesdropping, and guaranteeing privacy), preserving data integrity (avoiding the modification and corruption of data), and ensuring data availability (dealing with denial-of-service attacks, identifying rogue base stations, dealing with man-in-the-middle attacks, and detecting and avoiding session tampering and session hijacking).

There is a trade-off between security and functionality. It is extremely challenging to secure wireless networks, but in certain cases one may desire less security in order to achieve cheaper wireless products and technologies. Using secured pipelines to create point-to-point communication does ensure some security, but there are still problems at the physical layer, where attacks can be carried out. Hackers are keen to intercept and manipulate wireless data, which makes this a very attractive environment for them and creates the challenge of trying to stay ahead of those who abuse these technologies. This gives rise to great security threats, but it also opens up a niche for researchers to study and create new wireless network security technologies.