Monday 28 September 2015

File Compression

The Domain "File Compression" lets you reduce the overall number of bits and bytes in a file so it can be transmitted faster over slower Internet connections, or take up less space on a disk.
Domain File compression is a System Based Software. The software will be done using Core Java. It can use in the System as a utility.
The type of compression we will use here is called lossless compression . The user need not depend on third party software's like winzip, winrar, Stuff etc. the software can be used to compress files and they can be decompressed when the need arises. For implementing this Software we want to use algorithms
The main algorithms are: Huffman algorithm
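As a hedged illustration of the core idea behind Huffman coding (not the project's actual code; class and method names here are illustrative), the sketch below builds a Huffman code table from byte frequencies using a priority queue, so that more frequent bytes receive shorter codes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Minimal Huffman code-table builder: shorter codes for more frequent bytes.
public class HuffmanSketch {

    static class Node implements Comparable<Node> {
        final int symbol;      // -1 for internal nodes
        final long frequency;
        final Node left, right;

        Node(int symbol, long frequency, Node left, Node right) {
            this.symbol = symbol;
            this.frequency = frequency;
            this.left = left;
            this.right = right;
        }

        boolean isLeaf() { return left == null && right == null; }

        public int compareTo(Node other) {
            return Long.compare(this.frequency, other.frequency);
        }
    }

    // Builds a map from byte value to its Huffman code string (e.g. "0101").
    static Map<Integer, String> buildCodeTable(long[] frequencies) {
        PriorityQueue<Node> queue = new PriorityQueue<>();
        for (int b = 0; b < frequencies.length; b++) {
            if (frequencies[b] > 0) {
                queue.add(new Node(b, frequencies[b], null, null));
            }
        }
        // Repeatedly merge the two least frequent nodes until one tree remains.
        while (queue.size() > 1) {
            Node a = queue.poll();
            Node b = queue.poll();
            queue.add(new Node(-1, a.frequency + b.frequency, a, b));
        }
        Map<Integer, String> codes = new HashMap<>();
        if (!queue.isEmpty()) {
            assignCodes(queue.poll(), "", codes);
        }
        return codes;
    }

    private static void assignCodes(Node node, String prefix, Map<Integer, String> codes) {
        if (node.isLeaf()) {
            // A single-symbol input still needs a one-bit code.
            codes.put(node.symbol, prefix.isEmpty() ? "0" : prefix);
        } else {
            assignCodes(node.left, prefix + "0", codes);
            assignCodes(node.right, prefix + "1", codes);
        }
    }

    public static void main(String[] args) {
        byte[] data = "ABRACADABRA".getBytes();
        long[] freq = new long[256];
        for (byte b : data) freq[b & 0xFF]++;
        buildCodeTable(freq).forEach((sym, code) ->
                System.out.println((char) sym.intValue() + " -> " + code));
    }
}
```

An encoder would then replace each byte with its code bits, and a decoder would walk the same tree to recover the original bytes; GZip, also mentioned below, combines Huffman coding with LZ77 matching.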
The File Compression domain includes the following main modules:
• Compress a file or folder
• Decompress a file or folder
• View files in the compressed file
• Facility to set an icon
• Facility to set your own extension
Compress a file or folder
This module helps us to compress a file or folder. The compressed file will have an extension that has been set at development time. We can send the compressed file over the Internet so that users having this software can decompress it.
Decompress a file or folder
This is the reverse process of file compression. Here we can decompress the compressed file and get the original file.
View files in the compressed file
Here we can view the list of files inside our compressed file before decompressing it and decide whether or not to decompress.
Set icon and extension
This is an additional feature of our project. We can set our own extension for the compressed file, and we can also specify the icon style for the compressed file. Users will be given an option to change the icon as per their preference.
The aim of the proposed system is to provide improved facilities. The proposed system overcomes the limitations of the existing system; it provides data accuracy and saves disk space.
The existing system has several disadvantages and is difficult to work with. The proposed system tries to eliminate or reduce these difficulties to some extent. The proposed system performs file/folder compression and decompression based on the Huffman and GZip algorithms and helps the user save time.
The proposed system is user friendly, allowing the file compression process to be carried out without time lag, and it is simple to design and implement.
The system requires very low system resources and will work on almost all configurations. Its main features are: ensuring data accuracy, minimizing manual data entry, minimum time needed for the various processing steps, greater efficiency, and better service.
 

Device Switching Using PCs Parallel Port

Imagine the convenience if we could control different devices at home or in industry using a single PC. Our project aims at exactly that and can be used to control printer power, loads, and other household electrical appliances. The circuit comprises decoder, inverter, latch, and relay-driver sections. To control this equipment we use the PC's parallel port, and the control program is written in Visual Basic. The PC parallel port is an inexpensive yet powerful platform for implementing projects dealing with the control of real-world peripherals. This port can be used to control the printer as well as household and other electrical appliances. The computer program, through the interface circuit, controls the relays, which in turn switch the appliances on or off. The parallel port has 12 outputs, including 8 data lines and 4 control lines. The circuit described here can be used to control up to 255 electrical appliances using only the eight data lines of the parallel port. Besides, the software program allows users to know the current status of the loads.
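The control program itself is written in VB 6; purely as a hedged illustration of how a device number from 1 to 255 maps onto the eight data lines feeding the decoder/latch stage (the port base address 0x378 mentioned in the comment is an assumption about a typical setup), here is a small Java sketch:

```java
// Illustrative only: shows how a device number 1..255 maps onto the
// eight parallel-port data lines (D0..D7) that feed the decoder/latch stage.
// Actual port I/O would be done by the VB program through the parallel port
// base address (commonly 0x378; an assumption here), not by this sketch.
public class ParallelPortEncoding {

    // Returns a human-readable view of which data lines are driven high
    // for a given device number.
    static String dataLinesFor(int deviceNumber) {
        if (deviceNumber < 1 || deviceNumber > 255) {
            throw new IllegalArgumentException("device number must be 1..255");
        }
        StringBuilder lines = new StringBuilder();
        for (int bit = 7; bit >= 0; bit--) {
            boolean high = ((deviceNumber >> bit) & 1) == 1;
            lines.append('D').append(bit).append('=').append(high ? 1 : 0).append(' ');
        }
        return lines.toString().trim();
    }

    public static void main(String[] args) {
        // Device 5 -> binary 00000101 -> D2 and D0 high.
        System.out.println("Device 5:   " + dataLinesFor(5));
        System.out.println("Device 255: " + dataLinesFor(255));
    }
}
```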
System Requirements :
Hardware Requirements:
PC with minimum 64 MB RAM (256 MB recommended).
Hard disk: minimum 10 GB (20 GB recommended).
Processor: minimum 1 GHz (3 GHz recommended).
Software Requirements:
Operating System: Windows XP / Windows Vista
Developing Platform: VB 6.0
The PC parallel port can be damaged quite easily if you make mistakes in the circuits you connect to it. If the parallel port is integrated into the motherboard (as in many new computers), repairing a damaged parallel port may be expensive; in many cases it is cheaper to replace the whole motherboard than to repair the port.
The safest bet is to buy an inexpensive I/O card that has an extra parallel port and use it for your experiments. If you manage to damage the parallel port on that card, replacing it is easy and inexpensive.
This project can be effectively and conveniently used to control different appliances. Since it can be extended to control about 255 devices, it could be used for the computerization of an office, home, or firm. Though it adds some cost, the circuit is simple and the working mechanism can be easily understood.
An added advantage of this project is that we are able to know the status of the device being controlled. The program to control the appliances is written in VB, which is user friendly and easy to understand.
 

Accelerating Ranking System Using Webgraph

The Page-Rank system of the Needle Search Engine is designed and implemented using the Cluster Rank algorithm, which is similar to Google's well-known PageRank algorithm.
Google's PageRank algorithm is based on the link structure of the Web graph. The "WebGraph" package is used to represent the graph in the most efficient manner, which helps accelerate the ranking procedure for the World Wide Web.
Two recent Page-Rank algorithms, Source Rank and Truncated PageRank, are compared against the existing ranking system, Cluster Rank, and the best of them is deployed in the Needle Search Engine.
Two attributes are taken into consideration for selecting the best algorithm: the first is running time and the second is human evaluation of search quality. A survey is conducted with the help of the research team to find the best algorithm on different search topics.
Related Work
The existing Page-Rank system of the Needle Search Engine takes long update time as the number of URLs increases. Research was done on the published ranking system papers, and below are the details of those papers.
There is a paper called "Efficient Computation of page-rank" written by Taher H.Haveliwala. This paper discusses efficient techniques for computing Page-Rank, a ranking metric for hypertext documents and showed that the Page-Rank can be computed for very large sub graphs of the Web on machines with limited main memory.
They discussed several methods for analyzing the convergence of Page-Rank based on the induced ordering of pages.
The main advantage of the Google's PageRank measure is that it is independent of the query posed by user, this means that it can be pre computed and then used to optimize the layout of the inverted index structure accordingly.
However, computing the Page-Rank requires implementing an iterative process on a massive graph corresponding to billions of Web pages and hyperlinks.
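As a hedged sketch of that iterative process (standard PageRank power iteration with a damping factor of 0.85, not the Needle or Cluster Rank implementation itself), the computation on a small adjacency-list graph looks like this:

```java
import java.util.Arrays;

// Standard PageRank power iteration on a small directed graph.
// outLinks[i] lists the pages that page i links to.
public class PageRankSketch {

    static double[] pageRank(int[][] outLinks, double damping, int iterations) {
        int n = outLinks.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);

        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n);   // teleportation term
            for (int page = 0; page < n; page++) {
                int outDegree = outLinks[page].length;
                if (outDegree == 0) {
                    // Dangling page: spread its rank over all pages.
                    for (int j = 0; j < n; j++) next[j] += damping * rank[page] / n;
                } else {
                    double share = damping * rank[page] / outDegree;
                    for (int target : outLinks[page]) next[target] += share;
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2 (page 3 has no in-links)
        int[][] graph = { {1, 2}, {2}, {0}, {2} };
        System.out.println(Arrays.toString(pageRank(graph, 0.85, 50)));
    }
}
```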
There is a paper written by Yen-Yu Chen and Qingqing Gan on PageRank calculation using efficient techniques to perform the iterative computation. They derived two algorithms for PageRank and compared them with two existing algorithms proposed by Haveliwala, and the results were impressive.
In another paper, the authors Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd took advantage of the link structure of the Web to produce a global "importance" ranking of every Web page.
This ranking, called PageRank, helps search engines and users quickly make sense of the vast heterogeneity of the World Wide Web.
This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path.
PageRank is the most widely known ranking function of this family. The main objective of the paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. 
The Page Rank here is computed similarly to Google's PageRank, except that supporters that are too close to a target node do not contribute towards its ranking. Spammers can only afford to spam up to a few levels. Using this technique, a group of pages that are linked together with the sole purpose of obtaining an undeservedly high score can be detected.
The authors of this paper apply only link-based methods; that is, they study the topology of the Web graph without looking at the content of the Web pages.
 

Enhancing an Application Server to Support Available Components

Modern client-server distributed computing systems may be seen as implementations of N-tier architectures. Typically, the first tier consists of client applications containing browsers, with the remaining tiers deployed within an enterprise representing the server side; the second tier (Web tier) consists of web servers that receive requests from clients and pass on the requests to specific applications residing in the third tier (middle tier) consisting of application servers where the computations implementing the business logic are performed; and the fourth tier (database/back-end tier) contains databases that maintain persistent data for applications. Applications in this architecture are typically structured as a set of interrelated components hosted by containers within an application server.
Various services required by applications, such as transaction, persistence, security, and concurrency control, are provided via the containers, and a developer can simply specify the services required by components in a declarative manner.
This architecture also allows flexible configuration using clustering for improved performance and scalability. Availability measures, such as replication, can be introduced in each tier in an application-specific manner. In a typical n-tier system, the interactions between clients and the web tier are performed across the Internet.
The infrastructures supporting these interactions are generally beyond the direct control of an application service provider.
The middle and the database tiers are the most important, as it is on these tiers that the computations are performed and persistency provided.
These two tiers are considered in this paper. Data as well as object replication techniques have been studied extensively in the literature, so our task is not to invent new replication techniques for components, but to investigate how existing techniques can be migrated to components.
Component-oriented middleware infrastructure provides clear separation between components that have persistent state and those that do not.
Therefore, it is natural to divide the replication support for these components into two categories: state replication and computation replication.
State replication deals with masking data store failures to make persistent data highly available to components, while computation replication deals with masking application server failures where the computations are performed.
We examine how an application server can be enhanced to support replication for availability so that components that are transparently using persistence and transactions can also be made highly available, enabling a transaction involving EJBs to commit despite a finite number of failures involving application servers and databases.
The Model-View-Controller (MVC) design pattern was originally put forward by Trygve Reenskaug and first applied in the Smalltalk-80 environment. Its purpose is to enable a dynamic software design that simplifies the maintenance and extension of a software application.
MVC seeks to break an application into different parts and define the interactions between these components, thereby limiting the coupling between them and allowing for each one to focus on its responsibilities without worrying about the others.
MVC consists of three categories of components: Model, View, and Controller. This means that it separates the input, processing, and output of an application and organizes them into a three-tier Model-View-Controller structure.
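As a hedged, minimal Java sketch of the pattern (the counter example and class names are illustrative, not part of the application server discussed here), the three roles can be separated as follows:

```java
// Minimal MVC sketch: the Model holds state, the View renders it,
// and the Controller mediates input and updates.
import java.util.ArrayList;
import java.util.List;

class CounterModel {
    private int value;
    private final List<Runnable> listeners = new ArrayList<>();

    void addListener(Runnable listener) { listeners.add(listener); }

    void increment() {
        value++;
        listeners.forEach(Runnable::run);   // notify observers of the change
    }

    int getValue() { return value; }
}

class CounterView {
    void render(int value) {
        System.out.println("Current count: " + value);
    }
}

class CounterController {
    private final CounterModel model;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        // The view observes the model; the controller never draws anything itself.
        model.addListener(() -> view.render(model.getValue()));
    }

    void handleIncrementRequest() { model.increment(); }
}

public class MvcSketch {
    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        CounterController controller = new CounterController(model, new CounterView());
        controller.handleIncrementRequest();  // prints "Current count: 1"
        controller.handleIncrementRequest();  // prints "Current count: 2"
    }
}
```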
 

Dynamic Flash Interface

The Internet is a suitable tool for distributing information because it is a communication medium that reaches a wide audience. By using the web to advertise, companies can easily attract potential customers. Some elements used for web advertisements include animated graphics and sounds.
The aim for this project is to provide a website that can showcase the elements of a dance studio. The front-end, or interface of the website is designed with Flash technology that incorporates tasteful decorations as well as a consistent color theme. The content of the website is stored in a database, which can be updated easily.
The back-end of this dynamic website consists of PHP technology that connects the MySQL database with the Flash presentation. This project also provides an opportunity for this author to learn how to design with Flash.
While Flash animation has been around for over a decade, it is a relatively new technology compared to the hypertext markup language. In the early days, Flash animation provided some entertainment value but did not contribute to the information-sharing focus of the Internet, so it was found on only a small percentage of websites.
In addition, most browsers require a Flash player plug-in to render Flash animations. Even though the Flash player must be frequently upgraded to the latest version for proper compatibility, many companies in the entertainment industry prefer Flash because of its possibilities for multimedia integration.
The Flash technology that we know of today can work with text, sounds, video and graphics in both static and dynamic contexts.
For small companies such as a local dance studio, a Flash website can be used to the same extent as by larger companies to portray a professional image.
Shin Dance Academy (SDA) is one out of many dance studios in Southern California that offers a selection of dance classes to the public, which are taught by temporary instructors.
SDA is interested in utilizing whatever tools available to efficiently promote and thereby successfully recruit more students. As such, SDA finds marketability in maintaining a lively and up-to-date website. 
 

Customer Complaint Report Software

This application package is built with Visual Basic 6. For input and output, the user works through menus, and forms are used to display, view, and update the database.
Software
• The system should run under Windows 95, Windows 98, or Windows NT 4.0.
• Reports should be designed in Seagate Crystal Reports.
• Oracle 9i should be used as the back end to store the database.
• Forms should be designed in Microsoft Visual Basic 6.0 for productivity, expressive form design, and shorter update times.
Hardware
• Visual Basic 6.0 and the other application packages should be installed on a Pentium III or other compatible machine.
• Any machine connected to the LAN must have the Microsoft Visual Basic 6.0 and Seagate Crystal Reports packages installed.
Objectives Of System
• The aim of this project is to deliver a verified & validated data retrieval system.
• The primary objective of design & development of this system is to simplify the process of report generation.
• It should provide correct data in the reports independent of the objects & conditions used.
• Should reduce the time for provisioning of requested data.
• The enterprise managers can easily get data for analysis purposes.
User Requirement
The foremost requirement is to provide initial reporting framework.
• Should provide feature of Single point-of-entry to access reports.
• Should provide a range of information demand addressed.
• Should provide Business Self Service where possible to take up business decisions.
• Should provide Data Abstraction.
• Treat data from a business perspective instead of from an application perspective
• Should embed in itself Service-Oriented Architecture.
• Should be Maintainable.
• Should be Business aligned
• Should provide a framework for Efficient Report Development Process
• Should have a detailed source data analysis for effective modeling of universe.
• Should provide feature for changing the login and password of the logged in user.
• Should cater to adhoc-reporting requirements of the customer

An Intelligent Eye

The An Intelligent Eye project assumes that the reader has a basic understanding of signal processing and some algorithmic notions (complexity, optimisations).
The aim of this project is to give the reader a little overview of the existing techniques in digital image processing.
The software we are developing is part of a larger system, so work begins by establishing requirements for all system elements and then allocating some subset of those requirements to software.
This system view is essential when software must interact with other elements such as hardware, people, and databases. System engineering and analysis encompass requirements gathering at the system level, with a small amount of top-level design and analysis.
Information engineering encompasses requirements gathering at the strategic business level and the business area level. In this step we simply allocate requirements to each system element.
To begin with, we decided on the operating system to be used as the foundation for project development, i.e. Windows NT/XP, and then we decided on the language to be used for developing the software, i.e. Java 1.5.
• Design
The software design process partitions the requirements between the hardware and software systems and establishes the overall system architecture. This process translates the requirements into a representation that can later be transformed into one or more executable programs, and into a representation of the software that can be assessed for quality before coding begins.
The design is documented and becomes part of the software configuration. Here we establish the overall system architecture: the requirements are divided into two levels, the software level and the hardware level. The framework of the project is developed here and documented to become part of the configuration. We also establish that interaction with the hardware will be performed with the help of modules containing functions that solely perform this task.
• Testing
After the code has been generated, our work can be tested to check whether it performs the desired operations. Testing is performed to uncover any errors in the program so that the results match the specified requirements. It focuses on the logical internals of the software, ensuring that all statements have been tested. After the successful completion of the project, tests were performed to verify its smooth operation. The testing procedures used were:
• Unit Testing
• Black Box Testing
• White Box Testing
• Integration Testing
Support or Maintenance
This is the longest life-cycle phase. The system is tested and put into use. Maintenance involves correcting errors that were not discovered in earlier stages of the life cycle. The project was designed so that it is easy to maintain and to upgrade the image processing software according to need.
 

Image Steganography

The techniques for secret hiding of messages in an otherwise innocent looking carrier message belong to the field of steganography. The purpose of steganography is to conceal the very presence of secret information.
To make the communication more secure, the secret information can be compressed and encrypted before it is hidden in the carrier.
This is important because in this way we minimize the amount of information that is to be sent, and it is also easier to hide a random-looking message in the carrier than to hide a message with a high degree of regularity. Encrypting the compressed message before hiding it is recommended and provides double protection.
Image steganography
As stated earlier, images are the most popular cover objects used for steganography. In the domain of digital images many different image file formats exist, most of them for specific applications. For these different image file formats, different steganographic algorithms exist.
Image definition
To a computer, an image is a collection of numbers representing different light intensities in different areas of the image. This numeric representation forms a grid, and the individual points are referred to as pixels.
Most images on the Internet consist of a rectangular map of the image's pixels (represented as bits), recording where each pixel is located and what its colour is. These pixels are displayed horizontally, row by row.
The number of bits in a colour scheme, called the bit depth, refers to the number of bits used for each pixel. The smallest bit depth in current colour schemes is 8, meaning that 8 bits are used to describe the colour of each pixel.
Monochrome and greyscale images use 8 bits per pixel and can display 256 different colours or shades of grey. Digital colour images are typically stored in 24-bit files and use the RGB colour model, also known as true colour.
All colour variations for the pixels of a 24-bit image are derived from three primary colours: red, green, and blue, and each primary colour is represented by 8 bits. Thus in any given pixel there can be 256 different quantities of red, green, and blue, adding up to more than 16 million combinations and hence more than 16 million colours. Not surprisingly, the larger the number of colours that can be displayed, the larger the file size.
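As a hedged illustration of how message bits can be hidden in the least-significant bits of such a 24-bit image (a common least-significant-bit technique, not necessarily the exact method used in this project), here is a small Java sketch using BufferedImage:

```java
import java.awt.image.BufferedImage;

// Hides a byte array in the least-significant bit of the blue channel
// of a 24-bit RGB image, one message bit per pixel (capacity permitting).
public class LsbSteganographySketch {

    static void embed(BufferedImage cover, byte[] message) {
        int width = cover.getWidth();
        int totalBits = message.length * 8;
        if (totalBits > width * cover.getHeight()) {
            throw new IllegalArgumentException("message too large for cover image");
        }
        for (int bitIndex = 0; bitIndex < totalBits; bitIndex++) {
            int x = bitIndex % width;
            int y = bitIndex / width;
            int messageBit = (message[bitIndex / 8] >> (7 - bitIndex % 8)) & 1;

            int rgb = cover.getRGB(x, y);
            rgb = (rgb & ~1) | messageBit;      // overwrite the LSB of the blue byte
            cover.setRGB(x, y, rgb);
        }
    }

    static byte[] extract(BufferedImage stego, int messageLength) {
        byte[] message = new byte[messageLength];
        int width = stego.getWidth();
        for (int bitIndex = 0; bitIndex < messageLength * 8; bitIndex++) {
            int x = bitIndex % width;
            int y = bitIndex / width;
            int bit = stego.getRGB(x, y) & 1;   // read back the blue LSB
            message[bitIndex / 8] |= bit << (7 - bitIndex % 8);
        }
        return message;
    }

    public static void main(String[] args) {
        BufferedImage cover = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        byte[] secret = "hidden".getBytes();
        embed(cover, secret);
        System.out.println(new String(extract(cover, secret.length))); // prints "hidden"
    }
}
```

Because only the least-significant bit of one channel changes, the visual difference in the carrier image is imperceptible while the hidden message remains fully recoverable.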
The development of software involves a series of production activities where the opportunities for human error are enormous.
Errors may begin to occur at the very inception of the process, where the objectives may be erroneously or imperfectly specified, as well as in later design and development stages, because of the human inability to perform and communicate with perfection.
A test case is simply a test with formal steps and instructions. They are valuable because they are repeatable, reproducible under the same environments, and easy to improve upon with feedback.
A test case is the difference between saying that something seems to be working ok and proving that a set of specific tasks are known to be working correctly. 
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation.
Once the source code has been generated, the software must be tested to uncover as many errors as possible before delivery to the customer.
Testing Principles
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• Testing should begin in the small and progress towards testing in the large.
• Testing is the major quality measure employed during software engineering development. Its basic function is to detect errors in the software; testing is necessary for the proper functioning of the system.
Testing Objectives
• Testing is a process of executing a program with the intention of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
 

Bug Tracking System

With the strong focus that undergraduate computer science programs now place on software engineering, there is still one area where students are not exposed to the technology that our colleagues in the commercial sector have been embracing for some time.
That area is the implementation and use of bug tracking systems. This paper will discuss the design goals and development of a bug tracking system that is targeted for use by student groups to aid them in managing the development of software projects by minimizing the hassle of tracking bugs in the various versions of the software.
The final system itself allows groups to collect and store various computer anomalies, errors, and problems from the project that they are working on and to be able to effectively document and communicate these bugs to the members of their group in a real-time system.
The paper will examine design choices that were made to create such a system that provides powerful bug tracking abilities to students, while still serving as a gentle introduction to these systems as they are used in commercial software development.
We will also examine the ideal audience for this type of system and talk about its ability to scale to larger groups as the number of communication paths between group members grows.
Finally this paper will examine the benefits of exposing students to this type of software system during their time in school.
Bug tracking systems are a tool used by software developers to enhance the quality of their software products. The automation that bug tracking systems provide allows for efficient monitoring and controlling of reported bugs.
Generally, bug tracking systems allow developers and testers to add bugs to a database, provide details about the bugs in the database, and make updates to the bugs as progress is made. Along with these features, there are also features that allow bugs to be assigned to specific developers.
This allows developers to use the bug tracking system as a to-do list when the development cycle enters a maintenance stage.
Finally, bug tracking systems allow the bugs entered into them to be prioritized, making critical bugs easier to find in the database.
By providing a bug tracking system aimed at students we are giving students a chance to experiment with technology that the industry is currently using, but still have a system that serves as a gentle introduction to the use of bug tracking systems.
The use of a bug tracking system provides groups with an advantage when it comes to documenting the system, since the bug database serves as a record of the maintenance and improvements that have been made to the software.
The use of a bug tracking system also allows developers to keep track of bugs in a more organized fashion than other methods. Finally, after reviewing currently available bug tracking systems, we found none that were targeted at students as an introduction to using bug tracking systems.
There are many benefits to introducing students to bug tracking systems. Specific to students are the benefits that exposure to this technology can bring when they enter the work force.
However all developers can receive benefits offered by bug tracking systems like streamlining the bug tracking process, providing repeatable tests, and reducing development time and cost by allowing for improved coordination and communication among team members.
Exposure
One of the most important benefits of introducing this type of knowledge management technology is the exposure that it gives to undergrads. By giving them this exposure before they first enter the workforce they will be familiar with bug tracking systems.
This will be an advantage as when they start new projects there are many other things that they will likely need to be brought up to speed on such as coding standards, libraries, and development environments. Eliminating the need to learn how to use a bug tracking system will allow them to become productive members of the team in a shorter time span.
 

3D Graphics Library for Mobile Applications

The scope of this project is to develop a 3D graphics library for embedded systems such as mobile devices. The library implements the basic APIs needed to render a moving 3D object on the screen. This subset of the API contains the basic operations necessary for rendering 3D pictures, games, and movies. The APIs are implemented based on the OpenGL ES 1.1 specification.
The activities carried out include Study of OpenGL based on the specifications, study and comparison of OpenGLES with OpenGL, understanding and analysis of the State Variables that depict the various states in the functionality, which is to be added, new architecture & design proposal, design for new architecture & Data Structures, identification of sub-set of APIs, implementation, testing.
The APIs to be implemented were prioritized based on their role in rendering the object on the screen, adding color and lighting effects to the object, and storing the object's coordinates.
Finally come the movements of the object, such as rotating the object through an angle on the screen, translating the object across the screen through a distance, and scaling, which determines the position at which the object appears on the screen based on the depth-buffer values. The coordinates of the object and the color coordinates corresponding to the RGBA values are stored and then rendered using various state variables.
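As a hedged, library-free Java sketch of these per-vertex operations (illustrative only; the actual library exposes them through OpenGL ES 1.1 matrix calls), rotation, translation, and scaling of a single vertex can be expressed as:

```java
// Illustrative per-vertex transforms: rotate about the Z axis, translate, scale.
// The real library would apply these through OpenGL ES modelview-matrix calls.
public class VertexTransformSketch {

    // A vertex position in 3D space.
    static class Vec3 {
        double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        public String toString() { return String.format("(%.3f, %.3f, %.3f)", x, y, z); }
    }

    static Vec3 rotateAboutZ(Vec3 v, double degrees) {
        double r = Math.toRadians(degrees);
        return new Vec3(v.x * Math.cos(r) - v.y * Math.sin(r),
                        v.x * Math.sin(r) + v.y * Math.cos(r),
                        v.z);
    }

    static Vec3 translate(Vec3 v, double dx, double dy, double dz) {
        return new Vec3(v.x + dx, v.y + dy, v.z + dz);
    }

    static Vec3 scale(Vec3 v, double s) {
        return new Vec3(v.x * s, v.y * s, v.z * s);
    }

    public static void main(String[] args) {
        Vec3 vertex = new Vec3(1, 0, 0);
        // Rotate 90 degrees about Z, move along X, then shrink by half.
        Vec3 transformed = scale(translate(rotateAboutZ(vertex, 90), 2, 0, 0), 0.5);
        System.out.println(transformed);   // approximately (1.000, 0.500, 0.000)
    }
}
```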
What is OpenGL?
OpenGL is a software interface to graphics hardware. This interface consists of about 150 distinct commands that you use to specify the objects and operations needed to produce interactive three-dimensional applications.
OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many different hardware platforms. To achieve these qualities, no commands for performing windowing tasks or obtaining user input are included in OpenGL.
OpenGL doesn’t provide high-level commands for describing models of three-dimensional objects. These commands might allow you to specify relatively complicated shapes such as automobiles, parts of the body, airplanes, or molecules. With OpenGL, we can build up our desired model from a small set of geometric primitives - points, lines, and polygons.
A sophisticated library that provides these features could certainly be built on top of OpenGL. The OpenGL Utility Library (GLU) provides many of the modeling features, such as quadric surfaces and NURBS curves and surfaces. GLU is a standard part of every OpenGL implementation.
Also, there is a higher-level, object-oriented toolkit, Open Inventor, which is built atop OpenGL, and is available separately for many implementations of OpenGL.
OpenGL Rendering Pipeline
Most implementations of OpenGL have a similar order of operations, a series of processing stages called the OpenGL rendering pipeline. This ordering describes how OpenGL is implemented and provides a reliable guide for predicting what OpenGL will do.
OpenGL takes a Henry Ford assembly-line approach to processing data. Geometric data (vertices, lines, and polygons) follow a path through the stages that include evaluators and per-vertex operations, while pixel data (pixels, images, and bitmaps) are treated differently for part of the process. Both types of data undergo the same final steps (rasterization and per-fragment operations) before the final pixel data is written into the framebuffer.
Advantages of the proposed system of OpenGLES over the existing system and its Applications
• OpenGL is concerned only with rendering into a framebuffer (and reading values stored in that framebuffer).
• There is no support for other peripherals sometimes associated with graphics hardware, such as mice and keyboards.
• Programmers must rely on other mechanisms to obtain user input.
• The GL draws primitives subject to a number of selectable modes. Each primitive is a point, line segment, polygon, or pixel rectangle. Each mode may be changed independently; the setting of one does not affect the settings of others (although many modes may interact to determine what eventually ends up in the framebuffer).
• Modes are set, primitives specified, and other GL operations described by sending commands in the form of function or procedure calls.
• Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of an edge, or a corner of a polygon where two edges meet. Data (consisting of positional coordinates, colors, normals, and texture coordinates) are associated with a vertex and each vertex is processed independently, in order, and in the same way. The only exception to this rule is if the group of vertices must be clipped so that the indicated primitive fits within a specified region; in this case vertex data may be modified and new vertices created. The type of clipping depends on which primitive the group of vertices represents.
• Commands are always processed in the order in which they are received, although there may be an indeterminate delay before the effects of a command are realized. This means, for example, that one primitive must be drawn completely before any subsequent one can affect the framebuffer. It also means that queries and pixel read operations return state consistent with complete execution of all previously invoked GL commands, except where explicitly specified otherwise.
 

Design And Analysis Of Sense Amplifier

This project includes a study of the processes involved in VLSI design. The analog properties of the circuit have been considered throughout the project. The sense amplifier is an essential circuit in a RAM.
The purpose of the circuit is to sense the low-swing signal and read it from the RAM cells. The report explains how the design is done based on the newer 65 nm and 45 nm technology models.
A differential voltage sense amplifier has been considered here. In practice there might be some changes based on the model files received from the foundry.
The designed circuit is analyzed and then fine-tuned by examining the output against the requirements. The design is basically a trade-off between power consumption, frequency of operation, and similar factors.
Once the schematic (circuit diagram) was frozen, the layout was done using different methods and their performance compared.
Throughout this project I have used LTspice for circuit simulation and Microwind Lite for layout and performance comparison. These tools are freely available.
A sense amplifier is an active circuit that reduces the time of propagation from an accessed memory cell to the logic circuit located at the periphery of the memory cell array, and converts the arbitrary logic levels occurring in a bit line to the digital logic level of the peripheral.
Layout area restrictions are specific to the memory design. In memories sense amplifier layout should fit either in the bit line pitch when each bit line requires individual data sensing or in the decoder pitch when a multiplicity of bit lines are connected to a single sense amplifier.
Sense amplifiers are classified by circuit types such as differential and non differential, and by operation modes such as voltage, current and charge sense amplifiers.
Working
A sense amplifier is nothing but a differential voltage amplifier with a current mirror load.
The circuit contains a constant current sink ISS. The current through it is always constant, and hence the sum of iD1 and iD2 is always constant. To keep ISS constant, if one branch current decreases, the other branch current increases.
The load of this circuit is a current mirror, formed by transistors M3 and M4. When a differential input is applied (the bit and bit' lines), let us assume the higher voltage is applied to the gate of transistor M1 and its complement to the gate of M2. In this case the current through M1, that is iD1, increases and the current iD2 decreases. Since iD1 = iD3, the current mirror draws the same current as iD1, which results in a mirrored current iD4 = iD3 = iD1.
It can be observed that the current iD4 increases whereas iD2 decreases. Also note that ISS must remain constant, and hence whenever iD2 decreases, iD1 increases and iD4 increases with it. If the increase in iD1 is ΔI, then the decrease in iD2 is also ΔI.
At the output there is therefore a difference of 2ΔI in current, and this excess 2ΔI is sunk by the output. This means that a current difference of ΔI at the input produces a current of 2ΔI flowing through the output. Thus the voltage difference is converted into a current difference, this current is amplified at the output, and it is finally converted back into a voltage at the output.
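Summarizing the branch currents under the assumption of a small-signal increment ΔI (a hedged restatement of the argument above, not a new result):

```latex
% Differential pair with current-mirror load (small-signal view)
i_{D1} = \tfrac{I_{SS}}{2} + \Delta I, \qquad
i_{D2} = \tfrac{I_{SS}}{2} - \Delta I, \qquad
i_{D4} = i_{D3} = i_{D1}
% Net current available at the output node
i_{out} = i_{D4} - i_{D2} = 2\,\Delta I
```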
The ISS keeps the current always constant and because of this it brings one of the transistors to higher saturation by drawing more current through that branch when the other branch supplies reduced current.
 

Cold Boot Attack

Contrary to popular assumption, DRAMs used in most modern computers retain their contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard.
Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images.
Researchers at Princeton University have shown that there is a surprisingly large number of machines on which the contents of RAM survive undamaged well after the system BIOS or boot code has finished running, and this can be exploited. To demonstrate this, we will try to capture and analyze the memory contents after the system is powered off.
This project is a proof of concept (PoC) for capturing memory dumps from an Intel x86-64 based PC. RAM persistence can be exploited using both hardware and software mechanisms.
However, the exploit requires a certain amount of specialized expertise and a willingness and/or opportunity to dissect the obtained information, which will not necessarily be in human-readable form.
What Is A Cold Boot Attack?
In cryptography, a cold boot attack is a type of side-channel attack in which an attacker with physical access to a computer is able to retrieve a user's sensitive information from a running operating system after using a cold reboot to restart the machine from a completely "off" state.
The attack relies on the data remanence property of DRAM and SRAM to retrieve memory contents which remain readable in the seconds to minutes after power has been removed.
It has been known since the 1970s that DRAM cell contents survive to some extent even at room temperature and that retention times can be increased by cooling. In a 1978 experiment, a DRAM showed no data loss for a full week without refresh when cooled with liquid nitrogen.
Machines using newer memory technologies tend to exhibit a shorter time to total decay than machines using older memory technologies, but even the shorter times are long enough to facilitate most of the attacks.
Launching An Attack:
Step 1: Powering Off The Machine:
Simple reboots. The simplest attack is to reboot the machine and configure the BIOS to boot an imaging tool. A warm boot, invoked with the operating system's restart procedure, gives software an opportunity to wipe sensitive data prior to shutdown. A cold boot, initiated using the system's restart switch or by briefly removing and restoring power, will result in little or no decay, depending on the memory's retention time. Restarting the system in this way denies the operating system and applications any chance to scrub memory before shutting down.
Step 2: Fetching The Contents Of The RAM:
For this, simply place the DRAM in another machine and start that system. Alternatively, keep the RAM in the same machine, attach a bootable USB flash drive to a USB port, and reboot the system. Note that the boot priority of the system must be set to 'External USB Drive' and not to 'Internal Hard Drive'; otherwise the system will reboot into its native operating system.
Having done this, the memory-imaging tool, or scraper, present on the USB drive starts executing. It fetches the memory dump from the RAM onto the USB drive.
Step 3: Making The Memory Dump Readable:
After taking the memory dump of the RAM onto a USB drive, it can now be analysed. For this purpose, the data can be read straight out of the dump, either by dumping it to a flat file using 'dd' or by examining it in place. For our experiment, we will dump the data to a flat file. We also extract the human-readable content to a separate file.
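As a hedged sketch of that last extraction step (a simple 'strings'-style scan over the raw dump; the file names are illustrative), the human-readable runs can be pulled out in Java as follows:

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Scans a raw memory dump and writes out runs of printable ASCII characters,
// similar to the Unix 'strings' utility. File names are illustrative.
public class DumpStrings {

    private static final int MIN_RUN = 4;   // ignore runs shorter than this

    public static void main(String[] args) throws IOException {
        String dumpFile = args.length > 0 ? args[0] : "memdump.bin";
        String outFile  = args.length > 1 ? args[1] : "readable.txt";

        try (InputStream in = new BufferedInputStream(new FileInputStream(dumpFile));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(outFile))) {

            StringBuilder run = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) {
                if (b >= 0x20 && b <= 0x7E) {   // printable ASCII byte
                    run.append((char) b);
                } else {
                    writeRun(run, out);
                }
            }
            writeRun(run, out);                 // flush a trailing run, if any
        }
    }

    private static void writeRun(StringBuilder run, OutputStream out) throws IOException {
        if (run.length() >= MIN_RUN) {
            out.write(run.toString().getBytes());
            out.write('\n');
        }
        run.setLength(0);
    }
}
```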
 

Chat Server and Client Application

The aim of this project is to develop a chat application using a client-server architecture that relies on the socket programming provided by Java.
CONTEXT: There are many clients (geographically spread). At any time, these clients can establish a connection to a centralized server and share their ideas through the chat application. One-to-one conversation via private chat is also provided in the developed application.
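As a hedged, minimal sketch of the underlying socket mechanics (the port number and class name are illustrative; the full project adds the AWT/Swing UI, registration, validation, and private chat on top of this), a broadcast server loop might look like this:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// Minimal broadcast chat server: every line received from one client is
// forwarded to all connected clients.
public class MiniChatServer {

    private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {
            System.out.println("Chat server listening on port 5000");
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> handle(socket)).start();   // one thread per client
            }
        }
    }

    private static void handle(Socket socket) {
        PrintWriter out = null;
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            out = new PrintWriter(socket.getOutputStream(), true);
            clients.add(out);
            String line;
            while ((line = in.readLine()) != null) {
                for (PrintWriter client : clients) {
                    client.println(line);                    // broadcast to everyone
                }
            }
        } catch (IOException ignored) {
        } finally {
            if (out != null) clients.remove(out);            // drop disconnected clients
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}
```

A client would simply open new Socket(serverName, 5000), write its messages as lines to the socket, and read the broadcast lines back on a separate thread.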
COMPLEXITY INVOLVED:
• Concept of broadcast and private chat.
• Concept of enabling and disabling users.
• Concept of viewing, editing, and deleting user records.
• Transferring the text.
• Designing the UI using AWT/Swing.
Scope / Functional Requirements:
The complete system can be divided in following modules:
1. User Management Module: This module has two sub-modules:
1. User Registration Module
2. User Validation Module
2. Change Password Module
3. Administration Module: registers new user records; enables and disables users; views, edits, and deletes user records.
4. Server Module: This module is used to enter the server name (PC name) where the server is running.
5. Broadcast Chat Module: This module is used for broadcast chatting and displays the user's inbox, outbox, and the list of users.
6. Private Chat Module: This module is used for private chatting and displays the user's inbox and outbox.
Users of the System:
• Administrator: The administrator can register new users and enable or disable users.
• The administrator can see the list of registered users and can view, edit, and delete user records.
• A disabled user cannot log in.
• The administrator can search user records through this module.
• The administrator can send a message to all registered users.
• The administrator can chat privately with a particular user.
Chat User:
• An unregistered user can register through the login module.
• After registering, the user can log in by entering a valid user name and password.
• A user can change his or her password.
• A user can send a message to all registered users.
• A user can chat privately with a particular user.

Comparison Of Clustering Algorithms

Clustering is one of the important streams in data mining useful for discovering groups and identifying interesting distributions in the underlying data.
This project aims in analyzing and comparing the partitional and hierarchical clustering algorithms namely DBSCAN and k-means (partitional) with Agglomerative and CURE (hierarchical).
The comparison is done based on the extent to which each of these algorithms identify the clusters, their pros and cons and the timing that each algorithm takes to identify the clusters present in the dataset.
For each clustering algorithm, computation time was measured as the size of the data set increased. This was used to test the scalability of the algorithm and whether it could be decomposed and executed concurrently on several machines.
k-means is a partitional clustering technique that identifies k clusters from a given set of n data points in d-dimensional space. It starts with k random centers and a single cluster and refines them at each step, arriving at k clusters. The time complexity of our k-means implementation is O(I * k * d * n), where I is the number of iterations. Using a KD-tree data structure in the implementation can further reduce the complexity to O(I * k * d * log n).
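As a hedged sketch of the basic k-means loop (Lloyd's algorithm, without the KD-tree optimization; the sample data and seed are illustrative), the assignment and update steps look like this:

```java
import java.util.Arrays;
import java.util.Random;

// Basic k-means (Lloyd's algorithm): assign each point to the nearest center,
// then recompute each center as the mean of its assigned points.
public class KMeansSketch {

    static int[] cluster(double[][] points, int k, int iterations, long seed) {
        int n = points.length, d = points[0].length;
        Random random = new Random(seed);
        double[][] centers = new double[k][];
        for (int c = 0; c < k; c++) {
            centers[c] = points[random.nextInt(n)].clone();   // random initial centers
        }

        int[] assignment = new int[n];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: nearest center by squared Euclidean distance.
            for (int i = 0; i < n; i++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dist = 0;
                    for (int j = 0; j < d; j++) {
                        double diff = points[i][j] - centers[c][j];
                        dist += diff * diff;
                    }
                    if (dist < bestDist) { bestDist = dist; best = c; }
                }
                assignment[i] = best;
            }
            // Update step: move each center to the mean of its points.
            double[][] sums = new double[k][d];
            int[] counts = new int[k];
            for (int i = 0; i < n; i++) {
                counts[assignment[i]]++;
                for (int j = 0; j < d; j++) sums[assignment[i]][j] += points[i][j];
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                    for (int j = 0; j < d; j++) centers[c][j] = sums[c][j] / counts[c];
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        double[][] data = { {1, 1}, {1.2, 0.8}, {8, 8}, {8.2, 7.9} };
        System.out.println(Arrays.toString(cluster(data, 2, 10, 42)));  // e.g. [0, 0, 1, 1]
    }
}
```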
DBSCAN discovers clusters of arbitrary shape relying on a density based notion of clusters. Given eps as the input parameter, unlike k-means clustering, it tries to find out all possible clusters by classifying each point as core, border or noise.
DBSCAN can be expensive, as computing nearest neighbors requires computing all pairwise proximities. An additional implementation uses KD-trees to store the data, which allows efficient retrieval and brings the time complexity down from O(m^2) to O(m log m).
Agglomerative Hierarchical Clustering is one of the non-parametric approaches to Clustering which is based on the measures of the dissimilarities among the current cluster set in each iteration.
In general, we start with the points as individual clusters and, at each step, merge the closest pair of clusters according to a chosen notion of cluster proximity. We will implement linkage-based variants, namely single-linkage and complete-linkage clustering.
We will be analyzing the advantages and drawbacks of Agglomerative Hierarchical Clustering by comparing it with the other Algorithms CURE, DBSCAN and K-Means.
CURE clustering algorithm helps in attaining scalability for clustering in large databases without sacrificing quality of the generated clusters. The algorithm uses KD-Trees and Min Heaps for efficient data analysis and repetitive clustering.
The random sampling, partitioning of clusters and two pass merging helps in scaling the algorithm for large datasets. Our implementation would provide a comparative study of CURE against other partitioning and hierarchical algorithm.
Observations regarding DBSCAN Issues
The following are our observations:
• The DBSCAN algorithm performs efficiently for low-dimensional data.
• The algorithm is robust towards outliers and noise points.
• Using a KD-tree improves efficiency over the traditional DBSCAN algorithm.
• DBSCAN is highly sensitive to the user parameters MinPts and Eps; a slight change in their values may produce different clustering results, and prior knowledge of suitable values cannot be inferred easily.
• The dataset cannot be sampled, as sampling would affect the density measures.
• The algorithm is not partitionable across multi-processor systems.
• DBSCAN fails to identify clusters if density varies or if the dataset is too sparse.

Sunday 27 September 2015

Web Mining

Web Mining plays an important role in the e-commerce era. Web mining is the integration of web traffic with other traditional business data like sales automaton system, inventory management, accounting, customer profile database, and e-commerce databases to enable the discovery of business co-relations and trends.
The system basically deals with web configuration over the network. The web is divided into various domains for hosting and supporting any number of websites, protected against viruses, spam, and hackers. The web manager has to analyze the visualization structure of the webs to manage access details, and the graph structure for the sorting process. When data is transferred over the network it is protected by encoding, and proactive management support continually monitors and automatically improves the network topology and configuration in real time based on route efficiency and end-user performance, ensuring the fastest and most reliable network connections.
Following is a list of possible challenges.
Identification of the origin of the visitor:
To get more out of the clickstream data, it is necessary to characterize the web site visitors based on their demographics. Customers are identified by the IP address of the connection from which they access the web site.
Calculation of the Dwell time for a content page:
The time spent by the visitor on a particular page provides a good measure showing the interests of the visitor.
Identification of a User Session:
A visitor can be characterized by studying his or her browsing behavior in a session, which is a collection of web-based transactions related by time.
Managing Web-site Structure Information:
The structure of the web site is important information, and it changes continuously as electronic documents are created and maintained.
This system uses Trace graph, a data presentation system for Network Simulator traces over the Internet. The Trace graph system provides many options for analysis and can calculate many parameters characterizing a network simulation.
The simulator leaves a lot of statistical data as the output of a particular simulation across various domains and domain functionalities. Using this data, the network can be analyzed for its performance. The analysis may include capturing information from the simulator and drawing graphs; according to this network performance, the domain can be configured in an SEO system to provide better search results to end users.
Hosting Layer: This module describes hosting web servers into the web for various end users; the hosting mechanism covers the system's hosting functionality across the various servers and their respective memory usage.
Access Mode: The basic functionality of this module is to provide access permissions for the various web servers, supporting a global system where a server is registered under various domain levels.
Control Panel Layer: This layer creates the control structure of the system for managing its various functionalities, such as controlling domains and sub-domains and the user access structure.
Performance Layer: This layer monitors the performance of the web servers against the accesses of various users; it records how the system supports multiple end users while serving them.
Network Analyzer Mode: This mode monitors the various network accesses between the network layer standards, describing how data is transferred over the network and the respective actions.
Visualization Manager: This module supports the network trace graph, visualizing the performance factors of the various network services and showing a performance graph for the various servers.
Report Manager: This module covers the various MIS reports on web servers, web traffic, user access, and so on.
 

Employee Tracking System

In today's world, man struggles to make his life easier. The need for tracking has assumed high importance because of varied and diverse resources, be it a product being shipped from a company to a consumer, be it assets, be it supply chain management, or for that matter even man-power.
In large organizational buildings, where the man-power is high, people are not always in their cabins. They have to wander from room to room and floor to floor to perform their work. In such cases, it becomes extremely difficult to keep track of people and find them when they are needed.
The solution to the above problem is a tracking system that can track an individual when they enter a room. This process should take place in a hassle-free manner, and therefore a wireless system is advantageous.
A receiver can be placed in each of the rooms in the building and connected to a computer system, which can take input from the connected receiver and enter it in a database covering all individuals or personnel in the building. The receiver receives input from a transmitter given to all the people working in the building.
Since all the information is logged in a database, any person in the building will be able to access this information through any computer connected to the system and learn the location of the person he or she is seeking.
The system architecture follows a simple format of a central system and peripheral systems.
1. Peripheral systems: nodes and desktops that accept information from the RFID readers mounted on the doors or at the entry to each department.
2. Central system: a server that handles all information from the desktops. The server also manages time, attendance, logs, and the database.
3. The front end consists of a GUI for the administrator who performs the tracking.
4. The back end consists of a database that maintains the logs, and the time and attendance of employees.
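As a hedged sketch of how a peripheral node might record one RFID read into the central database (the JDBC URL, credentials, table, and column names are illustrative assumptions, not part of the actual system), the logging step could look like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

// Illustrative sketch: a peripheral node records one RFID read event
// (employee tag, reader/door id, time) into the central log table.
public class RfidLogClient {

    // JDBC URL, credentials, table, and column names are assumptions for illustration.
    private static final String DB_URL = "jdbc:mysql://central-server/tracking";

    static void logReadEvent(String tagId, String readerId) throws SQLException {
        String sql = "INSERT INTO movement_log (tag_id, reader_id, read_time) VALUES (?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(DB_URL, "tracker", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, tagId);
            stmt.setString(2, readerId);
            stmt.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            stmt.executeUpdate();
        }
    }

    public static void main(String[] args) throws SQLException {
        // e.g. employee tag E1023 passed the reader at the HR department door.
        logReadEvent("E1023", "DOOR-HR-01");
    }
}
```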
PROJECT MODULE:
A software system is always divided into several subsystems, which makes it easier to develop and test. The different subsystems are known as modules, and the process of dividing an entire system into subsystems is known as modularization or decomposition. The different modules are:
1. Add New Employee: This operation is performed when a new employee needs to be added to the system, e.g. when the company recruits new employees, their entries are inserted in the employee database. This option has three choices.
2. Manage Log: When an employee passes through a door, his information is retrieved from the database, and a log of the entire day is stored in the database, i.e. which departments the employee has passed through.
3. Time and Attendance: This module records the time an employee enters the company and maintains the employee's attendance.
4. Tracking: This module is used to track a particular employee and gives the route followed by the employee throughout the day in the company.
5. Authentication of Employee: When an employee passes through a door using an RFID tag read by an RFID reader, he must be authenticated by capturing an image of the employee. If any employee has someone else acting as a proxy, it can be easily identified by the administrator and appropriate action taken against that employee.
6. Provide Interactive GUI: This gives the user simple interaction with the system.
7. Administrator Login: This provides authentication for the administrator of the system.
REQUIREMENT ANALYSIS
Hardware Requirement Specification:
Processor: Pentium 4 at 2.4 GHz or above.
Hard disk: 20 GB or above.
RAM: 256 MB or more (recommended).
Display: monitor.
Input devices: keyboard, mouse.
Web camera, RFID reader and tags, LAN connection cable.
 

Ant Colony Optimization Technique For Manets

The purpose of this project is to provide a clear understanding of the ants-based algorithm by giving a formal and comprehensive systematization of the subject. The simulation developed in Java will support a deeper analysis of the factors of the algorithm, its potential, and its limitations.
Swarm intelligence (SI) is a type of artificial intelligence based on the collective behavior of decentralized, self-organized systems. The expression was introduced by Gerardo Beni and Jing Wang in 1989 in the context of cellular robotic systems.
SI systems are typically made up of a population of simple agents or boids interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, the local, and to a certain degree random, interactions between such agents lead to the emergence of intelligent global behavior.
PARTICLE SWARM OPTIMISATION
Particle swarm optimization (PSO) is a swarm intelligence based algorithm to find a solution to an optimization problem in a search space, or model.
ANT COLONY OPTIMISATION
The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. It belongs to the family of ant colony algorithms within swarm intelligence methods. The first algorithm of this family aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since been diversified to solve a wider class of numerical problems, and as a result several variants have emerged, drawing on various aspects of the behavior of ants.
OBJECTIVES
§ Propose an accessible approach to the Ant Colony Algorithm, with appropriate vocabulary and a global explanation, as well as details about its behaviour.
§ Develop a Java application which shows the working of the algorithm and gives a better understanding of it.
§ Give a straightforward analysis of the state-of-the-art studies on ants-based routing algorithms and the implementations that have been done.
THE SOURCE OF INSPIRATION: THE ANTS
An ant as a single individual has very limited effectiveness, but as part of a well-organised colony it becomes one powerful agent working for the development of the colony. The ant lives for the colony and exists only as a part of it. Each ant is able to communicate, learn, and cooperate, and all together they are capable of developing themselves and colonising a large area. They achieve such great successes by increasing the number of individuals and being exceptionally well organised.
The self-organising principles they use allow a highly coordinated behaviour of the colony. Pierre-Paul Grassé, a French entomologist, was one of the first researchers to investigate the social behaviour of insects. He discovered that these insects are capable of reacting to what he called "significant stimuli", signals that activate a genetically encoded reaction.
He observed that the effects of these reactions can act as new significant stimuli for both the insect that produced them and for the other insects in the colony. Grassé used the term stigmergy to describe this particular type of indirect communication, in which the workers are stimulated by the performance they have achieved.
Stigmergy is defined as a method of indirect communication in a self-organizing emergent system where its individual parts communicate with one another by modifying their local environment.
Ants communicate with one another by laying down pheromones along their trails, so where ants go within and around their colony is a stigmergic system. In many ant species, ants walking to or from a food source deposit a substance called pheromone on the ground.
Other ants are able to smell this pheromone, and its presence influences the choice of their path; that is, they tend to follow strong pheromone concentrations. The pheromone deposited on the ground forms a pheromone trail, which allows the ants to find good sources of food that have been previously identified by other ants.
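As a hedged sketch of how this pheromone behaviour translates into the Ant Colony Optimization algorithm (the standard probabilistic edge choice plus evaporation and deposit rules, with illustrative parameter values; not the project's exact code), the core steps might be written as:

```java
import java.util.Random;

// Core ACO mechanics for one ant standing at a node: choose the next edge with
// probability proportional to pheromone^alpha * (1/distance)^beta, then
// evaporate and deposit pheromone along a completed tour.
public class AcoSketch {

    static final double ALPHA = 1.0;   // pheromone influence (illustrative value)
    static final double BETA = 2.0;    // distance influence (illustrative value)
    static final double RHO = 0.5;     // evaporation rate (illustrative value)
    static final Random RNG = new Random();

    // Roulette-wheel selection of the next node among the unvisited candidates.
    static int chooseNext(double[] pheromone, double[] distance, boolean[] visited) {
        double[] weight = new double[pheromone.length];
        double total = 0;
        for (int j = 0; j < pheromone.length; j++) {
            if (!visited[j]) {
                weight[j] = Math.pow(pheromone[j], ALPHA) * Math.pow(1.0 / distance[j], BETA);
                total += weight[j];
            }
        }
        double r = RNG.nextDouble() * total;
        for (int j = 0; j < weight.length; j++) {
            if (!visited[j]) {
                r -= weight[j];
                if (r <= 0) return j;
            }
        }
        // Fallback for floating-point round-off: return the last unvisited node.
        for (int j = weight.length - 1; j >= 0; j--) {
            if (!visited[j]) return j;
        }
        return -1;   // all nodes visited
    }

    // Evaporate existing pheromone, then reinforce the edges of a completed tour.
    static void updatePheromone(double[][] pheromone, int[] tour, double tourLength) {
        for (double[] row : pheromone) {
            for (int j = 0; j < row.length; j++) row[j] *= (1.0 - RHO);
        }
        double deposit = 1.0 / tourLength;   // shorter tours deposit more per edge
        for (int i = 0; i < tour.length - 1; i++) {
            pheromone[tour[i]][tour[i + 1]] += deposit;
            pheromone[tour[i + 1]][tour[i]] += deposit;
        }
    }
}
```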
 

Empirical Model of HTTP Network Traffic

The workload of the global Internet is dominated by the Hypertext Transfer Protocol (HTTP), an application protocol used by World Wide Web clients and servers.
Simulation studies of IP networks require a model of the traffic patterns of the World Wide Web in order to investigate the effects of this increasingly popular application.
We have developed an empirical model of the network traffic produced by HTTP. Instead of relying on server or client logs, our approach is based on packet traces of HTTP conversations.
Through traffic analysis, we have determined statistics and distributions for higher-level quantities such as the size of HTTP files, the number of files per "Web page", and user browsing behavior. These quantities form a model that can then be used by simulations to mimic World Wide Web network applications.
Our model of HTTP traffic captures logically meaningful parameters of Web client behavior, such as file sizes and "think times". The traffic traces described in the preceding section provide us with empirical probability distributions describing various components of this behavior. We use these distributions to determine a synthetic workload.
At the lowest level, our model deals with individual HTTP transfers, each of which consists of a request-reply pair of messages, sent over a single TCP connection.
We model both the request length and reply length of HTTP transfers. At first glance, it may seem more appropriate for a model of network traffic to deal with the number, size, and interarrival times of TCP segments. (It is also appropriate to model the first HTTP transfer on a Web page separately from subsequent retrievals for that page; for simplicity, we postpone discussion of this distinction.) However, we note that these quantities are governed by the TCP flow control and congestion control algorithms.
These algorithms depend in part on the latency and effective bandwidth of the path between the client and server. Since this information cannot be known a priori, an accurate packet-level network simulation will depend on a simulation of the actual TCP algorithms.
This is in fact the approach taken for other types of TCP bulk transfers in the traffic model described in [10]. In a similar fashion, our model generates transfers which need to be run through TCP's algorithms; it does not generate packet sizes and interarrivals by itself.
A Web document can consist of multiple files. A server and client may need to employ multiple HTTP transactions, each of which requires a separate TCP connection, to transfer a single document. For example, a document could consist of HTML text [3], which in turn could specify three images to be displayed "inline" in the body of the document.
Such a document would require four TCP connections, each carrying one request and one reply. The next level above individual files is naturally the Web document, which we characterize in terms of the number of files needed to represent a document.
Between Web page retrievals, the user is generally considering her next action. We acknowledge the difficulty of characterizing user behavior, due to its dependency on human factors beyond the scope of this study.
However, we can model user think time based on our observations. Assuming that users tend to access strings of documents from the same server, we characterize the locality of reference between different Web pages.
We therefore define the consecutive document retrievals distribution as the number of consecutive pages that a user will retrieve from a single Web server before moving to a new one. Finally, the server selection distribution defines the relative popularity of each Web server, in terms of how likely it is that a particular server will be accessed for a set of consecutive document retrievals.
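As a hedged sketch of how such distributions could drive a synthetic workload generator (the sampling functions below use placeholder parametric forms, not the measured empirical distributions from the traces), each simulated page visit draws a file count, per-file request and reply sizes, and a think time:

```java
import java.util.Random;

// Sketch of a synthetic HTTP workload generator driven by per-quantity distributions.
// The sampling here uses placeholder parametric forms; a real generator would
// sample from the measured empirical CDFs described in the text.
public class HttpWorkloadSketch {

    private final Random rng = new Random();

    // Placeholder draws; real versions would invert the empirical CDFs.
    int filesPerPage()      { return 1 + rng.nextInt(5); }                            // 1..5 files
    long requestBytes()     { return 200 + rng.nextInt(300); }                        // small requests
    long replyBytes()       { return (long) Math.exp(6 + 3 * rng.nextGaussian()); }   // heavy-tailed replies
    double thinkTimeSecs()  { return -30.0 * Math.log(1 - rng.nextDouble()); }        // exponential think time

    void simulatePageVisit(int pageNumber) {
        int files = filesPerPage();
        System.out.printf("Page %d: %d file(s)%n", pageNumber, files);
        for (int f = 0; f < files; f++) {
            // Each file is one request/reply pair carried over its own TCP connection.
            System.out.printf("  transfer %d: request=%d bytes, reply=%d bytes%n",
                    f + 1, requestBytes(), replyBytes());
        }
        System.out.printf("  think time before next page: %.1f s%n", thinkTimeSecs());
    }

    public static void main(String[] args) {
        HttpWorkloadSketch workload = new HttpWorkloadSketch();
        for (int page = 1; page <= 3; page++) {
            workload.simulatePageVisit(page);
        }
    }
}
```

In a full simulation these generated transfers would then be handed to the simulated TCP algorithms, as the text notes, rather than being turned into packets directly.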