Saturday 2 May 2015

Email Flyer

Email Flyer is an online marketing system that promotes, advertises and sells products through Internet mailing. E-mails are sent with the purpose of acquiring new customers or convincing existing customers to buy something immediately. Money travels up the chain, and some levels of users earn a commission. Those who want to market a product, whether a company or an individual, contact the administrator for marketing.
The administrator registers the client with the details provided. Once the client is registered, products can be registered based on the client's request. Only after a registered product is launched does it become available for e-mail marketing. The product details are then sent to different users at random. Users receive the product details by mail and can buy the product if they are interested, or forward it to other users, but only after becoming members of Email Flyer.
E-mail marketing is a form of direct marketing which uses electronic mail as a means of communicating commercial or fund-raising messages to different users. In its broadest sense, every e-mail sent to a potential or current customer could be considered e-mail marketing. However, the term usually refers to:
Sending e-mails with the purpose of enhancing the relationship of a merchant with its current or old customers and to encourage customer loyalty and repeat business.
Sending e-mails with the purpose of acquiring new customers or convincing old customers to buy something immediately.
Adding advertisements in e-mails sent by other companies to their customers.
Emails that are being sent on the Internet.
Uniqueness in Email Marketing
Email marketing offers a number of advantages over traditional forms of direct marketing:
Greater speed — E-mail is probably the fastest way to reach your prospects and customers. It reaches recipients much quicker than postal mail, print advertising, catalogs or other forms of marketing. In addition, there is much less production time required to create an e-mail campaign than a print campaign. There may be very little elapsed time between developing a campaign, executing it and seeing results. E-mail is not only well suited for pre-planned campaigns, it is a powerful tactic for short-term opportunities arising throughout the year.
Cost effective — E-mail marketing is typically cheaper than print marketing. This is absolutely the case when doing your e-mail in-house. And while some rented e-mail prospect lists cost more per name than postal lists, the money you save on production, printing and postage can more than offset the cost of lists.
Easier to test different messages and offers — With e-mail you can easily test different subject lines, different messages and different offers because few production resources are required. You can segment your lists and put different text in for each group. Or you can try several different layouts by simply moving a few elements around in HTML.
Immediate results — Most e-mail services and software programs allow you to easily track the results of a campaign, almost from the minute you send it out. Reports can tell you who opens the e-mail, what links are clicked, what messages were not delivered, which creative or offer gets the best response and other information. This type of data is extremely valuable when refining your campaign.
Benefits for Consumers through Email Marketing
Convenience
The ability to purchase products and services over the Internet provides an attractive alternative to conventional shopping practices. Internet shoppers can research and buy products and services online according to their individual schedules. The Internet offers consumers the convenience of being able to shop 24 hours a day, seven days a week, and in the privacy of their own homes, for products and services from around the world.
Price-Competition
The Internet puts comparison shopping at a consumer's fingertips. Studies have demonstrated that products in certain retail categories, especially mid-to-high-priced commodity-oriented items, sell for lower prices online than in traditional stores.
Selection
Purchasing products and services over the Internet offers consumers the opportunity to find a much broader and deeper selection of items.
Customization
Purchasing on the Internet provides consumers with the opportunity to customize products to their individual needs or desires. A good example is the Dell Computer Internet site that allows shoppers to custom build their own computer hardware and software configurations. In addition to consumers being able to select exactly what they want, retailers also benefit by custom building or ordering products, thus optimizing their inventory management practices.
Information
Many consumers use the Internet as a source of product information before they buy, even if they don't actually make a purchase on the Internet. The Internet provides retailers with a low-cost distribution channel for disseminating all types of business information. As a result, Internet consumers are often able to access information that would not otherwise be available to them.
Entertainment
In a recent survey of consumers who have made a purchase using the Internet, 25% stated that they purchase on the Internet because it is more fun than traditional shopping. This ranked as the fourth most popular reason, behind convenience, selection, and price. In addition, the use of multimedia technology, and the novelty of discovering a new way of doing things are unique qualities that make shopping on the Internet a unique experience.
Customer-Service-Level
The Internet serves as a communication tool between consumers and retailers, through the use of e-mail, or other online feedback mechanisms. With Internet retailing this dialogue does not need to stop at the end of the transaction or even at the time of product delivery. These feedback mechanisms allow a retailer to maintain an ongoing dialogue with consumers in order, among other things, to ensure that they are fully satisfied with their purchase.
Advantages of Proposed System
Compared to other media investments such as direct mail or printed newsletters, it is less expensive.
Return on investment has proven to be high when done properly and e-mail marketing is often reported as second only to search marketing as the most effective online marketing tactic.
It is instant, as opposed to a mailed advertisement; an e-mail arrives in a few seconds or minutes.
It lets the advertiser "push" the message to its audience, as opposed to a website that waits for customers to come in.
It is easy to track. An advertiser can track users via web bugs, bounce messages, unsubscribes, read receipts, click-throughs, etc. These can be used to measure open rates, positive or negative responses, and to correlate sales with marketing.
Advertisers can reach substantial numbers of e-mail subscribers who have opted in (consented) to receive e-mail communications on subjects of interest to them.
Over half of Internet users check or send e-mail on a typical day.
Specific types of interaction with messages can trigger other messages to be automatically delivered.
Highlights of the Project
The project was successfully completed within the allotted time. Every effort has been made to present the system in a user-friendly manner, so that interacting with it feels effortless to the user. All the disadvantages of the existing system have been overcome by the present "Email Flyer" system, which has been successfully implemented at the client's location. A trial run of the system has been made and is giving good results. The system has been developed with attractive dialogs, and the entire user interface is friendly and meets all the requirements laid down by the clients initially, so a user with minimal knowledge of computers can easily work with the system.
System Specification
The hardware and software requirements for the development phase of our project are:
Software Requirements :
OPERATING SYSTEM : WINDOWS XP
ENVIRONMENT : MICROSOFT .NET FRAMEWORK
FRONT END : ASP .NET
SERVER SIDE SCRIPTING : VB.NET
CLIENT SIDE SCRIPTING : JAVASCRIPT
BACKEND : MICROSOFT SQL SERVER 2000
BROWSER : INTERNET EXPLORER 6.0
Hardware Requirements :
PROCESSOR : PENTIUM IV
CLOCK SPEED : 500 MHZ
SYSTEM BUS : 32 BIT
RAM : 512 MB
HDD : 40 GB
MONITOR : SVGA COLOR
KEY BOARD : 108 KEYS
MOUSE : LOGITECH
FDD : 1.44 MB
 

Gesture Recognition System using Privacy

With the ever advancing technology, there is a growing need for systems that provide better human interaction. Human gestures are one such way in which users can interact with the system in a user-friendly, cost-effective and time efficient manner. This gesture recognition is used in a wide variety of applications like Gaming, Object/Motion Tracking, and System Application Control etc.
Our project demonstrates a few applications of gesture recognition, with the main emphasis on application control using gestures; one of these applications is mouse control using gestures. All these applications are designed to use the integrated webcam of any computer. The webcam captures live video frames, which are then processed to identify the corresponding gestures for the corresponding applications.
We use head gestures as the input. The recognized gestures are then mapped to corresponding actions. The accuracy of the system can also be improved by using webcams with better range and resolution.
Gesture Recognition System Using Privacy aims to demonstrate the use of gestures, by means of head movement, to control various system applications with a level of privacy. Our project provides a way for users to interact with the system easily, in a user-friendly manner, to control the mouse and thereby control a wide variety of other applications running on the system through mouse operations.
It also allows controlling a game application using gestures, since the game is in turn designed to be controlled by mouse movement. Thus even applications that are not designed to work with gestures directly can be made controllable through head gestures. In our system, the user only needs to move his or her head to perform all these operations.
Our project demonstrates a few applications of gesture recognition, with the main emphasis on application control using head gestures and privacy. Every user should have privacy while working. Users often use their laptops in public places and may access credit card numbers or bank account numbers, or fill in usernames and passwords; in such cases the user can fall victim to shoulder surfing.
Shoulder Surfing:
In computer security, shoulder surfing refers to using direct observation techniques, such as looking over someone's shoulder, to get information. Shoulder surfing is an effective way to get information in crowded places because it is relatively easy to stand next to someone and watch as they fill out a form, enter a PIN at an ATM, or use a calling card at a public pay phone. Shoulder surfing can also be done at long distance with the aid of binoculars or other vision-enhancing devices. So only the user wearing the customizable glasses will be able to view the activity on the monitor; other people who have not worn the customizable glasses will not be able to see the desktop. Thus, by integrating the customizable glasses and the monitor with the gesture-based recognition, the user can view the desktop environment and navigate in it without falling victim to shoulder surfing.
Create Privacy Monitor for Use of Individual User
Every user should have privacy while working. Users often use their laptops in public places and may access credit card numbers or bank account numbers, or fill in usernames and passwords; in such cases the user can fall victim to shoulder surfing. In computer security, shoulder surfing refers to using direct observation techniques, such as looking over someone's shoulder, to get information. Therefore only the user should be able to see the contents of the screen; others must not. When light leaves the first polarizer sheet the screen floods with what looks like uniform garbage light, and only when that light passes through another polarizer is the garbage light filtered out so that the screen becomes readable. A row of LED lights at the back of the monitor are the only lights in the monitor; the optical system then spreads this light evenly across the monitor.
1. The first sheet makes a nice even background for the light.
2. The next piece is the light guide plate. It is covered with dots; when light enters from the bottom edge it propagates down the plate by total internal reflection, hitting some of the dots, which makes the light rays emerge from the front.
3. A diffuser film is then placed on top, which eliminates the dot pattern from the light guide plate.
4. A prism film is placed next. The backlight emerges from the light guide plate at many angles, and this sheet increases the component perpendicular to the screen.
5. Finally, a last diffuser film is added so that the light coming from the single row of LEDs at the bottom is evenly distributed.
At the back and front of these sheets are the two polarizers, which are bonded tightly to the glass. The bottom polarizer creates polarized light, which will only pass through another polarizer with the same orientation. When the two polarizers are oriented perpendicular to each other and the liquid crystal layer rotates the light so that it can still pass through, the display is said to operate in normally white mode.
Normally Black mode
1. The light's polarization is not changed, so it cannot pass through the front polarizer.
2. The colors are produced by varying the intensity of the light.
3. The front polarizer sheet is peeled from the monitor screen and cut out.
There are two antiglare films, one on the front side and one on the back side of the polarizer film, so both antiglare films have to be removed. The antiglare film limits the focal length to less than a centimetre, so in order to make the focal length effectively infinite the antiglare film is removed. The trade-off is that some glare can be seen on the screen.
Connecting Web-Cam to Application.
In this step we configure the OpenCV environment by generating the OpenCV build with CMake. CMake produces the project and binary files for OpenCV so that it can be used from Visual Studio. We then write an OpenCV application and apply various OpenCV operations to images, such as histograms, RGB levels and grayscale conversion. Next we connect the OpenCV application to the webcam and receive video input from it. Since the webcam is an external one, we have to open it by its device index so that the external camera, not the internal one, is used.
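As a rough illustration (a minimal sketch assuming OpenCV is already built, and assuming the external webcam appears at device index 1, which is not stated in the original text), opening the camera and processing its frames might look like this:

// Minimal sketch: open the external webcam and show a grayscale preview.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(1);          // index 1 is assumed to be the external webcam; 0 is usually the built-in one
    if (!cap.isOpened()) return -1;   // camera could not be opened

    cv::Mat frame, gray;
    while (true) {
        cap >> frame;                 // grab a live video frame
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);  // example processing: grayscale conversion
        cv::imshow("webcam", gray);
        if (cv::waitKey(30) == 27) break;        // exit on Esc
    }
    return 0;
}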
Tracking Head Movement
In this phase of the methodology, we program an application that tracks the user's head gestures. We use various OpenCV algorithms to trace the user's skin colour. We maintain a pixel counter and a threshold for the pixel boundary, so that if the boundary is exceeded we can trigger an action. The action involves tracking the skin colour wherever it moves in the viewing area of the window via the CamShift algorithm. CamShift is an open-source algorithm in the OpenCV libraries which tracks a colour distribution using histogram back-projection and thresholding.
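The following is a hedged sketch of colour tracking with CamShift along the lines described above; the initial head region, the histogram parameters and the window name are illustrative assumptions rather than values taken from the project:

// Sketch: track a coloured (skin-toned) head region with CamShift.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cv::Mat frame, hsv, mask, backproj, hist;
    cv::Rect track_window(200, 150, 100, 100);   // assumed initial head position in the frame
    const int histSize = 16;
    float hranges[] = {0, 180};
    const float* ranges = hranges;
    bool initialized = false;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, CV_BGR2HSV);
        // keep only reasonably saturated, bright pixels (rough skin mask)
        cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);

        std::vector<cv::Mat> planes;
        cv::split(hsv, planes);
        cv::Mat hue = planes[0];

        if (!initialized) {
            // build a hue histogram of the initial head region
            cv::Mat roiHue = hue(track_window), roiMask = mask(track_window);
            cv::calcHist(&roiHue, 1, 0, roiMask, hist, 1, &histSize, &ranges);
            cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
            initialized = true;
        }

        // back-project the histogram and let CamShift follow the colour blob
        cv::calcBackProject(&hue, 1, 0, hist, backproj, &ranges);
        backproj &= mask;
        cv::RotatedRect box = cv::CamShift(backproj, track_window,
            cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));

        cv::ellipse(frame, box, cv::Scalar(0, 0, 255), 2);  // draw the tracked head
        cv::imshow("head tracking", frame);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}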
Custom Monitor and Head Gesture Integration
We have programmed an application in VC++ that links the OpenCV application to the mouse pointer, so that changes in the tracked image pixels result in movement of the mouse; this can be used to play a simple Windows game. We have integrated both the OpenCV application and the privacy monitor to complete the architecture of the system, making it a head-gesture recognition system that also preserves the privacy of the user's working environment.
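A minimal sketch of how a tracked head position could drive the Windows mouse pointer is shown below; it uses the standard Win32 calls GetSystemMetrics, SetCursorPos and mouse_event, but the function names and the simple linear mapping are illustrative, not the project's actual code:

// Sketch: map the tracked head centre (e.g. the RotatedRect from CamShift) to the cursor.
#define NOMINMAX                       // avoid min/max macro clashes with OpenCV
#include <windows.h>
#include <opencv2/opencv.hpp>

void moveCursorFromHead(const cv::RotatedRect& head, const cv::Size& frameSize) {
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);

    // scale the head centre from camera coordinates to screen coordinates
    int x = static_cast<int>(head.center.x * screenW / frameSize.width);
    int y = static_cast<int>(head.center.y * screenH / frameSize.height);
    SetCursorPos(x, y);                // Win32 call that moves the mouse pointer
}

// A click could be simulated in a similar, equally illustrative way,
// for example when the tracked region stays still for a moment:
void leftClick() {
    mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
    mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
}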
Highlights of the Project
This project, Application Control using Gestures, is developed to assist a wide variety of users in controlling system applications by means of simple head gestures. It is very useful in a number of real-time scenarios. For instance, a person giving a PowerPoint presentation can use this system to browse through the slides. It is also useful for users with motor neuron disabilities who cannot perform click operations. The project can help eliminate shoulder surfing, which is a security threat, and the idea can be implemented in banks, ATMs and similar settings to add a level of security. It can also be implemented in laptops, LCD displays, smartphones and other devices.
The project can also help reduce piracy of movies in theatres. The system can be incorporated in high-security areas where data is highly confidential. It has great marketing potential in the field of games as well. The software used here is open source, so a lot of research can be done in this area. The hardware costs are also low, so from a marketing point of view it is beneficial.
Despite its many uses, Application Control using Gestures has its own limitations. The monitor glare can be reduced by using a different kind of polarizer and different materials. At the same time, it has a lot of future scope. The system can be improved in terms of the complexity of recognized gestures, accuracy and performance, and can also be extended to assist in complicated real-time applications such as sign-to-speech conversion for people with speech impairments.
System Specification
The hardware and software requirements for the development phase of our project are:
Software Requirements :
Operating System Requirements : Windows XP/Vista/Linux.
Packages : OpenCV 2.3, Microsoft Visual Studio 2010, .NET Framework 4.0, CMake 2.8.
Languages Used : Visual C++, OpenCV libraries.
Drivers: Web Cam Drivers.
Hardware Requirements :
Customised LCD monitor.
Customized wearable Spectacles.
Pentium processor (preferable).
Hard disk: 40 GB (minimum).
Graphics card support needed (NVIDIA 8600GS used).
RAM: 256 MB (minimum).

Smart Card Reader Using Mifare Technology

Contactless smart card technology is used in applications that need to protect personal information or deliver secure transactions. There are an increasing number of contactless smart card technology implementations that capitalize on its ability to enable fast, convenient transactions. Current and emerging applications using contactless smart card technology include transit fare payment cards, government and corporate identification cards, documents such as electronic passports and visas, and contactless financial payment cards.
The contactless device includes a smart card secure microcontroller, or equivalent intelligence, and internal memory and has the unique ability to securely manage, store and provide access to data on the card, perform complex functions (for example, encryption or other security functions) and interact intelligently via RF with a contactless reader.
Applications that require the highest degree of information and communications security (for example, payment applications, government IDs, electronic passports) use contactless smart card technology based on an international standard that limits the ability to read the contactless device to approximately 4 inches (10 centimeters). Applications that need longer reading distances can use other forms of contactless technologies that can be read at longer distances.
With a substantial market share according to Derrick Robinson at IMS Research, Philips' MIFARE portfolio is the established industry benchmark for contactless and dual-interface smart card schemes. Operating at 13.56 MHz and in full accordance with ISO 14443 Type A – the international standard for contactless smart cards and readers – the MIFARE platform consists of chip solutions for purely contactless and dual-interface smart cards and reader devices.
The diversity of the MIFARE product portfolio covers low- and high-end chip solutions, providing smart identification technologies suitable for use in a wide array of design scenarios. MIFARE® is a registered trademark of Koninklijke Philips Electronics N.V.
Mifare Security Overview
There has been a lot of discussion recently over the security of the Mifare card, particularly because of the extended business applications, such as an ePurse, being proposed for this platform. Expressions such as "low security" are thrown around in a way that could confuse or even misrepresent the platform.
In any scheme it is the overall security that matters, not the individual components. It is also fundamental to ensure that the components are used in the right way; in most high-visibility failures it has been a protocol or procedure failure that resulted in the eventual disaster. However, memory cards such as Mifare do have restricted security functionality.
The Mifare chip technology is based on a simple contactless memory device with discrete logic to provide some security functionality across the air gap with the reader (i.e. at the radio frequency level). This technology is proprietary to Philips Semiconductors and requires their IPR to be available in both the smart card chip and the Mifare reader. In practice this means that both the smart card and the reader need to have a Philips chip (or a Mifare-licensed chip, e.g. from Infineon) embedded within them.
The original Mifare 1K memory card was introduced in 1994 and there are now six chips in the Mifare range from Philips: Mifare Classic (1 Kbyte of EEPROM non-volatile memory), Mifare 4K (4 Kbytes of EEPROM), Mifare DESFire (4 Kbytes of EEPROM), Mifare Ultralight (64 bytes of EEPROM), Mifare ProX (1 Kbyte or 4 Kbytes of Mifare emulation in a microcontroller chip, with a total chip EEPROM of 16 Kbytes including the Mifare emulation memory) and SmartMX (a Mifare ProX upgrade with 72 Kbytes of EEPROM).
The Mifare ProX and SmartMX are microcontroller-based chips and provide the Mifare functionality as emulation in the chip. The discussion that follows relates to the Classic 1K Mifare, but the arguments hold for most other memory cards.
Mifare Card Operation
The Mifare 1K card has its 1 Kbyte memory arranged as 16 sectors, each with 4 blocks of 16 bytes. The last block in each sector stores two keys, A and B, which are used to access the other data blocks (depending on the access conditions also set in this block). The Mifare reader interacts with the card as follows:
1) Select card (ISO 14443 allows multiple cards in its field)
2) Log-in to a sector (by providing key A or key B) and
3) Read, Write, Increment, or Decrement a block (must conform to the access conditions).
The Increment and Decrement operations allow the block to be treated as an electronic purse. It is important to note that the cryptographic interchange takes place between the reader and the card and more precisely between the Mifare chip in the reader and the Mifare chip in the card.
The terminal has to present the appropriate key to the reader and normally this key would be derived from a Master key stored in a Secure Access Module (SAM) at the terminal. The card ID and parameters, which are unique to each card, can act as the derivation factor. This means that each card is using a different key set to protect a particular sector. Breaking an individual card will not reveal the Master keys.
The Log-in process referred to above implements a mutual authentication process (a challenge/response mechanism) which then sets up an encrypted channel between the card and the reader using Philips proprietary Crypto-1 algorithm. These security services operate at the RF (Radio Frequency) level and cannot provide any cryptographic audit trail. In essence this means that you must trust the terminal but more particularly you have no evidence if it misbehaves.
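To make the sequence concrete, here is a hedged sketch of the reader-side flow (select, log in to a sector, read a block) with a key diversified from the card ID. The functions selectCard, loginSector, readBlock and deriveKey are hypothetical stand-ins, not part of any real Mifare SDK, and the key derivation shown is a placeholder rather than real cryptography:

// Sketch of the select / login / read sequence against a hypothetical reader API.
#include <array>
#include <cstdint>
#include <vector>

using Key = std::array<uint8_t, 6>;     // Mifare Classic keys are 6 bytes long

// --- stand-ins for a real contactless reader SDK -------------------------------
bool selectCard(std::vector<uint8_t>& uid) { uid = {0x04, 0xA2, 0x3B, 0x7C}; return true; }
bool loginSector(int /*sector*/, const Key& /*key*/, bool /*useKeyA*/) { return true; }
bool readBlock(int /*block*/, std::array<uint8_t, 16>& out) { out.fill(0); return true; }

// Placeholder key diversification: mixes the card UID into the master key so that
// breaking one card does not reveal the master key kept in the SAM.
Key deriveKey(const Key& master, const std::vector<uint8_t>& uid) {
    Key k = master;
    for (size_t i = 0; i < uid.size(); ++i) k[i % k.size()] ^= uid[i];
    return k;
}

bool readPurseBlock() {
    std::vector<uint8_t> uid;
    if (!selectCard(uid)) return false;                        // 1) select the card (anti-collision)

    Key master = {0, 0, 0, 0, 0, 0};                           // in reality held inside the SAM
    Key keyA = deriveKey(master, uid);
    if (!loginSector(1, keyA, /*useKeyA=*/true)) return false; // 2) authenticate to sector 1

    std::array<uint8_t, 16> block;
    return readBlock(4, block);                                // 3) read block 4 of that sector
}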
Secure Messaging
In a transaction-based scheme it is standard practice to protect the messages with some Cryptographic Check Value (CCV) or digital signature. This ensures the authenticity of the source of the message and that the message has been unchanged in transit from source to destination. It requires that the smart card is able to both create and check such CCVs or digital signatures. Without such security services being applied it is not easy to resolve disputes, and the scheme is vulnerable to a wide range of attacks. Because the Mifare card has no CPU, it is not capable of creating or checking such cryptographic messages.
Consider the operation of a CPU Card. In this case the transactions operate between the SAM (Secure Access Module) and the card. Cryptographic protection operates between these end points. Consider for example the case where you want to increment the value of a purse stored on the card. The card is set up so that the command to increment the purse has a CCV attached, the chip checks this CCV before it effects the value load process. This Cryptographic CCV is created by the Secure Access Module (SAM) attached to the terminal.
Nowhere in this scenario are the cryptographic keys available in plain text. Even if the terminal is attacked with some Trojan software, the transaction records can subsequently be checked for authenticity. It is not possible for the Trojan to fool this process. In addition, sequencing controls can be incorporated in the messages and checked by the CPU to stop replays.
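The idea of a CCV-protected command can be sketched as follows; the mac() function below is a toy placeholder only (a real SAM would use something like a 3DES retail MAC or a CMAC), and the message layout is assumed purely for illustration:

// Sketch: verify a CCV and a sequence number before applying a value-load command.
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

Bytes mac(const Bytes& key, const Bytes& msg) {        // placeholder MAC, NOT real cryptography
    Bytes out(8, 0);
    for (size_t i = 0; i < msg.size(); ++i) out[i % 8] ^= msg[i] ^ key[i % key.size()];
    return out;
}

struct LoadCommand {
    uint8_t block;       // purse block to increment
    uint32_t amount;     // value to load
    uint32_t sequence;   // sequence number to prevent replays
    Bytes ccv;           // CCV computed by the SAM at the terminal
};

// Card/SAM side: re-compute the CCV and only apply the increment if it matches.
bool applyLoad(const Bytes& sessionKey, const LoadCommand& cmd, uint32_t expectedSeq) {
    Bytes msg = {cmd.block,
                 uint8_t(cmd.amount >> 24), uint8_t(cmd.amount >> 16),
                 uint8_t(cmd.amount >> 8),  uint8_t(cmd.amount),
                 uint8_t(cmd.sequence >> 24), uint8_t(cmd.sequence >> 16),
                 uint8_t(cmd.sequence >> 8),  uint8_t(cmd.sequence)};
    if (cmd.sequence != expectedSeq) return false;      // replay protection
    return mac(sessionKey, msg) == cmd.ccv;             // authenticity and integrity check
}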
MIFARE Advantages
• Open architecture platform - convenient, secure and fast
• Compatibility with all current and future products
• Broadest product portfolio available
• microcontrollers and hardwired logic ICs available
• mixed installations possible
• Broadest offer of card and reader suppliers
• Operable in harsh environmental conditions
• maintenance-free, reliable and proven technology
• Established and running infrastructure around the world
• Proven, reliable and robust technology
• First choice for fraud-proof, contactless payment transactions
Applications:
• Employee access card with secured ID and the potential to employ biometrics to protect physical access to facilities.
• Transportation: Drivers' Licenses.
• Mass Transit Fare Collection Systems.
• Electronic Toll Collection Systems.
• Retail and Loyalty / Consumer reward/redemption tracking on a smart loyalty card, that is marketed to specific consumer profiles and linked to one or more specific retailers serving that profile set.
• Health Card / Consumer health card containing insurance eligibility and emergency medical data.
• University Identification / All-purpose student ID card (a.k.a. campus card), containing a variety of applications such as an electronic purse (for vending and laundry machines), library card, and meal card.
Highlights of the Project
The smart card reader has many uses, and one vital application is "the prevention of drug abuse". The smart card reader employs MIFARE technology to communicate with the card; Mifare refers to the contactless technology. The main objective of the project is to show how the smart card is used to read and write data, and how data is transmitted and displayed or entered by means of the computer. For the read operation the microcontroller is interfaced to the computer and to the Mifare reader; the data from the card is first transmitted to the microcontroller and then displayed on the computer. Thus the read operation on the card is performed.
The Mifare reader is also used for the write operation. For writing, all the data to be written is transmitted directly from the computer to the card; for this the reader is connected directly to the computer, and the data received from the computer is written onto the card. Thus both read and write operations are successfully accomplished by means of the smart card. The smart card is very cost effective and well suited to the modern scenario; these cards are affordable to the employee or patient, depending on the application. A prescription stored on the smart card can thus be accessed and the dosage constantly monitored, which makes this a boon for the pharmaceutical industry.
System Specification
The hardware and software requirements for the development phase of our project are:
Software Requirements :
Platform : Windows 98/2000/NT
Front End : Java JDK 1.3, Java Applets, Tomcat 1.4
Tool : J2ME Wireless Tool Kit 1.0.4
Back End : Microsoft Access
Hardware Requirements :
Processor : Pentium IV 2.5 Ghz
RAM : 256 MB
HDD : 40 GB
FDD : 1.44 MB
Keyboard : 105 Keys
Monitor : 14” Soft White color SVG

Load Shedding In Mobile Systems Using Mobiqual

In location-based mobile continual query (CQ) systems, the two key measures of quality-of-service (QoS) are freshness and accuracy. To achieve freshness, the CQ server must perform frequent query re-evaluations. To attain accuracy, the CQ server must receive and process frequent position updates from the mobile nodes. However, it is often difficult to obtain fresh and accurate CQ results simultaneously due to limited computing and communication resources and fast-changing load conditions caused by continuous mobile node movement.
Hence, a key challenge for a mobile CQ system is to achieve the highest possible quality of the CQ results, in both freshness and accuracy, with the currently available resources. In this paper we formulate this problem as a load shedding one and develop MobiQual, a QoS-aware approach to performing both update load shedding and query load shedding.
The design of MobiQual highlights three important features. First, differentiated load shedding: we apply different amounts of query load shedding and update load shedding to different groups of queries and mobile nodes. Second, per-query QoS specifications: individualized QoS specifications are used to maximize the overall freshness and accuracy of the query results. Third, low-cost adaptation: MobiQual dynamically adapts to changing load conditions and available resources with minimal overhead.
We conduct a set of comprehensive experiments to evaluate the effectiveness of MobiQual. The results show that, through a careful combination of update and query load shedding, the MobiQual approach leads to much higher freshness and accuracy in the query results in all cases, compared to existing approaches that lack the QoS-awareness properties of MobiQual as well as to solutions that perform query-only or update-only load shedding.
To obtain fresher query results, the CQ server must re-evaluate the continual queries more frequently, requiring more computing resources. Similarly, to attain more accurate query results, the CQ server must receive and process position updates from the mobile nodes at a higher rate, demanding communication as well as computing resources.
However, it is almost impossible for a mobile CQ system to achieve 100% fresh and accurate results due to the continuously changing positions of mobile nodes. A key challenge therefore is: how do we achieve the highest possible quality of the query results, in both freshness and accuracy, in the presence of changing availability of resources and changing workloads of location updates and location queries?
Modules
Mobile Node
A client is an application or system that accesses a remote service on another computer system known as a server by way of a network. The term was first applied to devices that were not capable of running their own stand-alone programs, but could interact with remote computers via a network. These dumb terminals were clients of the time-sharing mainframe computer. In the architecture used by our system, the client is represented by a mobile node enabled to send multiple messages of variable character length irrespective of any time or message length constraint. The number of mobile nodes is also variable depending upon the traffic in the network.
Server Node
In computing, a server is any combination of hardware or software designed to provide services to clients. The server performs some computational task on behalf of clients. The clients either run on the same computer or connect through the network. When used alone the term typically refers to a computer which may be running a server operating system but is commonly used to refer to any software or dedicated hardware capable of providing services. In our proposed system the server used is a CQ server which continues to execute over time. The mobile node acting as a client sends streams of data to the CQ server. When new data enters the stream the results of the continuous query are updated.
Network Model
Generally, the channel quality is time varying. For the server-AP association decision, a user performs multiple samplings of the channel quality, and only the signal attenuation that results from long-term channel condition changes is utilized. Our load model can accommodate various additive load definitions, such as the number of users associated with an AP, and it can also deal with multiplicative user load contributions.
Load Shedding in Mobile CQ Systems
In a mobile CQ system, the CQ server receives position updates from the mobile nodes through a set of base stations and periodically evaluates the installed continual queries (such as continual range or nearest-neighbor queries). Processing position updates is costly because a large number of them must be processed by the server, for instance to maintain an index. When the position update rates are high, the volume of position updates is huge and the server may randomly drop some of the updates if resources are limited, which can cause unbounded inaccuracy in the query results. In MobiQual, we use accuracy-conscious update load shedding to regulate the load incurred on the CQ server for position update processing by dynamically configuring the inaccuracy thresholds at the mobile nodes.
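As an illustration of the inaccuracy-threshold idea (a simplified sketch, not MobiQual's actual implementation), a mobile node might suppress position updates as long as it stays within its assigned threshold:

// Sketch: threshold-based update load shedding on the mobile node side.
#include <cmath>

struct Position { double x, y; };

class MobileNode {
public:
    explicit MobileNode(double threshold) : threshold_(threshold) {}

    // Called by the server during adaptation: a larger threshold sheds more update load.
    void setInaccuracyThreshold(double t) { threshold_ = t; }

    // Returns true only if an update must be sent for the current position.
    bool shouldSendUpdate(const Position& current) {
        double dx = current.x - lastReported_.x;
        double dy = current.y - lastReported_.y;
        if (std::sqrt(dx * dx + dy * dy) > threshold_) {
            lastReported_ = current;   // the server's view is refreshed
            return true;
        }
        return false;                  // shed this update; error stays bounded by threshold_
    }

private:
    double threshold_;
    Position lastReported_{0.0, 0.0};
};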
Algorithm used
Congestion Load Minimization
The existing system minimizes the load of the congested AP, but it does not necessarily balance the load. In this section, we consider a min-max load balancing approach that not only minimizes the network congestion load but also balances the load of the non-congested APs. As mentioned earlier, the proposed approach can be used for obtaining various max-min fairness objectives by associating each user with appropriate load contributions. Unfortunately, min-max load balancing is an NP-hard problem and it is hard to find even an approximate solution. In this paper we solve a variant of the min-max problem, termed the min-max priority-load balancing problem, whose optimal solution can be found in polynomial time.
System Architecture
The system architecture diagram is a simple diagrammatic representation of the functions performed in the load shedding process, and it is used to understand the entire process, as shown in the figure.
Functional Specifications
Reduction:
It includes the algorithm for grouping the queries into k clusters and the algorithm for partitioning the geographical space of interest into l regions. The query groups are incrementally updated when queries are installed or removed from the system. The space partitioning is recomputed prior to the periodic adaptation.
Aggregation:
It involves computing aggregate QoS functions for each query group and region. The aggregated QoS functions for each query group represent the freshness aspect of the quality. The aggregated QoS functions for each region represent the accuracy aspect of the quality. We argue that the separation of these two aspects is essential to the development of a fast algorithm for configuring the re-evaluation periods and the inaccuracy thresholds to perform adaptation. QoS aggregation is repeated only when there is a change in the query grouping or the space partitioning.
Adaptation:
It is performed periodically to determine the throttle fraction, which defines the amount of load that can be retained relative to the load of providing perfect quality, as well as the setting of the re-evaluation periods and the setting of the inaccuracy thresholds.
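A much-simplified sketch of what the adaptation step produces is given below; it scales all re-evaluation periods and thresholds uniformly by the throttle fraction, whereas MobiQual itself uses the aggregated QoS functions to set them per query group and per region, so the code is illustrative only:

// Sketch: adaptation outputs = throttle fraction, re-evaluation periods, inaccuracy thresholds.
#include <algorithm>
#include <vector>

struct AdaptationResult {
    double throttleFraction;              // load retained relative to the "perfect quality" load
    std::vector<double> reevalPeriods;    // one period per query group (seconds)
    std::vector<double> thresholds;       // one inaccuracy threshold per region (metres)
};

// Assumes perfectQualityLoad > 0; a uniform scaling, not MobiQual's QoS-driven optimization.
AdaptationResult adapt(double perfectQualityLoad, double availableCapacity,
                       const std::vector<double>& basePeriods,
                       const std::vector<double>& baseThresholds) {
    AdaptationResult r;
    r.throttleFraction = std::min(1.0, availableCapacity / perfectQualityLoad);

    // Shed query load: longer re-evaluation periods when less load can be retained.
    for (double p : basePeriods)
        r.reevalPeriods.push_back(p / r.throttleFraction);

    // Shed update load: wider inaccuracy thresholds when less load can be retained.
    for (double t : baseThresholds)
        r.thresholds.push_back(t / r.throttleFraction);

    return r;
}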
Activity Diagram
Activity diagrams are loosely defined diagrams to show workflow of stepwise activities and actions with support for choice, iteration and concurrency. UML activity diagrams can be used to describe the business and operational step-by-step workflow of the components in a system. UML activity diagrams could potentially model the internal logic of a complex operation. In many ways UML activity diagrams are the object-oriented equivalent of flow charts and data flow diagrams (DFDs) from structural development. The working of our proposed system is shown in the activity diagram in the figure.
Highlights of the Project
In this paper we have presented MobiQual, a load shedding system aimed at providing high-quality query results in mobile continual query systems. MobiQual has three unique properties. First, it uses per-query QoS specifications that characterize the tolerance of queries to staleness and inaccuracy in the query results, in order to maximize the overall QoS of the system. Second, it effectively combines query load shedding and update load shedding within the same framework, through the use of the differentiated load shedding concept. Finally, the load shedding mechanisms used by MobiQual are lightweight, enabling quick adaptation to changes in the workload in terms of the number of queries, the number of mobile nodes, or their changing movement patterns.
Through a detailed experimental study, we have shown that the MobiQual system significantly outperforms approaches that are based on query-only or update-only load shedding, as well as approaches that do combined query and update load shedding but lack the differentiated load shedding elements of the MobiQual solution, in particular the query grouping and space partitioning mechanisms. There are several interesting issues for future work. In this paper, we only considered range queries. However, MobiQual can be applied to kNN queries as well. There are various query processing approaches where kNN queries are first approximated by circular regions based on upper bounds on the kth distances.
MobiQual should be able to dynamically adjust the values of the l (number of shedding regions) and k (number of query groups) parameters as the workload changes. An overestimated value for these parameters means lost opportunity in terms of minimizing the cost of adaptation, whereas an underestimated value means lost opportunity in terms of maximizing the overall QoS. In this paper we have shown that the time it takes to run the adaptation step is relatively small compared to the adaptation period in most practical scenarios. This means relatively aggressive values for l and k could be used to optimize for QoS without worrying about the cost of adaptation.
System Specification
The hardware and software requirements for the development phase of our project are:
Software Requirements :
OPERATING SYSTEM : Windows 7 / XP Professional
FRONT END : Visual Studio 2010
BACK END : SQL SERVER 2005
Hardware Requirements :
PROCESSOR : PENTIUM IV 2.6 GHz, Intel Core 2 Duo.
RAM : 512 MB DD RAM
HARD DISK : 40 GB
KEYBOARD : STANDARD 102 KEY
MONITOR : 15” COLOUR
CD DRIVE : LG 52x

Face Detection Using Template Matching

Face detection is concerned with finding whether or not there are any faces in a given image (usually in gray scale) and, if present, return the image location and content of each face. This is the first step of any fully automatic system that analyzes the information contained in faces (e.g., identity, gender, expression, age, race and pose). While earlier work dealt mainly with upright frontal faces, several systems have been developed that are able to detect faces fairly accurately with in-plane or out-of-plane rotations in real time. Although a face detection module is typically designed to deal with single images, its performance can be further improved if video stream is available.
Face detection is the first stage of an automatic face recognition system, since a face has to be located in the input image before it is recognized. A definition of face detection could be: given an image, detect all faces in it (if any) and locate their exact positions and sizes. Usually, face detection is a two-step procedure: first the whole image is examined to find regions that are identified as "face". After the rough position and size of a face are estimated, a localization procedure follows which provides a more accurate estimation of the exact position and scale of the face. So while face detection is mostly concerned with roughly finding all the faces in large, complex images, which include many faces and much clutter, localization emphasizes spatial accuracy, usually achieved by accurate detection of facial features.
Face-detection algorithms focus on the detection of frontal human faces, whereas newer algorithms attempt to solve the more general and difficult problem of multi-view face detection. That is, the detection of faces that are either rotated along the axis from the face to the observer (in-plane rotation), or rotated along the vertical or left-right axis (out-of-plane rotation), or both. The newer algorithms take into account variations in the image or video by factors such as face appearance, lighting, and pose.
Techniques
Many algorithms implement the face-detection task as a binary pattern-classification task. That is, the content of a given part of an image is transformed into features, after which a classifier trained on example faces decides whether that particular region of the image is a face, or not. Often, a window-sliding technique is employed. That is, the classifier is used to classify the (usually square or rectangular) portions of an image, at all locations and scales, as either faces or non-faces (background pattern). Images with a plain or a static background are easy to process. Remove the background and only the faces will be left, assuming the image only contains a frontal face. Using skin color to find face segments is a vulnerable technique. The database may not contain all the skin colors possible.
Lighting can also affect the results. Non-animate objects with the same color as skin can be picked up since the technique uses color segmentation. The advantages are the lack of restriction to orientation or size of faces, and the fact that a good algorithm can handle complex backgrounds. Faces are usually moving in real-time videos, so calculating the moving area will give the face segment. However, other objects in the video can also be moving, which would affect the results. A specific type of motion on faces is blinking; detecting a blinking pattern in an image sequence can reveal the presence of a face. Eyes usually blink together and are symmetrically positioned, which eliminates similar motions in the video. Each image is subtracted from the previous image.
The difference image will show boundaries of moved pixels. If the eyes happen to be blinking, there will be a small boundary within the face. A face model can contain the appearance, shape, and motion of faces. There are several shapes of faces. Some common ones are oval, rectangle, round, square, heart, and triangle. Motions include, but not limited to, blinking, raised eyebrows, flared nostrils, wrinkled forehead, and opened mouth. The face models will not be able to represent any person making any expression, but the technique does result in an acceptable degree of accuracy.
The models are passed over the image to find faces, however this technique works better with face tracking. Once the face is detected, the model is laid over the face and the system is able to track face movements. A method for human face detection from color videos or images is to combine various methods of detecting color, shape, and texture. First, use a skin color model to single out objects of that color. Next, use face models to eliminate false detections from the color models and to extract facial features such as eyes, nose, and mouth.
Applications
Face detection is used in biometrics, often as a part of (or together with) a facial recognition system. It is also used in video surveillance, human computer interface and image database management. Some recent digital cameras use face detection for autofocus. Face detection is also useful for selecting regions of interest in photo slideshows that use a pan-and-scale Ken Burns effect. Face detection is gaining the interest of marketers. A webcam can be integrated into a television and detect any face that walks by. The system then calculates the race, gender, and age range of the face. Once the information is collected, a series of advertisements can be played that is specific toward the detected race/gender/age. Face detection is also being researched in the area of energy conservation. Televisions and computers can save energy by reducing the brightness. People tend to watch TV while doing other tasks and not focused 100% on the screen. The TV brightness stays the same level unless the user lowers it manually. 
The system can recognize the face direction of the TV user. When the user is not looking at the screen, the TV brightness is lowered. When the face returns to the screen, the brightness is increased.
Template Matching
Here, we present an original template based on edge direction. It has been noticed that the contour of a human head can be approximated by an ellipse. This accords with our visual perception and has been verified by numerous experiments. The existing methods have not sufficiently used the global information of face images in which edge direction is a crucial part, so we present a deformable template based on the edge information to match the face contour.
The face contour is of course not a perfect ellipse. To achieve good performance, the template must tolerate some deviations.
In this paper, an elliptical ring is used as the template, as illustrated in Fig. 2.1.5. Fig. 2.1.5(a) is a normal upright face; 2.1.5(b) is the binary image after edge linking. In 2.1.5(b), we can note that the external contour cannot be represented by a single ellipse no matter how the parameters are adjusted. However, if we use an elliptical ring to represent the contour, as shown in Fig. 2.1.5(c), almost all the edge points on the contour can be included. The other important advantage of this template is that we can choose a relatively big step in matching so as to reduce the computational cost.
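A hedged sketch of how an elliptical-ring template could be scored against a binary edge image is given below; the parameter names and the coverage score are illustrative choices, not the paper's exact formulation (it assumes the ring thickness is smaller than the semi-axes):

// Sketch: score an elliptical ring template against a binary edge map.
#include <opencv2/opencv.hpp>

// Normalized ellipse equation value: <1 inside, =1 on, >1 outside the ellipse.
static double ellipseValue(double x, double y, double cx, double cy, double a, double b) {
    double dx = (x - cx) / a, dy = (y - cy) / b;
    return dx * dx + dy * dy;
}

// Fraction of ring pixels that coincide with edge pixels: a crude match score.
double ringMatchScore(const cv::Mat& edges,              // binary edge image (CV_8U, 0 or 255)
                      double cx, double cy,              // candidate centre
                      double a, double b,                // outer semi-axes of the ring
                      double thickness) {                // ring thickness in pixels
    int onEdges = 0, ringPixels = 0;
    for (int y = 0; y < edges.rows; ++y) {
        for (int x = 0; x < edges.cols; ++x) {
            double outer = ellipseValue(x, y, cx, cy, a, b);
            double inner = ellipseValue(x, y, cx, cy, a - thickness, b - thickness);
            if (outer <= 1.0 && inner >= 1.0) {          // point lies inside the elliptical ring
                ++ringPixels;
                if (edges.at<uchar>(y, x) > 0) ++onEdges;
            }
        }
    }
    return ringPixels ? double(onEdges) / ringPixels : 0.0;
}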
Proposed Method for Face Detection
We follow a few steps to achieve our goal. First of all we take an input image which contains a single face. To the given input image we apply the Sobel operator for detecting edges. Then we apply a threshold value to binarize the image. The edge we get after applying the Sobel operator is thick, so we apply a thinning algorithm to thin the edge, which is discussed later. After thinning we try to eliminate the noise present in the image and apply edge linking to the nearest points. Lastly, we apply template matching to extract the face, which is of elliptical shape, from the image. The steps are discussed in detail below.
Step 1 :
We take as input an image file in PGM format. Then we allocate memory dynamically for a two-dimensional array and copy the original image into it pixel by pixel.
Step 2 :
Application of the Sobel operator for detecting edges in the image. The Sobel operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computation. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high-frequency variations in the image.
In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible change from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation.
Mathematically, the gradient of a two-variable function (here the image intensity function) is at each image point a 2D vector with the components given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction. This implies that the result of the Sobel operator at an image point which is in a region of constant image intensity is a zero vector and at a point on an edge is a vector which points across the edge, from brighter to darker values.
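A minimal sketch of the Sobel step on the grayscale pixel array (plain C++, no library calls, matching the PGM-array pipeline described above) could look like this:

// Sketch: gradient magnitude via the 3x3 Sobel kernels Gx and Gy.
#include <cmath>
#include <vector>

std::vector<std::vector<int>> sobel(const std::vector<std::vector<int>>& img) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    int h = img.size(), w = img[0].size();
    std::vector<std::vector<int>> out(h, std::vector<int>(w, 0));

    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int sx = 0, sy = 0;
            for (int i = -1; i <= 1; ++i)
                for (int j = -1; j <= 1; ++j) {
                    sx += gx[i + 1][j + 1] * img[y + i][x + j];   // horizontal derivative
                    sy += gy[i + 1][j + 1] * img[y + i][x + j];   // vertical derivative
                }
            out[y][x] = static_cast<int>(std::sqrt(double(sx * sx + sy * sy)));  // gradient magnitude
        }
    }
    return out;
}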
Thresholding
Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value (assuming an object to be brighter than the background) and as "background" pixels otherwise. This convention is known as threshold above. Variants include threshold below, which is the opposite of threshold above; threshold inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the opposite of threshold inside. Typically, an object pixel is given a value of "1" while a background pixel is given a value of "0". Finally, a binary image is created by coloring each pixel white or black, depending on the pixel's label. We have taken the threshold value as 55. Above this value we make all pixel values 255, and below the threshold value we make all pixel values 0.
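The binarization step itself is then a one-pass comparison against the threshold of 55 mentioned above, for example:

// Sketch: binarize the gradient image in place (pixels above the threshold become 255).
#include <vector>

void binarize(std::vector<std::vector<int>>& img, int threshold = 55) {
    for (auto& row : img)
        for (auto& px : row)
            px = (px > threshold) ? 255 : 0;
}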
Highlights of the Project
We have demonstrated the effectiveness of a new face detection algorithm on images with simple backgrounds. The algorithm is able to correctly detect all faces in such images. We are planning to test the method over a larger set of images, including images that do not contain faces. Such tests should help detect weak points of the algorithm and consequently give directions for upgrading it. We intend to extend the algorithm to multi-face detection; for this, first the number of faces should be known before detection, and second, we assume that faces do not overlap each other in the images.
System Specification
The hardware and software requirements for the development phase of our project are:
Software Requirements :
OPERATING SYSTEM : Windows 7 / XP Professional
FRONT END : Visual Studio 2010
BACK END : SQL SERVER 2005
Hardware Requirements :
PROCESSOR : PENTIUM IV 2.6 GHz, Intel Core 2 Duo.
RAM : 512 MB DD RAM
HARD DISK : 40 GB
KEYBOARD : STANDARD 102 KEY
MONITOR : 15” COLOUR
CD DRIVE : LG 52x

Continual Adaptation of Acoustic Models for Domain-Specific Speech Recognition

The objective: Advances in automatic speech understanding bring a new paradigm of natural interaction with computers. The Web-Accessible Multi-Modal Interface (WAMI) system developed by MIT provides a speech recognition service to a range of lightweight applications for Web browsers and cell phones. However, WAMI currently has two problems. First, to improve performance, it requires continual human intervention through expert tuning--an impractical endeavor for a large shared speech recognition system serving many applications. Second, WAMI is limited by its global set of models, suboptimal for its variety of unrelated applications.
Methods/Materials
In this research I developed a method to automatically adapt acoustic models and improve performance. The system automatically produces a training set from the utterances recognized with high confidence in the application context. I implemented this adaptive system and tested its performance using a data set of 106,663 utterances collected over one month from a voice-controlled game. To solve the second problem, I also extended the WAMI system to create separate models for each application.
Results
The utterance error rate decreased by 13.8% when training with an adaptation set of 32,500 automatically selected utterances, and the trend suggests that accuracy will continue to improve with more usage. The system can now adapt to domain-specific features such as specific vocabularies, user demographics, and recording conditions. It also allows recognition domains to be defined based on any criteria, including gender, age group, or geographic location.
Conclusions/Discussion
This research has enabled the WAMI system to automatically learn from its users and reduce its error rate. The extended WAMI can create customized models to optimize performance for each application and user group. These improvements to WAMI bring it one step closer towards being an "organic," automatically-learning system.
This project extended MIT's speech recognition system to make it learn on-the-fly as more people use it. The system serves many Web and mobile applications simultaneously. My work brings it closer to being an "organic" and self-learning system. 
 

Audio Manager

MP3 is an audio file format which is encoded from an audio CD or other audio formats such as WAV, RAM and so on. MP3 is the most popular format for music in the present world.
Nowadays MP3 players are widely available, making MP3 the most convenient form of audio.
MP3 files come in different qualities, which are decided by the bit-rate of the file: the greater the bit-rate, the bigger the file.
MP3 files have the great advantage of being able to store various information about the audio file, namely the artist, album, year and so on. This is achieved with the help of a tag.
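For instance, the simple ID3v1 tag stored in the last 128 bytes of many MP3 files ("TAG" marker, then 30-byte title, artist and album fields, a 4-byte year, a comment and a genre byte) can be read with a short routine like the hedged sketch below; real catalog software would also need to handle the newer ID3v2 tags:

// Sketch: read the ID3v1 tag from the end of an MP3 file.
#include <fstream>
#include <string>

struct Mp3Tag { std::string title, artist, album, year; };

bool readId3v1(const std::string& path, Mp3Tag& tag) {
    std::ifstream f(path, std::ios::binary);
    if (!f) return false;

    char buf[128];
    f.seekg(-128, std::ios::end);                     // the ID3v1 tag occupies the last 128 bytes
    if (!f.read(buf, 128)) return false;
    if (std::string(buf, 3) != "TAG") return false;   // no ID3v1 tag present

    auto field = [&](int off, int len) {
        std::string s(buf + off, len);
        return s.substr(0, s.find('\0'));             // fields are NUL-padded
    };
    tag.title  = field(3, 30);
    tag.artist = field(33, 30);
    tag.album  = field(63, 30);
    tag.year   = field(93, 4);
    return true;
}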
The basic idea of this project is to create software which can store the contents of our MP3 CDs in catalog form, so that if we have any number of CDs it will be easy for us to manage them.
This helps us to create a catalog of our disk collection.
By using such catalog, we can easily find all necessary files and folders without the need to insert disks into the drive.
We can also sort files, folders, disks and categories by attributes, names, locations, creation date, artist, album and so on. Duplicate files are flagged by the software.
We can update file, folder, and disk information at any time.
 

Online Bus Reservation

This is an Online Bus Reservation package to manage buses, routes, services and passengers, and to provide a degree of comfort to both the organization and the passenger. Today the leading bus travel companies use such packages to ease their work.
Features:
The project keeps track of the following modules:
Avail Online Reservation
Route maps
Availability of seats
Fares
Services
Payment
Development:
This project is coded in the C# .NET environment in «\ASP.NETProj\Online_reservation
DATABASE : The system handles an MS Access database called «\Finance\App_Data\bus.mdb
This database contains tables with the following structure:
1. Source
2. Destination
3. Arrival Time
4. Departure Time
5. Fare

Friday 1 May 2015

Multi Tasking Sockets

This article is about a client/server multi-threaded socket class. The threading is optional, since the developer is still responsible for deciding whether it is needed.
There are other socket classes here and in other places on the Internet, but none of them provides feedback (event detection) to your application the way this one does.
It provides detection of the following events: connection established, connection dropped, connection failed and data reception (including 0-byte packets).
This article presents a new socket class which supports both TCP and UDP communication. But it provides some advantages compared to other classes that you may find here or on some other Socket Programming articles.
First of all, this class doesn't have any limitation like the need to provide a window handle to be used. This limitation is bad if all you want is a simple console application.
So this library doesn't have such a limitation. It also provides threading support automatically for you, which handles the socket connection and disconnection to a peer.
It also features some options not yet found in any socket classes that I have seen so far. It supports both client and server sockets.
A server socket can be referred to as a socket that can accept many connections, and a client socket is a socket that is connected to a server socket. You may still use this class to communicate between two applications without establishing a connection.
In the latter case, you will want to create two UDP server sockets (one for each application). This class also helps reduce the coding needed to create chat-like applications and IPC (Inter-Process Communication) between two or more applications (processes).
Reliable communication between two peers is also supported with TCP/IP, with error handling. You may want to use the smart addressing operation to control the destination of the data being transmitted (UDP only). The TCP operation of this class deals only with communication between two peers.
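For comparison, the events the class reports map naturally onto a plain sockets loop. The sketch below is a minimal single-connection POSIX TCP echo server, not the article's Windows class; the port number and the OnConnect/OnDataReceived/OnDisconnect labels are illustrative:

// Sketch: a plain TCP server that surfaces the same three events described above.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);                 // assumed port
    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(srv, 5);

    int client = accept(srv, nullptr, nullptr);  // event: connection established
    std::printf("OnConnect: peer connected\n");

    char buf[1024];
    for (;;) {
        ssize_t n = recv(client, buf, sizeof(buf), 0);
        if (n > 0) {
            std::printf("OnDataReceived: %zd bytes\n", n);   // event: data reception
            send(client, buf, n, 0);                          // echo the data back
        } else {                                              // 0 = orderly shutdown, <0 = error
            std::printf("OnDisconnect: connection dropped\n");
            break;
        }
    }
    close(client);
    close(srv);
    return 0;
}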
SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS
    Processor : Intel Pentium IV
    RAM : 128 MB
    Hard Disk : 20GB
SOFTWARE REQUIREMENTS
    Operating System : Windows 98,2000,xp
    Tools : jdk1.5.0
    Technologies : Java Swings, JDBC, Servlets

Securable Network in three-party Protocols

This work presents quantum key distribution protocols (QKDPs) to safeguard security in large networks, ushering in new directions in classical cryptography and quantum cryptography.
Two mediator protocols, one with implicit user authentication and the other with explicit mutual authentication, are proposed to demonstrate the merits of the new combination, which include the following:
1) security against such attacks as man-in-the-middle, eavesdropping and replay,
2) efficiency is improved as the proposed protocols contain the fewest number of communication rounds among existing QKDPs, and
3) two parties can share and use a long-term secret (repeatedly).
To prove the security of the proposed schemes, this work also presents a new primitive called the Unbiased-Chosen Basis (UCB) assumption.
SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS
Processor : Intel Pentium IV
RAM : 512 MB
Hard Disk : 40GB
SOFTWARE REQUIREMENTS
Operating System : Windows 98,2000,xp
Tools : jdk1.5.0
Technologies : J2SE (network,IO,Swings, Util,crypto)

Implementation of Security in WAN

The Internet is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope that are linked by a broad array of electronic and optical networking technologies.
The Internet carries a vast array of information resources and services. Different network elements such as routers, switches and hubs are interconnected for communication of data over the transmission media.
The routers connect the WAN interfaces through serial ports for data transmission and forward packets using routing tables, whereas switches and hubs connect the LAN.
Static routing tables and dynamic routing protocols tell the routers where to forward IP traffic.
These routers mainly route the data traffic between networks without any filtering. The job of traffic filtering is best performed by Access Control Lists.
An access list filters traffic by specifying which packets are to be forwarded and which are to be blocked at the router's interfaces.
The router examines each packet and forwards or discards it based on the information available in the access control list.
Traffic can be restricted by source IP address, destination IP address, or port number.
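As a hedged illustration using standard Cisco IOS syntax (the addresses, interface name and access-list number are made up, not taken from the training material), an extended access list filtering by these criteria might look like this:

! Illustrative extended access list (hypothetical addresses and interface)
access-list 101 remark allow HTTP from the internal LAN
access-list 101 permit tcp 192.168.1.0 0.0.0.255 any eq 80
access-list 101 remark block Telnet from everywhere
access-list 101 deny tcp any any eq 23
access-list 101 permit ip any any
!
interface Serial0/0
 ip access-group 101 in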
Finally, in the reports the trainees have to issue different commands on the router, covering various routing techniques and access lists, to get the required results for connection establishment and troubleshooting of the WAN interfaces and LAN networks.