Accepted Papers

  • An Efficient Re-routing scheme for Cost effective mechanism in WDM Network
    Punam R. Patil¹ and Bhushan V. Patil², ¹Laxmi Narain College of Technology, India and ²R.C.Patel Institute of Technology, India
    In this paper, we propose an efficient re-routing algorithm based on dynamic routing in a bi-directional WDM optical network. In wavelength-routed WDM networks, a heuristic algorithm is used for routing and wavelength assignment. The main objective of this algorithm is to minimize both the number of wavelengths required and the hop length between source and destination nodes. We consider a wavelength-routed WDM optical network and apply the heuristic algorithm to it in two phases. In the first phase, the existing routing algorithm is performed. In the second phase, the proposed re-routing is performed to reduce the number of wavelengths required by the first phase, which also minimizes the hop length of each route. Minimizing the wavelength requirement reduces the need for wavelength converters and hence the network cost. The earlier routing algorithm requires more parameters than the proposed re-routing algorithm. Both algorithms are compared in terms of parameters such as number of hops, Network Congestion (NC), converter requirement and Network Wavelength Requirement (NWR). The efficiency of the proposed approach has been verified through analytical and simulation results for a well-known 6-node, 9-link WDM network. The simulation results show that the re-routing algorithm converges more quickly than the routing algorithm, reduces the cost of the network and maximizes resource utilization.
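    For readers less familiar with routing and wavelength assignment (RWA), the Python sketch below illustrates a common baseline, shortest-path routing with first-fit wavelength assignment. It is not the authors' heuristic or their re-routing phase, and the ring topology, demand list and wavelength count are hypothetical.

      # Illustrative RWA baseline: shortest-path routing + first-fit wavelength
      # assignment. Not the paper's heuristic; topology and demands are made up.
      import networkx as nx

      def first_fit_rwa(graph, demands, num_wavelengths=4):
          # used[(u, v)] holds the wavelengths already assigned on that undirected link
          used = {tuple(sorted(e)): set() for e in graph.edges()}
          assignments = {}
          for src, dst in demands:
              path = nx.shortest_path(graph, src, dst)            # minimize hop length
              links = [tuple(sorted((path[i], path[i + 1]))) for i in range(len(path) - 1)]
              for w in range(num_wavelengths):                    # first-fit wavelength
                  if all(w not in used[link] for link in links):
                      for link in links:
                          used[link].add(w)
                      assignments[(src, dst)] = (path, w)
                      break
              else:
                  assignments[(src, dst)] = (path, None)          # demand blocked
          return assignments

      ring = nx.cycle_graph(6)                                    # hypothetical topology
      print(first_fit_rwa(ring, demands=[(0, 3), (1, 4), (2, 5)]))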
  • Uncertainty isn't Dilemma in Project Development Process
    Kardile Vilas Vasantrao, Tulajaram Chaturchand College, India
    Software development organizations still face failures, late delivery, budget overruns and similar problems. Although there are numerous reasons behind them, many arise from complexity and uncertainty: how can one decide confidently when more than one viable option is available among existing software methodologies? At the same time, numerous studies show that a feasible formulation of the project and of its development approach is essential for software project success. Given the intensity of this problem, it is necessary to reduce complexity and to know the level and sources of uncertainty; this enables the developer to design a feasible approach for project development, which is essential for software project success. With this in mind, the study explores a framework for the modelling process with a fuzzy-logic-based solution for tackling uncertainty.
  • Efficient Data Storage in Desktop Data-Grid Computing using Real-Time parameters
    Karthika S¹ and Rajasubramanian S², ¹Anna University, India and ²T.J.S.Engineering College, India
    To protect data against attacks, appropriate security measures have to be taken; this includes analysing the likelihood of attack and being aware of threatening situations or people. Data storage provides one of the core functions of the modern computer, with information retained by storage techniques at every level of the storage hierarchy. For any particular implementation of a storage technology, the characteristics worth measuring are capacity and performance. Enterprise businesses and government agencies around the world face the certainty of losing sensitive data from crashed devices (server or client). This drives the need for a complete data protection solution that secures data on all common platforms, deploys easily, scales to any size of organization and meets strict compliance requirements related to privacy laws and regulations; it helps to ensure the privacy of data and improves its availability. This is particularly relevant in the paradigm of volunteer computing, a specific type of distributed system in which shared resources (processor or storage) are provided voluntarily by the clients of the desktop data grid, and the number of PCs connected to the internet is projected to keep growing. High-level security is given to the Volunteer Storage Clients (VSCs) for redundant data storage, and each VSC is classified into different levels based on real-time parameters such as bandwidth, zero-time shutdown and storage capacity. We propose an architecture for desktop data grids with a centralized server, which increases the performance of the system and reduces the complexity of the server. The efficiency of the system depends not only on the security level of the client but also on the sensitivity of the data being stored. A simple metric termed the Fragmentation Factor (FF) is therefore proposed, which considers both the security of the client and the sensitivity of the data, and erasure tornado codes are applied to cope with unreliable storage components. Enhanced levels of security can be achieved in the corporate environment by imposing security auditing: to ensure the security of the various servers and systems, security has to be assessed through proper audit tools. For selecting the high-priority virtual server for data storage, three factors are analysed: bandwidth, availability and storage. The system was implemented in Java in our laboratory, and the efficiency of data storage and security is higher than in the existing architecture.
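    The abstract does not spell out how the Fragmentation Factor combines its two inputs, so the sketch below is a purely hypothetical illustration of a metric that weighs client security level against data sensitivity to cap how many erasure-coded fragments a client may hold; the function names, weighting and fragment limit are invented for illustration.

      # Purely hypothetical: combine client security level and data sensitivity
      # into one score that caps the number of fragments a client may store.
      # This is NOT the paper's Fragmentation Factor formula.
      def fragmentation_factor(client_security, data_sensitivity):
          # both inputs normalized to [0, 1]; sensitive data on weak clients scores low
          return client_security * (1.0 - data_sensitivity)

      def fragments_allowed(ff, max_fragments=8):
          return max(1, round(ff * max_fragments))

      ff = fragmentation_factor(client_security=0.9, data_sensitivity=0.7)
      print(ff, fragments_allowed(ff))        # roughly 0.27 -> 2 fragments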
  • A New Partition based Association rule Mining Algorithm for BigData
    Ramesh Kumar, M.Sambath and S.Ravi, Hindustan University, India
    Association rule mining is an important research area in the field of data mining, especially for sales transactions, and a number of algorithms have been presented in this regard. In this paper, a comparison of the PARTITION algorithm with the CMA algorithm is presented after improving the PARTITION algorithm. In this study, randomized partitioning of the database is performed: the database is randomized so that genuinely random data is available for better results. The randomized partitioning has been implemented in a different tool, VB.NET, whereas CMA uses MATLAB for randomization, in order to achieve better performance and more efficient results. Extensive experiments show that although the randomized PARTITION algorithm takes two database scans, compared to the single scan of CMA, it still gives better results and higher efficiency than CMA.
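    As background, the core idea of partition-based frequent-itemset mining (two passes: locally frequent itemsets per partition, then one global recount of the candidate union) can be sketched as follows. The transactions, partition count and support threshold are illustrative, and the paper's randomized variant and its CMA comparison are not reproduced here.

      # Sketch of the PARTITION idea: pass 1 finds locally frequent itemsets in
      # each partition, pass 2 recounts the union of candidates on the full data.
      # Transactions and the minimum-support threshold are hypothetical.
      from itertools import combinations

      def local_frequent(partition, min_sup):
          counts = {}
          for t in partition:
              for k in (1, 2):                               # itemsets of size 1 and 2
                  for items in combinations(sorted(t), k):
                      counts[items] = counts.get(items, 0) + 1
          return {i for i, c in counts.items() if c / len(partition) >= min_sup}

      def partition_mining(db, n_parts=2, min_sup=0.5):
          size = max(1, len(db) // n_parts)
          parts = [db[i:i + size] for i in range(0, len(db), size)]
          candidates = set().union(*(local_frequent(p, min_sup) for p in parts))  # scan 1
          result = {}
          for items in candidates:                            # scan 2: global support
              sup = sum(all(i in t for i in items) for t in db) / len(db)
              if sup >= min_sup:
                  result[items] = sup
          return result

      db = [{"milk", "bread"}, {"milk", "butter"}, {"bread", "milk"}, {"butter"}]
      print(partition_mining(db))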
  • A Secure Trust Management Scheme and Energy Aware Data Transmission in MANETS
    K.Jagadheeswari and V.Umadevi, Arunai Engineering College, India
    Mobile devices can move and reorganize themselves dynamically. The dynamic nature and energy-constrained nodes of mobile ad hoc networks (MANETs) leave them open to security attacks. A trust management scheme based on direct and indirect observation is developed to enhance security in MANETs, and a new security mechanism is proposed to ensure it. The proposed work is implemented with the AODV routing protocol, which is used to find nodes with maximum energy for secure routing within the trust management scheme based on direct and indirect observation of nodes. This helps to identify and prevent compromised or malicious nodes in the network, and it prevents security issues such as malicious node attacks and misbehaviour such as dropping or modifying packets during transmission. The proposed scheme is further useful for secure data transfer, improved network performance, and avoiding congestion and hidden-terminal situations.
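    As a rough illustration of observation-based trust, the sketch below combines a node's own (direct) packet-forwarding observations with neighbours' (indirect) reports using a weighted average; the weights, counters and decision threshold are assumptions, not the paper's scheme.

      # Hypothetical sketch: fuse direct and indirect observations into a trust
      # value, as observation-based trust management schemes commonly do.
      def direct_trust(forwarded, dropped):
          total = forwarded + dropped
          return forwarded / total if total else 0.5       # neutral when no history

      def combined_trust(direct, neighbour_reports, alpha=0.7):
          indirect = sum(neighbour_reports) / len(neighbour_reports) if neighbour_reports else 0.5
          return alpha * direct + (1 - alpha) * indirect   # weight own evidence higher

      d = direct_trust(forwarded=18, dropped=2)            # node forwarded 18 of 20 packets
      print(combined_trust(d, neighbour_reports=[0.8, 0.6, 0.9]) > 0.5)   # trusted?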
  • A Scalable Congestion Management Technique for Bi-directional Multistage Interconnection Networks
    Naseera Rahmathilahi and Chempavathy. B, TOCE, India
    Increasing power consumption and cost concerns lead to the design of cost-effective networks that use a limited number of resources while keeping the required performance level. However, as network size decreases, so does the bandwidth the network can offer, which increases the probability of congestion because the saturation point of the network is reached at lower traffic loads. Efforts have traditionally been made to avoid congestion, but an alternative approach is to remove the negative effect of congestion, namely Head-of-Line (HoL) blocking, rather than congestion itself. An effective congestion management technique is proposed to remove HoL blocking in multistage interconnection networks, addressing both low-order and high-order HoL blocking. The proposed technique requires fewer resources and outperforms current congestion management techniques.
  • Automated Disentangling of Overlapping Human Chromosome Images
    Saranyadevi V, Kaaviya S and Nirmala M, K.S.Rangasamy College of Technology, India
    The identification of chromosome abnormalities is an essential part of the diagnosis and treatment of genetic disorders such as chromosome syndromes and many types of cancer. Modern cytogenetic imaging techniques have improved the study of chromosome aberrations: techniques such as Comparative Genomic Hybridization (CGH), Multicolour Fluorescence In Situ Hybridization (M-FISH) and Spectral Karyotyping (SKY) are able to detect structural abnormalities. Before these are applied, an important process to be carried out is segmentation. There is now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. Image segmentation is the process of partitioning an image into multiple segments; its goal is to simplify and/or change the representation of an image into something more meaningful. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours. Segmentation is needed in order to find overlapping or touching cells so that the analysis becomes easier. This paper proposes an algorithm to automatically segment the chromosomes.
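    For context, a common baseline for separating touching objects in microscopy images is Otsu thresholding followed by a distance transform and watershed, sketched below with OpenCV. This is not the paper's algorithm, and the input file name is hypothetical.

      # Common baseline (not the paper's method): Otsu threshold, distance
      # transform, watershed to split touching objects. Input image is hypothetical.
      import cv2
      import numpy as np

      img = cv2.imread("chromosomes.png")                        # hypothetical input
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)       # distance to background
      _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
      sure_fg = sure_fg.astype(np.uint8)
      unknown = cv2.subtract(binary, sure_fg)                    # region watershed must decide

      _, markers = cv2.connectedComponents(sure_fg)
      markers = markers + 1                                      # keep background label > 0
      markers[unknown == 255] = 0                                # 0 marks the unknown region
      markers = cv2.watershed(img, markers)                      # boundaries get label -1
      print("objects found:", markers.max() - 1)                 # exclude background label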
  • Implementation of Multimode power gating in CMOS VLSI circuits to reduce the Standby Leakage
    Haripriya.S and Jessintha.D, Easwari Engineering College, India
    Static power reduction is one of the key factors in VLSI implementation, and multi-threshold CMOS is a key technique for reducing standby leakage power during long periods of inactivity. A power-gating scheme was therefore presented to support multiple power-off modes, which reduces leakage power during short periods of inactivity; however, it suffers from high sensitivity to process variations, which impedes manufacturability. A new power-gating technique is introduced that is tolerant to process variations and scalable to more than two intermediate power-off modes. The design requires less design effort and offers greater power reduction and smaller area cost than the previous method, and it can be combined with existing techniques to offer further static power reduction benefits. Detailed analysis and extensive simulation results demonstrate the effectiveness of the design. The digital design of multimode power switches is realized with configurable modes in which the power reduction is monitored. Evaluations and comparisons with a large core are validated through simulations, and the design can be applied to digital circuits.
  • Development of DNS for Quantify the Operational Status of DNSSEC Deployment
    D.Stalin David and V.Anusuya Devi, National Engineering College, India
    The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services or any resource connected to the Internet or a private network; a domain name is a unique name that identifies an internet resource such as a website. DNSSEC not only protects the DNS but has also generated interest in a secured global database for new services. However, DNSSEC lacks the robustness property found in the original DNS design, whereas DNS has remained stable because its design tolerates failures and misconfigurations. For example, the top-level domains .gov and .fr suffered outages from configuration errors that made their entire subtrees unverifiable, so that all zones under .gov became unverifiable because of an operational misstep higher in the hierarchy. The DNS hierarchy was designed to distribute authority and provide name uniqueness, not to provide key verification, so there is a fundamental misalignment in using the DNS hierarchy for key verification. The proposed system uses the AES cryptographic algorithm to verify keys and to validate a key against malicious servers. This verification and validation process is done on the server side. The input is a domain name obtained from the user; it is sent to the server, the server provides the key, the key is verified and validated by a particular group of agents, and the public key for the corresponding domain is then sent to the client. The output is the key in encrypted form, which the client decrypts for authentication purposes. The proposed key verification technique is designed to properly verify and validate its data more than twice as often as DNSSEC does. In DNS, some misconfigurations can occur over time; the proposed key verification technique does not fail in this way because it is highly secure and robust, and it is able to properly verify and validate its data up to 99.5% of the time in a specific deployment. Verifying public data helps to protect client resolvers from malicious servers.
  • Moving ATM Applications to Smartphones with a Secured Pin-Entry Method
    Kavitha V, S.A.Engineering College, India
    A personal identification number (PIN) is a widely used numeric password. The 4-digit numeric PIN is used for authentication in many important applications such as ATMs, where shoulder-surfing attacks are of great concern. Some existing methods provide security for PIN entry, but they rely on only the limited cognitive capabilities of the human adversary; their major disadvantage is that human adversaries can become more effective at eavesdropping and guessing by training themselves. The proposed method, called the improved black-white (BW) method, can be more secure, as it uses bi-coloured keys. Another contribution is an authentication service that uses local databases and a hash function; the hash function is mainly used to send the PIN securely to the server over a public channel. An ATM application is created as an Android application in which transactions can be performed on smartphones using a virtual-money concept.
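    As a minimal illustration of sending a PIN through a hash rather than in the clear, the sketch below salts and stretches the PIN with PBKDF2 before it leaves the device; the salt handling, iteration count and comparison step are assumptions, and the paper's exact hash construction may differ.

      # Minimal sketch: salt and stretch a low-entropy PIN before transmission.
      # Salt handling and parameters are assumptions, not the paper's protocol.
      import hashlib, hmac, os

      def hash_pin(pin: str, salt: bytes) -> str:
          # PBKDF2 stretches the 4-digit PIN so the digest is hard to brute-force
          return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000).hex()

      salt = os.urandom(16)                      # server-issued, per-session salt
      digest = hash_pin("4921", salt)            # hypothetical PIN
      print(hmac.compare_digest(digest, hash_pin("4921", salt)))   # server-side check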
  • Cost Effective Automated Scaling of Web Applications for Multi Cloud Services
    T.Annapoorani and P.Neelaveni, G.K.M College of Engineering and Technology, India
    Automatic scaling can benefit many internet applications, since their resource usage can be scaled up and down automatically by the cloud service provider. We present a system that provides automatic scaling for internet applications in a multi-cloud environment. A skewness and Apriori algorithm is developed to achieve a good demand-satisfaction ratio. We also analyse the performance of a common cloud server using queueing systems and load distribution to obtain accurate estimates of the performance indicators. The model allows cloud operators to determine the relationship between the number of servers and the input buffer size on one side and the performance indicators on the other. It supports green computing by avoiding idle states for servers; the common cloud server continuously schedules tasks for every server, and load is balanced by the skewness algorithm. Our experimental results demonstrate that the system can improve throughput by 190% over an open-source implementation of Amazon EC2.
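    The abstract does not define the skewness measure, so the sketch below shows one common formulation used in skewness-based load balancing, where uneven utilization across a server's resources yields a higher score; the utilization numbers are illustrative and the paper's definition may differ.

      # One common skewness formulation (not necessarily the paper's): uneven use
      # of CPU, memory and network on a server raises the score.
      from math import sqrt

      def skewness(utilizations):
          avg = sum(utilizations) / len(utilizations)
          if avg == 0:
              return 0.0
          return sqrt(sum((u / avg - 1) ** 2 for u in utilizations))

      balanced = skewness([0.60, 0.58, 0.62])    # CPU, memory, network all similar
      uneven = skewness([0.90, 0.20, 0.30])      # CPU hot, other resources idle
      print(balanced < uneven)                   # True: the uneven server scores higher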
  • A Trinity Construction for Web Extraction Using Efficient Algorithm
    T. Indhuja Priyadharshini and G.Umarani, S.A Engineering College, India
    Trinity is an innovative framework that automatically extracts data from internet-based web applications and processes it in a linear tree fashion. Most users are looking for an effective system that can provide an optimized comparative solution without great expense. An automatic parser placed at the back end of the Trinity framework takes care of subdividing the web pattern into smaller pieces of patterns, namely prefix, suffix and separator. After the exact information about the data located in the web pages has been obtained, the data is cleaned up and formatted for manipulation, enabling an efficient cost-comparison system to emerge. In the proposed system, an ant colony optimization algorithm is used to extract the relevant content from a website without any major computational impact on the system. Hence Trinity returns only the data extracted from the web in the approximate format, while the ant colony optimization provides accuracy without running into an NP-complete problem.
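    As a toy illustration of the prefix/separator/suffix idea, the sketch below treats text shared by every sample page as template and captures the varying fields between those template pieces; the sample pages and template strings are invented, and this is not the Trinity parser or its ant colony optimization step.

      # Toy illustration of template-based extraction: shared text is template
      # (prefix, separator, suffix) and the varying pieces in between are data.
      import re

      pages = [
          "<li>Phone A - $199</li>",
          "<li>Phone B - $249</li>",
      ]

      prefix, separator, suffix = "<li>", " - $", "</li>"        # assumed shared template
      pattern = re.escape(prefix) + "(.*?)" + re.escape(separator) + "(.*?)" + re.escape(suffix)

      for page in pages:
          name, price = re.search(pattern, page).groups()
          print(name, price)                                     # extracted record fields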
