
Accepted Papers

  • Performance Optimized Routing for SLA Enforcement in Cloud Computing
    Chinnu Edwin A and A. Neela Madheswari, KMEA Engineering College, India
    ABSTRACT
    Cloud computing has recently emerged as a new technology for hosting and delivering services over the Internet. Cloud service providers negotiate with their potential clients to sell computing power, and the terms of the Quality of Service (QoS) and the economic conditions are established in a Service Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided. Since providers usually have different types of clients, distinguished by their relationship with the provider or by the fee that they pay, it is important to minimize the impact of SLA violations on preferential clients. In the proposed system, a performance-optimized routing technique is used by the cloud broker, which decides which datacenter should serve each user request. The routing technique satisfies the QoS requirements of high-priority clients, and hence the proposed system achieves SLA enforcement.
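    As a rough illustration of the broker-side routing the abstract describes, the Python sketch below routes premium clients to the lowest-latency datacenter with spare capacity; the datacenter figures, priority tiers and selection policy are hypothetical stand-ins, not the paper's algorithm.

      # Sketch of a priority-aware cloud broker: premium clients are routed
      # to the lowest-latency datacenter with spare capacity; best-effort
      # clients take the least-loaded one, preserving fast capacity for
      # premium traffic. All datacenter figures are hypothetical.
      datacenters = [
          {"name": "dc-east", "latency_ms": 40, "free_slots": 2},
          {"name": "dc-west", "latency_ms": 65, "free_slots": 5},
          {"name": "dc-eu",   "latency_ms": 90, "free_slots": 8},
      ]

      def route(request, datacenters):
          candidates = [d for d in datacenters if d["free_slots"] > 0]
          if not candidates:
              return None  # SLA violation unavoidable: no capacity anywhere
          if request["priority"] == "premium":
              best = min(candidates, key=lambda d: d["latency_ms"])
          else:
              best = max(candidates, key=lambda d: d["free_slots"])
          best["free_slots"] -= 1
          return best["name"]

      print(route({"id": 1, "priority": "premium"}, datacenters))  # dc-east
      print(route({"id": 2, "priority": "basic"}, datacenters))    # dc-eu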
  • An Extended Hybrid Recommender System Based on Association Rules Mining in Discussion Groups
    Mahnaz Ebrahimi and Mandana Goudarzian, Amirkabir University of Technology, Iran
    ABSTRACT
    Social groups in the form of different discussion forums are proliferating rapidly. Most of these forums have been created to exchange and share members' knowledge in various domains, and members of these groups may need to use and retrieve other members' knowledge. Recommender systems are therefore one technique that can be employed to extract knowledge based on members' needs and preferences. Notably, valuable information resides not only in users' comments and posts but also in other social data; in particular, it can be extracted from the relations and interactions among users. Hence, association rule mining is one technique that can be applied to extract such implicit data as input to the recommender system. Our objective in this study is to improve the performance of a hybrid recommender system by defining new hybrid rules. In this regard, for the first time, we define new hybrid rules that consider both user data and post content. Each of the defined rules is examined on an asynchronous discussion group, and the impact of the defined rules on the precision and recall of the recommender system is measured. We find that, based on this impact, the defined rules can be classified and weighted according to their impact and usability in a specific domain or application. The results of the experiments are promising.
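    To make the rule-mining step concrete, here is a minimal Python sketch that extracts {A} -> {B} association rules, with support and confidence thresholds, from toy "transactions" of thread participants; the data, thresholds and rule form are illustrative assumptions rather than the paper's hybrid rules.

      # Toy association-rule miner over member/thread interactions: each
      # "transaction" is the set of users active in one thread. A rule
      # {A} -> {B} suggests recommending B's posts to A's readers.
      from itertools import combinations

      threads = [
          {"ana", "ben", "cam"},
          {"ana", "ben"},
          {"ben", "cam"},
          {"ana", "ben", "dia"},
      ]
      MIN_SUPPORT, MIN_CONF = 0.5, 0.7

      def support(itemset):
          return sum(itemset <= t for t in threads) / len(threads)

      users = sorted(set().union(*threads))
      for a, b in combinations(users, 2):
          s = support({a, b})
          if s >= MIN_SUPPORT:
              for x, y in ((a, b), (b, a)):
                  conf = s / support({x})
                  if conf >= MIN_CONF:
                      print(f"{{{x}}} -> {{{y}}}  support={s:.2f}  conf={conf:.2f}")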
  • Network Traffic Analysis: Hadoop Pig vs. Typical MapReduce
    Anjali P P and Binu A, Rajagiri School of Engineering and Technology, India
    ABSTRACT
    Big data analysis has become very popular in the present-day scenario, and the manipulation of big data has gained the keen attention of researchers in the field of data analytics. Analysis of big data is currently considered an integral part of many computational and statistical departments, and novel approaches to data analysis are evolving on a daily basis. Thousands of transaction requests are handled and processed every day by different websites associated with e-commerce, e-banking, e-shopping carts, etc. Network traffic and weblog analysis plays a crucial role in such situations, and Hadoop can be suggested as an efficient solution for processing the NetFlow data collected from switches as well as website access logs at fixed intervals.
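    The Python sketch below spells out the map/shuffle/reduce phases for one typical NetFlow question (total bytes per source IP); in Pig the same pipeline collapses to a GROUP BY plus SUM, which is exactly the trade-off the paper examines. The flow records are hypothetical.

      # Simulating the MapReduce pipeline in plain Python: sum bytes per
      # source IP over NetFlow-like (src, dst, bytes) records.
      from collections import defaultdict

      flows = [
          ("10.0.0.1", "10.0.0.9", 512),
          ("10.0.0.2", "10.0.0.9", 128),
          ("10.0.0.1", "10.0.0.7", 2048),
      ]

      def mapper(flow):
          src, dst, nbytes = flow
          yield src, nbytes           # emit (key, value) pairs

      def shuffle(pairs):
          groups = defaultdict(list)  # group values by key, as Hadoop does
          for k, v in pairs:
              groups[k].append(v)
          return groups

      def reducer(key, values):
          return key, sum(values)     # total traffic per source IP

      pairs = [kv for f in flows for kv in mapper(f)]
      for key, values in shuffle(pairs).items():
          print(reducer(key, values))  # ('10.0.0.1', 2560), ('10.0.0.2', 128)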
  • Deduplicating and Linking the Records in a Repository Using Genetic Programming
    Anju Abraham, Fouziya Majeed and Ajo A Mundackal, KMEA Engineering College, India
    ABSTRACT
    Identifying and handling replicas is important to guarantee the quality of the information made available by data-intensive systems such as digital libraries and e-commerce brokers. The consequences of allowing dirty data in repositories include performance degradation, quality loss and increasing operational costs. Removing duplicate records in a single database is a crucial step in the data cleaning process, because duplicates can severely influence the outcomes of any subsequent data processing or data mining. In this paper we use a genetic approach to record deduplication and record linkage. Deduplication is the problem of detecting and removing duplicate entries in a repository, and record linkage is the process of matching records from several databases that refer to the same entities. We use the Cora dataset for the experiments: we remove the duplicate entries in the Cora dataset using Genetic Programming (GP) and then match the authors in the dataset with their different papers. For record deduplication, our approach combines several different pieces of evidence extracted from the data content to produce a deduplication function that is able to identify whether two or more entries in a repository are replicas; for record linkage, we use a similarity function that matches the authors with their different papers.
  • Temporal Change Pattern Discovery in the Presence of a Generalization Hierarchy Based on FP-Growth
    Lekha C. Warrier and Arifa Azeez, KMEA Engineering College, India
    ABSTRACT
    Data mining has significant importance in the field of the information industry. Due to the wide availability of data and the need to turn these data into information useful to the user, many methods have been implemented. Mining frequent itemsets from large databases has a prominent place in market analysis, and change mining was introduced to cope with the continuous evolution of markets. Change mining in the context of frequent itemsets captures the changes in itemsets from one time period to another. The paper proposes a support-driven History Generalised Pattern (HIGEN) miner based on the Frequent Pattern Growth (FP-Growth) algorithm. The support-driven approach avoids post-processing after itemset mining, and FP-Growth avoids the cost of candidate generation and testing while reducing the number of dataset scans required for mining. A non-redundant HIGEN based on FP-Growth is also proposed to focus attention on minimally redundant itemsets. Experiments were performed on a real dataset to analyse the significance of the proposed approach in real-world applications.
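    A minimal Python sketch of the change-mining idea: compare each itemset's support across two time periods and report the shifts. The paper mines the itemsets with FP-Growth; the naive counting below merely stands in for it, and the transactions and threshold are toy assumptions.

      # Change mining in miniature: flag itemsets whose support shifted
      # noticeably between two time periods.
      from itertools import combinations

      period_1 = [{"milk", "bread"}, {"milk", "eggs"}, {"milk", "bread", "eggs"}]
      period_2 = [{"bread", "eggs"}, {"eggs"}, {"milk", "eggs"}, {"bread", "eggs"}]

      def support(itemset, transactions):
          return sum(itemset <= t for t in transactions) / len(transactions)

      items = sorted(set().union(*period_1, *period_2))
      for r in (1, 2):
          for combo in combinations(items, r):
              s1 = support(set(combo), period_1)
              s2 = support(set(combo), period_2)
              if abs(s1 - s2) >= 0.3:   # report itemsets whose support shifted
                  print(f"{set(combo)}: {s1:.2f} -> {s2:.2f}")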
  • Frame correlation among different Resource Access Protocols in Real Time Systems
    Susmita Saha and Leena Das, KIIT University, India
    ABSTRACT
    The main intent of a resource access protocol is to schedule and synchronize tasks when many tasks use the same resources, i.e., where resources are shared. Various scheduling algorithms such as EDF and RMA are popularly used for sharing a set of serially reusable resources, but they cannot be used satisfactorily for sharing non-preemptable resources. This paper therefore discusses the traditional resource access protocols used to share critical resources among a set of real-time tasks, focusing on the different types of resource access protocols and their comparison.
  • Large-Scale Data Mining and Materialization using CM-Sketch and MR-Cube Algorithm
    Amrutha S and Viji Mohan A, KMEA Engineering College, India
    ABSTRACT
    Data cube analysis is a powerful tool for analyzing multidimensional data. Cube analysis provides users with a convenient way to discover insights from the data by computing aggregate measures over all possible groups. Many studies have been devoted to designing techniques for efficiently computing the cube, but these face limitations on a single machine or on clusters with a small number of nodes. Such limitations can be overcome using the MR-Cube approach, in which large-scale data are maintained and analyzed using the MapReduce programming paradigm. For groups with a large number of tuples, the memory required to maintain the intermediate data can become overwhelming. We address this challenge by identifying an important subset of holistic measures, partially algebraic measures, and by introducing a value-partition mechanism such that the data load on each machine can be controlled. The most interesting and challenging issue is extreme data skew, which occurs when a few cube groups are unusually large even though they belong to a cube region at the top of the lattice. By making use of compressed counting data structures such as the CM-Sketch and the Log-Frequency Sketch, the problem of data skew is overcome.
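    A small Count-Min sketch in Python, illustrating the kind of compressed counting structure the abstract relies on for detecting skewed groups in bounded memory; the width and depth, hash choice and toy group keys are arbitrary assumptions.

      # Count-Min sketch: approximate counts that never underestimate,
      # in fixed memory regardless of how many distinct cube groups appear.
      import hashlib

      class CountMinSketch:
          def __init__(self, width=256, depth=4):
              self.width, self.depth = width, depth
              self.table = [[0] * width for _ in range(depth)]

          def _buckets(self, key):
              for row in range(self.depth):
                  h = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
                  yield row, int(h, 16) % self.width

          def add(self, key, count=1):
              for row, col in self._buckets(key):
                  self.table[row][col] += count

          def estimate(self, key):
              # True count <= estimate; hash collisions only inflate it.
              return min(self.table[row][col] for row, col in self._buckets(key))

      cms = CountMinSketch()
      for group in ["US/2013"] * 1000 + ["IN/2013"] * 10:  # skewed cube groups
          cms.add(group)
      print(cms.estimate("US/2013"), cms.estimate("IN/2013"))  # ~1000, ~10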
  • Information Security in Cloud
    Divisha Manral, Jasmine Dalal and Kavya Goel, Guru Gobind Singh Indraprastha University, India
    ABSTRACT
    With the advent of the Internet, security became a major concern, as every piece of information was vulnerable to a number of threats. The cloud is a kind of centralized database where many clients store, retrieve and possibly modify data. Cloud computing is an environment that enables convenient and efficient access to a shared pool of configurable computing resources. However, data stored and retrieved in this way may not be fully trustworthy. The scope of this study encompasses research on various information security technologies and the proposal of an efficient system for ensuring information security on cloud computing platforms. Information security has become critical not only to personal computers but also to corporate organizations and government agencies, given that organizations these days rely extensively on the cloud for collaboration. The aim is to develop a secure system using encryption mechanisms that allow a client's data to be transformed into unintelligible data for transmission.
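    As one concrete encryption mechanism of the kind the abstract aims for, the sketch below uses Fernet (authenticated symmetric encryption, AES-128-CBC with HMAC) from the widely used Python cryptography package; the paper's actual scheme may differ.

      # Making a client's data "unintelligible for transmission":
      # symmetric authenticated encryption with a client-held key.
      # Requires: pip install cryptography
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()      # client-held secret key
      cipher = Fernet(key)

      record = b"patient=42; diagnosis=confidential"
      token = cipher.encrypt(record)   # opaque ciphertext, safe to store in the cloud
      print(token[:30], b"...")

      # Only the key holder can recover the plaintext.
      assert cipher.decrypt(token) == record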
  • Fuzzy similarity documents
    Marwa Massaabi and Wahiba Ben Abdessalem Karaa, Higher Institute of Management, Tunisia
    ABSTRACT
    Information retrieval has become a delicate task due to the huge amount of data available on the web, and duplicate documents are a major cause of this burden. In this paper, we propose a new fuzzy similarity measure which automatically detects the similarity between documents and eliminates duplications. The approach combines fuzzy logic and distance measurements to achieve its goal.
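    For illustration only, here is a fuzzy-Jaccard similarity over normalized term frequencies, one standard fuzzy similarity measure; it is not necessarily the measure proposed in the paper.

      # Fuzzy similarity between documents: normalized term frequencies act
      # as fuzzy membership degrees; similarity = sum of minima / sum of maxima.
      from collections import Counter

      def memberships(text):
          counts = Counter(text.lower().split())
          peak = max(counts.values())
          return {w: c / peak for w, c in counts.items()}  # degrees in (0, 1]

      def fuzzy_similarity(doc_a, doc_b):
          a, b = memberships(doc_a), memberships(doc_b)
          words = set(a) | set(b)
          num = sum(min(a.get(w, 0), b.get(w, 0)) for w in words)
          den = sum(max(a.get(w, 0), b.get(w, 0)) for w in words)
          return num / den if den else 0.0

      d1 = "cloud storage keeps data in the cloud"
      d2 = "the cloud keeps data in cloud storage"
      print(fuzzy_similarity(d1, d2))             # same bag of words -> 1.0
      print(fuzzy_similarity(d1, "fuzzy logic"))  # unrelated -> 0.0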
  • A Survey on Association Rule Hiding Methods
    Khyati B. Jadav1, Shri. Jignesh Vania1 and Dhiren R. Patel2, 1LJ Institute of Engineering & Technology, India and 2National Institute of Technology, India
    ABSTRACT
    In recent years, the use of data mining techniques and related applications has increased greatly, as they are used to extract important knowledge from large amounts of data. This has increased the disclosure risks to sensitive information when data is released to outside parties. Databases containing sensitive knowledge must be protected against unauthorized access, so it has become necessary to hide sensitive knowledge in databases. To address this problem, Privacy Preserving Data Mining (PPDM) includes association rule hiding methods that protect the privacy of sensitive data against association rule mining. In this paper, we survey existing approaches to association rule hiding, along with some open challenges, and summarize some of the recent developments.
  • A Review on Content Based Image Retrieval of User's Choice using Interactive Genetic Algorithm
    Vaibhav Jain, Oriental College of Technology, India
    ABSTRACT
    In recent times, digital image libraries and other multimedia databases have expanded rapidly. The semantic gap between visual features and human semantics has therefore become a very important area of research, known as content-based image retrieval (CBIR). Given the need to retrieve images from large image databases effectively and precisely, the development of CBIR systems has become an important research issue, and the need to improve retrieval accuracy and narrow the semantic gap is high in view of the fast-growing demand for image retrieval. In this paper we review a user-oriented CBIR technique based on low-level visual features and an interactive genetic algorithm (IGA). Before extracting the visual features, the images are divided into k x k blocks and a block-wise comparison is performed. Color attributes such as the mean value, the standard deviation, and the image bitmap of a color image are used as retrieval features. In addition, the entropy based on the gray-level co-occurrence matrix is used as the texture feature, and the Canny edge detection technique provides the edge features. Finally, some future research directions and open problems in image retrieval are presented.
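    A numpy sketch of the block-wise colour features described in the review: the image is split into k x k blocks, and per-block mean and standard deviation are collected per channel. The random "image", block count and the omission of the GLCM-entropy and Canny features are simplifications.

      # Block-wise colour features for CBIR: split into k x k blocks and
      # take per-block mean/std per channel. (GLCM entropy and Canny edge
      # features would be stacked onto the same vector.)
      import numpy as np

      def block_color_features(img, k=4):
          h, w, _ = img.shape
          bh, bw = h // k, w // k
          feats = []
          for i in range(k):
              for j in range(k):
                  block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                  feats.extend(block.mean(axis=(0, 1)))  # mean R, G, B
                  feats.extend(block.std(axis=(0, 1)))   # std  R, G, B
          return np.array(feats)

      query = np.random.rand(64, 64, 3)       # stand-in for a real image
      vec = block_color_features(query)
      print(vec.shape)  # (4*4 blocks) * (3 means + 3 stds) = (96,)

      # Retrieval ranks database images by distance between such vectors,
      # with the IGA re-weighting features from user feedback.
      print(np.linalg.norm(vec - block_color_features(np.random.rand(64, 64, 3))))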
  • Fuzzy Logic Based Energy Efficient Routing Protocol under Load and Noisy Environment for Mobile Ad Hoc Networks
    Supriya Srivastava1, A. K. Daniel1, R. Singh2 and Sanjay Silakari3, 1M.M.M. Engineering College Gorakhpur, India, 2Ruhalkhand University, India and 3Rajiv Gandhi Tech University, India
    ABSTRACT
    Energy efficiency is a critical issue for battery-powered mobile devices in ad hoc networks, and routing based on energy-related parameters is used to extend the network lifetime. The paper proposes a fuzzy logic based approach for energy-aware routing in mobile ad hoc networks. The protocol helps every node in a MANET to choose the next efficient successor node on the basis of channel parameters such as environmental noise, load and residual energy, fuzzifying these crisp metrics to make routing decisions energy-aware. A fuzzy logic controller has been developed over these parameters, and the value it produces signifies the importance of a node for network formation and maintenance. The protocol improves the performance of the network by selecting the best node for forwarding the data packet to the next hop, increasing network lifetime.
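    A toy fuzzy controller in the spirit of the abstract, scoring a node from residual energy, load and noise; the triangular membership functions and rule base below are illustrative assumptions, not the paper's controller.

      # Fuzzify energy/load/noise (scaled to [0, 1]) with triangular sets,
      # fire a few rules, and defuzzify by weighted average into a crisp
      # node-importance score.
      def tri(x, a, b, c):
          # Triangular membership with feet a, c and peak b.
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def low(x):
          return tri(x, -0.5, 0.0, 0.5)

      def high(x):
          return tri(x, 0.5, 1.0, 1.5)

      def node_score(energy, load, noise):
          rules = [  # (firing degree, output level)
              (min(high(energy), low(load), low(noise)), 1.0),  # ideal relay
              (min(high(energy), high(load)), 0.5),             # busy but alive
              (low(energy), 0.0),                               # avoid draining
          ]
          num = sum(w * out for w, out in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0

      print(node_score(energy=0.9, load=0.2, noise=0.1))  # near 1: good next hop
      print(node_score(energy=0.2, load=0.8, noise=0.6))  # near 0: poor choice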
  • Techniques for Load Balancing in Wireless LANs
    Krishnanjali A. Magade and Abhijit Patankar, D.Y. Patil College of Engineering, India
    ABSTRACT
    Optimal load allocation strategies are proposed for a wireless sensor network connected in a star topology. The load considered here is of an arbitrarily divisible kind, such that each fraction of the job can be distributed and assigned to any processor for computation. Divisible Load Theory emphasizes how to partition the load among a number of processors and links so that the load is distributed optimally, its objective being to partition the load such that it can be distributed and processed in the shortest possible time. The existing strategies for both star and bus topologies are investigated; the performance of the suggested strategy is compared with the existing ones, and it is found to reduce the overall communication and processing time when allocation time is considered in the previous strategies. Wireless communications is one of the most active areas of technology development, and over recent years it has rapidly emerged in the market, providing users with network mobility, scalability and connectivity. Wireless Local Area Networks (WLANs) have been developed to provide users in a limited geographical area with high bandwidth and services similar to those supported by the wired Local Area Network (LAN). Many mechanisms have been studied that aim to resolve load imbalance among access points (APs), which is a key issue in WLANs; meanwhile, cell breathing (CB), which first appeared in cellular networks, provides another method of controlling coverage scale to achieve load balance. In this paper, we present a new load balancing algorithm through power management based on cell breathing.
  • Software Quality Prediction Using Evolutionary Algorithm: An Artificial Neural Network Approach
    Bhavya G, BMSIT, India
    ABSTRACT
    Assessing software quality at the early stages of the design and development process is very difficult, since most software quality characteristics, such as capability, reliability, security and performance, are not directly measurable. Nonetheless, they can be derived from other measurable software attributes. Several methodologies have been proposed to estimate the quality of software, and the artificial neural network is a good option. Various algorithms are used to train the neural network for efficient prediction. Here, the firefly algorithm, a type of evolutionary algorithm, is proposed to train the neural network in order to determine whether the software is faulty or fault-free.
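    A sketch of the proposed idea: the firefly algorithm searching the weight space of a one-neuron classifier on synthetic fault data, in place of backpropagation. The network size, data and firefly hyper-parameters are all illustrative.

      # Firefly algorithm training a tiny "neural network" (one sigmoid
      # neuron) to label modules faulty/fault-free: dimmer fireflies move
      # toward brighter (lower-loss) ones, attractiveness decaying with
      # distance, plus a shrinking random walk.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 3))                             # 3 toy code metrics
      y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

      def loss(w):  # mean squared error of a single sigmoid neuron
          p = 1 / (1 + np.exp(-(X @ w[:3] + w[3])))
          return np.mean((p - y) ** 2)

      n, dim, alpha, beta0, gamma = 15, 4, 0.2, 1.0, 1.0
      swarm = rng.normal(size=(n, dim))
      for _ in range(100):
          brightness = np.array([-loss(w) for w in swarm])
          for i in range(n):
              for j in range(n):
                  if brightness[j] > brightness[i]:
                      r2 = np.sum((swarm[i] - swarm[j]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)
                      step = alpha * (rng.random(dim) - 0.5)
                      swarm[i] += beta * (swarm[j] - swarm[i]) + step
          alpha *= 0.97  # cool the random walk

      best = min(swarm, key=loss)
      pred = 1 / (1 + np.exp(-(X @ best[:3] + best[3]))) > 0.5
      print("training accuracy:", np.mean(pred == y))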
  • Cloud Computing: Network Security Management System for Forensic Analysis
    Gnanasundaram R, Adiyamaan College of Engineering, India
    ABSTRACT
    Internet security problems remain a major challenge, with many security concerns such as Internet worms, spam, and phishing attacks. Botnets, well-organized distributed network attacks, consist of a large number of bots that generate huge volumes of spam or launch Distributed Denial of Service (DDoS) attacks on victim hosts. New emerging botnet attacks degrade the state of Internet security further. To address these problems, a practical collaborative network security management system is proposed with effective collaborative Unified Threat Management (UTM) and traffic probers. A distributed security overlay network with a centralized security center leverages a peer-to-peer communication protocol used in the UTMs' collaborative module and connects them virtually to exchange network events and security rules, and the UTMs' security functions are retrofitted to share security rules. In this paper, we propose a design and implementation of a cloud-based security center for network security forensic analysis. We propose using cloud storage to keep collected traffic data and then processing it with cloud computing platforms to detect malicious attacks. As a practical example, phishing attack forensic analysis is presented, and the required computing and storage resources are evaluated on real trace data. The cloud-based security center can instruct each collaborative UTM and prober to collect events and raw traffic, send them back for deep analysis, and generate new security rules. These new security rules are enforced by the collaborative UTMs, and the feedback events of such rules are returned to the security center. Through this kind of closed-loop control, the collaborative network security management system can identify and address new distributed attacks more quickly and effectively.
  • Rainfall Prediction using Data Mining Techniques - A Survey
    B. Kavita Rani1 and A. Govardhan2, 1JITS, India and 2JNTUH, India
    ABSTRACT
    Rainfall is considered one of the major components of the hydrological process, and it plays a significant part in evaluating drought and flooding events. It is therefore important to have an accurate model for rainfall prediction. Recently, several data-driven modeling approaches, such as multilayer perceptron neural networks (MLP-NN), have been investigated for such forecasting tasks; rainfall time-series modeling with SARIMA likewise captures important temporal dimensions. In order to evaluate the outcomes of both models, statistical parameters were used to compare them: the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Coefficient of Correlation (CC) and BIAS. Two-thirds of the data were used for training and one-third for testing.
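    The four comparison statistics named in the abstract, computed with numpy over a hypothetical observed/predicted rainfall pair, together with the two-thirds/one-third split the abstract describes.

      # Model-comparison statistics over a hypothetical rainfall series.
      import numpy as np

      obs  = np.array([12.0, 0.0, 3.5, 20.1, 8.2, 0.0, 15.3])
      pred = np.array([10.5, 1.2, 2.9, 18.0, 9.1, 0.4, 14.0])

      rmse = np.sqrt(np.mean((pred - obs) ** 2))   # Root Mean Square Error
      mae  = np.mean(np.abs(pred - obs))           # Mean Absolute Error
      cc   = np.corrcoef(pred, obs)[0, 1]          # Coefficient of Correlation
      bias = np.mean(pred - obs)                   # BIAS (mean error)
      print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  CC={cc:.3f}  BIAS={bias:.2f}")

      # A 2/3 train, 1/3 test chronological split, as described:
      series = np.arange(30.0)
      cut = int(len(series) * 2 / 3)
      train, test = series[:cut], series[cut:]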
  • Reduced Universal Background Model for Speech Identification System based on improved Minimum Enclosing Ball Algorithms
    LACHACHI Nour-Eddine, Oran University, Algeria
    ABSTRACT
    As a powerful tool in machine learning, the Support Vector Machine (SVM) suffers from expensive computational cost in the training phase due to the large number of original training samples; we use the Minimal Enclosing Ball (MEB) to overcome this problem. This paper presents two improved approaches that use the fuzzy C-means clustering method with SVMs reduced to MEB problems. These approaches find the concentric balls with minimum volume of data description, containing most of the training samples, to reduce the chance of accepting abnormal data. Our method trains each decomposed sub-problem to get support vectors and retrains with the support vectors to find a global data description of the whole target class. Our study is experimented on speech data to eliminate noisy data and reduce training time. Numerical experiments on some real-world datasets verify the usefulness of our approaches for data mining.
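    The Minimal Enclosing Ball computation itself can be approximated very simply; the sketch below uses the Badoiu-Clarkson iteration on random stand-in feature vectors. The paper's fuzzy C-means decomposition and SVM retraining are not reproduced here.

      # Approximate MEB via the Badoiu-Clarkson iteration: repeatedly nudge
      # the centre toward the farthest point with a shrinking 1/(t+1) step;
      # after roughly 1/eps^2 steps the ball is a (1+eps)-approximation.
      import numpy as np

      def approx_meb(points, iterations=200):
          center = points[0].copy()
          for t in range(1, iterations + 1):
              dists = np.linalg.norm(points - center, axis=1)
              far = points[np.argmax(dists)]       # current worst-covered point
              center += (far - center) / (t + 1)   # shrinking step size
          radius = np.linalg.norm(points - center, axis=1).max()
          return center, radius

      rng = np.random.default_rng(1)
      pts = rng.normal(size=(500, 13))  # e.g. 13-dim MFCC-like speech vectors
      c, r = approx_meb(pts)
      print("radius:", round(float(r), 3))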
  • Taxonomic Classification of Plant Species using Support Vector Machine
    Manimekalai K1 and Vijaya MS2, 1PSGR Krishnammal College for Women, India and 2G.R. Govindarajulu School of Applied Computer Technology, India
    ABSTRACT
    Plant species are living things and are generally categorized in terms of Domain, Kingdom, Phylum, Class, Order, Family, Genus and Species in a hierarchical fashion. This paper formulates the taxonomic leaf categorization problem as a hierarchical classification task and provides a suitable solution using a supervised learning technique, namely the support vector machine. Features are extracted from scanned images of plant leaves and used to train SVMs. Only the class, order, family and species levels are considered for hierarchical classification in this research work. The trained models corresponding to the class Magnoliopsida, the orders Brassicales and Rosales, and the families Caricaceae, Brassicaceae, Rosaceae and Rhamnaceae have been used to develop a three-level hierarchical classification model for plant species, and the results are analyzed.
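    A scikit-learn sketch of the top-down cascade such a hierarchical classifier implies: one SVM predicts the order, then an order-specific SVM predicts the family (the species level would follow the same pattern). The features and label assignments are synthetic stand-ins.

      # Per-level SVM cascade for hierarchical leaf classification.
      # Requires: pip install scikit-learn numpy
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 10))               # leaf shape/texture features
      orders = rng.choice(["Brassicales", "Rosales"], size=120)
      families = np.where(orders == "Rosales",
                          rng.choice(["Rosaceae", "Rhamnaceae"], size=120),
                          rng.choice(["Caricaceae", "Brassicaceae"], size=120))

      order_clf = SVC(kernel="rbf").fit(X, orders)
      # One family-level SVM per order, trained only on that order's rows.
      family_clfs = {o: SVC(kernel="rbf").fit(X[orders == o], families[orders == o])
                     for o in np.unique(orders)}

      x = X[:1]                         # classify one leaf, top-down
      o = order_clf.predict(x)[0]
      f = family_clfs[o].predict(x)[0]
      print(f"order={o} -> family={f}")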
  • A Novel Approach For Generating Face Template using BDA
    Shraddha S. Shinde and Anagha P. Khedkar, MCERC, Nashik (M.S.), India
    ABSTRACT
    In identity management systems, the commonly used biometric recognition system needs attention to the issue of biometric template protection if a more reliable solution is to be achieved. In view of this, a biometric template protection algorithm should satisfy security, discriminability and cancelability. As no single template protection method is capable of satisfying these basic requirements, a novel technique for face template generation and protection is proposed, providing security and accuracy in new user enrollment as well as in the authentication process. The technique takes advantage of both a hybrid approach and the binary discriminant analysis (BDA) algorithm, and is designed on the basis of random projection, binary discriminant analysis and a fuzzy commitment scheme. Three publicly available benchmark face databases are used for evaluation. The proposed technique enhances discriminability and recognition accuracy by 80% in terms of the matching score of the face images, and provides high security.
  • Identity Management Frameworks for Cloud
    Roshni M. Bhandari1, Upendra R. Bhoi1 and Dhiren R. Patel2, 1Parul Institute of Technology, India and 2National Institute of Technology Surat, India
    ABSTRACT
    Cloud computing is a new computing paradigm that provides a set of scalable resources on demand. However, it is also a target of cyber attacks and creates risks for data privacy and protection. An Identity Management System (IDM) supports the management of multiple digital identities for authentication and authorization. This paper reviews various identity management frameworks that help make the cloud environment more secure.
  • Analysis of Music Concerts adopting the Mathematical Model of Hit Phenomena
    Yasuko KAWAHATA1, Etsuo GENDA1 and Akira ISHII2, 1Kyushu University, Japan and 2Tottori University, Japan
    ABSTRACT
    A mathematical model for the hit phenomenon in entertainment within a society is presented as a stochastic process of interactions of human dynamics. Calculations for the Japanese motion picture market based on the mathematical model agree very well with the actual residue distribution in time. The world's most popular rock band, Coldplay, is analyzed using SNS data; L'Arc-en-Ciel, a Japanese rock band, and LADY GAGA are also analyzed using SNS data. The results agree very well, so the mathematical theory of hit phenomena can be applied to the estimation of tickets sold for music concerts.
  • ICT based Telemedicine for the Egyptian Society
    Hafez A. Fouad and Haythem H. Abdullah, Microwave Department, Electronics Research Institute, Egypt
    ABSTRACT
    One of the most challenging problems facing Egyptian society is the lack of adequate health care in rural areas. This leads to more severe problems: patients from the different rural areas need to travel to the Egyptian capital, where the most experienced physicians are available. This places a burden not only on the patient's budget but also on the country's budget, since the concentration on the capital creates a severe traffic problem which threatens most economic sectors. Telemedicine is considered one of the most important solutions that could mitigate these accumulated problems of the lack of experienced physicians in Egyptian rural areas. The application of telemedicine encounters several challenges in Egypt: the lack of experience in dealing with telemedicine in these areas and the insufficient number of medical experts who could fill the gap. In this paper, a new ICT-based telemedicine system is proposed to serve Egyptian society. The portal has already been released, and snapshots are included.
  • QoS of Web Service: Survey on Performance and Scalability
    Ch Ram Mohan Reddy1, R. V Raghavendra Rao1, D Evangelin Geetha2, T V Suresh Kumar2 and K Rajani Kanth2, 1B M S College of Engineering, India and 2M S Ramaiah Institute of Technology, India
    ABSTRACT
    In today's scenario, most organizations provide their services through the web, which makes web services an important research area. In addition, early in the design and building of web services, it is necessary to concentrate on their quality. Performance is an important quality attribute to be considered during the design of web services; the expected performance can be achieved by proper scheduling of resources and by the scalability of the system. Scalability is a desirable attribute of a process, computer system or network, and poor scalability can result in poor system performance. Hence, in this paper, we review the literature available on the quality attributes of performance and scalability and identify the issues that affect these quality attributes in relation to web services.
  • Web-based Database Management to support Telemedicine
    Hafez Fouad, Electronics Research Institute, Egypt
    ABSTRACT
    The aim of the project is the transfer of medical care services to the patient, rather than the transport of the patient to the medical service providers. This is achieved by using web-based applications, including modern medical informatics services, which are easier, faster and less expensive. The system's efficiency comes from finding suitable informatics and electronics solutions for telemedicine care. An approach to managing multimedia medical databases in a telemedicine system is proposed. In order to manage, search, and display patient information more efficiently and effectively, we define a doctor and patient information package as a concise data set of a doctor's data and a patient's medical information from each visit. We also provide the methodology for accessing various types of patient medical records, and design two types of user interfaces, a high-quality data display and a web-based interface, for different medical service purposes.