Accepted Papers


  • Multimodal Biometrics Feature Extraction using PCA and Bifurcation

    Chintan Patel, C. Malathy, SRM University, India.
    Abstract

    Biometric systems based on a single source of information suffer from limitations such as the lack of uniqueness and non-universality of the chosen biometric trait, noisy data, and spoof attacks. Multimodal biometric systems fuse features extracted from multiple biometric traits, and an optimal combination of this information can alleviate some of the limitations of unimodal systems. Consequently, multimodal biometric systems achieve better performance than unimodal ones and are being increasingly adopted in a number of applications. The major purpose of developing a multimodal biometric system is to improve security by using more than one biometric trait together with optimal feature extraction and fusion techniques. The goal of this paper is to build a multimodal biometric system using the ear and the fingerprint as biometric traits. Ear features are extracted with PCA (Principal Component Analysis), and fingerprint features are extracted by counting minutiae and bifurcations. Feature-level fusion combines the two, and matching decides access to the system. The result is a distinctive security system that offers a promising path toward more secure systems in advanced computing.
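
    As a rough illustration (not the authors' code), the sketch below shows PCA-based feature extraction in Python, assuming ear images are already cropped and flattened into equal-length vectors; the function name and the choice of 20 components are illustrative.

      import numpy as np

      def pca_features(images, n_components=20):
          """images: (n_samples, n_pixels) array of flattened ear images."""
          mean = images.mean(axis=0)
          centered = images - mean
          # SVD of the centered data matrix yields the principal directions.
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          basis = vt[:n_components]               # top principal components
          return centered @ basis.T, mean, basis  # projected feature vectors

      # A probe ear image is projected onto the same basis before matching:
      # probe_features = (probe.ravel() - mean) @ basis.T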

  • Digital Video Broadcast (DVB) Standards: Comparison based on Technical Characteristics

    S V Arun Kumar and B S Harish, S J College of Engineering, India.
    Abstract

    Digital Video Broadcast (DVB) has become a synonym for digital television and data broadcasting worldwide. There are three basic DVB standards: Digital Video Broadcast-Satellite (DVB-S), Digital Video Broadcast-Terrestrial (DVB-T) and Digital Video Broadcast-Cable (DVB-C). DVB developed second-generation standards (DVB-S2, DVB-T2, DVB-C2) to meet user requirements such as HDTV and VoD. In this paper, the first- and second-generation DVB standards are compared and contrasted based on their technical characteristics.

  • Evaluation of the SHAPD2 Algorithm Efficiency in Plagiarism Detection Task using PAN Plagiarism Corpus

    Dariusz Ceglarek, Department of Applied Informatics, Poznan School of Banking, Poland.
    Abstract

    This work presents results of ongoing novel research in the area of natural language processing, focusing on plagiarism detection, semantic networks and semantic compression. The results demonstrate that semantic compression is a valuable addition to the existing methods used in plagiarism detection. The application of semantic compression boosts the efficiency of the Sentence Hashing Algorithm for Plagiarism Detection 2 (SHAPD2) and of the authors' implementation of the w-shingling algorithm. Experiments were performed on the Clough & Stevenson corpus as well as the publicly available PAN-PC-10 plagiarism corpus used to evaluate plagiarism detection methods, so the results can be compared with those of other research teams.
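
    For context, the sketch below shows a plain w-shingling resemblance measure in Python; the naive whitespace tokenizer and the normalize hook (where semantic compression would replace words with more general concepts) are assumptions of this illustration, not the paper's implementation.

      def shingles(text, w=4, normalize=lambda tok: tok.lower()):
          """Return the set of contiguous w-token shingles of a document."""
          tokens = [normalize(t) for t in text.split()]
          return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

      def resemblance(doc_a, doc_b, w=4):
          """Jaccard similarity of the two documents' shingle sets."""
          a, b = shingles(doc_a, w), shingles(doc_b, w)
          return len(a & b) / len(a | b) if a | b else 0.0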

  • A Regularized Robust Super-resolution approach for Aliased Images and Low Resolution Videos

    Pankaj Kumar Gautam and M. A. Zaveri, NIT Surat, India.
    Abstract

    This paper presents a hybrid approach to image and video super-resolution. We propose the approach for enhancing the resolution of images and of low-resolution, undersampled videos. We exploit the shift- and motion-based robust super-resolution (SR) algorithm of [1] and the diffusion image regularization method proposed in [2] to obtain an alias-free, jerk-free, smooth SR image. We present a framework for obtaining super-resolution video that is robust even in the presence of fast-changing video frames. We compare our hybrid framework's simulation results with different resolution enhancement techniques reported in the literature, i.e., robust super-resolution, IBP and interpolation methods. The approach shows good results in terms of different quality parameters.
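
    As background for one of the compared baselines, the sketch below illustrates plain iterative back-projection (IBP) in Python; it is not the authors' hybrid pipeline, and the degradation model is assumed to be simple bicubic downsampling via scipy.

      import numpy as np
      from scipy.ndimage import zoom

      def ibp_superresolve(lr, scale=2, iters=20, step=1.0):
          """lr: 2-D low-resolution image as a float array."""
          hr = zoom(lr, scale, order=3)                   # initial bicubic guess
          for _ in range(iters):
              simulated = zoom(hr, 1.0 / scale, order=3)  # forward model
              error = lr - simulated                      # LR-domain residual
              hr += step * zoom(error, scale, order=3)    # back-project error
          return hr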

  • Semantic Network Based Mechanisms for Knowledge Acquisition

    Dariusz Ceglarek, Department of Applied Informatics, Poznan School of Banking, Poland.
    Abstract

    This article summarizes research work started with SeiPro2S (Semantically Enhanced Intellectual Property Protection System), a system designed to protect resources from unauthorized use of intellectual property. The system implements a semantic network as its knowledge representation structure along with the new idea of semantic compression. As the author has shown that semantic compression is a viable concept for English, he decided to focus on its potential applications. An algorithm is presented that employs the WiSENet semantic network for knowledge acquisition, with flexible rules that yield high-precision results. The algorithm is implemented as a finite state automaton with advanced methods for triggering desired actions. A detailed discussion is given, with a description of the devised algorithm, usage examples and results of experiments.
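
    To illustrate the general idea (not the paper's WiSENet-backed implementation), the sketch below shows a tiny finite state automaton in Python that fires an action when a multi-token rule pattern is matched; the hyponymy pattern and the action are invented for illustration.

      def make_fsa(pattern, action):
          """Build an FSA that fires `action` whenever `pattern` is matched."""
          def run(tokens):
              state = 0
              for i, tok in enumerate(tokens):
                  if tok == pattern[state]:
                      state += 1
                  else:
                      state = 1 if tok == pattern[0] else 0
                  if state == len(pattern):
                      action(i - state + 1)   # position where the match began
                      state = 0
          return run

      fsa = make_fsa(["is", "a", "kind", "of"],
                     lambda pos: print("hyponymy rule fired at token", pos))
      fsa("a sparrow is a kind of bird".split())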

  • Effective FCM in Complex Database

    S. Ramathilagam, R. Kannan and R. Devi, Periyar Govt. College, India, and Pondicherry Central University, India.
    Abstract

    The Euclidean distance based standard fuzzy c-means fails to cluster more complicated or general-shaped datasets and data containing heavy noise. In order to strengthen fuzzy c-means for clustering more general-shaped datasets, this paper introduces an effective kernel-based fuzzy c-means in place of the ordinary distance-based fuzzy c-means, incorporating a hyper-tangent induced distance function, entropy terms and a neighborhood term. Further, this paper introduces a method for initializing cluster centers instead of selecting them at random, which reduces computation time. The effectiveness of the proposed method is demonstrated by comparing its results with standard FCM in experiments on a 2-dimensional artificial image. The validity of the clustering results is examined using the Silhouette accuracy index.
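
    The sketch below illustrates the core kernel FCM update with a hyper-tangent kernel in Python, omitting the paper's entropy and neighborhood terms and its center-initialization scheme; sigma, m and the update form follow the common kernel-FCM formulation and are assumptions of this sketch.

      import numpy as np

      def hypertangent_kernel(x, v, sigma=1.0):
          """K(x, v) = 1 - tanh(||x - v||^2 / sigma^2), computed pairwise."""
          d2 = np.sum((x[:, None, :] - v[None, :, :]) ** 2, axis=2)
          return 1.0 - np.tanh(d2 / sigma ** 2)

      def kernel_fcm(data, c=2, m=2.0, sigma=1.0, iters=50):
          rng = np.random.default_rng(0)
          centers = data[rng.choice(len(data), c, replace=False)]
          for _ in range(iters):
              k = hypertangent_kernel(data, centers, sigma)
              dist = np.maximum(1.0 - k, 1e-12)    # kernel-induced distance
              u = dist ** (-1.0 / (m - 1))
              u /= u.sum(axis=1, keepdims=True)    # fuzzy membership matrix
              w = (u ** m) * k                     # kernel-weighted responsibilities
              centers = (w.T @ data) / w.sum(axis=0)[:, None]
          return u, centers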

  • Classification of Handwritten Gujarati Numerals

    Archana N. Vyas and Mukesh M. Goswami, Dharmsinh Desai University, India.
    Abstract

    This paper addresses the problem of recognizing handwritten numerals for the Gujarati language. Three methods are presented for feature extraction: one belongs to the spatial domain and the other two belong to the transform domain. In the first technique, a new spatial-domain method is proposed based on the Freeman chain code; it obtains the global direction by considering an n x n neighborhood and thus eliminates the noise that arises from local directions. In the second method, 85-dimensional Fourier descriptors are computed and treated as the feature vector, and in the third, DCT coefficients are used as the feature vector. These methods are tested with three different classifiers, namely KNN, SVM and ANN with back-propagation, after various preprocessing steps. Experimental results were evaluated using 10-fold cross validation. The average recognition rates for the full dataset with the modified chain code are 85.67%, 83.63% and 84.89% with KNN, SVM and ANN respectively. The overall recognition rates with the DFT are 93.60%, 92.43% and 92.86% with KNN, SVM and ANN respectively, whereas the DCT coefficients provide average recognition rates of 91.03%, 93.00% and 92.07% with the KNN, SVM and ANN classifiers respectively.
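
    As a rough illustration of the spatial-domain feature, the sketch below computes an 8-direction Freeman chain code and a normalized direction histogram in Python; the paper's n x n global-direction smoothing is not reproduced, and the histogram feature is an assumption of this sketch.

      # Direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
      MOVES = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
               (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

      def chain_code(boundary):
          """boundary: ordered list of (row, col) contour points of a numeral."""
          return [MOVES[(r1 - r0, c1 - c0)]
                  for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

      def direction_histogram(codes):
          """Normalized 8-bin histogram of chain-code directions."""
          hist = [0.0] * 8
          for code in codes:
              hist[code] += 1
          return [h / len(codes) for h in hist]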

  • A Clustering Algorithm to Find Social Network Users’ Haunts Based on DBSCAN

    Xinchao Jiang, Qun Wu and SuoJu He, Beijing University of Posts and Telecommunications, China.
    Abstract

    In order to offer people better individualized service, we use the passive logs generated by a mobile phone application, which contain users' real-time location information, to discover features of each user. By combining a clustering algorithm, DBSCAN, with the records' timestamps, we can find out where a user often visits. DBSCAN is a density-based clustering algorithm: it groups nearby records into clusters, that is, it finds the users' haunts. If we combine those clusters with when the records were made, we can find out what users did in each cluster. The experimental results agree with common sense and meet our expectations.
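
    As an illustration (not the authors' system), the sketch below clusters a handful of invented latitude/longitude records with scikit-learn's DBSCAN; the eps of roughly 300 m and min_samples of 3 are assumed values.

      import numpy as np
      from sklearn.cluster import DBSCAN

      records = np.radians([[39.962, 116.358],   # e.g. repeated campus check-ins
                            [39.963, 116.359],
                            [39.962, 116.357],
                            [39.915, 116.404]])  # a one-off visit elsewhere

      # eps of ~300 m on Earth's surface (radius ~6371 km), >= 3 records per haunt
      db = DBSCAN(eps=0.3 / 6371.0, min_samples=3, metric="haversine").fit(records)
      print(db.labels_)   # label -1 marks noise; other labels are haunts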