
Accepted Papers

  • A Study on Image Quality Assessment Techniques
    Nalluri Sunny, V. Hima Deepthi, K. Srinivas, V. Srinivas Rao, V.R. Siddhartha Engineering College, India
    ABSTRACT
    Image quality assessment plays an important role in various image processing applications. It is generally performed with different types of techniques, many of which measure changes in structure, contrast and luminance between the original and distorted versions of an image. The Human Visual System (HVS) is the final judge of image quality, but it perceives only some image distortions and cannot identify all types. Various image quality assessment techniques have therefore been studied. In this paper we analyse existing techniques, identify their drawbacks, and present the results accordingly.
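    An illustrative sketch (not taken from the paper) of how luminance, contrast and structure differences can be combined into a single SSIM-style score, assuming two equally sized 8-bit grayscale NumPy arrays; real SSIM is computed over local windows rather than globally.

    import numpy as np

    def global_ssim(x, y, c1=6.5025, c2=58.5225):
        # Global (unwindowed) simplification of SSIM for 8-bit images:
        # compares mean luminance, contrast (variance) and structure (covariance).
        x, y = x.astype(np.float64), y.astype(np.float64)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        num = (2 * mx * my + c1) * (2 * cov + c2)
        den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
        return num / den

    # A score near 1 means the distorted image is close to the original.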
  • Survey On Big Data Challenges and Technologies
    P. Vidhya Bharani, S. Aswini, A. Rajeshwari, Vel Tech High Tech Dr.RR Dr.SR Engineering College, India
    ABSTRACT
    Big Data is a large volume of data from various sources such as the web, genomics, cameras, medical records, social media, aerial sensory technologies, and information-sensing mobile devices. It is an opportunity to find insights in new and emerging types of data and content, and to make a business more agile. Until now there has been no practical way to harvest this opportunity, and conventional software tools are not capable of handling Big Data. This paper reveals the challenges in Big Data and provides a comprehensive view of it.
  • Experiences With Using Thread Level Speculation in BlueGene/Q
    Arnamoy Bhattacharyya, University of Alberta, Canada
    ABSTRACT
    Thread Level Speculation (TLS) is a hardware/software technique that guarantees correct execution of loops even in the presence of dependences. Due to the overheads of TLS, only beneficial loops should be executed speculatively to avoid a slow-down. Data-dependence profiling is often used to find out whether the may-dependences reported by the compiler's static analysis actually materialize at runtime. This paper investigates whether a loop's dependence behaviour changes with the input to the program. A study of 57 different benchmarks reveals that loops' dependence behaviour does not change with respect to inputs. An automatic speculative parallelization framework, SpecEval, is developed and used to evaluate the TLS performance of the SPEC2006 and PolyBench/C benchmarks on IBM's BlueGene/Q (BG/Q) supercomputer. Experimental results show that factors such as the number and coverage of speculative loops and the mispeculation overhead due to function calls inside the speculative loop body affect TLS performance on BG/Q.
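    The SpecEval framework itself is not shown here; the following is only an assumed Python sketch of the data-dependence-profiling idea the abstract refers to: record which iteration last wrote each address and flag any access to that address from a later iteration as a loop-carried dependence.

    def profile_loop_dependences(trace):
        # trace: list of (iteration, 'R' or 'W', address) events collected from an
        # instrumented loop. Returns True if a loop-carried dependence is observed,
        # i.e. an address written in one iteration is accessed in a later iteration.
        last_write = {}            # address -> iteration of the last write
        carried = False
        for it, op, addr in trace:
            if addr in last_write and last_write[addr] < it:
                carried = True     # RAW or WAW dependence across iterations
            if op == 'W':
                last_write[addr] = it
        return carried

    # Example: a loop like a[i] = a[i-1] + 1 produces a cross-iteration access.
    trace = [(0, 'W', 100), (1, 'R', 100), (1, 'W', 104)]
    print(profile_loop_dependences(trace))   # True -> poor TLS candidate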
  • Performance Analysis of Nonlinear Decision Based Filter and Weighted Median ε-Filter for Salt and Pepper Noise Removal
    Rejina A.S, M. Selin, A. Neela Madheswari, KMEA Engineering College, India
    ABSTRACT
    Noise removal is the most important and challenging preprocessing step for almost all applications of image processing. This paper analyses the efficiency of two filtering techniques, the Nonlinear Decision Based Filter (NDBF) and the Weighted Median ε-Filter (WMεF), for the removal of salt and pepper noise from grayscale images. The performance of the two filters is tested on different grayscale images, and denoising performance is quantitatively measured by Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF). The experimental results and subjective analysis show that the Weighted Median ε-Filter removes the noise effectively even at noise levels as high as 50%, has better subjective quality than the Nonlinear Decision Based Filter, and gives the best result for removal of salt and pepper noise.
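    A minimal sketch, assuming 8-bit grayscale NumPy arrays, of the quantitative measures used in the comparison (MSE, PSNR and IEF); the NDBF and WMεF filters themselves are not reproduced here.

    import numpy as np

    def mse(a, b):
        # Mean squared error between two equally sized images.
        return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    def psnr(original, denoised, peak=255.0):
        # Peak Signal-to-Noise Ratio in dB for 8-bit images.
        return 10.0 * np.log10(peak ** 2 / mse(original, denoised))

    def ief(original, noisy, denoised):
        # Image Enhancement Factor: error before filtering / error after filtering.
        return mse(original, noisy) / mse(original, denoised)

    # For a plain (non-decision-based) baseline, one could use, for example,
    # denoised = scipy.ndimage.median_filter(noisy, size=3).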
  • A New CAPTCHA Authentication Mechanism Based on Eight-Panel Cartoon CAPTCHA and Clickspell
    Chinnu R, Maria Joy, KMEA Engineering College, India
    ABSTRACT
    CAPTCHA is a technique used to prevent automated programs from acquiring free e-mail or online service accounts. However, as many researchers have already reported, conventional CAPTCHAs can be overcome by state-of-the-art malware, since the capabilities of computers are approaching those of humans. CAPTCHAs should therefore be based on even more advanced human cognitive-processing abilities. In this paper, a two-level CAPTCHA authentication is proposed. The first level is a Cartoon CAPTCHA, which asks users to arrange an image into the correct order by drag and drop. The second level, Clickspell, combines the features of text-based and image-based CAPTCHAs: it asks users to spell a randomly chosen word by clicking distorted letters, and users can learn the definition(s) of the chosen word. A sound signature is added for right and wrong clicks, and a background image placed under the letters adds further security. In addition, Clickspell can optionally add an advertisement image, which improves its resistance to attack by malicious programs. The analysis shows that Clickspell is practical in terms of security and usability.
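    A hypothetical sketch of the server-side check a Clickspell-style test might perform (the letter layout, click tolerance and word list are assumptions, not taken from the paper): the server records where each distorted letter was rendered and accepts the response only if the clicks spell the chosen word in order.

    import random

    def make_challenge(words, letter_positions):
        # Pick a word; letter_positions maps each rendered letter to the (x, y)
        # centre where it was drawn on the CAPTCHA image (distinct letters assumed).
        return random.choice(words), letter_positions

    def verify(word, letter_positions, clicks, tolerance=15):
        # clicks: list of (x, y) points in the order the user clicked.
        # Succeeds only if each click falls near the next letter of the word.
        if len(clicks) != len(word):
            return False
        for letter, (cx, cy) in zip(word, clicks):
            lx, ly = letter_positions[letter]
            if abs(cx - lx) > tolerance or abs(cy - ly) > tolerance:
                return False
        return True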
  • Efficiently Exploring Clusters using Genetic Algorithm and Fuzzy Rules
    Dinesh P. Pitambare, Pravin M. Kamde, Sonoma County Office of Education, India
    ABSTRACT
    A cluster is a group of similar items, and the unsupervised classification of patterns into clusters is known as clustering. It is useful for knowledge discovery in data and is able to deal with different data types. Fuzzy rules are used to represent the data in an interpretable way, so the user obtains highly interpretable descriptions of the discovered clusters. To generate accurate fuzzy rules, a triangular membership function is used.

    This paper proposes to automatically and efficiently explore the number of clusters in a given numeric dataset. A genetic algorithm is used to discover the clusters efficiently: it generates fuzzy rules, and the best rules are selected according to the maximum fitness value. The proposed work is carried out on benchmark numeric datasets to validate the capability of the proposed system.
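    A minimal sketch of the triangular membership function mentioned above, together with a fitness-based selection step of the kind a genetic algorithm uses; the actual rule encoding and genetic operators of the proposed system are not reproduced here.

    def triangular(x, a, b, c):
        # Triangular membership: rises from a to the peak at b, falls back to zero at c.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def select_best(population, fitness):
        # Keep the candidate fuzzy rule set with the highest fitness value.
        return max(population, key=fitness)

    # Toy usage: each individual is (a, b, c) for one triangular fuzzy set,
    # scored by how well it covers a small 1-D sample (illustrative only).
    data = [1.0, 1.2, 2.5, 2.7, 5.0]
    pop = [(0.0, 1.0, 2.0), (1.0, 2.5, 4.0), (3.0, 5.0, 7.0)]
    fit = lambda abc: sum(triangular(v, *abc) for v in data)
    print(select_best(pop, fit))   # (1.0, 2.5, 4.0) covers the sample best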
Copyright (c) www.airccse.org