Agile Methods and Quality: A Survey BODJE N'KAUH NATHAN-REGIS1 and G.M. Nasira2, 1Christ University, India and 2Govt. Arts College (Autonomous), India
ABSTRACT
Agile software processes, such as Extreme Programming (XP), Scrum, and Lean, rely on best practices that are considered to improve software development quality. It can be said that these best practices aim to build software quality assurance (SQA) into the project at hand. Some researchers of agile methods claim that, because of the very nature of such methods, quality in agile software projects should be a natural outcome of the applied method. As a consequence, quality is expected to be more or less embedded in agile software processes. Many reports support and evangelize the advantages of agile methods with respect to quality assurance. Is this really so? An ambitious goal of this paper is to present work done to understand how quality is, or should be, handled. Like all survey papers, this paper attempts to summarize and organize research results in the field of software engineering, specifically on the topic of agile methods as related to software quality.
Software Architecture at a Glance: To Make a Long Story Short BODJE N'KAUH NATHAN-REGIS1 and G.M. Nasira2, 1Christ University, India and 2Govt. Arts College (Autonomous), India
ABSTRACT
Today, with the rapid growth of technology, the complexity of software systems is increasing. The success of a software system depends on a good architectural design: the design and specification of the overall system structure. Software architecture is becoming a more significant issue than the choice of algorithms and data structures for computation. The need for architecture was noted by Garlan: "as the size and complexity of software systems increases, the design problem goes beyond the algorithms and data structures of the computation: designing and specifying the overall system structure emerges as a new kind of problem... This is the software architecture level of design." (Garlan, 1992).
As Gardazi (2009) revealed at the Emerging Technology Conference, engineers often lack technical knowledge about software architecture, and companies underestimate the importance of the software architect; software architecture therefore needs to regain active interest.
Despite the volume of research since the first papers in the field were published (Dijkstra, Spooner, Perry and Wolf, and Shaw's 1989 paper), it is surprising that the software development community has failed to agree on exactly what is meant by "software architecture".
Practitioners and academics hold somewhat different views of what software architecture really is. Figure 1 shows the basic concepts that help visualize architecture-related topics, taken from the book Software Architecture: A Comprehensive Framework and Guide for Practitioners.
This paper is a literature review that proposes a way of defining the architecture through five meanings and subareas that merge the ideas from these two branches of the software community, allowing the discipline of software architecture to have a single, agreed-upon definition.
User Authentication Mechanism Based on Persuasive Cued Click Points with Sound Signature Jisha Anna Alex, Sheena Anees and A. Neela Madheswari, KMEA Engineering College, India
ABSTRACT
Various graphical password schemes have been proposed as alternatives to text-based passwords. Research has shown that text-based passwords are fraught with both usability and security problems that make them a less than desirable solution. Psychology studies have revealed that the human brain is better at recognizing and recalling images than text. Graphical passwords are intended to capitalize on this human characteristic, in the hope that by reducing the memory burden on users, coupled with the larger full password space offered by images, more secure passwords can be produced and users will not resort to unsafe practices in order to cope. Cued Click Points (CCP) is a click-based, cued-recall graphical password technique in which users click on one point per image for a sequence of images. The presence of hotspots remains an issue in CCP. We propose a new click-based graphical password scheme for authentication called Persuasive Cued Click Points (PCCP) with sound signature. A password consists of one click-point per image for a sequence of images. The next image displayed is based on the previous click-point, so users receive immediate implicit feedback as to whether they are on the correct path when logging in. To provide greater security, the concept of a viewport is introduced: the viewport is positioned randomly, rather than at fixed locations, in order to avoid known hotspots. PCCP offers both improved usability and security. A graphical password system with a supportive sound signature increases the memorability of the password. Here, the user is asked to select a sound signature corresponding to each click-point; this sound signature will be used to help the user recall the click-point on an image.
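As a rough illustration of the scheme described above, the sketch below shows how a PCCP-style system might randomize the viewport during password creation, verify a login click within a tolerance square, and derive the next image deterministically from the accepted click-point. The function names and the tolerance and viewport parameters are hypothetical choices for this sketch, not details of the authors' implementation.

```python
import hashlib
import random

TOLERANCE = 9   # half-width (pixels) of the square accepted around a click-point (assumed)
VIEWPORT = 75   # side length (pixels) of the persuasive viewport (assumed)

def random_viewport(img_w, img_h):
    """During password creation, place the viewport at random so the
    user's click-point cannot fall on a predictable hotspot."""
    x = random.randint(0, img_w - VIEWPORT)
    y = random.randint(0, img_h - VIEWPORT)
    return x, y, x + VIEWPORT, y + VIEWPORT

def within_tolerance(click, stored):
    """Accept a login click if it lands inside the tolerance square of the stored point."""
    (cx, cy), (sx, sy) = click, stored
    return abs(cx - sx) <= TOLERANCE and abs(cy - sy) <= TOLERANCE

def next_image_index(click, n_images):
    """Derive the next image deterministically from the accepted click-point,
    giving the implicit feedback the abstract describes: a wrong click
    leads to an unfamiliar next image."""
    # Discretize to the tolerance grid so nearby clicks map to the same image.
    gx = click[0] // (2 * TOLERANCE + 1)
    gy = click[1] // (2 * TOLERANCE + 1)
    digest = hashlib.sha256(f"{gx},{gy}".encode()).hexdigest()
    return int(digest, 16) % n_images
```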
PhD: The Human Optimization Satish Gajawada, Indian Institute of Technology Roorkee, India
ABSTRACT
This paper is dedicated to everyone who is interested in Artificial Intelligence. John Henry Holland proposed the Genetic Algorithm in the early 1970s. Ant Colony Optimization was proposed by Marco Dorigo in 1992. Particle Swarm Optimization was introduced by Kennedy and Eberhart in 1995. Storn and Price introduced Differential Evolution in 1996. K.M. Passino introduced the Bacterial Foraging Optimization Algorithm in 2002. In 2003, X.L. Li proposed the Artificial Fish Swarm Algorithm. The Artificial Bee Colony algorithm was introduced by Karaboga in 2005. In the past, researchers have explored the behavior of chromosomes, birds, fishes, ants, bacteria, bees and so on to create excellent optimization methods for solving complex optimization problems. In this paper, Satish Gajawada proposes The Human Optimization. Humans have progressed remarkably; they help each other, and there are many strengths in human behavior. In fact, all optimization algorithms based on other beings were created by humans. There is much to explore in human behavior for creating powerful optimization algorithms. Artificial fishes, birds, ants, bees, etc. have solved optimization problems; similarly, an optimization method based on humans is expected to solve complex problems. This paper sets the trend for all future optimization algorithms based on humans.
Handling Uncertainty and Clustering in Uncertain Data Based on the KL Divergence Technique Reshma M.R. and Suchismita Sahoo, KMEA Engineering College, India
ABSTRACT
Data uncertainty is an inherent property in various applications, due to reasons such as outdated sources or imprecise measurement. Data mining problems are significantly influenced by the uncertainty in the underlying data. Clustering is one of the most comprehensively studied problems in the uncertain data mining literature. Techniques for clustering uncertain data have been designed by extending traditional partitioning clustering methods like k-means and density-based clustering methods like DBSCAN to uncertain data; these rely on geometric distances between objects. Such methods cannot handle uncertain objects that are geometrically indistinguishable. In the proposed system, probability distributions, which are essential characteristics of uncertain objects, are considered in measuring similarity between uncertain objects. The well-known Kullback-Leibler (KL) divergence is used to measure similarity between uncertain objects and is integrated into partitioning and density-based clustering methods to cluster uncertain objects. The proposed system aims to handle the tolerance/uncertainty of uncertain data, so the fuzzy c-means (FCM) clustering algorithm is used for tolerance/uncertainty, and the validity of the clusters obtained after modelling uncertain objects using the KL divergence is checked.
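As a minimal sketch of the similarity measure described above, the snippet below computes the KL divergence between two uncertain objects modelled as discrete probability distributions over a shared domain, and symmetrizes it so it can act as a distance-like quantity inside a partitioning method. The symmetrization and the smoothing constant are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(P || Q) for two discrete distributions on the same sample points.
    A small epsilon guards against zero probabilities (assumed smoothing)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def symmetric_kl(p, q):
    """KL divergence is asymmetric; averaging both directions is a common
    trick so it behaves more like a distance for clustering."""
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))

# Two uncertain objects described by histograms over the same 4 bins:
obj_a = [0.1, 0.4, 0.4, 0.1]
obj_b = [0.3, 0.2, 0.2, 0.3]
print(symmetric_kl(obj_a, obj_b))
```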
Architecting the Network for the Cloud Using Security Guidelines K. Krishna Chaitanya, MVGR College of Engineering, India
ABSTRACT
Cloud computing is an emerging technology that offers reduced capital expenditure, complexity, operational risk, and maintenance, together with increased scalability, while providing services at different abstraction levels, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). A new approach called cloud networking adds networking functions and resources to cloud computing and provides the means for dynamic and flexible placement of virtual resources across provider borders. It also allows various kinds of functionality and optimization techniques, e.g., reducing network load. At the same time, this approach introduces new security challenges. Our paper presents a new security architecture that enables the user of cloud networking to specify security requirements and apply them in the cloud networking infrastructure.
Smart Test Case Quantifier Using the MC/DC Coverage Criterion S. Shanmuga Priya and P. D. Sheba, J.J. College of Engineering and Technology, India
ABSTRACT
Software testing, an important phase in the Software Development Life Cycle (SDLC), is a time-consuming process. Reports indicate that nearly 40 to 50% of software development time is spent on testing. Manual testing is labor-intensive and error-prone, so there is a need for automated testing techniques. Automation brings down the time and cost involved in testing. When testing software, there is often a massive number of possible test cases even for quite simple systems. Running each and every feasible test case is certainly not an option, so designing test cases becomes a significant part of the testing process. NASA proposed the Modified Condition/Decision Coverage (MC/DC) testing criterion in 1994; it is a white-box testing criterion. The objective of this paper is to automate the generation of the minimum number of test cases required to test a system with maximum coverage, by removing redundant test cases using the MC/DC criterion. The work also presents a tool, the Smart Test Case Generator Tool (STCGT), that automatically generates the minimum number of test cases required to test the source code. This gives beginners on the testing team an idea of test-case execution and thereby aids in delivering a quality product on time.
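To make the MC/DC idea concrete, the sketch below enumerates the truth table of a decision and, for each condition, finds the test-case pairs in which flipping only that condition flips the decision outcome; a test set achieves MC/DC once it covers at least one such independence pair per condition, and a minimal set can then be picked by choosing one pair per condition and deduplicating rows. The function name and the example decision are illustrative assumptions, not the STCGT algorithm itself.

```python
from itertools import product

def mcdc_pairs(decision, n):
    """For each of the n conditions, list the pairs of condition vectors that
    differ only in that condition AND produce different decision outcomes."""
    rows = list(product([False, True], repeat=n))
    out = {r: decision(*r) for r in rows}
    pairs = {}
    for i in range(n):
        pairs[i] = [
            (a, b) for a in rows for b in rows
            if a[i] and not b[i]                            # condition i flips True -> False
            and all(a[j] == b[j] for j in range(n) if j != i)
            and out[a] != out[b]                            # and the decision flips too
        ]
    return pairs

# Example decision with three conditions: A and (B or C)
pairs = mcdc_pairs(lambda a, b, c: a and (b or c), 3)
for cond, plist in pairs.items():
    print(f"condition {cond}: {len(plist)} independence pair(s)")
```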
A Review on Segmentation of Medical Images Using Active Contours Optimized with Stochastic Gradient Descent Parul Saxena, Oriental College of Technology, India
ABSTRACT
This paper is a review of image segmentation, the goal of which is to partition an image into non-overlapping regions based on intensity or textural information. The active contour is one of the most successful variational models in image segmentation; it consists of evolving a contour in an image toward the boundaries of objects. After reviewing the literature, we propose an approach to image segmentation that uses level sets and is then optimized using stochastic gradient descent. Specifically, this paper proposes a modification of the Stochastic Gradient Descent (SGD) method, called Modified SGD. Stochastic gradient descent methods are often used to solve the optimization problem since they are very easy to implement. Before starting image segmentation using level sets, noise removal techniques are also applied to the input images provided by the users. The proposed methods are very simple modifications of the basic methods and are directly compatible with any type of level set implementation. In this survey, some techniques have been reviewed that help in further research.
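For reference, the sketch below shows the basic SGD update that the review builds on: at each step the parameters move a small distance against a noisy gradient estimate computed from a random mini-batch. The quadratic toy loss and the decaying step size are assumptions made for illustration; they stand in for the level-set energy that a segmentation method would actually minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem standing in for a level-set energy: fit w to noisy targets y = X @ w_true.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(1000, 2))
y = X @ w_true + 0.1 * rng.normal(size=1000)

def sgd(X, y, lr0=0.1, epochs=5, batch=32):
    """Plain SGD: each step uses the gradient of the loss on one mini-batch,
    with a slowly decaying learning rate."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in range(0, len(X), batch):
            xb, yb = X[i:i + batch], y[i:i + batch]
            grad = 2 * xb.T @ (xb @ w - yb) / len(xb)   # gradient of mean squared error
            t += 1
            w -= (lr0 / (1 + 0.01 * t)) * grad
    return w

print(sgd(X, y))   # should approach w_true = [2, -1]
```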
Improved Object Tracking Using a Mixture Particle Filter for Maintaining Multi-Modality Lubina P. Ibrahim, Smitha T.C. and A. Neela Madheswari, KMEA Engineering College, India
ABSTRACT
Object tracking is an important research area in computer vision. It is the process of establishing correspondences of the object of interest between successive frames. While various particle filters and conventional Markov chain Monte Carlo (MCMC) methods have been proposed for visual tracking, these methods often suffer from the well-known local-trap problem and are unable to track multiple objects at a time. The problem of tracking multiple non-rigid objects has two major difficulties. First, the observation models and target distributions can be highly non-linear and non-Gaussian. Second, the presence of a large, varying number of objects creates complex interactions with overlap and ambiguities. To surmount these difficulties, we introduce a vision system that is capable of learning, detecting and tracking the objects of interest. In our approach, we extend particle filters to the mixture particle filter (MPF) for multi-target tracking. The MPF is ideally suited to multi-target tracking, as it assigns a mixture component to each object. An AdaBoost cascade classifier is used to learn the models of the objects, and these detections are used to guide the MPF. The proposal distribution is a probabilistic mixture model that incorporates information from the boosted cascade classifier and the dynamic models of the individual objects. Using this proposal distribution, we can detect the object of interest against a dynamically changing background. The proposed method is effective and efficient in addressing multiple-object tracking. We compare our approach with the stochastic approximation Monte Carlo (SAMC) tracking method, and extensive experimental results are presented to demonstrate the effectiveness and efficiency of the proposed method.
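The sketch below illustrates one mixture-particle-filter update in the sense described above: each tracked object owns a component of particles, each component is predicted, weighted by its own likelihood, and resampled independently, and the mixture weights are renormalized from the component evidence. The 1-D random-walk motion model and Gaussian likelihood are placeholder assumptions standing in for the AdaBoost-guided proposal of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mpf_step(components, weights, observations, sigma=1.0):
    """One mixture particle filter update: per-component predict, weight, resample.
    components:   list of (N,) particle arrays, one array per tracked object.
    weights:      mixture weights, one scalar per object.
    observations: one scalar measurement per object (placeholder observation model)."""
    new_components, evidence = [], []
    for particles, z in zip(components, observations):
        particles = particles + rng.normal(0.0, 0.5, size=particles.shape)  # predict (random walk)
        lik = np.exp(-0.5 * ((particles - z) / sigma) ** 2)                 # Gaussian likelihood
        evidence.append(lik.mean())                                         # component evidence
        lik /= lik.sum()
        idx = rng.choice(len(particles), size=len(particles), p=lik)        # resample within component
        new_components.append(particles[idx])
    weights = np.asarray(weights) * np.asarray(evidence)
    return new_components, weights / weights.sum()

# Two objects, 200 particles each, observed near 0 and 5:
comps = [rng.normal(0, 1, 200), rng.normal(5, 1, 200)]
comps, w = mpf_step(comps, [0.5, 0.5], observations=[0.2, 5.1])
print(w, [c.mean() for c in comps])
```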
Fuzzy C-Means and Classification-Regression Based Super-Resolution Neena Susan Varghese, Suchismita Sahoo and A. Neela Madheswari, KMEA Engineering College, India
ABSTRACT
Image super-resolution (SR) reconstruction is the process of generating an image at a higher spatial resolution from one or more low-resolution (LR) inputs of a scene. By super-resolving an LR image, more robust performance can be achieved in many applications such as computer vision, medical imaging, video surveillance, and entertainment. The neighbour-embedding-based (NE) algorithm is an effective method for image SR, in which histograms of oriented gradients (HoG) features of image patches are extracted in raster-scan order from an upscaled version of the LR input obtained by bicubic (BI) interpolation. K-means clustering is then performed on the HoG features to partition the training data set into a set of subsets, and a sparse neighbor selection (SpNS) algorithm searches for neighbors by incorporating the Robust-SL0 algorithm and the k-NN criterion. We develop a sparse neighbor selection algorithm that searches for neighbors by incorporating the Robust-SL0 algorithm and a classification-regression-based super-resolution criterion. We also partition the whole training data set into subsets by clustering the HoG features, using fuzzy c-means clustering instead of k-means for more effective clustering. Sparse neighbor embedding (SpNE) is applied to synthesize the HR image patch for each LR input patch, with neighbor search and weight estimation conducted simultaneously. After all the HR patches have been constructed, total-variation-based (TV) deblurring and the iterative back-projection (IBP) algorithm are performed in sequence to obtain the final HR output.
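Since the key substitution in this work is fuzzy c-means in place of k-means, the sketch below shows the standard FCM iteration on a feature matrix such as the HoG vectors: alternate between soft membership updates and membership-weighted centroid updates until the memberships stabilize. The fuzzifier m = 2 and the convergence tolerance are conventional defaults assumed for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Standard FCM: soft memberships U (n x c) and centroids V (c x d)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # rows are soft assignments summing to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)           # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V

# Cluster 300 toy "feature vectors" into 3 soft subsets:
X = np.vstack([np.random.default_rng(i).normal(i * 4, 1, (100, 2)) for i in range(3)])
U, V = fuzzy_c_means(X, c=3)
print(V)   # three centroids near (0,0), (4,4), (8,8)
```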
Reducing Massive Data Sets Using Improved Core-Set-Based Algorithms LACHACHI Nour-Eddine, Oran University, Algeria
ABSTRACT
The Minimal Enclosing Ball (MEB) is a spherically shaped boundary around a normal data set, used to separate this set from abnormal data. MEB has a limitation when dealing with large data sets: the computational load increases drastically as the training data size grows. To handle this problem for the huge data sets used in different domains, we propose two improved approaches using the k-NN clustering method. They produce reduced data optimally matched to the input demands of systems with different backgrounds, such as UBM architectures in language recognition and identification systems. In this paper, we improve a technique for reducing massive data sets using core-sets. To this end, the training data, learned by Support Vector Machines (SVMs), are partitioned among several data sources, and the computation of such SVMs is achieved by finding a core-set for the image of the data. Experiments on speech data eliminated all noisy data.
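As background for the core-set idea used above, the sketch below implements the classic Badoiu-Clarkson construction of a (1+eps)-approximate MEB: repeatedly move the center toward the current farthest point with a shrinking step, and keep those farthest points as the core-set. This is one standard core-set algorithm chosen for illustration; the paper's improved, SVM-partitioned variants are not reproduced here.

```python
import numpy as np

def meb_coreset(X, eps=0.1):
    """Badoiu-Clarkson: after O(1/eps^2) iterations the ball centered at c with
    radius (1+eps) * max distance encloses X; the touched points form a core-set."""
    c = X[0].astype(float)
    core = {0}
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = int(np.argmax(np.linalg.norm(X - c, axis=1)))  # farthest point from center
        core.add(far)
        c = c + (X[far] - c) / (i + 1)                       # step toward it, step size 1/(i+1)
    radius = np.linalg.norm(X - c, axis=1).max()
    return c, radius, sorted(core)

X = np.random.default_rng(2).normal(size=(10000, 3))
center, r, core = meb_coreset(X, eps=0.1)
print(len(core), "core points out of", len(X))   # a tiny fraction of the data suffices
```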