Accepted Papers

  • The Capability Maturity Model in Curriculum Design
    Thong Chee Ling1, Yusmadi Yah Jusoh2, Rusli Abdullah2 and Nor Hayati Alwi2, 1UCSI University, Malaysia and 2Universiti Putra Malaysia, Malaysia
    ABSTRACT
    The Capability Maturity Model (CMM) is a process improvement model applied in the software industry; it has also been applied in education, particularly in curriculum design. Prior studies show that numerous CMM variants have been applied to curriculum design; however, there is a shortage of step-by-step maturity models at the course design level. This motivated the proposal of a maturity model adapted from two existing maturity models developed for curriculum design. The proposed model contains a curriculum design process, that is, a step-by-step process at the course design level. The research work in this paper verifies the content and sequence of the proposed maturity model, and expert review indicates that the proposed model is appropriate. Once the verification process is complete, the proposed model will be embedded as a component of an information system model in future work.
  • Towards an Adaptive High-performance Execution of Scientific Applications in a Dynamic Cloud Environment
    Mohamed Hussein, Tabuk University, Saudi Arabia
    ABSTRACT
    During the last decade, the need for high-performance computing for distributed scientific applications has been addressed across multiple high-performance environments, including clusters and Grid computing. More recently, Cloud computing has emerged as a cheap, large-scale high-performance computing environment. Infrastructure as a Service (IaaS) Clouds offer instant access to large-scale computing resources. However, the performance of these resources can vary dynamically according to the changing load conditions placed on them. Further, scientific applications require complex communication/computation patterns, such as optimized MPI communication. For these reasons, it is challenging to achieve high performance in a Cloud environment.

    This paper presents an initial framework towards achieving adaptive high-performance execution of a distributed scientific application over a private, dynamic Cloud environment. Adaptation is achieved by migrating the distributed components of the benchmark application that suffer performance degradation to a more promising resource. The proposed framework contains a monitoring layer that monitors the execution times of the running application's components. A decision layer issues the migration decision, taking into account the execution times and the cost of migration. Finally, the paper demonstrates the applicability of the proposed framework on virtual resources managed by the Eucalyptus Cloud.
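
    As a purely illustrative sketch of the decision layer described above, the Python fragment below compares a component's observed execution time against its baseline and migrates only when the expected saving outweighs a one-off migration cost; all names and the cost model are assumptions, not the authors' actual logic.

        # Hypothetical migration decision rule: migrate a component only if the
        # accumulated slowdown is expected to exceed the one-off migration cost.

        def should_migrate(baseline_time, observed_time, migration_cost, remaining_iterations):
            """Return True if migrating the component is expected to pay off."""
            slowdown_per_iteration = observed_time - baseline_time
            if slowdown_per_iteration <= 0:
                return False  # no degradation observed, stay put
            expected_saving = slowdown_per_iteration * remaining_iterations
            return expected_saving > migration_cost

        # Example: a component slowed from 2.0 s to 3.5 s per iteration, 100 iterations
        # remain, and migrating it is estimated to cost 60 s of lost compute time.
        print(should_migrate(2.0, 3.5, 60.0, 100))  # True: expected saving of 150 s > 60 s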
  • Transaction Management in Context-Aware Service Oriented Systems
    Widad Ettazi1, Hatim Hafiddi2 and Mahmoud Nassar1, 1Mohammed V Rabat University, Morocco and 2STRS Laboratory, Morocco
    ABSTRACT
    One goal of ubiquitous computing is to enable users to access and manipulate data from anywhere, at any time and from any device. Since its emergence in database management systems, the concept of a transaction has grown considerably, to the point that transactions are now employed at all application levels, from operating systems to e-commerce applications. These applications are characterized by complex information systems that require a reliable mechanism for maintaining the consistency of their data. In this article, we discuss the problem of transaction management in a context-aware environment. We then review the various research efforts that have addressed this issue. Finally, we propose a new approach to managing context-aware transactions.
  • Successful Assessment of Categorization of Indian News Using JRIP and NNGE Algorithm
    S. R. Kalmegh1 and S. N. Deshmukh2, 1Sant Gadge Baba Amravati University, India and 2Dr. Babasaheb Ambedkar Marathwada University, India
    ABSTRACT
    Recent developments of e-learning specifications such as Learning Object Metadata (LOM), Sharable Content Object Reference Model (SCORM), Learning Design and other pedagogy research in semantic e-learning have shown a trend of applying innovative computational techniques, especially Semantic Web technologies, to promote existing content-focused learning services to semantic-aware and personalized learning services. Classification may refer to categorization, the process in which ideas and objects are recognized, differentiated and understood. This study carries out a performance evaluation of the JRip and NNge classification algorithms. The paper makes a comparative evaluation of the classifiers JRip and NNge on a dataset of Indian news, with the aim of maximizing the true positive rate and minimizing the false positive rate. The Weka API was used for processing. The results on the news dataset also show that the efficiency and accuracy of NNge are better than those of JRip.
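
    Since the comparison hinges on maximizing the true positive rate (TPR) while minimizing the false positive rate (FPR), the sketch below illustrates, with scikit-learn and made-up labels, how these two rates are computed from a binary confusion matrix; the paper itself uses the Weka implementations of JRip and NNge, which are not shown here.

        # Illustration of the evaluation criteria only; labels are made up.
        from sklearn.metrics import confusion_matrix

        y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # 1 = news item belongs to the category
        y_pred = [1, 1, 0, 0, 1, 0, 1, 0]  # output of a hypothetical classifier

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        tpr = tp / (tp + fn)  # to be maximized
        fpr = fp / (fp + tn)  # to be minimized
        print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # TPR = 0.75, FPR = 0.25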
  • A Modeling Approach for IT Governance Basics
    Rabii El Ghorfi1, Mohamed Ouadou1, Driss Aboutajdine1 and Mohamed El Aroussi2, 1Mohammed V-Agdal University, Morocco and 2EHTP Casablanca, Morocco
    ABSTRACT
    IT governance meets the need for decision making and addresses the main preoccupations of company leaders. It aims at developing good practice through the transmission of reliable, structured and intelligible information on the state of the IT (Information Technology) and the development of effective steering devices. IT governance is built around notions such as IT projects and IT goals, and rests upon structured frameworks that can be found in the literature. Despite their existence, these notions are seldom developed, and there is no common model that defines and describes them as classical governance notions.

    We therefore put forward IT governance modeling as a means to address this lack of consideration, laying out a representation template of IT governance concepts built on probabilities and Monte Carlo simulation. Our approach stands out by setting up a simple way to deal with these concepts, based on an analysis of the literature, in order to support the various IT governance syntaxes.

    This approach thus opens up new ways of modeling and structuring IT governance knowledge consistent with IT governance activities.
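
    As a purely hypothetical illustration of the probabilistic treatment mentioned above, the sketch below uses Monte Carlo simulation to estimate the probability that an IT project with uncertain work-package costs stays within a budget; the distributions, figures and budget are invented and do not come from the authors' model.

        # Hypothetical Monte Carlo sketch: probability that total project cost
        # stays within budget when work-package costs are uncertain.
        import random

        random.seed(42)
        N = 100_000        # number of simulated scenarios
        BUDGET = 120.0     # hypothetical budget (arbitrary units)
        # Each work package cost: triangular(best, most likely, worst) distribution.
        work_packages = [(20, 30, 50), (15, 25, 40), (30, 40, 70)]

        within_budget = 0
        for _ in range(N):
            total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in work_packages)
            if total <= BUDGET:
                within_budget += 1

        print("Estimated P(project within budget):", within_budget / N)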
  • Clustering of Multi Script Documents Using k-Means Algorithm
    Neeru Garg1 and Munish Kumar2, 1Lovely Professional University, India and 2Punjab University Rural Centre, India
    ABSTRACT
    This paper addresses the script identification problem for handwritten text documents, which facilitates the clustering of data according to script type. A collection of handwritten text documents in the Devanagari, Gurumukhi and Roman scripts is taken as input, and the documents are then clustered according to their script type, i.e., Gurumukhi, Devanagari or Roman. The clustering scheme for handwritten multi-script documents proposed in this paper is divided into two phases. The first phase extracts features from the given text images. We extract four types of features, namely circular curvature, horizontal stroke density, pixel density and zoning-based features. In the second phase, the features extracted in the first phase are used for clustering with the k-means algorithm. In this study, we consider 4850 samples of isolated characters of the Devanagari, Gurumukhi and Roman scripts.
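
    A minimal sketch of the second phase, assuming the four feature groups listed above have already been extracted and concatenated into one vector per document; the feature values below are random placeholders standing in for the real extracted features.

        # Cluster per-document feature vectors (circular curvature, horizontal stroke
        # density, pixel density, zoning) into three script groups with k-means.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.random((300, 4 * 16))  # placeholder: 300 documents, 4 groups of 16 dims each

        X_scaled = StandardScaler().fit_transform(X)               # common scale for all groups
        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)   # Devanagari, Gurumukhi, Roman
        labels = kmeans.fit_predict(X_scaled)
        print(labels[:10])  # cluster index assigned to the first ten documents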
  • Scraping and Clustering Techniques for the Characterization of Linkedin Profiles
    Kais Dai, Celia Gonzalez Nespereira, Ana Fernandez Vilas and Rebeca P. Diaz Redondo, University of Vigo, Spain
    ABSTRACT
    The socialization of the web has taken on a new dimension with the emergence of the Online Social Network (OSN) concept. The fact that each Internet user becomes a potential content creator entails managing a large amount of data. This paper explores the most popular professional OSN: LinkedIn. A scraping technique was implemented to collect around 5 million public profiles. Applying natural language processing (NLP) techniques to classify the educational backgrounds and to cluster the professional backgrounds of the collected profiles allowed us to provide insights about this OSN's users and to evaluate the relationships between educational degrees and professional careers.
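
    A minimal sketch of the clustering step under strong assumptions: the profiles have already been scraped and each professional background is available as free text (the snippets below are invented); the authors' actual NLP pipeline is not reproduced here.

        # Cluster free-text professional backgrounds with TF-IDF and k-means.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        profiles = [  # invented stand-ins for scraped professional-background text
            "software engineer backend java distributed systems",
            "data scientist machine learning python statistics",
            "marketing manager brand strategy social media",
            "devops engineer cloud aws kubernetes ci cd",
            "digital marketing seo content campaigns",
        ]

        X = TfidfVectorizer(stop_words="english").fit_transform(profiles)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        for text, label in zip(profiles, km.labels_):
            print(label, text)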
  • Understanding Physicians' Adoption of Smart Health Clouds
    Tatiana Ermakova, University of Berlin, Germany
    ABSTRACT
    Smart health applications can enable essential advancements in the healthcare sector. The design of these innovative solutions is often enabled by the cloud computing model. With regard to this technology, serious concerns about information security and privacy are common in practice. These concerns with respect to sensitive medical information could be a hurdle to the successful adoption and use of smart health services, despite high expectations of and interest in them. This research attempts to understand the behavioural intentions of healthcare professionals to adopt smart health clouds in their clinical practice. Based on established theories of IT adoption and further related theoretical insights, we develop a research model and a corresponding instrument to test it using the partial least squares (PLS) approach. We posit that healthcare professionals' intentions to adopt smart health clouds are formed by weighing two conflicting beliefs: performance expectancy and concerns about medical information security and privacy associated with the usage of smart health clouds. We further posit that security and privacy concerns can be explained through perceived risks.
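
    Survey models of this kind are typically estimated with dedicated PLS-SEM tools; the sketch below only illustrates the general partial least squares idea of relating a block of indicator items to an outcome, using scikit-learn's PLSRegression on synthetic data, and is not the authors' analysis.

        # Illustration only: relate synthetic survey indicators (e.g., performance
        # expectancy and privacy-concern items) to an adoption-intention score via PLS.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n = 200
        X = rng.normal(size=(n, 6))  # six Likert-style indicator items (synthetic)
        y = 0.6 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=n)  # intention

        pls = PLSRegression(n_components=2).fit(X, y)
        print("R^2 on the synthetic data:", round(pls.score(X, y), 3))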
  • Image Retrieval Using VLAD with Multiple Features
    Pin-Syuan Huang, Jing-Yi Tsai, Yu-Fang Wang and Chun-Yi Tsai, National Taitung University, Taiwan
    ABSTRACT
    The objective of this paper is to propose a combinatorial encoding method based on VLAD to improve accuracy for large-scale image retrieval. Unlike standard VLAD, which uses a single feature, the proposed method applies multiple heterogeneous feature types, such as SIFT, SURF, DAISY and HOG, to form an integrated encoding vector for image representation. The experimental results show that combining complementary feature types and increasing the codebook size yield high retrieval precision.
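
    A minimal numpy sketch of VLAD encoding for one feature type, assuming the local descriptors and the learned codebook are already available (random placeholders below); in the scheme described above, one such vector per feature type (SIFT, SURF, DAISY, HOG) would be computed and the results concatenated.

        # VLAD: assign each local descriptor to its nearest codebook center,
        # accumulate residuals per center, then power- and L2-normalize.
        import numpy as np

        def vlad_encode(descriptors, codebook):
            k, d = codebook.shape
            dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
            assignments = dists.argmin(axis=1)          # nearest center per descriptor
            vlad = np.zeros((k, d))
            for i in range(k):
                members = descriptors[assignments == i]
                if len(members):
                    vlad[i] = (members - codebook[i]).sum(axis=0)  # residual accumulation
            vlad = vlad.ravel()
            vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))           # power normalization
            return vlad / (np.linalg.norm(vlad) + 1e-12)           # L2 normalization

        rng = np.random.default_rng(0)
        descs = rng.random((500, 64))    # placeholder local descriptors of one feature type
        centers = rng.random((16, 64))   # placeholder codebook with 16 visual words
        print(vlad_encode(descs, centers).shape)  # (1024,) = 16 centers * 64 dimensions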
  • Distributed Lock Services Chubby VS. ZooKeeper
    Ahmed Ismail Ouda Alghalban
    ABSTRACT
    This report reviews and compares two implementations of a distributed lock service: Chubby and ZooKeeper. Chubby is Google's implementation of a distributed lock service, providing coarse-grained, advisory locking. It emphasizes availability, reliability and easy-to-understand semantics rather than performance (see §2.1.7).

    ZooKeeper is an open-source project by the Apache Foundation used to coordinate processes in distributed systems. It provides a simple, high-performance interface that developers can use to implement the complex coordination primitives needed for such systems (see §3.1.5). This report focuses on the main components, system structure and design of both services.
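
    As a concrete illustration of the lock-service idea, the sketch below acquires a coarse-grained lock through ZooKeeper using the kazoo Python client's lock recipe; the connection string, znode path and identifier are assumptions for a local test ensemble.

        # Acquire a distributed lock via ZooKeeper's lock recipe (kazoo client).
        from kazoo.client import KazooClient

        zk = KazooClient(hosts="127.0.0.1:2181")  # assumed local ZooKeeper server
        zk.start()

        lock = zk.Lock("/locks/shared-resource", identifier="worker-1")
        with lock:  # blocks until this client's sequential znode is first in line
            print("lock held; run the critical section here")

        zk.stop()
        zk.close()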

 

Copyright © ITCS 2015