Accepted Papers
Survey of Streaming Data With Dynamic Compact Streaming Algorithm

Ayodeji Oyewale1 and Chris Hughes2, 1School of Computing, Science and Engineering, University of Salford, Salford, Manchester and 2The Crescent, Salford, Manchester, United Kingdom

ABSTRACT

A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) has been known to stimulate the exchange of ideas and interaction between these two related fields of interest. DKE makes it possible to understand, apply and assess the knowledge and skills required for the development and application of data mining systems. With present technology, companies are able to collect vast amounts of data with relative ease; indeed, many companies now have more data than they can handle. A significant portion of this data consists of large unstructured data sets, which can amount to up to 90 percent of an organization’s data. With data quantities growing steadily, this explosion of data is putting a strain on infrastructure, as companies have to increase their data center capacity with more servers and storage. This study conceptualizes the handling of enormous data as a stream mining problem over continuous data streams and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in stream data.

KEYWORDS

Stream data, Stream Mining, Compact data structures, FP Tree, Path Adjustment Method


Compression and Reconstruction of Angiographic Images Using Compressive Sensing

N. Rada, L. E. Mendoza, E. G. Florez, Telecommunications Engineering, Biomedical Engineering, Mechanical Engineering, Research Group in Mechanical Engineering, University of Pamplona, Colombia

ABSTRACT

This article presents a robust compression method known as compressive sensing (CS). CS allows sparse signals to be reconstructed from far fewer samples than the Shannon-Nyquist theorem requires. In this article, the discrete cosine transform and the wavelet transform were used to find the most adequate sparse space. Angiographic images were used, which were reconstructed using algorithms such as SPGL1 for large-scale sparse reconstruction and Gradient Projection for Sparse Reconstruction (GPSR). In this work, it was demonstrated that using the transposed wavelet-cosine transform yielded a more satisfactory sparse space than those obtained in other research. Finally, it was demonstrated that CS performs well for compressing angiographic images, and the maximum percentage error in the reconstruction was 3.56% for SPGL1.
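For readers unfamiliar with the mechanics, the following is a minimal, self-contained sketch of sparse recovery with a DCT sparsity basis using iterative soft-thresholding (ISTA). It is illustrative only and assumes a random Gaussian measurement matrix; it is not the SPGL1/GPSR pipeline used in the paper.

```python
# Illustrative compressive-sensing recovery via ISTA with a DCT sparsity basis.
# Hypothetical toy example, not the paper's SPGL1/GPSR implementation.
import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                         # signal length, measurements, sparsity

# Build a signal that is k-sparse in the DCT domain.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_true = idct(coeffs, norm='ortho')

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x_true                             # compressed measurements

# ISTA: minimise 0.5*||y - Phi*idct(c)||^2 + lam*||c||_1 over DCT coefficients c.
B = idct(np.eye(n), norm='ortho', axis=0)    # inverse-DCT basis matrix
A = Phi @ B                                  # effective sensing matrix in DCT domain
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(n)
for _ in range(500):
    z = c - step * (A.T @ (A @ c - y))
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

x_rec = idct(c, norm='ortho')
print("relative error: %.4f" % (np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)))
```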

KEYWORDS

Compressive Sensing, sparse signal, images, reconstruction, SPGL1, SPSR.


Optimizing the Performance of Convolutional Neural Networks on Raspberry PI for Real-Time Object Detection

Hyun Woo Jung, Hankuk Academy of Foreign Studies, Yongin, South Korea

ABSTRACT

Deep learning has facilitated major advancements in various fields, including object detection. This paper is an exploratory study on improving the performance of Convolutional Neural Network (CNN) models in environments with limited computing resources, such as the Raspberry Pi. A pretrained state-of-the-art model for near-real-time object detection in videos, the YOLO (“You Only Look Once”) CNN, was selected for evaluating strategies for optimizing runtime performance. Various performance analysis tools provided by the Linux kernel were used to measure CPU time and memory footprint. Our results show that loop parallelization, static compilation of weights, and flattening of convolution layers reduce the total runtime by 85% and the memory footprint by 53% on a Raspberry Pi 3 device. These findings suggest that the methodological improvements proposed in this work can reduce the computational overhead of running CNN models on devices with limited computing resources.
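As an illustration of one of the listed optimisations, the sketch below shows how a convolution layer can be "flattened" into a single matrix multiplication via im2col. The shapes and filter sizes are arbitrary examples; this is not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's code) of flattening a convolution
# into one matrix multiplication via im2col, useful on CPU-bound devices.
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, out_h*out_w) matrix (stride 1, no padding)."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols, out_h, out_w

def conv2d_flattened(x, weights):
    """weights: (num_filters, C, kh, kw) -> output (num_filters, out_h, out_w)."""
    f, c, kh, kw = weights.shape
    cols, out_h, out_w = im2col(x, kh, kw)
    w_flat = weights.reshape(f, -1)          # each filter becomes one row
    return (w_flat @ cols).reshape(f, out_h, out_w)

x = np.random.rand(3, 32, 32).astype(np.float32)     # e.g. an RGB patch
w = np.random.rand(16, 3, 3, 3).astype(np.float32)   # 16 filters of size 3x3
print(conv2d_flattened(x, w).shape)                  # (16, 30, 30)
```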

KEYWORDS

Deep Learning, Convolutional Neural Networks, Raspberry Pi, real-time object detection


A New Hybrid Descriptor Based on Spatiogram and Region Covariance Descriptor

Niloufar Salehi Dastjerdi and M. Omair Ahmad, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada

ABSTRACT

Image descriptors play an important role in any computer vision system, e.g., object recognition and tracking. Effective representation of an image is challenging due to significant appearance changes, viewpoint shifts, lighting variations and varied object poses. These challenges have led to the development of several features and their representations. The spatiogram and region covariance are two excellent image descriptors which are widely used in the field of computer vision. A spatiogram is a generalization of the histogram that stores moments of the coordinates of the pixels corresponding to each bin. The spatiogram captures richer appearance information because it computes not only information about the range of the function, as histograms do, but also information about the (spatial) domain. However, it has the drawback that multi-modal spatial patterns cannot be modelled well. The region covariance descriptor provides a compact and natural way of fusing different visual features inside a region of interest. However, it is based on a global distribution of pixel features inside a region and loses the local structure. In this paper, we aim to overcome these drawbacks. To this end, we propose the r-spatiogram, and a new hybrid descriptor is then presented which combines the r-spatiogram and the traditional region covariance descriptor. The results show that our descriptors have improved discriminative capability in comparison with other descriptors.
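For reference, a rough sketch of a second-order spatiogram under the standard definition (per-bin count, mean coordinate, and coordinate covariance) is given below; it is an illustration of the descriptor being discussed, not the authors' code.

```python
# Second-order spatiogram sketch: for each intensity bin keep the pixel count,
# the mean pixel coordinate and the coordinate covariance. Illustrative only.
import numpy as np

def spatiogram(image, bins=8):
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    bin_idx = np.minimum((image.ravel() / 256.0 * bins).astype(int), bins - 1)

    counts = np.zeros(bins)
    means = np.zeros((bins, 2))
    covs = np.zeros((bins, 2, 2))
    for b in range(bins):
        pts = coords[bin_idx == b]
        counts[b] = len(pts)
        if len(pts) > 1:
            means[b] = pts.mean(axis=0)
            covs[b] = np.cov(pts, rowvar=False)
    return counts / counts.sum(), means, covs

img = (np.random.rand(64, 64) * 256).astype(np.uint8)
hist, mu, sigma = spatiogram(img)
print(hist.shape, mu.shape, sigma.shape)   # (8,), (8, 2), (8, 2, 2)
```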

KEYWORDS

Feature Descriptor, Spatiogram, Region Covariance


Visual Tracking Applying Depth Spatiogram and Multi-Feature Data

Niloufar Salehi Dastjerdi and M. Omair Ahmad, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada

ABSTRACT

Object tracking, in general, is a promising technology that can be utilized in a wide variety of applications. It is a challenging problem, and trackers may fail when confronted with challenging scenarios such as similar background color, occlusion, illumination variation, or background clutter. A number of ongoing challenges remain, and an improvement in accuracy can be obtained by processing additional information. Hence, depth information can potentially be exploited to boost the performance of traditional object tracking algorithms. Therefore, the main thrust of this paper is to integrate depth data with other features in tracking to improve the performance of the tracking algorithm, disambiguate occlusions, and overcome other challenges such as illumination artifacts. For this, we use the basic idea of many trackers, which consists of three main components: object modeling (the reference model), object detection and localization, and model updating. However, there are major improvements in our system. Our fourth component, occlusion handling, utilizes the depth spatiograms of the target and the occluder to localize them. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset and the obtained results demonstrate its effectiveness.

KEYWORDS

Visual Tracking, Depth Spatiogram, Multi-feature Data, Occlusion Handling


Digitization and Transliteration of Script Identified Words from Handwritten Bilingual Documents

Ranjana S. Zinjore1 and Rakesh J. Ramteke2,1Department of Computer Science, G.G. Khadse College, Muktainagar and 2School of Computer Sciences, KBC North Maharashtra University, Jalgaon

ABSTRACT

Optical Character Recognition has special significance in a multi-lingual, multi-script country like India, where a single document may contain words in two or more languages/scripts. There is a need to digitize such documents for easy communication and storage. It is also useful in applications such as processing handwritten messages on social media and processing handwritten criminal records for judicial purposes. This paper presents the approach used for the digitization of handwritten bilingual documents consisting of Marathi and English text. The approach uses three phases. The first phase focuses on preprocessing of the handwritten bilingual document and a solution for merged line segmentation. An algorithm, Two_Fold_Word_Segmentation, is developed to extract words from lines. A fusion of two feature extraction methods is used for script identification. The second phase focuses on recognition of the script-identified words. For recognition of words, two different feature extraction methods are used: the first is based on a combination of structural and statistical features, and the second on the Histogram of Oriented Gradients method. A K-Nearest Neighbor classifier gives better recognition accuracy for the second feature extraction method than for the first. Finally, in the third phase, digitization and transliteration of the recognized words are performed. A graphical user interface is designed for conversion of the transliterated text into speech, which is useful for blind and visually impaired people to read books containing bilingual text.
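A hedged sketch of the second feature-extraction route (HOG features with a K-Nearest Neighbor classifier) is shown below, using scikit-image and scikit-learn on placeholder data; the actual word images, labels, and tuning are the paper's own.

```python
# HOG + KNN sketch on synthetic "word" images; parameters are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(word_images):
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in word_images])

# Placeholder data: 40 random 64x64 images with two dummy classes.
rng = np.random.default_rng(1)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = hog_features(images)
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], labels[:30])
print("held-out accuracy:", knn.score(X[30:], labels[30:]))
```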

KEYWORDS

Digitization, Transliteration, Script Identification, Histogram of Oriented Gradients, K-Nearest Neighbor


A Novel Multitexton Histogram to Identify the Human Parasite Eggs Based on Textons of Irregular Shape

Roxana Flores-Quispe and Yuber Velazco-Paredes, Department of Computer Science, Universidad Nacional de San Agustin, Arequipa, Peru

ABSTRACT

This paper proposes a method based on the Multitexton Histogram (MTH) descriptor to classify eight different human parasite eggs: Ascaris, Uncinarias, Trichuris, Hymenolepis Nana, Diphyllobothrium Pacificum, Taenia Solium, Fasciola Hepatica and Enterobius Vermicularis, by identifying textons of irregular shape in their microscopic images. The proposed method includes two stages. In the first, a feature extraction mechanism integrates the advantages of the co-occurrence matrix and histograms to identify irregular morphological structures in the biological images through textons of irregular shape. In the second stage, a Support Vector Machine (SVM) is used to classify the different human parasite eggs. The results were obtained using a dataset of 2053 human parasite egg images, achieving a classification success rate of 96.82%.
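The classification stage can be pictured roughly as below: descriptor vectors fed to a Support Vector Machine over the eight egg classes. The feature values here are random placeholders standing in for the MTH descriptors, so the snippet only illustrates the workflow, not the paper's results.

```python
# Hypothetical sketch of the SVM classification stage; features are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

classes = ["Ascaris", "Uncinarias", "Trichuris", "Hymenolepis nana",
           "Diphyllobothrium pacificum", "Taenia solium",
           "Fasciola hepatica", "Enterobius vermicularis"]

rng = np.random.default_rng(0)
X = rng.random((400, 64))                       # stand-in MTH descriptor vectors
y = rng.integers(0, len(classes), size=400)     # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy on held-out eggs:", clf.score(X_te, y_te))
```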

KEYWORDS

Human Parasite Eggs, Multitexton Histogram descriptor, Textons.


Augmentation for small object detection

Mate Kisantal1, Zbigniew Wojna1,2, Jakub Murawski2,3, Jacek Naruniec3, Kyunghyun Cho4, 1Tensorflight, Inc., 2University College London, 3Warsaw University of Technology and 4New York University

ABSTRACT

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap between the performance of detecting small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images that contain them. We thus propose to oversample images with small objects and augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects with that on small objects. We evaluate different pasting augmentation strategies, and ultimately achieve a 9.7% relative improvement on instance segmentation and 7.1% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
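A toy version of the copy-paste idea is sketched below: small ground-truth boxes are duplicated at random positions and added to the label set. The size threshold and number of copies are arbitrary assumptions, and overlap checks against existing objects are omitted for brevity.

```python
# Toy copy-paste augmentation sketch (assumption, not the authors' pipeline).
import numpy as np

def copy_paste_small_objects(image, boxes, small_thresh=32 * 32, copies=3, rng=None):
    """boxes: list of (x1, y1, x2, y2). Returns augmented image and box list."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    out, out_boxes = image.copy(), list(boxes)
    for (x1, y1, x2, y2) in boxes:
        bw, bh = x2 - x1, y2 - y1
        if bw * bh > small_thresh:
            continue                          # only small objects are duplicated
        patch = image[y1:y2, x1:x2]
        for _ in range(copies):
            nx = rng.integers(0, w - bw)
            ny = rng.integers(0, h - bh)
            out[ny:ny + bh, nx:nx + bw] = patch
            out_boxes.append((nx, ny, nx + bw, ny + bh))
    return out, out_boxes

img = np.zeros((256, 256, 3), dtype=np.uint8)
aug, aug_boxes = copy_paste_small_objects(img, [(10, 10, 26, 26)])
print(len(aug_boxes))   # original box plus 3 pasted copies
```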


Construction of a Mathematical Model of a Spectrometer Based on a Curved Prism

Lei Feng, Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China

ABSTRACT

Wide spectral coverage, the combination of imaging and spectroscopy, and fine spectral detection capability are the outstanding advantages of an imaging spectrometer. Rich spectral information combined with the spatial image of the object point greatly improves the accuracy of target detection, expands the function of traditional detection technology, and enables qualitative analysis of target characteristics. The spectrometer plays an irreplaceable role compared with other technologies. It has been widely used in many military and civilian fields, such as land and ocean remote sensing, remote sensing monitoring of pollutants in the atmosphere, soil and water, military target detection, medical spectral imaging diagnosis, and scientific experiments. The curved prism spectrometer is widely used because of its high energy throughput and absence of ghost images. However, the curved prism spectrometer is not a coaxially symmetric system and its aberration theory is complex. Therefore, it is necessary to establish a numerical model and construct an initial structure to provide a good starting point for system optimization. In a practical prism-based imaging spectrometer, many aberrations arise when rays are incident on the surface of each element, so establishing a mathematical model to analyze these aberrations is very important in the design. A curved prism is a kind of non-coaxial prism obtained by processing the front and rear surfaces of a triangular prism into two spheres. Its front and rear surfaces are not coaxial with the optical axis, so its characteristics are complex. Firstly, on the basis of primary aberration theory, the numerical calculation model of the curved prism is established, and the optimal object distance and the effective incident angle of the curved prism are solved according to the principle of minimum aberration. For given system parameters, the coordinates of the object points are known, and the numerical model of the curved prism spectrometer is then established. The vector method is used to solve the incident and output vectors of given rays. After transmission, the optical path extremum function is established, and the second-order partial differential equations are derived. The surface equation of each element is expanded as a higher-order Taylor series, that is, each surface is expressed as a function of the incident point and the structural parameters. A set of partial differential equations is constructed, the least squares method is used to minimize the system of equations, and the initial structural parameters are then calculated.

KEYWORDS

Mathematical computation, partial differential equations, vector solving, curved prism


An Introduction to Quantum Computers

Hamidreza Bolhasani, Amir Masoud Rahmani and Farid Kheiri, Department of Computer Engineering, Science and Research branch, Islamic Azad University, Tehran, Iran

ABSTRACT

Since Richard Feynman first proposed the idea of quantum computing in 1982, it has become a field of interest for many physicists and computer scientists. Although the concept was introduced more than 30 years ago, it is still considered largely unexplored and several subjects remain open for research. Accordingly, conceptual and theoretical reviews remain useful. In this paper, a brief history and the fundamental ideas of quantum computers are introduced, with a focus on the architecture.

KEYWORDS

Quantum, Computer, Hardware, Qubit, Gate


Protecting Legacy Mobile Medical Devices Using A Wearable Security Device

Vahab Pournaghshband1 and Peter Reiher2, 1Computer Science Department, University of San Francisco, San Francisco, USA and 2Computer Science Department, University of California, Los Angeles, Los Angeles, USA

ABSTRACT

The market is currently saturated with mobile medical devices and new technology is continuously emerging. Thus, it is costly, and in some cases impractical, to replace these devices with new ones offering greater security. In this paper, we present the implementation of a prototype of the Personal Security Device, a self-contained, specialized wearable device that adds security to existing mobile medical devices. The main research challenge for, and hence the state of the art of, the proposed hardware design is that the device, to work with legacy devices, must require no changes to either the medical device or its monitoring software. This requirement is essential since we aim to protect already existing devices, and making modifications to the device or its proprietary software is often impossible or impractical (e.g., closed-source executables and implantable medical devices). Through a performance evaluation of this prototype, we confirmed the feasibility of using special-purpose hardware with limited computational and memory resources to perform the necessary security operations.

KEYWORDS

Wireless medical device security, Man-in-the-middle attack.


Design and Implementation of User-Centered Adaptive Search Engine

Shailja Dalmia, Ashwin T S and Ram Mohana Reddy Guddeti, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India

ABSTRACT

With the ever-growing variety of information, the retrieval demands of different users are so multifarious that traditional search engines cannot handle such heterogeneous retrieval results of huge magnitude. Harnessing advancements in a user-centered adaptive search engine will help achieve groundbreaking retrieval results efficiently for high-quality content. Previous work in this field has relied on heavy server load to achieve good retrieval results, but with limited extensibility and without considering on-demand generated content. To address this gap, we propose a novel model of an adaptive search engine and describe how this model is realized in a distributed cluster environment. Using an improved topic-oriented web crawler algorithm together with a user-interface-based information extraction technique, the model produces user-centered retrieval results more efficiently than existing methods. The proposed method was found to outperform prevailing methods by a factor of 1.5 for the crawler and 2 for the indexer, with improved and highly precise results in extracting semantic information from the Deep Web.

KEYWORDS

Search Engine, WWW, Web Content Mining, Inverted Indexing, Hidden Crawler, Distributed Web Crawler, Precision, Deep Web


Identifying Data and Information Streams in Cyberspace: A Multi-Dimensional Perspective

Ikwu Ruth and Louvieris Panos, Department of Computer Sciences, Brunel University, London

ABSTRACT

Cyberspace has gradually replaced physical reality, its role evolving from a simple enabler of daily life processes to a necessity for modern existence. As a result of this convergence of physical and virtual realities, with all processes critically dependent on networked communications, information representative of our physical, logical and social thoughts is constantly being generated in cyberspace. The interconnection and integration of links between our physical and virtual realities create a new hyperspace as a source of data and information. Additionally, significant studies in cyber analysis have predominantly revolved around a single, linear analysis of information from a single source of evidence (the network). These studies are limited in their ability to understand the dynamics of relationships across the multiple dimensions of cyberspace. This paper introduces a multi-dimensional perspective for data identification in cyberspace. It provides critical discussions for identifying entangled relationships amongst entities across cyberspace.

KEYWORDS

Cyberspace, Data-streams, Multi-Dimensional Cyberspace


An Intelligent Internet-of-Things (IoT) System to Detect and Predict Amenity Usage

Solomon Cheung1, Yu Sun1 and Fangyan Zhang2, 1Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 and 2ASML, San Jose, CA, 95131

ABSTRACT

As an act of disposing of waste and maintaining homeostasis, humans have to use the restroom multiple times a day. One item consumed in the process is toilet paper, and it often runs out at the most inconvenient times. One of the worst positions to be in is to be stuck without toilet paper. Since humans are not capable of a 100% resupply rate, we should give this task to a computer. The approach we selected uses a pair of laser sensors to detect whether toilet paper is present or absent. Utilizing an ultrasound sensor, we can detect whether a person is nearby and send a notification to a database. The online app, PaperSafe, takes the stored information and displays it on a device for quick access. Once a sufficient amount of data is acquired, we can train a machine learning algorithm to predict the next supply date, optimized for the specific scenario.
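The prediction step could, for example, be as simple as fitting a linear trend to logged replacement dates and extrapolating, as in the hypothetical sketch below; the sensor logging, the database, and the PaperSafe app are outside this snippet, and the dates shown are invented.

```python
# Hypothetical sketch of predicting the next resupply date from logged
# roll-replacement timestamps; dates are made-up sample data.
import numpy as np
from datetime import datetime, timedelta

replacements = [datetime(2019, 5, d) for d in (1, 4, 8, 11, 15, 18)]

days = np.array([(t - replacements[0]).days for t in replacements], dtype=float)
idx = np.arange(len(days))
slope, intercept = np.polyfit(idx, days, 1)      # average days per replacement cycle

next_day = slope * len(days) + intercept
predicted = replacements[0] + timedelta(days=float(next_day))
print("predicted next resupply:", predicted.date())
```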

KEYWORDS

Amenity, Homeostasis, Machine Learning, Mobile Application


An Efficient Agent Based Offloading Decision Maker for Mobile Cloud Computing

Vijayalakshmi M, Shanthi Thangam M and Bushra H, Department of Information Science and Technology, Anna University, Chennai City, Tamil Nadu, India

ABSTRACT

The usage of mobile devices is increasing drastically every day, with high-end support for users. With the high-end configurations of mobile devices such as smartphones, laptops and tablets, the computations performed on these devices are complex. Computation-intensive and data-intensive applications play a vital role on mobile devices. The main challenge is handling mobile applications with high computation and storage demands. This challenge can be overcome by using mobile cloud computing. The key limitation in mobile cloud computing is offloading decision making: which part of the computation should be offloaded and which should execute on the mobile side. The proposed work addresses these limitations and challenges by providing an agent-based offloading decision maker for the mobile cloud. The decision maker decides which part of the computation is executed on the mobile side and which on the cloud side. The evaluation shows that mobile applications with high complexity benefit the most.

KEYWORDS

Agent based, Mobile cloud, Offloading, Computational device.


A Hybrid-based Architecture for Web Service Selection

Sandile Mhlanga1, Dr Tawanda Blessing Chiyangwa2, Dr Lall Manoj1 and Prof Sunday Ojo1, 1Tshwane University of Technology, South Africa and 2University of South Africa, South Africa

ABSTRACT

With the rapid growth of Web services in recent years, it is very difficult to choose the most suitable web service among those that offer similar functionality. Selecting the right web service involves not only discovering services on the basis of their functionality, but also assessing the quality aspects of those services. Quality of service (QoS) is considered a distinguishing factor between similar web services and plays a vital role in web service selection. The aim of the model is to evaluate and rank alternatives involving conflicting criteria or criteria with different QoS requirements. To address this issue, this study proposes a model for determining the most suitable candidate web service by integrating the AHP (Analytic Hierarchy Process) and VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) methods. The AHP method computes the weights assigned to the QoS criteria using pairwise comparison. Thereafter, the ranking of the web services according to user-preferred criteria is obtained using the VIKOR method. Finally, a software prototype implementing AHP and VIKOR was developed. To illustrate and validate the proposed approach, data from the QWS dataset is used by the software prototype in a service selection process.
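A back-of-the-envelope sketch of the AHP-plus-VIKOR combination is given below: AHP derives criteria weights from a pairwise-comparison matrix and VIKOR ranks the candidate services. The matrices are made-up sample values, not data from the QWS dataset.

```python
# AHP weight derivation + VIKOR ranking sketch with invented sample values.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def vikor_ranking(decision, weights, benefit, v=0.5):
    """decision[i, j]: QoS of service i on criterion j; benefit[j]: True if higher is better."""
    f_best = np.where(benefit, decision.max(axis=0), decision.min(axis=0))
    f_worst = np.where(benefit, decision.min(axis=0), decision.max(axis=0))
    norm = (f_best - decision) / (f_best - f_worst)
    S = (weights * norm).sum(axis=1)
    R = (weights * norm).max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q)            # best (lowest Q) service first

# Three criteria: response time (cost), availability (benefit), throughput (benefit).
pairwise = np.array([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]], dtype=float)
services = np.array([[120, 0.99, 40], [200, 0.97, 55], [90, 0.95, 30]], dtype=float)
w = ahp_weights(pairwise)
print("weights:", np.round(w, 3))
print("ranking (best first):", vikor_ranking(services, w, np.array([False, True, True])))
```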

KEYWORDS

Web services, Web services selection, Quality of service, Multi-Criteria decision making, AHP, VIKOR


Semantic Process Based Framework for Regulatory Reporting Process Management

Manjula Pilaka, Fethi A. Rabhi and Madhushi Bandara, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

ABSTRACT

Regulatory processes are normally tracked by regulatory bodies in terms of monitoring safety, soundness, risk, policy and compliance. Such processes are loosely framed, and it is a considerable challenge for data scientists and academics to extract instances of such processes from event records and analyse their characteristics, e.g., whether they satisfy certain process compliance requirements. Existing approaches are inadequate for dealing with these challenges, as they demand both technical knowledge and domain expertise from users. In addition, the level of abstraction provided does not extend to the concepts required by a typical data scientist or business analyst. This paper extends a software framework based on a semantic data model that helps in deriving and analysing regulatory reporting processes from event repositories for complex scenarios. The key idea is to use complex, business-like templates for expressing commonly used constraints associated with the definition of regulatory reporting processes and to map these templates to those provided by an existing process definition language. The efficiency of the architecture in terms of evaluation, compliance and impact was assessed by implementing a prototype using complex templates of the Declare ConDec language and applying it to a case study related to process instances of Australian Company Announcements.

KEYWORDS

Regulatory Reporting, Process Extraction, Semantic Technology, Events


Strategy to Fix Register-To-Register Timing for Large Feed Through Blocks Having Limited Internal Pipelines

Rajendra Pratap, Sonia Sharma and Ankita Bhaskar, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India

ABSTRACT

Feedthrough blocks are communication channels present at the top chip level, spanning many hierarchical blocks, to ensure smooth interaction between two or more blocks. Since a feedthrough block acts as a channel between blocks, its port positions and size are hard-fixed. If the feedthrough block is large, it often becomes a challenge to satisfy its internal register-to-register timing. In this manuscript, the authors present a simple technique for achieving controlled internal register-to-register timing for such large feedthrough blocks in big integrated chips.

KEYWORDS

VLSI, chip, Setup Fixing Techniques & Feedthrough Block


Methodology to Reduce Run Time of Timing/Functional ECO

Ashwani Kumar Gupta and Dr. Rajendra Pratap, Department of ASIC, einfochips (An Arrow Company), Noida, Uttar Pradesh, India

ABSTRACT

Chemical Mechanical Planarization (CMP) is a process of smoothing the wafer surface by exerting chemical and mechanical forces on the wafer. It is an important step in the IC fabrication process. To achieve planarity on the surface of the IC, dummy metal fills are required to be inserted. Dummy fill insertion is a time-consuming process for moderate and large blocks or chips. Insertion of dummy metal fills affects the coupling capacitance of the signal metal layers, which causes signal integrity issues. In the last stages of design closure, while doing timing ECOs, re-doing dummy metal fills can cause timing/noise violations and ECOs can become unpredictable. In this paper, we suggest a methodology wherein an ECO can be implemented without re-running the dummy metal fill on the complete block/chip. This saves ECO implementation time and reduces the risk of new signal integrity issues.

KEYWORDS

CMP (Chemical Mechanical Planarization), ECO (Engineering Change Order), ILD (Inter-Level dielectric), GDS (Graphic Data System), TCL (Tool Command Language), Crosstalk, Dummy Metal Fill, Coupling Cap, PnR (Place And Route).


A VLSI-friendly LZW-based Optimisation for a JPEG Compression Flow

Eric Ohana, Science and Engineering Faculty, Queensland University of Technology, Brisbane, Australia

ABSTRACT

The paper presents an optimisation of the baseline JPEG hardware implementation that improves the compression ratio for many image types. The baseline JPEG flow is briefly reviewed along with prior art; it is then explained how and where the LZW-based optimisation fits in this flow and what it replaces. The variations from a standard LZW compression flow are explained, along with why they are necessary in this specific hardware application. The microarchitecture of the hardware implementation and its FPGA build are then detailed. The various trade-offs between implementation decisions and compression efficiency are explained. Finally, comparison results between the baseline JPEG flow and the LZW-optimised one are shown and conclusions are drawn.
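For orientation, a short software model of the dictionary-based LZW encoding step is shown below; the paper's contribution is a hardware/FPGA realisation with CAM-based dictionaries and JPEG-specific variations, none of which are reproduced here.

```python
# Short, dictionary-based LZW encoder sketch (software model only).
def lzw_encode(data: bytes):
    """Return a list of dictionary codes for the input byte string."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code   # grow the dictionary with the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

print(lzw_encode(b"ABABABABABABAB"))     # repeated patterns collapse into a few codes
```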

KEYWORDS

JPEG, Huffman Encoding, LZW Compression, Content Addressable Memory (CAM), Cache Memory


A Systematic Evaluation of MANET Routing Protocols over UDP and TCP in Multi-Hop Wireless Network

Adebayo Seyi1 and Ogunseyi Taiwo2, 1Department of Information and Communication Engineering, China University of Mining and Technology, Xuzhou, China and 2Department of Information Security, Communication University of China, Beijing, China

ABSTRACT

There are genuine concerns about deploying the right transport connection on a particular routing protocol in order to have reliable, fast and robust communication in spite of the size and dynamics of the network topology. This work comparatively studies the implementation of reactive and proactive protocols on both UDP and TCP transport connections, using end-to-end delay, average throughput, jitter and packet delivery ratio (PDR) as QoS metrics. We study the combination of transport connection and routing protocol that delivers the best QoS in simple and complex network scenarios, with source and destination nodes fixed and intermediate nodes moving randomly throughout the simulation time. Moreover, the intrinsic characteristics of the routing protocols with regard to the QoS metrics and transport connections are studied. Forty simulations were run for the simple and complex multi-hop network models, and the results were analyzed and presented.

KEYWORDS

MANET, Wireless Network, Proactive, Reactive, QoS, UDP, TCP


Balancing Security and Agility in Software Engineering: A Survey of Secure Agile Software Development Methods

Peter John A. Francisco, Department of Computer Science, University of the Philippines, Quezon City, Philippines

ABSTRACT

Many approaches have been proposed to integrate security activities into agile software development methodologies. These studies do not seem to have made the jump into practice, however, since, in our experience, most software development teams are not familiar with the range of methods developed for this purpose. This knowledge gap makes the task especially difficult for agile project managers and security specialists attempting to achieve the delicate balance of agility and security for the first time. In this study, we surveyed the methods available in the current literature for integrating security activities into agile software engineering. From 11 proposed secure agile methods published between 2004 and 2017, we extracted 5 insights which practitioners in agile software development and security engineering can use to embed security into their software development flows more effectively and jointly. We then used the insights in a retrospective case study of a software engineering project in a fintech startup company, a high-risk industry in terms of security, and conclude that prior knowledge of the insights would have addressed major challenges in their security integration task.

KEYWORDS

Agile Process, Software Engineering, Security, Survey


An Effective Cybersecurity Exercises Platform Cyexec and its Training Contents

Nobuaki Maki1, Ryotaro Nakata2, Shinichi Toyoda1, Yosuke Kasai1, Sanggyu Shin3 and Yoichi Seto1, 1Advanced Institute of Industrial Technology, Tokyo, Japan, 2Institute of Information Security, Yokohama City, Kanagawa, Japan and 3Tokai University, Hiratsuka City, Kanagawa, Japan

ABSTRACT

Recently, threats of cyberattacks, especially targeted attacks, have been increasing rapidly, and a large number of cybersecurity incidents occur frequently. On the other hand, capable personnel are greatly lacking, and strengthening systematic human resource development to cultivate capabilities for cybersecurity activities is becoming an urgent issue. However, only a few parts of academia and the private sector in Japan can carry out cybersecurity exercises because of the high cost and inflexibility of commercial or existing training software. On this account, in order to conduct practical cybersecurity exercises cost-effectively and flexibly, we developed a virtual-environment Cybersecurity Exercises (CyExec) system utilizing VirtualBox and Docker. We also implemented WebGoat, an open source deliberately insecure web application for security training, together with our original cyberattack and defense training content, on CyExec.

KEYWORDS

Ecosystem, Virtualization, WebGoat, Cyberattack and Defense Exercise, Cyber Range Exercise.


Text-based Captcha Using Text-adhesion and Visual Compensation

Zeng Dangquan, Department of Information Science and Technology, Xiamen University Tan Kah Kee College, Zhangzhou, Fujian, China

ABSTRACT

CAPTCHA (Completely Automated Public Turing Test to tell Computers and Humans Apart) is a test for distinguishing between computers and humans, and has a very wide range of applications on the Internet. Most websites require users to submit CAPTCHAs when registering, logging in or submitting form data in order to improve the security of the site, thus avoiding malicious attacks by automated robots and spammers. In this paper, a text-based CAPTCHA using text-adhesion and visual compensation is introduced. This CAPTCHA is designed to use character-cascading and partial-defect techniques to effectively prevent machines from using character segmentation and machine learning techniques to solve the CAPTCHA, while human vision can easily separate characters that are layered together and fill in partially missing characters to identify the CAPTCHA. In order to test the ability of the CAPTCHA to resist automatic machine identification, four kinds of specialized OCR recognition software and two online OCR recognition websites were used to identify 1000 CAPTCHAs. The test results show that none of the six OCR tools could correctly recognize a single complete CAPTCHA, and the probability of a CAPTCHA being unrecognized or misidentified is over 99.5%, which demonstrates that the proposed CAPTCHA provides very high security.

KEYWORDS

CAPTCHA, text-adhesion, visual compensation, OCR, security


Understanding How Colour Contrast in Hotel & Travel Website Affects Emotional Perception, Trust, and Purchase Intention of Visitors

Pimmanee Rattanawicha and Sutthipong Yungratog, Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand

ABSTRACT

To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1) emotional perception (i.e., pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of visitors, a two-phase empirical study is conducted. In the first phase of this study, 120 volunteer participants are asked to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour “Blue Cyan”, the colour chosen most often in this phase, is then used as the foreground colour to develop three hotel & travel websites with three different colour contrast patterns for the second phase of the study. A questionnaire is also developed from previous studies to collect emotional perception, trust, and purchase intention data from another group of 145 volunteer participants. The data analysis shows that, for visitors as a whole, colour contrast has significant effects on their purchase intention. For male visitors, colour contrast significantly affects their trust and purchase intention. Moreover, for Generation X and Generation Z visitors, colour contrast has effects on their emotional perception, trust, and purchase intention. However, no significant effect of colour contrast is found for female or Generation Y visitors.

KEYWORDS

Colour Contrast, e-Commerce, Website Design


Star Ensemble: A Novel Algorithm for Spatiotemporal Data Decomposition and Interpolation

James Monks and Liwan Liyanage

ABSTRACT

Spatio-temporal data is becoming increasingly prevalent in our society. This has largely been spurred on by the ability to build arrays and sensors into everyday items, along with highly specialised measuring equipment becoming cheaper. The result of this prevalence can be seen in the wealth of data of this kind that is now available for analysis. Spatio-temporal data is particularly useful for contextualising events in other data sets by providing background information for a point in space and time. Problems arise, however, when the contextualising data and the data set of interest do not align in space and time in the exact way needed. This problem is becoming more common because the precise data recorded from GPS systems does not overlap with points of interest and is not easily generalised to a region. Interpolating data for the points of interest in space and time is therefore important, and a number of methods have been proposed with varying levels of success. These methods are all lacking in usability, and the models are limited by strict assumptions and constraints. This paper proposes a new method for the interpolation of points in the spatio-temporal scope, based on a set of known points. It utilises an ensemble of models to take into account the nuanced directional effects in both space and time. This ensemble of models allows it to be more robust to missing values in the data, which are common in spatio-temporal data sets due to variation in conditions across space and time. The method is inherently flexible, as it can be implemented without any further customisation whilst allowing the user to input and customise their own underlying model based on domain knowledge. It addresses the usability issues of other methods, accounts for directional effects and allows for full control over the interpolation process.
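As a point of reference for what spatio-temporal interpolation involves, the snippet below implements plain inverse-distance weighting over (x, y, t) points; it is only a baseline illustration and does not reproduce the Star Ensemble's directional ensemble of models.

```python
# Baseline inverse-distance-weighting interpolation over (x, y, t) points.
# Illustrative only; the time_scale factor is an assumed knob for weighting
# the temporal axis relative to the spatial axes.
import numpy as np

def idw_interpolate(known_points, known_values, query, power=2.0, time_scale=1.0):
    """known_points: (n, 3) array of (x, y, t); query: (x, y, t)."""
    diffs = known_points - np.asarray(query, dtype=float)
    diffs[:, 2] *= time_scale
    dists = np.linalg.norm(diffs, axis=1)
    if np.any(dists == 0):
        return known_values[np.argmin(dists)]   # query coincides with a known point
    w = 1.0 / dists ** power
    return float(np.dot(w, known_values) / w.sum())

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 2]], dtype=float)
vals = np.array([10.0, 12.0, 14.0, 20.0])
print(idw_interpolate(pts, vals, (0.5, 0.5, 1.0)))
```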

