Lei Feng, Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Wide spectral band, the combination of imaging with spectral measurement, and fine spectral detection capability are the outstanding advantages of the imaging spectrometer. Rich spectral information combined with the spatial image of the object point greatly improves the accuracy of target detection, expands the function of traditional detection technology, and enables qualitative analysis of target characteristics. The spectrometer plays an irreplaceable role compared with other technologies. It has been widely used in many military and civilian fields, such as land and ocean remote sensing, remote sensing monitoring of pollutants in the atmosphere, soil and water, military target detection, medical spectral imaging diagnosis, and scientific experiments. The curved prism spectrometer is widely used because of its high energy efficiency and absence of ghost images. However, the curved prism spectrometer is a non-coaxial, asymmetric system and its aberration theory is complex. Therefore, it is necessary to establish a numerical model and construct an initial structure that provides a good starting point for system optimization. In a practical prism-based imaging spectrometer, many aberrations arise when a ray is incident on the surface of each element, so establishing a mathematical model to analyze these aberrations is a crucial part of the design. A curved prism is a non-coaxial prism obtained by machining the front and rear surfaces of a triangular prism into two spherical surfaces. Because its front and rear surfaces are not coaxial with the optical axis, its characteristics are complex. First, on the basis of primary aberration theory, a numerical model of the curved prism is established, and the optimal object distance and the effective incident angle of the curved prism are solved according to the principle of minimum aberration. For given system parameters, the coordinates of the object points are known, and the numerical model of the curved prism spectrometer is then established. The vector method is used to solve the incident and exit vectors of given rays. After the rays are traced through the system, the optical path extremum function is established and the second-order partial differential equations are derived. The surface equation of each element is expanded as a higher-order Taylor series; that is, each surface is expressed as a function of the incidence point and the structural parameters. A set of partial differential equations is thus constructed, the least squares method is used to minimize the system, and the initial structural parameters are then calculated.
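A minimal sketch of the final solving step only, under loose assumptions: the residual function below is a hypothetical stand-in, not the paper's actual optical-path equations; it merely shows how an overdetermined system of per-ray conditions can be minimized with least squares to yield initial structural parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, rays):
    """Hypothetical per-ray optical-path residuals (one equation per ray).
    params = (r1, r2, t): front/rear surface radii and center thickness."""
    r1, r2, t = params
    res = []
    for (h, u) in rays:  # ray height and slope at the front surface
        # toy stand-in for the second-order optical-path condition
        res.append(h / r1 - h / r2 + t * u - u)
    return np.asarray(res)

rays = [(1.0, 0.01), (2.0, 0.02), (3.0, 0.03), (4.0, 0.05)]  # invented data
sol = least_squares(residuals, x0=[50.0, -60.0, 10.0], args=(rays,))
print("initial structural parameters:", sol.x)
```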
Mathematical computation, partial differential equations, vector solving, curved prism
Hamidreza Bolhasani, Amir Masoud Rahmani and Farid Kheiri, Department of Computer Engineering, Science and Research branch, Islamic Azad University, Tehran, Iran
Since Richard Feynman first proposed the idea of quantum computing in 1982, it has become a new field of interest for many physicists and computer scientists. Although this concept was presented more than 30 years ago, it is still considered largely unexplored, and several subjects remain open for research. Accordingly, conceptual and theoretical reviews may always be useful. In this paper, a brief history and the fundamental ideas of quantum computers are introduced, with a focus on architecture.
Quantum, Computer, Hardware, Qubit, Gate
Vahab Pournaghshband1 and Peter Reiher2, 1Computer Science Department, University of San Francisco, San Francisco, USA, 2Computer Science Department, University of California, Los Angeles, Los Angeles, USA
The market is currently saturated with mobile medical devices, and new technology is continuously emerging. Thus, it is costly, and in some cases impractical, to replace these devices with new ones offering greater security. In this paper, we present the implementation of a prototype of the Personal Security Device, a self-contained, specialized wearable device that augments the security of existing mobile medical devices. The main research challenge for, and hence the state of the art of, the proposed hardware design is that the device, to work with legacy devices, must require no changes to either the medical device or its monitoring software. This requirement is essential since we aim to protect already existing devices, as making modifications to the device or its proprietary software is often impossible or impractical (e.g., closed-source executables and implantable medical devices). Through performance evaluation of this prototype, we confirmed the feasibility of having special-purpose hardware with limited computational and memory resources perform the necessary security operations.
Wireless medical device security, Man-in-the-middle attack.
Hirantha Athapaththu1, Shavinda Herath1, Geeth Sameera1, Supun Gamlath1, Pramadhi Atapattu2 and Malitha Wijesundara3, 1Department of Software Engineering, Sri Lanka Institute of Information Technology, Sri Lanka, 2Pulzsolutions (Pvt) Ltd., Sri Lanka Technology Incubator and 3Department of Information Systems Engineering, Sri Lanka Institute of Information Technology, Sri Lanka
This paper proposes a WebRTC based live teaching platform which meets the e-learning requirements of universities and other institutes. The solution includes lecture live streaming, lecture playback, a vector-based interactive whiteboard, a chat and file sharing module, and a real-time lecture movement tracking module using a PTZ camera. The system is capable of seamlessly streaming two simultaneous streams, a 1080p camera and a 720p screen capture, over a network connection with 256 KB/s bandwidth. The live streaming component is not CPU intensive, using around 14% of the CPU to stream 10 simultaneous sessions with 10 listeners each on an AWS t2.micro instance (1 vCPU, 2.5 GHz Intel Xeon family, 1 GiB memory). The original recorded videos are down-scaled and re-encoded, reducing the file size to as little as 1% of the original. With an adaptive-streaming-enabled player, users with an internet connection of 128 KB/s bandwidth can experience uninterrupted playback of the recorded lectures.
WebRTC, MPEG-DASH, PTZ Camera, e-Learning, Simulcast
Shailja Dalmia, Ashwin T S and Ram Mohana Reddy Guddeti, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India
With the ever-growing variety of information, the retrieval demands of different users are so multifarious that traditional search engines cannot deliver such heterogeneous retrieval results at huge magnitudes. Harnessing advances in user-centered adaptive search engines can yield groundbreaking retrieval results achieved efficiently for high-quality content. Previous work in this field achieved good retrieval results at the cost of excessive server load, with limited extensibility, and ignored content generated on demand. To address this gap, we propose a novel model of an adaptive search engine and describe how this model is realized in a distributed cluster environment. Using an improved topic-oriented web crawler algorithm together with a User Interface based Information Extraction technique, the model produces a renewed set of user-centered retrieval results with higher efficiency than existing methods. The proposed method was found to outperform all prevailing methods by factors of 1.5 and 2 for the crawler and indexer, respectively, with improved and highly precise results in extracting semantic information from the Deep Web.
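As a hedged illustration of the indexing side only (a toy, not the paper's distributed indexer), the sketch below builds the inverted index such an engine queries; document IDs and tokenization are simplified assumptions.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {1: "adaptive search engine", 2: "deep web crawler", 3: "adaptive crawler"}
index = build_inverted_index(docs)
print(index["adaptive"])                      # {1, 3}: documents with the term
print(index["crawler"] & index["adaptive"])   # AND query via set intersection: {3}
```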
Search Engine, WWW, Web Content Mining, Inverted Indexing, Hidden Crawler, Distributed Web Crawler, Precision, Deep Web
Ikwu Ruth and Louvieris Panos, Department of Computer Sciences, Brunel University, London
Cyberspace has gradually replaced physical reality, its role evolving from a simple enabler of daily life processes to a necessity for modern existence. As a result of this convergence of physical and virtual realities, with all processes critically dependent on networked communications, information representative of our physical, logical and social thoughts is constantly being generated in cyberspace. The interconnection and integration of links between our physical and virtual realities create a new hyperspace as a source of data and information. Additionally, significant studies in cyber analysis have predominantly revolved around linear analysis of information from a single source of evidence (the network). These studies are limited in their ability to capture the dynamics of relationships across the multiple dimensions of cyberspace. This paper introduces a multi-dimensional perspective for data identification in cyberspace and provides critical discussion of identifying entangled relationships amongst entities across cyberspace.
Cyberspace, Data-streams, Multi-Dimensional Cyberspace
Solomon Cheung1, Yu Sun1 and Fangyan Zhang2, 1Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 and 2ASML, San Jose, CA, 95131
As an act of disposing of waste and maintaining homeostasis, humans have to use the restroom multiple times a day. One item consumed in the process is toilet paper, and it often runs out at the most inconvenient times. One of the most fatal positions to be in is to be stuck without toilet paper. Since humans are not capable of a 100% resupply rate, we should give this task to a computer. The approach we selected uses a pair of laser sensors to detect whether toilet paper is absent. Utilizing an ultrasound sensor, we can detect whether a person is nearby and send a notification to a database. The online app, PaperSafe, takes the stored information and displays it on a device for quick access. Once a sufficient amount of data is acquired, we can train a machine learning algorithm to predict the next supply date, optimized for the specific scenario.
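A hedged sketch of one way the prediction step could work (the data below is invented, not PaperSafe's): fitting a simple linear trend over logged roll-depletion intervals to forecast the next resupply date.

```python
import numpy as np

# days on which the roll ran out, from hypothetical sensor logs
depletion_days = np.array([0, 4, 9, 13, 18, 22])
intervals = np.diff(depletion_days)            # days each roll lasted
usage_index = np.arange(len(intervals))

# degree-1 least-squares fit of the consumption trend
slope, intercept = np.polyfit(usage_index, intervals, 1)
next_interval = slope * len(intervals) + intercept
print(f"predicted next resupply: day {depletion_days[-1] + next_interval:.1f}")
```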
Amenity, Homeostasis, Machine Learning, Mobile Application
Vijayalakshmi M, Shanthi Thangam M and Bushra H, Department of Information Science and Technology, Anna University, Chennai City, Tamil Nadu, India
The usage of mobile devices is increasing drastically every day, with high-end support for users. Owing to the high-end configurations of mobile devices such as smartphones, laptops and tablets, the computations performed on these devices are complex. Computation-intensive and data-intensive tasks play a vital role on mobile devices. The main challenge is handling mobile applications that demand high computation and high storage. This challenge can be overcome by using mobile cloud computing. The limitation in mobile cloud computing is offloading decision making: which part of a computation should be offloaded and which should execute on the mobile side. The proposed work addresses these limitations and challenges by providing an agent-based offloading decision maker for the mobile cloud. The decision maker decides which part of the computation is executed on the mobile side and which on the cloud side. The evaluation shows that mobile applications with high complexity benefit more than other applications.
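A toy sketch of a classic offloading rule (the parameters and thresholds are hypothetical, not the paper's agent): compare the estimated local execution time with the remote time, i.e. data transfer plus cloud execution.

```python
def should_offload(cycles, data_bytes, f_local, f_cloud, bandwidth):
    """Return True if cloud execution is estimated to be faster.
    cycles: CPU cycles of the task; f_*: CPU speeds in cycles/s;
    bandwidth: uplink in bytes/s."""
    t_local = cycles / f_local
    t_remote = data_bytes / bandwidth + cycles / f_cloud
    return t_remote < t_local

# heavy task, small input -> offload; light task, big input -> run locally
print(should_offload(5e9, 1e5, f_local=1e9, f_cloud=8e9, bandwidth=1e6))  # True
print(should_offload(1e8, 5e7, f_local=1e9, f_cloud=8e9, bandwidth=1e6))  # False
```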
Agent based, Mobile cloud, Offloading, Computational device.
Ho Joong Kim and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
Regression testing is a necessary process to ensure that the existing functionalities of a piece of software are not affected by new features or defect fixes. However, in the case of the web application of PB Tech, this process is very repetitive and time-consuming. In order to solve this issue, automation testing is implemented and a new test case prioritisation technique is proposed, based on a combination of human evaluation and statistical data on the highest-earning features of retailer websites. Using this technique, a regression test suite is created and its execution times are compared against those of a full regression test suite. The results revealed that the prioritisation technique is effective at reducing test execution times. This technique could prove effective for projects missing defect and requirements documentation.
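A schematic sketch of such a combined score (the weights, test names and numbers below are invented, not the study's data): blending a human risk rating with a feature's revenue share into one priority value.

```python
test_cases = [
    # (name, human_risk_score 1-5, feature_revenue_share 0-1)
    ("checkout_flow", 5, 0.40),
    ("product_search", 4, 0.25),
    ("wishlist", 2, 0.05),
    ("account_settings", 1, 0.02),
]

W_HUMAN, W_REVENUE = 0.5, 0.5  # hypothetical weighting

def priority(tc):
    _, risk, revenue = tc
    return W_HUMAN * (risk / 5.0) + W_REVENUE * revenue

# run the highest-priority cases first
for tc in sorted(test_cases, key=priority, reverse=True):
    print(f"{tc[0]:20s} priority={priority(tc):.2f}")
```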
Automation Testing, Regression Testing, Test Case Prioritization.
Roberto Bruzzese, Freelancer
The present work aims to enhance the application logs of a hypothetical infrastructure platform and to build an app that displays summary data about performance, anomalies and security incidents in the form of a dashboard. The reference architecture, with multiple applications and multiple hardware deployments implementing a Service Oriented Architecture, is a real case whose details have been abstracted, because we want to extend the concept to all architectures with similar characteristics. This paper is taken from my Master's thesis in Cybersecurity, for which I express special thanks to Prof. M. Bernaschi.
Dr. Djamal Ziani and Nada Alfaadhel, King Saud University, College of Computer Science, Department of Information Systems, Riyadh, Saudi Arabia
Recently, organizations have shown more interest in cloud computing because of the many advantages it provides (cost savings, storage capacity, scalability, and loading speed). Enterprise resource planning (ERP) systems are among the most important systems that have been moved to cloud computing. In this thesis, we focus on cloud ERP interoperability, which is an important challenge in cloud ERP. Interoperability is the ability of different components to work across independent clouds with no or minimal user effort. More than 20% of the risk of cloud adoption is attributed to interoperability. Thus, we propose web services as a solution for cloud ERP interoperability. The proposed solution increases interoperability between different cloud service providers and between cloud ERP systems and other applications in a company.
Cloud computing, ERP, interoperability, web services.
Ruchita Dahiya and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
Automation testing has become increasingly needed due to the nature of current software development projects, which comprise complex applications with shorter development times. Most companies in the industry have used Selenium extensively as a functional automation tool to verify that their web applications' functionalities work as expected. However, for any new project, manual testing is equally important rather than automating everything. Thus, this research project is about the importance of manual and exploratory testing in industry when a project is in the development stage.
Automation Testing, Regression Test Suite, Selenium, Java Automation Framework, TestNG, Manual Testing, Exploratory Testing
Ayodeji Oyewale1 and Chris Hughes2, 1School of Computing, Science and Engineering, University of Salford, Salford, Manchester and 2The Crescent, Salford, Manchester, United Kingdom
A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) has been known to stimulate the exchange of ideas and interaction between these two related fields of interest. DKE makes it possible to understand, apply and assess the knowledge and skills required for the development and application of data mining systems. With present technology, companies are able to collect vast amounts of data with relative ease; indeed, many companies now have more data than they can handle. A vital portion of this data consists of large unstructured data sets, which amount to up to 90 percent of an organization's data. With data quantities growing steadily, the explosion of data is putting a strain on infrastructures, with diverse companies having to increase their data center capacity with more servers and storage. This study conceptualizes handling enormous data as a stream mining problem over continuous data streams and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in stream data.
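A compact, illustrative sketch (not the paper's ensemble; detectors and thresholds are arbitrary assumptions): three cheap unsupervised detectors vote on each point of a stream, and a point is flagged when a majority agree.

```python
import numpy as np

def zscore_flag(window, x, k=3.0):
    return abs(x - np.mean(window)) > k * (np.std(window) + 1e-9)

def iqr_flag(window, x):
    q1, q3 = np.percentile(window, [25, 75])
    return not (q1 - 1.5 * (q3 - q1) <= x <= q3 + 1.5 * (q3 - q1))

def ewma_flag(window, x, alpha=0.3, k=3.0):
    ewma = window[0]
    for v in window[1:]:
        ewma = alpha * v + (1 - alpha) * ewma
    return abs(x - ewma) > k * (np.std(window) + 1e-9)

stream = list(np.random.normal(10, 1, 200)) + [25.0]   # one injected anomaly
window = stream[:50]
for x in stream[50:]:
    votes = sum([zscore_flag(window, x), iqr_flag(window, x), ewma_flag(window, x)])
    if votes >= 2:
        print(f"anomaly: {x:.2f} ({votes}/3 detectors agree)")
    window = window[1:] + [x]                          # slide the window
```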
Stream data, Stream Mining, Compact data structures, FP Tree, Path Adjustment Method
N. Rada, L. E. Mendoza and E. G. Florez, Telecommunications Engineering, Biomedical Engineering, Mechanical Engineering, Research Group in Mechanical Engineering, University of Pamplona, Colombia
This article presents a robust compression method known as compressive sensing (CS). CS allows sparse signals to be reconstructed from far fewer samples than the Shannon-Nyquist theorem requires. In this article, the discrete cosine transform and the wavelet transform were used to find the most adequate sparse space. Angiographic images were used, which were reconstructed using algorithms such as Large-scale Sparse Reconstruction (SPGL1) and Gradient Projection for Sparse Reconstruction (GPSR). In this work, it was demonstrated that using the transpose of the wavelet-cosine transform achieved a more satisfactory sparse space than those obtained in other research. Finally, it was demonstrated that CS performs well for compressing angiographic images; the maximum reconstruction error was 3.56% for SPGL1.
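A self-contained toy (not the paper's experiments): compressive sensing of a synthetic sparse signal from m << n random measurements, recovered with Orthogonal Matching Pursuit as a stand-in for SPGL1/GPSR.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # k-sparse signal

A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # Gaussian measurement matrix
y = A @ x                                   # m compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
x_hat = omp.coef_
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```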
Compressive Sensing, sparse signal, images, reconstruction, SPGL1, GPSR.
Hyun Woo Jung, Hankuk Academy of Foreign Studies, Yongin, South Korea
Deep learning has facilitated major advancements in various fields, including image detection. This paper is an exploratory study on improving the performance of Convolutional Neural Network (CNN) models in environments with limited computing resources, such as the Raspberry Pi. YOLO ("You-Only-Look-Once"), a pretrained state-of-the-art CNN model for near-real-time object detection in videos, was selected for evaluating strategies for optimizing runtime performance. Various performance analysis tools provided by the Linux kernel were used to measure CPU time and memory footprint. Our results show that loop parallelization, static compilation of weights, and flattening of convolution layers reduce the total runtime by 85% and the memory footprint by 53% on a Raspberry Pi 3 device. These findings suggest that the methodological improvements proposed in this work can reduce the computational overhead of running CNN models on devices with limited computing resources.
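An illustrative sketch of one of the named ideas, under the assumption that "flattening of convolution layers" follows the standard im2col approach: a 2-D convolution rewritten as a single matrix multiply, which is typically faster on small devices.

```python
import numpy as np

def im2col_conv2d(image, kernel):
    """Valid 2-D cross-correlation via im2col + one matmul."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # gather every kh x kw patch as one column
    cols = np.empty((kh * kw, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = image[i:i + kh, j:j + kw].ravel()
            idx += 1
    return (kernel.ravel() @ cols).reshape(oh, ow)

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                      # mean filter
print(im2col_conv2d(img, k))                   # 3x3 smoothed output
```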
Deep Learning, Convolutional Neural Networks, Raspberry Pi, real-time object detection
Niloufar Salehi Dastjerdi and M. Omair Ahmad, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada
Image descriptors play an important role in any computer vision system, e.g. object recognition and tracking. Effective representation of an image is challenging due to significant appearance changes, viewpoint shifts, lighting variations and varied object poses. These challenges have led to the development of several features and their representations. The spatiogram and region covariance are two excellent image descriptors widely used in the field of computer vision. The spatiogram is a generalization of the histogram and contains moments of the coordinates of the pixels corresponding to each bin. It captures richer appearance information, as it computes not only information about the range of the image function, as histograms do, but also information about the (spatial) domain. However, a drawback is that multimodal spatial patterns cannot be modelled well. The region covariance descriptor provides a compact and natural way of fusing different visual features inside a region of interest. However, it is based on a global distribution of pixel features inside a region and loses the local structure. In this paper, we aim to overcome the existing drawbacks of these descriptors. To this end, we propose the r-spatiogram, and then present a new hybrid descriptor that combines the r-spatiogram with the traditional region covariance descriptor. The results show that our descriptors have improved discriminative capability compared with other descriptors.
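A minimal sketch of the standard formulation only (not the proposed r-spatiogram or hybrid): the region covariance descriptor computed over the common per-pixel feature vector [x, y, I, |Ix|, |Iy|].

```python
import numpy as np

def region_covariance(patch):
    """5x5 covariance of per-pixel features inside a grayscale patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)           # compact, symmetric descriptor

patch = np.random.rand(16, 16)
C = region_covariance(patch)
print(C.shape)                     # 256 pixels fused into a 5x5 matrix
```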
Feature Descriptor, Spatiogram, Region Covariance
Ranjana S. Zinjore1 and Rakesh J. Ramteke2,1Department of Computer Science, G.G. Khadse College, Muktainagar and 2School of Computer Sciences, KBC North Maharashtra University, Jalgaon
Optical Character Recognition has special significance in a multi-lingual, multi-script country like India, where a single document may contain words in two or more languages/scripts. There is a need to digitize such documents for easy communication and storage. This is also useful in applications like processing handwritten messages on social media and processing handwritten criminal records for judicial purposes. This paper presents the approach used in the digitization of handwritten bilingual documents consisting of Marathi and English. The approach has three phases. The first phase focuses on preprocessing the handwritten bilingual document and on the segmentation of merged lines. An algorithm, Two_Fold_Word_Segmentation, is developed to extract words from lines, and a fusion of two feature extraction methods is used for script identification. The second phase focuses on recognition of the script-identified words using two different feature extraction methods: the first is based on a combination of structural and statistical features, and the second is based on the Histogram of Oriented Gradients method. The K-Nearest Neighbor classifier gives better recognition accuracy for the second feature extraction method than for the first. Finally, in the third phase, digitization and transliteration of the recognized words are performed. A graphical user interface is designed to convert the transliterated text into speech, which is useful for blind and visually impaired people reading a book containing bilingual text.
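A hedged pipeline sketch of the second method's ingredients (digit images stand in for word images, and the HOG parameters are assumptions since the paper's values are not given here): HOG features classified with K-Nearest Neighbors.

```python
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                       # 8x8 grayscale images
X = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
     for img in digits.images]               # gradient-orientation features
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))
```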
Digitization, Transliteration, Script Identification, Histogram of Oriented Gradient, K-Nearest Neighbor
Roxana Flores-Quispe and Yuber Velazco-Paredes, Department of Computer Science, Universidad Nacional de San Agustin, Arequipa, Peru
This paper proposes a method based on the Multitexton Histogram (MTH) descriptor to classify eight different human parasite eggs: Ascaris, Uncinarias, Trichuris, Hymenolepis Nana, Dyphillobothrium Pacificum, Taenia-Solium, Fasciola Hepatica and Enterobius-Vermicularis, by identifying textons of irregular shape in their microscopic images. The proposed method has two stages. In the first, a feature extraction mechanism integrates the advantages of the co-occurrence matrix and histograms to identify irregular morphological structures in the biological images through textons of irregular shape. In the second stage, a Support Vector Machine (SVM) is used to classify the different human parasite eggs. The results were obtained using a dataset of 2053 human parasite egg images, achieving a success rate of 96.82% in the classification.
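A schematic sketch of the second stage only (the MTH descriptor itself is not reproduced; the histograms below are synthetic stand-ins): texton-histogram-like feature vectors classified with an SVM.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_classes, n_per_class, n_bins = 8, 60, 64     # 8 parasite egg species

# fake MTH-style histograms: each class gets one distinctive bin
X = np.vstack([rng.dirichlet(np.full(n_bins, 0.5) + 5.0 * (np.arange(n_bins) == c),
                             n_per_class) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy on synthetic data:", clf.score(X_te, y_te))
```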
Human Parasite Eggs, Multitexton Histogram descriptor, Textons.
Mate Kisantal1, Zbigniew Wojna1,2, Jakub Murawski2,3, Jacek Naruniec3, Kyunghyun Cho4, 1Tensorflight, Inc., 2University College London, 3Warsaw University of Technology and 4New York University
In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap in performance between the detection of small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images containing them. We thus propose to oversample those images with small objects and to augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects with that on small objects. We evaluate different pasting augmentation strategies, and ultimately achieve a 9.7% relative improvement on instance segmentation and 7.1% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
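A bare-bones sketch of the copy-paste idea (no instance masks or overlap checks, unlike the paper's mask-based pasting): oversampling a small object by pasting its crop at random locations within the same image.

```python
import numpy as np

def paste_small_object(image, box, n_copies=3, rng=None):
    """box = (y, x, h, w) of a small ground-truth object."""
    rng = rng or np.random.default_rng()
    y, x, h, w = box
    crop = image[y:y + h, x:x + w].copy()
    boxes = [box]
    H, W = image.shape[:2]
    for _ in range(n_copies):
        ny, nx = rng.integers(0, H - h), rng.integers(0, W - w)
        image[ny:ny + h, nx:nx + w] = crop      # paste the copy
        boxes.append((ny, nx, h, w))            # add a new ground-truth box
    return image, boxes

img = np.zeros((100, 100), dtype=np.uint8)
img[10:18, 20:28] = 255                          # a tiny "object"
img, boxes = paste_small_object(img, (10, 20, 8, 8))
print(len(boxes), "ground-truth boxes after augmentation")
```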
Sandile Mhlanga1, Dr Tawanda Blessing Chiyangwa2, Dr Lall Manoj1 and Prof Sunday Ojo1, 1Tshwane University of Technology, South Africa and 2University of South Africa, South Africa
With the rapid growth of Web services in recent years, it is very difficult to choose suitable web services among those that offer similar functionality. Selecting the right web service involves not only discovering services on the basis of their functionality, but also assessing the quality aspects of those services. Quality of service (QoS) is considered a distinguishing factor between similar web services and plays a vital role in web service selection. To address this issue, this study proposes a model for determining the most suitable candidate web service by integrating the AHP (Analytic Hierarchy Process) and VIKOR (VIseKriterijumska Optimizacija I KOmpromisno Resenje) methods; the model evaluates and ranks alternatives involving conflicting criteria or criteria with different QoS requirements. The AHP method computes the weights assigned to QoS criteria using pairwise comparison. Thereafter, the ranking of the web services according to user-preferred criteria is obtained using the VIKOR method. Finally, a software prototype implementing AHP and VIKOR was developed. To illustrate and validate the proposed approach, data from the QWS dataset is used by the software prototype in a service selection process.
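A condensed sketch of the two steps (three services, three made-up QoS criteria; not the prototype's data): AHP weights from the principal eigenvector of a pairwise-comparison matrix, then the VIKOR compromise measure Q.

```python
import numpy as np

# AHP: pairwise comparison of criteria (response time, availability, cost)
P = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
vals, vecs = np.linalg.eig(P)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                               # normalized criteria weights

# decision matrix: rows = candidate services, cols = criteria (benefit form)
D = np.array([[0.9, 0.99, 0.4],
              [0.7, 0.95, 0.9],
              [0.8, 0.90, 0.6]])
f_best, f_worst = D.max(axis=0), D.min(axis=0)

S = (w * (f_best - D) / (f_best - f_worst)).sum(axis=1)   # group utility
R = (w * (f_best - D) / (f_best - f_worst)).max(axis=1)   # individual regret
v = 0.5                                                   # consensus weight
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
print("VIKOR ranking (lower Q is better):", np.argsort(Q))
```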
Web services, Web services selection, Quality of service, Multi-Criteria decision making, AHP, VIKOR
Manjula Pilaka, Fethi A. Rabhi and Madhushi Bandara, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
Regulatory processes are normally tracked by regulatory bodies in terms of monitoring safety, soundness, risk, policy and compliance. Such processes are loosely framed, and it is a considerable challenge for data scientists and academics to extract instances of such processes from event records and analyse their characteristics, e.g. whether they satisfy certain process compliance requirements. Existing approaches are inadequate in dealing with these challenges, as they demand both technical knowledge and domain expertise from users. In addition, the level of abstraction provided does not extend to the concepts required by a typical data scientist or business analyst. This paper extends a software framework, based on a semantic data model, that helps in deriving and analysing regulatory reporting processes from event repositories for complex scenarios. The key idea is to use complex business-like templates for expressing commonly used constraints associated with the definition of regulatory reporting processes, and to map these templates onto those provided by an existing process definition language. The architecture was evaluated for efficiency, compliance and impact by implementing a prototype using complex templates of the Declare ConDec language and applying it to a case study on process instances of Australian company announcements.
Regulatory Reporting, Process Extraction, Semantic Technology, Events
Rajendra Pratap, Sonia Sharma and Ankita Bhaskar, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India
Feedthrough blocks are communication channels present at the top chip level, alongside many hierarchical blocks, that ensure smooth interaction between two or more blocks. Since a feedthrough block is essentially a channel between blocks, its port positions and size are hard-fixed. If the feedthrough block is large, it often becomes a challenge to satisfy internal register-to-register timing for these blocks. In this manuscript, the authors present a simple technique to obtain controlled internal register-to-register timing for such large feedthrough blocks in big integrated chips.
VLSI, chip, Setup Fixing Techniques & Feedthrough Block
Ashwani Kumar Gupta and Dr. Rajendra Pratap, Department of ASIC, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India
Chemical Mechanical Planarization is a process of smoothing the wafer surface by exerting chemical and mechanical forces on the wafer. It is an important step in the IC fabrication process. To achieve planarity on the surface of an IC, dummy metal fills must be inserted. Dummy fill insertion is a time-consuming process for moderate and larger sized blocks or chips. Insertion of dummy metal fills affects the coupling capacitance of the signal metal layers, which causes signal integrity issues. In the last stages of design closure, while doing timing ECOs, re-doing dummy metal fills can cause timing/noise violations and make ECOs unpredictable. In this paper we suggest a methodology wherein an ECO can be implemented without re-running dummy metal fill on the complete block/chip. This saves ECO implementation time and reduces the risk of new signal integrity issues.
CMP (Chemical Mechanical Planarization), ECO (Engineering Change Order), ILD (Inter-Level dielectric), GDS (Graphic Data System), TCL (Tool Command Language), Crosstalk, Dummy Metal Fill, Coupling Cap, PnR (Place And Route).
Eric Ohana, Science and Engineering Faculty, Queensland University of Technology, Brisbane, Australia
The paper presents an optimisation of the baseline JPEG hardware implementation that improves the compression ratio for many image types. The baseline JPEG flow is briefly reviewed along with prior art; it is then explained how and where the LZW-based optimisation fits into this flow and what it replaces. The variations from a standard LZW compression flow are explained, along with why they are necessary in this specific hardware application. The microarchitecture of the hardware implementation and its FPGA build are then detailed. The various trade-offs between implementation decisions and compression efficiency are explained. Finally, comparison results between the baseline JPEG flow and the LZW-optimised one are shown and conclusions are drawn.
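For reference, a textbook software sketch of the core LZW loop (not the paper's CAM-based hardware, which replaces the dictionary lookup with a content-addressable memory): codes are emitted for progressively longer previously seen strings.

```python
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # initial single-byte codes
    next_code, s, out = 256, b"", []
    for b in data:
        sc = s + bytes([b])
        if sc in table:
            s = sc                      # extend the current match
        else:
            out.append(table[s])        # emit code for the longest match
            table[sc] = next_code       # learn the new string
            next_code += 1
            s = bytes([b])
    if s:
        out.append(table[s])
    return out

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
print(len(codes), "codes for", len(data), "input bytes")   # fewer codes than bytes
```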
JPEG, Huffman Encoding, LZW Compression, Content Addressable Memory (CAM), Cache Memory
Swaroop Ghosh and Rekha Govindaraj, Apple Inc., USA
We propose a novel spintronic accelerator using Domain Wall Memory (DWM) for string matching. Conventionally, string matching algorithms are implemented in software, FPGAs or general-purpose processors, which are energy and area intensive. We have exploited the features of DWM and magnetic tunnel junctions to realize a string matching algorithm known as the Knuth-Morris-Pratt (KMP) algorithm. Several innovative design ideas are presented for the individual components of KMP, such as the multi-bit comparator, counter and basic Boolean logic gates. Digital CMOS is used for the basic control circuitry required in the architecture, such as match enable and shifting the nanowire domain wall. The simulation results validate the functional performance of the proposed architecture.
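For reference, the standard software form of the KMP algorithm the accelerator implements (not the DWM circuit): a failure table is precomputed so the text is scanned without re-reading matched characters.

```python
def kmp_search(text, pattern):
    # failure function: longest proper prefix that is also a suffix
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # scan the text, falling back via the table on mismatches
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("ababcabcabababd", "ababd"))   # [10]
```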
Adebayo Seyi1 and Ogunseyi Taiwo2, 1Department of Information and Communication Engineering, China University of Mining and Technology, Xuzhou, China and 2Department of Information Security, Communication University of China, Beijing, China
There are genuine concerns about deploying the right transport connection on a particular routing protocol in order to have reliable, fast and robust communication in spite of the size and dynamics of the network topology. This work comparatively studies the individual implementation of reactive and proactive protocols over both UDP and TCP transport connections, using end-to-end delay, average throughput, jitter and packet delivery ratio (PDR) as QoS metrics. We study which combination of transport connection and routing protocol delivers the best QoS in simple and complex network scenarios, with the source and destination nodes fixed and the intermediate nodes moving randomly throughout the simulation time. Moreover, the intrinsic characteristics of the routing protocols with respect to the QoS metrics and transport connection are studied. Forty simulations were run for simple and complex multi-hop network models, and the results were analyzed and presented.
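A small sketch of how the named QoS metrics are computed (the packet log below is invented, not simulation output): PDR, average end-to-end delay, and jitter from send/receive records.

```python
import statistics

sent = {1: 0.00, 2: 0.05, 3: 0.10, 4: 0.15, 5: 0.20}   # pkt id -> send time (s)
recv = {1: 0.08, 2: 0.16, 3: 0.19, 5: 0.31}            # pkt 4 was dropped

delays = [recv[i] - sent[i] for i in sorted(recv)]
pdr = len(recv) / len(sent)                             # packet delivery ratio
e2e_delay = statistics.mean(delays)                     # average end-to-end delay
jitter = statistics.mean(abs(a - b) for a, b in zip(delays, delays[1:]))

print(f"PDR={pdr:.2f}, delay={e2e_delay*1000:.1f} ms, jitter={jitter*1000:.1f} ms")
```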
MANET, Wireless Network, Proactive, Reactive, QoS, UDP, TCP
Mohamed Rawidean Mohd Kassim and Ibrahim Mat, MIMOS, Ministry of International Trade and Industry, Kuala Lumpur, MALAYSIA
The rapid development of IoT (Internet of Things) based technologies has created tsunamis in almost every industry, and particularly in agriculture. This revolution has moved the industry from a statistical approach to a quantitative approach. These massive changes are shaking up existing agricultural methods and creating new opportunities, along with a range of new issues and challenges. Agricultural products will be in very high demand by 2050, owing to a 30% increase in world population. Human resources for agricultural development are becoming scarcer due to the migration of people to big cities. As a result, most agricultural activities need to be automated using state-of-the-art technologies. IoT and related technologies are a potential solution to these issues. This paper explores the latest applications of IoT in agriculture and highlights the issues and challenges.
IoT, smart farming, agriculture, network, architecture and wireless sensors.
Peter John A. Francisco, Department of Computer Science, University of the Philippines, Quezon City, Philippines
Many approaches have been proposed to integrate security activities into agile software development methodologies. These studies do not seem to have made the jump into practice, however, since, in our experience, most software development teams are not familiar with the range of methods developed for this purpose. This knowledge gap makes the task especially difficult for agile project managers and security specialists attempting to achieve the delicate balance of agility and security for the first time. In this study, we surveyed methods proposed in the current literature for integrating security activities into agile software engineering. From 11 proposed secure agile methods published between 2004 and 2017, we extracted 5 insights which practitioners in agile software development and security engineering can use to embed security more effectively into their software development flows. We then used the insights in a retrospective case study of a software engineering project in a fintech startup company, a high-risk industry in terms of security, and conclude that prior knowledge of the insights would have addressed major challenges in their security integration task.
Agile Process, Software Engineering, Security, Survey
Nobuaki Maki1, Ryotaro Nakata2, Shinichi Toyoda1, Yosuke Kasai1, Sanggyu Shin3 and Yoichi Seto1, 1Advanced Institute of Industrial Technology, Tokyo, Japan, 2Institute of Information Security, Yokohama City, Kanagawa, Japan and 3Tokai University, Hiratsuka City, Kanagawa, Japan
Recently, the threats of cyberattacks, especially targeted attacks, have been increasing rapidly, and a large number of cybersecurity incidents occur frequently. On the other hand, capable personnel are greatly lacking, and strengthening systematic human resource development to cultivate capabilities for cybersecurity activities is becoming an urgent issue. However, only a few parts of academia and the private sector in Japan can carry out cybersecurity exercises because of the high cost and inflexibility of commercial or existing training software. On this account, in order to conduct practical cybersecurity exercises cost-effectively and flexibly, we developed CyExec, a virtual-environment cybersecurity exercise system utilizing VirtualBox and Docker. We also implemented WebGoat, an open source deliberately insecure web application for security training, and our original cyberattack and defense training contents on CyExec.
Ecosystem, Virtualization, WebGoat, Cyberattack and Defense Exercise, Cyber Range Exercise.
Zeng Dangquan, Department of Information Science and Technology, Xiamen University Tan Kah Kee College, Zhangzhou, Fujian, China
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test for distinguishing between computers and humans, and has a very wide range of applications on the Internet. Most websites require users to submit CAPTCHAs when registering, logging in or submitting form data in order to improve the security of the site, thus avoiding malicious attacks by automated robots and spammers. In this paper, a text-based CAPTCHA using text adhesion and visual compensation is introduced. This CAPTCHA uses character-cascading and partial-defect techniques to effectively prevent machines from breaking it with character segmentation and machine learning, while human vision can easily separate characters that are layered together, fill in the missing parts of characters, and thus identify the CAPTCHA. In order to test the CAPTCHA's resistance to automatic machine identification, four kinds of specialized OCR recognition software and two online OCR recognition websites were used to identify 1000 CAPTCHAs. The test results show that none of the six OCR tools correctly identified a single complete CAPTCHA, and the probability of characters being unrecognized or misidentified is over 99.5%, which suggests that the CAPTCHA provides very high security.
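A toy sketch of the text-adhesion idea only (using Pillow with its default bitmap font; the fixed glyph metrics are assumptions, and visual compensation is not shown): characters drawn with a negative advance so segmentation-based OCR cannot cut them apart cleanly.

```python
import random
from PIL import Image, ImageDraw, ImageFont

def adhered_captcha(text, overlap=3, char_width=8):
    img = Image.new("L", (char_width * len(text) + 20, 30), color=255)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    x = 5
    for ch in text:
        y = 8 + random.randint(-3, 3)          # jitter the baseline
        draw.text((x, y), ch, fill=0, font=font)
        x += char_width - overlap              # advance less than a glyph width
    return img

adhered_captcha("W7K9P").save("captcha.png")
```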
CAPTCHA, text-adhesion, visual compensation, OCR, security
Zainab R. Alkindi, Mohamed Sarrab and Nasser Alzidi, Department of Computer Science, Sultan Qaboos University, Muscat, Oman
Mobile applications can collect large amounts of private user data, including bank details, contact numbers, photos, saved locations, etc. This poses privacy concerns for many users of mobile applications. In Android 6.0 and above, users can control app permissions: the system allows them to grant and block the dangerous permissions at any time. However, there are additional permissions used by apps (normal permissions) that cannot be controlled by users, which may lead to many privacy violations. In this paper, we present a new approach that gives users the ability to control applications' access to Android system resources and private data based on user-defined policies. This approach allows users to reduce the level of privacy violation by offering options that are not available in the Android permission system during installation and at run time. The proposed approach enables users to control app behavior, including the app's network connections, permission list, and app-to-app communication. It consists of four main components that check app behavior during installation and at run time, provide users with resource and data filtration, and allow users to take appropriate action to control data leakage by the application.
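A conceptual sketch of the policy-driven decision only (the policy format, package names and default rule are hypothetical, not the proposed system's code): a user-defined policy table consulted when an app requests a resource.

```python
user_policies = {
    ("com.example.chat", "CONTACTS"): "deny",
    ("com.example.chat", "INTERNET"): "allow",
    ("com.example.maps", "LOCATION"): "allow",
}

def check_access(app, resource, default="deny"):
    """Look up the user-defined rule; fall back to a safe default."""
    decision = user_policies.get((app, resource), default)
    print(f"{app} -> {resource}: {decision}")
    return decision == "allow"

check_access("com.example.chat", "CONTACTS")   # denied by user policy
check_access("com.example.chat", "CAMERA")     # no rule -> safe default deny
```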
User Privacy, Android, Security, Mobile App, Permissions.
Simi Bajaj, Shreejai Raj, Sanket Mantri and Kewal Wadibhasme, School of Computing, Engineering and Mathematics, Western Sydney University, Australia
Software development has been around for quite a while, but the progress made in the last three decades is quite remarkable. Every few years a new concept, programming language or framework for software development emerges, which leads to questions about the management and control of the software development process. The goal of this paper is to explore the importance of software in everyday life and the need for advanced software testing methodologies for producing reliable software products. Further, this report takes a deep dive into the challenges associated with distributed software projects, such as lack of effective collaboration, awareness of the project, and code conflicts and their resolution, which play a vital role in successful software development, and into how version control systems like GitHub can be a helpful tool in overcoming these challenges. GitHub is a powerful version control system, and we discuss how testers can harness its power in testing. This paper sheds light on how GitHub is used by software testers to gain benefits that are sometimes missing in conventional software testing methods.
GitHub, Software Testing, Collaborative Software Testing
Pimmanee Rattanawicha and Sutthipong Yungratog, Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand
To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1) the emotional perception (i.e. pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of visitors, a two-phase empirical study was conducted. In the first phase, 120 volunteer participants were asked to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour "Blue Cyan", the most frequently chosen colour in this phase, was then used as the foreground colour to develop three hotel & travel websites with three different colour contrast patterns for the second phase. A questionnaire was also developed from previous studies to collect emotional perception, trust, and purchase intention data from another group of 145 volunteer participants. The data analysis shows that, for visitors as a whole, colour contrast has significant effects on purchase intention. For male visitors, colour contrast significantly affects trust and purchase intention. Moreover, for Generation X and Generation Z visitors, colour contrast affects emotional perception, trust, and purchase intention. However, no significant effect of colour contrast was found for female or Generation Y visitors.
Colour Contrast, e-Commerce, Website Design
James Monks and Liwan Liyanage
Spatio-temporal data is becoming increasingly prevalent in our society. This has largely been spurred on by the capability of building arrays and sensors into everyday items, along with highly specialised measuring equipment becoming cheaper. The result of this prevalence can be seen in the wealth of data of this kind now available for analysis. Spatio-temporal data is particularly useful for contextualising events in other data sets by providing background information for a point in space and time. Problems arise, however, when the contextualising data and the data set of interest do not align in space and time in the exact way needed. This problem is becoming more common because the precise data recorded by GPS systems does not overlap with points of interest and is not easily generalised to a region. Interpolating data for the points of interest in space and time is therefore important, and a number of methods have been proposed with varying levels of success. These methods are lacking in usability, and the models are limited by strict assumptions and constraints. This paper proposes a new method for the interpolation of points in the spatio-temporal scope, based on a set of known points. It utilises an ensemble of models to take into account the nuanced directional effects in both space and time. This ensemble of models allows it to be more robust to missing values, which are common in spatio-temporal data sets due to variation in conditions across space and time. The method is inherently flexible: it can be implemented without any further customisation while allowing the user to input and customise their own underlying model based on domain knowledge. It addresses the usability issues of other methods, accounts for directional effects and allows full control over the interpolation process.
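A baseline sketch for comparison (plain inverse-distance weighting, not the paper's directional ensemble; the scale factors are assumptions): interpolating a value at a space-time query point from known (x, y, t, value) observations.

```python
import numpy as np

def st_idw(query, points, values, space_scale=1.0, time_scale=1.0, p=2):
    """query = (x, y, t); the scales trade off spatial vs temporal distance."""
    pts = np.asarray(points, dtype=float)
    q = np.asarray(query, dtype=float)
    d_space = np.hypot(pts[:, 0] - q[0], pts[:, 1] - q[1]) / space_scale
    d_time = np.abs(pts[:, 2] - q[2]) / time_scale
    dist = np.sqrt(d_space**2 + d_time**2)
    if np.any(dist == 0):                       # exact hit on a known point
        return values[int(np.argmin(dist))]
    w = 1.0 / dist**p
    return float(w @ np.asarray(values) / w.sum())

obs = [(0, 0, 0), (1, 0, 1), (0, 1, 2), (1, 1, 3)]   # (x, y, t) of known points
vals = [10.0, 12.0, 11.0, 15.0]
print(st_idw((0.5, 0.5, 1.5), obs, vals))            # weighted space-time estimate
```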
Pyeong Kang Kim1, Hyung Heon Kim1, Tae Woo Kim1 and Young Kyun Cha2, 1Planning & Strategic Team, INNODEP Inc., Seoul, Korea and 2School of Information Security, Korea University, Seoul, Korea
Nowadays, the need for research on intelligent video monitoring systems is increasing worldwide. Object detection using deep-learning-based convolutional neural networks, the core technology of intelligent video monitoring systems, is widely used due to its proven performance. Nonetheless, deep-learning-based object detection requires many hardware resources because the videos must be decoded before analysis. Therefore, this article suggests an advanced object recognition technique that performs object detection on the compressed video stream in order to reduce resource consumption as well as improve performance, and confirms via performance evaluation that speed and recognition rate improved compared to existing algorithms such as YOLO, SSD, and Faster R-CNN.
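A conceptual sketch of the compressed-domain cue (the motion vectors are faked with random data; a real system would parse them from the H.264/HEVC stream without decoding frames, and the threshold is an assumption): flagging macroblocks whose motion-vector magnitude suggests a moving object.

```python
import numpy as np

rng = np.random.default_rng(0)
mv = rng.normal(0, 0.5, (68, 120, 2))          # per-macroblock (dx, dy) vectors
mv[30:38, 50:62] += 6.0                        # a hypothetical moving object

magnitude = np.linalg.norm(mv, axis=2)
moving = magnitude > 3.0                       # motion threshold (assumed)
ys, xs = np.nonzero(moving)
print("candidate region (macroblocks):",
      (ys.min(), xs.min()), "to", (ys.max(), xs.max()))
```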
object detection, convolutional neural network, Inception V4, Motion Vector
Esra Çakır and Ziya Ulukan, Faculty of Engineering and Technology, Dept. of Industrial Engineering, Galatasaray University, Istanbul, Turkey
Ecotourism is a strategy that guarantees the sustainability of the earth's natural resources and promotes local people's economic development while maintaining and safeguarding their social and cultural integrity. However, some activities may not be suitable for sustainable environmental conditions; therefore, activity selection is important in tourism management. In this paper, fuzzy linguistic Prolog is used to match ecotourism activities with suitable regions. Bousi~Prolog is a fuzzy Prolog extension that enables working with fuzzy linguistic tools, guiding Prolog systems towards the computing-with-words paradigm, which can be very helpful for handling linguistic resources.
Fuzzy Prolog, Fuzzy linguistic programming, Bousi~Prolog, Sustainable ecotourism, Ecotourism activity
Nitin Khosla1 and Dharmendra Sharma2, 1Assistant Director - Performance Engineering, ICTCAPM, Department of Home Affairs, Canberra, AUSTRALIA and 2Professor - Computer Science, University of Canberra, AUSTRALIA
The aim of the semi-supervised learning approach in this paper is to improve supervised classifiers, to investigate a model for forecasting unpredictable load on a system, and to predict CPU utilization in a big enterprise application environment. This model forecasts the likelihood of a burst in web traffic to the IT system in use and predicts CPU utilization under stress conditions. The enterprise IT infrastructure consists of many enterprise applications running in a real-time system. Load features are extracted by analyzing the patterns of workload demand hidden in the transactional data of the applications. This approach generates synthetic workload patterns, executes use-case scenarios in the test environment, and uses our model to predict excessive CPU utilization under peak load and stress conditions for validation purposes. The Expectation Maximization method with co-learning attempts to extract and analyze the parameters that maximize the likelihood of the model after substituting the unknown labels. Workload profiling and prediction have enormous potential to optimize the usage of IT resources with low risk.
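A simplified sketch of the semi-supervised idea only (synthetic data and a self-training loop rather than the paper's EM co-learning; the confidence threshold is an assumption): confident pseudo-labels on unlabeled load samples iteratively improve a supervised classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 4))                     # workload feature vectors
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) # 1 = CPU saturation risk
labeled = rng.choice(500, 30, replace=False)       # only 30 labeled samples

X_lab, y_lab = X[labeled], y_true[labeled]
X_unl = np.delete(X, labeled, axis=0)

clf = LogisticRegression().fit(X_lab, y_lab)
for _ in range(5):                                 # self-training rounds
    proba = clf.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.95           # assumed threshold
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])   # absorb pseudo-labeled data
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unl = X_unl[~confident]
    clf = LogisticRegression().fit(X_lab, y_lab)

print("accuracy on all data:", clf.score(X, y_true))
```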
Semi-supervised learning, Performance Load and stress testing, Co-learning, Machine learning applications.
Taewoo Kim1, Hyungheon Kim2, Pyeongkang Kim1 and Youngkyun Cha2, 1Technology Laboratory, Innodep Inc., Seoul, Korea and 2Graduate School of Information Security, Korea University, Seoul, Korea
CCTV is becoming more important in solving incidents. Various solutions have been developed for effective control, and the GIS solution that displays CCTV video on a map is the most useful among them. On the other hand, in the current system, since CCTV does not have a sensor such as a digital compass, it displays only the location of the CCTV, not the area that the CCTV views. In this paper, we present a methodology to indicate which area a CCTV camera is monitoring in the existing system. This is accomplished by showing users the video at specific PTZ values and receiving information from users about the area.
PTZ Region, GIS, Parameter Estimation