Accepted Papers
Construction Mathematical Model Of Spectrometer Based On Curved Prism

Lei Feng, Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China

ABSTRACT

Wide spectral coverage, the combination of imaging with spectroscopy, and fine spectral detection capability are the outstanding advantages of an imaging spectrometer. Rich spectral information combined with the spatial image of an object point greatly improves the accuracy of target detection, expands the function of traditional detection technology, and enables qualitative analysis of target characteristics. The spectrometer therefore plays a role other technologies cannot replace, and it has been widely used in many military and civilian fields, such as land and ocean remote sensing, remote-sensing monitoring of pollutants in the atmosphere, soil and water, military target detection, medical spectral imaging diagnosis, and scientific experiments. The curved prism spectrometer is widely used because of its high energy throughput and freedom from ghost images. However, a curved prism spectrometer is not a coaxially symmetric system, and its aberration theory is complex. It is therefore necessary to establish a numerical model and construct an initial structure that provides a good starting point for system optimization. In a practical prism-based imaging spectrometer, many aberrations arise when a ray is incident on the surface of each element, so establishing a mathematical model to analyze these aberrations is essential in the design. A curved prism is a non-coaxial prism obtained by figuring the front and rear surfaces of a triangular prism into two spheres; since these surfaces are not coaxial with the optical axis, its characteristics are complex. First, on the basis of primary aberration theory, a numerical calculation model of the curved prism is established, and the optimal object distance and the effective incident angle of the curved prism are solved according to the principle of minimum aberration. For given system parameters, the coordinates of the object points are known, and the numerical model of the curved prism spectrometer is then established. The vector method is used to solve the incident and output direction vectors of given rays. After the rays are traced through the system, an optical-path extremum function is established and the corresponding second-order partial differential equations are derived. The surface equation of each element is expanded as a higher-order Taylor series, so that each surface is expressed as a function of the incident point and the structural parameters. A set of partial differential equations is thus constructed, the least-squares method is used to minimize the system, and the initial structural parameters are then calculated.
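The vector ray-tracing step mentioned in the abstract can be illustrated with a small numerical sketch. The code below is only an illustration, not the authors' model: it applies Snell's law in vector form at one spherical surface of a curved prism, the operation repeated at each element before the optical-path extremum equations are assembled. All surface positions and refractive indices are hypothetical.

```python
# Illustrative sketch: vector-form refraction at a spherical surface (hypothetical values).
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (Snell's law in vector form)."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    if np.dot(n, d) > 0:           # make the normal face the incoming ray
        n = -n
    mu = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                 # total internal reflection
    return mu * d + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * n

def sphere_normal(point, center):
    """Outward normal of a spherical surface at an intersection point."""
    v = point - center
    return v / np.linalg.norm(v)

# Example: a ray hitting the front spherical surface of a curved prism (glass index ~1.5).
hit = np.array([0.0, 5.0, 100.0])                       # hypothetical intersection point
normal = sphere_normal(hit, np.array([0.0, 0.0, 150.0]))
t = refract(np.array([0.0, 0.0, 1.0]), normal, 1.0, 1.5)
print("refracted direction:", t)
```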

KEYWORDS

Mathematical computation, partial differential equations, vector solving, curved prism


An Introduction to Quantum Computers

Hamidreza Bolhasani, Amir Masoud Rahmani and Farid Kheiri, Department of Computer Engineering, Science and Research branch, Islamic Azad University, Tehran, Iran

ABSTRACT

Since Richard Feynman first proposed the idea of quantum computing in 1982, it has become a field of interest for many physicists and computer scientists. Although the concept was introduced more than 30 years ago, it is still considered largely unexplored, and several subjects remain open for research. Accordingly, conceptual and theoretical reviews remain useful. In this paper, a brief history and the fundamental ideas of quantum computers are introduced, with a focus on the architecture.
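As a minimal illustration of the qubit and gate concepts the paper reviews (not an excerpt from the paper), a qubit can be modelled as a unit vector in C^2 and a gate as a unitary matrix acting on it:

```python
# Minimal qubit/gate illustration: Hadamard gate applied to |0> gives an equal superposition.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                       # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2     # Born rule: measurement probabilities
print(state, probabilities)            # -> amplitudes ~0.707, probabilities 0.5 / 0.5
```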

KEYWORDS

Quantum, Computer, Hardware, Qubit, Gate


Design and Implementation of User-Centered Adaptive Search Engine

Shailja Dalmia, Ashwin T S and Ram Mohana Reddy Guddeti, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India

ABSTRACT

With the ever-growing variety of information, the retrieval demands of different users are so multifarious that traditional search engines cannot cope with heterogeneous retrieval results of such magnitude. Advances in user-centered adaptive search engines make it possible to achieve groundbreaking retrieval results efficiently for high-quality content. Previous work in this field has relied on heavy server loads to achieve good retrieval results, but with limited extensibility and without considering content generated on demand. To address this gap, we propose a novel adaptive search engine model and describe how it is realized in a distributed cluster environment. Using an improved topic-oriented web crawler together with a user-interface-based information extraction technique, the system produces a renewed set of user-centered retrieval results more efficiently than existing methods. The proposed method outperforms prevailing methods by factors of 1.5 and 2 for the crawler and indexer, respectively, while extracting semantic information from the Deep Web with improved precision.
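For readers unfamiliar with the inverted-indexing component named in the keywords, the sketch below shows the basic data structure in its simplest form. It is illustrative only; the paper's distributed indexer is far more elaborate, and the documents and IDs here are made up.

```python
# Toy inverted index: map each term to the set of documents containing it.
from collections import defaultdict

docs = {
    1: "adaptive search engine for deep web content",
    2: "distributed web crawler harvests deep web pages",
    3: "user centered ranking of search results",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every query term (simple AND semantics)."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("deep web"))   # -> {1, 2}
```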

KEYWORDS

Search Engine, WWW, Web Content Mining, Inverted Indexing, Hidden Crawler, Distributed Web Crawler, Precision, Deep Web


Identifying Data and Information Streams in Cyberspace: A Multi-Dimensional Perspective

Ikwu Ruth and Louvieris Panos, Department of Computer Sciences, Brunel University, London

ABSTRACT

Cyberspace has gradually replaced the physical reality, its role evolving from a simple enabler of daily live processes to a necessity for modern existence. As a result of this convergence of physical and virtual realities, for all processes being critically dependent on networked communications, information representative of our physical, logical and social thoughts are constantly being generated in cyberspace. The interconnection and integration of links between our physical and virtual realities create a new hyperspace as a source of data and information. Additionally, significant studies in cyber analysis have predominantly revolved around a single linear analysis of information from a single source of evidence (The Network). These studies are limited in their ability to understand the dynamics of relationships across the multiple dimensions of cyberspace. This paper introduces a multi-dimensional perspective for data identification in cyberspace. It provides critical discussions for identifying entangled relationships amongst entities across cyberspace.

KEYWORDS

Cyberspace, Data-streams, Multi-Dimensional Cyberspace


An Intelligent Internet-of-Things (IoT) System to Detect and Predict Amenity Usage

Solomon Cheung1, Yu Sun1 and Fangyan Zhang2, 1Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 and 2ASML, San Jose, CA, 95131

ABSTRACT

As an act of disposing of waste and maintaining homeostasis, humans have to use the restroom multiple times a day. One item consumed in the process is toilet paper, and it often runs out at the most inconvenient times; being stuck without it is a situation most people would rather avoid. Since humans cannot guarantee a timely resupply on their own, this task can be delegated to a computer. The approach we selected uses a pair of laser sensors to detect whether toilet paper is present. An ultrasound sensor detects whether a person is nearby, and a notification is sent to a database. The online app, PaperSafe, takes the stored information and displays it on a device for quick access. Once a sufficient amount of data is acquired, a machine learning algorithm can be trained to predict the next supply date, optimized for the specific scenario.
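A rough sketch of the sense-and-notify loop described above is shown below. It is not the authors' PaperSafe code: the sensor-reading functions and database endpoint are hypothetical stand-ins for the laser/ultrasound hardware and the cloud database used by the app.

```python
# Illustrative sensing loop: poll sensors, post readings to a database for later ML training.
import time
import requests  # assumed available for posting readings to a REST endpoint

DB_URL = "https://example-database.invalid/api/readings"   # hypothetical endpoint

def read_laser_pair():
    """Return True if the laser beam is interrupted by a toilet-paper roll (stub)."""
    return False    # stub: replace with GPIO reads on the actual device

def read_ultrasound_cm():
    """Return distance to the nearest object in centimetres (stub)."""
    return 250.0    # stub: replace with an actual sensor reading

while True:
    record = {
        "timestamp": time.time(),
        "paper_present": read_laser_pair(),
        "person_nearby": read_ultrasound_cm() < 100.0,
    }
    requests.post(DB_URL, json=record, timeout=5)   # accumulate data for the prediction model
    time.sleep(60)
```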

KEYWORDS

Amenity, Homeostasis, Machine Learning, Mobile Application


Survey of Streaming Data With Dynamic Compact Streaming Algorithm

Ayodeji Oyewale1 and Chris Hughes2, 1School of Computing, Science and Engineering, University of Salford, Salford, Manchester and 2The Crescent, Salford, Manchester, United Kingdom

ABSTRACT

A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) stimulates the exchange of ideas and interaction between these two related fields, and makes it possible to understand, apply and assess the knowledge and skills required for developing and applying data mining systems. With present technology, companies can collect vast amounts of data with relative ease; indeed, many now have more data than they can handle. A substantial portion of this data consists of large unstructured data sets, which can amount to 90 percent of an organization's data. With data quantities growing steadily, this explosion of data is putting a strain on infrastructure, as companies have to expand their data center capacity with more servers and storage. This study conceptualizes the handling of massive data as a stream mining problem over continuous data streams and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in streaming data.
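The constant-memory, one-pass processing that stream mining requires can be illustrated with a single simple detector. The paper proposes an ensemble of unsupervised methods; the sketch below is only a toy example (a running z-score over Welford's online statistics) with made-up data and threshold.

```python
# Single-detector sketch of streaming anomaly detection with constant memory.
class RunningZScore:
    """Welford's online mean/variance; flags points far from the running mean."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2, self.threshold = 0, 0.0, 0.0, threshold

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 10:                      # warm-up period
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.threshold

detector = RunningZScore()
for value in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 95]:
    if detector.is_anomaly(value):
        print("anomaly:", value)             # -> anomaly: 95
    detector.update(value)
```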

KEYWORDS

Stream data, Stream Mining, Compact data structures, FP Tree, Path Adjustment Method


Compression and Reconstruction of Angiographic Images Using Compressive Sensing

N. Rada, L. E. Mendoza and E. G. Florez, Telecommunications Engineering, Biomedical Engineering, Mechanical Engineering, Research Group in Mechanical Engineering, University of Pamplona, Colombia

ABSTRACT

This article presents a robust compression method known as compressive sensing (CS). CS allows sparse signals to be reconstructed from very few samples, in contrast to what the Shannon-Nyquist theorem would require. In this article, the discrete cosine transform and the wavelet transform were used to find the most adequate sparse space. Angiographic images were used and reconstructed with algorithms such as Large-scale Sparse Reconstruction (SPGL) and Gradient Projection for Sparse Reconstruction (GPSR). It was demonstrated that using the transpose of the combined wavelet-cosine transform achieves a more satisfactory sparse space than those obtained in other research. Finally, it was demonstrated that CS is well suited to compressing angiographic images, with a maximum reconstruction error of 3.56% for SPGL.
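The basic compressive-sensing pipeline behind the abstract can be sketched in a few lines: sample a signal that is sparse in a transform domain with a random matrix, then recover the sparse coefficients by L1-regularized optimization. The sketch below uses a DCT basis and a plain iterative soft-thresholding loop rather than the SPGL/GPSR solvers of the paper; sizes, sparsity level and step size are arbitrary choices.

```python
# Illustrative compressive sensing: y = Phi * Psi * x with x sparse, recovered via ISTA.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                        # signal length, measurements, nonzeros

coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = idct(coeffs, norm="ortho")         # signal is sparse in the DCT domain

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
A = Phi @ idct(np.eye(n), norm="ortho", axis=0)   # combined sensing matrix Phi * Psi
y = Phi @ signal                                  # m << n compressed measurements

x = np.zeros(n)                             # ISTA: gradient step followed by soft threshold
step, lam = 1.0 / np.linalg.norm(A, 2) ** 2, 0.01
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - coeffs) / np.linalg.norm(coeffs))
```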

KEYWORDS

Compressive Sensing, sparse signal, images, reconstruction, SPGL1, SPSR.


Optimizing the Performance of Convolutional Neural Networks on Raspberry PI for Real-Time Object Detection

Hyun Woo Jung, Hankuk Academy of Foreign Studies, Yongin, South Korea

ABSTRACT

Deep learning has facilitated major advancements in various fields, including image detection. This paper is an exploratory study on improving the performance of Convolutional Neural Network (CNN) models in environments with limited computing resources, such as the Raspberry Pi. A pretrained state-of-the-art model for near-real-time object detection in videos, the YOLO ("You Only Look Once") CNN, was selected for evaluating strategies to optimize runtime performance. Various performance analysis tools provided by the Linux kernel were used to measure CPU time and memory footprint. Our results show that loop parallelization, static compilation of weights, and flattening of convolution layers reduce the total runtime by 85% and the memory footprint by 53% on a Raspberry Pi 3 device. These findings suggest that the methodological improvements proposed in this work can reduce the computational overhead of running CNN models on devices with limited computing resources.
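One common way to "flatten" a convolution layer is the im2col trick, which unfolds image patches so the convolution becomes a single matrix multiply. The sketch below illustrates that general technique in miniature; it is not the authors' modified YOLO code, and the shapes are arbitrary.

```python
# im2col sketch: turn a 3x3 convolution over a single-channel map into one matrix multiply.
import numpy as np

def im2col(x, kh, kw):
    """Unfold all kh x kw patches of a single-channel image into rows of a matrix."""
    h, w = x.shape
    cols = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            cols.append(x[i:i + kh, j:j + kw].ravel())
    return np.array(cols)                        # shape (out_h * out_w, kh * kw)

x = np.random.rand(8, 8).astype(np.float32)      # toy input "feature map"
kernel = np.random.rand(3, 3).astype(np.float32)

patches = im2col(x, 3, 3)                        # flatten once ...
flat_out = patches @ kernel.ravel()              # ... then the convolution is one GEMM
out = flat_out.reshape(6, 6)                     # same result as a direct 3x3 sliding window
```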

KEYWORDS

Deep Learning, Convolutional Neural Networks, Raspberry Pi, real-time object detection


A New Hybrid Descriptor Based on Spatiogram and Region Covariance Descriptor

Niloufar Salehi Dastjerdi and M. Omair Ahmad, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada

ABSTRACT

Image descriptors play an important role in any computer vision system, e.g. object recognition and tracking. Effective representation of an image is challenging due to significant appearance changes, viewpoint shifts, lighting variations and varied object poses. These challenges have led to the development of several features and their representations. The spatiogram and region covariance are two excellent image descriptors that are widely used in computer vision. The spatiogram is a generalization of the histogram that stores moments of the coordinates of the pixels falling into each bin. It captures richer appearance information because it encodes not only information about the range of the image function, as histograms do, but also information about its (spatial) domain. However, it has the drawback that multimodal spatial patterns cannot be modelled well. The region covariance descriptor provides a compact and natural way of fusing different visual features inside a region of interest, but it is based on the global distribution of pixel features inside the region and loses the local structure. In this paper, we aim to overcome these drawbacks. To this end, we propose the r-spatiogram and then present a new hybrid descriptor that combines the r-spatiogram with the traditional region covariance descriptor. The results show that our descriptors have improved discriminative capability compared with other descriptors.
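As a point of reference for the hybrid descriptor, the plain region covariance descriptor the abstract builds on can be computed as below. The per-pixel feature set used here (coordinates, intensity, gradient magnitudes) is a common choice but not necessarily the paper's exact feature set.

```python
# Baseline region covariance descriptor: covariance of per-pixel features inside a region.
import numpy as np

def region_covariance(region):
    """Covariance of per-pixel features [x, y, I, |dI/dx|, |dI/dy|] inside a region."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))   # gradients along rows (y) and columns (x)
    feats = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)                          # 5x5 symmetric positive semi-definite matrix

patch = np.random.rand(32, 32)                    # toy image region
C = region_covariance(patch)
print(C.shape)                                    # -> (5, 5)
```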

KEYWORDS

Feature Descriptor, Spatiogram, Region Covariance


Digitization and Transliteration of Script Identified Words from Handwritten Bilingual Documents

Ranjana S. Zinjore1 and Rakesh J. Ramteke2, 1Department of Computer Science, G.G. Khadse College, Muktainagar and 2School of Computer Sciences, KBC North Maharashtra University, Jalgaon

ABSTRACT

Optical Character Recognition has special significance in a multilingual, multi-script country like India, where a single document may contain words in two or more languages or scripts. There is a need to digitize such documents for easy communication and storage. This is also useful in applications such as processing handwritten messages on social media and processing handwritten criminal records for judicial purposes. This paper describes the approach used to digitize handwritten bilingual documents consisting of Marathi and English. The approach has three phases. The first phase focuses on preprocessing the handwritten bilingual document and resolving merged line segmentation; an algorithm, Two_Fold_Word_Segmentation, is developed to extract words from lines, and a fusion of two feature extraction methods is used for script identification. The second phase focuses on recognition of the script-identified words using two different feature extraction methods: the first is based on a combination of structural and statistical features, and the second on the Histogram of Oriented Gradients. A K-Nearest Neighbor classifier gives better recognition accuracy with the second feature extraction method than with the first. Finally, in the third phase, digitization and transliteration of the recognized words are performed. A graphical user interface is designed to convert the transliterated text into speech, which helps blind and visually impaired people read books containing bilingual text.
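The second recognition pipeline described above (HOG features classified with K-Nearest Neighbors) can be assembled from standard library implementations, as sketched below. This is illustrative only, not the authors' code: the image size, HOG parameters, k, and the random training data are placeholder assumptions.

```python
# HOG + KNN word recognition sketch using scikit-image and scikit-learn.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(word_images):
    """Extract a HOG descriptor for each grayscale, fixed-size word image."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in word_images])

# Hypothetical data: 64x128 grayscale word images with class labels.
train_images = [np.random.rand(64, 128) for _ in range(20)]
train_labels = np.random.randint(0, 2, size=20)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(hog_features(train_images), train_labels)
prediction = knn.predict(hog_features([np.random.rand(64, 128)]))
```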

KEYWORDS

Digitization, Transliteration, Script Identification, Histogram of Oriented Gradient, K-Nearest Neighbor


A Hybrid-based Architecture for Web Service Selection

Sandile Mhlanga1, Dr Tawanda Blessing Chiyangwa2, Dr Lall Manoj1 and Prof Sunday Ojo1, 1Tshwane University of Technology, South Africa and 2University of South Africa, South Africa

ABSTRACT

With the rapid growth of Web services in recent years, it is very difficult to choose suitable web services among those offering similar functionality. Selecting the right web service involves not only discovering services on the basis of their functionality, but also assessing the quality aspects of those services. Quality of service (QoS) is considered a distinguishing factor between similar web services and plays a vital role in web service selection. To address this issue, this study proposes a model for determining the most suitable candidate web service by integrating the AHP (Analytic Hierarchy Process) and VIKOR (Visekriterijumska Optimizacija I Kompromisno Resenje) methods. The aim of the model is to evaluate and rank alternatives characterized by conflicting criteria or by different QoS requirements. The AHP method computes the weights assigned to the QoS criteria using pairwise comparison; the ranking of the web services according to the user's preferred criteria is then obtained using the VIKOR method. Finally, a software prototype implementing AHP and VIKOR was developed. To illustrate and validate the proposed approach, data from the QWS dataset are used by the software prototype in a service selection process.
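The two-stage pipeline described above can be sketched compactly: AHP turns a pairwise-comparison matrix into criteria weights, and VIKOR then ranks the alternatives by a compromise index. The toy numbers below are illustrative stand-ins; the paper applies the method to the QWS dataset.

```python
# AHP weights from a pairwise comparison matrix, then VIKOR ranking of alternatives.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

def vikor(decision, weights, benefit, v=0.5):
    """Return the VIKOR compromise index Q (lower is better) for each alternative."""
    f_best = np.where(benefit, decision.max(axis=0), decision.min(axis=0))
    f_worst = np.where(benefit, decision.min(axis=0), decision.max(axis=0))
    norm = weights * (f_best - decision) / (f_best - f_worst)
    S, R = norm.sum(axis=1), norm.max(axis=1)          # group utility and individual regret
    return (v * (S - S.min()) / (S.max() - S.min())
            + (1 - v) * (R - R.min()) / (R.max() - R.min()))

# Toy example: 3 candidate services, criteria = [response time (cost), availability (benefit)].
pairwise = np.array([[1.0, 3.0], [1 / 3.0, 1.0]])      # response time judged 3x as important
decision = np.array([[120.0, 0.99], [80.0, 0.95], [200.0, 0.999]])
weights = ahp_weights(pairwise)
ranking = np.argsort(vikor(decision, weights, benefit=np.array([False, True])))
print("services ranked best-to-worst:", ranking)
```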

KEYWORDS

Web services, Web services selection, Quality of service, Multi-Criteria decision making, AHP, VIKOR


Semantic Process Based Framework for Regulatory Reporting Process Management

Manjula Pilaka, Fethi A. Rabhi and Madhushi Bandara, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

ABSTRACT

Regulatory processes are normally tracked by regulatory bodies in order to monitor safety, soundness, risk, policy and compliance. Such processes are loosely framed, and it is a considerable challenge for data scientists and academics to extract instances of them from event records and analyse their characteristics, e.g. whether they satisfy certain process compliance requirements. Existing approaches are inadequate in dealing with these challenges, as they demand both technical knowledge and domain expertise from the users, and the level of abstraction they provide does not extend to the concepts required by a typical data scientist or business analyst. This paper extends a software framework based on a semantic data model that helps derive and analyse regulatory reporting processes from event repositories for complex scenarios. The key idea is to use complex, business-like templates for expressing commonly used constraints associated with the definition of regulatory reporting processes, and to map these templates onto those provided by an existing process definition language. The architecture was evaluated for efficiency, compliance and impact by implementing a prototype using complex templates of the Declare/ConDec language and applying it to a case study on process instances of Australian company announcements.
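To give a flavor of the kind of Declare/ConDec template such constraints are mapped onto, the sketch below checks the standard "response" template (every occurrence of activity A must eventually be followed by activity B) over a toy event trace. It is not the framework's own code, and the event names are hypothetical.

```python
# Declare "response(a, b)" check over one trace: every 'a' is eventually followed by a 'b'.
def satisfies_response(trace, a, b):
    pending = False
    for event in trace:
        if event == a:
            pending = True      # an 'a' is now waiting for a later 'b'
        elif event == b:
            pending = False     # the latest 'a' (and all earlier ones) are satisfied
    return not pending

trace = ["announce_report", "lodge_with_regulator", "announce_report", "publish"]
print(satisfies_response(trace, "announce_report", "lodge_with_regulator"))  # -> False
```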

KEYWORDS

Regulatory Reporting, Process Extraction, Semantic Technology, Events


Strategy to Fix Register-To-Register Timing for Large Feed Through Blocks Having Limited Internal Pipelines

Rajendra Pratap, Sonia Sharma and Ankita Bhaskar, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India

ABSTRACT

Feedthrough blocks are communication channels present at the top chip level, connecting many hierarchical blocks to ensure smooth interaction between two or more blocks. Since a feedthrough block acts as a channel between blocks, its port positions and size are hard-fixed. If the feedthrough block is large, it often becomes a challenge to satisfy its internal register-to-register timing. In this manuscript, the authors present a simple technique for controlling the internal register-to-register timing of such large feedthrough blocks in big integrated chips.

KEYWORDS

VLSI, Chip, Setup Fixing Techniques, Feedthrough Block


Methodology to Reduce Run Time of Timing/Functional ECO

Ashwani Kumar Gupta and Dr. Rajendra Pratap, Department of ASIC, einfochips (An Arrow Company), Noida, Uttar Pradesh, India

ABSTRACT

Chemical Mechanical Planarization (CMP) is a process of smoothing the wafer surface by exerting chemical and mechanical forces on the wafer, and it is an important step in the IC fabrication process. To achieve planarity on the surface of the IC, dummy metal fill must be inserted. Dummy fill insertion is a time-consuming process for moderately sized and larger blocks or chips. Insertion of dummy metal fill affects the coupling capacitance of the signal metal layers, which can cause signal integrity issues. In the final stages of design closure, while implementing timing ECOs, redoing dummy metal fill can introduce timing/noise violations and make the ECOs unpredictable. In this paper, we suggest a methodology in which an ECO can be implemented without re-running dummy metal fill on the complete block/chip. This saves ECO implementation time and reduces the risk of new signal integrity issues.

KEYWORDS

CMP (Chemical Mechanical Planarization), ECO (Engineering Change Order), ILD (Inter-Level dielectric), GDS (Graphic Data System), TCL (Tool Command Language), Crosstalk, Dummy Metal Fill, Coupling Cap, PnR (Place And Route).


A Systematic Evaluation of MANET Routing Protocols over UDP and TCP in Multi-Hop Wireless Network

Adebayo Seyi1 and Ogunseyi Taiwo2, 1Department of Information and Communication Engineering, China University of Mining and Technology, Xuzhou, China and 2Department of Information Security, Communication University of China, Beijing, China

ABSTRACT

There are genuine concerns about deploying the right transport connection on a particular routing protocol in order to have reliable, fast and robust communication in spite of the size and dynamics of the network topology. This work comparatively studies the implementation of reactive and proactive protocols over both UDP and TCP transport connections, using end-to-end delay, average throughput, jitter and packet delivery ratio (PDR) as QoS metrics. We study which combination of transport connection and routing protocol delivers the best QoS in simple and complex network scenarios, with the source and destination nodes fixed and the intermediate nodes moving randomly throughout the simulation time. Moreover, the intrinsic characteristics of the routing protocols with regard to the QoS metrics and transport connection are studied. Forty simulations were run for the simple and complex multi-hop network models, and the results were analyzed and presented.
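The QoS metrics used in this comparison have simple definitions that can be computed directly from per-packet records, as sketched below. The packet records are made-up stand-ins for what a simulator trace file would provide.

```python
# Compute PDR, average end-to-end delay, jitter and throughput from toy packet records.
sent = 5                                                       # packets transmitted
received = [                                                   # (send_time s, recv_time s, size bytes)
    (0.00, 0.020, 512), (0.10, 0.135, 512), (0.20, 0.228, 512), (0.30, 0.342, 512),
]

delays = [rx - tx for tx, rx, _ in received]
pdr = len(received) / sent                                     # packet delivery ratio
avg_delay = sum(delays) / len(delays)                          # average end-to-end delay
jitter = sum(abs(d2 - d1) for d1, d2 in zip(delays, delays[1:])) / (len(delays) - 1)
duration = received[-1][1] - received[0][0]                    # first send to last receive
throughput = sum(size for _, _, size in received) * 8 / duration   # bits per second

print(pdr, avg_delay, jitter, throughput)
```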

KEYWORDS

MANET, Wireless Network, Proactive, Reactive, QoS, UDP, TCP


Balancing Security and Agility in Software Engineering: A Survey of Secure Agile Software Development Methods

Peter John A. Francisco, Department of Computer Science, University of the Philippines, Quezon City, Philippines

ABSTRACT

Many approaches have been proposed for integrating security activities into agile software development methodologies. These studies do not seem to have made the jump into practice, however, since, in our experience, most software development teams are not familiar with the range of methods developed for this purpose. This knowledge gap makes the task especially difficult for agile project managers and security specialists attempting to achieve the delicate balance of agility and security for the first time. In this study, we surveyed the methods proposed in the current literature for integrating security activities into agile software engineering. From 11 proposed secure agile methods published between 2004 and 2017, we extracted 5 insights that practitioners in agile software development and security engineering can use to embed security into their software development flows more effectively. We then used the insights in a retrospective case study of a software engineering project in a fintech startup, a high-risk industry in terms of security, and conclude that prior knowledge of the insights would have addressed major challenges in their security integration task.

KEYWORDS

Agile Process, Software Engineering, Security, Survey

