Rajendra Pratap, Sonia Sharma and Ankita Bhaskar, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India
Feedthrough blocks are communication channels present at the top chip level that ensure smooth interaction between two or more hierarchical blocks. Because such a block acts as a channel between blocks, its port positions and size are hard-fixed. When a feedthrough block is large, satisfying its internal register-to-register timing often becomes a challenge. In this manuscript, the authors present a simple technique to achieve controlled internal register-to-register timing for such large feedthrough blocks in big integrated chips.
VLSI, chip, Setup Fixing Techniques & Feedthrough Block
Ashwani Kumar Gupta and Dr. Rajendra Pratap, Department of ASIC, eInfochips (An Arrow Company), Noida, Uttar Pradesh, India
Chemical Mechanical Planarization (CMP) is a process of smoothing the wafer surface by exerting chemical and mechanical forces on the wafer, and is an important step in the IC fabrication process. To achieve planarity on the surface of an IC, dummy metal fills must be inserted. Dummy fill insertion is a time-consuming process for moderate and large blocks or chips. Inserting dummy metal fills affects the coupling capacitance of the signal metal layers, which causes signal integrity issues. In the last stages of design closure, while performing timing ECOs, re-doing the dummy metal fills can introduce timing/noise violations and make ECOs unpredictable. In this paper we suggest a methodology wherein an ECO can be implemented without re-running the dummy metal fill on the complete block/chip. This saves ECO implementation time and reduces the risk of new signal integrity issues.
CMP (Chemical Mechanical Planarization), ECO (Engineering Change Order), ILD (Inter-Level dielectric), GDS (Graphic Data System), TCL (Tool Command Language), Crosstalk, Dummy Metal Fill, Coupling Cap, PnR (Place And Route).
Eric Ohana, Science and Engineering Faculty, Queensland University of Technology, Brisbane, Australia
The paper presents an optimisation of the baseline JPEG hardware implementation that improves the compression ratio for many image types. The baseline JPEG flow is briefly reviewed along with prior art; it is then explained how and where the LZW-based optimisation fits into this flow and what it replaces. The variations from a standard LZW compression flow are explained, along with why they are necessary in this specific hardware application. The microarchitecture of the hardware implementation and its FPGA build are then detailed, and the various trade-offs between implementation decisions and compression efficiency are explained. Finally, comparison results between the baseline JPEG flow and the LZW-optimised one are shown and conclusions are drawn.
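As a rough illustration of the dictionary-based coding the paper builds on, a minimal software LZW encoder is sketched below. The paper's hardware variant uses a CAM-based dictionary and differs in details such as dictionary sizing and reset policy, so this is only an illustrative sketch, not the authors' design.

```python
def lzw_encode(data: bytes, max_code: int = 4096) -> list[int]:
    """Encode a byte string into a list of LZW codes."""
    # Dictionary is seeded with all single-byte strings (codes 0..255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # extend the current match
        else:
            out.append(dictionary[w])   # emit code for longest match
            if next_code < max_code:    # grow dictionary until full
                dictionary[wc] = next_code
                next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

Repeated substrings are replaced by single dictionary codes, which is where the compression gain over per-symbol Huffman coding comes from on suitable inputs.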
JPEG, Huffman Encoding, LZW Compression, Content Addressable Memory (CAM), Cache Memory
Lei Feng, Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
A wide spectral band, the combination of imaging and spectroscopy, and fine spectral detection capability are the outstanding advantages of the imaging spectrometer. Rich spectral information combined with the spatial image of the object point greatly improves the accuracy of target detection, expands the function of traditional detection technology, and enables qualitative analysis of target characteristics. The spectrometer plays an irreplaceable role compared with other technologies and has been widely used in many military and civilian fields, such as land and ocean remote sensing, remote sensing monitoring of pollutants in the atmosphere, soil and water, military target detection, medical spectral imaging diagnosis, and scientific experiments. The curved prism spectrometer is widely used because of its high energy throughput and absence of ghost images. However, the curved prism spectrometer is a non-coaxially symmetric system and its aberration theory is complex. Therefore, it is necessary to establish a numerical model and construct an initial structure to provide a good starting point for system optimization. In a practical prism-based imaging spectrometer, many aberrations arise as rays are incident on the surface of each element, so establishing a mathematical model to analyze these aberrations is very important in the design. A curved prism is a kind of non-coaxial prism obtained by processing the front and rear surfaces of a triangular prism into two spheres. Its front and rear surfaces are not coaxial with the optical axis, so its characteristics are complex. Firstly, on the basis of primary aberration theory, the numerical calculation model of the curved prism is established, and the optimal object distance and the effective incident angle of the curved prism are solved according to the principle of minimum aberration.
For given system parameters, the coordinates of the object points are known, and the numerical model of the curved prism spectrometer is then established. The vector method is used to solve the incident and output vectors of given light rays. After transmission, the optical path extremum function is established, and the second-order partial differential equation is derived. The surface equation of each element is expanded as a higher-order Taylor series, that is, each surface is expressed as a function of the incident point and the structural parameters. A set of partial differential equations is constructed, the least squares method is used to minimise the system, and the initial structural parameters are then calculated.
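The final least-squares step described above can be sketched numerically; the matrix `J` and residual vector `r` below are hypothetical stand-ins for the system assembled from the Taylor-expanded surface equations, not quantities taken from the paper.

```python
import numpy as np

def least_squares_step(J, r):
    """Return the parameter vector s minimising ||J s - r||_2,
    i.e. the least-squares solution of the (generally overdetermined)
    linearised system J s = r."""
    s, *_ = np.linalg.lstsq(J, r, rcond=None)
    return s
```

In practice such a step would be iterated, re-linearising the surface equations around the updated structural parameters each time.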
Mathematical computation, partial differential equations, vector solving, curved prism
Hamidreza Bolhasani, Amir Masoud Rahmani and Farid Kheiri, Department of Computer Engineering, Science and Research branch, Islamic Azad University, Tehran, Iran
Since Richard Feynman first proposed the idea of quantum computing in 1982, it has become a new field of interest for many physicists and computer scientists. Although the concept was presented more than 30 years ago, it is still considered largely unexplored and several subjects remain open for research. Accordingly, conceptual and theoretical reviews remain useful. In this paper, a brief history and the fundamental ideas of quantum computers are introduced, with a focus on architecture.
Quantum, Computer, Hardware, Qubit, Gate
Vahab Pournaghshband1 and Peter Reiher2, 1Computer Science Department, University of San Francisco, San Francisco, USA and 2Computer Science Department, University of California, Los Angeles, Los Angeles, USA
The market is currently saturated with mobile medical devices and new technology is continuously emerging. It is therefore costly, and in some cases impractical, to replace these devices with new ones offering greater security. In this paper, we present the implementation of a prototype of the Personal Security Device, a self-contained, specialized wearable device that adds security to existing mobile medical devices. The main research challenge for, and hence the state of the art of, the proposed hardware design is that the device, to work with legacy devices, must require no changes to either the medical device or its monitoring software. This requirement is essential since we aim to protect already existing devices, as making modifications to the device or its proprietary software is often impossible or impractical (e.g., closed-source executables and implantable medical devices). Through performance evaluation of this prototype, we confirmed the feasibility of having special-purpose hardware with limited computational and memory resources perform the necessary security operations.
Wireless medical device security, Man-in-the-middle attack.
Shailja Dalmia, Ashwin T S and Ram Mohana Reddy Guddeti, National Institute of Technology Karnataka, Surathkal, Mangalore, Karnataka, India
With the ever-growing variety of information, the retrieval demands of different users are so multifarious that traditional search engines cannot deliver such heterogeneous retrieval results at huge magnitudes. Harnessing advances in user-centered adaptive search engines will help retrieve high-quality content efficiently. Previous work in this field relied on excessive server load to achieve good retrieval results, but with limited extensibility and while ignoring on-demand generated content. To address this gap, we propose a novel model of an adaptive search engine and describe how this model is realized in a distributed cluster environment. Using an improved topic-oriented web crawler algorithm with a user-interface-based information extraction technique, we were able to produce a renewed set of user-centered retrieval results with higher efficiency than existing methods. The proposed method was found to be 1.5 times and two times faster for the crawler and indexer, respectively, than prevailing methods, with improved and highly precise results in extracting semantic information from the Deep Web.
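The indexing side of such an engine rests on the classic inverted-index structure named in the keywords; a minimal sketch, not the authors' distributed implementation, is:

```python
from collections import defaultdict

def build_inverted_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return dict(index)

def search(index, *terms):
    """Conjunctive (AND) query: documents containing every term."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()
```

In a distributed cluster, the same structure would be partitioned across nodes (typically by term or by document), but the lookup logic is unchanged.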
Search Engine, WWW, Web Content Mining, Inverted Indexing, Hidden Crawler, Distributed Web Crawler, Precision, Deep Web
Ikwu Ruth and Louvieris Panos, Department of Computer Sciences, Brunel University, London
Cyberspace has gradually replaced physical reality, its role evolving from a simple enabler of daily life processes to a necessity for modern existence. As a result of this convergence of physical and virtual realities, with all processes critically dependent on networked communications, information representative of our physical, logical and social thoughts is constantly being generated in cyberspace. The interconnection and integration of links between our physical and virtual realities create a new hyperspace as a source of data and information. Moreover, significant studies in cyber analysis have predominantly revolved around a single linear analysis of information from a single source of evidence (the network). These studies are limited in their ability to capture the dynamics of relationships across the multiple dimensions of cyberspace. This paper introduces a multi-dimensional perspective for data identification in cyberspace and provides critical discussions for identifying entangled relationships amongst entities across cyberspace.
Cyberspace, Data-streams, Multi-Dimensional Cyberspace
Solomon Cheung1, Yu Sun1 and Fangyan Zhang2, 1Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 and 2ASML, San Jose, CA, 95131
As an act of disposing of waste and maintaining homeostasis, humans have to use the restroom multiple times a day. One item consumed in the process is toilet paper, which often runs out at the most inconvenient times. One of the worst positions to be in is to be stuck without toilet paper. Since humans are not capable of a 100% resupply rate, we should give this task to a computer. The approach we selected uses a pair of laser sensors to detect whether toilet paper is present. Utilizing an ultrasound sensor, we can detect whether a person is nearby and send a notification to a database. The online app, PaperSafe, takes the stored information and displays it on a device for quick access. Once a sufficient amount of data is acquired, we can train a machine learning algorithm to predict the next supply date, optimized for the specific scenario.
Amenity, Homeostasis, Machine Learning, Mobile Application
Vijayalakshmi M, Shanthi Thangam M and Bushra H, Department of Information Science and Technology, Anna University, Chennai, Tamil Nadu, India
The usage of mobile devices is increasing drastically every day, with high-end support for users. Despite the high-end configurations of mobile devices such as smartphones, laptops and tablets, computations on these devices remain complex, and both computation-intensive and data-intensive tasks play a vital role on them. The main challenge for mobile devices is handling applications with high computation and high storage demands. This challenge can be overcome by using mobile cloud computing. The limitation in mobile cloud computing is offloading decision making: which part of the computation should be offloaded and which should execute on the mobile side. The proposed work addresses these limitations and challenges by providing an agent-based offloading decision maker for the mobile cloud. The decision maker decides which part of the computation is executed on the mobile side and which on the cloud side. The evaluation shows that mobile applications with high complexity benefit the most.
Agent based, Mobile cloud, Offloading, Computational device.
Ayodeji Oyewale1 and Chris Hughes2, 1School of Computing, Science and Engineering, University of Salford, Salford, Manchester and 2The Crescent, Salford, Manchester, United Kingdom
A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) has been known to stimulate the exchange of ideas and interaction between these two related fields of interest. DKE makes it possible to understand, apply and assess the knowledge and skills required for the development and application of data mining systems. With present technology, companies are able to collect vast amounts of data with relative ease; indeed, many companies now have more data than they can handle. A vital portion of this data consists of large unstructured data sets, which amount to up to 90 percent of an organization's data. With data quantities growing steadily, the explosion of data is putting a strain on infrastructures, with diverse companies having to increase their data center capacity with more servers and storage. This study conceptualizes the handling of enormous data as a stream mining problem over continuous data streams and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in stream data.
Stream data, Stream Mining, Compact data structures, FP Tree, Path Adjustment Method
N. Rada, L. E. Mendoza and E. G. Florez, Telecommunications Engineering, Biomedical Engineering, Mechanical Engineering, Research Group in Mechanical Engineering, University of Pamplona, Colombia
This article presents a robust compression method known as compressive sensing (CS). CS allows sparse signals to be reconstructed from very few samples, far fewer than the Shannon-Nyquist theorem would suggest. In this article, the discrete cosine transform and the wavelet transform were used to find the most adequate sparse space. Angiographic images were used, which were reconstructed using algorithms such as SPGL1 (Spectral Projected Gradient for L1 minimization) and Gradient Projection for Sparse Reconstruction (GPSR). In this work, it was demonstrated that using the transpose of the wavelet-cosine transform achieved a more satisfactory sparse space than those obtained in other research. Finally, it was demonstrated that CS performs well for compressing angiographic images; the maximum reconstruction error was 3.56% for SPGL1.
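SPGL1 and GPSR are dedicated solvers for l1-regularised reconstruction problems. As a rough illustration of the underlying objective only (not of either solver's algorithm), a minimal iterative soft-thresholding (ISTA) sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_1 might look like:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Minimal ISTA sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    SPGL1/GPSR used in the paper are more sophisticated solvers of
    closely related l1 problems; this only illustrates the objective."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(iters):
        g = A.T @ (A @ x - y)            # gradient of the smooth term
        z = x - g / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

Here `A` would be the product of the measurement matrix and the sparsifying transform (DCT or wavelet), and `x` the sparse coefficient vector being recovered.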
Compressive Sensing, sparse signal, images, reconstruction, SPGL1, GPSR.
Hyun Woo Jung, Hankuk Academy of Foreign Studies, Yongin, South Korea
Deep learning has facilitated major advancements in various fields including image detection. This paper is an exploratory study on improving the performance of Convolutional Neural Network (CNN) models in environments with limited computing resources, such as the Raspberry Pi. A pretrained state-of-the-art algorithm for near-real-time object detection in videos, the YOLO ("You-Only-Look-Once") CNN model, was selected for evaluating strategies for optimizing runtime performance. Various performance analysis tools provided by the Linux kernel were used to measure CPU time and memory footprint. Our results show that loop parallelization, static compilation of weights, and flattening of convolution layers reduce the total runtime by 85% and the memory footprint by 53% on a Raspberry Pi 3 device. These findings suggest that the methodological improvements proposed in this work can reduce the computational overhead of running CNN models on devices with limited computing resources.
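One of the optimizations reported, flattening of convolution layers, is commonly realised by unrolling input patches into a matrix so the convolution becomes a single matrix multiplication (the im2col technique). A minimal single-channel sketch, assumed rather than taken from the paper's code, is:

```python
import numpy as np

def im2col(x, k):
    """Unroll k x k patches of a 2-D input into columns so that the
    convolution becomes a single matrix multiplication (im2col)."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((k * k, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols

def conv2d_flat(x, w):
    """Valid cross-correlation computed via im2col + one matrix product."""
    k = w.shape[0]
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    return (w.ravel() @ im2col(x, k)).reshape(out_h, out_w)
```

The gain on a small device comes from replacing nested convolution loops with one cache-friendly GEMM call, at the cost of the extra memory for the unrolled patch matrix.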
Deep Learning, Convolutional Neural Networks, Raspberry Pi, real-time object detection
Niloufar Salehi Dastjerdi and M. Omair Ahmad, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada
Image descriptors play an important role in any computer vision system, e.g. object recognition and tracking. Effective representation of an image is challenging due to significant appearance changes, viewpoint shifts, lighting variations and varied object poses. These challenges have led to the development of several features and their representations. The spatiogram and region covariance are two excellent image descriptors widely used in computer vision. The spatiogram is a generalization of the histogram that stores moments of the coordinates of the pixels corresponding to each bin. It captures richer appearance information because it computes not only information about the range of the image function, as histograms do, but also information about its (spatial) domain. However, it has the drawback that multi-modal spatial patterns cannot be modelled well. The region covariance descriptor provides a compact and natural way of fusing different visual features inside a region of interest, but it is based on a global distribution of pixel features inside the region and loses the local structure. In this paper, we aim to overcome the existing drawbacks of these descriptors. To this end, we propose the r-spatiogram, and then present a new hybrid descriptor that combines the r-spatiogram with the traditional region covariance descriptor. The results show that our descriptors have improved discriminative capability in comparison with other descriptors.
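A second-order spatiogram stores, per intensity bin, the pixel count together with the spatial mean and covariance of the contributing pixel coordinates. A minimal greyscale sketch, illustrative of the standard spatiogram rather than the authors' r-spatiogram, is:

```python
import numpy as np

def spatiogram(img, n_bins=8):
    """Second-order spatiogram of a greyscale image: for each intensity
    bin, the pixel count plus the mean and covariance of the (row, col)
    coordinates of the pixels falling in that bin."""
    bins = np.minimum((img.astype(float) / 256.0 * n_bins).astype(int),
                      n_bins - 1)
    rows, cols = np.indices(img.shape)
    coords = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    flat = bins.ravel()
    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.zeros((n_bins, 2, 2))
    for b in range(n_bins):
        pts = coords[flat == b]
        counts[b] = len(pts)
        if len(pts):
            means[b] = pts.mean(axis=0)
            covs[b] = np.cov(pts.T) if len(pts) > 1 else np.zeros((2, 2))
    return counts, means, covs
```

Setting the means and covariances to zero recovers a plain histogram, which makes the "generalization of the histogram" relationship concrete.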
Feature Descriptor, Spatiogram, Region Covariance
Ranjana S. Zinjore1 and Rakesh J. Ramteke2, 1Department of Computer Science, G.G. Khadse College, Muktainagar and 2School of Computer Sciences, KBC North Maharashtra University, Jalgaon
Optical Character Recognition has special significance in a multi-lingual, multi-script country like India, where a single document may contain words in two or more languages/scripts. There is a need to digitize such documents for easy communication and storage. This is also useful in applications such as processing handwritten messages on social media and processing handwritten criminal records for judicial purposes. This paper presents the approach used in the digitization of handwritten bilingual documents consisting of Marathi and English. The approach has three phases. The first phase focuses on preprocessing the handwritten bilingual document and resolving merged-line segmentation; an algorithm, Two_Fold_Word_Segmentation, is developed to extract words from lines, and a fusion of two feature extraction methods is used for script identification. The second phase focuses on recognition of the script-identified words. Two different feature extraction methods are used for word recognition: the first is based on a combination of structural and statistical features, and the second on the Histogram of Oriented Gradients. The K-Nearest Neighbor classifier gives better recognition accuracy with the second feature extraction method than with the first. Finally, in the third phase, digitization and transliteration of the recognized words are performed. A graphical user interface is designed to convert the transliterated text into speech, which helps blind and visually impaired people read books containing bilingual text.
Digitization, Transliteration, Script Identification, Histogram of Oriented Gradients, K-Nearest Neighbor
Roxana Flores-Quispe and Yuber Velazco-Paredes, Department of Computer Science, Universidad Nacional de San Agustin, Arequipa, Peru
This paper proposes a method based on the Multitexton Histogram (MTH) descriptor to classify eight different human parasite eggs: Ascaris, Uncinarias, Trichuris, Hymenolepis Nana, Dyphillobothrium Pacificum, Taenia Solium, Fasciola Hepatica and Enterobius Vermicularis, identifying textons of irregular shape in their microscopic images. The proposed method has two stages. In the first stage, a feature extraction mechanism integrates the advantages of the co-occurrence matrix and histograms to identify irregular morphological structures in the biological images through textons of irregular shape. In the second stage, a Support Vector Machine (SVM) is used to classify the different human parasite eggs. The results were obtained on a dataset of 2053 human parasite egg images, achieving a success rate of 96.82% in classification.
Human Parasite Eggs, Multitexton Histogram descriptor, Textons.
Sandile Mhlanga1, Dr Tawanda Blessing Chiyangwa2, Dr Lall Manoj1 and Prof Sunday Ojo1, 1Tshwane University of Technology, South Africa and 2University of South Africa, South Africa
With the rapid growth of Web services in recent years, it is very difficult to choose a suitable web service among those offering similar functionality. Selecting the right web service involves not only discovering services on the basis of their functionality, but also assessing the quality aspects of those services. Quality of service (QoS) is considered a distinguishing factor between similar web services and plays a vital role in web service selection. The aim of the model is to evaluate and rank alternatives involving conflicting criteria or criteria with different QoS requirements. To address this issue, this study proposes a model for determining the most suitable candidate web service by integrating the AHP (Analytic Hierarchy Process) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) methods. The AHP method computes the weights assigned to QoS criteria using pairwise comparison. Thereafter, the ranking of the web services according to the user's preferred criteria is obtained using the VIKOR method. Finally, a software prototype implementing AHP and VIKOR was developed. To illustrate and validate the proposed approach, the prototype uses data from the QWS dataset in a service selection process.
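The two-step AHP-plus-VIKOR pipeline can be sketched as follows. This sketch uses the common column-normalisation approximation for AHP weights and treats all criteria as benefit criteria; both are our assumptions, not details taken from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP weights: normalise each column of the pairwise
    comparison matrix, then average across rows."""
    A = np.asarray(pairwise, float)
    return (A / A.sum(axis=0)).mean(axis=1)

def vikor(F, w, v=0.5):
    """VIKOR scores for a decision matrix F (alternatives x criteria,
    all criteria treated as benefit criteria) with weights w.
    Lower Q means a better-ranked alternative."""
    F = np.asarray(F, float)
    best, worst = F.max(axis=0), F.min(axis=0)
    d = (best - F) / np.where(best == worst, 1.0, best - worst)
    S = (w * d).sum(axis=1)              # group utility
    R = (w * d).max(axis=1)              # individual regret
    Q = v * (S - S.min()) / max(S.max() - S.min(), 1e-12) \
        + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12)
    return Q
```

A real QoS setting would also flip cost-type criteria (e.g. latency) before ranking and check the AHP matrix's consistency ratio.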
Web services, Web services selection, Quality of service, Multi-Criteria decision making, AHP, VIKOR
Manjula Pilaka, Fethi A. Rabhi and Madhushi Bandara, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
Regulatory processes are normally tracked by regulatory bodies to monitor safety, soundness, risk, policy and compliance. Such processes are loosely framed, and it is a considerable challenge for data scientists and academics to extract instances of them from event records and analyse their characteristics, e.g. whether they satisfy certain process compliance requirements. Existing approaches are inadequate for these challenges, as they demand both technical knowledge and domain expertise from users, and the level of abstraction provided does not extend to the concepts required by a typical data scientist or business analyst. This paper extends a software framework based on a semantic data model that helps derive and analyse regulatory reporting processes from event repositories for complex scenarios. The key idea is to use complex business-like templates to express commonly used constraints associated with the definition of regulatory reporting processes, and to map these templates onto those provided by an existing process definition language. The efficiency of the architecture was evaluated, in terms of compliance and impact, by implementing a prototype using complex templates of the Declare ConDec language and applying it to a case study of process instances of Australian Company Announcements.
Regulatory Reporting, Process Extraction, Semantic Technology, Events
Adebayo Seyi1 and Ogunseyi Taiwo2, 1Department of Information and Communication Engineering, China University of Mining and Technology, Xuzhou, China and 2Department of Information Security, Communication University of China, Beijing, China
There are genuine concerns about deploying the right transport connection on a particular routing protocol in order to have reliable, fast and robust communication in spite of the size and dynamics of the network topology. This work comparatively studies the individual implementation of reactive and proactive protocols on both UDP and TCP transport connections, using end-to-end delay, average throughput, jitter and packet delivery ratio (PDR) as QoS metrics. We studied which combination of transport connection and routing protocol delivers the best QoS in simple and complex network scenarios, with source and destination nodes fixed and the intermediate nodes moving randomly throughout the simulation time. Moreover, the intrinsic characteristics of the routing protocols with regard to the QoS metrics and transport connection are studied. Forty simulations were run for simple and complex multi-hop network models, and the results were analyzed and presented.
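The QoS metrics used in the study can be computed from per-packet send/receive timestamps. A minimal sketch follows; the jitter definition here (mean absolute difference between consecutive packet delays) is one common convention and an assumption on our part, not necessarily the one used in the simulations.

```python
def qos_metrics(sent, received):
    """Compute PDR, mean end-to-end delay and mean jitter from
    per-packet timestamps: sent maps packet id -> send time,
    received maps packet id -> receive time."""
    delays = [received[p] - t for p, t in sent.items() if p in received]
    pdr = len(delays) / len(sent)                 # packet delivery ratio
    mean_delay = sum(delays) / len(delays)        # mean end-to-end delay
    # Jitter: mean absolute difference between consecutive packet delays.
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / max(len(delays) - 1, 1))
    return pdr, mean_delay, jitter
```

Average throughput would be computed analogously as total received bytes divided by the simulation duration.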
MANET, Wireless Network, Proactive, Reactive, QoS, UDP, TCP
Peter John A. Francisco, Department of Computer Science, University of the Philippines, Quezon City, Philippines
Many approaches have been proposed to integrate security activities into agile software development methodologies. These studies do not seem to have made the jump into practice, however, since, in our experience, most software development teams are not familiar with the range of methods developed for this purpose. This knowledge gap makes the task especially difficult for agile project managers and security specialists attempting to achieve the delicate balance of agility and security for the first time. In this study, we surveyed the methods available in current literature for integrating security activities into agile software engineering. From 11 proposed secure agile methods published between 2004 and 2017, we extracted 5 insights which practitioners in agile software development and security engineering can use to embed security into their software development flows more effectively. We then used the insights in a retrospective case study of a software engineering project at a fintech startup company, a high-risk industry in terms of security, and conclude that prior knowledge of the insights would have addressed major challenges in the security integration task.
Agile Process, Software Engineering, Security, Survey
Nobuaki Maki1, Ryotaro Nakata2, Shinichi Toyoda1, Yosuke Kasai1, Sanggyu Shin3 and Yoichi Seto1, 1Advanced Institute of Industrial Technology, Tokyo, Japan, 2Institute of Information Security, Yokohama City, Kanagawa, Japan and 3Tokai University, Hiratsuka City, Kanagawa, Japan
Recently, the threat of cyberattacks, especially targeted attacks, has been increasing rapidly, and a large number of cybersecurity incidents occur frequently. On the other hand, capable personnel are greatly lacking, and strengthening systematic human resource development to cultivate cybersecurity capabilities is becoming an urgent issue. However, only a few parts of academia and the private sector in Japan can carry out cybersecurity exercises because of the high cost and inflexibility of commercial or existing training software. For this reason, in order to conduct practical cybersecurity exercises cost-effectively and flexibly, we developed a virtual-environment Cybersecurity Exercises (CyExec) system utilizing VirtualBox and Docker. We also implemented WebGoat, an open source deliberately insecure web application, together with our original cyberattack and defense training contents on CyExec.
Ecosystem, Virtualization, WebGoat, Cyberattack and Defense Exercise, Cyber Range Exercise.
Zeng Dangquan, Department of Information Science and Technology, Xiamen University Tan Kah Kee College, Zhangzhou, Fujian, China
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test for distinguishing between computers and humans, and has a very wide range of applications on the Internet. Most websites require users to solve CAPTCHAs when registering, logging in or submitting form data to improve the security of the site, thus avoiding malicious attacks by automated robots and spammers. In this paper, a text-based CAPTCHA using text-adhesion and visual compensation is introduced. This CAPTCHA uses character-cascading and partial-defect techniques to effectively prevent machines from applying character segmentation and machine learning to solve it, whereas human vision can easily separate the characters that are layered together and fill in the missing parts of characters to identify the CAPTCHA. To test the CAPTCHA's resistance to automatic machine identification, four kinds of specialized OCR recognition software and two online OCR recognition websites were used to try to recognize 1000 CAPTCHAs. The test results show that none of the six OCR tools could correctly identify a single complete CAPTCHA, and the probability of a CAPTCHA not being recognized or being misidentified was over 99.5%, which shows that this CAPTCHA offers very high security.
CAPTCHA, text-adhesion, visual compensation, OCR, security
Pimmanee Rattanawicha and Sutthipong Yungratog, Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand
To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1) emotional perception (i.e. pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of visitors, a two-phase empirical study is conducted. In the first phase of this study, 120 volunteer participants are asked to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour "Blue Cyan", the most chosen colour in this phase, is then used as the foreground colour to develop three hotel & travel websites with three different colour contrast patterns for the second phase of the study. A questionnaire is also developed from previous studies to collect emotional perception, trust, and purchase intention data from another group of 145 volunteer participants. Data analysis shows that, for visitors as a whole, colour contrast has significant effects on purchase intention. For male visitors, colour contrast significantly affects trust and purchase intention. Moreover, for Generation X and Generation Z visitors, colour contrast affects emotional perception, trust, and purchase intention. However, no significant effect of colour contrast is found for female or Generation Y visitors.
Colour Contrast, e-Commerce, Website Design
James Monks and Liwan Liyanage
Spatio-temporal data is becoming increasingly prevalent in our society. This has largely been spurred on by the capability to build arrays and sensors into everyday items, along with highly specialised measuring equipment becoming cheaper. The result of this prevalence can be seen in the wealth of data of this kind now available for analysis. Spatio-temporal data is particularly useful for contextualising events in other data sets by providing background information for a point in space and time. Problems arise, however, when the contextualising data and the data set of interest do not align in space and time in the exact way needed. This problem is becoming more common because the precise data recorded by GPS systems does not overlap with points of interest and is not easily generalised to a region. Interpolating data for points of interest in space and time is therefore important, and a number of methods have been proposed with varying levels of success. These methods are all lacking in usability, and the models are limited by strict assumptions and constraints. This paper proposes a new method for the interpolation of points in the spatio-temporal scope, based on a set of known points. It utilises an ensemble of models to take into account the nuanced directional effects in both space and time. This ensemble allows the method to be more robust to missing values, which are common in spatio-temporal data sets due to variation in conditions across space and time. The method is inherently flexible, as it can be implemented without any further customisation whilst allowing the user to input and customise their own underlying model based on domain knowledge. It addresses the usability issues of other methods, accounts for directional effects and allows full control over the interpolation process.
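As a baseline for the kind of spatio-temporal interpolation discussed, a simple inverse-distance-weighting sketch over (x, y, t) points is shown below. The `time_scale` parameter, which trades temporal against spatial proximity, is a hypothetical simplification rather than the paper's ensemble method, which additionally models directional effects.

```python
def idw_st(points, target, p=2.0, time_scale=1.0):
    """Inverse-distance-weighted interpolation over space-time.
    points: iterable of ((x, y, t), value); target: (x, y, t).
    time_scale converts the time axis into spatial units, a crude way
    to weight temporal against spatial proximity."""
    num = den = 0.0
    for (x, y, t), v in points:
        d2 = ((x - target[0]) ** 2 + (y - target[1]) ** 2
              + (time_scale * (t - target[2])) ** 2)
        if d2 == 0:
            return v          # exact hit: return the observed value
        w = d2 ** (-p / 2)    # weight decays with distance^p
        num += w * v
        den += w
    return num / den
```

An ensemble in the spirit of the paper could fit several such interpolators on directional subsets of the known points and combine their predictions, making missing values in one direction less damaging.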