Rafik Amari1,2 and Mounir Zrigui1,2, 1University of Monastir, Faculty of Science of Monastir, Tunisia, 2University of Monastir, Research Laboratory in Algebra, Numbers Theory and Intelligent Systems RLANTIS, Tunisia
Speech recognition is considered the main task of the speech processing field. In this paper, we study the problem of discontinuous speech recognition (isolated words) for the Arabic language. Two deep learning architectures are compared in this work: the first is based on CNN networks, and the second combines CNN and LSTM networks. The "Arabic Speech Corpus for Isolated Words" (ASD) database is used for all experiments. The results demonstrate the advantage of the CNN-LSTM approach over the CNN approach.
Deep Learning, CNN, LSTM, Arabic Speech Corpus, Speech Recognition.
Saranya M1, Arockia Xavier Annie R2 and Geetha T V3, 1Computer Science and Engineering, CEG, Anna University, India, 2Assistant Professor, Computer Science and Engineering, CEG, Anna University, Chennai, India, 3UGC-BSR Faculty Fellow, Computer Science and Engineering, former Dean CEG, Anna University, Chennai, India
Nowadays, people around the world are affected by many new diseases. Developing or discovering a new drug for a newly discovered disease is an expensive and time-consuming process, costs that could be reduced if already existing resources were reused. To identify candidates among available drugs, we need to perform text mining of a large-scale literature repository to extract the relations between chemicals, targets and diseases. Computational approaches for identifying relationships between entities in the biomedical domain are emerging as an active area of research for drug discovery, since manual curation requires considerable manpower. Currently, computational approaches for extracting biomedical relations such as drug-gene and gene-disease relationships are limited, as constructing drug-gene and gene-disease associations from unstructured biomedical documents is very hard. In this work, we propose a pattern-based bootstrapping method, a semi-supervised learning algorithm, to extract direct relations between drugs, genes and diseases from biomedical documents. These direct relationships are used to infer indirect relationships between entities such as drugs and diseases. The indirect relationships are then used to determine new candidates for drug repositioning, which in turn reduces development time and patient risk.
Text Mining, Drug Discovery, Drug Repositioning, Bootstrapping, Machine Learning.
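The pattern-based bootstrapping idea above can be illustrated with a minimal sketch. The corpus, drug and gene names, and single-word patterns below are invented for illustration; the paper's actual patterns and corpus are far richer:

```python
import re

# Toy corpus; drug and gene names are illustrative only.
corpus = [
    "imatinib inhibits ABL1 in leukemia patients",
    "gefitinib inhibits EGFR in lung cancer",
    "aspirin targets COX1 in inflammation",
    "erlotinib inhibits EGFR in carcinoma",
]

def bootstrap(seed_pairs, corpus, iterations=2):
    """Alternate between learning textual patterns from known (drug, gene)
    pairs and applying those patterns to harvest new pairs."""
    pairs = set(seed_pairs)
    patterns = set()
    for _ in range(iterations):
        # 1) Learn patterns: the word between a known drug and gene.
        for drug, gene in list(pairs):
            for sent in corpus:
                m = re.search(re.escape(drug) + r"\s+(\w+)\s+" + re.escape(gene), sent)
                if m:
                    patterns.add(m.group(1))      # e.g. "inhibits"
        # 2) Apply patterns: any "X <pattern> Y" yields a candidate pair.
        for pat in patterns:
            for sent in corpus:
                m = re.search(r"(\w+)\s+" + pat + r"\s+(\w+)", sent)
                if m:
                    pairs.add((m.group(1), m.group(2)))
    return pairs

found = bootstrap({("imatinib", "ABL1")}, corpus)
```

Direct (drug, gene) and (gene, disease) pairs harvested this way can then be joined on the shared gene to hypothesize the indirect drug-disease links used for repositioning.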
Harshit Jain and Naveen Pundir, Department of Computer Science and Engineering, IIT Kanpur, India
India and many other countries such as the UK, Australia, and Canada follow the 'common law system', which gives substantial importance to prior related cases in determining the outcome of the current case. Better similarity methods can help in finding earlier similar cases, which can help lawyers searching for precedents. Prior approaches to computing the similarity of legal judgements use a basic representation, either a bag-of-words or a dense embedding learned only from the words present in the document. They, however, either neglect or do not emphasize the vital 'legal' information in the judgements, e.g. citations to prior cases, act and article numbers or names, etc. In this paper, we propose a novel approach to learning embeddings of legal documents using the citation network of documents. Experimental results demonstrate that the learned embedding is on par with state-of-the-art methods for document similarity on a standard legal dataset.
Representation Learning, Similarity, Citation Network, Graph Embedding, Legal Judgements.
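A minimal baseline for citation-based similarity is bibliographic coupling: represent each judgement by the set of precedents it cites and compare the resulting vectors. The tiny graph below is invented; the paper learns denser embeddings from the full citation network:

```python
import math

# Hypothetical citation graph: judgement -> precedents it cites.
citations = {
    "A": ["P1", "P2"],
    "B": ["P1", "P2"],   # cites the same precedents as A
    "C": ["P3"],
}
precedents = ["P1", "P2", "P3"]

def embed(doc):
    """One-hot vector over cited precedents: a minimal citation profile."""
    return [1.0 if p in citations[doc] else 0.0 for p in precedents]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

sim_ab = cosine(embed("A"), embed("B"))  # shared citations: high similarity
sim_ac = cosine(embed("A"), embed("C"))  # no overlap: zero similarity
```

Graph embedding methods generalize this idea by also exploiting indirect connectivity, so two judgements can be similar even without citing identical precedents.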
Elijah Pelofske1, Lorie M. Liebrock2, and Vincent Urias3, 1Cybersecurity Centers, New Mexico Institute of Mining and Technology, Socorro, New Mexico, USA, 2Cybersecurity Centers, New Mexico Institute of Mining and Technology, Socorro, New Mexico, USA, 3Sandia National Laboratories, Albuquerque, New Mexico, USA
In this research, we use user-defined labels from three internet text sources (Reddit, Stackexchange, Arxiv) to train 21 different machine learning models for the topic classification task of detecting cybersecurity discussions in natural text. We analyze the false positive and false negative rates of each of the 21 models in a cross-validation experiment. We then present a Cybersecurity Topic Classification (CTC) tool, which takes the majority vote of the 21 trained machine learning models as the decision mechanism for detecting cybersecurity-related text. We show that the majority vote mechanism of the CTC tool provides lower false negative and false positive rates on average than any of the 21 individual models, and that the CTC tool scales to hundreds of thousands of documents with a wall clock time on the order of hours.
cybersecurity, topic modeling, text classification, machine learning, neural networks, natural language processing, Stackexchange, Reddit, Arxiv, social media.
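The CTC decision mechanism described above reduces to a simple majority vote over the 21 binary model outputs; a minimal sketch (the vote counts below are hypothetical):

```python
def majority_vote(predictions):
    """Label a document as cybersecurity-related (1) iff more than half of
    the binary model predictions are 1; with 21 models, ties are impossible."""
    return 1 if sum(predictions) * 2 > len(predictions) else 0

# Hypothetical votes from the 21 trained models for one document.
votes = [1] * 12 + [0] * 9
label = majority_vote(votes)
```

Because each model errs on different documents, the ensemble vote tends to cancel individual false positives and false negatives, which matches the averaged error rates reported above.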
Ishika Godage, Ruvan Weerasinghe and Damitha Sandaruwan, University of Colombo School of Computing, Colombo 07, Sri Lanka
There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which a majority of people cannot understand. The predominant of these techniques is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others, which translates signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG and IMU, try to recognize and translate individual signs of hearing-impaired people. The few approaches to sentence-level sign language recognition suffer from not being user-friendly or even practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capturing. Using signal processing and supervised learning based on a vocabulary of 49 words and 346 sentences for training with a single signer, we were able to achieve 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.
Sign Language, Word-Level Recognition, Sentence-Level Recognition, Myo Armband, EMG, IMU, Supervised Learning.
Dandan Yu1, Wuying Liu1, 2, 1Shandong Key Laboratory of Language Resources Development and Application, Ludong University, 264025 Yantai, Shandong, China, 2Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, 510420 Guangzhou, Guangdong, China
Brand image is an important component of the market competitiveness of luxury brands. Luxury brands increasingly tend to adopt virtual spokespersons to reduce the negative impact of celebrity scandals on their economic benefits. In this context, the article describes the development trends of virtual spokespersons in recent years and their specific applications in luxury brand advertising, and uses Valentino's Weibo interaction data to explore whether virtual spokespersons affect the para-social interaction relationship between luxury brands and their consumers. Finally, it discusses the problems between luxury brands' virtual spokespersons and para-social interaction, and puts forward corresponding development suggestions.
Luxury Brands, Virtual Spokesperson, Para-social Interaction, Weibo Data.
Afia Fairoose Abedin, Amirul Islam Al Mamun, Rownak Jahan Nowrin, Amitabha Chakrabarty, Moin Mostakim1 and Sudip Kumar Naskar2, 1Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh, 2Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
Chatbots, today, are used as virtual assistants to reduce human workload. Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply in a fraction of a second. Though chatbots perform well in task-oriented activities, in most cases they fail to understand personalised opinions, statements or even queries, which later impacts the organization through poor service management. This lack of understanding discourages humans from continuing conversations with chatbots, which usually give absurd responses when they are unable to interpret a user's text accurately. The major gap of understanding between users and chatbots can be reduced if organizations use chatbots more efficiently and improve the quality of their products and services by extracting client reviews from conversations. Thus, in our research we incorporated all the key elements that are necessary for a chatbot to analyse and understand an input text precisely and accurately. We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence. The efficiency of our approach is demonstrated by the detailed analysis.
Natural Language Processing, Humanistic, Deep learning, Sentiment analysis, Emotion detection, Intent classification, Named-entity recognition.
Shahana Nandy1 and Vishrut Kumar2, 1Department of Electrical and Electronics Engineering, National Institute of Technology, Warangal, India, 2Department of Information and Communication Technology, Manipal Institute of Technology, India
The Covid-19 pandemic has significantly altered our way of life. Physical social interactions are being steadily replaced with virtual connections and remote interactions. Social media platforms such as Facebook, Twitter, and Instagram have become the primary medium of communication. However, being relegated to a solely online presence has had a major impact on the mental health of users since the onset of the pandemic. The present study aims to identify depressed Twitter users by analyzing their tweets. We propose a deep learning model which stacks a bidirectional LSTM layer with a Catboost algorithm layer to classify tweets and detect depression. The results show that the proposed model outperforms standard machine learning approaches to classification and that there has been a definite rise in depression since the beginning of the pandemic. The study's primary contribution is the novel deep learning model and its ability to detect depression.
Sentiment analysis, Text analysis, Long Short Term Memory, Catboost, COVID-19.
Mika Kishino1 and Kanako Komiya2, 1Ibaraki University, Ibaraki, Japan, 2Tokyo University of Agriculture and Technology, Tokyo, Japan
The linguistic speech patterns that characterize lines of Japanese anime or game characters were extracted and analyzed in this study. Conventional morphological analyzers, such as MeCab, segment words with high performance, but they are unable to segment broken expressions or utterance endings that are not listed in the dictionary, which often appear in lines of anime or game characters. To overcome this challenge, we propose segmenting lines of Japanese anime or game characters using subword units, which were proposed mainly for deep learning, and extracting frequently occurring strings to obtain expressions that characterize their utterances. We analyzed the subword units weighted by TF/IDF according to gender, age, and each anime character, and show that the extracted linguistic speech patterns are specific to each feature. Additionally, a classification experiment shows that the model with subword units outperformed the one using the conventional method.
Pattern extraction, Characterization of fictional characters, Subword units, Linguistic speech patterns, word segmentation.
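The TF/IDF weighting of subword units described above can be sketched as follows. The segmented utterances are invented for illustration; the study works with real subword-segmented character lines:

```python
import math
from collections import Counter

# Hypothetical subword-segmented utterances for two characters.
docs = {
    "char_a": ["da", "yo", "##ne", "da", "yo"],
    "char_b": ["de", "gozaru", "de", "gozaru", "yo"],
}

def tfidf(term, doc_id, docs):
    """Weight a subword unit by its frequency in one character's lines,
    discounted by how many characters use it at all."""
    counts = Counter(docs[doc_id])
    tf = counts[term] / sum(counts.values())
    df = sum(1 for units in docs.values() if term in units)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

# "gozaru" is distinctive for char_b; "yo" is shared by both and scores zero.
score = tfidf("gozaru", "char_b", docs)
```

Units with high TF/IDF for one character (or one gender or age group) surface as that feature's characteristic speech patterns.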
Jun Izutsu1 and Kanako Komiya2, 1Ibaraki University, 2Tokyo University of Agriculture and Technology
This study proposes a method to develop neural models of a morphological analyzer for Japanese Hiragana sentences using the Bi-LSTM CRF model. Morphological analysis is a technique that divides text data into words and assigns information such as parts of speech. In Japanese natural language processing systems, this technique plays an essential role in downstream applications because the Japanese language does not have delimiters between words. Hiragana is a type of Japanese phonogramic character used in texts for children or people who cannot read Chinese characters. Morphological analysis of Hiragana sentences is more difficult than that of ordinary Japanese sentences because there is less information available for segmentation. For morphological analysis of Hiragana sentences, we demonstrate the effectiveness of fine-tuning using a model based on ordinary Japanese text and examine the influence of the morphological analysis training data on texts of various genres.
Morphological analysis, Hiragana texts, Bi-LSTM CRF model, Fine-tuning, Domain adaptation.
Yichen Liu1, Jonathan Sahagun2 and Yu Sun3, 1Shen Wai International School, 29 Baishi 3rd Road, Nanshan, Shenzhen, China 518053, 2California State Polytechnic University, Los Angeles, CA 91748, 3California State Polytechnic University, Pomona, CA 91768
As our world becomes more globalized, learning new languages will be an essential skill for communicating across countries and cultures, and a means to create better opportunities for oneself. This holds especially true for the English language. Since the rise of smartphones, many apps have been created to teach new languages, such as Babbel and Duolingo, which have made learning new languages cheap and approachable by allowing users to practice briefly whenever they have a free moment. This is where we believe those apps fail: they do not capture the interest or attention of users for long enough for them to meaningfully learn. Our approach is to make a video game that immerses players in a world where they practice English verbally with NPCs and engage with them in scenarios they may encounter in the real world. Our approach includes using chatbot AI to engage users in realistic natural conversation, together with speech-to-text technology so that users practice speaking English.
Machine Learning, NLP, Data Mining, Game Development.
Feng Yuanyuan and Liu Kejian, Department of Computer and Software Engineering, Xihua University, Chengdu, China
Personality is the dominant factor affecting human behavior. With the rise of social network platforms, increasing attention has been paid to predicting personality traits by analyzing users' behavioral information, while little attention has been paid to text content, making it insufficient to explain personality from the perspective of texts. Therefore, in this paper, we propose a personality prediction method based on a personality lexicon. First, we extract keywords from texts and use word embedding techniques to construct a Chinese personality lexicon. Based on the lexicon, we analyze the correlation between personality traits and different semantic categories of words, and extract the semantic features of the texts posted by Weibo users to construct personality prediction models using classification algorithms. The final experiments show that, compared with SC-LIWC, the personality lexicon constructed in this paper achieves better performance.
Personality Lexicon, Machine Learning, Personality Prediction.
I. V. Gomes1,2, H. Puga2 and J. L. Alves2, 1MIT Portugal, Guimarães, Portugal, 2CMEMS – Center for Microelectromechanical Systems, University of Minho, Portugal
Ultrasonic-microcasting is a manufacturing technique that opens the possibility of obtaining biodegradable magnesium stents through a faster and cheaper process, while also bringing important features such as the production of devices with cross-section variation. This way, it may be feasible to tailor the expansion profile of the stent. Even so, there are still geometric constraints, essentially associated with the minimum thickness the process allows, currently about 0.20 mm. Moreover, the nature of the material used - a magnesium alloy - also demands thicker structures, which may be harmful to stent performance. In this work, a numerical model for stent shape optimization based on cross-section variation is presented, aiming at reducing the dogboning phenomenon observed in this type of device. The model follows a set of optimization variables and limiting values of the design and optimization parameters, which are defined considering both the advantages and constraints of the ultrasonic-microcasting process. Moreover, the model suggests an optimized geometry that, despite its greater thickness, has a performance comparable to that of the most popular stent models currently in use.
Stent, Optimization, Ultrasonic-Microcasting, Dogboning.
Andrew Bloch-Hansen, Roberto Solis Oba, and Andy Yu, Department of Computer Science, Western University, Ontario, Canada
The two-dimensional strip packing problem consists of packing into a rectangular strip of width 1 and minimum height a set of n rectangles, where each rectangle has width 0 < w ≤ 1 and height 0 < h ≤ 1. We consider the high-multiplicity version of the problem, in which there are only K different types of rectangles. For the case when K = 3, we give an algorithm which provides a solution requiring height at most 3/2 + ε plus the height of an optimal solution, where ε is any positive constant.
LP-relaxation, two-dimensional strip packing, high multiplicity, approximation algorithm.
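For intuition, a classic shelf heuristic for strip packing in a width-1 strip can be sketched as below. This is the standard Next-Fit Decreasing Height baseline, not the paper's K = 3 algorithm:

```python
def nfdh(rects):
    """Next-Fit Decreasing Height: sort rectangles (width, height) by
    height, fill each shelf left to right, and open a new shelf whenever
    the current one overflows. Returns the total strip height used."""
    rects = sorted(rects, key=lambda r: r[1], reverse=True)
    closed = 0.0     # height of finished shelves
    shelf_h = 0.0    # height of the current shelf
    used_w = 1.0     # width used on the current shelf (start "full")
    for w, h in rects:
        if used_w + w > 1.0:    # rectangle does not fit: open a new shelf
            closed += shelf_h
            shelf_h = h         # tallest-first order sets the shelf height
            used_w = 0.0
        used_w += w
    return closed + shelf_h

# Four 0.5 x 0.5 squares pack into two shelves of height 0.5 each.
height = nfdh([(0.5, 0.5)] * 4)
```

High-multiplicity inputs with few rectangle types admit much stronger guarantees than such generic heuristics, which is the regime the paper exploits.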
Vedasingha K. S, K. K. M. T. Perera, Akalanka H. W. I, Hathurusinghe K. I, Nelum Chathuranga Amarasena and Nalaka R. Dissanayake, Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
Railways provide one of the most convenient and economically beneficial modes of transportation, and have been the most popular transportation method of all. Analysis of past data reveals a considerable number of railway accidents that caused damage not only to precious lives but also to the economies of the countries involved. Some major issues need to be addressed in the railways of South Asian countries, since these countries fall into the developing category. The goal of this research is to minimize railway level crossing accidents by developing a "Railway Process Automation System", as there are high-risk areas that are prone to accidents where safety is of the utmost significance. This paper describes the implementation methodology and the success of the study. The main purpose of the system is to ensure human safety by using Internet of Things (IoT) and image processing techniques. The system can detect the current location of the train and close the railway gate automatically; the same process can also be carried out by a decision-making system using past data, and notably both processes run in parallel. If the system fails to close the railway gate due to a technical or network failure, the proposed system can identify the current location and close the railway gate through the decision-making system, which is a distinctive feature. The proposed system introduces two further features to reduce the causes of railway accidents: railway track crack detection and motion detection, both of which play a significant role in reducing the risk of railway accidents. Moreover, the system is capable of detecting rule violations at a level crossing by using sensors. The proposed system is implemented as a prototype and tested in real-world scenarios, achieving above 90% accuracy.
Crack Detection, Decision-Making, Image Processing, Internet of Things, Motion Detection, Prototype, Sensors.
Sulaiman Adesegun Kukoyi, O. F. W. Onifade and Kamorudeen A. Amuda, Department of Computer Science, University of Ibadan, Nigeria
Voice information retrieval is a technique that provides an information retrieval system with the capacity to transcribe spoken queries and use the text output for information search. Collaborative Information Seeking (CIS) is a field of research that studies the situations, motivations, and methods of people working in collaborative groups on information seeking projects, as well as building systems to support such activities. Humans find it easier to communicate and express ideas via speech, yet existing mainstream voice search systems, such as Google's, do not support collaborative search. In our system, spoken queries are passed through the ASR for feature extraction using MFCC and an HMM, with the Viterbi algorithm used for pattern matching. The ASR output is then passed as input to the CIS system, and the results are filtered to produce an aggregate result. Simulation results show that our model achieves 81.25% transcription accuracy.
Information Retrieval, Collaborative Search, Collaborative Information Seeking, Automatic Speech Recognition, Feature Extraction, MFCC, Hidden Markov Model, Acoustic Model, Viterbi Algorithm.
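The Viterbi decoding step used for pattern matching in the ASR can be illustrated with a toy HMM. The states, observations, and probabilities below are invented; a real recognizer decodes over phoneme-level acoustic states:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Dynamic-programming search for the most likely hidden state
    sequence of an HMM given an observation sequence."""
    # Each cell stores (probability of the best path ending here, the path).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, cur = V[-1], {}
        for s in states:
            cur[s] = max(
                (prev[r][0] * trans_p[r][s] * emit_p[s][o], prev[r][1] + [s])
                for r in states
            )
        V.append(cur)
    prob, path = max(V[-1].values())
    return path, prob

states = ("s1", "s2")
start = {"s1": 0.6, "s2": 0.4}
trans = {"s1": {"s1": 0.7, "s2": 0.3}, "s2": {"s1": 0.4, "s2": 0.6}}
emit = {"s1": {"a": 0.5, "b": 0.4, "c": 0.1},
        "s2": {"a": 0.1, "b": 0.3, "c": 0.6}}
path, prob = viterbi(["c", "a", "c"], states, start, trans, emit)
```

In an ASR front end, the observations would be MFCC-derived symbols and the hidden states would belong to word or phoneme models; the decoded path identifies the best-matching word.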
Gunbir Singh Baveja1 and Jaspreet Singh2, 1Delhi Public School, Dwarka, New Delhi, Delhi, India, 2GD Goenka University, Sohna, Haryana, India
Earthquake prediction has been a challenging research area for many decades, in which the future occurrence of this highly uncertain calamity is predicted. In this paper, several parametric and non-parametric features were calculated, where the non-parametric features were derived from the parametric ones. Eight seismic features were calculated using the Gutenberg-Richter law, total recurrence time, and seismic energy release. Additionally, criteria such as Maximum Relevance and Minimum Redundancy were applied to choose the pertinent features. These features, along with others, were used as input to an Extreme Learning Machine (ELM) regression model. Magnitude and time data spanning five decades from the Assam-Guwahati region were used to create this model for magnitude prediction. Testing accuracy and testing speed were computed, taking Root Mean Squared Error (RMSE) as the parameter for evaluating the model. As confirmed by the results, ELM shows better scalability with much faster training and testing speed (up to a thousand times faster) than traditional Support Vector Machines. The testing RMSE came out to be . To further test the model's robustness, magnitude-time data from California were used to calculate the seismic indicators, fed into the neural network (ELM), and tested on the Assam-Guwahati region. The model proves successful and can be implemented in early warning systems as part of disaster response and management.
Earthquake Prediction, Machine Learning, Extreme Learning Machine, Seismological Features, Gutenberg-Richter Law, Support Vector Machine.
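One of the seismic features mentioned above, the Gutenberg-Richter b-value, has a simple maximum-likelihood estimator (Aki's formula). The catalogue below is synthetic, not data from the Assam-Guwahati region:

```python
import math

def b_value(magnitudes, m_min):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's formula):
    b = log10(e) / (mean(M) - M_min), over events with M >= M_min."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# Synthetic catalogue of magnitudes above completeness M_min = 3.0.
catalogue = [3.1, 3.4, 3.2, 4.0, 3.6, 3.3, 3.5, 4.4, 3.2, 3.8]
b = b_value(catalogue, 3.0)
```

Computed over a sliding window of recent events, this b-value becomes one input feature among the eight fed to the ELM regressor.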
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will be always a question regarding its impact on our social life, environment and economy thus impacting all efforts exerted towards continuous development. From the definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler to promote the components of continuous development, economic growth, social welfare and environmental sustainability. The human resources are the precious resource for nations. The high unemployment and underemployment rates especially in youth is a great threat affecting the continuous economic development of many countries and is influenced by investment in education, and quality of living.
Artificial Intelligence, Conceptual Blueprint, Continuous Development, Human Resources, Learning and Employability Blueprint.
Hao Zhou, Yixin Chen, David Troendle, Byunghyun Jang, Computer Information and Science, University of Mississippi, University, USA
An automated and accurate fabric defect inspection system is in high demand as a replacement for slow, inconsistent, error-prone, and expensive human operators in the textile industry. Previous efforts focused on certain types of fabrics or defects, which is not an ideal solution. In this paper, we propose a novel one-class model that is capable of detecting various defects on different fabric types. Our model takes advantage of a well-designed Gabor filter bank to analyze fabric texture. We then leverage an advanced deep learning algorithm, the autoencoder, to learn general feature representations from the outputs of the Gabor filter bank. Lastly, we develop a nearest neighbor density estimator to locate potential defects and draw them on the fabric images. We demonstrate the effectiveness and robustness of the proposed model by testing it on various types of fabrics such as plain, patterned, and rotated fabrics. Our model also achieves a true positive rate (a.k.a. recall) of 0.895 with no false alarms on our dataset based upon the Standard Fabric Defect Glossary.
Fabric defect detection, One-class classification, Gabor filter bank.
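The Gabor filter bank used for texture analysis can be sketched with NumPy. The kernel size and parameters below are illustrative, not the paper's tuned bank:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulating
    a sinusoid oriented at angle theta, used to probe fabric texture."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# A small bank over several orientations; responses are obtained by
# convolving each kernel with the fabric image.
bank = [gabor_kernel(15, t, wavelength=6.0, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

The per-pixel responses of such a bank form the texture feature maps that the autoencoder then compresses into general representations.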
Chee Keong Wee1 and Nathan Wee2, 1Digital Application Services, eHealth Queensland, Queensland, Australia, 2Science & Engineering Faculty, Queensland University of Technology, Queensland, Australia
Database replication is ubiquitous in organizations' IT infrastructure when data is shared across multiple systems and service uptime is critical. But complex software will eventually suffer outages under various circumstances, and it is important to resolve them promptly and restore services. This paper proposes an approach to resolving data replication software faults through deep reinforcement learning. Empirical results show that the new method can resolve software faults quickly with high accuracy.
Database Management, Data replication, reinforcement learning, fault resolution.
Kevin Qu and Yu Sun, California State Polytechnic University, Pomona, CA, 91768
A number of social issues have grown out of the increasing amount of "fake news". With inevitable exposure to this misinformation, it has become a real challenge for the public to discern the truth with accuracy. In this paper, we apply machine learning to investigate the correlations between information and the way people treat it. With enough data, we are able to safely and accurately predict which groups are most vulnerable to misinformation. In addition, we found that the structure of the survey itself could help with future studies, and that the method by which the news articles are presented, as well as the articles themselves, also contributes to the result.
Machine Learning, Cross Validation, Training and Prediction, Misinformation.
Yiqi Gao1 and Yu Sun2, 1Sage High School, Newport Coast, CA 92657, 2California State Polytechnic University, Pomona, CA, 91768
The start of 2020 marked the beginning of the deadly COVID-19 pandemic caused by the novel SARS-CoV-2 from Wuhan, China. At the time of writing, the virus has infected over 150 million people worldwide and resulted in more than 3.5 million global deaths. Accurate predictions made using machine learning algorithms can be a useful guide for hospitals and policymakers to make adequate preparations and enact effective policies to combat the pandemic. This paper takes a two-pronged approach to analyzing COVID-19. First, it utilizes machine learning algorithms such as linear regression, polynomial regression, and random forest regression to make accurate predictions of daily COVID-19 cases using combinations of a range of predictors. Then, using the feature significance of random forest regression, it compares the influence of the individual predictors on the general trend of COVID-19 with the predictions made, highlighting factors of high influence that can then be targeted by policies for an efficient pandemic response.
Covid-19 Case Prediction, Data Mining, Machine Learning Algorithm.
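The polynomial regression component can be sketched with NumPy's polyfit. The case counts below are synthetic; the paper fits real daily COVID-19 data with multiple predictors:

```python
import numpy as np

# Synthetic daily case counts following a quadratic trend.
days = np.arange(10, dtype=float)
cases = 2.0 * days**2 + 5.0 * days + 3.0

# Fit a degree-2 polynomial to the history and extrapolate one day ahead.
coeffs = np.polyfit(days, cases, deg=2)
predicted = np.polyval(coeffs, 10.0)
```

Random forest regression would replace the single polynomial with an ensemble over many predictors, whose feature importances support the second prong of the analysis.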
Santanu Ray1 and Pratik Gupta2, 1Ericsson, New Jersey, USA, 2Ericsson, Kolkata, India
A conventional test automation framework executes test cases in a sequential manner, which increases the execution time. Even though the framework supports executing multiple test suites in parallel, we are often unable to do so due to system limitations and infrastructure cost. Building and maintaining an automation framework is also time-consuming and costly. This paper presents the design of a scalable test automation framework that provides the test framework as a service, expediting test execution by distributing test suites across multiple services running in parallel without any extra infrastructure.
Distributed Testing, Robot Framework, Docker, Automation Framework.
Jinghua Sun1, Samuel Edwards2, Nic Connelly3, Andrew Bridge4 and Lei Zhang1, 1COMAC Shanghai Aircraft Design and Research Institute, Shanghai, China, 2Defence Aviation Safety Authority, 661 Bourke St, Melbourne, VIC, Australia, 3School of Engineering, RMIT University, Melbourne, VIC, Australia, 4European Union Aviation Safety Agency, Cologne, Germany
Airborne software is invisible and intangible, yet it can significantly impact the safety of the aircraft. It cannot be exhaustively tested and is only assured through a structured process-, activity-, and objective-based approach. This paper studies the development processes and objectives applicable to different software levels based on RTCA/DO-178C. We identified 82 technical focus points based on each airborne software development sub-process, then created a Process Technology Coverage matrix to demonstrate the technical focuses of each process. We developed an objective-oriented, top-down and bottom-up sampling strategy for the four software Stage of Involvement (SOI) reviews by considering the frequency and depth of involvement. Finally, we created a Technology Objective Coverage matrix, which can support reviewers in performing efficient risk-based SOI reviews by considering the identified technical points, thus ensuring the safety of the aircraft from the software assurance perspective.
Airborne Software, SOI, DO-178C, Objective, Sampling Strategy.
Zhongwei Teng, Jacob Tate, William Nock, Carlos Olea, Jules White, Vanderbilt University, USA
Checklists have been used to increase safety in aviation and help prevent mistakes in surgeries. However, despite their success in many domains, checklists have not been universally successful in improving safety. A large volume of checklists is being published online to help software developers produce more secure code and avoid mistakes that lead to cyber-security vulnerabilities. It is not clear whether these secure development checklists are an effective method of teaching developers to avoid cyber-security mistakes and of reducing coding errors that introduce vulnerabilities. This paper presents in-progress research examining the secure coding checklists available online, how they map to well-known checklist formats investigated in prior human factors research, and unique pitfalls that some secure development checklists exhibit related to decidability, abstraction, and reuse.
Checklists, Cyber Security, Software Development.
Ngoc Hong Tran1, Tri Nguyen2, Quoc Binh Nguyen3, Susanna Pirttikangas4, M-Tahar Kechadi5, 1Vietnamese-German University, Vietnam, 2Center for Ubiquitous Computing, University of Oulu, Finland, 3Ton Duc Thang University, Vietnam, 4School of Computer Science, University College Dublin, Ireland, 5Insight Centre for Data Analytics, University College Dublin
This paper investigates situations in which no shared Internet access exists in specific areas while users there need instant advice from others nearby. A peer-to-peer network is therefore established by connecting all neighbouring mobile devices so that they can exchange questions and recommendations. However, not all received recommendations are reliable, as users may be unknown to each other. The trustworthiness of advice is therefore evaluated based on the advisor's reputation score, which is stored locally on the user's mobile device. The reputation score cannot be fully trusted if its owner manipulates it with malicious intent. A further privacy problem is how the questioning user can honestly audit the reputation score of the advising user. This work therefore proposes a security model, named Crystal, for securely managing distributed reputation scores and preserving user privacy. Crystal ensures that the reputation score can be verified, computed and audited in a secret way. Another significant point is that the devices in the peer-to-peer network have limited physical resources such as bandwidth, power and memory. To address this, Crystal applies lightweight elliptic curve cryptographic algorithms so that it consumes less of the devices' physical resources. The experimental results show that the performance of our proposed model is promising.
Reputation, peer to peer, privacy, security, homomorphic encryption, decentralized network.
Cyprian Otutu Alozie, Department of Education, Canterbury Christ Church University, UK
This essay aims to identify and analyse a theoretical approach to the teaching of mathematics in a secondary school or college setting. It is intended that the theory can be organised into a reliable and succinct knowledge framework that educators might consider and use when planning and teaching mathematics, and as a source of credible arguments that can be put to prospective schools to help raise achievement in mathematics. Approaching from a socio-technical background, I intend to explore the use of neural network theory in the teaching of mathematics. The property of significance for the neural network is the ability of the network (students) to learn from the environment (teachers/schools) and to improve its performance through learning. Neural networks learn about their environment through an iterative process of adjustments applied to their synaptic weights and bias levels (Haykin, 1999). The network becomes more knowledgeable about its environment after each iteration of the learning process.
Artificial Neural Network, Iterative Error-correction Learning, Feedback, Feedforward, Receptors, Neural net, and effectors.
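The iterative error-correction learning named in the keywords above can be sketched in a few lines. This is a minimal, hypothetical illustration of the delta rule, not material from the essay: the data, learning rate, and function names are chosen only to show how repeated weight adjustments driven by an error signal make the network model its environment.

```python
# A minimal sketch of iterative error-correction learning (the delta rule).
# The training pairs and learning rate here are hypothetical, for illustration.

def train(samples, epochs=100, lr=0.1):
    """Learn a weight w and bias b so that w*x + b approximates the targets."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b        # network's current response to the input
            error = target - y   # error signal: desired minus actual output
            w += lr * error * x  # adjust the synaptic weight
            b += lr * error      # adjust the bias level
    return w, b

# The "environment" presents (input, desired output) pairs drawn from y = 2x + 1.
samples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(samples)  # converges toward w ≈ 2, b ≈ 1
```

Each pass over the samples is one iteration of the learning process; the error shrinks after every cycle, mirroring the claim that the network becomes more knowledgeable about its environment with each iteration.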
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Artificial Intelligence, Big Data, IoT, Digital Transformation Revolution, Machine Learning.
Olumide Babalola, School of Law, University of Reading, Whiteknights, Reading, United Kingdom
Internet of Things (IoT) refers to the seamless communication and interconnectivity of multiple devices within a certain network enabled by sensors and other technologies facilitating unusual processing of personal data for the performance of a certain goal. This article examines the various definitions of the IoT from technical and socio-technical perspectives and goes ahead to describe some practical examples of IoT by demonstrating their functionalities vis a vis the anticipated privacy and information security implications. Predominantly, the article discusses the information security and privacy risks posed by the operationality of IoT as envisaged under the EU GDPR and makes a few recommendations on how to address the risks.
Data Protection, GDPR, Information Security, Internet of Things, Privacy.
Farhad Zamani and Retno Wulansari, Telkom Corporate University Center, Telkom Indonesia, Bandung, Indonesia
Recently, emotion recognition has begun to be applied in industry and in the human resource field. Once the emotional state of an employee can be perceived, the employer can benefit by improving the quality of decisions regarding that employee. Hence, this work could become an embryo for emotion recognition tasks in the human resource field. In fact, emotion recognition has become an important research topic, especially recognition based on physiological signals such as EEG. One reason is the availability of EEG datasets that can be widely used by researchers. Moreover, the development of many machine learning methods has contributed significantly to this research topic over time. Here, we investigate classification methods for emotion and propose two models to address this task, each a hybrid of two deep learning architectures: a One-Dimensional Convolutional Neural Network (CNN-1D) and a Recurrent Neural Network (RNN). We implement the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) in the RNN architecture, both of which are specifically designed to address the vanishing gradient problem that often arises with time-series datasets. We use these models to classify four emotional regions of the valence-arousal plane: High Valence High Arousal (HVHA), High Valence Low Arousal (HVLA), Low Valence High Arousal (LVHA), and Low Valence Low Arousal (LVLA). The experiments were carried out on the well-known DEAP dataset. Experimental results show that the proposed methods achieve training accuracies of 93.2% and 95.8% with the 1DCNN-GRU and 1DCNN-LSTM models, respectively. Both models are therefore quite robust for this emotion classification task.
Emotion Recognition, 1D Convolutional Neural Network, LSTM, GRU, DEAP.
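The four-class labelling described above can be made concrete with a small sketch. DEAP valence and arousal ratings lie on a 1-9 scale; splitting each axis at the midpoint of 5 is a common convention assumed here (the abstract does not state the threshold), and the function name is hypothetical.

```python
# A minimal sketch of quadrant labelling on the valence-arousal plane.
# The midpoint threshold of 5 on DEAP's 1-9 rating scale is an assumption.

def emotion_region(valence, arousal, threshold=5.0):
    """Map a (valence, arousal) rating pair to one of the four regions."""
    v = "H" if valence > threshold else "L"   # High or Low Valence
    a = "H" if arousal > threshold else "L"   # High or Low Arousal
    return f"{v}V{a}A"  # e.g. "HVHA" = High Valence High Arousal

# One hypothetical rating pair per quadrant:
labels = [emotion_region(7.2, 8.1), emotion_region(7.2, 2.3),
          emotion_region(3.0, 6.5), emotion_region(2.1, 1.9)]
# → ["HVHA", "HVLA", "LVHA", "LVLA"]
```

These four labels are the targets the 1DCNN-GRU and 1DCNN-LSTM classifiers are trained to predict from the EEG signals.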
Elijah Macharia, Prof. Waweru Mwangi, Dr. Michael Kimwele, School of Computing and IT, Jomo Kenyatta University of Agriculture & Technology, Nairobi, Kenya
Time, effort, and money needed to keep up with the maintenance of computer software have always been viewed as greater than the time taken to develop it. Likewise, the vagueness in determining the maintainability of software at the beginning phases of software development makes the process more complicated. This paper demonstrates the necessity and significance of software maintainability at the design phase and builds a multilinear regression model, ‘Maintainability Estimation Framework and Metrics for Object Oriented Software (MEFOOS)’, by extending the MOOD metrics. The framework estimates the maintainability of object-oriented software components in terms of their testability, understandability, modifiability, reusability and portability by using design-level object-oriented metrics of the software components. Such early measurement of maintainability will help software designers to modify a software component, if there is any shortcoming, in the early stages of design, and consequently improve the maintainability of the final software. As a result, the time, effort, and money required to maintain the software are reduced significantly. The framework has been validated through proper statistical measures, and a logical interpretation has been drawn.
Software maintenance, object-oriented design, software metrics, software maintainability, mood metrics, software component, maintainability model.
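The multilinear-regression idea behind such a maintainability model can be sketched briefly. This is a hedged illustration, not the MEFOOS model itself: the metric values and maintainability scores below are hypothetical, and the fitting is ordinary least squares via the normal equations.

```python
# A hedged sketch of fitting a multilinear regression model that predicts a
# maintainability score from design-level metrics. Data here is hypothetical.

def fit_multilinear(X, y):
    """Least-squares fit of y ≈ b0 + b1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(r) for r in X]           # prepend intercept column
    n = len(rows[0])
    # Build the normal equations A b = c, where A = X^T X and c = X^T y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * n
    for i in reversed(range(n)):
        b[i] = (c[i] - sum(A[i][k] * b[k] for k in range(i + 1, n))) / A[i][i]
    return b

# Hypothetical design-level metric pairs (e.g. coupling, inheritance depth)
# with scores generated exactly from y = 1 + 2*x1 + 3*x2 for illustration.
X = [(1, 2), (2, 1), (3, 3), (4, 2), (5, 5)]
y = [1 + 2 * a + 3 * c for a, c in X]
coeffs = fit_multilinear(X, y)  # ≈ [1.0, 2.0, 3.0]
```

Once fitted on historical components, such coefficients let a designer estimate maintainability at the design phase, before any code exists.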
Meghang Nagavekar and Arthur Gomes, Manipal Institute of Technology, Manipal-576104, India
Based on Real-Time Operating System (RTOS) concepts, a continuous data transceiver system is designed. Wireless data transmission is enabled using the HC-12 board as a Radio Frequency (Bluetooth) module. An STM32 microcontroller is chosen to perform the data/signal-processing computations. An open-source middleware, FreeRTOS, is used to implement the RTOS on the microcontroller. The complete transceiver system consists of an electronic remote controller as the transmitter and a multi-purpose electronic driver setup as the receiver. The receiver module can be integrated into various systems as per the user’s requirements. The controller’s applications in future research range from the manual operation of industrial machinery to the safety testing and prototyping of medical robots. The overall system is fast, reliable and convenient.
Embedded Systems, Radio Frequency, Bluetooth, FreeRTOS, STM32.
Shravan K Donthula1 and Supravat Debnath2, 1Department of Electrical Engineering, Indian Institute of Technology, Hyderabad, India, 2Integrated Sensor Systems, Centre for Interdisciplinary Programs, Indian Institute of Technology, Hyderabad, India
This paper describes the implementation of a 4-channel, 10-bit, 1 GS/s time-interleaved analog-to-digital converter (TI-ADC) in 65 nm CMOS technology. Each channel consists of an interleaved T/H and ADC array operating at 250 MS/s, and each ADC array consists of 14 time-interleaved sub-ADCs. This configuration provides a high sampling rate even though each sub-ADC works at a moderate sampling rate. We selected a 10-bit successive approximation ADC (SAR ADC) as the sub-ADC, since this architecture is most suitable for low power and medium resolution. The SAR ADC works on a binary search algorithm, resolving one bit at a time. The target sampling rate was 20 MS/s in this design; however, the sampling rate achieved is 15 MS/s. As a result, the 10-bit SAR ADC operates at 15 MS/s with a power consumption of 560 µW at a 1.2 V supply and achieves an SNDR of 57 dB (i.e. an ENOB of 9.2 bits) for a near-Nyquist-rate input. The resulting Figure of Merit (FoM) is 63.5 fJ/step. The achieved DNL and INL are +0.85/−0.9 LSB and +1.0/−1.1 LSB respectively. The 10-bit SAR ADC occupies an active area of 300 µm × 440 µm. The functionality of a single-channel TI-SAR ADC has been verified by simulation with an input signal frequency of 33.2 MHz and a clock frequency of 250 MHz. The desired SNDR of 59.3 dB has been achieved with a power consumption of 11.6 mW, giving a FoM of 60 fJ/step.
ADC, SAR, TI-ADC, LSB, MSB, T/H, SCDAC, CDAC, SFDR, SINAD, SNR, TG, EOC, D-FF, MIM, MOM, DNL, INL.
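The binary search performed by a SAR ADC, mentioned in the abstract above, can be modelled behaviourally. This is an idealized software sketch, not the authors' circuit: the 1.2 V reference matches the supply quoted above, but the ideal DAC and comparator are assumptions.

```python
# A behavioural sketch of a 10-bit SAR conversion: the binary search resolves
# one bit per cycle, comparing the input against an ideal DAC output.

def sar_convert(vin, vref=1.2, bits=10):
    """Return the digital code for vin using successive approximation."""
    code = 0
    for i in reversed(range(bits)):        # MSB first, one bit per cycle
        trial = code | (1 << i)            # set the bit under test
        vdac = trial * vref / (1 << bits)  # ideal DAC output for trial code
        if vin >= vdac:                    # comparator decision
            code = trial                   # keep the bit, else clear it
    return code

# With vref = 1.2 V, one LSB is 1.2/1024 ≈ 1.17 mV; a mid-scale input
# of 0.6 V resolves to code 512 after ten comparisons.
code = sar_convert(0.6)  # → 512
```

Ten comparator decisions thus yield a 10-bit code, which is why a SAR sub-ADC trades conversion cycles for low power and moderate speed, and why 14 of them are interleaved per channel to reach 250 MS/s.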
A. Suresh, S. Shyama, Sangeeta Srivastava and Nihar Ranjan, Embedded Systems, Product Development and Innovation Center (PDIC), BEL, India
Sensing of analogue signals such as voltage, temperature, pressure and current is required to acquire real-time analog signals in the form of digital streams. Most static analog signals are converted into voltage using sensors, transducers, etc., and then measured using ADCs. The digitized samples from the ADC are collected through either a serial or a parallel interface and processed by programmable chips such as processors, controllers, FPGAs and SoCs, where the appropriate mission-critical decisions are taken in the system. In some cases, multichannel ADCs are used to save layout area when the functionality must be realized in a small form factor. In such scenarios, a parallel interface for each channel is not preferred, considering the larger number of interfaces and traces between the components. Given the exponential growth of serial interfaces for high-speed applications, the latest ADCs on the market provide serial interfaces even for multichannel support, multiplexing n channels. A custom sink multichannel IP core has been developed in VHDL to interwork with multichannel, time-division-multiplexed ADCs with a serial interface. The developed IP core can be used either as-is, with an SPI interface compliant with this paper's specification, or with modifications depending on how the SPI interface varies in its number of channels, sample size, sampling frequency, data-transfer clock, control signals, and the sequence of operations performed to configure the ADC and achieve the required data transfer between the ADC and the programmable chip, here an FPGA. The efficiency of the implementation is validated using measurements of throughput and accuracy. A ZYNQ FPGA and an LTC2358 ADC are used to evaluate the developed IP core. The Integrated Logic Analyser (ILA), an integrated verification tool of Vivado, is used for verification.
No third-party tool is required; the Synopsys Discovery AMS platform is used.
ADC, Sensor, Multichannel, Accuracy, Sink Synchronization, FPGA, VHDL.
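The sink-side job such an IP core performs can be sketched behaviourally (in software rather than VHDL): the ADC shifts samples out serially, one fixed-width word per channel per conversion, and the sink regroups them by channel. The 8-channel, 18-bit framing below is an assumption for illustration, not the paper's verified configuration.

```python
# A behavioural model of deserializing a time-division-multiplexed serial
# ADC frame into per-channel samples. Channel count and sample width are
# assumptions chosen for illustration.

CHANNELS = 8
SAMPLE_BITS = 18

def deserialize(bitstream):
    """Split the serial bitstream of one conversion cycle into channel words."""
    assert len(bitstream) == CHANNELS * SAMPLE_BITS, "incomplete frame"
    return [int(bitstream[i * SAMPLE_BITS:(i + 1) * SAMPLE_BITS], 2)
            for i in range(CHANNELS)]

# One conversion cycle in which channel k carries the value k, MSB first:
frame = "".join(format(k, "018b") for k in range(CHANNELS))
samples = deserialize(frame)  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

In the actual IP core this regrouping happens in hardware, one bit per SPI clock, but the framing logic is the same: count SAMPLE_BITS clocks per channel word and CHANNELS words per conversion.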
Copyright © VLSIE 2021