📅Nov 18, 2019
(Universitat Autonoma de Barcelona)
📅Oct 06, 2019
Collecting mushrooms for human consumption is a very popular activity in Catalonia. It is so popular that the government often discusses methods to control access to the country's forests, including tolls. There are hundreds of different mushroom species; some are highly prized, while others are poisonous and can even cause death. A number of fatalities occur every year because of mushroom poisoning.
The exact identification of mushroom species is an important challenge that can save lives. The aim of this project is to build a machine learning system that identifies mushroom species from photos taken with mobile phones.
👤Miguel Ángel Castillo-Martínez
(National Polytechnic Institute of Mexico)
📅Sep 30, 2019
When skin cancer is not detected at an early stage, it can cause metastasis; consequently, the cancer spreads throughout the body. Based on this fact, the proposal consists of an image processing and machine learning approach that lets a computer assist in detecting cancer in acquired images according to existing patterns.
📁High Performance Computing
📅Nov 21, 2019
Neural architecture search (NAS) is a technique for finding a neural network architecture for domain-specific applications. The search tool itself is usually based on reinforcement learning, with a recurrent neural network generating candidate models, but it would take a long time to find good candidates by searching and testing all possible architecture combinations, as each candidate needs to be trained on data. Moreover, as latency, power consumption, and chip area are highly hardware-sensitive, there is no guarantee that the results will meet the user's requirements without modeling hardware characteristics in the search process. To address these issues specifically for Edge AI applications on the OpenVINO Starter Kit, we propose a NAS framework that accelerates the search process for the FPGA and meets the accuracy and latency required by the user. Furthermore, our framework will optimize the model after the candidates are found. We will conduct case studies to demonstrate the effectiveness of our framework for Edge AI applications such as image classification.
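One common way to fold hardware characteristics into the search, as described above, is to scale each candidate's accuracy by a soft latency penalty (in the style of MnasNet's multi-objective reward). The sketch below is illustrative only: the function name, the exponent value, and the 20 ms target are assumptions, not this framework's actual API.

```python
# Hypothetical sketch of a hardware-aware NAS reward, where latency
# measured on the target FPGA discounts a candidate's accuracy score.

def hw_aware_reward(accuracy, latency_ms, target_ms, w=-0.07):
    """Scale accuracy by a soft latency penalty.

    Candidates slower than the target are penalized smoothly instead of
    being rejected outright, so the search can trade accuracy for speed.
    """
    return accuracy * (latency_ms / target_ms) ** w

# A candidate that meets the 20 ms target keeps its full score, while a
# slightly more accurate but 2x-slower candidate is discounted below it.
fast = hw_aware_reward(accuracy=0.92, latency_ms=20.0, target_ms=20.0)
slow = hw_aware_reward(accuracy=0.94, latency_ms=40.0, target_ms=20.0)
```

The controller then maximizes this single scalar, so architectures that miss the latency target are gradually pushed out of the candidate pool rather than discarded by a hard constraint.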
📅Sep 30, 2019
This project is dedicated to the implementation of an FPGA-based acoustic keyword spotting (KWS) system for the Portuguese language. The system performs real-time processing, using MFCC extraction as the pre-processing step and a convolutional neural network (CNN) as the classifier.
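The MFCC front end mentioned above can be sketched in NumPy as follows. The frame size, hop, filter count, and sample rate are illustrative assumptions, not the project's actual configuration; the FPGA pipeline would compute the same stages in fixed-point hardware.

```python
import numpy as np

# Minimal MFCC sketch: pre-emphasis -> framing/windowing -> power
# spectrum -> mel filterbank -> log -> DCT-II.

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_mels=40, n_mfcc=13):
    # 1) Pre-emphasis boosts high frequencies.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2) Slice into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # 3) Power spectrum via the real FFT.
    n_fft = 512
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 4) Triangular mel filterbank maps 257 FFT bins to n_mels bands.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # 5) DCT-II decorrelates the log-mel energies; keep n_mfcc coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_mfcc)[:, None])
    return logmel @ dct.T  # shape: (n_frames, n_mfcc)

feats = mfcc(np.random.randn(16000))  # one second of audio -> (98, 13)
```

The resulting (frames × coefficients) matrix is what the CNN classifier consumes as its input "image".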
(Tokyo Institute of Technology)
📅Oct 15, 2019
This project is an FPGA implementation of an accurate, real-time monocular depth estimator. Monocular depth estimation infers depth from a single RGB image. Estimating depth is important for understanding a scene, and it improves the performance of 3D object detection and semantic segmentation. There are also many applications that require depth estimation, such as robotics, 3D modeling, and driving automation systems. Monocular depth estimation is extremely effective in applications where stereo images, optical flow, or point clouds cannot be used. Moreover, it opens the possibility of replacing an expensive radar sensor with a general RGB camera.
We choose CNN (Convolutional Neural Network)-based monocular depth estimation, since stereo estimation requires larger resources and CNN schemes can produce accurate, dense estimates. Estimating depth from 2D images is easy for humans, but it is difficult to implement an accurate system under limited device resources, because CNN schemes require a massive number of multiplications. To handle this, we adopt 4-bit and 8-bit quantization for the CNN and weight pruning for the FPGA implementation.
Our CNN-based estimator is demonstrated on the OpenVINO Starter Kit and a Jetson TX2 GPU board to compare performance, inference speed, and energy efficiency.
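The quantization step mentioned above can be illustrated with a symmetric linear scheme that maps float weights to narrow integers. This is only a sketch under assumed details (per-tensor scale, round-to-nearest); the project's actual scheme (per-channel scales, rounding mode, 4-bit packing) may differ.

```python
import numpy as np

# Symmetric linear quantization: q = round(w / scale), where the scale
# maps the largest weight magnitude onto the integer range.

def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64).astype(np.float32)
q8, s8 = quantize(w, 8)
w8 = dequantize(q8, s8)  # reconstruction error is bounded by scale / 2
```

On an FPGA this replaces floating-point multiplies with small integer multiplies, which is what makes the massive number of CNN multiplications tractable in limited logic.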
📅Oct 08, 2019
The aim of our project is to develop a real-time video depth-reconstruction device using an FPGA.
There are many classical algorithms for depth-map reconstruction, but even the best of them are slow: processing a single frame takes several seconds, so these approaches do not work in real time.
Within this project, we intend to develop a device that speeds up depth-map reconstruction, handling the task in real time without reducing the quality of the result.
To construct a real-time depth map, we use a deep neural network with a specialized architecture, implemented on the FPGA, which processes the two images of a stereo pair simultaneously.
The FPGA makes this process more efficient than a CPU thanks to its parallel architecture and pipelining, so we achieve a large speedup through parallel data processing and a pipelined data flow.
📅Nov 21, 2019
📁High Performance Computing
(National Research University "Moscow Power Engineering Institute")
📅Oct 09, 2019
The task of image segmentation and its further real-time processing is one of the major computer vision research areas. Segmentation of a video sequence is essential for object detection, motion-parameter estimation, and the automation of big-data analysis.
Segmenting a Full HD video sequence in real time is impossible using CPU computing alone. Therefore, an accelerator is needed to perform the specific pipeline processing. The ideal platform for this task is an FPGA, with its low power consumption, high performance, and reconfigurable computing.
We offer a solution for segmenting a video sequence at Full HD resolution (1920x1080) with a frame rate of 30 frames per second. Data is sent from a single camera in real time. As an example application of our solution, we remove selected objects from the video sequence and recover the background behind the deleted parts of the image.
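One simple way to realize object removal with background recovery, sketched here as an assumption rather than the project's actual pipeline, is to build a background model with a temporal median over frames, segment the object by thresholded difference, and fill the object's pixels from the model. The threshold and frame counts below are illustrative.

```python
import numpy as np

# Temporal-median background model with object removal on toy data.
rng = np.random.default_rng(1)
h, w = 24, 32
scene = rng.random((h, w))                      # static background
frames = np.stack([scene + rng.normal(0, 0.01, (h, w)) for _ in range(9)])
frames[4, 5:10, 5:10] = 1000.0                  # an object in one frame

background = np.median(frames, axis=0)          # median ignores the outlier

frame = frames[4]
mask = np.abs(frame - background) > 0.1         # segment the object
restored = np.where(mask, background, frame)    # remove it, recover background
```

An FPGA implementation would compute the same per-pixel operations in a streaming fashion, which is what makes 1920x1080 at 30 fps feasible.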
👤SAMEER BAIG MOHAMMAD
(RAJIV GANDHI UNIVERSITY OF KNOWLEDGE TECHNOLOGIES, NUZVID)
📅Oct 11, 2019
For the past few years, road accidents have been on an increasing trend across the world; even developed countries like the US are not exempt from this problem, with 40,000 accident deaths reported in 2018. This shows the gross severity of the problem, which should be addressed immediately.
Road accidents lead to many unpredictable consequences, such as premature deaths, permanent injuries, loss of earnings, etc. The primary causes of these unexpected accidents are distracted and drowsy driving, over-speeding, violation of traffic rules, drunken driving, and so on. To overcome these problems, we aim to design an efficient and reliable autonomous car.
An autonomous car is capable of sensing its environment and making decisions in accordance with the rules without human intervention. Our design builds on Convolutional Neural Networks (CNNs), machine learning, computer vision, and image/video processing. In our project, the autonomous system will detect the lane and obstacles, calculate the distance from an obstacle to the car, sound the horn when a pedestrian is detected, and control the car's acceleration based on the information given to the system.
At the same time, provision will be made for live audio streaming (especially for the visually impaired) and for displaying a video stream to the persons inside the vehicle.
The parallel architecture of the FPGA will be utilized to reach quick decisions with the CNN.
If time permits, we plan to implement destination arrival using GPS.
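The distance-to-obstacle calculation described above can be done with the standard pinhole-camera relation once an obstacle class of known real-world size is detected: an object of real height H that spans h pixels lies roughly Z = f·H/h away, where f is the focal length in pixels. The sketch below is a hedged illustration; the focal length and pedestrian height are assumed values, not calibration data from this project.

```python
# Pinhole-camera distance estimate from a detection bounding box.

def distance_m(focal_px, real_height_m, bbox_height_px):
    """Z = f * H / h, with f in pixels, H in meters, h in pixels."""
    return focal_px * real_height_m / bbox_height_px

# Example: a 1.7 m pedestrian spanning 85 px with an 800 px focal length.
z = distance_m(800.0, 1.7, 85.0)  # 16.0 m
```

This single-camera estimate is coarse (it assumes the object's true size), but it is cheap enough to run per detection alongside the CNN on the FPGA.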
(Universidade Federal de Pernambuco (UFPE))
📅Oct 08, 2019
iOwlT is a sound-geolocation system inspired by the way a nocturnal owl searches for prey. Using signal multilateration, a technique widely used in telecommunications, and the phase shift of a signal detected by distinct sensors well distributed in space, it can be proven with algebra and physics that in an N-dimensional space only N+1 detectors are needed to accurately determine the origin of an event. Since real life has 3 dimensions, only 4 sound sensors are needed to determine the location of an event.
The FPGA on the DE10-Nano board provides the parallel processing power to treat the audio signals simultaneously, mainly with adaptive digital filters (filters that adapt to the captured sound signals in order to optimize their processing). Combining this with machine learning algorithms trained to recognize the desired event, the present project aims to design an embedded system, mountable on a vehicle, that detects the location of gunshot events and displays them in a mobile application once they are identified.
The system can be used in urban areas to detect the sources of gunshots, and even in no-hunting areas to identify possible poachers. Moreover, the problem being solved is not restricted to a particular source location or to one specific sound: the system can be adapted to recognize other audible patterns.
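The 4-sensor localization claim above can be made concrete with a small time-difference-of-arrival (TDOA) solver: three arrival-time differences relative to a reference microphone define hyperboloids whose intersection is the source, which a damped Gauss-Newton iteration finds numerically. The sensor layout, damping, and iteration count below are illustrative assumptions, not the project's actual array geometry.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def locate(sensors, toa, x0, iters=50, lam=1e-9):
    """Recover a 3-D source from 4 sensor positions and arrival times.

    sensors: (4, 3) positions in meters; toa: (4,) arrival times in s.
    Solves the TDOA residuals (|x-s_i| - |x-s_0|) - c*(t_i - t_0) = 0
    with a damped Gauss-Newton iteration starting from x0.
    """
    dd = C * (toa[1:] - toa[0])  # range differences vs. reference sensor 0
    x = x0.astype(float).copy()
    for _ in range(iters):
        r0 = np.linalg.norm(x - sensors[0])
        ri = np.linalg.norm(x - sensors[1:], axis=1)
        res = (ri - r0) - dd
        # Jacobian rows: unit(x - s_i) - unit(x - s_0)
        J = (x - sensors[1:]) / ri[:, None] - (x - sensors[0]) / r0
        x += np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ res)
    return x

# Synthetic check: emit a sound at a known point and recover it.
sensors = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
true = np.array([0.4, 0.3, 0.6])
toa = np.linalg.norm(sensors - true, axis=1) / C  # emission time set to 0
est = locate(sensors, toa, x0=sensors.mean(axis=0))
```

In the real system the arrival times would come from cross-correlating the filtered microphone signals, a step that maps naturally onto FPGA parallelism.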
(Universiti Malaysia Perlis (UniMAP))
📅Oct 13, 2019
Malaysia is an agriculture-based country, where 70% of the population depends on agriculture. As cutting-edge innovation develops, environmental pollution occurs more frequently, which raises the demands on fruit quality, especially for agricultural farms. Nowadays there are several kinds of environmental pollution and disaster, such as floods, droughts, and river pollution, which can influence the growth and quality of fruit. To maintain good fruit quality, several problems need to be solved, such as the difficulty of detecting fruit ripeness before and after harvest. Fruit grading is currently ineffective and inefficient, as it is a manual process carried out by trained human experts, who identify ripeness by sensing and observing the fruit's skin texture; this manual process is time-consuming and prone to human error. In automated fruit grading systems, appearance (shape, colour, size, and bruises) is generally used to classify a fruit's grade. Therefore, this project aims to develop a green, innovative product that monitors fruit quality using FPGA technology combined with three non-destructive methods: image, odour, and capacitive techniques. The project will integrate fruit recognition and classification, as well as fruit quality assessment, by implementing expert vision and sensor data acquisition through hybrid intelligent algorithms such as a Support Vector Machine-Fuzzy Inference System (SVM-FIS). The system is implemented on the Intel Cyclone V FPGA-SoC available on the DE10-Nano kit, so that data can be collected and analyzed concurrently through an IoT-based implementation. The system's high performance requirements are covered by the FPGA-SoC, since this device offers concurrency, a sufficient number of I/O pins, and low power consumption, which makes it suitable for this product and its applications.
This project is expected to monitor and classify fruit, and also to control and manage the pre- and post-harvest stages, by deploying the FPGA-SoC board in an IoT-based system.
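The fuzzy-inference half of the SVM-FIS idea can be illustrated with a toy grader that maps one color feature (mean hue) to a ripeness grade through triangular membership functions and a weighted-average (Sugeno-style) defuzzification. The hue breakpoints and grade values are illustrative assumptions, not the project's tuned rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ripeness_grade(hue):
    """hue in degrees; green ~120, yellow ~60, red/ripe ~20 (assumed)."""
    mu_unripe = tri(hue, 80.0, 120.0, 160.0)
    mu_half = tri(hue, 40.0, 60.0, 80.0)
    mu_ripe = tri(hue, 0.0, 20.0, 40.0)
    grades = np.array([0.0, 0.5, 1.0])          # unripe, half-ripe, ripe
    mu = np.array([mu_unripe, mu_half, mu_ripe])
    return float((mu * grades).sum() / (mu.sum() + 1e-12))

score_green = ripeness_grade(115.0)  # close to fully unripe
score_red = ripeness_grade(22.0)     # close to fully ripe
```

In the full system, the SVM would classify the fruit type from image features, while a fuzzy stage like this fuses image, odour, and capacitive readings into a smooth quality grade.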