
Regional Final
📁Other: Virtual/Augmented Reality (VR/AR)
👤Kwang Liang Chong (Universiti Tunku Abdul Rahman (UTAR))
📅Jul 08, 2018
Virtual/Augmented reality (VR/AR) is a fascinating way to travel using nothing more than the power of technology. With a headset and motion tracking, VR/AR lets you look around a virtual space as if you were actually there. It has also been a promising technology for decades that never truly caught on.

Position tracking and depth sensing, which measure the user's head motion and the surrounding environment, are essential to achieving immersion and presence in virtual reality. In VR/AR, performance is extremely important, as every millisecond counts.

In our project, we propose an FPGA-based Kalman filter for analyzing IMU data for position tracking, together with a stereo camera for depth sensing. IMU sensor data contains noise and interference and is therefore inaccurate for position tracking, so it requires filtering. The Kalman filter is an algorithm that involves complex matrix calculations; a fully hardware, FPGA-based Kalman filter allows high-speed arithmetic implementations and pipelining. The stereo-vision depth-processing pipeline will also run on the FPGA, producing a dense depth map. Object detection and tracking will then run on the integrated ARM processor using the dense depth map generated by the FPGA. Depth sensing on an FPGA consumes less power while delivering higher performance than a digital signal processor.
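To illustrate the filtering stage, here is a minimal Python/NumPy sketch of a constant-velocity Kalman filter applied to a noisy position measurement; the state layout, noise covariances, and sample rate are illustrative assumptions rather than the proposed hardware design, which would implement the same predict/update matrix arithmetic in pipelined FPGA logic.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter (illustrative values only).
dt = 0.01                      # assumed 100 Hz IMU sample period
F = np.array([[1.0, dt],       # state transition for [position, velocity]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # we observe position only
Q = 1e-4 * np.eye(2)           # assumed process-noise covariance
R = np.array([[1e-2]])         # assumed measurement-noise covariance

x = np.zeros((2, 1))           # state estimate [position; velocity]
P = np.eye(2)                  # estimate covariance

def kalman_step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x            # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x[0, 0]                         # filtered position estimate

# Example: filter a noisy ramp signal.
for t in range(100):
    noisy_position = 0.5 * t * dt + np.random.normal(scale=0.1)
    filtered = kalman_step(noisy_position)
```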

The low power consumption and small package size of the FPGA allow the realization of a battery-powered cordless headset to maximize the user experience.

👀 2444   💬 13
Regional Final
📁Machine Learning
👤Pradeep Kathirgamaraja (ParaQum Technologies)
📅Jul 07, 2018
The Embedded Neural Coprocessor is a next-generation embedded processor with the capability to execute machine-learning functions efficiently. Our coprocessor natively supports small-scale convolutional neural network (CNN) computation using reconfigurable layers implemented in the programmable logic of the Cyclone V. First, the user designs the neural network using the provided API and trains it offline (on servers), then feeds the model to our coprocessor. The coprocessor can then execute the feed-forward computation and generate output for the target application, e.g. real-time object recognition, face recognition, or voice-command recognition. We have separate dedicated hardware logic to emulate each type of computationally intensive layer, such as convolution/Fire layers and max-pool layers. For example, if a network contains a convolution layer, there is dedicated hardware that performs the convolution operation. Layer parameters (dimensions, inputs, etc.) are dynamically configurable according to requirements. Similarly, we have a configurable max-pool hardware module. We support the most commonly used layer operations; if the user requires a different operation, it is supported by performing the computation on the ARM processor. Layers are connected through memory: each layer's output is stored in memory.
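To illustrate the intended user workflow, the sketch below shows how a small CNN might be described and handed to the coprocessor; all class and method names (Network, add_conv, and so on) are invented for illustration and are not the actual API.

```python
# Hypothetical user-side API sketch (names are illustrative, not the real API).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    kind: str            # "conv", "maxpool", or "fc"
    params: dict         # layer dimensions, kernel size, etc.

@dataclass
class Network:
    layers: List[Layer] = field(default_factory=list)

    def add_conv(self, in_ch, out_ch, kernel):
        self.layers.append(Layer("conv", {"in": in_ch, "out": out_ch, "k": kernel}))

    def add_maxpool(self, size):
        self.layers.append(Layer("maxpool", {"size": size}))

    def add_fc(self, in_dim, out_dim):
        self.layers.append(Layer("fc", {"in": in_dim, "out": out_dim}))

# 1. Describe a small CNN, e.g. for object recognition on a 28x28 grayscale input.
net = Network()
net.add_conv(in_ch=1, out_ch=8, kernel=3)   # 28x28 -> 26x26, 8 channels
net.add_maxpool(size=2)                     # 26x26 -> 13x13
net.add_fc(in_dim=8 * 13 * 13, out_dim=10)  # votes for 10 classes

# 2. Train offline on a server, then serialize the layer descriptors plus the
#    trained weights for the coprocessor. At run time each descriptor configures
#    the matching hardware module, and layer outputs are exchanged through the
#    shared memory described above.
```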

Our coprocessor can achieve high speed compared to a typical sequential embedded processor because we use hardware-level parallelization. It can be used in many real-time applications and is cost-effective compared to a GPU or other dedicated hardware accelerators. We separate the device used for training (servers) from the device running the application (the coprocessor), because training is a very time-consuming process that needs heavy computing power, whereas inference can run in the field on the coprocessor. Ultimately, we are able to run a small-scale neural network on an embedded device at much greater speed.

👀 3834   💬 26
Regional Final
📁Machine Learning
👤Samira Peiris (Sri Lanka Institute of Information Technology )
📅Jul 06, 2018
In the synthetic fabric printing industry, fabric shrinks widthwise due to the bleaching and dyeing process. To prevent this, the fabric is put through a process called 'stentering'. This process adjusts the GSM (grams per square meter) of a fabric by stretching it under high temperature. The required temperature and stretching forces are pre-determined and set up on the machine. However, at the end of the process, the GSM of the fabric must be checked manually by cutting a standard size of fabric and measuring its weight. This must be done for a small batch of the fabric, and the settings must be fine-tuned for each lot.

This project aims to utilize the high-speed processing capabilities of FPGAs to create a system that can be taught to determine the GSM of fast-moving fabric by image-processing magnified still images of the fabric.
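As a software illustration of the 'taught' mapping, the sketch below regresses GSM from a few simple texture features of a magnified fabric patch; the features, the regression model, and the placeholder data are assumptions standing in for whatever the FPGA pipeline ultimately computes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def texture_features(gray_patch):
    """Toy texture features from a grayscale fabric patch (values 0..255)."""
    f = np.abs(np.fft.fft2(gray_patch))
    return np.array([
        gray_patch.mean(),               # overall brightness
        gray_patch.std(),                # contrast
        f[1:20, 0].mean(),               # low-frequency energy along warp
        f[0, 1:20].mean(),               # low-frequency energy along weft
    ])

# Training: patches cut from fabric whose GSM was measured by weighing.
# Here both patches and labels are placeholders.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(20)]
gsm_labels = rng.uniform(80, 200, 20)            # placeholder GSM values
X = np.stack([texture_features(p) for p in patches])
model = LinearRegression().fit(X, gsm_labels)

# Inference: estimate GSM of a new patch grabbed from the moving fabric.
estimate = model.predict(texture_features(patches[0]).reshape(1, -1))[0]
```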

👀 947   💬 5
Regional Final
📁High Performance Computing
👤Atul Kumar (Bharati Vidyapeeth College Of Engineering)
📅Jul 01, 2018
In the modern world, the problem of loneliness in old age is very common and critical for society in every respect. Demographic research on 'Living alone: One-person households' reports that about 30% of the population lives alone, and this share has been increasing over time; it was 19% in 2000, and older women in particular are increasingly likely to live alone. There can be many reasons for someone to live alone, and it is surely a difficult situation, one that can push a person into a state of loneliness. This can cause problems such as lack of companionship, living in an unsafe environment, and difficulty completing errands, remembering medication, keeping track of personal details, and so on.

The proposed system will communicate virtually with the user's house and with the user by collecting real-time data and status, identify the user's needs and health status, and take the necessary actions on the user's behalf.

👀 758   💬 5
Regional Final
📁Other: Industrial Automation
👤Isuru Senevirathne (University of Moratuwa)
📅Jun 30, 2018
Productivity and efficiency are key factors in the clothing industry. Despite improvements in sewing machine technology, sewing machines are not yet fully automated. Using FPGA technology, we propose to improve the capabilities of the automated sewing machine through fast image processing, sequencing, and actuator control.

In the clothing industry, most sewing machines are not automated and are operated by an individual, for reasons such as high cost, unreliability, low speed, and a lack of cloth-handling capability. By redesigning the sewing machine on an FPGA platform, these drawbacks can be avoided. In the primary design, a simple sewing machine is expected to be implemented using machine vision, process sequencing, and actuator control. These processes can be performed smoothly and efficiently on the FPGA platform.
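As a sketch of the machine-vision component, the snippet below shows one conventional way to locate the fabric edge in a camera frame with OpenCV and derive a correction signal for the feed actuators; it is a software illustration of the idea, not the FPGA pipeline itself, and the target seam position is an assumed parameter.

```python
import cv2
import numpy as np

def fabric_edge_offset(frame_bgr, target_column=320):
    """Estimate how far the fabric edge is from the desired seam line (pixels)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # binary edge map
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None                              # no edge found in this frame
    edge_column = int(np.median(xs))             # dominant edge position
    return edge_column - target_column           # signed error for the actuators

# A proportional controller could convert this pixel error into an actuator
# command that steers the cloth back onto the seam line.
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
offset = fabric_edge_offset(frame)
```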

👀 904   💬 9
Regional Final
📁Machine Learning
👤Dong Tran Le Thang (Duy Tan University)
📅Jun 26, 2018
Drivers can barely keep their eyes open, according to the AAA Foundation for Traffic Safety's annual Traffic Safety Culture Index. More than a third of drivers report having fallen asleep behind the wheel at some point in their lives, and more than one in ten has fallen asleep behind the wheel in the past year. A Foundation study completed in November 2014 found that the impact of drowsy drivers on the road is considerable: drowsy drivers are involved in an estimated 21% of fatal crashes, up from 16.5% in the previous 2010 study, as most drift out of their lanes or off the road. The drivers themselves are often the victims who die in single-car crashes. To address the danger, vehicle manufacturers as well as technology companies are unveiling sophisticated new driver drowsiness alert systems. Driving support systems such as car navigation are now common and assist drivers in several ways, and it is important for such systems to detect the driver's state of consciousness. In particular, detecting drowsiness could prevent collisions caused by drowsy driving. In our research, we implement several image-processing methods for detecting driver drowsiness. The system is based on facial-image analysis and warns the driver of drowsiness or inattention to prevent traffic accidents.
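One widely used facial-image cue for drowsiness is the eye aspect ratio (EAR) computed from eye landmarks. The sketch below shows that calculation in Python/NumPy as an illustration of the general approach, not necessarily the exact method used in this project; the landmark source, threshold, and frame counts are assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in the common
    68-point face model. EAR drops toward 0 when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.25        # assumed; tuned per camera and driver
CLOSED_FRAMES_ALARM = 48    # e.g. about 1.6 s at 30 fps

closed_frames = 0
def update(left_eye, right_eye):
    """Call once per frame with eye landmarks; returns True when an alarm
    should be raised because the eyes have stayed closed too long."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALARM
```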

👀 1486   💬 25
Regional Final
📁Machine Learning
👤Kanchana Ranasinghe (University of Moratuwa)
📅Apr 30, 2018
Deep learning dominates contemporary machine vision, with convolutional neural networks (CNNs) being the state-of-the-art recognition system. In our work, we attempt to detect and accurately recognize diseases in plants, given a considerable dataset of images. Our focus is on recognizing the exact disease by applying image-processing techniques to close-up images of plant leaves, and we employ an FPGA for real-time inference of the trained CNNs. For detection, we obtain multi-spectral images (visual/RGB) and calculate the NDVI index for an aerial video feed. Areas beyond a threshold are marked as sick, and a map is generated with the locations of affected regions.
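For the detection stage, the NDVI computation and thresholding could look like the following NumPy sketch; it assumes a near-infrared band is available alongside the red band in the multi-spectral feed, and the health threshold is illustrative.

```python
import numpy as np

def ndvi_map(nir, red, eps=1e-6):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); bands as float arrays."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def sick_mask(nir, red, threshold=0.4):
    """Mark pixels whose NDVI falls below the (illustrative) health threshold."""
    return ndvi_map(nir, red) < threshold

# Example with dummy bands; in practice each aerial video frame is processed
# and the sick-pixel mask is accumulated into a map of affected regions.
nir = np.random.rand(480, 640)
red = np.random.rand(480, 640)
mask = sick_mask(nir, red)
```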

We treat the recognition of different diseases as a classification task and train a CNN on a collected dataset. We would train a separate CNN for each plant type, each of which would recognize the diseases relevant to that plant species. Because of the limited quantity of training data and the nature of this task, we focus on transfer learning: the CNN is provided with knowledge of general images, and plant-disease-specific data is used only to learn the very high-level features. In addition, we plan to look at active learning to find the images most in need of labelling, so that the most useful data can be collected instead of blanket data collection. Inference with this trained neural network will run on the FPGA, as real-time performance is expected for this computationally heavy task.

Our system will be a single-board-computer-based device that uses the FPGA for complex computations. Images obtained are processed on the device itself, mostly using the FPGA; since CNN inference runs on the FPGA, this is possible in real time. We are testing the Inception, ResNet, and AlexNet CNN architectures and will carry out training of the neural network on a high-performance device (a one-time task). The initial version would be built in Python with TensorFlow, OpenCV, and NumPy. The trained neural network architecture would then be rebuilt at a low level, and the trained weights would be reused directly.
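A minimal TensorFlow/Keras sketch of the transfer-learning setup described above, using a frozen ImageNet-pretrained ResNet backbone with a small trainable head per plant species; the input size, class count, and dataset handles are placeholders.

```python
import tensorflow as tf

NUM_DISEASES = 5          # placeholder: diseases known for one plant species
IMG_SIZE = (224, 224)

# Frozen backbone pretrained on general images (transfer learning).
backbone = tf.keras.applications.ResNet50(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_DISEASES, activation="softmax"),  # trainable head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from the labelled
# leaf images; only the small head learns from the plant-disease data.
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# After training, the weights can be exported and reused in the low-level
# FPGA inference implementation.
```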

Our key beneficiaries include small-scale home gardeners (people without access to specialized knowledge of plant diseases), greenhouse-based farmers, and plant disease research groups.

Possible future extensions of this work include detecting diseases (anomaly detection) from general aerial images. Given the growing ubiquity of drone-based agriculture for large fields, this will prove invaluable in the near future.

👀 3093   💬 72
Regional Final
📁Machine Learning
👤Bibin Johnson (Indian Institute of Space Science and Technology)
📅Apr 30, 2018
We propose a novel VLSI architecture for performing obstacle avoidance using a convolutional neural network (CNN) in real time. CNN implementations on a CPU/GPGPU involve large computation times while consuming significant power; by implementing the CNN on an FPGA, we can reduce computation time and power dissipation considerably. Deep learning techniques are commonly used in search engines, handwriting recognition, speech recognition, object classification, etc. Autonomous vehicles are set to hit the roads soon, and when and where these cars should turn to avoid obstacles can be determined with properly trained neural networks. In this work, CNN-based optical flow calculates the distance, direction, and apparent velocity at which an object is approaching the vehicle, and this information is used for obstacle avoidance.
The stream of images generated by an onboard camera will be fed to the CNN in real time. Grayscale images are used in this project for computational feasibility. The CNN implementation has multiple convolution layers, pooling layers, and a fully connected layer. A CNN convolves individual features across the whole image, which is better than conventional machine learning. In the filtering phase, we convolve features across the image stream, which results in a stack of filtered images. This is followed by pooling to shrink the image size, and we apply a ReLU nonlinearity after each layer. Finally, a fully connected layer converts the list of features into a list of votes.
The hardware efficiency of the system can be verified by implementing it on an SoC. The proposed system exploits the FPGA for its parallel nature, so most of the computation is performed on the FPGA while the CPU acts as an intermediary between the FPGA and the other input/output interfaces. The CNN is trained offline and implemented on the FPGA, while back-propagation is implemented on the CPU. In order to reduce bandwidth, we will use 8-bit images and 4-bit weights so that we can pack more neurons into the FPGA in parallel. We need a large dataset to train the CNN, but due to the unavailability of large training datasets, we would use synthetic datasets.
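The bandwidth-reduction idea (8-bit images, 4-bit weights) can be illustrated with the NumPy sketch below, which uniformly quantizes a layer's trained weights to signed 4-bit integers plus a per-layer scale; the proposal does not specify the exact quantization scheme, so this uniform scheme is an assumption.

```python
import numpy as np

def quantize_weights_4bit(w):
    """Uniformly quantize float weights to signed 4-bit integers (-8..7)
    with a single per-layer scale factor."""
    max_abs = np.abs(w).max()
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for reference/accuracy checks."""
    return q.astype(np.float32) * scale

# Example: a 3x3 convolution kernel bank quantized for the FPGA datapath.
w = np.random.randn(8, 1, 3, 3).astype(np.float32)
q, scale = quantize_weights_4bit(w)
error = np.abs(w - dequantize(q, scale)).max()   # worst-case rounding error
# Two 4-bit weights can later be packed per byte to halve memory bandwidth.
```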

👀 1078   💬 43
Semifinalist
📁Machine Learning
👤Naufal Ridho Hizamiar (Bandung Institute of Technology, Indonesia)
📅May 24, 2018
COLORIZER is a proposed image colorization system designed specifically for FPGA. Image colorization is the process of adding color to a grayscale image. Using COLORIZER, we let the system colorize the photo automatically by mapping a grayscale image (1-channel pixels) to a colorized output image (3-channel RGB pixels). The COLORIZER system uses a deep-learning approach, an Artificial Neural Network (ANN), to recognize patterns in an image so that colorization can be automated. The training process is usually very slow on a single-threaded CPU because of its heavy computational load. The computation, which consists largely of simple multiplications and accumulations, actually has a lot of parallelism; we therefore need to parallelize the computation and find a way to compensate for its resource demands. We implement the COLORIZER system on an FPGA and also apply a sliding-window technique. Note that the neural network is fully implemented within the FPGA, meaning the FPGA is not merely a hardware accelerator. By applying the sliding-window technique, we only need 3 output perceptrons rather than one per image pixel (3 perceptrons representing the Red, Green, and Blue channels). Furthermore, with this technique we can create many training samples from a single image. The sliding-window technique is widely used in object detection, but we see an opportunity to use it hand-in-hand with an ANN to implement image colorization on an FPGA.
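The sliding-window idea can be sketched in a few lines of Python: every grayscale window drawn from a colour training image becomes one sample whose target is the RGB value of the window's centre pixel, so a single image yields many samples and the network only ever needs 3 output perceptrons. The window size below is an assumed value.

```python
import numpy as np

WIN = 7                      # assumed window size; centre pixel is the target
HALF = WIN // 2

def make_training_set(rgb_image):
    """Build (grayscale window -> centre RGB) samples from one colour image.
    rgb_image: H x W x 3 array with values in [0, 1]."""
    gray = rgb_image.mean(axis=2)                  # simple luminance proxy
    xs, ys = [], []
    h, w = gray.shape
    for r in range(HALF, h - HALF):
        for c in range(HALF, w - HALF):
            window = gray[r-HALF:r+HALF+1, c-HALF:c+HALF+1]
            xs.append(window.ravel())              # WIN*WIN grayscale inputs
            ys.append(rgb_image[r, c])             # 3 outputs: R, G, B
    return np.array(xs), np.array(ys)

# One small image already produces hundreds of samples for the 3-output ANN.
image = np.random.rand(32, 32, 3)
X, Y = make_training_set(image)                    # X: (N, 49), Y: (N, 3)
```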

👀 966   💬 11
Semifinalist
📁Machine Learning
👤Karthikeyan P (Velammal College of Engineering and Technology)
📅May 11, 2018
Project overview:
Jasminum flowers are classified by quality by extracting the colour, shape, and texture of the flower using various techniques and applying a TensorFlow-based machine-learning process.
Purpose:
1. Currently, flowers are classified manually by quality, which is a time-consuming process during which the flowers lose their freshness.
2. The accuracy of manual classification is poor.
These are the existing difficulties that the automation proposed in this project aims to address.
Design goal:
The system should be designed so that there is a sample collector where the Jasminum flowers are collected directly from the farm. The flowers are passed along a conveyor belt. When a flower enters the image-capturing chamber, its image is captured and given to the Raspberry Pi, and the image is then subjected to feature extraction. The colour, shape, and texture of the flower can be identified using the following algorithms: the Color and Edge Directivity Descriptor (CEDD), the Hue-Saturation-Value (HSV) colour space, Local Binary Patterns (LBP), and Zernike moments. The extracted features are fed as inputs to TensorFlow, the open-source software by Google for machine learning and deep learning; it has built-in support for deep learning, tools to assemble neural networks, and mathematical functions for neural networks. Once the output is generated, it is received by the microcontroller, which opens the switch for the corresponding category via the switch controller. The flower moving on the conveyor belt then falls into the box of its category because the switch is open. Thus, the flowers can be classified accordingly. A light-intensity controller is used to control the brightness while capturing the image in the image-capturing chamber. The multiplexer switch and the light-intensity controller can be programmed on the DE10 FPGA board for efficiency.
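As an illustration of the feature-extraction step, the sketch below computes an HSV colour histogram and an LBP texture histogram for one captured flower image using OpenCV and scikit-image; these two descriptors stand in for the full CEDD/HSV/LBP/Zernike set, and the image file name is a placeholder.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def flower_features(bgr_image):
    """Colour (HSV histogram) + texture (LBP histogram) features for one image."""
    # Colour: coarse 2-D hue/saturation histogram.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    colour_hist = cv2.calcHist([hsv], [0, 1], None, [8, 8],
                               [0, 180, 0, 256]).flatten()
    colour_hist /= colour_hist.sum() + 1e-6

    # Texture: uniform local binary patterns on the grayscale image.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    texture_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([colour_hist, texture_hist])

# The resulting feature vector would then be fed to the TensorFlow classifier
# that assigns the flower to a quality category.
image = cv2.imread("flower_sample.jpg")            # placeholder file name
if image is not None:
    features = flower_features(image)
```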
CONCLUSION:
Thus, the Jasminum flowers are segregated according to quality with high accuracy using this methodology. Machine learning makes people's lives easier and more comfortable.

👀 1381   💬 6
Semifinalist
📁Other: Combination of Image Processing, High Performance Computing and Digital Design.
👤Taufik Ibnu Salim (TIU of Instrumentation Development)
📅May 11, 2018
Spatial Digital Image Correlation (DIC) was used to measure the bubble rising terminal velocity in order to obtain the bubble size distribution generated by a mixing pump. This method provides a non-contact, low-cost measurement of bubble size. The digital image correlation setup uses a low-cost line laser diode as the light source and a consumer pocket digital camera to record the light scattered from the rising bubbles inside the chamber. According to the Hadamard–Rybczynski equation, the measured terminal velocity is proportional to the square of the microbubble radius. DIC analysis over multiple interrogation windows was performed to obtain the bubble velocity distribution; the bubble size distribution can therefore be estimated by calculating the radius from the Hadamard–Rybczynski equation. Multiple frame pairs were evaluated over time to track changes in the bubble-radius distribution.
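For reference, in the clean-bubble limit the Hadamard–Rybczynski balance gives v = g·Δρ·r²/(3μ), so r = √(3μv/(gΔρ)). The sketch below applies this to an array of DIC-measured terminal velocities; the fluid properties are assumed values for water at room temperature.

```python
import numpy as np

G = 9.81            # gravitational acceleration, m/s^2
MU = 1.0e-3         # dynamic viscosity of water at ~20 °C, Pa·s (assumed)
DRHO = 998.0        # density difference between water and air, kg/m^3 (assumed)

def bubble_radius(terminal_velocity):
    """Radius (m) from rising terminal velocity (m/s) via Hadamard-Rybczynski,
    clean-bubble limit: v = g * d_rho * r**2 / (3 * mu)."""
    v = np.asarray(terminal_velocity, dtype=float)
    return np.sqrt(3.0 * MU * v / (G * DRHO))

# Example: velocities measured by DIC over the interrogation windows (m/s).
velocities = np.array([1.0e-3, 2.5e-3, 4.0e-3])
radii_um = bubble_radius(velocities) * 1e6         # radii in micrometres
# A histogram of radii_um over all frame pairs gives the bubble size distribution.
```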

👀 1169   💬 19
Semifinalist
📁Other: Orientation Estimation
👤Pratikto Hidayat (Universitas Gadjah Mada)
📅May 10, 2018
We propose an Attitude and Heading Reference System (AHRS) coprocessor that fuses data from a tri-axis Magnetic, Angular Rate, and Gravity (MARG) / Inertial Measurement Unit (IMU) sensor using Madgwick's AHRS sensor-fusion algorithm. By relieving the primary processor of processor-intensive tasks, the coprocessor can accelerate overall system performance.
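For reference, the core of Madgwick's IMU (gyroscope + accelerometer) update step is sketched below in Python/NumPy; such a software model can serve as a golden reference when verifying the coprocessor. The gain beta and sample period are illustrative values, and the MARG (magnetometer) extension is omitted.

```python
import numpy as np

BETA = 0.1     # algorithm gain (illustrative)
DT = 0.01      # sample period in seconds (illustrative, 100 Hz)

def quat_mult(a, b):
    """Hamilton product of quaternions a, b given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def madgwick_imu_update(q, gyro, accel, dt=DT, beta=BETA):
    """One Madgwick update: q is the orientation quaternion [w, x, y, z],
    gyro in rad/s; only the direction of accel is used."""
    q0, q1, q2, q3 = q
    ax, ay, az = accel / np.linalg.norm(accel)

    # Objective function: difference between estimated and measured gravity.
    f = np.array([
        2.0*(q1*q3 - q0*q2) - ax,
        2.0*(q0*q1 + q2*q3) - ay,
        2.0*(0.5 - q1*q1 - q2*q2) - az,
    ])
    # Jacobian of f with respect to the quaternion.
    J = np.array([
        [-2.0*q2,  2.0*q3, -2.0*q0, 2.0*q1],
        [ 2.0*q1,  2.0*q0,  2.0*q3, 2.0*q2],
        [ 0.0,    -4.0*q1, -4.0*q2, 0.0   ],
    ])
    grad = J.T @ f
    grad /= np.linalg.norm(grad)

    # Quaternion rate from the gyro, corrected by the gradient-descent step.
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro])) - beta * grad
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

# Example: start level and integrate one gyro/accel sample.
q = np.array([1.0, 0.0, 0.0, 0.0])
q = madgwick_imu_update(q, gyro=np.array([0.01, 0.0, 0.0]),
                        accel=np.array([0.0, 0.0, 9.81]))
```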

👀 1279   💬 50
