📁Other: Virtual/Augmented Reality (VR/AR)
👤Kwang Liang Chong
(Universiti Tunku Abdul Rahman (UTAR))
📅Jul 08, 2018
Virtual/Augmented reality (VR/AR) is a fascinating way to travel using nothing more than the power of technology. With a headset and motion tracking, VR/AR lets you look around a virtual space as if you were actually there. It has also been a promising technology for decades that never truly caught on.
Position tracking and depth sensing, which measure the user's head motion and the surrounding environment, are essential to achieving immersion and presence in virtual reality. In VR/AR, performance is extremely important, as every millisecond counts.
In our project, we propose an FPGA-based Kalman filter for analyzing IMU data for position tracking, together with a stereo camera for depth sensing. IMU sensor data contains noise and interference and is therefore inaccurate for position tracking, so it requires filtering. The Kalman filter is an algorithm that involves complex matrix calculations; high-speed arithmetic implementations and pipelining can be achieved with a fully hardware, FPGA-based Kalman filter. The stereo-vision depth-processing pipeline will run on the FPGA, outputting a dense depth map. Object detection and tracking will then run on the integrated ARM processor based on the dense depth map generated by the FPGA. Depth sensing on an FPGA consumes less power while delivering higher performance than a digital signal processor.
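As an illustration of the filtering stage, here is a minimal software sketch of a 1-D constant-velocity Kalman filter smoothing noisy position measurements. The noise parameters and the single-axis model are assumptions for illustration; the actual design implements the matrix arithmetic in FPGA hardware.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.01, q=1e-3, r=0.1):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x: state [position, velocity]; P: 2x2 covariance; z: noisy position.
    q (process noise intensity) and r (measurement variance) are assumed values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we only measure position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Filter a noisy constant-velocity trajectory (0.5 m/s, noise std 0.1)
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for k in range(200):
    true_pos = 0.5 * k * 0.01
    x, P = kalman_step(x, P, true_pos + rng.normal(0.0, 0.1))
```

After 200 steps the velocity estimate settles near the true 0.5 m/s even though no velocity is ever measured directly, which is exactly the behavior that makes the filter useful for IMU position tracking.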
The low power consumption and small package size of FPGAs allow the realization of a battery-powered cordless headset that maximizes the user experience.
👤Dong Tran Le Thang
(Duy Tan University)
📅Jun 26, 2018
Drivers can barely keep their eyes open, according to the AAA Foundation for Traffic Safety's annual Traffic Safety Culture Index. More than a third of drivers report having fallen asleep behind the wheel at some point in their lives, and more than one in ten has fallen asleep behind the wheel in the past year. A Foundation study completed in November 2014 found that the impact of drowsy drivers on the road is considerable. Drowsy drivers are involved in an estimated 21% of fatal crashes, up from 16.5% in the previous 2010 study, as most drowsy drivers drift out of their lanes or off the road. Drivers themselves are often crash victims who die in single-car crashes. To address the danger, vehicle manufacturers as well as technology companies are unveiling sophisticated new driver-drowsiness alert systems. Driving support systems, such as car navigation systems, are becoming common, and they support drivers in several respects. It is important for driving support systems to detect the state of the driver's consciousness; in particular, detecting drowsiness could prevent collisions caused by drowsy driving. In our research, we implement several methods for detecting driver drowsiness using image processing techniques. The system analyzes facial images to warn the driver of drowsiness or inattention and thereby prevent traffic accidents.
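One common cue in facial-image drowsiness detection (not necessarily the exact method used in this project) is the eye aspect ratio (EAR), computed from six eye landmarks: it stays roughly constant while the eye is open and drops toward zero when it closes. A sketch, with an assumed threshold and hypothetical landmark coordinates:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six 2-D eye landmarks, ordered p1..p6 around
    the eye as in the common 68-point facial-landmark layout."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # assumed value; tuned per driver in practice

# Hypothetical landmark positions for an open and a nearly closed eye
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

ear_open = eye_aspect_ratio(open_eye)      # well above the threshold
ear_closed = eye_aspect_ratio(closed_eye)  # below it: possible drowsiness
```

In a real system the EAR would be tracked over consecutive frames, and an alert raised only when it stays below the threshold for some time, to avoid triggering on blinks.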
(University of Moratuwa)
📅Apr 30, 2018
Deep learning dominates contemporary machine vision, with Convolutional Neural Networks (CNNs) as the state-of-the-art recognition system. In our work, we attempt to detect and accurately recognize diseases in plants, given a considerable dataset of images. Our focus is on recognizing exact diseases by applying image-processing techniques to close-up images of plant leaves, and we employ an FPGA for real-time inference with the trained CNNs. For detection, we obtain multi-spectral images (visual/RGB) and calculate the NDVI index for an aerial video feed. Areas beyond a threshold are marked sick, and a map of affected regions is generated.
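The per-pixel NDVI computation itself is simple; a sketch follows, where the 0.3 threshold and the below-threshold-means-sick convention are assumptions for illustration (NDVI uses a near-infrared band alongside red):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel.
    nir, red: arrays of near-infrared and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Mark pixels whose NDVI falls below an assumed health threshold as sick
THRESHOLD = 0.3
nir = np.array([[0.8, 0.7], [0.2, 0.6]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
sick_mask = ndvi(nir, red) < THRESHOLD
```

Combined with per-frame GPS metadata from the aerial feed, a mask like this is what would be aggregated into the map of affected regions.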
We treat the recognition of different diseases as a classification task and train a CNN on a collected dataset. We will train a separate CNN for each plant type, each able to recognize the diseases relevant to that species. Due to the limited quantity of training data and the nature of this task, we focus on transfer learning: the CNN is first given knowledge of general images, and plant-disease-specific data is used only to learn the highest-level features. In addition, we plan to use active learning to identify which images most need labelling, so that the most useful data can be collected instead of blanket data collection. Inference with this trained neural network will run on the FPGA, as real-time performance is expected for this computationally heavy task.
Our system will be a single-board-computer-based device that uses the FPGA for complex computations. Images will be processed on the device itself, mostly on the FPGA. Because CNN inference runs on the FPGA, it is possible in real time. We are testing the Inception, ResNet, and AlexNet CNN architectures and will train the neural network on a high-performance device (a one-time task). The initial version will be built in Python with TensorFlow, OpenCV, and NumPy. The trained network architecture will then be rebuilt at a low level, and the trained weights will be exported and used directly.
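To illustrate what "rebuilding the network at a low level with borrowed weights" involves, here is a minimal NumPy sketch of one convolution layer followed by ReLU; the kernel and bias are placeholders standing in for values that would be exported from the trained TensorFlow model:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution (cross-correlation, as in most frameworks)
    of a single-channel image x with one filter w and bias b."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Placeholder weights for illustration; real ones come from training
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])   # hypothetical 2x2 filter
feature_map = relu(conv2d(image, kernel, b=0.5))
```

The FPGA implementation computes the same arithmetic, but unrolls the window multiplications into parallel multiply-accumulate hardware instead of nested loops.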
Our key beneficiaries include small-scale home gardeners (people without access to specialized knowledge regarding plant diseases), green-house based farmers, and plant disease research groups.
Possible future extensions of this work include detecting diseases (anomaly detection) from general aerial images. Given the growing ubiquity of drone-based agriculture for large fields, this will prove invaluable in the near future.
(Sri Lanka Institute of Information Technology )
📅Jul 06, 2018
In the synthetic fabric printing industry, fabric shrinks widthwise due to the bleaching and dyeing process. To counteract this, the fabric is put through a process called 'stentering', which adjusts the GSM (grams per square meter) of a fabric by stretching it under high temperature. The required temperature and stretching forces are pre-determined and set up on the machine. However, at the end of the process, the GSM of the fabric must be checked manually by cutting a standard size of fabric and weighing it. This must be done on a small batch of the fabric, and the settings must be fine-tuned for each lot.
This project aims to utilize the high-speed processing capabilities of FPGAs to create a system that can be taught to determine the GSM of a fast-moving fabric by applying image processing to magnified still images of the fabric.
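As a rough illustration of how magnified fabric images could relate to GSM, one plausible intermediate measure (an assumption here, not the project's stated method) is thread density, which can be estimated from the periodicity of an intensity profile taken across the fabric via its FFT peak:

```python
import numpy as np

def thread_frequency(profile):
    """Estimate the dominant spatial frequency (threads per pixel) of a
    1-D intensity profile across the fabric, from its FFT peak."""
    profile = profile - profile.mean()      # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    peak = np.argmax(spectrum[1:]) + 1      # skip the zero-frequency bin
    return peak / len(profile)

# Synthetic profile: 8 thread periods across 256 pixels
x = np.arange(256)
profile = np.cos(2 * np.pi * 8 * x / 256)
freq = thread_frequency(profile)            # 8/256 threads per pixel
```

On an FPGA, the FFT would map naturally onto a pipelined FFT core, which is what makes this kind of analysis feasible on a fast-moving fabric.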
📅Jul 07, 2018
The Embedded Neural Coprocessor is a next-generation embedded processor with the capability to execute machine-learning functions efficiently. Our coprocessor natively supports small-scale convolutional neural network (CNN) computation using re-configurable layers implemented in the programmable logic of the Cyclone V. First, the user designs the neural network using the provided API, trains it offline (on servers), and feeds the model to our coprocessor. The coprocessor then executes the feed-forward computation and generates output for the target application, e.g., real-time object recognition, face recognition, or voice command recognition. We have separate dedicated hardware logic to emulate each type of computationally intensive layer, such as convolution/Fire layers and maxpool layers. For example, if a network has a convolution layer, dedicated hardware performs that layer's function. Layer parameters (dimensions, inputs, etc.) are dynamically configurable as required. Similarly, we have a configurable maxpool hardware module. We support the most commonly used layer operations; if the user requires a different operation, it is supported by performing the computation on the ARM processor. Layers are connected through memory: each layer's output is stored in memory and read by the next layer.
Our coprocessor can achieve great speed compared to a typical sequential embedded processor, since we use hardware-level parallelization. It can be used in many real-time applications, and it will be cost-effective compared to a GPU or other dedicated hardware acceleration. We use separate devices for training (on servers) and for running the application (the coprocessor), because training is a very time-consuming process that needs heavy computing power, while inference can be done remotely. Ultimately, we are able to run a small-scale neural network on an embedded device at greater speed.
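As a software reference for what one of the dedicated layer modules computes, here is a sketch of the maxpool operation; the hardware module evaluates the windows in parallel and writes the result to memory for the next layer to read:

```python
import numpy as np

def maxpool2d(x, k=2, stride=2):
    """k x k max-pooling, the operation the Maxpool hardware module
    emulates: each output pixel is the maximum of one input window."""
    h, w = x.shape
    oh, ow = (h - k) // stride + 1, (w - k) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride:i * stride + k,
                          j * stride:j * stride + k].max()
    return out

layer_in = np.array([[1, 3, 2, 0],
                     [4, 2, 1, 5],
                     [0, 1, 8, 6],
                     [2, 3, 7, 4]], dtype=float)
layer_out = maxpool2d(layer_in)   # intermediate result stored in memory
```

Because each window's maximum is independent of the others, all output pixels can be computed concurrently in programmable logic, which is the source of the speedup over a sequential processor.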
📁Other: Industrial Automation
(University of Moratuwa)
📅Jun 30, 2018
Productivity and efficiency are key factors in the clothing industry. Despite improved sewing-machine techniques, sewing machines are not yet fully automated. Using FPGA technology, we propose to improve the capabilities of the automated sewing machine through fast image processing and the sequencing and control of actuators.
In the clothing industry, most sewing machines are not automated and are operated by an individual, for reasons such as high cost, unreliability, low speed, and a lack of cloth-handling capabilities. By redesigning the sewing machine on an FPGA platform, these drawbacks can be avoided. In the primary design, a simple sewing machine is to be implemented using machine vision, process building, and actuator control. These processes can be carried out smoothly and efficiently on the FPGA platform.
(Indian Institute of Space Science and Technology)
📅Apr 30, 2018
We propose a novel VLSI architecture for performing obstacle avoidance using a Convolutional Neural Network (CNN) in real time. CNN implementations on CPUs/GPGPUs involve long computation times while consuming significant power; by implementing the CNN on an FPGA, we can reduce both computation time and power dissipation considerably. Deep-learning techniques are commonly used in search engines, handwriting recognition, speech recognition, object classification, and more. Autonomous vehicles are set to hit the roads soon, and when and where these cars should turn to avoid obstacles can be determined through proper training of neural networks. In this work, CNN-based optical flow calculates the distance, direction, and apparent velocity at which an object approaches the vehicle, and this information is used for obstacle avoidance.
The stream of images generated by an onboard camera will be fed to the CNN in real time. Grayscale images are used for computational feasibility. The CNN implementation has multiple convolution layers, pooling layers, and a fully connected layer. The CNN convolves individual features across the whole image, which is better than conventional machine learning. In the filtering phase, we convolve features across the image stream, producing a stack of filtered images. This is followed by pooling to shrink the image size, and a ReLU nonlinearity is applied after each layer. A fully connected layer then converts the list of features into a list of votes.
The hardware efficiency of the system can be verified by implementing it on an SoC. The proposed system exploits the parallel nature of the FPGA, so most of the work is performed on the FPGA, and the CPU acts as an intermediary between the FPGA and the other input/output interfaces. The CNN is trained offline and implemented on the FPGA, while back-propagation is implemented on a CPU. To reduce bandwidth, we use 8-bit images and 4-bit weights so that we can pack more neurons into the FPGA in parallel. A large dataset is needed to train the CNN; due to the unavailability of large training datasets, we will use synthetic datasets.
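A minimal sketch of the kind of 4-bit weight quantization described above. Uniform symmetric quantization is one common choice; the exact scheme used in the project is not specified, so the scale factor and rounding here are assumptions:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniform symmetric quantization of float weights to signed n-bit
    integers, plus a scale factor to recover approximate values."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit signed
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

weights = np.array([0.50, -0.25, 0.10, -0.50])   # hypothetical weights
q, scale = quantize_weights(weights)
dequant = q * scale                              # approximate originals
```

Each quantized weight fits in 4 bits, so two weights pack into one byte of FPGA block RAM, and the multiply-accumulate units shrink accordingly, which is what lets more neurons run in parallel.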
📁High Performance Computing
(Bharati Vidyapeeth College Of Engineering)
📅Jul 01, 2018
In the modern world, the problem of loneliness in old age is common and critical for society in every respect. According to demographic research on 'Living alone: One-person households', 30% of the total population lives alone, and this share has been increasing over time: it was 19% in 2000, and older women in particular are increasingly living alone. There can be many reasons for someone to live alone, and it is surely a difficult situation, one that can push a person into a state of loneliness. This may cause problems such as a lack of companionship, an unsafe environment, an inability to complete errands, and difficulty remembering medication, personal details, and so on.
The proposed system will communicate with the user and, virtually, with his or her house, taking in real-time data and status, identifying the user's needs and the state of their health, and taking the necessary actions on the user's behalf.
📁Other: Combination of IoT, Embedded Systems, and Man-to-Machine Communication
(Gujarat Technological University)
📅Apr 28, 2018
Using sixth sense technology, an ordinary person can enhance their communication with the outer world. Our idea is to create a device that helps those unfortunate people who are unable to speak, by providing them an easy way to communicate with the outer world through hand gestures with the help of sixth sense technology. Thanks to its affordable price and the small number of components on the user side, this device will provide an efficient means of communication. Building on the existing advantages of sixth sense technology, such as mobile access and fast data exchange, we will use the FPGA board as a memory device, as an image processor, and as a medium between the user and a server. The server holds all the data about gestures and their corresponding synthesized speech, and communicates with the FPGA board in real time using IoT technology. And if one has no internet connection, there is no need to worry: the FPGA board has storage that also holds all the speech and gesture data. Ultimately, this device will work efficiently in both modes, online and offline!
📁Internet of Things
👤Dr. Bijoy Kumar Upadhyaya
(Tripura Institute of technology)
📅Jan 30, 2018
The project focuses on the security of a citizen, whether inside or outside the house. It is enhanced with a Bluetooth camera, GPS, and an audio recording module, making it well suited to ensuring the security of an individual by alerting the concerned authority in a smart way.
📁High Performance Computing
📅May 02, 2018
Adders are among the most important parts of a processor: they sit in the ALU and take part in all arithmetic and logical operations. The faster the adders, the faster the results, and since we are always striving for faster, higher-end results with today's technology, we need the fastest adders to make faster computations.
In this project, we use a Brent-Kung adder and a binary excess converter (BEC) in a carry-select adder, instead of the ripple-carry adders used in the existing method. By replacing the ripple-carry adders with a Brent-Kung adder and a binary excess converter, we obtain faster results: carry propagation and generation time is reduced compared with the ripple-carry adder, because the Brent-Kung adder is a tree-structured parallel-prefix adder, while the binary excess converter adds 1 to the result, speeding up the computation. Thus, with a Brent-Kung adder and a binary excess converter, we obtain a faster computational environment. In the future, we intend to use even faster adder approaches, involving high-end addition techniques such as Vedic mathematics or other fast computational methods. Finally, our method not only makes computation faster but also consumes less power, as it uses fewer logic elements than the existing design, and it can be implemented on an FPGA.
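For reference, here is a bit-level software model of an n-bit Brent-Kung adder; the real design is in hardware, and this sketch only verifies the parallel-prefix carry logic (generate/propagate signals combined in an up-sweep and a down-sweep tree):

```python
def brent_kung_add(a, b, n=8):
    """Bit-level model of an n-bit Brent-Kung adder (no carry-in).
    Returns (sum mod 2**n, carry_out)."""
    A = [(a >> i) & 1 for i in range(n)]
    B = [(b >> i) & 1 for i in range(n)]
    g = [ai & bi for ai, bi in zip(A, B)]   # generate bits
    p = [ai ^ bi for ai, bi in zip(A, B)]   # propagate bits
    G, P = g[:], p[:]

    def combine(hi, lo):                    # prefix operator (the "dot")
        G[hi] = G[hi] | (P[hi] & G[lo])
        P[hi] = P[hi] & P[lo]

    d = 1
    while d < n:                            # up-sweep (reduction) phase
        for i in range(2 * d - 1, n, 2 * d):
            combine(i, i - d)
        d *= 2
    d //= 2
    while d >= 1:                           # down-sweep (fill-in) phase
        for i in range(3 * d - 1, n, 2 * d):
            combine(i, i - d)
        d //= 2

    # After the scan, G[i] is the group-generate over bits 0..i,
    # i.e. the carry into bit i+1 when carry-in is zero.
    carry = [0] * (n + 1)
    for i in range(n):
        carry[i + 1] = G[i]
    s = 0
    for i in range(n):
        s |= (p[i] ^ carry[i]) << i
    return s, carry[n]

s, cout = brent_kung_add(200, 100)          # 300 -> sum 44, carry-out 1
```

The tree needs only O(log n) logic levels on the carry path, versus O(n) for a ripple-carry chain, which is where the speedup claimed above comes from.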
📁Other: Orientation Estimation
(Universitas Gadjah Mada)
📅May 10, 2018
We propose an Attitude and Heading Reference System (AHRS) coprocessor for tri-axis Magnetic, Angular Rate, and Gravity (MARG) and Inertial Measurement Unit (IMU) sensors, using Madgwick's AHRS sensor-fusion algorithm. By relieving the primary processor of processor-intensive tasks, the coprocessor accelerates overall system performance.
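A minimal software sketch of one Madgwick IMU (gyroscope + accelerometer) update step, the core of what the coprocessor accelerates; the gain beta and sample time dt are assumed tuning values, and the full MARG variant adds magnetometer terms:

```python
import numpy as np

def madgwick_imu_update(q, gyro, accel, beta=0.1, dt=0.01):
    """One step of Madgwick's IMU fusion filter.
    q: unit quaternion [w, x, y, z]; gyro in rad/s; accel in any units
    (it is normalized internally)."""
    q0, q1, q2, q3 = q
    gx, gy, gz = gyro

    # Quaternion rate from the gyroscope: 0.5 * q (x) (0, gx, gy, gz)
    qdot = 0.5 * np.array([
        -q1 * gx - q2 * gy - q3 * gz,
         q0 * gx + q2 * gz - q3 * gy,
         q0 * gy - q1 * gz + q3 * gx,
         q0 * gz + q1 * gy - q2 * gx])

    a = np.asarray(accel, float)
    if np.linalg.norm(a) > 0:
        ax, ay, az = a / np.linalg.norm(a)
        # Gradient step aligning the estimated gravity direction with accel
        f = np.array([2 * (q1 * q3 - q0 * q2) - ax,
                      2 * (q0 * q1 + q2 * q3) - ay,
                      2 * (0.5 - q1 * q1 - q2 * q2) - az])
        J = np.array([[-2 * q2,  2 * q3, -2 * q0, 2 * q1],
                      [ 2 * q1,  2 * q0,  2 * q3, 2 * q2],
                      [      0, -4 * q1, -4 * q2,      0]])
        step = J.T @ f
        norm = np.linalg.norm(step)
        if norm > 0:
            qdot -= beta * step / norm

    q = q + qdot * dt
    return q / np.linalg.norm(q)

# With the sensor at rest and level, the estimate stays at identity
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = madgwick_imu_update(q, gyro=(0.0, 0.0, 0.0), accel=(0.0, 0.0, 1.0))
```

Each update is a fixed sequence of multiply-accumulate operations with no data-dependent branching (beyond the norm guards), which is what makes the algorithm a good fit for a pipelined FPGA coprocessor.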