👤Miguel Ángel Castillo-Martínez
(National Polytechnic Institute of Mexico)
📅Sep 30, 2019
Skin cancer that is not detected at an early stage can cause metastasis, spreading the cancer throughout the body. Based on this fact, this proposal combines image processing and machine learning so that a computer can assist in detecting cancer in acquired images according to existing patterns.
📅Sep 30, 2019
This project is dedicated to the implementation of an FPGA-based acoustic keyword spotting (KWS) system for the Portuguese language. The system performs real-time processing, using MFCC extraction as pre-processing and a convolutional neural network (CNN) as the classifier.
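As a rough software reference for the MFCC front-end described above, a minimal extractor can be written in NumPy. The sample rate, frame sizes, and filter counts here are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hamming window, take the power spectrum.
    n_fft = frame_len
    frames = [signal[s:s + frame_len] * np.hamming(frame_len)
              for s in range(0, len(signal) - frame_len + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, then log and DCT-II to decorrelate.
    energies = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2 * n_filters)))
    return energies @ dct.T
```

On the FPGA, each stage (windowing, FFT, filterbank, DCT) would map to a pipelined hardware block; this sketch only documents the arithmetic each block must reproduce.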
(Universidade Federal de Pernambuco (UFPE))
📅Oct 08, 2019
iOwlT is a sound geolocation system inspired by the way a nocturnal owl hunts its prey. Using signal multilateration, a technique widely used in telecommunications, and the relative delay of a signal detected by distinct sensors well distributed in space, it can be shown with algebra and physics that in an N-dimensional space only N+1 detectors are needed to accurately determine the origin of an event. Since real space has 3 dimensions, only 4 sound sensors are required to locate an event.
This project combines the parallel processing power of the FPGA on the DE10-Nano board, used for the simultaneous treatment of the audio signals (mainly through adaptive digital filters that adapt to the incoming sound in order to optimize processing), with machine learning algorithms trained to recognize the desired event. The aim is an embedded system that can be mounted on a vehicle to detect the location of gunshot events and, once an event is identified, display it in a mobile application.
The system can be used in urban areas to detect the sources of gunshots, or in protected areas to identify illegal hunters. The underlying problem is not restricted to one kind of source location or one specific sound: the system can be adapted to recognize other audible patterns.
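The N+1 detector claim above can be illustrated numerically: given the time differences of arrival (TDOA) between sensor 0 and the other three sensors, a Gauss-Newton solver recovers the 3-D source position. The sensor layout and solver below are assumptions for illustration, not the project's actual design:

```python
import numpy as np

# Hypothetical sensor layout in meters; the real array geometry is a design choice.
SENSORS = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0],
                    [0.0, 0.0, 10.0]])
C = 343.0  # speed of sound in air, m/s

def locate(tdoas, sensors=SENSORS, iters=20):
    """Gauss-Newton solve for the source position from three TDOAs
    (arrival time at sensors 1..3 minus arrival time at sensor 0)."""
    x = sensors.mean(axis=0)  # initial guess: array centroid
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        # Residuals: predicted range differences minus measured ones.
        r = (d[1:] - d[0]) - C * np.asarray(tdoas)
        # Jacobian of each range difference w.r.t. the source position.
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        x = x + np.linalg.lstsq(J, -r, rcond=None)[0]
    return x
```

Three TDOAs give three nonlinear equations in the three unknown coordinates, which is exactly why four sensors suffice in 3-D; the FPGA's role in the real system is computing those delays from the raw audio in parallel.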
📁Other: Controls and Dashboards
👤Luis Fernando Aljure Munoz
📅Oct 03, 2019
This project is based on 'Inverse Thermodynamics Technology', taking advantage of Hilsch tube efficiency to design a levitating vehicle control. According to this technology, a few small pressure vessels (air tanks) are enough to produce the high rotational kinetic energy that makes the vehicle levitate and move forward.
The use of a high-speed FPGA is imperative to keep track of all physical variables involved in the process and to synchronize and control the vehicle's mechanisms. A dashboard display is implemented to gather all sensor variables and control information.
(University of Houston-Downtown)
📅Jun 28, 2019
Existing physiological reading systems, e.g., those used in patient monitoring, are ineffective for daily practice. This project aims to simulate an environment for daily physiological tracking using an FPGA DE10 board with physiological sensors. On a single-board computer, we will build an affordable, reusable, expandable, and wireless machine that can monitor a user's temperature, ECG heart activity, and EEG brain waves. With the integrated sensors and software, the device should be capable of live-streaming and exporting the collected data to a local web server for rendering.
For the collection of EEG brain waves signals, a hand-made headset will be connected to the FPGA board via a Bluetooth module. Data will be sent and collected from the EEG headset using a Python Library.
The hosting microcomputer will be configured to serve the required HTML, PHP, and Python files. Overall, the data-rendering software simulates a professional medical interface and is available on both mobile devices and in an internet browser. The user should be able to connect the device wirelessly to a remote server over the internet, attach the reusable sensors to his or her body, and download the gathered information.
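As an illustration of how one snapshot of readings might be packaged for the web interface, a minimal sketch follows; the field names and units are hypothetical, not the project's actual schema:

```python
import json
import time

def package_readings(temperature_c, ecg_mv, eeg_uv, timestamp=None):
    """Bundle one snapshot of sensor readings into a JSON payload
    the web interface could poll (field names are illustrative)."""
    return json.dumps({
        "timestamp": timestamp if timestamp is not None else time.time(),
        "temperature_c": round(temperature_c, 2),
        "ecg_mv": list(ecg_mv),   # short window of ECG samples, millivolts
        "eeg_uv": list(eeg_uv),   # short window of EEG samples, microvolts
    })
```

A PHP or Python endpoint on the microcomputer would return payloads like this to the browser, which renders them into charts.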
This project aims to challenge the affordability and accessibility of existing healthcare-oriented monitoring equipment. From this point on, a system that can collect data and perform machine learning tasks on that data is desirable. Ideally, with the computational power provided by the remote server, such a system will be able to diagnose the user's mental state from the knowledge gathered through machine learning and the organized data collection and processing.
A virtual reality system on a mobile phone will be connected to the DE10 board to render mind-intervention activities based on the diagnosis of the user's mental state. Continual diagnosis and intervention will be studied to find the best routine for certain types of mental health problems.
The FPGA DE10 board will exert its power in this project, especially in the stages of machine learning for brain state recognition and rendering of virtual reality scenes. Those machine learning and virtual reality tasks will be handled using packages that run on full-fledged operating systems supported by FPGA DE10.
📁High Performance Computing
(Schneider Software Systems)
📅Jun 30, 2019
FPGA Project Proposal
By Al Schneider
This entry to the contest presents a new computer concept. The purpose of this entry is to show the concept is feasible.
The architect of this concept began a software career in the seventies, working on Univac multi-processor systems. The power of such systems was apparent then, but so were their shortcomings. Years after leaving Univac, he discovered solutions to these shortcomings. The cost of implementing them was high and prevented a serious attempt at demonstrating them. With the advent of FPGA technology, however, the solutions can now be implemented in the real world at a reasonable cost.
In essence, this system enables large-scale multiprocessing with all processors accessing a common memory without conflict. A system with 256 processors is planned to demonstrate the concept.
The method takes an approach somewhat opposite to traditional computer technology. Traditionally, computers move data to a hardware CPU. This system instead uses virtual processor units (VPUs) rather than a central processing unit: the VPUs move within the system, traveling to the data.
Experiments performed on a MAX 10 FPGA indicate that the switching time for a LUT gate is 1.6 nanoseconds. Based on that, the aggregate throughput of the proposed system would be 200 mega-instructions per second.
A simple C-like language will be provided to program the many processors.
The proposed entry would use a Terasic OpenVINO Starter Kit to analyze the frequencies of an audio input into 256 sub-bands and translate the sound into moving images on a display screen. This is to be implemented as a stethoscope that presents a visual representation of heart sounds alongside the audio. An important point is that the device will do this in real time, which requires analyzing the audio input, rendering the image, and displaying it within 33 milliseconds to produce 30 frames per second.
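The sub-band analysis step can be sketched as follows: window one audio frame, compute its FFT, and average the magnitude over 256 equal frequency bands to produce one column of the display. The frame length and sample rate below are assumptions for illustration:

```python
import numpy as np

def subband_magnitudes(frame, n_bands=256):
    """Split one audio frame into n_bands frequency bands and return
    the average magnitude in each band (one column of the display)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    edges = np.linspace(0, len(spectrum), n_bands + 1).astype(int)
    return np.array([spectrum[a:b].mean() if b > a else 0.0
                     for a, b in zip(edges[:-1], edges[1:])])
```

A ~33 ms frame at 48 kHz is about 1600 samples, so a 2048-point FFT per frame fits the 30 fps budget comfortably on FPGA hardware; the sketch only documents the arithmetic.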
This link visualizing music illustrates how the display might appear:
FPGA Virtues Realized in this Entry
Demonstrates the use of FPGAs to develop concepts.
Demonstrates the value of having memory and logic on the same chip.
Demonstrates the ability to eliminate unneeded functionality.
Demonstrates the ability to add custom desired functionality.
Demonstrates performing many tasks simultaneously.
There may be a need to demonstrate the Cyclone V GX I/O capability and the PCIe interface.
📅Oct 15, 2019
Real-time streaming analytics and machine learning on an FPGA accelerator card, using the FPGA AI engine with OpenVINO/DLA. The approach runs through an open-source cloud stack for multi-spectrum, multi-RAT spectrum sharing between M2M and LTE-A in 5G networks. The streaming layer works within a secure framework that provides firmware updates for network device code bases, built on Coreboot and the EFI Development Kit (TianoCore) for rapidly evolving RTOS embedded systems, with detailed coverage of requirements and optimization of the Boot Setting File (BSF). The software accesses multiple connections through secure remote-access protocols and learns from the behavior of the network, then activates the most probable event scene based on usage patterns observed in a Java-based web interface. A multi-cloud hybrid platform uses a Python framework to automate data analytics, including edge-gateway decisions, and improves the connectivity, monitoring, and authentication of intelligent sensors for automatic adaptation to usage. To enhance learning and qualify components for specific tasks, gateways acting as distributed instruments carry a dedicated neural-network processing unit and an AI function API, tuning the FPGA module and SIM-card data within a modular data-center network topology (AS, peering, links) behind an edge firewall. A smart layer of rules and automation lets an administrator link data-center networks and gateway controls, adding node-to-gateway communications such as BACnet. Data slicing for the 5G evolution bridges the physical and virtual worlds, letting operators turn technical concepts into 5G transformations.
The system can carry out processes such as profiling data sources: algorithms comb through the different sources, including the data log, and select those that fall within a certain category, giving a clearer frame of the sources available; AI technology can then tag them automatically, eliminating considerable manual work. It also correlates different sources to detect anomalies in data traffic. Switch protocols are selected to explore link aggregation (802.3ad) or vPC-like approaches, together with stacking and VLANs (802.1Q); VLAN extension in L2 via VPLS/VPWS and OTV-like E-VPN and PBB-EVPN; and definitions for QoS, jumbo frames, and the SAN network. The framework's 5G security sensors receive automated firmware updates via FOTA. The software can connect industrial SCADA or DCS systems directly to the cloud using industrial protocols such as NB-IoT, CAT-M1, MODBUS, OPC, ISA100 wireless, and PROFIBUS for connectivity, with CoAP/MQTT at the gateway to solve interoperability and M2M communication. Using the FPGA's internal flash memory, private files can be synchronized from an open cloud storage service to a safe, upgraded service. Differential correction parameters are delivered through a radio module when there is no reliable network coverage; this reduces the high cost of the DTLS handshake in the WSN and lowers latency compared to a standard DTLS use case, without requiring changes to the end hosts, in a multiple-input multiple-output (MIMO) network through the RTM module across many antennas.
📁Internet of Things
📅Oct 17, 2019
Azure IoT Edge enables developers to deploy containerized modules to internet-connected devices, maintaining a desired state of running services through cloud-configured deployments. This mechanism also offers the ability to securely update running modules at runtime on remote devices via changes to this configuration.
This IoT Edge module will allow the user to configure the FPGA portion of the Cyclone V SoC from Linux within an IoT Edge module, allowing for a robust deployment mechanism for shipping FPGA configurations to remote devices at scale.
This allows FPGA-enabled devices to be dynamically reprogrammed in the field, without physical access. We can now deliver hardware-configuration updates over the air, enabling a wide variety of new use cases: fixing issues without the need for a physical recall, updating configurations to improve performance or add new features, and bootstrapping devices with remotely delivered FPGA configurations on first run.
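On mainline Linux kernels, the reprogramming step inside such a module could look like the sketch below, using the kernel's FPGA manager sysfs interface. The exact paths and mechanism vary by board and kernel version (some Cyclone V setups use device-tree overlays instead), so treat this as an assumption-laden sketch rather than the module's actual implementation:

```python
import os
import shutil

def program_fpga(rbf_path, firmware_dir="/lib/firmware",
                 mgr="/sys/class/fpga_manager/fpga0"):
    """Load a bitstream via the Linux FPGA manager: the kernel looks the
    file up in the firmware directory, so copy it there first, then
    write its name to the manager's 'firmware' attribute."""
    name = os.path.basename(rbf_path)
    shutil.copy(rbf_path, os.path.join(firmware_dir, name))
    with open(os.path.join(mgr, "firmware"), "w") as f:
        f.write(name)
    with open(os.path.join(mgr, "state")) as f:
        return f.read().strip()  # the kernel reports e.g. "operating" on success
```

Inside the IoT Edge module, the container would need the relevant sysfs and firmware paths bind-mounted from the host for this to work.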
📁Other: High-speed Video Processing/Artificial Intelligence/IoT
(University of Illinois at Urbana-Champaign)
📅Oct 12, 2019
The purpose of our project is to build a smart home-security camera system. At a high level, the system focuses on particular objects in the room that the user wants to ensure are not handled by intruders. If an intruder enters the room and begins to handle an object, the user is notified. Aside from the notification, an HDMI display shows the camera feed with detected objects, and that feed is also transferred to the user's PC or phone over Wi-Fi.
We aim to build this system using the DE10-Nano development kit. We would run OpenCV on the ARM Cortex-A9 to interface with the camera and handle the frame differencing and Kalman filtering that identify and track moving objects. The Cyclone V SE FPGA fabric would run an AI classifier for the moving objects detected by the ARM chip. There would be a live video stream via HDMI to a monitor, a data dump via Ethernet, and a wireless status update to the cloud via the Arduino header.
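The frame-differencing step the ARM side would perform can be sketched in a few lines of NumPy (OpenCV provides equivalent primitives; the threshold here is an arbitrary illustrative value):

```python
import numpy as np

def detect_motion(prev, curr, threshold=25):
    """Frame differencing: threshold the absolute difference between two
    grayscale frames and return the bounding box (x0, y0, x1, y1) of the
    changed region, or None if nothing moved."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The resulting bounding box would seed the Kalman tracker and be cropped out for classification on the FPGA.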
(Universidad Santo Tomás)
📅Jun 28, 2019
Reinforcement learning (RL) allows systems to learn from their own experience through trial and error. In robotics, when dynamics are complex and environments are unknown, RL is a great way to obtain control and decision policies. This project aims to implement different RL methods on the FPGA inside an Intel SoC, in order to take advantage of the parallelism that programmable logic offers.
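Before committing an RL method to programmable logic, a small software reference model helps validate behavior. A sketch of one such method, tabular Q-learning, on a toy chain environment (the environment and hyperparameters are illustrative, not part of the proposal):

```python
import numpy as np

def q_learning_chain(n_states=6, episodes=500, alpha=0.5, gamma=0.9,
                     eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0, actions are
    move left (0) / move right (1), reward 1 only on reaching the last
    state. Epsilon-greedy exploration, standard TD update."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            s = s2
    return Q
```

The inner TD update is a multiply-accumulate per step, which is what would be parallelized across agents or state batches in the FPGA fabric.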
(San Jose State University )
📅Oct 09, 2019
Purpose: Deep learning has given us the ability to extract meaningful information from image analysis. Real-time extraction of text from images is useful for visually impaired people to read instructions on sign boards, get information about items while shopping, and various other applications. In this project we aim to interface a camera with the OpenVINO Starter Kit and exploit its high computing performance to extract text from images in real time using convolutional neural networks.
Applications: This project has applications in autonomous vehicle navigation and in guiding visually impaired people. It can be further extended to extract text in English and render it in any other language.
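A common preprocessing step before text recognition is binarizing the image to separate text from background. One classic choice, Otsu's threshold, sketched in NumPy; the actual pipeline would run CNN inference through OpenVINO, so this is only an assumed preprocessing stage:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of the grayscale histogram (separating text from background)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(256))  # class-0 mean times omega
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form the binary mask handed to the text detector.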
📁Other: Biological Neural Network Simulation
(Universidad Tecnológica Nacional - Facultad Regional Córdoba)
📅Oct 09, 2019
Caenorhabditis elegans (C. Elegans) is one of the simplest organisms with a nervous system. In the hermaphrodite, this system comprises 302 neurons, the pattern of which has been comprehensively mapped, in what is known as a connectome.
Currently, most of the work on simulating this neural network is done in software, and the little work done in hardware emulation has focused on per-neuron simulation, implementing each neuron in a separate FPGA IC. That approach is fine for studying this particular nervous system, with its very low neuron count, but to study bigger nervous systems comprising millions (mice) or billions (humans) of neurons, one neuron per FPGA is not feasible: we need to integrate many neurons per device and optimize the neuron implementation to use as few resources as possible.
Why? We already have the full connectome of C. elegans (meaning all neurons and their corresponding synapses have been mapped), completed in 1986.
We already have active projects like the CIC's Mouse Connectome Project or the Human Connectome Project, which aim to fully map the Mouse and Human brains, respectively.
Once those connectomes have been mapped, a suite like the one proposed here will be very useful for simulating the networks with low latency and real-time results, bringing huge benefits to neuroscience and psychology.
The aim of this project is twofold:
First, to implement the C. Elegans neural network efficiently, utilizing the lowest amount of resources possible, and write the corresponding software (or, if available, integrate it with an open source solution) to visualize the results in a computer, with the data transmitted via a high speed interface like PCIe.
Second, to lay the groundwork for a complete suite for simulating biological neural networks in hardware and visualizing the results (be it neuron status or, in the case of a simple organism like C. elegans, a view of the actual organism moving, driven not by a software simulation but by the hardware implementation).
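One candidate for a resource-efficient neuron implementation is the leaky integrate-and-fire (LIF) model, whose per-step update is a single multiply-accumulate and compare, mapping cheaply onto FPGA DSP blocks. A software sketch (the parameters are illustrative; the project may well choose a richer neuron model):

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 leak=0.95, weight=0.1):
    """Leaky integrate-and-fire: each step the membrane potential decays
    toward rest, accumulates weighted input, and emits a spike (with a
    reset) when it crosses the threshold. Returns the spike train."""
    v = v_rest
    spikes = []
    for i in input_current:
        v = v_rest + leak * (v - v_rest) + weight * i
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

Because the update uses only a constant decay factor and a weighted sum, it converts naturally to fixed-point arithmetic, which is what makes packing hundreds of neurons into one FPGA plausible.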