The registration deadline is Jun 30, 2019.
Teams can still edit their proposals during the judging period.

📁Machine Learning
👤連鈞 胡 (成功大學)
📅Jun 30, 2019
Convolutional neural networks (CNNs) are known, among neural networks for image processing, for their high accuracy and their enormous computational load. This work designs a hardware accelerator for CNNs. We base our accelerator design on an investigation of the parallelism in the convolution operation, design-space exploration, and memory-reuse behavior. We use VGG-16 as the experimental architecture and ImageNet LSVRC-2014 as its dataset. We analyze the pre-trained weights and biases, extract the parameter values, and, at a fixed 16-bit width, load them into the improved hardware accelerator to perform the convolutions. We apply three techniques, loop parallelization, multiplier pipelining, and replacing ordinary multipliers with fixed-width multipliers, to speed up execution and reduce hardware area. Because CNN architectures differ, and the same accelerator hardware should be usable across different network architectures, we propose a second design that can be applied to different CNN architectures and reduces the number of convolution-layer computations. Based on the behavior of the convolution kernels and a densely packed data layout, we distribute the data to the acceleration units on the hardware.
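To make the loop-level parallelism concrete, here is a minimal C sketch of the direct convolution loop nest such an accelerator parallelizes. All names and dimensions (`conv`, `IN_CH`, `K`, etc.) are illustrative assumptions, not the thesis's actual design; in hardware, the inner kernel loops would typically be unrolled into an array of (fixed-width) multipliers and the outer loops pipelined.

```c
/* Hypothetical dimensions for one tiny convolution layer
   (the thesis targets VGG-16; these numbers are only illustrative). */
#define IN_CH  2                  /* input channels                     */
#define OUT_CH 2                  /* output channels                    */
#define K      3                  /* kernel height/width                */
#define SIZE   4                  /* input feature-map height/width     */
#define OUT    (SIZE - K + 1)     /* output size without padding        */

/* Direct convolution: the nested loops whose parallelism the
   accelerator exploits. The two innermost kernel loops are the
   natural candidates for full unrolling into a multiplier array. */
void conv(int in[IN_CH][SIZE][SIZE],
          int w[OUT_CH][IN_CH][K][K],
          int bias[OUT_CH],
          int out[OUT_CH][OUT][OUT])
{
    for (int oc = 0; oc < OUT_CH; oc++)
        for (int y = 0; y < OUT; y++)
            for (int x = 0; x < OUT; x++) {
                int acc = bias[oc];
                for (int ic = 0; ic < IN_CH; ic++)
                    for (int ky = 0; ky < K; ky++)      /* unroll in HW */
                        for (int kx = 0; kx < K; kx++)  /* unroll in HW */
                            acc += in[ic][y + ky][x + kx]
                                 * w[oc][ic][ky][kx];
                out[oc][y][x] = acc;
            }
}
```

Each output pixel is an independent multiply-accumulate sum, which is why the loops can be reordered, tiled, or unrolled freely to match the number of multipliers available on chip.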
📁Machine Learning
👤建璋 陳 (成功大學)
📅Jun 30, 2019
Support vector machines (SVMs) are widely used in artificial intelligence (AI) applications. Because AI applications have high computational complexity and real-time requirements, it is critical to speed up SVM operation efficiently. Most of the SVM computation lies in the kernel functions, which dominate the overall SVM speed and need to be implemented in dedicated hardware.
In this thesis, we designed a new SVM hardware accelerator that efficiently speeds up the calculation of the kernel functions by changing the form of the decision function and by tiling its loops. We also designed a new, efficient fixed-width multiplier with very low error for use in this SVM accelerator.
As a result, our SVM accelerator achieves significantly higher detection speed than other designs, and the fixed-width multiplier achieves lower error than other approximate multipliers.
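As a rough illustration of the loop tiling the abstract mentions, here is a C sketch of an SVM decision function evaluated tile by tile over the support vectors, assuming a linear kernel. The names and sizes (`svm_decision`, `TILE`, `N_SV`) are hypothetical, not taken from the thesis, and the real accelerator would operate on fixed-point data through its fixed-width multipliers rather than on `double`s.

```c
#define N_SV  8   /* number of support vectors (illustrative)      */
#define DIM   4   /* feature dimension (illustrative)              */
#define TILE  4   /* support vectors processed per hardware pass   */

/* Tiled evaluation of f(x) = sum_i coef_i * <sv_i, x> + b,
   where coef_i = alpha_i * y_i. Tiling over support vectors bounds
   the working set that the accelerator's local buffers must hold
   during any one pass. */
double svm_decision(double sv[N_SV][DIM],
                    double coef[N_SV],   /* alpha_i * y_i */
                    double b,
                    double x[DIM])
{
    double f = b;
    for (int t = 0; t < N_SV; t += TILE) {           /* one tile per pass  */
        for (int i = t; i < t + TILE && i < N_SV; i++) {
            double dot = 0.0;
            for (int d = 0; d < DIM; d++)            /* kernel inner product */
                dot += sv[i][d] * x[d];
            f += coef[i] * dot;
        }
    }
    return f;   /* predicted class = sign(f) */
}
```

Because the partial sums from each tile are simply accumulated, the tile size can be chosen to match the accelerator's multiplier count and on-chip buffer capacity without changing the result.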
