Student Projects

Design of a TFP16 Vector Processor for RISC-V

The increasing demand for computation at the edge and tight power budgets push designers to migrate double- and single-precision calculations to formats of reduced precision and dynamic range in applications that can tolerate some inaccuracy.

In this context, variable-precision floating-point formats with storage limited to 16 bits, such as TFP16, are well suited to signal processing, machine learning and other embedded-system applications.
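
As an illustration only (not the actual TFP16 encoding, whose field layout is defined in the project literature), the Python sketch below stores a binary32 value in 16 bits by keeping the sign, a fixed number of exponent bits and a truncated fraction; the field widths are assumptions.

    import struct

    def to_16bit(x, exp_bits=8, frac_bits=7):
        """Pack a float into 16 bits: 1 sign bit, exp_bits exponent bits,
        frac_bits fraction bits (exp_bits + frac_bits == 15). With the
        default widths this is a BFloat16-style truncation of binary32;
        re-biasing/saturation for exp_bits != 8 is omitted."""
        assert exp_bits + frac_bits == 15
        bits = struct.unpack('>I', struct.pack('>f', x))[0]   # binary32 bit pattern
        sign = bits >> 31
        exp  = (bits >> 23) & ((1 << exp_bits) - 1)           # biased exponent
        frac = (bits >> (23 - frac_bits)) & ((1 << frac_bits) - 1)
        return (sign << 15) | (exp << frac_bits) | frac

    def from_16bit(w, exp_bits=8, frac_bits=7):
        """Expand the 16-bit word back to a binary32 value."""
        sign = (w >> 15) & 0x1
        exp  = (w >> frac_bits) & ((1 << exp_bits) - 1)
        frac = w & ((1 << frac_bits) - 1)
        bits = (sign << 31) | (exp << 23) | (frac << (23 - frac_bits))
        return struct.unpack('>f', struct.pack('>I', bits))[0]

    print(from_16bit(to_16bit(3.14159)))   # 3.140625 with 7 fraction bits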

The focus of this project is the design of a TFP16 vector processor, based on an extension of the RISC-V instruction set, that can sustain a throughput of one result per clock cycle.

Platform: ASIC.

Design and Characterization of a Smart Sensor for Smart City Ambient Noise Monitoring

The idea is to deploy ambient noise monitors, equipped with acoustic smart sensors, that measure the noise level and generate detailed, dynamically updated ambient noise maps in real time.

The goal is to record, process and classify the ambient noise using machine-learning algorithms, and to identify hardware architectures that accelerate parts of these algorithms.

The project consists of the design of a noise monitor (phonometer), implemented on an Arduino-based board, and the profiling of its performance in terms of measurement accuracy and energy consumption. It also includes the software infrastructure to capture GPS data and transmit the measurements over the cellular network.
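
As a rough sketch of the core measurement (the calibration constants and test signal below are assumptions, and A-weighting and time weighting are omitted), the sound level of a block of microphone samples can be derived from its RMS value:

    import math

    def spl_db(samples, calibration_db=94.0, ref_rms=1.0):
        """Sound pressure level (dB) of one block of samples, assuming a
        reference tone of level calibration_db produces RMS ref_rms on
        this particular microphone (assumed calibration values)."""
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return calibration_db + 20.0 * math.log10(rms / ref_rms)

    # Example: a 1 kHz tone at half the calibration amplitude reads ~85 dB.
    fs = 48000
    block = [0.5 * math.sin(2 * math.pi * 1000.0 * n / fs) for n in range(4800)]
    print(round(spl_db(block), 1))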

In collaboration with Univ. of Rome "Tor Vergata", Italy.

Platforms: SODAQ ARM-based Arduino boards, plus Actel (now Microsemi) FPGAs.

ASIC Implementation of Forward Error Correction (FEC) for High-speed Data Center Interconnects

Data centers are connected by high-speed optical links, and powerful error-correction codes are used for reliable transmission, bringing bit-error rates below 10^-15. For power-efficient implementations, ASICs are used. The project shall investigate FEC for Data Center Interconnects (DCI) at target speeds of 1 Tbit/s and beyond.

To achieve fast and efficient codes, concatenated coding is used, employing a soft-decision inner code towards the channel and a hard-decision outer code. A possible choice for the inner code is a soft-decision (SD) Hamming code; possible choices for the outer code are Staircase or Product codes.
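
As a toy sketch of one building block (the project's inner code would be a soft-decision variant and typically a longer, extended Hamming code, so this only shows the hard-decision principle), a Hamming(7,4) encoder and single-error-correcting decoder can be written as:

    def hamming74_encode(d):
        """Encode 4 data bits d = [d1, d2, d3, d4] into a Hamming(7,4)
        codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4        # covers positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4        # covers positions 3, 6, 7
        p3 = d2 ^ d3 ^ d4        # covers positions 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Hard-decision decode: correct at most one flipped bit."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # check over positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
        err = s1 + 2 * s2 + 4 * s3       # 1-based position of the error
        if err:
            c[err - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    cw = hamming74_encode([1, 0, 1, 1])
    cw[5] ^= 1                           # inject a single bit error
    print(hamming74_decode(cw))          # [1, 0, 1, 1]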

In the project, one of the FEC components shall initially be chosen and evaluated at data rates of 400 Gbit/s to 1.6 Tbit/s. The project can then be extended to other components. The ASIC implementation shall be synthesized and evaluated in terms of area and power requirements.

In collaboration with Søren Forchhammer (DTU Photonics).

Platform: ASIC.

Approximate Operations and Operands for Machine Learning

Approximate computing is a promising approach to energy-efficient design of digital systems in many domains such as Machine Learning (ML). The use of specialized data formats in Deep Neural Networks (DNNs) could allow substantial improvements in processing time and power efficiency.

The focus is on applying variable-precision formats, such as TFP, to ML algorithms. These formats make it possible to set different precisions for different operations and to tune the precision of individual layers of the neural network to obtain higher power efficiency.
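
A minimal sketch of per-layer precision tuning is shown below; the layer names, bit widths and weights are arbitrary examples, and the fixed-point grid is a simpler model than true variable-precision floating point:

    def quantize(x, frac_bits):
        """Round a value to a grid with frac_bits fractional bits."""
        scale = 1 << frac_bits
        return round(x * scale) / scale

    # Assumed per-layer precision map: more bits where the network is
    # more sensitive to error, fewer bits where it is tolerant.
    layer_precision = {"conv1": 10, "conv2": 7, "fc": 5}

    weights = {"conv1": [0.123456, -0.654321],
               "conv2": [0.333333,  0.111111],
               "fc":    [0.987654, -0.246802]}

    quantized = {name: [quantize(w, layer_precision[name]) for w in ws]
                 for name, ws in weights.items()}
    print(quantized)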

After a software implementation of the Neural Network (NN) to determine which formats give the best accuracy, the focus shifts to the hardware implementation of the NN in the selected data format (e.g., TFP, BFloat16, ...).

Finally, the NN is evaluated in terms of energy efficiency, performance, complexity and accuracy.

An alternative project is the approximate implementation of specific operators used in NNs, such as the SoftMax function.
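
As a hedged sketch of what an approximate operator could look like (one well-known hardware-friendly trick, not necessarily the approximation to be studied in the project), exp can be replaced by a base-2 exponential where 2**f is approximated linearly:

    import math

    LOG2E = 1.4426950408889634   # log2(e)

    def approx_exp2(x):
        """Approximate 2**x: the integer part becomes an exponent shift,
        and 2**f for the fractional part f is approximated by (1 + f)."""
        i = math.floor(x)
        f = x - i
        return (1.0 + f) * (2.0 ** i)

    def approx_softmax(xs):
        """SoftMax with exp(x) replaced by approx_exp2(x * log2(e));
        subtracting the maximum keeps the arguments non-positive."""
        m = max(xs)
        num = [approx_exp2((x - m) * LOG2E) for x in xs]
        s = sum(num)
        return [n / s for n in num]

    print(approx_softmax([1.0, 2.0, 3.0]))
    # ~[0.091, 0.255, 0.654]; exact SoftMax gives [0.090, 0.245, 0.665]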

Platforms: FPGAs or ASICs.

Energy Efficient Accelerators and DSPs

Increasingly sophisticated and computationally intensive algorithms are required for applications running on mobile devices and embedded processors. These applications include audio and image recognition, machine learning, and security.

In this context, Application-Specific Processors (ASPs) are used to accelerate software applications in hardware. FPGA-based accelerators are attractive because they can be designed and fine-tuned to match the algorithm exactly, and FPGAs can be reconfigured at run time, making the system adaptable to the specific workload.

Audio applications can benefit from specialized accelerators implementing emerging number formats, such as TFP16.

The main focus of this project is to characterize the behavior of different number formats in audio applications and to evaluate the trade-offs in terms of audio quality, spectra, performance, and energy efficiency.
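
One possible quality metric is sketched below (the rounding model, test tone and bit width are assumptions): the SNR of a reduced-precision version of an audio block measured against the full-precision reference.

    import math

    def snr_db(reference, test):
        """SNR in dB of 'test' with respect to the full-precision 'reference'."""
        sig = sum(r * r for r in reference)
        err = sum((r - t) ** 2 for r, t in zip(reference, test))
        return 10.0 * math.log10(sig / err)

    def round_significand(x, frac_bits):
        """Crude reduced-precision model: keep frac_bits fraction bits
        of the binary significand of x."""
        if x == 0.0:
            return 0.0
        e = math.floor(math.log2(abs(x)))
        scale = 2.0 ** (frac_bits - e)
        return round(x * scale) / scale

    fs = 48000
    ref = [math.sin(2 * math.pi * 440.0 * n / fs) for n in range(4800)]
    low = [round_significand(s, 7) for s in ref]     # ~7-bit fraction
    print(round(snr_db(ref, low), 1))                # SNR of the reduced-precision block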

In collaboration with Univ. of Rome "Tor Vergata", Italy.

Platforms: FPGAs and ASICs.


Modified by Alberto Nannarelli on Sunday September 25, 2022 at 10:21