UCF DRACO Lab: Design of Resilient Architectures for Computing


We develop algorithms and processes to automatically design, develop, and assess the resilience, robustness, and security of electronic devices and systems.

Our research includes topics at various stages of investigation: advanced/novel cryptographic logic primitives (polymorphic, homomorphic, quantum-enhanced); symbiosis of AI-designed constructs for AI applications; assessment and evaluation of assistive technologies in semiconductor design and post-manufacturing operation; and development of detection methods for AI-based sabotage.

We have opportunities for undergraduate research (paid or experiential, depending on interest and time commitment) and funded graduate research. Our research projects are tightly coupled with the topics listed above, but we also sponsor more applied projects that may be of interest to Senior Design Teams.

All Projects

AI Detecting Hardware Trojans

The aim of this project is to assess whether a hardware trojan can be detected within a large-scale system. The approach involves implementing a hardware trojan designed to periodically acquire “sensitive information.” The goal is to investigate whether an AI model can detect variations in power consumption and determine whether it can pinpoint exactly when sensitive information is being leaked.
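As a toy illustration of the detection side, the sketch below uses a synthetic power trace, a fabricated trojan activation period, and a simple threshold detector; none of these values come from real hardware measurements.

```python
import random
import statistics

random.seed(0)

# Synthetic power trace: Gaussian baseline noise plus a periodic spike
# every 50 samples, standing in for the extra switching activity of a
# trojan exfiltrating data. Values are arbitrary units, not measurements.
TROJAN_PERIOD = 50
trace = []
for t in range(1000):
    sample = random.gauss(1.0, 0.02)      # nominal core power
    if t % TROJAN_PERIOD == 0:
        sample += 0.15                    # hypothetical trojan activity
    trace.append(sample)

# Simple detector: flag samples more than 4 standard deviations above
# the mean of the whole trace.
mu = statistics.mean(trace)
sigma = statistics.stdev(trace)
flagged = [t for t, s in enumerate(trace) if s > mu + 4 * sigma]

# Every flagged sample should line up with the trojan's activation period.
detected_periodic = bool(flagged) and all(t % TROJAN_PERIOD == 0 for t in flagged)
```

An actual model would face far noisier traces and would need to learn the trojan's signature rather than threshold a known period, which is precisely what makes this an open research question.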

Formal Verification of a RISC-V Processor

This research project focuses on using Formal Verification, specifically Formal Equivalence Checking, to verify the equivalence of two RISC-V processors. The goal is to compare the behavior of an already-verified RISC-V processor with an unverified one using mathematical proofs, identifying and addressing any discrepancies.
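A miniature version of the miter-style comparison behind equivalence checking can be sketched in plain Python. The 4-bit adder, the behavioral reference, and the exhaustive enumeration are illustrative stand-ins; real equivalence checking of processors relies on SAT/BDD engines rather than enumeration.

```python
# Toy equivalence check: exhaustively compare a gate-level ripple-carry
# adder against a reference behavioral adder over all 4-bit inputs.
# The miter idea: the designs are equivalent iff their outputs never differ.

WIDTH = 4

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y):
    """Gate-level 'implementation under verification'."""
    carry, result = 0, 0
    for i in range(WIDTH):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def reference_add(x, y):
    """Already-verified behavioral model."""
    return (x + y) & ((1 << WIDTH) - 1)

# Collect any input pair on which the two designs disagree.
counterexamples = [(x, y)
                   for x in range(1 << WIDTH)
                   for y in range(1 << WIDTH)
                   if ripple_carry_add(x, y) != reference_add(x, y)]

equivalent = not counterexamples
```

For a processor, the state space is far too large to enumerate, which is why the project turns to formal tools that prove the absence of counterexamples symbolically.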

Automated RTL Generation

Automated synthesis tools have made large strides in recent years in their ability to produce production-grade code from minimal specification. Large language model systems like Bard and ChatGPT have established a new state of the art in program synthesis, but their application to Verilog and other HDLs remains an open question. Initial experiments have demonstrated significant restrictions on the off-the-shelf adoption of these tools, so this project seeks to answer questions such as: For which problems have transformer architectures ingested sufficient training data to produce correct Verilog? What capabilities have these systems developed for evaluating themselves? And what interaction paradigms are most effective for interfacing with these novel tools? This project evaluates a number of AI models on their code synthesis capabilities and, using an open coding technique, describes the quality of the code produced by these systems in comparison to wholly human-written code.
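As one illustration of automated pre-screening that might precede a manual open-coding pass, the sketch below runs a few crude structural checks over Verilog source strings. The checks and sample modules are invented for illustration and are far weaker than a real linter or parser.

```python
import re

# Crude structural checks on model-generated Verilog: balanced
# module/endmodule keywords, and outputs that are never referenced
# again (and so are probably never driven).

def quick_checks(src):
    issues = []
    modules = len(re.findall(r"\bmodule\b", src))
    endmodules = len(re.findall(r"\bendmodule\b", src))
    if modules != endmodules:
        issues.append("unbalanced module/endmodule")
    for out in re.findall(r"\boutput\s+(?:wire\s+|reg\s+)?(\w+)", src):
        # An output appearing only in its declaration is never driven.
        if len(re.findall(r"\b%s\b" % out, src)) < 2:
            issues.append("output %s never driven" % out)
    return issues

good = """module half_adder(input a, input b, output s, output c);
  assign s = a ^ b;
  assign c = a & b;
endmodule"""

bad = "module broken(input a, output s);\n  // TODO\n"  # truncated output

good_issues = quick_checks(good)
bad_issues = quick_checks(bad)
```

Checks like these catch only surface defects; the open-coding pass is what captures the deeper quality judgments the project is after.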

Security in Automobile Systems

As autonomous vehicle technologies advance, the demand for secure embedded systems is increasing. Vehicle systems rely on real-time data to ensure the safety of the vehicle and its passengers, so it is imperative that these systems are fortified against attacks that could compromise data integrity or real-time operation. Widely employed in automotive applications, the STM32 microcontroller offers a platform for a Real-Time Operating System (RTOS). This research addresses the question: does an STM32 microcontroller running an RTOS maintain both data integrity and real-time operation when subjected to a clock glitching attack?
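A clock glitch requires hardware to induce, but its classic effect, a skipped instruction, can be modeled in software. The sketch below is a toy model only: the firmware routine, the glitch model, and the double-computation countermeasure are all illustrative assumptions, not the project's actual setup.

```python
import random

random.seed(1)

# Toy model of a fault-injection experiment: the clock glitch is modeled
# as skipping one loop iteration of a firmware routine that sums sensor
# readings. A real attack on an STM32 needs a hardware glitching rig.

def accumulate(data, glitch_at=None):
    total = 0
    for i, value in enumerate(data):
        if i == glitch_at:            # instruction "skipped" by the glitch
            continue
        total += value
    return total

data = [random.randrange(1, 256) for _ in range(32)]   # fabricated sensor data

clean = accumulate(data)
glitched = accumulate(data, glitch_at=10)

# Countermeasure: compute twice and compare; a transient glitch that hits
# only one run produces a mismatch.
fault_detected = clean != glitched
```

The open question for the research is whether an RTOS under a real glitch preserves both the data (integrity) and its deadlines (real-time operation), neither of which a software model can answer on its own.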

Homomorphic Acceleration Side-Channel Attacks

With the rise of cloud computing, we need ways to protect our data against cloud-server attacks. Typically, data sent to the cloud must be decrypted before operations can be performed on it, creating a weakness for attackers to exploit. Homomorphic encryption addresses this by allowing cloud computing operations to be performed directly on encrypted data, securing against a potential cloud-side attack. However, homomorphic encryption/decryption is computationally expensive, and typical edge-side embedded devices need additional hardware and software to accelerate the en/decryption step. These edge-side operations may still be susceptible to side-channel attacks, potentially leaking information about the cryptographic keys. This project explores potential side channels in proposed homomorphic acceleration platforms.
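To make the homomorphic property concrete, here is a toy Paillier cryptosystem, one well-known additively homomorphic scheme. The primes are tiny and the implementation is deliberately insecure; it only demonstrates that multiplying ciphertexts adds the underlying plaintexts.

```python
import math
import random

random.seed(2)

# Toy Paillier: n = p*q, g = n+1, lambda = lcm(p-1, q-1).
# Encrypt: c = g^m * r^n mod n^2.  Decrypt: m = L(c^lambda mod n^2) * mu mod n.

p, q = 211, 223                      # demo primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can compute the sum without ever seeing a or b.
a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2
plain_sum = decrypt(c_sum)           # equals a + b (mod n)
```

The modular exponentiations above are exactly the expensive steps that edge devices accelerate, and it is those accelerators whose power and timing behavior this project probes for leakage.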

Router Canaries for Home Networks

In the event of a cyber attack that compromises a home network, an additional defense mechanism integrated into the router plays a crucial role. This measure is designed to identify the specific hardware targeted, pinpoint the IP addresses involved, and potentially gather pertinent legal information related to the breach. Employing canaries as a preemptive measure enables prompt alerts to both end users and the network provider regarding any unauthorized access. The safeguard also extends to detecting anomalies in IoT device data during access attempts, allowing networked appliances implicated in the breach to be isolated and their usage temporarily terminated. Analyzing input/output metrics, inter-packet arrival times, and data parameters such as CPU availability is crucial for identifying and understanding irregular patterns. This analysis is essential for triggering proactive responses, such as disrupting service to the compromised application when necessary, and for determining when to notify network providers, ensuring a timely and effective cybersecurity response.
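One of the signals above, inter-packet arrival times, can be sketched as a simple threshold detector. The timestamps, baseline, and threshold below are fabricated for illustration; a real canary would learn its baseline from live traffic.

```python
import statistics

# Toy canary check: flag an IoT device whose inter-packet arrival times
# deviate sharply from its learned baseline.

def inter_arrival_times(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# Baseline: a sensor that reports roughly every 5 seconds (fabricated).
baseline = [0.0, 5.1, 10.0, 15.2, 20.1, 25.0, 30.2]
gaps = inter_arrival_times(baseline)
mean_gap = statistics.mean(gaps)
sd_gap = statistics.stdev(gaps)

def is_anomalous(gap, k=4.0):
    """Flag gaps more than k standard deviations from the baseline mean."""
    return abs(gap - mean_gap) > k * max(sd_gap, 0.01)

# A burst of rapid traffic (e.g., exfiltration) produces tiny gaps;
# normal periodic reporting does not trip the detector.
normal_gap = 5.0
burst_gap = 0.05
alerts = [is_anomalous(normal_gap), is_anomalous(burst_gap)]
```

In a deployed canary this alert would feed the response logic described above: isolating the appliance, throttling the compromised application, and notifying the provider.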

Formal Verification of Hardware Designs from Large Language Models

The growing interest in Artificial Intelligence in commercial spaces has increased the demand for more reliable output from AI. Currently, Large Language Models (LLMs) require users to verify the feasibility of their output before applying it. Applying formal methods to verify outputs against specified metrics will increase the accuracy of LLMs and reduce the verification burden on the user. This project explores LLM-generated hardware designs in Verilog and C and compares them to already-verified designs across a variety of metrics, with a particular focus on hardware for safety-critical missions, where mistakes are detrimental. These results will give insight into LLMs' decision-making, showcase the shortcomings of AI-generated hardware design, and point to the challenges that remain in the AI formal verification space.