CSE REU Projects for 2025

Generative AI for Biomedical Imaging

Faculty: Ulugbek Kamilov

Biomedical imaging often involves reconstructing images that are free of artifacts and noise. Recent progress in generative AI has transformed the way we reconstruct biomedical images. REU students will work on advanced algorithms for image reconstruction based on the integration of optimization and generative AI models. We have developed a family of such techniques that use state-of-the-art generative AI to produce clean images from corrupted measurements. REU students will have an opportunity to learn about real-world problems in biomedical imaging, study cutting-edge imaging technology, and contribute to this exciting research area.
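
As a rough illustration of the general optimization-plus-generative-prior idea (plug-and-play style), the Python sketch below alternates a gradient step on a toy measurement model with a denoising step. The Gaussian filters merely stand in for a real scanner model and a trained generative denoiser; all names and parameters are illustrative rather than the group's actual algorithms.

    # Minimal reconstruction sketch: couple a data-fidelity gradient step
    # with a prior (denoising) step. Both Gaussian filters are placeholders.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward(x):
        """Toy measurement operator: blur the image (stand-in for a scanner model)."""
        return gaussian_filter(x, sigma=2.0)

    def denoise(x):
        """Placeholder prior step; a real system would call a trained generative model."""
        return gaussian_filter(x, sigma=1.0)

    def reconstruct(y, step=0.5, iters=50):
        """Alternate a gradient step on 0.5 * ||A x - y||^2 with the denoising prior."""
        x = np.zeros_like(y)
        for _ in range(iters):
            grad = forward(forward(x) - y)   # A^T (A x - y); this toy A is self-adjoint
            x = denoise(x - step * grad)
        return x

    rng = np.random.default_rng(0)
    truth = rng.random((64, 64))
    y = forward(truth) + 0.01 * rng.standard_normal((64, 64))   # noisy measurements
    x_hat = reconstruct(y)
    print("measurement residual:", np.linalg.norm(forward(x_hat) - y))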

Skills Required: Familiarity with image processing and deep learning. Proficiency with Python.

Explanation Design for AI-Assisted Decision Making

Faculty: Chien-Ju Ho

AI-assisted decision-making, which leverages AI support to enhance human decision-making, is becoming increasingly prevalent across various fields. However, its effectiveness is often hindered by humans either over-relying on or under-utilizing AI assistance. A commonly suggested remedy is to provide explanations for the AI’s recommendations. Unfortunately, existing explanation approaches have demonstrated limited empirical success in improving human-AI joint decision outcomes. In this REU project, the student will investigate how different designs of explanations in AI-assisted decision-making affect overall human-AI decision outcomes. The project includes conducting human-subject experiments to identify the elements that make explanations effective for individuals and developing theory-driven algorithms to improve explanation design.

Skills Required: A strong programming and mathematical background is required. Knowledge of and interest in psychology and economics are a plus.

Explainable Planning and Scheduling

Faculty: William Yeoh

In human-aware planning and scheduling systems, when the agent recommends a plan or schedule to a human user, the user may not understand why the recommendation is good, for example compared to an alternative the user has in mind. In such a scenario, the agent needs to explain its recommendation to the user, providing them with the information necessary to understand its properties (e.g., optimality or feasibility).

In this REU project, students will have the opportunity to investigate solution approaches from a wide spectrum, ranging from symbolic logic-based approaches that use knowledge representation and reasoning (KR) to data-driven approaches that use large language models (LLMs), as well as neuro-symbolic approaches that combine the benefits of both.

Skills Required: Strong programming skills. Familiarity with logic and/or LLMs is a plus.

Improving Large Language Model Capabilities

Faculty: Jiaxin Huang

We are entering an era with a proliferation of Large Language Models (LLMs). As these models become increasingly prevalent in real-world applications, addressing their limitations and enhancing their capabilities has become a critical area of research. This summer research project offers undergraduate students the opportunity to explore cutting-edge LLM research in one or more of the following directions: (1) Improving LLM trustworthiness through enhanced calibration, alignment, and reasoning capabilities. This involves developing methodologies to ensure LLM outputs are not only accurate but also appropriately confident and aligned with human values, as well as refining their ability to perform complex reasoning tasks. (2) Enhancing LLM efficiency by optimizing training and inference. This direction focuses on techniques that reduce the computational resources LLMs require without compromising performance, potentially exploring efficient fine-tuning and inference strategies as well as architectural designs. (3) Improving multimodal capabilities, with a particular emphasis on Vision Language Models (VLMs). This direction aims to create AI systems capable of understanding and generating content across multiple modalities.
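
To make the calibration theme in direction (1) concrete, the short Python sketch below computes the expected calibration error (ECE), one common way to check whether a model's stated confidence matches how often it is actually correct. The confidences and labels are synthetic and only illustrate the computation.

    # Expected calibration error: bin predictions by confidence and compare
    # each bin's average confidence with its empirical accuracy.
    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(correct[mask].mean() - confidences[mask].mean())
                ece += mask.mean() * gap   # weight each bin by its share of predictions
        return ece

    conf = [0.95, 0.9, 0.8, 0.7, 0.6, 0.55]   # model-reported confidences (made up)
    hit = [1, 1, 0, 1, 0, 0]                  # whether each answer was correct (made up)
    print("ECE:", round(expected_calibration_error(conf, hit), 3))

A well-calibrated model drives this number toward zero, which is what "appropriately confident" means in direction (1).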

Skills Required: Strong programming skills; prior experience with deep learning research projects is highly desirable.

Hyperdimensional Computing and Linear Codes

Faculty: Netanel Raviv

Hyperdimensional Computing (HDC) is an emerging computational paradigm for representing compositional information as high-dimensional vectors, with promising potential in applications ranging from learning with neural networks to neuromorphic computing. In a radical shift from traditional information-processing methods, HDC represents objects as random binary vectors and uses algebraic operations on those vectors to store data structures and apply learning algorithms.

Recently, linear error-correcting codes have been shown to provide substantial speedups in a variety of learning and information-processing tasks in HDC. In this project we will explore the theory and practice of using linear codes in HDC and examine their connection to associative memories. We will develop algorithms, prove bounds, and test them in real-world learning applications.
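
A minimal Python sketch of the core HDC operations may help make this concrete: binding with XOR, bundling by elementwise majority vote, and similarity measured by normalized Hamming distance. The dimensionality, keys, and values below are purely illustrative.

    # Basic HDC operations on random binary hypervectors.
    import numpy as np

    D = 10_000   # typical HDC dimensionality
    rng = np.random.default_rng(0)

    def random_hv():
        return rng.integers(0, 2, size=D, dtype=np.uint8)

    def bind(a, b):
        """XOR binding: the result is dissimilar to both inputs."""
        return a ^ b

    def bundle(*hvs):
        """Elementwise majority vote: the result stays similar to each input."""
        return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

    def similarity(a, b):
        """1 - normalized Hamming distance: about 0.5 for unrelated vectors."""
        return 1.0 - float(np.mean(a != b))

    # Store a tiny record {color: red, shape: round, size: small} in one hypervector.
    color, red = random_hv(), random_hv()
    shape, round_ = random_hv(), random_hv()
    size, small = random_hv(), random_hv()
    record = bundle(bind(color, red), bind(shape, round_), bind(size, small))

    # Unbinding with a key recovers something close to the stored value.
    print("color -> red?  ", similarity(bind(record, color), red))     # well above 0.5
    print("color -> round?", similarity(bind(record, color), round_))  # near 0.5

Where linear error-correcting codes fit into, and speed up, operations like these is part of what the project will explore.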

Skills Required: An ideal candidate will have strong mathematical skills, especially in linear algebra, finite fields, and probability. The student must be comfortable reading and writing proofs, able to think critically and algorithmically, and capable of implementing simple learning algorithms in Python. Familiarity with error-correcting codes or associative memories (Hopfield networks) is appreciated.

Learning While Reasoning

Faculty: Brendan Juba

We are developing methods for more sophisticated kinds of automated reasoning that directly use raw data during the reasoning process. These methods automatically identify regularities in the data that are relevant to the reasoning problem under consideration. In this project, we aim to show that our methods can successfully scale to reason about large sets of objects. We will modify existing systems for automated reasoning to incorporate our learning methods, and evaluate their performance.

Skills Required: Familiarity with logic and probability, with strong knowledge of at least one of the two. Also, strong knowledge of either C/C++ or convex optimization.

Cake-Cutting Fairness Explorations

Faculty: Ron Cytron

This work is funded by Mozilla through its Responsible Computer Science program. We aim to create exercises that use cake-cutting algorithms to demonstrate fairness and unfairness in algorithmic approaches to sharing divisible and indivisible resources. The student and I will study the algorithms together and brainstorm exercises; the student will then create and write up the exercises. The work will then be released to the larger Mozilla project leaders and to the general public for use in courses.
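
As a starting point for such exercises, the Python sketch below implements the classic divide-and-choose protocol for two agents sharing a divisible cake on [0, 1], with piecewise-constant valuations. The agents and their valuations are invented for illustration.

    # Divide-and-choose: the divider cuts the cake into two pieces it values
    # equally; the chooser then takes whichever piece it prefers.
    import numpy as np

    def value(density, a, b):
        """An agent's value for [a, b]; density lists equal-width segment weights."""
        edges = np.linspace(0.0, 1.0, len(density) + 1)
        total = 0.0
        for w, lo, hi in zip(density, edges[:-1], edges[1:]):
            overlap = max(0.0, min(b, hi) - max(a, lo))
            total += w * overlap / (hi - lo)
        return total

    def divide_and_choose(divider, chooser):
        half = value(divider, 0.0, 1.0) / 2.0
        lo, hi = 0.0, 1.0
        for _ in range(50):   # binary search for the divider's halfway cut point
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if value(divider, 0.0, mid) < half else (lo, mid)
        cut = (lo + hi) / 2.0
        left, right = (0.0, cut), (cut, 1.0)
        chooser_piece = left if value(chooser, *left) >= value(chooser, *right) else right
        divider_piece = right if chooser_piece == left else left
        return divider_piece, chooser_piece

    alice = [1, 1, 1, 1]   # the divider: values the cake uniformly
    bob = [0, 1, 2, 5]     # the chooser: strongly prefers the right end
    a_piece, b_piece = divide_and_choose(alice, bob)
    print("Alice gets", a_piece, "worth", round(value(alice, *a_piece), 3), "to her")
    print("Bob gets  ", b_piece, "worth", round(value(bob, *b_piece), 3), "to him")

By each agent's own valuation, neither would prefer the other's piece; contrasting this guarantee with indivisible-goods settings, where it breaks down, is one natural direction for exercises.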

Skills Required: Python programming and a basic algorithms course.

Reinforcement Learning for Satellite-Based GPS-Denied UAV Navigation

Faculty: Nathan Jacobs

In outdoor environments where GPS signals are unavailable or unreliable, traditional UAV navigation methods fall short. This project tackles the challenge by using reinforcement learning to train a virtual UAV to navigate autonomously, relying solely on a downward-facing camera, inertial measurement unit (IMU), and reference satellite images. Working entirely in a simulated environment, students will develop and refine algorithms to enable the UAV to match real-time visual data to pre-existing maps, plan efficient flight paths, and adapt to changes in the landscape. This project offers hands-on experience with cutting-edge tools in reinforcement learning and UAV simulation, equipping students to push the boundaries of autonomous navigation technology.
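
For a sense of the training loop involved, the Python sketch below runs tabular Q-learning on a tiny grid in which an agent must reach a goal cell. It is only a toy stand-in: the actual project uses a UAV simulator with camera, IMU, and satellite-image observations, and deep reinforcement learning rather than a lookup table.

    # Tabular Q-learning on a toy grid: learn to reach GOAL from the top-left cell.
    import numpy as np

    SIZE, GOAL = 5, (4, 4)
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def step(state, action):
        """Apply an action, clip at the map boundary, and reward reaching the goal."""
        r, c = state
        dr, dc = ACTIONS[action]
        nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
        return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

    rng = np.random.default_rng(0)
    q = np.zeros((SIZE, SIZE, len(ACTIONS)))
    alpha, gamma, eps = 0.5, 0.95, 0.1

    for episode in range(500):
        state = (0, 0)
        for _ in range(100):
            greedy = int(np.argmax(q[state]))
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else greedy
            nxt, reward, done = step(state, a)
            target = reward + gamma * np.max(q[nxt]) * (not done)
            q[state][a] += alpha * (target - q[state][a])   # standard Q-learning update
            state = nxt
            if done:
                break

    print("Greedy first action:", ["up", "down", "left", "right"][int(np.argmax(q[0, 0]))])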

Skills Required: Experience with Python programming and reinforcement learning is required. Experience with computer vision and image processing would be helpful but is not required.

Computer Vision for Plant Ecology

Faculty: Nathan Jacobs

The field of computer vision has changed dramatically in the past ten years, making it possible to automatically extract detailed information from vast sets of images. While challenges remain, this presents an opportunity to improve our understanding of earth processes and their impact on plants and animals. The Multimodal Vision Research Laboratory (MVRL) is developing computer vision techniques to address pressing problems in ecology in close collaboration with domain experts. The focus of this project is on developing methods to assist botanists in processing herbarium specimens (plants that are collected in the field, dried, and stored in archives). There is a vast global effort to digitize these specimens to support research on climate change and climate resilience, but automated tools are needed to make better use of this imagery. We anticipate two students working in collaboration on different aspects of the problem (e.g., triaging images to support routing to the appropriate expert, improving estimates of geographic location for specimens without precise GPS, etc.).
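
To give a flavor of the triage task, the Python sketch below routes a specimen image to a hypothetical expert group using a simple classifier over image embeddings. The embeddings are synthetic random vectors (in practice they would come from a pretrained vision model), the group names are invented, and scikit-learn is assumed to be available.

    # Toy triage: classify an image embedding and route it to the matching expert group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    GROUPS = ["ferns", "grasses", "orchids"]   # hypothetical routing targets

    # Synthetic embeddings: each group clusters around its own random center.
    centers = rng.normal(size=(len(GROUPS), 128))
    X = np.vstack([centers[g] + 0.3 * rng.normal(size=(50, 128)) for g in range(len(GROUPS))])
    y = np.repeat(np.arange(len(GROUPS)), 50)

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Triage a new (synthetic) specimen: send it to the most likely expert group.
    new_embedding = centers[2] + 0.3 * rng.normal(size=128)
    probs = clf.predict_proba(new_embedding.reshape(1, -1))[0]
    print("Route to:", GROUPS[int(np.argmax(probs))], "confidence", round(float(probs.max()), 2))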

Skills Required: Experience with Python programming, data wrangling, and machine learning is required. Experience with computer vision, image processing, and deep learning would be helpful but is not required.

High-Performance Computing to Benefit Scientific Observation

Faculty: Roger Chamberlain, Jeremy Buhler, Ron Cytron

The task of gathering data from scientific instruments — such as telescopes, particle sensors, and DNA sequencers — can benefit from advances in computational techniques. Instruments can report their observations faster and more reliably, and large volumes of observation data can be processed in real time within the instrument to make rapid discoveries and direct follow-up observations. Ultimately, more powerful computation can dramatically increase the ability of our instruments to answer questions about the world around us.

In this project, we work with experts in multiple domains of science (e.g., astrophysics, aerosol chemistry, molecular biology) to improve the performance and power efficiency of computations that benefit scientific observation. Because these computations must occur within or adjacent to the observing instrument, they must meet strict size, weight, and power (SWaP) constraints while also achieving goals for application throughput and latency. To succeed within these constraints, we often turn to low-power multicores and to non-traditional computer architectures (FPGAs, GPUs) to accelerate applications.

Skills Required: Programming ability; C/C++ experience is a plus but not strictly required.