CSE REU Projects for 2023
Algorithmic Procedural Fairness
Faculty: Yevgeniy Vorobeychik
Algorithmic approaches are now abundant, and increasingly the automated decisions they make impact real lives. How do we ensure that such decisions are fair? Conventional approaches to algorithmic fairness focus primarily on the distribution of outcomes. However, research in procedural fairness suggests that procedural aspects, such as treating people with dignity and giving them a voice in the process, are often perceived as more fundamentally important by individuals actually affected. This project will involve a human subject study of aspects of procedural fairness in the context of algorithmic decision making. We will investigate, in particular, how explanations and the ability to contest decisions impact human judgments of fairness.
Skills Required: Strong programming skills, experience in web programming, and solid understanding of statistics and data analysis. Prior experience with Amazon Mechanical Turk is a plus.
Miniature Urban Autonomous Driving
Faculty: Yevgeniy Vorobeychik
We are developing an urban autonomous driving platform, WU-mini-city. The goal for this summer is to develop and hone basic self-driving capability of a miniature autonomous car (MIT Racecar/J architecture), including lane following and obstacle avoidance. A significant practical challenge is that many such tasks require non-trivial computation. The research task is to develop approaches that are efficient enough, and with low enough latency, to run on the computing platforms built into the miniature vehicles. We will explore deep reinforcement learning approaches as well as more conventional model-based approaches to solve these problems.
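As a point of reference for the model-based end of the spectrum, lane following is often bootstrapped with a classical feedback controller before any learning is introduced. The sketch below is illustrative only; the gains, the toy kinematics, and the function names are assumptions for this write-up, not part of the WU-mini-city codebase.

```python
# Minimal proportional-derivative (PD) lane-keeping controller sketch.
# Assumes an upstream vision pipeline that reports the car's lateral
# offset from the lane center (meters); all names and gains here are
# illustrative.

def pd_steering(offset, prev_offset, dt, kp=1.2, kd=0.4, max_angle=0.35):
    """Map lane offset to a steering angle (radians), clamped to limits."""
    derivative = (offset - prev_offset) / dt
    angle = -(kp * offset + kd * derivative)  # steer against the offset
    return max(-max_angle, min(max_angle, angle))

# Simulate the controller pulling the car back toward the lane center.
offset, prev, dt = 0.30, 0.30, 0.05  # start 30 cm right of center
for _ in range(100):
    angle = pd_steering(offset, prev, dt)
    prev, offset = offset, offset + 0.5 * angle * dt  # toy kinematics
```

A controller like this gives a cheap, low-latency baseline against which learned policies can be compared on the same hardware.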
Skills Required: Strong programming and mathematical background. Knowledge of machine learning foundations, both conceptual and practical. Significant experience with Python, including its ML libraries. Prior experience with deep learning, reinforcement learning, and/or robotics is a strong plus.
Deep Learning for Computational Imaging
Faculty: Ulugbek Kamilov
Computational imaging often deals with the problem of forming images free of artifacts and noise. REU students will work on advanced algorithms for image restoration that are based on integration of optimization and machine learning. We have developed a family of such techniques that use learned information, such as natural image features, to generate clean images from the corrupt ones. REU students will have an opportunity to learn about real-world problems in biomedical imaging, study cutting edge imaging technology, and contribute to this exciting research area.
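The integration of optimization and learned priors described above is often realized as an iterative "plug-and-play" loop that alternates a data-consistency step with a denoising step. The sketch below is a toy illustration under stated assumptions, not the group's actual method: a 1-D moving-average filter stands in for a learned denoiser, and the forward model is pure denoising.

```python
import numpy as np

# Sketch of a plug-and-play style restoration loop: alternate a
# data-consistency gradient step on 0.5*||x - y||^2 with a denoiser
# step. The moving-average filter below is a placeholder for the
# learned image prior used in the actual research.

def box_denoise(x, k=5):
    """Toy denoiser: 1-D moving average (stand-in for a learned model)."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def pnp_restore(y, step=0.5, iters=20):
    x = y.copy()
    for _ in range(iters):
        x = x - step * (x - y)  # gradient step on the data-fidelity term
        x = box_denoise(x)      # plug-and-play prior step
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
restored = pnp_restore(noisy)
```

Swapping the toy filter for a trained network (and the identity forward model for a realistic imaging operator) recovers the structure of the learned-prior methods the project studies.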
Skills Required: Familiarity with image processing, optimization, and machine learning. Knowledge of Python and/or Matlab.
Modernizing Internet Communications
Faculty: Patrick Crowley
With the arrival of HTTP/3 and QUIC, the Internet is experiencing its most profound technological changes in decades. The QUIC protocol is replacing TCP in all new apps and services, and change is coming for all networked software systems and hardware platforms. This NSF-sponsored project is building a software platform that aims to dramatically simplify the development, debugging, and evaluation of software and services built on HTTP/3 and QUIC. In this project, you will work alongside PhD students and research staff members to learn about HTTP/3 and QUIC, and to learn how to write and evaluate mobile apps and back-end services with these new protocols.
Skills Required: Strong programming skills. You should have a strong interest in becoming proficient with Rust, web programming, TCP/IP, OpenTelemetry, Android, and iOS.
Semi-Automated Camera Trap Annotation
Faculty: Nathan Jacobs, Solny Adalsteinsson (Tyson Research Center)
The integration of passive monitoring devices (e.g., camera traps and acoustic recorders) into ecological research has advanced our understanding of a wide range of taxa, particularly among nocturnal or cryptic species. For example, motion-triggered infrared cameras document animal presence 24 hours a day with minimal disturbance to study subjects. While passive monitoring devices continue to improve and become more affordable, ecologists are limited by their ability to process the enormous volumes of data that these devices collect. Human annotation of these datasets is time-consuming, costly, and limits the practical utility of passive monitoring devices.
To overcome this problem, there is a need for semi-automated systems to rapidly annotate project imagery. Such systems must carefully combine human expertise with advanced computer vision algorithms. This project, a collaboration with Dr. Adalsteinsson (Tyson Research Center), an expert in wildlife ecology, will develop systems for annotating camera-trap imagery and computer-vision algorithms for automatically detecting animals in camera-trap images.
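The human-in-the-loop triage at the heart of such a semi-automated pipeline can be sketched simply: detections the model is confident about are auto-accepted, near-zero scores are treated as empty frames, and the uncertain middle band is queued for expert review. The thresholds, field names, and filenames below are illustrative, not from a deployed system.

```python
# Confidence-band triage for semi-automated camera-trap annotation.
# Thresholds and record fields are illustrative assumptions.

def triage(detections, accept=0.90, reject=0.10):
    auto_labeled, review_queue, empty = [], [], []
    for det in detections:
        if det["score"] >= accept:
            auto_labeled.append(det)   # trust the model's label
        elif det["score"] <= reject:
            empty.append(det)          # likely a blank/false trigger
        else:
            review_queue.append(det)   # send to a human expert
    return auto_labeled, review_queue, empty

dets = [
    {"image": "cam3_0412.jpg", "label": "deer",    "score": 0.97},
    {"image": "cam3_0413.jpg", "label": "raccoon", "score": 0.55},
    {"image": "cam3_0414.jpg", "label": "deer",    "score": 0.04},
]
labeled, queued, empty = triage(dets)
```

The research questions then become where to set the bands so that expert time is spent only where it changes the outcome, and how to feed the reviewed examples back into the detector.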
Skills Required: Strong Python programming and mathematical background. Knowledge of machine learning foundations, both conceptual and practical. Prior experience with deep learning and/or computer vision is a strong plus.
Cyber-physical attack and defense
Faculty: Ning Zhang
Cyber-physical systems, such as autonomous vehicles, are revolutionizing different sectors of our society, from manufacturing to transportation. While the industry is excited about the potential of such systems with pervasive connectivity, security in these safety-critical cyber-physical systems remains a major concern for users, developers, and lawmakers. The goal of this project is to develop the platform and attacks needed to enable verification of system defenses.
In this project, the REU student will work with Ph.D. students on one or more of the following tasks: (1) reproducing existing attacks/defenses on real CPS platforms, including cyber- and physical-based methods; (2) building hardware-in-the-loop simulation; (3) developing performance profiling systems in the Linux kernel to facilitate overhead measurement. Based on their interests, REU students can choose to develop their knowledge of the kernel, hacking, and/or adversarial machine learning.
Skills Required: Strong programming skills and familiarity with C++ or Python. Some understanding of the Linux systems stack (e.g., scheduling, network, perf subsystems) is preferred. Prior experience with robots/simulation is a plus.
Fine-Grained, Task-Driven Urban Remote Sensing
Faculty: Nathan Jacobs
Early work in remote sensing focused on rural areas due to the relatively low spatial resolution of traditional sensors. Over the past ten years, there have been rapid advances in the spatial resolution and temporal frequency of satellite platforms, and it’s increasingly inexpensive to collect high-resolution data from ground-based and aerial platforms. This, coupled with the increasing urbanization of world populations, means that it’s critical to develop automated tools to interpret raw sensor data (images, point clouds, and radar) with the goal of building fine-grained, task-driven models of urban areas.
Within this broad research area, there are two planned focus areas: (1) using deep learning approaches to construct fine-grained maps of urban areas, including building functions and transportation infrastructure, by fusing ground-based and overhead imagery; and (2) using contrastive learning approaches to assess whether a given piece of social media is likely to have been generated at a purported location and time.
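Focus area (2) hinges on a contrastive objective: embeddings of a piece of media and of its purported location/time context should score higher for matching pairs than for mismatched ones. The sketch below shows an InfoNCE-style loss on toy data; the random vectors stand in for learned encoder outputs, and all names are illustrative assumptions.

```python
import numpy as np

# InfoNCE-style contrastive loss sketch: within a batch, the diagonal
# entries of the similarity matrix are the matching (media, context)
# pairs; all other entries serve as negatives.

def info_nce(media_emb, context_emb, temperature=0.1):
    m = media_emb / np.linalg.norm(media_emb, axis=1, keepdims=True)
    c = context_emb / np.linalg.norm(context_emb, axis=1, keepdims=True)
    logits = m @ c.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on matches

rng = np.random.default_rng(0)
context = rng.standard_normal((8, 32))           # stand-in context embeddings
aligned = context + 0.01 * rng.standard_normal((8, 32))  # matched media
shuffled = rng.permutation(context)              # mismatched media
loss_matched = info_nce(aligned, context)
loss_random = info_nce(shuffled, context)
```

Trained with such a loss, the learned similarity score itself becomes the consistency check: media whose embedding scores poorly against its purported location and time is flagged as suspect.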
Skills Required: Strong Python programming and mathematical background. Knowledge of machine learning foundations, both conceptual and practical. Prior experience with deep learning and/or computer vision is a strong plus.
High-Performance Computing to Benefit Scientific Observation
The task of gathering data from scientific instruments — such as telescopes, particle sensors, and DNA sequencers — can benefit from advances in computational techniques. Instruments can report their observations faster and more reliably, and large volumes of observation data can be processed in real time within the instrument to make rapid discoveries and direct follow-up observations. Ultimately, more powerful computation can dramatically increase the ability of our instruments to answer questions about the world around us.
In this project, we work with experts in multiple domains of science (e.g., astrophysics, aerosol chemistry, molecular biology) to improve the performance and power efficiency of computations that benefit scientific observation. Because computations must occur within or adjacent to the observing instrument, they must meet strict size, weight, and power (SWaP) constraints as well as achieving goals for application throughput and latency. To succeed within these constraints, we often turn to low-power multicores and to non-traditional computer architectures (FPGAs, GPUs) to accelerate an application.
Skills Required: Programming ability; C/C++ experience is a plus but not strictly required.
Faculty: Caitlin Kelleher
Code maintenance represents more than 50% of the total cost of developing a piece of software. Understanding an unfamiliar codebase is a major contributor to the time spent on software maintenance. Increasing the efficiency of code modification tasks even modestly has the potential to enable faster technological progress. Several studies suggest that as programmers attempt to understand the behavior of unfamiliar code, they pose and attempt to answer a variety of questions, including which parts of the code are relevant to a task as well as the rationales behind implementation details. While some researchers have begun to explore the use of more detailed code histories to support these information needs, no clear approach has yet emerged. We note that much of the information sought during code maintenance tasks exists in the context and behavior of the programmer who originally wrote that code. Yet, while we routinely store different versions of a codebase, we rarely capture any of the surrounding information produced and consumed during the development of that codebase.
Imagine that instead of capturing a series of code snapshots, we could instead capture the story of the code. This story would consist of the series of problems the programmer encountered and the path to their solutions. Thus, a code story would include both how the code developed over time and the influences that shaped that development. This approach is motivated by work in four areas: program comprehension, narrative and memory, advance organizers, and cognitive load theory. While the foundations of each are unique, all suggest that providing a structure (such as the problem to be solved) followed by the details (the path to solve the problem) enables learners to more readily structure, process, and retain information. Thus, we hypothesize that a story-based approach to presenting code histories will help programmers to more quickly 1) identify which sections of code are relevant to a particular problem, 2) understand the code they deem relevant, and 3) modify it appropriately. The goals of this project are to understand the structure of coding histories and use that to develop code stories that aid programmers in understanding, using, and modifying existing codebases.
Skills Required: Web programming