Research
Carnegie Mellon’s Software and Societal Systems Department (S3D) hosts an active research group with a highly interdisciplinary approach to software engineering. Indeed, we believe that interdisciplinary work is inherent to software engineering. The field of software engineering (SE) is built on computer science fundamentals, drawing from areas such as algorithms, programming languages, compilers, and machine learning. At the same time, SE is an engineering discipline: both the practice of SE and SE research problems revolve around technical solutions that successfully resolve conflicting constraints. As such, trade-offs between costs and benefits are an integral part of evaluating the effectiveness of methods and tools.
Emerging problems in privacy, security, and mobility pose many of the challenges faced by today’s software engineers and motivate new solutions in SE research. Because software is built by people, SE is also a human discipline, and research in the field draws on psychology and other social sciences. Carnegie Mellon faculty bring expertise from all of these disciplines to bear on their research, and we emphasize this interdisciplinary approach in our REU Site.
Below, you'll find projects we are planning for summer 2025.
Adversarial attacks on self-driving car data
Mentor: Claire Le Goues
Description and Significance
Roughly $100 billion has been invested in self-driving car development. After all of that investment, and many recent high-profile failures, one thing is clear: self-driving cars are difficult to develop. Now it is your turn to discover failures in machine-learning-based self-driving car algorithms.
In this project, we will develop new methods for quickly finding failures when testing self-driving car algorithms in simulation. We'll build on a varied set of technologies, including differentiable rendering, adversarial algorithms, testing-environment generation, and deep learning. We will develop a testing system that creates an initial scene for a self-driving car algorithm, then makes modifications, such as moving a traffic cone or changing the lighting, until the algorithm demonstrates unsafe behavior. Uniquely, we plan to encode those changes in a differentiable way, so we can use gradient descent to find catastrophic changes quickly.
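As a sketch of the core idea (with a made-up, trivially differentiable "unsafety" score standing in for the real rendering-plus-policy pipeline, and invented scene parameters such as a cone offset and a lighting level), gradient descent over scene parameters might look like this:

```python
# Toy sketch: gradient descent on scene parameters to provoke unsafe behavior.
# The scene, driving policy, and loss below are stand-ins for the real
# differentiable-rendering pipeline described above.

def unsafety_loss(params):
    """Hypothetical differentiable proxy: lower means closer to unsafe behavior.
    Here we pretend the algorithm fails when the traffic-cone offset (params[0])
    nears 2.0 and the lighting level (params[1]) nears 0.3."""
    return (params[0] - 2.0) ** 2 + (params[1] - 0.3) ** 2

def grad(f, params, eps=1e-5):
    """Finite-difference gradient; a real pipeline would use autodiff."""
    g = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        g.append((f(bumped) - f(params)) / eps)
    return g

def find_failure(params, lr=0.1, steps=200):
    """Descend the unsafety loss to search for a failure-inducing scene."""
    for _ in range(steps):
        g = grad(unsafety_loss, params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

scene = find_failure([0.0, 1.0])  # start: cone at origin, full lighting
print(scene)  # parameters driven toward the unsafe configuration
```

The real project replaces the toy loss with a loss computed by rendering the scene and running the driving algorithm, which is exactly why differentiable rendering matters: it lets the gradient flow from the car's behavior back to the scene parameters.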
We're looking for a student who is comfortable with Python development and conversant in calculus and linear algebra. If you have experience in graphics, machine learning, robotics, or game development, you're an especially good fit. If you don't have experience in any of those areas but are interested, this project may be a great place to learn.
Student Involvement
The student on this project will learn how to build usable and practical tools for automated self-driving car bug finding in Python, and will apply these skills by extending our existing work on adversarial grasping to self-driving algorithms.
AI-Driven Braille Decoding Systems
Mentor: Justin Chan
Description and Significance
Braille literacy rates among the blind are as low as 10% in the United States, in part due to the difficulty of learning braille and a shortage of qualified teachers. This significantly affects the 75,000 people in the US who lose part or all of their vision every year due to age-related conditions and other diseases. There is thus an unmet need for systems that can significantly improve the reading capabilities of the newly blind.
In this project, we aim to design low-cost hardware and AI-driven software systems that leverage either (a) tactile and acoustic sensors or (b) optical sensors, combined with multi-modal machine learning models that can robustly decode braille in a variety of real-world conditions, including non-ideal lighting, ambient sound, and varied usage patterns across different users.
Given the low-cost and portable nature of the systems' hardware design, they have the potential to be deployed at scale, particularly in low- and middle-income countries, where the need for such technologies is especially acute. All hardware and software artifacts produced by this research will be made open source.
Student Involvement
The student will have the opportunity to lead or join a project that can lead to publication at a venue in mobile systems, ubiquitous computing or HCI.
The student will engage in hardware prototyping with microcontrollers and sensors and in the design of applied signal processing and machine learning pipelines. The student's exact responsibilities will be tailored to their background and skill set.
The student will benefit from being part of a vibrant lab community that is highly engaged with its work and supportive of each other's growth and development. The student can expect hands-on research and career mentorship from a faculty and senior PhD team throughout the internship and, if the project is successful, afterward through to its publication.
An AI Driven Operating System for GPUs
Mentor: Dimitrios Skarlatos
Description and Significance
As GPUs become a cornerstone of modern computing, there is an urgent need for operating systems tailored to their unique architecture. Traditional CPU-based operating systems often fail to harness the full power of GPUs, which are optimized for parallel processing and high-throughput workloads. Developing an efficient, GPU-specific operating system can unlock unprecedented performance, improve scalability, and revolutionize fields such as AI and scientific computing.
In this project, we aim to design and build an innovative operating system specifically for GPUs. This system will manage GPU resources more efficiently, provide advanced scheduling algorithms to maximize utilization, and simplify development through intuitive programming interfaces. To ensure its usability and impact, we will explore new abstractions tailored for GPU workloads, support seamless integration with CPU-based systems, and focus on optimizing memory management and data transfers—a critical bottleneck in GPU computing.
Student Involvement
This project offers a range of exciting opportunities for students interested in the intersection of computer architecture, systems programming, and high-performance computing. Students can contribute to designing novel GPU-specific operating system abstractions, developing low-level resource management algorithms, and creating tools to benchmark and validate performance improvements. There are also opportunities to work on user-focused challenges, such as improving developer interfaces and enabling interoperability with existing software ecosystems. Whether you're passionate about systems-level programming, GPU architecture, AI, or software design, this project provides a hands-on environment to make a tangible impact in cutting-edge computing.
References
https://www.cs.cmu.edu/~caos
Accelerated Software Testing - NaNofuzz
Mentors: Joshua Sunshine and Brad Myers
Description and Significance
Generating a robust test suite is often considered one of the most difficult tasks in software engineering. In the United States alone, software testing labor is estimated to cost at least $48 billion per year. Despite this high cost, and despite widespread automation in test execution and other areas of software engineering, test suites continue to be created manually by software engineers. Automatic Test sUite Generation (ATUG) tools have shown promising results in non-human experiments, but they are not widely adopted in industry.
Prior research provides clues that traditional ATUG tools may not be well-aligned to the process human software engineers use to generate test suites. For instance, some tools generate incorrect or hard-to-understand test suites while others obscure important information that would help the software engineer evaluate the quality of the test suite. Often these problems are evident only by observing software engineers using these tools.
NaNofuzz was recently featured on the Hacker News homepage. Learn more about NaNofuzz at its GitHub repository: https://github.com/nanofuzz/nanofuzz
Student Involvement
This research project will flex your research muscles: you will approach the problem of test suite generation using a comprehensive mix of theory, human observation, PL, HCI, and prototype engineering. You will gain insights using our emerging theory of test suite generation and using innovative prototype tools, such as NaNofuzz, that you help build. This comprehensive approach will expand your research skill set and help you discover new science-based solutions that may make testing easier and more enjoyable for the 26+ million (estimated) software engineers on earth today.
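To give a flavor of what automated test generation involves, here is a minimal random fuzzer in Python. The function under test and its seeded bug are invented for illustration; real tools such as NaNofuzz use far more sophisticated input generation and result presentation:

```python
import random

def buggy_abs(x):
    """Hypothetical function under test, with a deliberately seeded bug:
    it returns a negative result for the single input -7."""
    return x if x >= 0 else (-x if x != -7 else x)

def fuzz(fn, oracle, trials=10000, seed=0):
    """Generate random inputs and collect any that violate the oracle."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.randint(-10, 10)
        if not oracle(fn, x):
            failures.append(x)
    return sorted(set(failures))

# Oracle: an absolute value must never be negative.
found = fuzz(buggy_abs, lambda f, x: f(x) >= 0)
print(found)  # → [-7]
```

Part of what this project studies is everything the sketch leaves out: how failing inputs are chosen, deduplicated, and explained to the engineer, which is where human-centered design becomes essential.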
CoDec: Designing Carbon Labels for Societal Decarbonization
Mentor: Yuvraj Agarwal
Description and Significance
Climate change and global warming have been a central focus worldwide, especially within the last decade. In 2015, the UN Paris Agreement was signed, calling for global carbon emissions to be reduced by 45% by 2030, and 100% by 2050. However, we as a global community are falling behind on this goal. Computational Decarbonization (CoDec) is an emerging field dedicated to addressing these global issues, and focuses on optimizing carbon efficiency to reduce the lifecycle carbon emissions of computing and societal infrastructure using computational and data-driven techniques.
An important mission within the computational decarbonization community is creating methods to provide consumers with the knowledge required to make carbon-aware decisions when they buy products. In this project, we will explore creating “carbon labels” for products to communicate carbon emissions-related information to consumers.
Student Involvement
Students will work with a graduate student to help conduct a user study to evaluate whether consumers want carbon information readily available and if so what should the content of a carbon label look like. Students involved with this project will learn how to develop and run user studies, analyze quantitative data, methods to analyze open ended responses and qualitative data, and will gain familiarity with the field of computational decarbonization as a whole.
Compositional Verification of Distributed Protocols in TLA+
Mentor: Eunsuk Kang
Description and Significance
Distributed protocols, e.g., Paxos and Raft, are at the heart of communication in distributed systems. It is very important for these protocols to be correct, yet verifying (proving) their correctness is extremely challenging. In this project, we aim to improve the scalability and performance of tools used to verify the correctness of distributed protocols written in the TLA+ formal specification language. In particular, we will apply compositional techniques to advance the state of the art in model checking (automatically verifying) distributed protocols.
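To illustrate what model checking means at its core, here is a toy explicit-state checker in Python. The two-process "protocol" and its mutual-exclusion invariant are invented for this sketch; TLA+ tools like TLC operate on the same reachability principle but scale to vastly larger state spaces, which is exactly where compositional techniques help:

```python
from collections import deque

def model_check(init, next_states, invariant):
    """Explicit-state reachability: BFS over all reachable states; return a
    counterexample trace if some state violates the invariant, else None."""
    frontier = deque([(s, [s]) for s in init])
    seen = set(init)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None

# Two processes, each: 0=idle, 1=trying, 2=critical. A deliberately broken
# "protocol" with no locking lets both enter the critical section.
def step(s):
    for i in (0, 1):
        if s[i] < 2:
            t = list(s)
            t[i] += 1
            yield tuple(t)
        if s[i] == 2:
            t = list(s)
            t[i] = 0
            yield tuple(t)

mutex = lambda s: not (s[0] == 2 and s[1] == 2)
cex = model_check([(0, 0)], step, mutex)
print(cex)  # a trace from (0, 0) ending in the violating state (2, 2)
```

Compositional verification attacks the state-explosion problem visible even here: instead of exploring the product of all processes' states, it verifies components against assumptions about their environment.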
Student Involvement
Students will learn about the theory of model checking, including TLA+ and compositional verification. The goal of the project is for the student to implement compositional verification techniques in an existing model checker for the TLA+ language. Students will work with one of two model checkers that our team has been developing: Recomp-Verify and Carini (see the references). Both model checkers are implemented in Java, so students will be expected to write Java code.
References
https://github.com/cmu-soda/recomp-verify/tree/FMCAD24
https://github.com/cmu-soda/carini
Cross-linguistic Support for Reactivity
Mentor: Jonathan Aldrich
Description and Significance
Reactive programming has become a popular approach for developing industrial web and mobile applications. Current reactive programming frameworks, however, only support reactive client-side data updates, with no support for synchronizing data updates between clients and server databases. To this end, we are working on Meerkat, a system that provides linguistic support for reactivity across a distributed system.
We will extend our current prototype of Meerkat to provide cross-language reactivity through integrations with JavaScript and web programming. We will develop approaches that support the integration of JavaScript web components into Meerkat applications, with clear interfaces that preserve the reactive semantics across client/server boundaries and services. The Meerkat prototype currently supports nominal integrations with JavaScript; this project will expand that support to, for example, allow programmers to seamlessly import TypeScript React components and integrate them with a line or two of type-checked Meerkat code.
Student Involvement
Students will work on designing and implementing a model for integrating Meerkat with reactive components written in TypeScript, Rust, or other languages. The model will provide a bridge to ecosystems and functionality that are not part of the Meerkat execution model. For example, our model does not support threads or timers, which could be used to generate periodic events that update application state without the user taking any action. A component written in TypeScript, or perhaps provided as a server-side plugin in a language like Rust, could generate such events and trigger Meerkat actions in the same way that a user-facing interface or the system’s REPL would.
References
Costa Seco, J. and Aldrich, J., 2024, October. The Meerkat Vision: Language Support for Live, Scalable, Reactive Web Apps. In Proceedings of the 2024 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (pp. 54-67)
Zhong, H. and Liu, A., 2024, October. Meerkat: Distributed Reactive Live Semantics with Causal Consistency. In Companion Proceedings of the 2024 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity (pp. 52-53)
Meerkat Project: https://github.com/heng-zhong-2003/meerkat
A Debugger for Formally Verifying Program Correctness
Mentor: Jonathan Aldrich
Description and Significance
Despite recent advances in formally proving the correctness of real-world software programs, formal verification has not seen wide adoption in the software industry. One major barrier is that reasoning about correctness with current tools and techniques is difficult, often requiring far more effort than developing the software itself. We have been developing a debugger that visualizes all possible execution paths through the program, displays the current state of the verifier at each point, explains whether the program satisfies its specification, and, if not, helps the programmer localize the verification errors. This debugger will make it easier for a programmer to write and debug formal specifications for a program.
Student Involvement
This is a novel and exciting project that lies at the intersection of formal methods, programming languages, and HCI. Students will have the opportunity to learn about the state of the art in formal verification and the potential research directions in which formal verification can be improved. Several possible paths for student work include:
1. User interface design and programming for the debugger, which is currently built as a plugin for the JetBrains IntelliJ platform
2. Improving the algorithm and implementation of the symbolic execution verifier
3. Performing an empirical study of the usability of the debugger
Students should have strong programming skills in C and Java; knowledge of Scala or another functional programming language is strongly recommended. No prior experience with formal verification is required, only an interest in engineering more reliable and secure programs.
References
1. Jenna DiVincenzo et al. 2024. Gradual C0: Symbolic Execution for Gradual Verification. https://jennalwise.github.io/assets/pdf/divincenzo2024gradualc0.pdf
2. Peter Müller, Malte Schwerhoff, and Alexander J. Summers. 2016. Viper: A Verification Infrastructure for Permission-Based Reasoning. https://pm.inf.ethz.ch/publications/MuellerSchwerhoffSummers16.pdf
Designing context-aware AI-driven assistants to help patients
Mentor: Mayank Goel
Description and Significance
We aim to guide patients through their recovery process using lightweight sensing and behavior modeling. The end-to-end solution will unify multimodal activity and behavior sensing on a smartwatch with an AI-driven intervention agent. The agent will recognize the patient’s actions and behaviors and proactively intervene only as needed. It will also augment doctors' understanding of patients' situations with a meaningful and appropriate level of explanation when a problem occurs.
Developer tools for prompt programming
Mentor: Brad Myers
Description and Significance
The introduction of generative pre-trained foundation models (e.g., GPT-4) has enabled software developers to engage in "prompt programming," giving rise to a new class of powerful software. Prompt programming occurs when developers write "a prompt that accepts variable inputs and could be interpreted by a foundation model (FM) to perform specified actions and/or generate output. This prompt is executed within a software application or code by a FM." [1] Under this definition, prompts can be seen as "programs" for FMs. One example is Google's AI Overview feature, which uses generative AI to summarize the contents of related search results.
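A minimal sketch of what a "prompt program" in this sense might look like, with an invented template and a stubbed-out model response standing in for a real FM call:

```python
def summarize_reviews_prompt(product, reviews):
    """A 'prompt program': a parameterized template whose output is
    interpreted by a foundation model rather than a conventional runtime."""
    bullet_list = "\n".join(f"- {r}" for r in reviews)
    return (
        f"Summarize the following reviews of {product} in one sentence, "
        f"then output SENTIMENT: positive|negative|mixed on its own line.\n"
        f"{bullet_list}"
    )

def parse_sentiment(model_output):
    """Post-process the FM's free-text output back into program data."""
    for line in model_output.splitlines():
        if line.startswith("SENTIMENT:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

prompt = summarize_reviews_prompt("a kettle", ["Boils fast.", "Lid sticks."])
fake_output = "Fast but flawed.\nSENTIMENT: mixed"  # stand-in for an FM call
print(parse_sentiment(fake_output))  # → mixed
```

Even this tiny example has the moving parts developers iterate on: the template wording, the output format the prompt requests, and the parsing code that depends on it, which is why version tracking across iterations matters.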
We have found that developers engage in rapid iteration of their prompt programs. Due to the lack of supportive tooling, developers often struggle to track and recall the different iterations of prompts and model outputs, which hinders their productivity. In this project, we are building developer tools to support prompt programming, specifically to support developers' understanding of their different prompt iterations.
Student Involvement
Students will work closely with a PhD student to build a tool to help developers track, retrieve, and make sense of different prompt versions. Depending on the student's interests and the needs of the project, the student may engage in helping build tool prototypes; design and run surveys, user studies, or interviews; or perform qualitative or quantitative analysis.
References
[1] Liang, Jenny T., et al. "Prompts are programs too! understanding how developers build software containing prompts." arXiv preprint arXiv:2409.12447 (2024).
Finding Aliasing Bugs at Scale with BorrowSanitizer
Mentors: Jonathan Aldrich and Joshua Sunshine
Description and Significance
The Rust programming language is increasingly popular because it can provide safety guarantees without run-time overhead. Critical open source projects such as Chromium, Android, and the Linux Kernel have started to integrate Rust in interoperation with C and C++. However, these languages do not have similar safety features, and they can be used in ways that break Rust's assumptions about aliasing and mutability. Rust relies on these assumptions to optimize programs correctly, so breaking them can introduce the kinds of security vulnerabilities that Rust was designed to prevent. Developers have no reliable method for finding these Rust-specific bugs in multi-language applications. We will fill this gap in tooling with BorrowSanitizer: a production-ready dynamic instrumentation tool for finding aliasing bugs in applications where Rust is used alongside other languages.
Student Involvement
Students will join our team and will contribute to the design, implementation, and evaluation of our tool. BorrowSanitizer is already in development, so we anticipate having a working prototype to build from by the start of the summer. Students will work closely with a graduate student mentor to find a project involving the tool that best fits their interests. Potential topics may include implementing new forms of dynamic instrumentation or designing static analyses for removing redundant run-time checks.
Formal Verification of High-level Multiparty Cryptographic Protocols
Mentors: Eunsuk Kang and Fraser Brown
Description and Significance
Secure multi-party computation (MPC) is a set of cryptographic protocols that allows the parties in a protocol to jointly compute a function over their data without revealing their inputs to one another ¹. MPC has a variety of applications: electronic voting, medical research, and secure machine learning, to name a few ². To ensure the security of MPC systems, we can use formal models to verify MPC protocols and show that adversaries cannot obtain private information.
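One classic building block of MPC is additive secret sharing. The toy Python sketch below (with invented salary inputs) shows how parties can compute a sum without any single party revealing its input; real MPC protocols layer communication and malicious-security machinery far beyond this, which is precisely what makes them worth verifying formally:

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(secret, n_parties, rng):
    """Split a secret into n additive shares: any subset of fewer than n
    shares looks uniformly random, but all shares sum to the secret mod P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def add_shares(all_shares):
    """Each party locally adds the one share it holds of every secret;
    summing the per-party results reconstructs the sum of the secrets
    without ever reconstructing any individual input."""
    n = len(all_shares[0])
    per_party = [sum(s[i] for s in all_shares) % P for i in range(n)]
    return sum(per_party) % P

rng = random.Random(42)
salaries = [60_000, 75_000, 52_000]            # each party's private input
shared = [share(s, 3, rng) for s in salaries]  # distribute shares
print(add_shares(shared))  # → 187000: the sum, with no input revealed
```

A formal model of such a protocol would state and prove the property this code only demonstrates by example: that an adversary seeing fewer than n shares learns nothing about any party's input.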
Student Involvement
There are several ways that formal verification can be applied to MPC systems. Students will have the opportunity to learn about how verification can be used with MPC, and specific areas that need to be improved. Several possible paths for student work include:
1. Creating a verified library for homomorphic encryption
2. Reasoning about automated decomposition for security protocols
3. Formally modeling select MPC protocols
Students will learn about proof techniques and how they can be applied to verify MPC protocols for abstract parties. They will gain experience in formally modeling protocols and writing security properties about transfer of information during protocol execution.
Students should have a basic understanding of both cryptography and formal verification (knowledge of what they are and where they might be applied). MPC systems will be studied at an abstract level, so in-depth knowledge of abstract number theory is not necessary.
Students should have an interest in security, as well as formal methods.
References
¹ David Evans, Vladimir Kolesnikov, and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. NOW Publishers, Boston, Massachusetts, United States, 2022.
² Chuan Zhao, Shengnan Zhao, Minghao Zhao, Zhenxiang Chen, Chong-Zhi Gao, Hongwei Li, and Yu an Tan. Secure multi-party computation: Theory, practice and applications. Information Sciences, 476:357–372, 2019.
Fostering Empathy Through Gaming while Ridesharing
Mentor: Haiyi Zhu
Description and Significance
Ridesharing platforms like Uber and Lyft create and exploit information asymmetries to control drivers, pay them low or unfair wages for often hidden and invisible labor, and generally subject them to poor working conditions. While the research community has extensively documented persistent issues plaguing workers (algorithmic discrimination [4], health and safety hazards [1], social isolation [2], and a lack of standard employee benefits [3]), many consumers (e.g., riders) remain uninformed about the adverse conditions endemic to platform-based work, and this lack of public awareness hinders necessary policy and infrastructural advancements. Scholars and advocacy organizations have explored auditing driver pay through data donations from drivers, but the potential of leveraging contributions from riders is underexplored. In this project, we plan to leverage techniques from gameplay and system design to create user experiences that foster consumer empathy toward worker conditions in a way that is joyful, collaborative, and consensual for both groups.
Student Involvement
Students will engage in the design and development process for a game that builds empathy between riders and drivers while simultaneously allowing for data contributions from engaged passengers. Students will gain practical experience synthesizing related literature around topics of platform-based gig work, crowdsourcing, systems/game design, in addition to familiarity and practice with HCI techniques of UX design and research, collaborative software engineering practices, as well as user studies and field evaluations.
References
1. Jane Hsieh, Miranda Karger, Lucas Zagal, Haiyi Zhu. Co-Designing Alternatives for the Future of Gig Worker Well-Being: Navigating Multi-Stakeholder Incentives and Preferences. In Proc. Designing Interactive Systems Conference (DIS), 2023
2. Zheng Yao, Silas Weden, Lea Emerlyn, Haiyi Zhu, Robert E. Kraut. Together But Alone: Atomization and Peer Support among Gig Workers. In Proc. of the ACM on Human-Computer Interaction (CSCW), 2021
3. Jane Hsieh, Oluwatobi Adisa, Sachi Bafna, Haiyi Zhu, “Designing Individualized Policy and Technology Interventions to Improve Gig Work Conditions” In Proc. Symposium on Human-Computer Interaction for Work (CHIWORK), 2023
4. Jane Hsieh, Angie Zhang, Mialy Rasetarinera, Erik Chou, Daniel Ngo, Jason Carpenter, Karen Lightman, Min Kyung Lee, Haiyi Zhu “Supporting Gig Worker Needs and Advancing Policy Through Worker-Centered Data-Sharing” In Submission to CSCW ’25 July cycle
Human Factors in Distributed Confidential Computing
Mentor: Lorrie Cranor
Description and Significance
The goal of Distributed Confidential Computing (DCC) is to enable scalable data-in-use protections for cloud and edge systems, such as home IoT. The protections offered by DCC depend on the users and what privacy protections they require. The goal of this project is to (1) determine what kinds of protections key stakeholders want, and (2) design an interface for describing these protections.
Student Involvement
Students will learn how to conduct research in usable privacy and security by working with a mentor on a user study that will identify the privacy preferences of DCC stakeholders. Based on this data, the team will design and implement prototype interfaces, which will be evaluated in another user study. Depending on interests and project needs, the student may help set up online surveys and collect data on a crowd-worker platform, perform qualitative and/or quantitative data analysis, or design and implement prototypes.
References
Center for Distributed Confidential Computing. https://nsf-cdcc.org/
Hana Habib and Lorrie Faith Cranor. Evaluating the Usability of Privacy Choice Mechanisms. SOUPS ‘22. https://www.usenix.org/system/files/soups2022-habib.pdf
Improving the WebAssembly Ecosystem through Dynamic Analyses with Visualizations
Mentor: Ben Titzer
Description and Significance
This project will enable the student to solve real-world problems by providing useful tooling to developers. They will learn about WebAssembly, compilers, virtual machines, instrumentation, dynamic analysis, scalable visualization, and much more. The student can decide what sounds interesting to them within the scope of the project. Read below for some ideas.
Observability of Languages that Compile to Wasm: WebAssembly (Wasm) is a portable bytecode whose use cases have been growing beyond the browser into areas such as edge computing, IoT, and embedded systems. As a compilation target for many languages, WebAssembly presents an interesting domain for language-agnostic developer tooling. However, few tools or analyses are available to help developers debug the dynamic behavior of their applications.
The Project: The WebAssembly Research Center (WRC) has developed Whamm, a DSL for WebAssembly instrumentation that enables developers to expressively write instrumentation logic and inject it either via bytecode rewriting or by interfacing with an engine, depending on application-domain constraints. In this project, we will leverage this DSL to write more dynamic analyses and visualize their results to improve the Developer Experience (DevX) of Wasm. For example, tools could generate flame graphs, coverage reports, path profiles, or hotness reports. The limit really is your imagination.
Observability of Dynamic Languages on Wasm: For dynamic languages, Wasm bytecode’s static type system presents a challenging target. As a workaround, many dynamic languages recompile their interpreter to run on top of Wasm (using a guest runtime). This method makes running on Wasm possible, but complicates observability. In essence, the problem is that the application logic is hidden by a level of interpretation.
The Project: Our Guest Runtime Profiler (GRP) project enables users to attribute “fuel consumption” (the amount of time or cycles spent) to guest bytecode. In this REU project, we hope to make this tool more usable by leveraging GRP's insights to render an interactive visualization of source-code fuel consumption that aids end users in observability. For example, the tool could display information at different levels of granularity (component → module → function → block → opcode) that users can interact with to “zoom in” and “zoom out” of abstractions.
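As a toy illustration of fuel attribution (not the actual GRP implementation), the sketch below runs an invented stack-machine "guest bytecode" in a host interpreter and charges each guest opcode for the fuel it consumes; the opcode names and costs are made up for this example:

```python
# Toy fuel attribution: interpret a tiny "guest bytecode" program and
# record how much fuel each guest opcode consumes.

COSTS = {"PUSH": 1, "ADD": 2, "MUL": 4, "PRINT": 3}  # invented fuel costs

def run_with_fuel_profile(program):
    stack, fuel_by_opcode, output = [], {}, []
    for op, *args in program:
        # Charge this opcode before executing it.
        fuel_by_opcode[op] = fuel_by_opcode.get(op, 0) + COSTS[op]
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            output.append(stack.pop())
    return fuel_by_opcode, output

# Guest program: print (2 + 3) * 4.
profile, out = run_with_fuel_profile([
    ("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",),
])
print(profile)  # → {'PUSH': 3, 'ADD': 2, 'MUL': 4, 'PRINT': 3}
print(out)      # → [20]
```

The visualization work in this project starts where this sketch ends: aggregating such per-opcode attributions up through blocks, functions, and modules, and letting users drill back down interactively.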
Investigating common security vulnerabilities in blockchain software
Mentor: Hanan Hibshi
Description and Significance
Blockchain technology is expanding into ever more applications. As more lines of code are introduced into any software application, the attack surface grows, because new code can introduce new bugs and vulnerabilities. After all, developers approach blockchain technology as programmable software, using tools specific to building blockchain applications.
We continue to see software vulnerabilities rise year after year; almost every application on the market has one or more reported vulnerabilities with recorded CVE numbers. We expect blockchain software to be no different, as programmers will continue to make mistakes that introduce bugs and vulnerabilities. In this project, our goal is to study how common vulnerabilities (e.g., integer overflow) may appear in code written for the blockchain. This work will generate examples of code containing such vulnerabilities and make them available for future testing and educational purposes, with and for blockchain developers.
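As a simple illustration of the integer-overflow class: Python integers never overflow, so the sketch below explicitly simulates 256-bit unsigned arithmetic of the kind smart contract languages use (for example, unchecked arithmetic in pre-0.8 Solidity); the "balance" scenario is invented for illustration:

```python
# Simulate fixed-width uint256 arithmetic, where addition silently wraps.

UINT256_MAX = 2**256 - 1

def u256_add(a, b):
    """Unchecked 256-bit addition: wraps around instead of raising."""
    return (a + b) & UINT256_MAX

# A token balance near the maximum plus a small deposit silently wraps,
# turning a huge balance into a tiny one.
balance = UINT256_MAX - 5
deposit = 10
print(u256_add(balance, deposit))  # → 4, not 2**256 + 4

def u256_add_checked(a, b):
    """Safe variant: reject results that would wrap, as checked math does."""
    total = a + b
    if total > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return total
```

Examples like this, translated into real smart contract code, are the kind of teaching artifact the project aims to produce: one version exhibiting the vulnerability and one showing the recommended fix.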
Student Involvement
The student will learn about smart contract code and compile a set of examples of blockchain code containing common software vulnerabilities, along with recommendations on how to avoid or fix them. They will also create tutorials that guide smart contract beginners through these vulnerabilities using the code examples they developed.
Investigating Multi-factor Authentication Phishing
Mentor: Lorrie Cranor
Description and Significance
Two-factor authentication (2FA) or, more generally, multi-factor authentication (MFA) is a common approach used by large organizations to protect accounts from compromise. While introducing MFA makes traditional phishing attacks more difficult, attackers have adopted new ruses to trick users into authenticating. For example, an attacker may contact the victim in order to induce them to approve a login prompt or provide an authentication code ¹. Despite the rise in social engineering attacks on MFA, there is limited research ² exploring why users are susceptible to MFA attacks or what additional safety measures would strengthen resilience to MFA social engineering attacks. To rectify this gap, we plan to conduct a multi-stage study investigating users’ interactions with MFA phishing in the university setting.
Student Involvement
Students will work with a graduate student to help conduct a user study related to MFA phishing. Through this process, they will learn about how HCI research methods (e.g., interviews, surveys, etc.) are applied to computer security and privacy issues. Based on student interest and the results of ongoing research, students may be involved in all stages of the research process, including design, execution, and analysis. Students may also help to build web infrastructure for simulating MFA phishing attacks.
References
¹ Siadati, Hossein, et al. "Mind your SMSes: Mitigating social engineering in second factor authentication." Computers & Security 65 (2017): 14-28.
² Burda, Pavlo, Luca Allodi, and Nicola Zannone. "Cognition in social engineering empirical research: a systematic literature review." ACM Transactions on Computer-Human Interaction 31.2 (2024): 1-55.
Live Programming with Whamm (a Bytecode Instrumentation DSL for WebAssembly)
Mentor: Ben Titzer
Description and Significance
The Problem: WebAssembly (Wasm) is a portable bytecode whose use cases have been growing beyond the browser into edge computing, IoT, and embedded systems. The WebAssembly Research Center (WRC) has developed a DSL for WebAssembly instrumentation called Whamm that lets developers write instrumentation logic expressively and inject it either via bytecode rewriting or via engine support, depending on the constraints of the application domain. Expressing instrumentation remains challenging: it demands expertise from the developer, including an accurate mental model of the instrumentation framework and of how it determines where instrumentation is injected into application execution.
The Project: In this project, we seek to improve the visibility of instrumentation injection by providing developers with a live programming environment as an IDE plugin. For example, the developer would see a text editor containing their Whamm monitor alongside a side panel showing the application to be monitored. As the Whamm monitor is edited, the resulting match locations in the application are updated in the side panel. If the developer clicks into a match location in the side panel and edits the instrumentation code there, the edits persist in the Whamm monitor text editor as well. This basic live programming model would greatly improve the debuggability of Whamm instrumentation by making its dynamic behavior visible.
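The core of the live-panel idea is that match locations are just a function of the current monitor and the target program, so they can be recomputed on every keystroke. A toy model (this is not real Whamm syntax; the probe is reduced to a single opcode name) might look like:

```python
# Toy model of live match-site computation: the monitor probes an opcode,
# and re-running the matcher after each edit yields the updated injection
# sites to display in the side panel.

def match_sites(program, probed_opcode):
    """Return the bytecode offsets where instrumentation would be injected."""
    return [pc for pc, op in program if op == probed_opcode]

# A tiny stand-in for a Wasm function body: (offset, opcode) pairs.
program = [(0, "local.get"), (2, "call"), (4, "i32.add"), (6, "call")]

assert match_sites(program, "call") == [2, 6]
# After the developer edits the monitor to probe i32.add instead:
assert match_sites(program, "i32.add") == [4]
```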
LLMs and Templates Unite for Automated Security Vulnerability Repair
Mentors: Ruben Martins and Claire Le Goues
Description and Significance
Software security vulnerabilities can lead to data breaches and extortion, posing significant risks to individuals and businesses. The Common Weakness Enumeration (CWE) [1] identifies over 900 software weaknesses, highlighting the complexity of these threats. Quickly addressing vulnerabilities is crucial, but manual patching is often slow and error-prone. Automated program repair (APR) techniques [2] have emerged to generate patches that fix vulnerabilities while maintaining desired program behaviors. Despite the promise of faster and more reliable fixes, several challenges hinder the widespread adoption of APR, especially for security. In this project, we plan on combining LLMs [3] with template-based repair techniques [4], which offer a structured and principled approach to automated program repair by leveraging predefined templates for specific vulnerability classes.
Student Involvement
We have a preliminary tool called RepairChain [5], written in Python, that takes as input a vulnerable Git commit and a set of test cases and returns a ranked list of patches to fix the security vulnerability. This framework uses basic prompting to generate patches directly from LLMs and a simple version of templates with limited LLM interaction. The student will extend this prototype. In particular, we would like to use LLMs (such as GPT-4o) to guide a template-based approach in template selection, fix localization, and template instantiation. LLMs can effectively guide template instantiation by analyzing code context and identifying relevant patterns. This synergy between LLMs and template-based generation should yield a principled technique that produces more accurate and contextually appropriate patches for security vulnerabilities. The work will involve writing Python code, prompt engineering, and creating new templates for patching CWEs, with a focus on the memory vulnerabilities common in C programs.
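To make the template idea concrete, here is a minimal sketch (not RepairChain's actual API) of instantiating a NULL-dereference fix template at a candidate line. In the envisioned pipeline an LLM would choose the template, the fix location, and the pointer name; all three are hard-coded here for illustration.

```python
# Hypothetical fix template for a NULL dereference in C: guard the
# pointer before the faulty line. {ptr} is the template's hole.
NULL_CHECK_TEMPLATE = "    if ({ptr} == NULL) {{ return -1; }}\n"

def apply_template(source_lines, fix_line, ptr):
    """Insert an instantiated guard immediately before the faulty line."""
    guard = NULL_CHECK_TEMPLATE.format(ptr=ptr)
    return source_lines[:fix_line] + [guard] + source_lines[fix_line:]

buggy = [
    "int use(struct node *n) {\n",
    "    return n->value;\n",
    "}\n",
]
# In the project, "1" and "n" would come from LLM-guided fix localization.
patched = apply_template(buggy, 1, "n")
```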
References
[1] Omer Aslan, Semih Serkant Aktug, Merve Ozkan-Okay, Abdullah Asim Yilmaz, and Erdal Akin. A comprehensive review of cyber security vulnerabilities, threats, attacks, and solutions. Electronics, 12(6):1333, 2023
[2] Claire Le Goues, Michael Pradel, and Abhik Roychoudhury. Automated program repair. Communications of the ACM, 62(12):56–65, 2019.
[3] Saad Ullah, Mingji Han, Saurabh Pujar, Hammond Pearce, Ayse Coskun, Gianluca Stringhini. LLMs Cannot Reliably Identify and Reason About Security Vulnerabilities (Yet?): A Comprehensive Evaluation, Framework, and Benchmarks. In 2024 IEEE Symposium on Security and Privacy (SP), pages 199–199, 2024. IEEE Computer Society
[4] Kui Liu, Anil Koyuncu, Dongsun Kim, and Tegawende F. Bissyande. TBar: Revisiting template-based automated program repair. In Proceedings of the 28th ACM SIGSOFT international symposium on software testing and analysis, pages 31–42, 2019
[5] Chris Timperley, Claire Le Goues, and Ruben Martins. RepairChain: An Automated Tool for Security Vulnerability Patching. https://github.com/ChrisTimperley/RepairChain, 2024
Machine Learning in Production
Mentor: Christian Kästner
Description and Significance
Advances in machine learning (ML) have stimulated widespread interest in integrating AI capabilities into software products and services. As a result, today's software development teams often include both data scientists and software engineers, who play different roles. An ML pipeline generally has two phases: an exploratory phase and a production phase. Data scientists typically work in the exploratory phase, training an offline ML model (often in computational notebooks) and then handing it to software engineers, who work in the production phase to integrate the model into the production codebase. However, data scientists tend to focus on improving ML algorithms for better prediction results, often without thinking enough about the production environment; software engineers therefore sometimes need to redo parts of the exploratory work to integrate it successfully into production code. In this project, we want to explore responsible engineering practices, especially the integration of requirements engineering and safety engineering methods to drive design, testing, and collaboration.
Student Involvement
We want to improve responsible engineering practices through a focus on requirements and safety engineering. To this end, we may build tools to capture, negotiate, and critique requirements for both the model and the system; develop approaches to automatically test models and systems; explore how hazard analysis can be used to anticipate the unanticipated consequences of wrong model predictions; and empirically study how developers adopt or misuse responsible engineering practices. This research may involve interviews, analysis of software artifacts, and tool building. The project can be tailored to the student's interests, but interest or a background in empirical methods would be useful. Familiarity with machine learning is a plus but not required. Note that this is not a data science/AI project, but a project on understanding software engineering practices relevant to data scientists.
Maintaining Software Trustworthiness with AI-Driven User Feedback Elicitation
Mentor: Travis Breaux
Description and Significance
For decades, researchers have worked to bridge the gap between the complexities of the real world and the functionality of software systems. Yet challenges often arise when these systems are deployed in real-world settings that involve constant engagement with humans. A system may fail to comply with social and ethical norms, violate user privacy, or fall outside the legal and regulatory framework within which it must operate. Addressing these issues, often framed as normative requirements in the software engineering community, is critical for ensuring software trustworthiness. A key question driving research in this area is how to ensure the trustworthiness of software systems as normative requirements evolve. One common approach is to gather user feedback through interviews. However, traditional methods have limitations: 1) user feedback can be vague or incomplete; 2) users may unintentionally omit critical information due to assumptions or tacit knowledge; and 3) developers may lack the expertise to ask targeted, insightful follow-up questions. To address these challenges, this project aims to develop an AI-powered interview assistant tool for software engineers. By leveraging the language fluency and inference capabilities of Large Language Models (LLMs), the tool will recommend relevant follow-up questions during interviews, making user feedback more comprehensive and actionable and improving the software's ability to adapt to evolving requirements.
Student Involvement
As part of this project, you will gain hands-on experience at the intersection of cutting-edge AI and real-world software development challenges. Specifically, you will:
1) Explore how LLMs can be applied to solve practical, user-centric problems.
2) Learn and implement advanced techniques such as Chain-of-Thought reasoning and Self-Reflection to enhance the quality of follow-up question generation.
3) Evaluate these techniques through experiments to measure their effectiveness.
4) Build and deploy a prototype of the LLM-powered interview assistant.
5) Conduct user testing to refine the tool and assess its impact in real-world settings.
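As a flavor of the techniques listed above, the sketch below assembles a Chain-of-Thought style prompt for follow-up question generation. The function name and prompt wording are purely illustrative assumptions, not the project's design; the point is that the model is asked to reason about what is vague or tacit before proposing questions.

```python
# Hypothetical prompt builder: ask the LLM to reason step by step about
# gaps in a piece of user feedback, then emit targeted follow-ups.

def followup_prompt(feedback, num_questions=3):
    return (
        "A user gave this feedback about a software system:\n"
        f'"{feedback}"\n\n'
        "First, reason step by step about what is vague, missing, or "
        "assumed (tacit knowledge) in this feedback.\n"
        f"Then propose {num_questions} targeted follow-up questions an "
        "interviewer should ask, one per line."
    )

prompt = followup_prompt("The app shares too much about me.")
```

A Self-Reflection variant would feed the model's proposed questions back with an instruction to critique and revise them before showing them to the interviewer.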
Microarchitectural Attacks and Defenses
Mentor: Riccardo Paccagnella
Description and Significance
Over the past two decades, numerous attacks have emerged in which an adversary exploits properties of the hardware to breach the security of software, even in the absence of software vulnerabilities. These attacks stem from the increasing gap between the abstraction hardware is intended to expose and the one its implementation concretely exposes. This project aims to develop a deeper understanding of this abstraction gap, with a particular focus on microarchitectural attacks (e.g., Spectre, Meltdown, Hertzbleed). Project goals include developing a deeper understanding of the microarchitectural optimizations enabling these attacks, assessing the vulnerability of software (e.g., cryptographic software) against these attacks, and developing practical mitigations against these attacks.
Student Involvement
Students will work with a team to answer research questions related to the above project description. In the process, students will learn how computer systems (really) work behind abstraction layers, how to attack real software by exploiting real hardware, and how to make software and hardware more secure and efficient. No prior experience in this area is required -- just curiosity and a desire to learn about these topics.
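A software-visible analogue of the attacks described above is a timing side channel in a comparison routine. The sketch below is not a real microarchitectural attack; it uses loop iterations as a stand-in for wall-clock time to show why an early-exit comparison leaks how many leading characters of a secret a guess got right, while a constant-time comparison does not.

```python
def leaky_equal(secret, guess):
    """Early-exit comparison: work done depends on the secret."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps          # exits early: leaks matched-prefix length
    return secret == guess, steps

def constant_time_equal(secret, guess):
    """Accumulate differences; always scans the full input."""
    diff, steps = len(secret) ^ len(guess), 0
    for s, g in zip(secret, guess):
        steps += 1
        diff |= ord(s) ^ ord(g)          # no data-dependent branch
    return diff == 0, steps
```

With `leaky_equal`, a guess wrong in position 1 takes 2 steps while one wrong in position 5 takes 6, so an attacker can recover a secret one character at a time; `constant_time_equal` takes the same number of steps either way.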
The Origins and Survival of Long-Range Ties on Twitter
Mentor: Patrick Park
Description and Significance
Long-range social ties play a crucial role in information diffusion and social cohesion across diverse communities [1]. This project aims to investigate the origins and survival of these ties on Twitter.
The study will focus on two key aspects:
1. Case studies of strong long-range ties: Through interviews or surveys with Twitter users who maintain strong long-range ties, we will gather qualitative data on tie formation and persistence.
2. Longitudinal analysis of tie survival: Using multiple Twitter datasets collected over time, we will examine factors contributing to the survival of long-range ties, building upon findings that suggest strong ties can endure despite cognitive and stance differences [2].
This research has significant implications for understanding social network dynamics in the digital age and may provide insights into bridging political and social divides.
Students may be involved in several aspects of the project, gaining valuable experience in mixed-methods research, data analysis, and scientific writing. Key responsibilities include:
1. Literature review on social network theory, tie formation, and tie survival
2. Data collection:
- Identifying Twitter users with strong long-range ties
- Conducting interviews or surveys with selected users
- Compiling and cleaning longitudinal Twitter datasets
3. Data analysis:
- Qualitative analysis of interview/survey responses
- Quantitative analysis of tie survival using statistical methods
4. Interpretation and presentation of results
The student will have the opportunity to contribute to our understanding of social network dynamics in the digital age, applying concepts such as tie strength, cognitive distance, and stance distance to real-world social media interactions.
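One quantitative measure students might compute in the longitudinal analysis is a tie's range: the length of the shortest alternative path between two connected users once their direct edge is removed, so a large range marks a long-range tie. A minimal breadth-first-search sketch (pure standard library; the edge-list representation is an assumption for illustration):

```python
from collections import deque

def tie_range(edges, u, v):
    """Shortest path length from u to v with the direct edge (u, v) removed."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    adj[u].discard(v)
    adj[v].discard(u)                      # drop the direct tie
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == v:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")                    # a bridge: no alternative path exists
```

In a triangle the a-b tie has range 2 (the detour through the common friend), while a tie whose endpoints share no other connections has infinite range.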
References
1. Patrick S. Park et al., The strength of long-range ties in population-scale social networks. Science 362, 1410-1413 (2018). DOI: 10.1126/science.aau9735
2. Park, P. S., Xu, H. G., & Carley, K. M. (2024). The Life of a Tie: Social Origins of Network Diversity. In R. Thomson et al. (Eds.), SBP-BRiMS 2024, LNCS 14972, pp. 226–235.
picoCTF Cybersecurity & Education Research through Online Gaming - Multiple Projects
Mentors: Hanan Hibshi and Maverick Woo
Description and Significance
picoCTF is a free online Capture The Flag-style game implemented by and maintained at Carnegie Mellon University to promote and facilitate cybersecurity education. The target audience of our service has been middle and high school students since 2013, but the platform has also become increasingly popular with many other age groups, including college students and adult learners. Each picoCTF release features ~100 cybersecurity challenge problems of increasing difficulty, which are revealed over a storyline to guide players to hone their skills one step at a time. These security challenges cover concepts such as cryptography, binary exploitation, web exploitation, reverse engineering, forensics, and other topics related to security and privacy. As a year-round education platform, the picoCTF team conducts constant research in areas including cybersecurity education methodologies, CTF problem development, and education platform development. This summer, we have multiple research projects for motivated students to get involved in various missions in picoCTF:
(1) Collect empirical data through surveys, user studies, and classroom observations
To improve the design of the picoCTF platform to reach a larger number of students, especially in under-resourced communities, we need to collect and analyze empirical data to inform our design enhancement and platform scalability process. This project includes a research plan to obtain the needed data through conducting user studies, focus groups, and usability and scalability tests that examine picoCTF in a classroom setting. We are interested in understanding how we can enhance our platform to better serve K-12 teachers and students, college students, and independent adult learners in under-resourced communities. The empirical data collected by closely observing participants in focus groups and from surveys and questionnaires will provide informed feedback to the picoCTF research team about possible technical challenges and key design improvements that would enhance the students’ experience.
(2) Analyze player data from previous years and develop visualization tools
In addition to the surveys and questionnaires, the current picoCTF platform is rich with player data from previous years, a valuable resource for in-depth analysis of how to improve the game to reach a wider audience and of what new challenges to include and where. The analysis would help reveal patterns that can be mapped to educational goals and help investigate where players fail to solve challenges or where the game becomes less engaging. These findings could ultimately improve user experience and retention. Other areas of analysis include performance by gender, team diversity, age, educational background, etc. We envision students will learn to use modern data analysis toolkits to analyze our data and build an interactive web-based exploration tool for presenting the findings from these analyses.
(3) Write new CTF challenges for the game and test current CTF challenges
Writing and testing CTF challenges are ongoing tasks in picoCTF. Testing current challenges helps identify errors and bugs before a future competition goes live, and writing new challenges helps grow our challenge pool. For the latter, we are especially interested in new challenges in areas where our platform has minimal or no existing coverage. These include privacy, mobile security (Android / iOS), IoT security (e.g., embedded Linux-based devices), and ICS security (e.g., RTOS, ROS). Students will learn about the security and privacy problems that arise in these areas and develop new CTF challenges of gradually increasing complexity to cater to players at different levels of technical capability.
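For a sense of what an entry-level challenge looks like, here is a hypothetical beginner crypto task in the picoCTF style (the flag text and key are invented): the flag is XORed with a single repeating byte, and the intended solve is a 256-key brute force that looks for the known flag prefix.

```python
def xor_bytes(data, key):
    """XOR every byte of data with a single-byte key."""
    return bytes(b ^ key for b in data)

def make_challenge(flag, key=0x42):
    """Challenge author side: hand the player this ciphertext."""
    return xor_bytes(flag.encode(), key)

def solve(ciphertext):
    """Player side: brute-force all 256 keys, keep the one with a flag prefix."""
    for key in range(256):
        guess = xor_bytes(ciphertext, key)
        if guess.startswith(b"picoCTF{"):
            return guess.decode()
    return None

ct = make_challenge("picoCTF{x0r_is_n0t_encrypt10n}")
```

Because single-byte XOR is a bijection per byte, exactly one key reproduces the `picoCTF{` prefix, which is what makes the challenge reliably solvable for beginners.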
Student Involvement
We are looking for multiple students who are interested in cybersecurity and who enjoy working on projects with a global impact on youth and workforce development. Dr. Hanan Hibshi and Dr. Maverick Woo will be the faculty members overseeing students' research activities, and the picoCTF team consists of software engineers and graduate students who work together with student research assistants. The picoCTF project is interdisciplinary in nature and can attract students with different backgrounds. Students with a Human-Computer Interaction background can enjoy conducting user studies, collecting empirical data, or examining the picoCTF interface and proposing design changes to improve the user experience. Students with an interest in CS education and/or cognitive psychology could help analyze data from existing players to investigate ways to improve learning outcomes. Students who enjoy software development can help the technical team improve the current gaming platform and migrate to the new 2021 picoCTF platform with its advanced features. Finally, students with a cybersecurity background can join our team to test the current challenges (by playing the game!) and help create new challenges or add a new category of challenges.
References
Owens, K., Fulton, A., Jones, L. and Carlisle, M., 2019. pico-Boo!: How to avoid scaring students away in a CTF competition.
Privacy Infrastructure for the Internet of Things
Mentor: Norman Sadeh
Description and Significance
With the increasingly widespread deployment of sensors that record and interpret data about our every move and activity, it has never been more important to develop technology that enables people to retain some awareness and control over the collection and use of their data. While doing so as users browse the Web or interact with their smartphones is already proving daunting, it is even more challenging when data collection takes place through cameras, microphones, and other sensors that users are unlikely even to notice. CMU's Privacy Infrastructure for the Internet of Things is designed to remedy this situation. It consists of a portal that enables sensor owners to declare the presence of their devices, describe the data they collect, and, if they wish, provide people with access to controls that may restrict how much data is collected about them and for what purpose. The infrastructure comes with an IoT Assistant mobile app, which enables people to discover sensors around them and access information about those sensors, including any available settings for restricting the collection and use of their data. Deployed in Spring 2020, the infrastructure already hosts descriptions of well over 100,000 sensors in 27 different countries, and the IoT Assistant app has been downloaded by tens of thousands of users. The objective of this project is to extend and refine some of the infrastructure's functionality.
Student Involvement
Students working on this project will learn how to develop and deploy robust software that is easy to use and empowers people to better manage their privacy. This project combines software engineering, privacy and security engineering, human-computer interaction, and human subjects studies.
References
Anupam Das, Martin Degeling, Daniel Smullen, and Norman Sadeh, Personalized Privacy Assistants for the Internet of Things: Providing Users with Notice and Choice, IEEE Pervasive Computing, 2019.
Shikun Zhang, Yuanyuan Feng, Lujo Bauer, Lorrie Faith Cranor, Anupam Das, and Norman Sadeh, “Did you know this camera tracks your mood?”: Understanding Privacy Expectations and Preferences in the Age of Video Analytics, Proceedings on Privacy Enhancing Technologies, 2021, 1, Apr 2021 [pdf]
Yuanyuan Feng, Yaxing Yao, Norman Sadeh, "A Design Space for Privacy Choices: Towards Meaningful Privacy Control in the Internet of Things", Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021 [pdf]
https://iotprivacy.io
https://www.privacyassistant.org/iot/
Safety Labels for GenAI Applications
Mentor: Lorrie Cranor
Description and Significance
The rapid proliferation of Generative AI (GenAI) in consumer applications has sparked calls for transparent methods to assess and communicate the safety risks of these technologies to consumers, users and the general public. This includes concerns about bias, toxicity, misinformation, security and privacy, and beyond. The scope of these issues extends from foundation models (e.g., GPT-4) to their diverse applications (e.g., ChatGPT). Yet existing evaluation methods heavily focus on benchmarking foundation models, overlooking the complicated interactions between users and GenAI-powered applications in the context of use. Academia, industry, and policymakers are all advocating for safety labels to inform users and consumers about the potential risks and harms of GenAI applications. Despite these initiatives, challenges remain in how to effectively measure safety risks considering the human-AI interaction layer, how to design labels that are both useful and usable for users and those impacted, and how to responsibly deploy safety labels in practice through real-world partnerships. The goal of this project is to develop a dynamic, expert-informed, user-centered approach to evaluate GenAI-powered applications and create public-facing safety labels.
Student Involvement
Students will learn how to conduct research in usable privacy and security by working with a mentor to design and/or conduct a study that will inform the design of GenAI safety labels. Depending on interests and project needs, the student may help set up online surveys and collect data on a crowd worker platform, help design and conduct interviews or focus groups, perform qualitative and/or quantitative data analysis, or design and implement prototypes.
References
The CUPS Lab has done prior label development work in other areas, including privacy nutrition labels (https://cups.cs.cmu.edu/privacyLabel/) and IoT security and privacy labels (https://iotsecurityprivacy.org/).
Security Question Answering (QA) Assistants that Change User Behavior
Mentor: Norman Sadeh
Description
It is estimated that 95% of security incidents can be traced to human error, in particular to people failing to follow best practices. Increasingly, users are turning to chatbots to answer a variety of everyday security questions. The objective of this project is to develop personalized GenAI assistants capable of providing users with answers that are not just accurate but also reflect their level of expertise, are understandable and actionable, and motivate them to heed the assistant's recommendations.
Significance
Security and privacy are becoming increasingly complex for everyday users to manage. The need for assistants capable of effectively helping users in this area has never been greater.
Student Involvement
Students working on this project will learn to systematically evaluate and refine GenAI technologies in the context of security and privacy questions. This will include work designed to increase the accuracy of provided answers as well as work designed to elicit more effective answers. Our work combines LLMs, prompt engineering, privacy and security nudging, user modeling, protection motivation theory, and human subjects studies.
Recent Publications
Ananya Balaji, Lea Duesterwald, Ian Yang, Aman Priyanshu, Costanza Alfieri, Norman Sadeh, "Generating Effective Answers to People’s Everyday Cybersecurity Questions: An Initial Study", International Web Information Systems Engineering Conference (WISE 2024), Dec 2024 [pdf]
Abhilasha Ravichander, Ian Yang, Rex Chen, Shomir Wilson, Thomas Norton, Norman Sadeh, "Incorporating Taxonomic Reasoning and Regulatory Knowledge into Automated Privacy Question Answering", International Web Information Systems Engineering Conference, Dec 2024 [pdf]
Abhilasha Ravichander, Alan W Black, Thomas Norton, Shomir Wilson, Norman Sadeh, "Breaking Down Walls of Text: How Can NLP Benefit Consumer Privacy?", ACL 2021.
Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, Norman Sadeh, "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives", EMNLP 2019. arXiv preprint arXiv:1911.00841.
Simulation and Detection of Adversarial Retrieval Attacks in Media Literacy Training Environment
Mentor: Kathleen Carley
Description and Significance
This project aims to integrate adversarial retrieval attacks into a simulated environment for media literacy training on disinformation, enhancing participants' ability to detect and mitigate such attacks in real-world scenarios. Adversarial retrieval attacks inject malicious documents into a corpus through poisoning or encoding, exploiting vulnerabilities in information retrieval systems. Simulating these attacks within the training environment exposes participants to realistic challenges posed by disinformation tactics. Detection tools will be integrated into the environment, enabling participants to identify and analyze adversarial content effectively. This approach contributes to an understanding of the role adversarial attacks play in spreading disinformation and promotes resilience to disinformation campaigns.
Student Involvement
Students participating in this project will design and implement adversarial attacks, test their impact on retrieval systems, and integrate malicious documents into the simulated environment. They will also contribute to the development of user-friendly detection tools, ensuring they are effective for participants in the training program. Additionally, students will evaluate the effectiveness of these tools by analyzing how well they identify and mitigate adversarial content. Students will gain valuable skills in information retrieval, machine learning, and disinformation analysis, and will contribute to the emerging field of adversarial retrieval attacks.
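A toy demonstration of corpus poisoning, in the spirit of the PoisonedRAG reference below: under a simple bag-of-words cosine retriever, an injected document stuffed with the query's own terms outranks the honest documents. The corpus, query, and poison text are all invented for illustration; real retrievers use dense embeddings, but the ranking-manipulation principle is the same.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity over whitespace-tokenized bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus):
    """Return the single highest-scoring document for the query."""
    return max(corpus, key=lambda doc: cosine(query, doc))

corpus = ["the election results were certified by officials",
          "turnout statistics for the election were published"]
query = "who won the election"
# Attacker-crafted document: repeats the query terms to dominate the ranking.
poison = "who won the election who won the election fabricated claim"
```

Once the poison document is appended to the corpus, it becomes the top result for the query, which is exactly the behavior a detection tool in the training environment would flag (e.g., by spotting abnormal term repetition).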
References
Misinformation and PageRank: https://arxiv.org/abs/2404.08869
PoisonedRAG: https://arxiv.org/abs/2402.07867
Substructural Information Flow - Tools and/or User Studies
Mentor: Jonathan Aldrich
Description and Significance
Information flow control (IFC) is a long-studied approach for establishing non-interference properties of programs. For instance, IFC can be used to prove that a secret does not interfere with some computation, thereby establishing that it does not leak. The typical formulation of information flow relies on the theory of lattices, which is difficult for programmers without a mathematical background to reason about and which produces complicated specifications. We have been investigating a simplified foundation for information flow that piggybacks on polymorphism, a language feature familiar to a broad range of programmers working in statically typed languages. Beyond familiarity, this formulation keeps the complexity of information flow specifications low as the complexity of programs scales up, thanks to its elegant theoretical underpinnings. We believe this has the potential to transition information flow from a largely academic technique to one that everyday programmers can readily deploy to solve real-world security problems. Human studies are essential to making a strong initial argument to this effect.
Student Involvement
We're hoping to design and conduct a series of user studies on our simplified formulation of information flow, investigating its cognitive aspects. Students would take a largely qualitative, partly quantitative approach to analyzing the usability characteristics of information flow, developing methods for comparing and contrasting lattice-based information flow with our new approach. In doing so, we hope to develop methods of evaluating programming language and type system features that produce reusable frameworks of knowledge, so that cognitive aspects can be predicted compositionally for future design iterations and extensions of our type system. Concretely, students could work on (1) designing the user studies themselves or (2) designing tools surrounding the user studies, including pursuing various kinds of implementations.
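For intuition about the lattice-based baseline the studies would compare against, here is a minimal runtime check over a two-point lattice (PUBLIC below SECRET). This is an illustrative toy, not the project's type system: a flow is legal only when the destination's label sits at or above the source's, which is exactly the discipline a static IFC type system enforces at compile time.

```python
PUBLIC, SECRET = 0, 1          # two-point lattice, ordered by <=

class Labeled:
    """A value tagged with a confidentiality label."""
    def __init__(self, value, label):
        self.value, self.label = value, label

def flow(src, dst_label):
    """Relabel src at dst_label, or raise if the flow would leak."""
    if src.label > dst_label:  # e.g. SECRET -> PUBLIC violates non-interference
        raise PermissionError("illegal flow: would leak labeled data")
    return Labeled(src.value, dst_label)

password = Labeled("hunter2", SECRET)
greeting = Labeled("hello", PUBLIC)
```

Raising public data to SECRET succeeds, while sending the password to a PUBLIC sink is rejected; the polymorphism-based formulation aims to express the same guarantee through ordinary type parameters rather than an explicit lattice.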
References
A workshop presentation on our system: https://hgouni.com/files/iwaco24.pdf
Research notes: https://15316-cmu.github.io/2024/lectures/20-functional.pdf
Sustainable Open Source Communities
Mentors: Bogdan Vasilescu and Christian Kästner
Description and Significance
Reuse of open source artifacts in ecosystems has enabled significant advances in development efficiency, as developers can now build on substantial shared infrastructure and develop apps or server applications in days rather than months or years. However, despite its importance, maintenance of this open source infrastructure is often left to a few volunteers with little funding or recognition, threatening the sustainability of individual artifacts, such as OpenSSL, and of entire software ecosystems. Reports of stress and burnout among open source developers are increasing. The teams of Dr. Kästner and Dr. Vasilescu have explored dynamics in software ecosystems to expose differences, understand practices, and plan interventions [1,2,3,4]. Results indicate that different ecosystems have very different practices and that interventions should be planned accordingly [1], but also that signaling based on underlying analyses can be a strong means to guide developer attention and effect change [2]. This research will further explore sustainability challenges in open source, with particular attention to the interaction between paid and volunteer contributors and to stress and the resulting turnover.
Student Involvement
Students will empirically study sustainability problems and interventions using interviews, surveys, and statistical analysis of archival data (e.g., regression modeling, time series analysis for causal inference). What are the main reasons volunteer contributors drop out of open source projects? In what situations do volunteer contributors experience stress? In which projects will other contributors step up and continue maintenance when the main contributors leave? Which past interventions, such as contribution guidelines and codes of conduct, have succeeded in retaining contributors and easing transitions? How can subcommunities that share common practices be identified within software ecosystems, and how do communities and subcommunities learn from each other? Students will investigate these questions by exploring archival data of open source development traces (ghtorrent.org), designing interviews or surveys, applying statistical modeling techniques, building and testing theories, and conducting literature surveys. Students will learn state-of-the-art research methods in empirical software engineering and apply them to specific sustainability challenges of great importance. Students will actively engage with the open source communities and learn to communicate their results to both academic and nonacademic audiences.
References
[1] Christopher Bogart and Christian Kästner and James Herbsleb and Ferdian Thung. How to Break an API: Cost Negotiation and Community Values in Three Software Ecosystems. In Proc. Symposium on the Foundations of Software Engineering (FSE), 2016.
[2] Asher Trockman, Shurui Zhou, Christian Kästner, and Bogdan Vasilescu. Adding sparkle to social coding: an empirical study of repository badges in the npm ecosystem. In Proc. International Conference on Software Engineering (ICSE), 2018.
[3] Bogdan Vasilescu, Kelly Blincoe, Qi Xuan, Casey Casalnuovo, Daniela Damian, Premkumar Devanbu, and Vladimir Filkov. The sky is not the limit: multitasking across github projects. In Proc. International Conference on Software Engineering (ICSE), 2016.
[4] Bogdan Vasilescu, Daryl Posnett, Baishakhi Ray, Mark GJ van den Brand, Alexander Serebrenik, Premkumar Devanbu, and Vladimir Filkov. Gender and tenure diversity in GitHub teams. In Proc. ACM Conference on Human Factors in Computing Systems (CHI), 2015.
Through-Body Wireless Imaging for Health with AI/ML
Mentor: Swarun Kumar
Description and Significance
In this project, we will explore ways in which commodity wireless imaging platforms (e.g., commodity radar) could be used to image the body and diagnose health concerns. In simple terms, we seek to enable "X-ray vision" without the use of harmful X-ray radiation, instead leveraging much lower radio frequencies. Doing so has transformative applications for a variety of bio-systems within the body: consider, for example, being able to visualize blood flow or the food processed through the digestive tract. The project will explore a variety of strategies to fundamentally improve the penetration and spatial resolution of current radar systems to meet such diverse objectives, while remaining safe and effective for body sensing. To do so, we will leverage a combination of hardware and software system design, including AI/ML algorithms for super-resolution imaging.
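To see why spatial resolution is a fundamental challenge here, a back-of-the-envelope calculation helps. The theoretical range resolution of an FMCW radar is c / (2B), where B is the sweep bandwidth; the bandwidths below are generic illustrative values, not parameters from this project.

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Theoretical FMCW radar range resolution (metres) for a sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

# Even a 4 GHz sweep (typical of 77-81 GHz automotive radar chips) only
# resolves scatterers a few centimetres apart -- far too coarse for fine
# anatomy, which motivates super-resolution techniques.
for bw_ghz in (1.0, 4.0, 10.0):
    print(f"{bw_ghz:4.1f} GHz bandwidth -> {100 * range_resolution_m(bw_ghz * 1e9):.2f} cm")
```

Lower carrier frequencies penetrate tissue better but typically come with less available bandwidth, which is exactly the penetration-versus-resolution trade-off the project aims to attack.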
Student Involvement
The student will be exposed to systems programming that straddles both hardware and software design. The student will work closely with PhD students in the WiTech lab, coupled with close interaction with the faculty mentor. The overall objective is a publication at a major international venue in mobile systems/HCI.
TTPython: A DSL and Runtime for Distributed and Time-Sensitive Applications
Mentor: Jonathan Aldrich
Description and Significance
Distributed, time-sensitive applications are ubiquitous in cyber-physical systems. When writing these applications in conventional languages, programmers struggle to manage the complexity of handling time as a control-flow concept: time-based control flow is difficult both to specify and to implement. TTPython, a domain-specific language (DSL) and runtime embedded in Python, addresses these challenges. The programmer writes their application in a single file, adding decorators and system calls to specify distribution and timing requirements. TTPython then handles the distribution, communication, and coordination between devices to realize those requirements.
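To give a flavor of the decorator-based style described above, here is a hypothetical sketch in plain Python (this is not the real TTPython API; the `deadline` decorator and its semantics are invented for illustration). It shows how a decorator can attach a timing requirement to a function and check it at runtime:

```python
import time

def deadline(ms):
    """Mark a function with a soft deadline; flag invocations that overrun it.

    Hypothetical decorator for illustration only -- TTPython's actual
    decorators and runtime enforcement are richer than this.
    """
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed_ms = 1000 * (time.monotonic() - start)
            inner.last_overrun = elapsed_ms > ms
            if inner.last_overrun:
                print(f"{fn.__name__} overran its {ms} ms deadline ({elapsed_ms:.1f} ms)")
            return result
        inner.last_overrun = False
        return inner
    return wrap

@deadline(ms=250)
def read_sensor():
    time.sleep(0.001)  # stand-in for real sensor I/O
    return 42

value = read_sensor()
```

A real system additionally has to coordinate such deadlines across multiple devices and communication links, which is precisely what the TTPython runtime manages for the programmer.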
Student Involvement
Students will work on one of three projects extending TTPython: multitenancy support, program state encoding, or debugging. Multitenancy support investigates the interactions of private applications on shared hardware resources, such as taking an image with a camera. Program state encoding involves introducing finite-state machines into the design of TTPython applications. The debugging project involves developing a tool suite that starts with the instrumented execution of TTPython dataflow programs, recording the sequence of functions executed on each machine, the values passed, and information on the timing and energy use of task execution as well as on messages sent and received. This data allows engineers to identify performance- or power-related problems and apply algorithmic or configuration patches, or even step through distributed execution to identify and fix bugs. Students will work with external collaborators using TTPython for research purposes and engage in software and hardware design decisions.
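The following is a minimal sketch of the kind of execution trace the debugging project would record: for each task, the function name, arguments, return value, and wall-clock timing. All names here are illustrative; the actual tool would record per-machine traces, energy data, and message events as described above.

```python
import time

TRACE = []  # chronological record of task executions

def traced(fn):
    """Wrap a task so each invocation is appended to the global trace."""
    def inner(*args):
        start = time.monotonic()
        result = fn(*args)
        TRACE.append({
            "function": fn.__name__,
            "args": args,
            "result": result,
            "duration_s": time.monotonic() - start,
        })
        return result
    return inner

@traced
def scale(x):
    return 2 * x

@traced
def offset(x):
    return x + 1

# A tiny two-stage "dataflow": scale, then offset.
out = offset(scale(10))
for event in TRACE:
    print(event["function"], event["args"], "->", event["result"])
```

Replaying or stepping through such a trace is what lets an engineer localize a timing bug to a specific task on a specific machine.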
References
The TTPython Project: https://ccsg.ece.cmu.edu/ttpython/overview.html
Understanding the People, Means, and Activities of Effective Software Engineering Education
Mentor: Andrew Begel
Description and Significance
Traditional project-based learning provides infrastructures that unleash people’s intrinsic abilities to create. Learning communities allow people with common interests to obtain resources to collaborate on projects and share ideas, tools, and expertise. However, makers from historically marginalized groups face social and physical barriers to participation. Distanced makerspaces are a potential solution to equity gaps in traditional makerspaces, allowing geographically distributed individuals to engage in making while forming meaningful relationships. To better understand the factors contributing to effective distanced project-based learning, we apply the People, Means, and Activities conceptual framework to investigate the experiences and outcomes of autistic community college students in a remote, project-based AI engineering course.
Student Involvement
Students will work with a graduate student to analyze interviews, surveys, and multimodal log data from Zoom, Discord, and GitHub related to student background, experiences, behaviors, and outcomes. Through this process, they will learn about how HCI and Learning Analytics research methods (e.g., interviews, surveys, log traces, etc.) are applied to software engineering education issues.
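As a small illustration of the log-trace side of this analysis, here is a sketch that summarizes per-student activity from a Discord-style message log. The log format, entries, and participant names are all hypothetical; real analyses would combine such traces with interview and survey data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, participant, event) log entries.
log = [
    ("2024-06-03T10:15:00", "student_a", "posted a question"),
    ("2024-06-03T10:17:30", "student_b", "replied"),
    ("2024-06-04T14:02:00", "student_a", "shared a GitHub link"),
    ("2024-06-05T09:40:00", "student_a", "replied"),
]

# Total messages per participant.
messages_per_student = Counter(user for _, user, _ in log)

# Number of distinct active days per participant.
active_days = {
    user: len({datetime.fromisoformat(ts).date() for ts, u, _ in log if u == user})
    for user in messages_per_student
}
print(messages_per_student, active_days)
```

Simple descriptive measures like these are typically the first step before richer behavioral coding of the multimodal data.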
References
American Society for Engineering Education. (2016). Envisioning the Future of the Maker Movement Summit Report. https://www.asee.org/publications/asee-publications/asee-reports/maker-report
Jaskiewicz, T., Mulder, I., Verburg, S., & Verheij, B. (2018, September 26). Leveraging prototypes to support self-directed social learning in makerspaces.
Hira, A., & Hynes, M. M. (2018). People, Means, and Activities: A Conceptual Framework for Realizing the Educational Potential of Makerspaces. Education Research International, 2018, 6923617. https://doi.org/10.1155/2018/6923617
Lock, J., Redmond, P., Orwin, L., Powell, A., Becker, S., Hollohan, P., & Johnson, C. (2020). Bridging distance: Practical and pedagogical implications of virtual Makerspaces. Journal of Computer Assisted Learning, 36(6), 957–968. https://doi.org/10.1111/jcal.12452
User Awareness of Social Media Algorithms
Mentor: Daniel Klug
Description and Significance
Social media entertainment apps, like Instagram or TikTok, are popular means of communication among younger people because of their high accessibility and ubiquitousness with regard to online social interaction and participation. A key element of social media apps is their seemingly mysterious algorithms, which observe user behavior and tailor content feeds to users based on their content consumption and their browsing and sharing behavior. Many studies seek to analyze social media algorithms, and recent work largely examines TikTok’s algorithm with respect to online identity, privacy, discrimination, biases, or mediated realities. Yet we still have little understanding of what users know about the socio-technical aspects of social media apps when they consume, create, and share content. In the MINT Lab, we use qualitative approaches, such as interviews, content analysis, and user observations, to research users’ opinions, knowledge, and awareness of social media algorithms in the context of communication, socialization, and entertainment. We aim to apply societal and systemic perspectives informed by human-computer interaction and societal computing to questions of online communication, networking, and participation.
Possible research questions include: What are common user understandings of social media algorithms? In what ways do users observe how algorithms might work? How does users’ understanding of algorithms affect their consumption and creation of content? Such questions aim at a better understanding of social, cultural, and political aspects of social media usage, especially in relation to community guidelines, privacy, ethics, race, gender, or marginalized communities. Our goal is to study and understand how humans as users interact with social technology and how the use of social media apps is connected to and integrated into our everyday lives.
Student Involvement
Students will learn how to design qualitative research projects and to apply qualitative methods to research socio-technological aspects of social media use, engagement, and participation. This can include designing and conducting interviews, designing and conducting user observations, doing qualitative content analysis, finding and contacting study participants, best practices for conducting user studies, and how to transcribe, code, analyze, and interpret qualitative data (e.g. interviews, observation protocols). Based on quality criteria for qualitative research, students will learn how to develop and validate research questions from qualitative user data. The ideal student is familiar with social media platforms, has interest in qualitative research, and is open to conducting qualitative research, such as interviews and/or observations.
References
Steen, E., Yurechko, K., & Klug, D. (2023). You Can (Not) Say What You Want: Using Algospeak to Contest and Evade Algorithmic Content Moderation on TikTok. Social Media + Society, 9(3), 20563051231194586.
Klug, D., Steen, E., & Yurechko, K. (2023, April). How Algorithm Awareness Impacts Algospeak Use on TikTok. In Companion Proceedings of the ACM Web Conference 2023 (pp. 234-237).
Klug, D., Qin, Y., Evans, M., & Kaufman, G. (2021, June). Trick and please. A mixed-method study on user assumptions about the TikTok algorithm. In 13th ACM Web Science Conference 2021 (pp. 84-92).
Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic folk theories and identity: How TikTok users co-produce Knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-44.
Le Compte, D., & Klug, D. (2021, October). “It’s Viral!”-A Study of the Behaviors, Practices, and Motivations of TikTok Users and Social Activism. In Companion Publication of the 2021 Conference on Computer Supported Cooperative Work and Social Computing (pp. 108-111).
Simpson, E., & Semaan, B. (2021). For You, or For "You"? Everyday LGBTQ+ Encounters with TikTok. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1-34.
Verifying the Rust Standard Library
Mentor: Bryan Parno
Description and Significance
The Secure Foundations Lab has been developing a new language, Verus ¹, for formally proving the correctness of code written in Rust ². This summer, we plan to expand this effort to verifying the implementation of the Rust standard library itself. The Rust standard library is of critical importance to the Rust community, since nearly every Rust program depends on it for correctness and security. And yet, for the sake of better performance, the standard library relies heavily on unsafe Rust code, which disables Rust's standard safety checks and instead expects the developer to carefully ensure that safety is preserved. Using Verus, we can instead provide the developer with an automated safety net, allowing them to demonstrate in a machine-checkable fashion that their code indeed preserves safety.
Student Involvement
Students will work with a team to verify critical portions of the Rust standard library. In the process, students will learn about Rust, including some of its more advanced features, as well as about formal verification and theorem proving; no prior experience in these areas is required -- just good taste and a desire to use tools that let you write provably correct code.
References
¹ https://verus.rs
² https://www.rust-lang.org/