Carnegie Mellon University

AI, robotics, machine learning, and advanced manufacturing have profoundly impacted our society, economy, and daily lives. Yet automation is displacing workers across many industries. Algorithms are driving decision-making in powerful, unseen (and unforeseen) ways. New platforms and networks are reshaping how we view and engage with our world.

The Block Center seeks out results-oriented projects that align with our three focus areas: how emerging technologies will alter the future of work, how AI and analytics can be harnessed responsibly for social good, and how innovation in these spaces can be more inclusive and improve quality of life for all. We then support work that shows the greatest promise for delivering actionable policy impact.

The Block Method

Featured

AI Is Here To Stay. How Do We Regulate It?


Very little about governing artificial intelligence is clear, but one thing is: “Don’t regulate the technology,” said Ramayya Krishnan, dean of Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, “because the technology will evolve.”

Read the article here

CMU Experts Lent Expertise to New U.S. Artificial Intelligence 'Roadmap'


“This bipartisan roadmap recognizes that innovation in robotics is vital to realize AI’s ability to enhance the future of our economy and improve the quality of life in America,” said Theresa Mayer, CMU’s vice president for research. “Majority Leader Schumer, along with Sens. Rounds, Young and Heinrich, solicited input from a wide variety of experts and stakeholders, and we are so appreciative to see our faculty’s expertise reflected in these recommendations.”

Read the full press release

Carnegie Mellon University Cautions Voters to Be Aware of How Generative Artificial Intelligence May Be Used During the Election to Create False Images, Videos and News


Generative Artificial Intelligence (GenAI) allows users to create realistic images, videos, audio, and text quickly and cheaply—capabilities that can be useful in many contexts. But during elections, GenAI can be misused to manipulate and deceive voters at an unprecedented scale. Researchers at Carnegie Mellon University have created a new guide to educate voters about how unethical parties, particularly foreign adversaries, may use the technology to manipulate and misinform them in ways they may not recognize.

Read More

Block Center Experts Recognized for Report on Operationalizing the NIST AI Risk Management Framework


The Pittsburgh Post-Gazette recently featured an article on the Block Center's report on operationalizing the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, which highlights AI use cases and how to minimize the risks associated with AI tools in the public and private sectors.

Read More

New Framework for Using AI in Health Care Considers Medical Knowledge, Practices, Procedures and Values


Health care organizations are looking to artificial intelligence (AI) tools to improve patient care, but the translation of these tools into clinical settings has been inconsistent, in part because evaluating AI in health care remains challenging. In a new article, researchers propose a framework for using AI that includes practical guidance for applying values and that incorporates not just the tool’s properties but the systems surrounding its use. The article was written by researchers at Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto. It is published in Patterns.

Read the press release

Operationalizing the NIST AI Risk Management Framework - Summary Report 


In July 2023, the Responsible AI Initiative (RAI) of Carnegie Mellon University’s Block Center for Technology and Society hosted the National Institute of Standards and Technology (NIST) for a discussion on “Operationalizing the NIST Risk Management Framework.” The convening brought together Carnegie Mellon researchers, academic colleagues, and industry, nonprofit, and government practitioners with NIST experts for a focused conversation about how best to operationalize the Artificial Intelligence Risk Management Framework (RMF) released by NIST in January 2023.

RAI is proud to release a summary report of the event, including real-world use cases and key takeaways.

Read the full summary report

Forlizzi Briefs Senators on AI in the Workforce


Carnegie Mellon University School of Computer Science Professor Jodi Forlizzi on Tuesday shared four recommendations with U.S. senators to ensure that innovations in artificial intelligence are sustainable and responsible and that they work for workers.

Read more

Events

Previous:

Evaluating Generative AI Systems: The Good, the Bad, and the Hype

Monday, 15 April 2024

This event is hosted by GenLaw in Washington DC with sponsorship by Carnegie Mellon’s K&L Gates Initiative.

Previous:

CMU Expert Convening on “Supporting NIST’s Development of Guidelines on Red-teaming for Generative AI”

Sponsored by the Responsible AI Initiative at the Block Center and the K&L Gates Initiative on Ethics and Computational Technologies at Carnegie Mellon.

Watch the videos of each session here.

CONNECT WITH US. GET SUPPORT.
FUEL THE GOOD.

Are you part of a CMU research team, a corporate partner, a legislator, or a student? Reach out.

Contact Us

Sign up for The Block Center Newsletter