Human-computer interaction

New method uses crowdsourced feedback to help train robots

To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning — a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal. In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent…


Explained: Generative AI

A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of those headlines may actually have been written by generative AI, like OpenAI’s ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that seems to have been written by a human. But what do people really mean…


AI copilot enhances human precision for safer aviation

Imagine you're in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they're always looking out for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over. Meet the Air-Guardian, a…


Multi-AI collaboration helps reasoning and factual accuracy in large language models

An age-old adage, often introduced to us during our formative years, is designed to nudge us beyond our self-centered, nascent minds: "Two heads are better than one." This proverb encourages collaborative thinking and highlights the potency of shared intellect. Fast forward to 2023, and we find that this wisdom holds true even in the realm of artificial intelligence: Multiple language…


New tool helps people choose the right method for evaluating AI models

When machine-learning models are deployed in real-world situations, perhaps to flag potential disease in X-rays for a radiologist to review, human users need to know when to trust the model’s predictions. But machine-learning models are so large and complex that even the scientists who design them don’t understand exactly how the models make predictions. So, they create techniques known as…


Artificial intelligence for augmentation and productivity

The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven projects that are exploring how artificial intelligence and human-computer interaction can be leveraged to enhance modern work spaces to achieve better management and higher productivity. Funded by Andrew W. Houston ’05 and Dropbox Inc., the projects are intended to be interdisciplinary and bring together researchers from…


A faster way to teach a robot

Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual…


MIT CSAIL researchers discuss frontiers of generative AI

The emergence of generative artificial intelligence has ignited a deep philosophical exploration into the nature of consciousness, creativity, and authorship. As we bear witness to new advances in the field, it’s increasingly apparent that these synthetic agents possess a remarkable capacity to create, iterate, and challenge our traditional notions of intelligence. But what does it really mean for an AI…


3 Questions: Jacob Andreas on large language models

Words, data, and algorithms combine, An article about LLMs, so divine. A glimpse into a linguistic world, Where language machines are unfurled. It was a natural inclination to task a large language model (LLM) like ChatGPT with creating a poem that delves into the topic of large language models, and subsequently utilize said poem as an introductory piece for this…


Study: AI models fail to reproduce human judgements about rule violations

In an effort to improve fairness or reduce backlogs, machine-learning models are sometimes designed to mimic human decision making, such as deciding whether social media posts violate toxic content policies. But researchers from MIT and elsewhere have found that these models often do not replicate human decisions about rule violations. If models are not trained with the right data, they…

