WHAT THE RESEARCH IS:
Composable planning is a new way of building AI agents that are better at solving unfamiliar tasks. Traditional training involves agents repeatedly practicing one specific task at a time. To pave the way for more versatile AI, this new approach instead trains agents on a set of simple, related tasks within a given environment, enabling them to then perform longer, more complex tasks.
HOW IT WORKS:
The agent learns a model of its environment based on attributes that the researchers assign, and then it learns to perform a range of basic tasks within that labeled environment. It then uses this model to plan its way through novel tasks it is presented with. If, for example, this approach were used to teach an AI agent to cook, it might learn the general layout of a kitchen and a number of simple subtasks (such as cracking an egg) that could apply to a variety of dishes.
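The idea above can be sketched in code. In this minimal, illustrative Python sketch (the attribute names and subtask definitions are hypothetical, not from the research), each learned subtask is an operator over a set of named attributes, and a novel task is planned by searching for a chain of subtasks whose combined effects reach the goal attributes:

```python
from collections import deque

# Hypothetical subtask library: name -> (preconditions, attributes added, attributes removed)
SUBTASKS = {
    "crack_egg": ({"has_egg"}, {"egg_cracked"}, {"has_egg"}),
    "heat_pan":  (set(), {"pan_hot"}, set()),
    "fry_egg":   ({"egg_cracked", "pan_hot"}, {"egg_fried"}, {"egg_cracked"}),
}

def plan(start, goal):
    """Breadth-first search over attribute states: return a list of
    subtask names that reaches a state satisfying `goal`, or None."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal attributes hold
            return steps
        for name, (pre, add, rem) in SUBTASKS.items():
            if pre <= state:                   # subtask is applicable
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# A "fry an egg" task the agent never practiced end-to-end is solved
# by composing subtasks it did practice:
print(plan({"has_egg"}, {"egg_fried"}))
# → ['crack_egg', 'heat_pan', 'fry_egg']
```

The point of the sketch is the composition: no single subtask solves the task, but the learned model of how subtasks change the environment's attributes lets the agent chain them into a longer plan.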
WHY IT MATTERS:
Training a separate agent for every conceivable task isn’t feasible. The pool of possible tasks is already too large to account for, and as AI becomes more common, systems will inevitably encounter tasks that aren’t known at training time. This new approach yields agents capable of chaining simple, single-step tasks into longer, more complex ones. This is foundational research that could help shift the state of the art in autonomous agents away from today’s single-purpose systems and toward AI with greater adaptability.