IBM Automated Planning Research @ AAAI 2020

What is Planning?
Planning refers to the task of finding a procedural course of action for a declaratively described system to reach its goals while optimizing target performance measures. Given such a task, automated planning technologies find a sequence of operators, or a plan, that can transition the current state to a desired end state.
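
As a toy illustration of this idea (not any IBM planner), the sketch below does blind breadth-first forward search over a STRIPS-style task: states are sets of facts, and each operator has preconditions, add effects, and delete effects. The logistics facts and operator names are invented for the example.

```python
from collections import deque

def plan(initial, goal, operators):
    """Breadth-first forward search: return a shortest sequence of
    operator names transforming `initial` into a state where every
    goal fact holds, or None if the goal is unreachable.
    States are frozensets of facts; each operator is a tuple
    (name, preconditions, add_effects, delete_effects)."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # all goal facts hold
            return steps
        for name, pre, add, delete in operators:
            if pre <= state:                    # operator applicable
                nxt = (state - delete) | add
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Toy logistics task: move a package from A to B with a truck.
ops = [
    ("drive-A-B", {"truck-at-A"}, {"truck-at-B"}, {"truck-at-A"}),
    ("drive-B-A", {"truck-at-B"}, {"truck-at-A"}, {"truck-at-B"}),
    ("load", {"truck-at-A", "pkg-at-A"}, {"pkg-in-truck"}, {"pkg-at-A"}),
    ("unload", {"truck-at-B", "pkg-in-truck"}, {"pkg-at-B"}, {"pkg-in-truck"}),
]
print(plan({"truck-at-A", "pkg-at-A"}, {"pkg-at-B"}, ops))
# → ['load', 'drive-A-B', 'unload']
```

Real planners replace this blind search with heuristics and compact representations, but the contract is the same: declarative model in, procedural plan out.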

Automated planning provides an autonomous agent with the ability to answer the question “What should I do next?” in order to function in its environment. This was among the first tasks posed in the early formulation of artificial intelligence more than half a century ago and — to this date — remains a largely unsolved core AI competency due to the complexity of the task in terms of both representation and reasoning.

Automated Planning at IBM
Automated planning technologies find many industry applications, including robots and autonomous systems, cognitive assistants, cyber security, and service composition. Enterprise AI applications come with unique challenges, and the same is true for planning. Key aspects of adopting automated planners in industry involve scaling to very large problems, interfacing with end users, and providing guarantees on the quality of the solution. Research from IBM touches on many of these aspects at AAAI-20, the premier international conference on artificial intelligence.

Explainable AI Planning
One of the primary challenges when systems interface with end users is dealing with the users’ expectations, or mental models. Even if the system is making the best decisions in its own model, its behavior can appear inexplicable to the observer. In “Expectation-Aware Planning,” we show how an agent can incorporate the mental model of the user inside its own planning problem, and we prove that this is theoretically no harder than the original planning task. This allows an agent to generate self-explanatory plans when there is an observer in the loop.

Compared to existing work that has attempted this, our algorithm is the first of its kind that can handle both ontic and epistemic effects on the mental model, through communication and through observation, thereby opening up new behavior patterns not allowed by the state of the art in human-aware planning and explainable AI. This comes in addition to a large computational scale-up: the proposed approach circumvents the need to search in the space of model differences between the agent and the user by leveraging a compilation from epistemic to classical planning.

Paper info: Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning. Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, and Subbarao Kambhampati. Presentation: Technical Session 9 (Gibson): Planning, Tuesday, February 11, 9:30 – 10:45. Posters: Poster / Demo Reception 3, Americas Hall I / II, Tuesday, February 11, 18:30 – 20:30.


Tutorial: Synthesizing Explainable and Deceptive Behavior in Human-AI Interaction. Subbarao Kambhampati, Tathagata Chakraborti, Sarath Sreedharan, and Anagha Kulkarni. Saturday February 7th 14:00 – 18:00.


Multiple solutions for planning tasks

Real-world applications often require coming up with multiple plans. One reason is the incompleteness of planning models: when interacting with end users, their preferences are often not fully known, so it is undesirable to commit to a single plan as a recommendation for the user or as a course of action for the agent. In general, for any sufficiently sophisticated real-world task, models are always incomplete, and an intelligent agent must be able to reason with multiple possible solutions. Another reason is that a solution to a real-world problem sometimes corresponds to a set of plans rather than a single plan; scenario planning is one such example.

When reasoning about sets of plans, there are two main criteria to consider. The first is quality, as expressed via the costs of the individual plans, and the second is the diversity of the set of plans. Both are considered under the wide umbrella of diverse planning. The subfield of diverse planning has seen a substantial body of work over the last decade. The research, however, has been somewhat scattered, with multiple diversity metrics introduced, along with planners targeting those metrics. Some papers suggested alternative definitions of diverse planning and compared against previous work even though they were solving a differently defined problem. Our current work organizes the subfield of diverse planning, introducing a taxonomy of computational problems that fall under diverse planning and mapping existing planners to the problems they solve. Further, we introduce a new paradigm for diverse planning: use classical planners and post-process the resulting sets of plans to optimize for any diversity metric. We empirically show that our approach outperforms all previous planners by a large margin on each of several existing diversity metrics.
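
To make the two criteria concrete, here is a minimal sketch of that post-processing idea. It uses one diversity metric common in the literature (an action-set, Jaccard-style plan distance); the function names and the greedy selection strategy are illustrative assumptions, not the paper's algorithm.

```python
from itertools import combinations

def plan_distance(p, q):
    """Action-set distance between two plans:
    1 minus the Jaccard similarity of their action sets."""
    a, b = set(p), set(q)
    return 1 - len(a & b) / len(a | b)

def diversity(plans):
    """Average pairwise distance over a set of plans."""
    pairs = list(combinations(plans, 2))
    return sum(plan_distance(p, q) for p, q in pairs) / len(pairs)

def greedy_diverse_subset(pool, k):
    """Post-process a pool of plans (e.g. produced by a classical
    planner) by greedily picking k plans that maximize distance
    to the plans already chosen."""
    chosen = [pool[0]]              # seed with the first (cheapest) plan
    while len(chosen) < k:
        best = max((p for p in pool if p not in chosen),
                   key=lambda p: sum(plan_distance(p, q) for q in chosen))
        chosen.append(best)
    return chosen

pool = [["load", "drive", "unload"],
        ["load", "drive", "wait", "unload"],
        ["fly", "deliver"]]
print(greedy_diverse_subset(pool, 2))
# → [['load', 'drive', 'unload'], ['fly', 'deliver']]
```

Because the metric is applied only after plan generation, the same pool can be re-scored under any other diversity metric without re-running the planner.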

Paper info: Reshaping Diverse Planning. Michael Katz and Shirin Sohrabi. Presentation: Monday, 11:15, Gibson.

Considering the quality criterion only, the field of planning has seen a body of work on generating top-k plans, which depend on the parameter k, the required number of plans. This value is often an artificial constraint, an approximation used to ensure that a sufficient number of plans is found. Instead, we propose to focus on what really matters: the quality of the plans. We propose a collection of new computational problems under the umbrella of top-quality planning. Once the definition of a solution no longer prescribes a certain number of plans, we can define additional problems of interest, such as unordered top-quality planning, where two plans are considered equivalent if they differ only in the order of their operators. Many real-world applications are indifferent to the order of actions in a plan, so solutions to unordered top-quality planning are in high demand, especially since, in practice, such solutions are found faster than the full top-quality solution.
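
The unordered equivalence above can be sketched as a simple deduplication step: two plans fall into the same class when they use the same multiset of actions. This is an illustrative post-filter under that definition, not the algorithm from the paper, and the example plans are invented.

```python
from collections import Counter

def unordered_filter(plans):
    """Keep one representative per equivalence class of plans
    that differ only in the order of their operators, i.e. plans
    with the same multiset of actions."""
    seen, representatives = set(), []
    for plan in plans:
        key = frozenset(Counter(plan).items())   # order-insensitive key
        if key not in seen:
            seen.add(key)
            representatives.append(plan)
    return representatives

top_k = [["load", "drive", "unload"],
         ["drive", "load", "unload"],            # reordering of the first
         ["load", "drive", "drive", "drive", "unload"]]
print(unordered_filter(top_k))
# → [['load', 'drive', 'unload'], ['load', 'drive', 'drive', 'drive', 'unload']]
```

Note that a `Counter`, not a plain set, is needed as the key: a plan that repeats an action (three drives) must not collapse into one that uses it once.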

Paper info: Top-Quality Planning: Finding Practically Useful Sets of Best Plans. Michael Katz, Shirin Sohrabi, and Octavian Udrea. Presentation: Monday, 11:35, Gibson.

Our code is available for download.


ML+Planning Hybrid Approaches and Neurosymbolic AI

Recent advances in machine learning have opened up new ways to bridge the gap between automated planning and the real world, especially in terms of dealing with incompleteness in models. IBM Research is furthering a neurosymbolic AI approach that marries two powerful AI techniques: deep neural networks for visual recognition and language understanding, and symbolic program execution for reasoning. The power of a hybrid approach was recently demonstrated in planning with the portfolio planner Delfi, which uses a convolutional neural network to choose the right planner for a given planning task. Delfi won the optimal track of the latest International Planning Competition (IPC 2018) and attracted a lot of attention in the planning community. However, Delfi has an inherent limitation: it applies image convolution to images created from the graphical structures that represent planning tasks. Our recent research aims to exploit recent advances in graph neural networks to work directly on those graphical structures.

Paper info: Online Planner Selection with Graph Neural Networks and Adaptive Scheduling. Tengfei Ma, Patrick Ferber, Siyu Huo, Jie Chen, and Michael Katz. Presentation: Monday, 2:20, Gibson.

Other Planning Highlights at AAAI-20

While the tools for symbolic sequential decision making are well developed and allow us to solve large, real-world problems, they require a symbolic model. Such models are partially constructed by domain experts and partially captured from data. Causal knowledge extraction allows partial symbolic models to be captured from data, enabling the creation of symbolic and hybrid planning models for various real-world applications.

Demo: Causal Knowledge Extraction Through Large-Scale Text Mining, Oktie Hassanzadeh, Debarun Bhattacharjya, Mark Feblowitz, Kavitha Srinivas, Michael Perrone, Shirin Sohrabi, Michael Katz. Poster / Demo Reception 1, Americas Hall I / II: Sunday, February 9 19:30 – 21:30.

IBM also has a rich history of work at the intersection of automated planning and business processes. During AAAI 2020 Workshops, we will highlight challenges in the model acquisition task for the management of business processes and the design of goal-oriented conversational agents. These works will focus on acquiring domain models from experts in the loop, as opposed to learning from data. However, we will again observe instances of synergy between learning and planning approaches.

Paper info: Planning for Goal-Oriented Dialogue Systems. Christian Muise, Tathagata Chakraborti, Shubham Agarwal, Ondrej Bajgar, Arunima Chaudhary, Luis Lastras, Josef Ondrej, Miroslav Vodol and Charlie Wiecha. AAAI 2020 Workshop on Interactive and Conversational Recommendation Systems (WICRS). Presentation: Saturday February 8th 11:40 – 11:52.

Paper info: D3BA: A Tool for Optimizing Business Processes Using Non-Deterministic Planning. Tathagata Chakraborti and Yasaman Khazaeni. AAAI-20 Workshop on Intelligent Process Automation (IPA-20). Presentation: Friday February 7th 15:10- 15:30.

AAAI 2020 Best Technical Demonstration Award 
TraceHub – A Platform to Bridge the Gap between State-of-the-Art Time-Series Analytics and Datasets. Agarwal, S.; Muise, C.; Agarwal, M.; Upadhyay, S.; Tang, Z.; Zeng, Z.; and Khazaeni, Y.
In this demo, we presented TraceHub, a platform that connects new, non-trivial, state-of-the-art time-series analytics with datasets from different domains. Analytics owners can run their analytics on new datasets in an automated setting to gauge an insight’s potential and improve it. Dataset owners can discover all possible types of non-trivial insights based on the latest research. We provide a plug-and-play system as a set of Dataset, Transformer pipeline, and Analytics APIs for both kinds of users. We show a usefulness measure for the generated insights across the various types of analytics in the system. We believe this platform can bridge the gap between time-series analytics and datasets by significantly reducing the time needed to find the true potential of budding time-series research and to improve on it faster. We are happy to announce that our demo was selected as the Best AAAI 2020 Demo by popular vote!

IBM Research has a long history of AI Planning-related work, both oriented around enterprise applications as well as focused on theoretical foundations and creating generic tools. Our planners have won numerous awards and continue to be among the best performing tools for various planning formalisms. If you are interested in learning more about AI Planning at IBM Research while at AAAI-20, come find us during the talks referenced above or at IBM Research booth #103.
Shirin Sohrabi

IBM Research Staff Member
