Convex optimization problems, which involve minimizing a convex function over a convex set, can in theory be solved to any desired precision in polynomial time. In practice, however, efficient algorithms are known only for special cases. An important open question is whether algorithms can be developed for a broader class of convex optimization problems that are efficient in both theory and practice.
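As a minimal illustration of the setting described above, the sketch below minimizes a one-dimensional convex function by gradient descent; the particular function, step size, and iteration count are assumptions chosen for the example, not anything from the research being described.

```python
# Toy convex optimization: minimize f(x) = (x - 3)^2 + 1 by gradient descent.
# f, the learning rate, and the step count are illustrative assumptions.

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Run plain gradient descent from x0 using the supplied gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

f = lambda x: (x - 3.0) ** 2 + 1.0      # convex, minimized at x = 3
grad_f = lambda x: 2.0 * (x - 3.0)      # its derivative

x_star = gradient_descent(grad_f, x0=0.0)
```

Because the objective is convex, this simple local method converges to the global minimizer (here, x = 3); for general non-convex objectives no such guarantee holds.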
At ACL 2019, IBM researchers released a paper detailing a model they trained to answer complex questions using Neural Program Induction, which allows an AI model to be taught to procedurally decompose a complex task into a program.
IBM Fellow Salim Roukos provides some specifics on IBM Research’s enterprise NLP work by highlighting four papers IBM Research AI is presenting at the ACL 2019 conference.
Language-independent entity linking makes sense of any text on the web. Let’s take a hypothetical sports headline: “Philadelphia Beats New York in 34-26 Nail Biter!” For applications like search or question answering, we’d like today’s computers to analyze this piece of text (based on the context, of course) and make sense out of the “entities” […]
Koichi Kamijo, a scientist at IBM’s research lab in Tokyo, was only 20 questions into a 300-question life insurance form. And he was already exhausted. There had to be an easier way to answer these questions. So, he got together with his colleagues Ryo Kawahara, Takayuki Osogami, Masaki Ono, and Shunichi Amano to come up […]