As AI agents have wormed their way into the mainstream for everything from customer service to fixing software bugs, it’s increasingly important to determine which ones perform best for a given application, and what criteria beyond raw functionality should guide the choice. That’s where benchmarking comes in.
However, a new research paper, AI Agents That Matter, points out that current agent evaluation and benchmarking practices suffer from a number of shortcomings that limit how well they predict real-world performance. The authors, five Princeton University researchers, argue that these shortcomings encourage the development of agents that do well on benchmarks but not in practice, and they propose ways to address the problem.