Agents
By themselves, language models can't take actions - they just output text.
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which decides whether more actions are needed or whether it can finish. The sketch below illustrates this loop.
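A minimal sketch of that loop in Python. The `llm` stub, the `TOOLS` registry, and `run_agent` are hypothetical names for illustration, not an API from any specific library; a real agent would call an actual model and use real tools.

```python
# A minimal sketch of the agent loop described above. The `llm` function
# is a stand-in for a real model call; the tool registry is a toy example.
import json

def llm(prompt: str) -> str:
    # Fake LLM decision: ask for one calculation, then finish.
    # A real model would reason over the prompt to choose an action.
    if "Observation" not in prompt:
        return json.dumps({"action": "calculator", "input": "2 + 2"})
    return json.dumps({"action": "finish", "input": "The answer is 4."})

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool: evaluate arithmetic
}

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        decision = json.loads(llm(prompt))       # LLM picks an action and its input
        if decision["action"] == "finish":
            return decision["input"]             # agent decides it is done
        result = TOOLS[decision["action"]](decision["input"])
        prompt += f"\nObservation: {result}"     # feed the result back to the agent
    return "Stopped: step limit reached."

print(run_agent("What is 2 + 2?"))
```

The key design point is the feedback cycle: each tool result is appended to the prompt as an observation, so the next model call can decide whether to act again or stop.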