Artificial intelligence has always promised to augment human capability. But something changed in the last year. We crossed a threshold from AI as a tool, something you ask a question and get an answer from, to AI as an agent: something that sets goals, plans actions, executes them, and adapts when things go sideways.
What Makes an Agent?
An AI agent is characterized by four properties:
- Perception — it takes in information from its environment
- Reasoning — it forms a plan to achieve a goal
- Action — it executes steps in the world, not just inside a chat window
- Memory — it retains context across time and sessions
The combination of these four properties produces something qualitatively different from the chatbot era. An agent can browse the web, write and run code, send emails, schedule meetings, and interact with external APIs—all in pursuit of a goal you described in plain language.
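The four properties above can be sketched as a simple control loop. This is a minimal illustration, not any real framework's API: the `Agent` class, its method names, and the toy counter environment are all hypothetical, chosen only to make perception, reasoning, action, and memory visible as separate steps.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical sketch of the perceive-reason-act-remember loop."""
    goal: int
    memory: list = field(default_factory=list)  # Memory: context retained across steps

    def perceive(self, environment: dict) -> dict:
        # Perception: take in information from the environment
        return {"observation": environment["state"]}

    def reason(self, percept: dict) -> str:
        # Reasoning: decide the next step of the plan toward the goal
        return "done" if percept["observation"] == self.goal else "work"

    def act(self, action: str, environment: dict) -> None:
        # Action: change the world, not just the chat transcript
        if action == "work":
            environment["state"] += 1

    def run(self, environment: dict, max_steps: int = 10) -> list:
        for _ in range(max_steps):
            percept = self.perceive(environment)
            self.memory.append(percept)  # remember what was observed
            action = self.reason(percept)
            if action == "done":
                break
            self.act(action, environment)
        return self.memory

agent = Agent(goal=3)
env = {"state": 0}
history = agent.run(env)
print(env["state"])  # the loop drives the environment to the goal: 3
```

Real agents replace the toy `reason` step with a language model and the toy `act` step with tools (browsers, code execution, APIs), but the loop structure is the same.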
The Human Question
The central question isn’t whether agentic AI is powerful. It clearly is. The question is: what role do humans play when machines can handle the loop themselves?
We believe the answer is direction, judgment, and values. Agents are extraordinarily good at optimization once a goal is set. They are, at present, poor at knowing which goals are worth optimizing. That remains a deeply human question—rooted in context, relationships, ethics, and lived experience.
Human-centered computing has always insisted that technology should serve people, not the other way around. In the agentic age, that principle becomes more urgent, not less.
What This Blog Is For
This is a space to think carefully about that question. We’ll cover:
- How teams are actually deploying agents today
- The infrastructure emerging to support reliable autonomy
- The ethical and social dimensions of delegating decisions to machines
- Stories from the frontier—what’s working, what isn’t, and why
The agentic age is arriving whether we’re ready or not. Our job is to make sure humans remain at the center of it.