From Pigeons to Platforms: How Behaviorism and Bounded Rationality Shaped AI
In the first post of this series I argued that intelligence isn’t really about technology — it’s about behavior. That was the starting thesis: AI should be understood as a mirror of how we act and who we are, and its origins are deeply rooted in that behavioral tradition.
In this second part, I want to go deeper: to show how two classic frameworks — B.F. Skinner’s behaviorism and Herbert Simon’s bounded rationality — not only shaped psychology and economics, but also directly inspired some of the principles that underpin modern AI. And more importantly, how these roots help us understand both the risks and the opportunities of AI in today’s organizations.
From pigeons to algorithms
Skinner demonstrated that behavior can be shaped through rewards and punishments — insights captured in works like Science and Human Behavior (1953). That logic lies at the heart of reinforcement learning, the branch of AI that now — alongside deep learning — trains agents to play video games, optimize supply chains, fine-tune large language models, and recommend content. At its core, such an AI learns like a pigeon in Skinner’s box: it tries, receives feedback, and adjusts.
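To make the Skinner-box analogy concrete, here is a minimal sketch of that try–feedback–adjust loop: an epsilon-greedy "bandit" agent choosing between two levers with different payoff rates. The function name, parameters, and numbers are illustrative, not from any particular library.

```python
import random

def skinner_box(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: try a lever, receive feedback, adjust.

    reward_probs: the chance each 'lever' delivers a reward (the food pellet).
    Returns the learned value estimate for each lever.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(reward_probs)   # learned value of each lever
    counts = [0] * len(reward_probs)        # times each lever was tried

    for _ in range(steps):
        # Mostly exploit the best-known lever, occasionally explore.
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))
        else:
            arm = max(range(len(estimates)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental average: nudge the estimate toward the feedback.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

learned = skinner_box([0.2, 0.8])
# The agent converges on the lever that pays off more often.
```

Nothing in the loop is told which lever is better; the preference emerges purely from reinforcement, which is exactly Skinner’s point.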
Simon, for his part, introduced the idea of bounded rationality (Nobel Lecture, 1978): humans — and machines — do not seek perfect solutions but satisfactory ones, constrained by time, data, and resources. That principle lives inside almost every modern algorithm: models satisfice rather than optimize, because they operate in a world of limits. And like us, AI is prone to error and vulnerable to adversarial attacks.
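Simon’s satisficing can be sketched in a few lines: search the options you can afford to examine, and stop at the first one that clears an aspiration level rather than insisting on the global best. The function, the supplier-quote example, and the thresholds below are hypothetical illustrations, not Simon’s own formalism.

```python
def satisfice(options, score, aspiration, budget):
    """Return the first option whose score meets the aspiration level,
    examining at most `budget` options -- satisficing in miniature.

    Falls back to the best option seen if nothing clears the bar
    before the search budget runs out.
    """
    best, best_score = None, float("-inf")
    for option in options[:budget]:        # bounded time and attention
        s = score(option)
        if s >= aspiration:
            return option                  # "good enough" -- stop searching
        if s > best_score:
            best, best_score = option, s
    return best                            # best found within the bounds

# Example: accept a supplier quote that is cheap *enough* (<= 90),
# without waiting to compare every quote on the market.
quotes = [120, 95, 88, 70, 65, 99]
choice = satisfice(quotes, score=lambda q: -q, aspiration=-90, budget=4)
# choice == 88: a later quote (65) is cheaper, but search stopped early.
```

The contrast with optimization is the point: an optimizer would scan all six quotes and pick 65; the satisficer trades that last bit of quality for speed and tractability, which is how both managers and production algorithms actually behave under constraints.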
When AI shapes behavior the wrong way
A study with 3D artists by Microsoft Research (2022) revealed a clear risk: when interacting with an AI system, humans began to adapt their judgment to please the algorithm, even against their own creative instincts. The result was a vicious loop: the AI reinforced biases, and the artists lost trust in their own judgment. The lesson is uncomfortable: if we don’t carefully design what behaviors AI reinforces, it may narrow creativity instead of expanding it.
A similar dynamic emerged in rural China, where the Brilliant Doctor system (2019 pilot) was deployed in clinics to assist physicians. In some cases, doctors became overly reliant on the system’s recommendations, sidelining their own expertise. In others, they distrusted or resisted the tool, which strained the doctor–patient relationship. The same pattern repeated: technology isn’t neutral — it actively shapes human behavior, for better or worse.
When AI enhances human trust
A very different case comes from the study My Advisor, Her AI and Me (2022) in a European bank. It compared investment advice delivered purely by AI with a hybrid model, where a human filtered and validated the AI’s recommendation before sharing it with clients. The outcome? Trust and adoption were significantly higher in the hybrid model. The human presence acted as an emotional anchor that made the technology credible and useful. The lesson here: AI can amplify trust when it is designed as a complement in a collective intelligence approach, not a substitute.
The point for Latin America
From these cases one clear conclusion emerges: AI always reinforces human behavior. The real question is which behaviors.
In Latin America, this is no longer a theoretical debate. More than 65% of consumers already use some form of AI in daily life (IDB, 2023), and 87% of startups in the region integrate AI (Endeavor, 2022). In Brazil, nearly 17% of mid- to large-size companies report using AI in real processes (IBGE, 2023). And according to the World Bank (2023), between 30% and 40% of jobs in the region are exposed to generative AI.
The region has both the scale of the challenge and the size of the opportunity: to turn exposure into advantage. I’ve seen it first-hand in boardrooms and customer meetings — the hunger is there. We need not just to use AI for efficiency, but to reinforce strategic behaviors — creativity, trust, productivity, inclusion.
Some may say it’s impossible for Latin America to lead in this field against Silicon Valley or China. But impossible is just a matter of opinion.
📌 In the next post of The Behavior of Intelligence I’ll explore concrete actions organizations in the region can take to make AI a real cultural and strategic lever.
👉 And I’d love to hear: what’s the first behavior you think we should reinforce to make AI create real value in LATAM?
PS. This post does not represent Salesforce’s official position, nor do I speak on behalf of the neurodivergent community. These reflections are entirely my own — as a father of a child with autism and as someone deeply involved in this journey. AI was used to polish this article.

