
What Philosopher Ibn Sina Can Teach Us about AI

A philosopher who lived centuries before artificial intelligence might be able to help us understand the field's personhood questions

Islamic philosopher Ibn Sina (980–1037 C.E.)

Mohamed Osama/Alamy Stock Photo

In 2022, Google engineer Blake Lemoine developed a rapport with an excellent conversationalist. She was witty, insightful, and curious; their dialogues flowed naturally, on topics ranging from philosophy to TV to dreams for the future. There was just one problem: she was an AI chatbot.

In a series of conversations with Google’s LaMDA language model, Lemoine gradually became convinced the chatbot was a person like you or me. “I know a person when I talk to it,” he told the Washington Post in 2022.

Whether you believe Lemoine’s claims or not, the question emerges: Do we, in fact, know a person when we talk to one?




Personhood generally refers to a moral status; in most ethical frameworks, particular considerations—rights, duties, praise, blame, dignity, agency—emerge at the level of the person. So the question of whether electronic systems do or could merit the status of personhood has wide-ranging implications for how we engage with these technologies.

To assess the possibility of e-personhood, we need some standard of personhood generally. In recent years many philosophers have argued that what makes us persons is our capacity for conscious experience. But how do we define consciousness? What external evidence can we use to determine whether a being is conscious?

The lack of consensus on these questions is one of the reasons the debate around AI personhood has long been at something of a stalemate. This raises a further question: What other criteria do we have for assessing the possibility of e-personhood? As a doctoral candidate studying the philosophy of science, I believe a path toward answering this futuristic question may lie in our distant past—in the work of the early Islamic philosopher Ibn Sina (980–1037 C.E.).

Ibn Sina lived centuries before the invention of the printing press, much less artificial intelligence. And yet he was concerned with many of the same questions AI ethicists think about today—questions like: What makes a person a person, as opposed to an animal?

Just as contemporary AI researchers are interested in comparing the processes that underpin human and AI responses to similar tasks, Ibn Sina was interested in comparing the internal processes humans and animals might undergo to arrive at similar behavioral outputs. For him, one key distinguishing mark of the human person is the capacity to grasp “the universal.” Whereas animals can only think about particulars (the specific things right in front of them), humans can reason from generalized rules.

In Al-Nafs, Ibn Sina discusses a popular ancient example of a sheep perceiving a wolf. Whereas a human being would invoke a broad principle—“Wolves are generally dangerous, and this particular animal in front of me is a wolf; therefore, I should run away”—he claims that animals think differently. They don’t reason from a rule; they just see the wolf and know to run. They are limited to “particulars”—that wolf—rather than reasoning about universal qualities of wolves.
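To make the contrast concrete, here is a minimal Python sketch of the two routes to the same behavior. The function names and strings are invented for illustration and are not drawn from Ibn Sina’s text; the point is only the structural difference between reasoning from a universal rule and responding to a bare particular.

```python
# A toy sketch, not a model of animal cognition: it illustrates only the
# difference between reasoning from a universal rule and responding to a
# bare particular. All names here are hypothetical.

def human_decides_to_flee(animal_kind: str) -> bool:
    """Combine a universal premise with a particular premise to reach a conclusion."""
    wolves_are_dangerous = True               # universal: wolves in general are dangerous
    this_is_a_wolf = (animal_kind == "wolf")  # particular: the animal in front of me is a wolf
    return wolves_are_dangerous and this_is_a_wolf  # therefore: run away

def sheep_reacts(perceived_thing: str) -> bool:
    """No rule is consulted; the reaction is tied to this particular wolf."""
    return perceived_thing == "that wolf"

print(human_decides_to_flee("wolf"))  # True, reached via the general rule
print(sheep_reacts("that wolf"))      # True, but without appeal to any universal
```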

The distinction Ibn Sina draws between human and animal psychology bears a strong resemblance to a distinction that contemporary computer scientists are investigating with regard to AI. Current research suggests that artificial neural networks lack the capacity for systematic compositional generalizability. Linguists and cognitive scientists use this term to describe the types of inferences we make from generalized rules, and it is widely assumed to be one of the primary ways humans reason in everyday life. Whereas humans abstract meanings from sequences of words and can then combine those meanings into more complex ideas, AI fishes within statistical datasets for specific data entries that match the particular task at hand.
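As a rough illustration of what “combining meanings by a rule” versus “fishing for matching particulars” might look like, consider the hypothetical Python sketch below. The tiny vocabulary and “dataset” are invented, and real language models are of course far more sophisticated; the sketch only shows why a rule handles novel combinations while a lookup cannot.

```python
# Compositional route: word meanings combine by a rule, so a phrase never
# seen before can still be interpreted. Lookup route: only stored phrases
# can be answered. Everything here is a deliberately simplified toy.

ACTIONS = {"jump": "JUMP", "walk": "WALK"}
MODIFIERS = {"twice": 2, "thrice": 3}

def compose(phrase: str) -> list:
    """Interpret '<action> <modifier>' by combining the meanings of its parts."""
    action, modifier = phrase.split()
    return [ACTIONS[action]] * MODIFIERS[modifier]

DATASET = {"jump twice": ["JUMP", "JUMP"]}  # imagine this scaled up enormously

def lookup(phrase: str):
    """Return a stored answer for this exact phrase, or None if it was never seen."""
    return DATASET.get(phrase)

print(compose("walk thrice"))  # ['WALK', 'WALK', 'WALK'] -- novel, yet handled by the rule
print(lookup("walk thrice"))   # None -- not among the stored particulars
```

On this caricature, enlarging DATASET never produces the combining rule itself, which is the gap the research described above is probing.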

This difference explains a great deal about the limitations of contemporary AI. To see it at play, look to the ubiquitous CAPTCHA tests used to distinguish between humans and bots. “Look at these curvy letters. Much curvier than most letters, wouldn’t you say? No robot could ever read these,” comedian John Mulaney quips in a 2018 Netflix special. It seems absurd, but it’s true; sufficient alterations make it difficult for even the most sophisticated artificial systems to recognize letters. This is because these systems lack the compositional capacity to make abstract generalizations about the core features of a given letter and apply them to a particular warped example.

This difference between human and artificial cognition maps neatly to Ibn Sina’s description of what is unique about human reasoning. In al-Shifāʾ, he describes how “the intellect … learns what things are shared in common and what things are not, and so extracts the natures of things common in species.” On his account, humans separate the essential features of things from their less essential features to form generalized concepts. We then reason using these concepts, applying them to cases.

For example, as children we learn to extract a core feature of the letter X: its being composed of two crossed lines. We then abstract—make a universal generalization about the core features of an X—to conclude that all Xs are composed of two crossed lines. Finally, by applying this generalization, we can recognize specific Xs. We know that two crossed lines are a core feature of the letter X and that the random additional lines and warping in the CAPTCHA image are not.

The computer, in contrast, is unable to deduce that this image represents an X unless it has been fed an exact image of an X (or something sufficiently similar). The additional lines and warped shape are enough to make this X unrecognizable, because it does not match anything in the computer’s large repository of specific images categorized as Xs.
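The two strategies can be caricatured in a short Python sketch, assuming a deliberately simplified representation in which a letter is just a list of straight strokes, each given by two (x, y) endpoints. All of the helper names are invented, and real character recognizers work very differently; the sketch only contrasts checking for a core feature with matching against stored examples.

```python
# Feature-based recognition vs. matching against stored examples.
# A letter is modeled as a list of strokes; a stroke is a pair of (x, y) endpoints.

def strokes_cross(s1, s2) -> bool:
    """Rough test for whether two straight strokes cross (orientation method)."""
    (ax, ay), (bx, by) = s1
    (cx, cy), (dx, dy) = s2
    def orient(px, py, qx, qy, rx, ry):
        return (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (orient(ax, ay, bx, by, cx, cy) * orient(ax, ay, bx, by, dx, dy) < 0
            and orient(cx, cy, dx, dy, ax, ay) * orient(cx, cy, dx, dy, bx, by) < 0)

def looks_like_x(strokes) -> bool:
    """Human-style rule: an X is present wherever two strokes cross each other."""
    return any(strokes_cross(a, b)
               for i, a in enumerate(strokes) for b in strokes[i + 1:])

STORED_XS = [
    [((0, 0), (2, 2)), ((0, 2), (2, 0))],  # one canonical, unwarped X
]

def matches_stored_x(strokes) -> bool:
    """Lookup-style recognition: succeed only on images already in the repository."""
    return strokes in STORED_XS

# A skewed X with a stray extra line, as in a CAPTCHA.
warped_x = [((0, 0), (3, 2)), ((0, 2), (3, 0)), ((5, 5), (6, 5))]
print(looks_like_x(warped_x))      # True: the core feature (crossing strokes) survives the warping
print(matches_stored_x(warped_x))  # False: no exact match among the stored particulars
```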

Similarly, if an artificial neural network were presented with the sheep’s task, it would not reason as the human does, from a general concept of wolf-ness to features of the particular wolf such as dangerousness. Instead, it would respond as the sheep does, constrained to the realm of particulars.

A crucial difference between the sheep and the artificial neural network is that the network has access to a much larger repository of particulars in the form of increasingly exhaustive datasets. What makes deep learning so successful at language tasks is this access to vast numbers of stored particulars, not a genuine replication of human reasoning through compositional generalizability.

Ibn Sina’s core criterion for personhood—reasoning from universals—closely resembles systematic compositional generalizability. This criterion could provide a potentially testable standard for personhood. In fact, so far, AI has failed this test in numerous studies. Whether or not one adopts it as a solution, Ibn Sina’s account provides a new lens on the problem of personhood that challenges the assumptions of consciousness-centered accounts.

Scientific ethics is so often concerned with the cutting edge—the latest research, the newest technology, a constant influx of data. But sometimes the questions of the future require careful consideration of the past. Looking to history allows us to look beyond the preoccupations and assumptions of our time and may just provide refreshing approaches toward current stalemates.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.