
This Simple Strategy Might Be the Key to Advancing Science Faster

The incentives in science don’t always encourage openness—but being wrong might just be the key to getting it right.

Anaissa Ruiz Tejada/Scientific American

Uncertain

[CLIP: Song: “It's a wave. It’s a particle! It’s a wave! It’s a particle! It’s a wave!”]

[CLIP: theme music]

Christie Aschwanden: I’m Christie Aschwanden, and this is Uncertain, a Scientific American podcast about the uncertainty that drives and occasionally mucks up scientific discovery.




This is episode four of this five-part series, and it occurs to me that we still haven’t even mentioned the thing most people immediately think of when it comes to science and uncertainty.

So I’m going to rectify that right now.

Chanda Prescod-Weinstein: Chanda Prescod-Weinstein—I’m an associate professor of physics and astronomy and core faculty of women’s and gender studies at the University of New Hampshire.

Aschwanden: Can you explain Heisenberg’s uncertainty principle?

Prescod-Weinstein: So the fundamental idea behind Heisenberg’s uncertainty principle is that there is a limit on how well we can simultaneously measure the position of an object, so where something is, and its momentum. Which we can roughly say is, in some sense, a measure of its velocity, so how fast it’s going in what direction. There’s also a mass in there, but we don’t need to worry about that.

And there’s actually also another uncertainty relation between energy and time about how well you can simultaneously measure the energy of an object and the time over which the system is changing.
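[The two limits described here have compact textbook forms; this is an editorial sketch for reference, not part of the episode audio:]

```latex
% Heisenberg's position-momentum relation and the analogous
% energy-time relation, where \hbar is the reduced Planck constant:
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\qquad
\Delta E \, \Delta t \;\ge\; \frac{\hbar}{2}
```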

Aschwanden: I remember learning about this in high school and just feeling my head spin. Help me out here.

Prescod-Weinstein: So one of the early realizations among the quantum theorists was this idea of wave-particle duality: that everything can be a particle and a wave, and simultaneously.

[CLIP: Background singing: “It's a wave! It’s a particle! It’s a wave! It’s a particle! It’s a wave!”]

Aschwanden: Yeah, that’s the part that really made me go, “What???”

Prescod-Weinstein: I think the intuitive way to think about this is that objects that we might think of as particles—like, you might imagine it as like a tiny little ball—can actually be conceived of as little waves. Like, you can even think about waves in the ocean or in a lake or just dropping like a spoon into a full sink of dishes…

[CLIP: Spoon drops into water]

Prescod-Weinstein: … with lots of water in it, you see, like waves ripple out.

Aschwanden: Okay, so I’m imagining these waves, but how can they also be a particle?

Prescod-Weinstein: Go back to that example of, like, the spoon being dropped into, like, the full sink of water.

[CLIP: Spoon drops into water]

Prescod-Weinstein: You have waves that are spreading out from where the spoon fell in. And that spread means that the position of the wave is spread out and also that the momentum of the wave is spread out. And so there’s no longer an exact point where the wave exists; the wave is spread out in where it exists.

And so that’s one way of thinking about that uncertainty: that information is spread out in ways that it’s not in the classical picture, when you just think, “I have a tiny little ball that’s located in one place.”

[CLIP: Spoon drops into water]

Aschwanden: Okay, so that perspective does help. But wrapping your head around quantum mechanics sort of requires a new way of thinking.

How do you teach your students to bend their minds like this?

Prescod-Weinstein: At the start of the semester, I walked them through an experiment where the only way to explain the experimental results is to accept that quantum mechanics is the correct theory of reality on microphysical scales.

We came back to that experiment two more times in the semester. And it was only on the third time that they were like, “Wait a minute, this isn’t the outcome that I expect.” And I was like, “Right? It’s not the outcome that you expect.” Someone said, “Well, something must be missing.” And I was like, “It’s just quantum mechanics.”

Aschwanden: It’s just quantum mechanics, ha. Oh yeah, it all makes total sense now.

Prescod-Weinstein: You have to shift your relationship to what you call intuitive. And that’s actually part of what your training as a scientist is supposed to do, is to not only deepen your intuition about things that you come in knowing, but also to create new wells of intuition for you.

Aschwanden: New wells of intuition—I like it. But intuition can lead you astray, too, right?

Prescod-Weinstein: It’s my task to try and understand the world that actually is. And I think that that means being open to the possibility that what I thought was intuitive for me is completely wrong, and that what I was told about how the universe works needs adjustment. We have to be ready for that at any time.

Aschwanden: That’s the kind of open-mindedness that seems essential to creativity.

Prescod-Weinstein: That’s a lesson I take from learning the history of my discipline, which is that people think things are absolutely correct and then they absolutely turn out to be wrong, right?

Aschwanden: What’s an example of that?

Prescod-Weinstein: The most famous example is relativity, special relativity.

Aschwanden: If you took high school or college physics, you probably learned about Isaac Newton’s laws of motion.

Remember that equation, force equals mass times acceleration? That’s Newton.

He proposed that gravity was an attraction between masses. The apple falls off the tree to the ground because Earth’s gravity pulls it there. Students are still memorizing these equations because they work remarkably well in most situations.

But then Albert Einstein came along and said that no, no, that’s not quite right.

I’m going to totally oversimplify here, but this came about because Newton’s ideas work great in normal situations we encounter here in everyday life, but they break down around black holes and near the speed of light.

So Einstein proposed that gravity is a distortion of the fabric of spacetime caused by the presence of matter or energy. And in doing that, Einstein...

Prescod-Weinstein: … completely rewrote our understanding of mechanics at high speeds.

Aschwanden: Wow! So you’ve got these equations that totally work in most situations, and they make some kind of intuitive sense, and then something comes along to upend them.

You’ve already talked about keeping an open mind, but is there another kind of thinking that’s essential here?

Prescod-Weinstein: There’s a real value in being able to think in fantastical ways and ask the fantastical questions because occasionally things go in that direction. You know, Bohr’s atom is such a great example.

Aschwanden: This is also called the Rutherford-Bohr model, and it’s a description of atomic structure. There’s a dense, positively charged nucleus at the center, and then electrons circle the nucleus in distinct orbits, or energy levels.

Electrons can absorb or emit energy only in set amounts and jump between orbits when they do that.
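[For reference, the quantized hydrogen energy levels in the Bohr model take a standard textbook form; this is an editorial sketch, not part of the episode audio:]

```latex
% Bohr-model energy of the hydrogen electron in orbit n = 1, 2, 3, ...
E_n = -\frac{13.6\ \text{eV}}{n^2}
% A jump between orbits emits or absorbs a photon carrying the difference:
E_\text{photon} = h f = \left| E_{n_2} - E_{n_1} \right|
```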

Prescod-Weinstein: It’s such a creative and weird idea that the electron and the hydrogen atom can only live in certain discrete locations around the nucleus. That’s weird, right? And that’s not necessarily based on any ideas. Like, that’s not, “Well, if you just take the data, it points you to that thing,” like in special relativity. That’s something where some imagination was required.

Aschwanden: It really speaks to how uncertainty can be a great spark for creativity in science, just like it is in literature and the arts.

Prescod-Weinstein: But the difference between us and, say, a novelist is that at some point, what we do has to fit a reality that’s out there. We are constrained by reality, in a way; like, we can’t do magical realism without checking it against data, and if the data says no magical realism, you’re done.

Aschwanden: The Rutherford-Bohr model is super cool! We still learn about it in textbooks. But...

Prescod-Weinstein: We also know that Bohr’s atom is slightly wrong but still valuable enough to teach it.

Aschwanden: It still has some usefulness. And scientists built on Bohr’s work to create better models that are even more correct. Science keeps going. What do you want your students to take away from your quantum mechanics course?

Prescod-Weinstein: I wanted my class to be a place where, like, they could just be like, “Man, quantum mechanics—how cool! How weird! How interesting!” And not just be, “Man, quantum mechanics—that will allow me to calculate this thing for my material science research that will translate into your product that will solve lots of things, right?” To actually have a moment of being like, “Man, the cosmos—what a place...!”

Aschwanden: I think that sense of wonder is what draws so many of us to science. So you become a scientist and then what?

Prescod-Weinstein: Our job as scientists is to be imaginative and creative and humble. And science is humbling. It’s hard. Everybody is going to get knocked down at some point—it took Einstein 10 years [to go] from special relativity to general relativity.

Aschwanden: The genius stereotype is Einstein just standing before a chalkboard and having a eureka moment. But the lone genius trope isn’t an accurate depiction of how science advances, is it?

Prescod-Weinstein: He was building on the mathematical work of other people—he couldn’t have finished those theories without that work. We don’t do any of it alone; we’re always building on the work of others.

And in the 2020s almost nobody does scientific work by themselves. We are a highly social, collaborative discipline. And that means that we have to be humble, and we have to be open to the contributions of others because that is how we move forward.

[CLIP: Music]

Aschwanden: We just heard examples of how science is an iterative process. Progress comes from people coming up with ideas that are sort of right and then new evidence and ideas coming in to update them to become even more correct.

Underlying this process is a willingness by scientists to accept that they might be wrong and be open to updating their ideas.

It turns out that social scientists have a term for this mindset. To find out more, I talked with two researchers who are studying this thing they call “intellectual humility.”

Tenelle Porter: I’m Tenelle Porter. I’m an assistant professor of psychology at Rowan University.

Daryl Van Tongeren: Daryl Van Tongeren—I’m an associate professor of psychology at Hope College.

Aschwanden: What exactly is intellectual humility?

Van Tongeren: Intellectual humility is about recognizing our own limitations and the fallibility of our own beliefs, as well as the ability for us to share our beliefs modestly and nondefensively while trying to take the perspective of other people.

Aschwanden: What’s an example of this?

Porter: There’s this famous excerpt of classroom video: these are third-grade students, and they’re having a discussion about what is an even and an odd number. And a student raises his hand and says, “You know, I think six is both even and odd because six is divisible by two. But when you divide it by two, there are three groups of two. So it’s even and odd.”

Aschwanden: Okay, so he’s wrong, but that’s a pretty sophisticated way of thinking about it. How did the other kids respond?

Porter: And what kind of happens from then is the class engaging with his idea—students in the room saying, “Well, I don’t think so. But prove it to us.”

And at some point, a student makes a point to him along the lines of “No, that can’t be, and here’s why.” And he kind of says, “Well, I didn’t think of it that way. You know, thanks so much for bringing that up.... I didn’t, I didn’t know that before,” and, “Oh, that’s interesting. Tell me more about why you think that.”

Aschwanden: It seems to me that this kind of openness to being wrong and changing your mind is what we might consider the idealized way of scientific thinking. Do you agree?

Van Tongeren: I do think that intellectual humility is part and parcel of the scientific process. And so I think part of what it means to be a scientist, part of the scientific attitude, is to embrace curiosity, to constantly be questioning, to be a lifelong learner and to be willing to change your views in light of strong empirical evidence. And so I do think that intellectual humility is part of what it means to be an honest scientist.

Aschwanden: It also seems necessary to always be seeking out errors or pockets of uncertainty if you’re going to make progress in science, right?

Porter: Science is about making new discoveries. It’s about contributing to knowledge. And ultimately, you cannot learn something that you think you already know.

Aschwanden: How is intellectual humility related to curiosity?

Porter: I think that intellectual humility is necessary for curiosity because if you already have it all figured out, why would you be curious about anything at all? And so what comes first in this process?

The first step is acknowledging, “Well, gee, there are some things that I don’t have figured out,” which can then light a fire and light a curiosity to explore and investigate those things.

Aschwanden: So you want to always be open to the possibility that you’re wrong. But does that mean you can’t ever feel right or assert your ideas?

Van Tongeren: Yeah, I’m really glad that you brought that up. You know, one metaphor that you can think about in your thinking about humility and intellectual humility ... is about being the right size—so not too big, not too small in a given situation.

And so for people who have earned the credibility and have demonstrated expertise, they need to live into the space that they can take up with that expertise.

And so intellectual humility is also not shrinking when you have expertise to speak up in a situation. So it’s certainly not the case that everybody’s perspective in a particular context would be equally as valuable.

Aschwanden: If I’m going in for surgery, I don’t want my surgeon to say, “Well, I’m pretty sure that I know the right place to make the incision, but I could be wrong.” And I certainly wouldn’t want her asking some rando on the Internet to weigh in.

But what about being the right size in the other direction? Intellectual humility can be hard to practice, right?

Porter: Part of how intellectual humility kind of can manifest ... is a willingness to reveal we don’t know something or are confused about something—to actually say, “Oh, I’ve never heard of that person,” which can be a difficult thing to do. Sometimes it shows a little bit of our vulnerability.

Aschwanden: What about for scientists?

Porter: And so to the extent that scientists are really committed ideologically to a paradigm or a finding, and not willing to acknowledge the limitations of their knowledge, in a certain area, they’re, they’re confining themselves to what they have already discovered, which limits them in a way from making new discoveries and pushing our boundaries further.

So it does seem to me that scientists could achieve at higher levels with intellectual humility, with this kind of openness: openness to critiques, openness to evidence that counters what they already thought was true, openness to admitting and coming to grips with the fact that they may not understand something as deeply as perhaps even their title would suggest that they do.

Aschwanden: What’s standing in the way of that?

Van Tongeren: So even though I mentioned the scientific method is steeped in intellectual humility, the academy doesn’t actually reward intellectual humility as much.

So it loves it as a virtue, and it says, “This is a value we have.” But in practice, it tends to reward people who strongly defend their position and marshal as much evidence as they can for their pet theory and then get as many people as they can to endorse their theory.

And the way that science works best is when other people are able to weigh in and offer me feedback on why my ideas are shortsighted, why my ideas might be terrible—which happens quite often—or ways that I can actually improve my ideas.

Aschwanden: But there are structural impediments to that happening in academia.

Van Tongeren: Oftentimes, because of the way that academic positions are structured, the way that tenure is set up, it’s really about defending your belief, explaining and defending, all the time, why you’re right, while telling your students and expressing to them how you’re the expert. But I think that we could do a better job of modeling intellectual humility and rewarding it in the structures of the academy.

Aschwanden: Daryl is talking here about a problem that’s starting to get some attention among scientists—namely, that we expect scientists to act one way, to be humble and open to new ideas and new data, and yet the incentives in science don’t always reward this kind of behavior.

Scientists who publish splashy, novel results get rewarded with tenure and media attention and grants, but a lot of these findings don’t hold up over time, and the attention-grabbing results may not be as important long-term as slower, more incremental research.

The problem of misaligned incentives has sparked some pretty lively discussions and a bunch of navel-gazing within the scientific community.

About 10 years ago, some researchers founded a new institute to help tackle the problem.

Brian Nosek: My name is Brian Nosek. I’m a professor at the University of Virginia and executive director of the Center for Open Science.

Aschwanden: What’s the Center for Open Science?

Nosek: The Center for Open Science exists to increase openness, integrity and reproducibility because those are fundamental aspects of what makes science trustworthy in the realm of trying to discover how things work or how to create knowledge.

Aschwanden: I was at a conference last year where you put up a slide that said, “science is trustworthy because it does not trust itself.” And then you presented it almost like a little mantra: be humble, calibrated, truth-seeking.

Nosek: Yeah, the self-skepticism is often articulated as a fundamental part of why science becomes trustworthy—because it’s always saying, “Well, but what if? Yeah, but what if? Or what about...?”

And so, to be able to do that productively, one has to have a level of humility: I am very confident in what I’m saying, and often I am, but simultaneously, I have to be open to the possibility of being wrong, especially on these hard problems.

Aschwanden: Okay, so how do scientists go about being less wrong?

Nosek: So there’s very specific behaviors that we try to advance among researchers to try to improve the credibility of the claims that we make. One of them is to make precommitments.

Aschwanden: This is the practice of preregistration, where researchers register their study protocols and data-analysis plans ahead of time, before they’ve started the study. Why is this so important?

Nosek: It is not always intuitive why after the fact hypothesizing is not as reliable at testing ideas that we might have had beforehand.

A common way that people describe it is as the Texas sharpshooter fallacy. If you came up to me, and I said, “I am such a good shot. Look at the wall over there,” and you look at the wall, and you see that there are 10 targets, and I hit the bull’s-eye on every single target.

Aschwanden: Right, I’d say, “Wow, Brian, you’re a really good shot.”

Nosek: But then my brother says, “Oh, you might be interested to know, Christie, that he painted on the targets afterwards, rather than beforehand,” you would say, “Oh, that’s not so impressive,” because I was just shooting the wall and then painting the targets afterwards. That’s the difference between predicting things beforehand and painting the targets after the fact.
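[The sharpshooter logic can be made concrete with a quick simulation; this is an editorial sketch, hypothetical and not from the episode: with pure noise, allowing yourself to pick any of many post hoc "targets" produces a spurious hit far more often than committing to one target in advance.]

```python
# Simulate the Texas sharpshooter fallacy with pure-noise "measurements".
# A single prespecified test crosses the ~5% threshold rarely; letting any
# of 10 post hoc targets count as a "hit" inflates the false-positive rate.
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def spurious_hit(n_targets):
    """Draw n_targets standard-normal noise values; a 'hit' is any value
    beyond 1.96, which a single prespecified test exceeds ~5% of the time."""
    return any(abs(random.gauss(0, 1)) > 1.96 for _ in range(n_targets))

trials = 10_000
prespecified = sum(spurious_hit(1) for _ in range(trials)) / trials
post_hoc = sum(spurious_hit(10) for _ in range(trials)) / trials

print(f"false-positive rate, 1 prespecified target: {prespecified:.2f}")
print(f"false-positive rate, any of 10 post hoc targets: {post_hoc:.2f}")
```

With one target fixed in advance the spurious-hit rate sits near 0.05; with ten targets painted after the fact it climbs toward 1 − 0.95¹⁰ ≈ 0.40, which is the inflation preregistration is meant to guard against.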

Aschwanden: How do you think about uncertainty and science?

Nosek: For me, science is a process of reducing uncertainty about the world. And that’s very different from saying that science delivers a set of facts for people to use. The difference is that if we think of science as the purveyor of facts, then we assume that when a scientist tells us, “This is what we found,” that it’s true.

The idea of science as uncertainty reduction is to say, here’s the evidence that I have right now. Here’s how I think we can understand that. Here’s how I would explain it. And we’ll see if it survives as others look at that [and] interrogate the evidence that I provided or acquire new evidence that they come up with or generate an explanation that I hadn’t thought of for the claims that I’m making for my evidence.

Aschwanden: So that’s a very measured approach to science. Does it mean we can’t get excited about new results because they might be wrong?

Nosek: I don’t mind getting excited about new possibilities because that is super fun and interesting. And that’s part of what innovation in research is all about is, “Here’s this thing, and it’s amazing, and wouldn’t it be crazy if...?”

But, of course, the really important part of that sentence is the if. All of those initial innovations, discoveries, exciting findings, especially ones that are counter to what we think is probably true, ultimately probably aren’t true—but occasionally they are.

Aschwanden: It reminds me of that Internet meme: big, if true. Let’s get excited but also know that it might not pan out.

Nosek: That’s, I think, the way that we can embrace the discovery process in science: to get genuinely excited, as long as we can couple it with that recognition of, all bets are off as to whether this is going to survive. And make that part of the story, not just the story that journalists tell but the story that we, as a consuming public, tell about discovery-oriented science, because science occurs on a continuum.

Aschwanden: What does that continuum look like?

Nosek: There’s that early stuff that, you know, we’re just, “Oh, my God, this came out of my lab; I can’t believe it happened; I got to tell you about it.” And that’s very generative. It’s very exciting. It sends science in new directions; it gets people fighting.

Aschwanden: Those are the findings that make news headlines.

Nosek: And then there’s this science that’s maturing over time, that sort of, like, “Oh, we think we got a hang of this. But yeah, there’s things we’re debating about. And there’s certainly parts that have to be unpacked. So it’s a very active area of study.”

Aschwanden: So you get the exciting first finding, and then you keep studying it and figuring out where it applies and all that.

Nosek: And then there’s the science—it’s like, “We’re ready to use this,” like, “You can build a bridge with this; you can build a better battery with this; you can go to the moon with this.” And that’s the science that we have lots of confidence in.

And if you understand what stage you’re in, great, then you should be able to calibrate your confidence accordingly. But if you think all of it is like that stage of “We’re ready to send you to the moon,” then we’re going to get into trouble.

Aschwanden: So you’re saying that we need to be careful to keep the degrees of uncertainty in mind.

Nosek: Yeah, one of the real challenges in this process from going from discovery to knowledge to use is that it’s not just the public that is confused about how much uncertainty there is but it’s scientists themselves.

Aschwanden: Wait—how do scientists get confused about the uncertainty?

Nosek: If I find something new in my lab, I get excited about it. I start to believe it. I start to convince myself about all the reasons that it’s true. And I miss the fact that nope, this could have happened accidentally. It could have been certain decisions that I made as I was going along the process because I’m, I’m human, and I’m engaged in those ideas.

Aschwanden: Right, we all want to think that we’re doing great work. We don’t see what we’ve overlooked. It’s the uncertainty we so easily forget.

Nosek: A lot of the challenges that the Center for Open Science tries to address stem from the fact that humans are the ones doing the science. And our own points of view, our own career prospects, our own desires for certain things to be the way they are will inevitably get involved in how we interpret and create evidence for the things that we’re studying.

And none of that has to be intentional malfeasance. It can all be very natural, implicit factors that are influencing how it is that I can interact with my data and plan my studies that then get me the outcomes that helped me advance my career.

Aschwanden: What are some other ways you work to counteract that or put safeguards in place?

Nosek: What we try to do is create better transparency of the whole process so that you, as a viewer, can see how it is I got to my claims and be able to interrogate and say, “Wait a second, why’d you do it this way? Why do you do it that way? What about this?”

But then also create that transparency for me as the research producer, so that I can understand what I had thought at the beginning might actually be different than what I think now that I’m looking at the data.

Aschwanden: What insights have come from promoting more transparency and preregistration?

Nosek: When we started advocating for preregistration, I would give lectures about why we think that this is a good idea of confronting ourselves after the fact. And every once in a while, someone could come up to me after the talk and say, you know, “I heard you give this lecture about preregistration, like, a year ago. And I was like, whatever, I don’t know. It doesn’t seem like great, but I’ll try it.”

Aschwanden: So the person would go in and preregister their study and write up a plan for how they would analyze the data. And then they’d remember ...

Nosek: “Oh, I have the preregistration; I should go look at that.” They had already analyzed their data. And then, when they went back to the preregistration, they said, “Oh, this is so different from what I thought my study was about.”

They had opened up the data; they were generatively constructing what they thought the research was about and what questions they were asking. But when they went back to see what they wrote down, when they planned it, it was an entirely different study—had a totally different meaning.

Aschwanden: So they’d done the thing that preregistration was supposed to guard against—moving the target around once they started seeing where the bullets land, so to speak.

Nosek: They’re like, “Now I understand why preregistration is important.” It’s because we’re such good storytellers—because narrative is so much a part of how we understand things that once we see data, we immediately start to impose narrative on it. And it’ll be shaped by whatever it seems like we’re seeing in it.

Aschwanden: That just really speaks to how easy it is to get sucked into our own story. One of the issues I always grapple with is that there are the results that come out of a study, but we need a story to help us make sense of the numbers.

And so researchers come up with a story, or an explanation, but it isn’t always correct.

Nosek: Yeah, well, this is really challenging because we want science to be definitive, and numbers feel very definitive and objective. And there are lots of ways to do science with numbers and to get sort of, “Here are the measurements.” But the measurements are evidence; they’re not explanations.

Aschwanden: Right, because we think in stories, not numbers. And as soon as you turn numbers or results into a story, you’re introducing human judgment and human biases.

Nosek: The explanations that we impose on our evidence are always qualitative. It’s always our interpretation. There aren’t quantitative ways to explain why something is observed. And so what that means is there’s a lot of reasoning, sometimes maybe deductive; a lot of it is inductive or abductive.

It’s sort of saying, “How would I make sense of this, given the things that I know and given the other things that we see in the world?” And that’s a very challenging process.

And there are a few places where we can create formal models, meaning you can actually create an algorithm or a mathematical equation to explain, in quotes, what is happening. But much of science is informal.

Aschwanden: How do you keep in mind which parts are results and which parts are the explanation?

Nosek: It gets very fuzzy very quickly. And that’s just kind of the reality. There isn’t a way to escape it, and to recognize that that’s just another area of uncertainty, that explanation comes with, “This is how I make sense of the evidence.” And you might look at it and say, “Well, I don’t disagree with anything in the evidence. What I disagree with is its link to the explanation.”

And then we don’t have an easy way to resolve that except for then trying to design studies where we see your explanation might anticipate outcome A [and] my explanation would anticipate outcome B. And if we can come to some agreement about that, then we now have a way to distinguish those explanations. But that ideal happens very rarely in science.

Aschwanden: How would you describe the kind of mindset that scientists need to have so that they’re not falling too in love with their findings?

Nosek: Yeah, the way that we phrase it is that the priority for researchers should be on getting it right, not being right. The prioritization of getting it right means that if you say, “I think I see something wrong with your research,” I should say, “Oh, really? What is it? Tell me, because my goal is to make sure that it’s right.”

Aschwanden: It sounds so easy, but can it work in real life with fallible humans?

Nosek: This kind of ethos is present in a lot of different places. The one that I draw inspiration from is the open-source software community that are building software tools for community benefit.

You know, developers in this community write their code out in the open, and other developers look at their code and say, “Oh, I found a bug. And here’s a fix for your bug.” And the original coder doesn’t get upset, like, “Oh, I can’t believe you found a bug in my code.” They say, “Oh, my gosh, thank you. You found a problem, you fixed it ... or maybe you didn’t fix it, you just told me so that I could fix it, but nevertheless now my code is better.”

Aschwanden: At the end of the day, though, there’s still some uncertainty. What’s the best way to manage that?

Nosek: For calibration, the real goal is how is it that we can try to represent that level of uncertainty? When we make initial discoveries, what we tend to default to is a very general explanation, right?

I see that when I prime you with these pictures of vegetables, you eat more vegetables; then I suddenly have a very general theory of, ah, this very subtle priming of types of foods gets people to eat those foods, whereas it may be a much more limited claim that is actually true.

Aschwanden: We’re always looking for the one universal law that explains everything, but these can be hard to find, especially in social science.

Nosek: What we are is constantly pursuing better explanations of the world. All scientific models are flawed in some way because they’re simplifications of the complexity of reality.

Aschwanden: Reminds me of that old saying in statistics, “All models are wrong; some are useful.”

Nosek: Yeah, the way that science makes progress is by improving those models. And they need to be simplifications of reality because reality is too complex. So we develop these ways of explaining what’s happening in the world. And those simplifications need to be right enough to be able to predict and ultimately be able to explain why it is things happen in the world.

And some of those models are incredibly successful, right? Newton’s laws of mechanics. Einstein’s theory of general relativity—they predict lots and lots of things very, very well.

Aschwanden: Here we are, back to Newton and Einstein. We’ve come full circle. But the questions I’m left with are: Given that every scientific idea contains uncertainty, how can we really know things? How can we make decisions, some of them life-or-death, in the face of uncertainty?

That’s what we’ll explore in the next and final episode of Uncertain.

Our show is produced by me, Christie Aschwanden, and Jeff DelViscio. Our series art is by Anaissa Ruiz Tejada. Our beautiful Particle/Wave song was composed and performed by Christine Laskowski. All other series music is from Epidemic Sound.

Funding for this series was provided by UC Berkeley’s Greater Good Science Center. It’s part of the Expanding Awareness of the Science of Intellectual Humility Initiative, which is supported by the John Templeton Foundation.

This is Uncertain, a podcast from Scientific American. Thanks for listening.

