
Every once in a while, I get nervous about artificial intelligence taking my job. I’m a decent enough writer, but so is AI. Sure, it can’t write with the heart, soul, or dashing good looks that I do, but we’re seeing people lose writing gigs to AI nonetheless.

That’s happening in a lot of industries, and it’s scary.

But then I see stories like this one and I think to myself, “Alright, we’ve got some time.”

According to Popular Science, researchers at Dartmouth Health demonstrated why everyone needs to remember that AI is a tool and doesn’t have all the answers.

They showed an AI program 25,000 X-rays of people’s knees from the National Institutes of Health’s Osteoarthritis Initiative. From there, they asked the AI program to find traits in the X-rays that could help determine if the person whose knee it was drank beer or ate refried beans.

That is, of course, completely ridiculous, but remember, this is AI, and it does what it's told. So it found some traits that it claimed could do just that.

“The models are not uncovering a hidden truth about beans or beer hidden within our knees,” researchers wrote.

Instead, they said the results demonstrate an idea called "algorithmic shortcutting," in which an AI program latches onto patterns in the data that have nothing to do with the question it was actually asked.

Like, y'know, the effects of drinking beer and eating refried beans showing up in one's knees (by the way, I want to know what made them pick refried beans of all foods for this exercise).

“Shortcutting makes it trivial to create models with surprisingly accurate predictions that lack all face validity,” they said.
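If you want a feel for how a model can "cheat" like that, here's a minimal sketch in Python. It is not the researchers' code, and the confounder I use (which clinic's scanner took the image) is my own made-up example: the "beer drinker" label has nothing to do with the fake knee features, but it happens to correlate with the clinic, and the clinic leaves a faint fingerprint on the image.

```python
# Minimal sketch of "algorithmic shortcutting" (illustrative only, not the study's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confounder: which of two clinics took the X-ray.
clinic = rng.integers(0, 2, size=n)

# The "beer drinker" label has no connection to knee anatomy,
# but clinic 1's patients happen to drink beer more often.
beer = (rng.random(n) < np.where(clinic == 1, 0.7, 0.3)).astype(int)

# Fake "image features": random knee texture, plus a subtle brightness
# offset that depends on the clinic's scanner settings.
knee_texture = rng.normal(size=(n, 20))
scanner_offset = clinic[:, None] * 0.8 + rng.normal(scale=0.5, size=(n, 1))
X = np.hstack([knee_texture, scanner_offset])

X_train, X_test, y_train, y_test = train_test_split(X, beer, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model looks "surprisingly accurate" (around 70%) even though beer drinking
# was never written into the knee itself -- it just learned the scanner shortcut.
print("accuracy:", model.score(X_test, y_test))
```

The point of the toy example is the same as the researchers' warning: the prediction can score well without the knee telling you anything at all.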

We’re leaning more and more into artificial intelligence in our daily lives, so it’s important to remember this kind of thing. Sure, AI is fun to mess around with to create images of Richard Nixon surfing while eating a burrito, but it does not have all the answers, even if it seems like it does.