This week, the scientific journal Frontiers in Cell and Developmental Biology published research featuring bogus imagery made with Midjourney, one of the most popular AI image generators.

The open-access paper explores the relationship between stem cells in mammalian testes and a signaling pathway responsible for mediating inflammation and cancer in cells. The paper’s written content does not appear to be bogus, but its most eye-popping aspects are not in the research itself. Rather, they are the inaccurate and grotesque depictions of rat testes, signaling pathways, and stem cells.


The AI-generated rat diagram depicts a rat (helpfully and correctly labeled) whose upper body is labeled as “senctolic stem cells.” What appears to be a very large rat penis is labeled “Dissilced,” with insets at right to highlight the “iollotte sserotgomar cell,” “dck,” and “Retat.” Hmm.


According to Frontiers’ editor guidelines, manuscripts are subject to “initial quality checks” by the research integrity team and the handling editor prior to the peer-review process. In other words, many eyes supposedly reviewed this work before the images were published.


To the researchers’ credit, they state in the paper that images in the article were generated by Midjourney. But Frontiers’ site for policies and publication ethics notes that corrections may be submitted if “there is an error in a figure that does not alter the conclusions” or “there are mislabeled figures,” among other factors. The AI-generated imagery certainly seems to fall under those categories. Dingjun Hao, a researcher at Xi’an Jiaotong University and co-author of the study, did not immediately respond to Gizmodo’s request for comment.

The rat image is glaringly wrong, even if you’ve never cut open a rat’s genitals. But the other figures in the paper could pass as credible to the untrained eye, at least at first glance. Yet even someone who has never opened a biology textbook would see, upon further scrutiny, that the labels on each diagram are not quite English—a telltale sign of AI-generated text in imagery.


An AI-generated diagram.

The article was edited by an expert in animal reproduction at the National Dairy Research Institute in India, and it was reviewed by researchers at Northwestern Medicine and the National Institute of Animal Nutrition and Physiology. So how did the wacky images get published? Frontiers in Cell and Developmental Biology did not immediately respond to a request for comment.


The OpenAI text generator ChatGPT is proficient enough to get farkakte research past the supposedly discerning eyes of reviewers. A study conducted by researchers at Northwestern University and the University of Chicago found that human experts were duped by ChatGPT-produced scientific abstracts 32% of the time.

So even though the illustrations are clearly nonsense cosplaying as science, we shouldn't overlook AI engines' ability to pass off BS as real. Crucially, those study authors warned, AI-generated articles could cause a scientific integrity crisis. It seems like that crisis may be well underway.


Alexander Pearson, a data scientist at the University of Chicago and co-author of that study, noted at the time that “Generative text technology has a great potential for democratizing science, for example making it easier for non-English-speaking scientists to share their work with the broader community,” but “it’s imperative that we think carefully on best practices for use.”

Five AI-generated diagrams of stem cells.


The increased popularity of AI has allowed scientifically inaccurate imagery to make its way into scientific publications and news articles. AI images are easy to make and often visually compelling, but the tools are also unwieldy, and it is unsurprisingly difficult to convey all the nuance of scientific accuracy in a prompt for a diagram or illustration.

The recent paper is a far cry from the bogus papers of years past, a pantheon that includes such hits as “What’s the Deal With Birds?” and the Star Trek-themed work “Rapid Genetic and Developmental Morphological Change Following Extreme Celerity.”


Sometimes, a paper that gets through peer review is merely funny. Other times, it's a sign that "paper mills" are churning out so-called research with no scientific merit. In 2021, Springer Nature was forced to retract 44 papers in the Arabian Journal of Geosciences for being total nonsense.

In this case, the research may have been OK, but the whole study is thrown into question by the inclusion of Midjourney-generated images. The average reader may have a hard time considering signaling pathways when they're still busy counting exactly how many balls the rat is supposed to have.


More: ChatGPT Writes Well Enough to Fool Scientific Reviewers
