Ingram’s Filters Mistake AI Discussion for AI Deception

July 26th, 2024

Ingram advert called "Keep the Lights On: Banned Books Week. 3% discount."

Ingram’s stance on book censorship is clear: like most publishers and distributors, it is opposed. As it once posted to Facebook during Banned Books Week, “Censorship Leaves Us in the Dark – Keep the Light On.”

At the same time, like most book publishers and distributors, it has to make prudent decisions about what books it will accept on its platform. It prohibits blank books, summaries “without permission from the original author,” “content that mirrors/mimics popular titles” and “content that is freely available on the web (unless you are the copyright owner of that content).” Most of us support these seemingly reasonable restrictions.

Ingram also prohibits “content created using automated means, including but not limited to content generated using artificial intelligence or mass-produced processes.”

As my new book on AI, The AI Revolution in Book Publishing: A Concise Guide to Navigating Artificial Intelligence for Writers and Publishers, doesn’t fall into any other forbidden category, I have to assume that someone, or something, at Ingram has flagged it as having been “generated using artificial intelligence.”

Cover of book called "The AI Revolution in Book Publishing"

The restriction is a little vague. It can be read narrowly, as covering only books created entirely with AI, or broadly, as covering any book in which any part was “created using automated means.”

It’s a delicious irony for me.

Parts of my new book were indeed created using AI. For example, in the chapter called “Software Paradigms,” I walk the reader through an exercise of chatting with Claude.ai, as a way of illustrating how AI chat interfaces differ from what we’re accustomed to in software like Microsoft Word.

Later in the book, there’s a section describing how chat AI, despite its deserved reputation for hallucinating, can also be used to fact-check a manuscript.
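Purely as an illustration of that idea, here is a minimal sketch that asks a chat model to flag the checkable claims in a passage, using Anthropic’s Python SDK (the model name and prompt wording are assumptions for this example, not the book’s procedure):

```python
# Sketch: ask a chat model to identify and assess factual claims in a passage.
# Model name and prompt wording are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

passage = (
    "Ingram services roughly 40,000 independent bookstores, "
    "online stores, and libraries."
)

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "List each factual claim in the passage below, say whether it "
            "appears accurate, and flag anything you cannot verify:\n\n" + passage
        ),
    }],
)

print(message.content[0].text)
```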

I also describe how, as an exercise while publishing the book, I used ChatGPT to create all the alt-text for the images of the born-accessible ebook.
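For anyone who would rather script that kind of task than work in a chat window, a minimal sketch using OpenAI’s Python SDK might look like this (the model name, prompt wording, and file path are illustrative assumptions, not my exact process):

```python
# Sketch: generate alt-text for an ebook image via the OpenAI Python SDK.
# Model name, prompt, and file path are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def alt_text_for(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt-text (one or two sentences) for this "
                         "image, suitable for a screen reader in an accessible ebook."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(alt_text_for("images/figure-01.png"))
```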

Further, I explain that AI will be used to translate the book into thirty-one languages, and is being used in the creation of the audiobook(s).

There’s also a short chapter asking “Can AI be detected in writing?” A variety of software tools claim to reliably detect AI-generated text. I point out that academic studies evaluating these tools find them unreliable: AI-generated text slips through, and, worse, text that was not generated by an AI is falsely labeled as contaminated. As has happened to me.

Ingram banner indicating a "Content Integrity" problem

I have apparently committed a CIN: I’ve received an email from Ingram bearing their foreboding “Catalog Integrity Notice.” “After review,” it says, “one or more of your titles are believed to violate our Catalog Integrity Guidelines. The below title has been removed from distribution.”

There’s a link to appeal. I did so immediately, and received an equally prompt response. “Please allow up to 14 business days for your appeal to be reviewed and addressed.” According to posts on Reddit, this appeal can, in fact, take months. If you’re going to banish a book from your platform, the appeal process would be well-served by efficiency.

The book’s publication date is next Tuesday, July 30.

You can find it now on Leanpub. And on Amazon. Though not at the “40,000 independent bookstores, online stores, libraries, etc.” serviced by Ingram.

I see this as part of the publishing industry’s teething pains with the use of AI. Avi Staiman looks at it from another angle on The Scholarly Kitchen blog: “publishers are still struggling to figure out how to address the new issues and challenges that these AI tools present.”

One day we will look back upon these incidents and laugh.

PS: Amazon now has a far more nuanced approach to this challenge. Per the latest Kindle guidelines:

Artificial intelligence (AI) content (text, images, or translations)

We require you to inform us of AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP. AI-generated images include cover and interior images and artwork. You are not required to disclose AI-assisted content. We distinguish between AI-generated and AI-assisted content as follows:

  • AI-generated: We define AI-generated content as text, images, or translations created by an AI-based tool. If you used an AI-based tool to create the actual content (whether text, images, or translations), it is considered “AI-generated,” even if you applied substantial edits afterwards.
  • AI-assisted: If you created the content yourself, and used AI-based tools to edit, refine, error-check, or otherwise improve that content (whether text or images), then it is considered “AI-assisted” and not “AI-generated.” Similarly, if you used an AI-based tool to brainstorm and generate ideas, but ultimately created the text or images yourself, this is also considered “AI-assisted” and not “AI-generated.” It is not necessary to inform us of the use of such tools or processes.

You are responsible for verifying that all AI-generated and/or AI-assisted content adheres to all content guidelines, including by complying with all applicable intellectual property rights.