March 23rd, 2024
Despite widespread AI use, few publishers have publicly defined their AI policies.
The extent of the problem loomed large in my lunch conversation today with publishing industry veteran William Gunn. Of course we talked about AI and book publishing. William has been working in and around AI for several years, most recently with SciSpace, a startup in San Francisco, and on the campaign for the EU AI Act. He’s also an expert in helping organizations communicate complex technical topics to a non-expert public.
We kicked around ideas about how publishers can communicate their approaches to AI to their public. The term “their public” is slippery here, when you consider the different publics addressed by trade, scholarly and educational publishers. For trade publishers the most important audience is authors and their agents, and AI is a sensitive topic for that public, to say the least. Scholarly publishers face different obstacles when they weigh AI’s promising impact on research against its more problematic impact on converting research into narrative.* For educational publishers, establishing policies is tricky, as AI’s encroachment on the practice of learning is multifaceted and ongoing.
I think that publishers face two big challenges as they move forward with AI technologies. The first is to develop a company position on how to approach AI generally and how to incorporate it into their workflows. The second is to communicate that position, clearly and unambiguously, to their constituents.
Developing a public position on AI is something that a few of the larger publishers have addressed. But not many, and particularly not trade publishers.
As a random example, HarperCollins has a page on its site outlining its “Values & Commitments.” It discusses diversity & inclusion, philanthropy, sustainability and more. While the company states that “at HarperCollins, authors and their work are at the center of everything we do,” there’s nary a word about what it’s using AI for, whether it touches authors’ work, or what its expectations are for what its authors might be doing with AI.
We know that the company is working with AI tools — company CEO Brian Murray said as much at the London Book Fair last year.
I bet there’s at least an informal or draft policy on AI within HarperCollins. And I’m sure that many of HarperCollins’ authors and agents are now asking questions about the policy. But the policy has not yet been communicated to a wider public, including authors who might want to consider being published by the company.
In contrast, Elsevier is verbose on the topic. The “Elsevier Policies” section of its website includes statements on “Responsible AI Principles,” “Text and Data Mining,” and “The use of generative AI and AI-assisted technologies in writing for Elsevier.”
The publisher policies I have seen are mostly flawed. Some of them are in fact policies directed externally, AT authors, with a range of admonitions about what is acceptable practice (not much) and what is not acceptable (lots). O’Reilly’s “AI Use Policy for Talent Developing Content for O’Reilly” goes on for pages and pages, with esoteric guidance, such as “DO NOT use any OSS GenAI Models that produce software Output that is subject to the terms of a copyleft or network viral open source license.”
The very few internal publisher policies that I’ve seen are conservative, excessively so. These publishers reacted too quickly to the range of perceived threats, and to their authors’ anxieties, and have hamstrung their own ability to engage robustly with this fast-developing, fast-changing technology.
It’s a given that they will use AI “responsibly,” whatever that means. It’s a given that they have the utmost concern for authors’ intellectual property and for aggressively protecting authors’ copyrighted work. (Though, of course, these broad principles must be publicly stated, and often reiterated.)
But what else?
- Will they allow AI to have a role in editorial acquisitions? (They’d be fools not to.)
- Will they allow AI to have a role in developmental editing, line editing and copyediting? (They’d be fools not to.)
- Will they allow AI to have a role in determining print runs and allocations?
- In creating accessible ebook files?
- In aiding audiobook creation in cases where it’s not economically realistic to hire talented human narrators?
- In aiding foreign language translation into smaller markets where rights could never be sold?
- In developing marketing material at scale?
- In communicating with resellers?
- In author compensation calculations?
If so, they must make this clear and clearly explain the thinking behind these policies. They must be brave in countering the objections of many authors at this time of fear and doubt.
You can do it yourself, put it off, or contact William for help.
* Note: As I was finishing this effort I discovered an excellent post from Avi Staiman called “Dark Matter: What’s missing from publishers’ policies on AI generative writing?” He’s coming from the perspective of researchers publishing in scholarly journals, but many of the points he makes apply across the publishing spectrum.
Note: April 2, 2024: In its March 7, 2024 earnings call, publicly traded John Wiley & Sons was broadly forthcoming about its four-part AI strategy. Matt Kissner, Interim President & Chief Executive Officer, said that they “think of the opportunity in four areas,” which are content licensing, product & publishing innovation, business model innovation and employee productivity. Authors were mentioned once, in reference to an AI-powered article matching engine that will help authors “get published faster and in the right journals.”
You can find the presentation slides and the earnings call transcript here.
April 4, 2024: Kester Brewin suggests that writers need AI transparency statements included with their books.