Users can create and query AI chatbots portraying real individuals (such as Tim Cook and Elon Musk) and well-known fictional characters. On Character.AI, for instance, a virtual Master Yoda will answer questions about current events while speaking from his swamp home of Dagobah ("Silicon Valley Bank is currently insolvent. Many lost their pensions.").
Hermione Granger of Harry Potter will never mention He Who Must Not Be Named, but she will offer her opinions of President Joseph Biden ("I think he sincerely cares about this country") and Governor Ron DeSantis ("a person of integrity and character").
Meanwhile, the companies that own those works are left to wonder whether brash online entrepreneurs are once again encroaching on their territory.
Character.AI, which was recently valued at $1 billion in a fundraising round, is not the only website muddying the already hazy line between copyrighted work and legally permitted modifications of that work, known as fair use. OpenAI's ChatGPT can also convincingly impersonate Yoda, as well as Rick from the animated series Rick and Morty ("Oh my god, the Ukraine war? Morty, that's a tough one. It's a confusing muddle.").
Among the so-called AI image generators, which let users create pictures from a text prompt, sites built on Stability AI's open-source engine will readily produce works featuring copyrighted characters, such as Batman and Spider-Man fighting amid the wreckage of a cyberpunk world. Users can even ask for the illustration in a particular illustrator's distinctive style.
The creators of these AI services have already been the targets of intellectual property lawsuits. Getty Images sued Stability AI for compiling its image database by downloading millions of pictures from the web, many of them copyrighted. A San Francisco-based class action firm representing three artists filed a similar suit against Stability AI, DeviantArt, and Midjourney for training their models on billions of copyrighted images. News publishers have also criticized OpenAI for using their articles to train its algorithms.
Yet those criticisms center on the inputs, the databases used to train these new AI engines. It is less clear how rights holders will treat the output, such as Yoda's musings, which Character.AI has appropriated without the consent of Lucasfilm, a division of Walt Disney Co., and other rights holders.
It is also unclear how the courts will view these novel questions. Is the virtual Yoda akin to fan fiction and the geek who dresses up for Comic-Con? Or is it an unauthorized derivative work that could one day draw in real customers and earn real money?
There are already signs that online businesses are being cautious. DALL-E, the image generator created by Microsoft Corp.-backed OpenAI, refuses to produce images of copyrighted characters. Character.AI places boundaries around what its chatbots can and cannot say, to the dismay of some Reddit users. The company's terms of service also require users to hold the "right, title, and interest in" the content they submit to the site, though the site does not appear to police that requirement.
Conflicts between rights holders and online businesses date back many years. Two decades ago, Napster made it possible to download music for free; later, YouTube made it simple to post copyrighted movies and TV episodes. Along the way, Google decided to scan the books in university libraries and post them online.
The tech and media industries fought protracted legal battles over these issues before, in essence, reaching negotiated settlements and agreeing to split the enormous profits from the sale of copyrighted songs, movies, and books.
So, is everyone more enlightened this time around? "I think it may well be the case that people understand there is actual technology we want to appreciate and don't want to eliminate, and they will look for ways to accommodate it," said Mark Lemley, a Stanford University law professor and IP lawyer whose clients include Stability AI.
Yet, as he concedes, "the wrong case" or "an adverse early judgement" could "bring all advancement in the business to a screeching halt."