
BUSINESS & LEGAL BLOG: Artificial intelligence, machine-learning and copyright...

3 December 2020

© Rowan Fee

We recently made a submission to a call for views from the Intellectual Property Office (IPO) about Artificial Intelligence (A.I.) and copyright - you can read that here, if you wish. A.I. is already making its presence felt in many aspects of image-making, most notably in smartphones, but also in software like Adobe's Photoshop, where one can 'create' additional image elements such as skies and textures at the click of a mouse. There is currently no consensus on the definition of A.I., and the IPO's reliance on a single definition should not, at this point, be taken as indicative of the government's interpretation of the term.
This raises questions about copyright in these A.I.-assisted and A.I.-generated works - do they attract protection, and should they? Should Adobe have a copyright stake in your image when you've used a sky generated through their A.I.? Clearly not, but you can see how this could get messy very quickly.

The other issue stems from the development of A.I. platforms and systems. In order for an A.I. to develop, it needs to be 'trained', and part of that training process involves feeding the system a vast amount of information, which will necessarily include copyright-protected works. If you want an A.I. to be able to produce 2-D artwork that looks like Van Gogh's, you first have to show it what Van Gogh's oeuvre looks like. Whilst this approach isn't a problem for works out of copyright, it certainly is for those works that are still protected. The licensing model that we promote as photographers (and that is used by all sorts of other creators) is flexible and adaptable, and there is no reason why it cannot be applied here: any A.I. that requires training (the machine-learning phase, if you like) on copyright-protected works should license the use of those works, and the rights-holders should be properly compensated.

It is clear from both EU and US case law and legislation that copyright is protection for works of human endeavour and creativity. The Infopaq case in the EU confirmed that the matter of copyright revolves around the 'intellectual creation of the author' and the notorious 'Monkey Selfie' case (Naruto v Slater) in the US confirmed that the US Copyright Act did not expressly authorise animals to file copyright infringement claims.

We agree that copyright exists to protect human creativity and therefore that works of A.I. origin should not be afforded the same protections. That is not to say they shouldn't be afforded any protection, though, and that is where the discussion is headed next - what right, if any, should exist to protect A.I. works? Whatever the answer, we also believe that a strengthening of the moral right to be identified as the author (the attribution right) is long overdue, and with it comes an opportunity to require that A.I.-generated works are clearly identifiable as such. 'Deep fakes' and 'fake news' have become all too prevalent over the past couple of years, and we need to be aware that the spread of A.I. across a range of platforms is likely to compound the problem.
If we are successful in this, it would be a 'win' for all creators, as the attribution right has been compromised ever since its inception.

We expect further consultations from the IPO as A.I. development accelerates and, in the context of our exit from the EU, the UK government is likely to be keen to offer incentives for businesses involved in A.I. development to set up here. All the more reason why a close eye needs to be kept on A.I.
