16 August, 2022

These Shoes are Made for Selling - AG Szpunar Opines on Intermediary Liability and the 'Use' of a Trademark for Online Marketplaces

Intermediaries in e-commerce are a near necessity for many online sellers of products, but they can produce headaches where potentially unauthorized goods that infringe a company's IP rights are sold through them. The status of intermediaries' liability has been slowly evolving in the courts, and a hotly anticipated CJEU decision on these issues is close on the horizon that should clear matters up significantly in Europe. Before this, however, Advocate General Szpunar has had his say and issued his opinion earlier this summer (the opinion only becoming available in English very recently). The opinion sets the scene for the CJEU and could be a glimpse into the CJEU's position on the matter in the near future.

The case of Christian Louboutin v Amazon Europe Core Sàrl concerned infringement proceedings launched by Christian Louboutin against Amazon, under which he alleged that: (i) Amazon is liable for infringement of his trade mark; (ii) it should cease the use, in the course of trade, of signs identical to that trade mark in the EU; and (iii) he is entitled to damages for the harm caused by the unlawful use at issue. Amazon unsurprisingly challenged this position, arguing that it is merely an operator of an online marketplace and cannot be held liable for the actions of its sellers. The Luxembourg courts referred the matter to the CJEU for clarification, and the case is now approaching its crescendo, the decision itself, following AG Szpunar's opinion.

The case combines two separate actions in Luxembourg and Belgium involving the two parties. 

AG Szpunar noted that they both concerned the interpretation of the concept of ‘use’ for the purposes of Article 9(2) of Regulation 2017/1001. In essence, the referring courts are asking "…whether Article 9(2)… must be interpreted as meaning that the operator of an online sales platform must be regarded as using a trade mark in an offer for sale published by a third party on that platform on account of the fact that, first, it publishes both its own commercial offerings and those of third parties uniformly without distinguishing them as to their origin in the way in which they are displayed, by allowing its own logo as a renowned distributor to appear on those advertisements, and, secondly, it offers third-party sellers the additional services of stocking and shipping goods posted on its platform by informing potential purchasers that it will be responsible for those activities". The courts also asked "...whether the perception of a reasonably well-informed and reasonably observant internet user has an impact on the interpretation of the concept of ‘use’".

As a primer, Article 9(2) provides that the proprietor of an EU trademark is entitled to prevent all third parties not having their consent from using, in the course of trade, any sign which is identical with the trademark. However, what amounts to 'use' in this context is central to determining potential liability under the provision for any would-be infringer.

AG Szpunar then moved on to discuss the current definition of 'use' under Article 9. Some of the main decisions here are Daimler AG v Együd Garage Gépjárműjavító és Értékesítő Kft, Coty Germany GmbH v Amazon Services Europe Sàrl, Google France SARL and Google Inc. v Louis Vuitton Malletier SA, and L’Oréal SA v eBay International AG.

The CJEU has found that, in the context of intermediaries, the term 'using' involves active behavior and direct or indirect control of the act constituting the use, and that the act of use presupposes, at the very least, that the third party uses the sign in its own commercial communication. According to the AG, the latter point is key: without it, any such use would be lacking.

In terms of the commercial communication aspect, the CJEU has determined that a referencing service provider does not use a sign in its own commercial communication, since it only allows its clients themselves to use the relevant signs, with the result that it "…merely creates the technical conditions necessary for the use of a sign".

Similarly, a marketplace operator does not use a sign in its own commercial communication when it merely provides a service enabling its customers to display that sign in their commercial activities, and the stocking of goods bearing the relevant sign is not use of that sign in the operator's own commercial communication where the operator has not itself offered the goods for sale or put them on the market.

The AG does highlight that this line of case law is not entirely clear and needs to be made more specific and focused to make better sense.

The AG very cleverly shifted the focus of the definition away from the actual communicator, i.e. the platform, and towards the recipient. He reasoned that the commercial communication of an undertaking serves to promote goods or services to third parties. Put simply, a sign forms part of an intermediary's commercial communication where the intermediary adopts the sign to such an extent that the sign appears to be part of its activity.

This adoption condition still has to be looked at from the perspective of the recipient, i.e. the user of the marketplace, to determine whether the sign is perceived by that user as being integrated into the platform's commercial communication. The benchmark user for this assessment is a "reasonably well-informed and reasonably observant internet user".

AG Szpunar then moved on to discussing Amazon's particular business model and whether it would be 'using' a trademark in the course of its business.

The first question referred to the CJEU concerned, primarily, whether "…the activity of a marketplace operator of publishing commercial offerings from third-party sellers on its website where those offers for sale display a sign which is identical with a trade mark" amounts to use of that sign by the operator.

As noted above, this activity would not constitute a 'use' of a trademark (as specifically decided in the eBay case); however, Amazon operates differently from eBay as a marketplace. Amazon provides a much more comprehensive offering, where third-party sellers can post their own advertisements for goods in addition to Amazon's own. In those instances, the Amazon logo is merely present to indicate to consumers that the ad was posted by a third-party seller.

In those instances, the AG noted, this would not lead a reasonably well-informed and reasonably observant internet user to perceive the signs displayed on the advertisements of third-party sellers as part of Amazon’s commercial communication. The same would apply to the integration of third-party ads by Amazon into specific shops on the platform, a type of organization that is integral to the functioning of these online platforms.

The referring courts also asked whether "…the fact that Amazon offers a ‘comprehensive’ service, including providing assistance in preparing advertisements and the stocking and shipment of certain goods, has an impact on the classification of the use of a sign displayed on those advertisements by Amazon".

To answer this properly, the AG highlighted that Amazon's activities must be considered as a whole to determine whether the company's involvement might be classified as use of the sign. The AG thought that this would not be the case.

Amazon's involvement, while giving it more control over the sale of a product that may infringe a trademark, is only for the benefit of the consumer, ensuring prompt and guaranteed delivery after a product is purchased. In the AG's view this is not sufficient to establish that Amazon has used the sign at issue in its own commercial communication.

This position ties in with the decision in Coty, where the CJEU determined that "…a sign cannot be regarded as having been used in the marketplace operator’s own commercial communication where that operator stocks goods bearing a sign on behalf of a third-party seller without itself pursuing the aim of offering those goods or putting them on the market".

The AG also noted that Amazon's posting of the ads at issue would not amount to 'use' of the sign in question either.

To round things off, the AG summarized his position, answering the questions referred as follows: "…Article 9(2)… must be interpreted as meaning that the operator of an online sales platform cannot be regarded as using a trade mark in an offer for sale published by a third party on that platform on account of the fact that, first, it publishes both its own commercial offerings and those of third parties uniformly, without distinguishing them as to their origin in the way in which they are displayed, by allowing its own logo as a renowned distributor to appear on those advertisements, both on its website and in the advertising categories of third-party websites and, secondly, it offers third-party sellers the additional services of assistance, stocking and shipping of goods posted on its platform by informing potential purchasers that it will be responsible for the provision of those services, provided that such elements do not lead the reasonably well-informed and reasonably observant internet user to perceive the trade mark in question as an integral part of the operator’s commercial communication".

The AG's opinion really sets the scene for the CJEU and seems very sensible in protecting the functionality of online marketplaces like Amazon. If the court were to see things otherwise, this would massively impede the functionality of these services and, without heavy moderation by the platforms themselves, would in fact make them impossible to operate. This should not be seen as carte blanche for the platforms either, but could set sensible, clearer boundaries for platforms, brands and third-party sellers alike, though it remains to be seen what the CJEU's position will be.

03 August, 2022

The EPO Are Not 'Board' of AI Yet - EPO Board of Appeal Weighs in on Whether Artificial Intelligence Can Be an Inventor

The progression of AI in the world of IP rights keeps trundling along, with many national and international bodies having already weighed in on whether AI can be an inventor for the purposes of patent law (discussed on this blog before). The answer seems to have been a resounding "no". However, following an earlier decision by the EPO, the EPO Board of Appeal has now issued its decision in the ever-continuing DABUS saga, which sheds some further light on the matter and shows that the current legislative framework does seem robust enough on the matter of AI (though this writer would think that specific legislative updates are still a must).

The case of Designation of inventor/DABUS (case J 0008/20) concerned two patent applications filed at the EPO (namely EP18275163 and EP18275174). The applications didn't list an inventor for the patents but, following a clarification by the applicant, Stephen Thaler, the inventor was noted to be "DABUS", an AI created by Mr Thaler. The EPO Receiving Section rejected both applications on the grounds that the designated inventor, DABUS, didn't meet the requirements for an inventor, namely that an inventor must be a 'natural person'. Mr Thaler subsequently appealed both decisions to the Board of Appeal.

The appeal was brought on three separate grounds: (i) whether an applicant can designate an entity which is not a natural person as the inventor under Article 81 of the European Patent Convention; (ii) whether, to comply with the EPC, it is enough for an applicant to file any declaration irrespective of its content, or whether the declaration needs to satisfy specific requirements; and (iii) whether and to what extent the EPO can examine and object to statements filed under Article 81.

To set the scene, Article 81 requires that an inventor be designated for all patent applications, and any deficient designation of an inventor can be rectified under Rule 21 of the EPC. Further, Article 60 of the EPC notes that "[t]he right to a European patent shall belong to the inventor or his successor in title" (the latter being the legal successor in title to the rights), and the rights of any employee, if they are the inventor, will be determined by the national legislation of the state where they are employed.

The Board of Appeal first dealt with the main request in the matter, i.e. whether an AI can be an inventor of a patent. The Board firmly rejected this point, determining that "[u]nder the EPC the designated inventor has to be a person with legal capacity", thus adopting a simpler, ordinary-meaning approach to deciding what an 'inventor' is. What the provisions around designating an inventor seek to do is to confer and protect the rights of the inventor, facilitate the enforcement of potential compensation claims provided under domestic law, and identify a legal basis for entitlement to the application. The Board clearly set out that "[d]esignating a machine without legal capacity can serve neither of these purposes".

The Board also noted that there is no practice or agreement that would allow it to overcome this specific language. The appellant also advanced a fairness argument in favour of allowing AI to be an inventor of a patent; however, the Board immediately rejected this argument as well.

The Board therefore confirmed that the Receiving Section's decision was correct in raising an objection to the applications. 

In summarizing its position, the Board decided that "…the main request does not comply with the EPC, because a machine is not an inventor within the meaning of the EPC. For this reason alone it is not allowable".

The Board then turned to the auxiliary request, i.e. whether the first sentence of Article 81 does not apply where the application does not relate to a human-made invention. The Board swiftly agreed that that part of Article 81 would not apply in such an instance: the provisions were drafted to confer specific rights on the inventor, so where no human inventor can be identified, Article 81 does not apply.

The appellant had provided a statement with this request noting that he had derived the right to the European patent as owner and creator of the machine. The Board disagreed and stated that the statement didn't bring him within the remit of Article 60, since there was no legal situation or transaction which would have made him the successor in title of an inventor.

The Board did consider two requested referrals to the Enlarged Board; however, both referrals were rejected.

In discussing why no referral was needed, the Board turned to matters surrounding Article 52 of the EPC, which specifies that an invention which is novel, industrially applicable and involves an inventive step is patentable. The appellant argued that the provision only relates to human-made inventions, and the Board agreed with this position. How a particular invention was made doesn't play a part in the patent system, so it is possible that AI-generated inventions too are patentable under Article 52. However, this position cannot easily co-exist with Article 60, as you would have inventions that are patentable under Article 52 but for which no rights are provided under Article 60.

The Board also considered the requirement to file a statement of origin of the right to a European patent where the inventor and the applicant differ. The Board noted that this is only a formal requirement, and only informs the public of the origin of the right, but that it would be disproportionate to deny protection to patentable subject-matter for failing to fulfil such a formal requirement. 

Additionally, the Board noted that, as the lawmakers only had human-made inventions in mind when drafting Article 60, it is possible that no statement on the origin of the right is required where the application concerns an invention developed by a machine.

The crux of the matter, in light of the above, is that there was nothing preventing Mr Thaler from listing himself as the inventor in both applications. The Board highlighted that there is no case law "…which would prevent the user or the owner of a device involved in an inventive activity to designate himself as inventor under European patent law".

The decision is yet another among many highlighting the issues of AI inventorship and why there is simply a need for the law to evolve to accommodate AI inventions somehow. However, as noted by the Board, the creator of the AI could be listed as the inventor, bypassing this issue; yet it seems that Mr Thaler was dead set on setting a new precedent and allowing for direct inventorship by AI systems. This presents the problem of enforcement and/or remuneration, but these would undoubtedly be handled by the creator themselves, rendering the problem almost moot in this writer's view. We will see if the case progresses to the Enlarged Board, but it seems that Mr Thaler is in no way done with the DABUS saga as things stand.

03 May, 2022

Challenger Defeated - Article 17 of the Digital Single Market Directive Remains Valid in Light of the Freedom of Expression

The EU is currently working on, or passing, big legislative changes in order to address issues within the digital marketplace and in relation to "Big Tech". These include the Digital Single Market Directive and the forthcoming Digital Services Act, which will make sweeping changes to how companies operate in the digital space. One hotly contested point within this package is Article 17 of the DSM, which was challenged by Poland before the CJEU. Following an opinion from Advocate General Saugmandsgaard Øe as far back as July last year, the Court handed down its decision on the challenge very recently, setting the scene for the application of the DSM in the near future.

The case of Republic of Poland v European Parliament concerned an action for annulment brought by Poland in relation to Article 17. The provision makes platforms that allow users to upload digital content, such as YouTube, liable for copyright infringement where that user-uploaded material infringes copyright. However, platforms can avoid liability by making "best efforts" to either acquire a license or block the infringing content, and an obligation is placed on the platforms to act expeditiously following any notification by rightsholders to remove or disable the content and to use "best efforts" to prevent any future uploads, for example through the use of content filtering.

As is clear, Article 17 is set to change the landscape for platforms like YouTube and introduces strict measures and steps that those platforms need to take in relation to infringing content. Poland challenged the provision on the grounds that it infringed Article 11 of the Charter of Fundamental Rights of the European Union, which enshrines the freedom of expression in EU law.

What Poland specifically argued is that, in order to be exempted from all liability for giving the public access to copyright-protected works or other protected subject-matter uploaded by their users in breach of copyright, online content-sharing service providers are required to carry out preventive monitoring of all the content which their users upload. The imposition of these mandatory measures, without appropriate safeguards put in place, would breach the right to freedom of expression.

The Court discussed its decision in YouTube and Cyando, which sets out that "the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, does not make a ‘communication to the public’ of that content, within the meaning of that provision, unless it contributes, beyond merely making that platform available, to giving access to such content to the public in breach of copyright". In addition, such service providers will be exempt from liability provided that they do not play an active role of such a kind as to give them knowledge of or control over the content uploaded to their platform.

Article 17, as discussed above, is set to change this position and introduce new liability provisions and obligations on platforms. 

As set out by the Court, Article 11 of the Charter and Article 10 of the European Convention on Human Rights guarantee the freedom of expression and information for everyone, including in relation to the dissemination of information, which encompasses the Internet as a key means of dissemination. Courts therefore have to take due account of the particular importance of the internet to freedom of expression and information and ensure that those rights are respected in any applicable legislation.

In considering whether Article 17 has an impact on people's freedom of expression, the Court initially noted that the provision assumes that some platforms will not be able to get licenses from rightsholders for the content that is uploaded by users, and that rightsholders are free to determine whether and under what conditions their works and other protected subject matter are used. If no authorization is received, the platforms will only have to "...demonstrate that they have made their best efforts... to obtain such an authorisation and that they fulfil all the other conditions for exemption" in order to avoid liability.

The conditions include acting expeditiously when notified by rightsholders to disable access to, or remove, the potentially offending content, putting in place appropriate measures to prevent the uploading of infringing content (e.g. through the use of automatic recognition and filtering tools), and preventing future uploads. The Court did accept that these preventive measures can restrict an important means of disseminating online content and thus constitute a limitation on the right guaranteed by Article 11 of the Charter. It therefore concluded that "the specific liability regime [under] Article 17(4)... in respect of online content-sharing service providers, entails a limitation on the exercise of the right to freedom of expression and information of users of those content-sharing services".

However, the next piece of the puzzle the Court addressed was whether the limitation could be justified. 

The Charter does allow for legislation to impact the freedoms it contains; however, any limitations of those rights must be proportionate and made "... only if they are necessary and genuinely meet objectives of general interest recognised by the [EU] or the need to protect the rights and freedoms of others". If there is a choice between several different measures, the least onerous one has to be chosen, and the disadvantages caused must not be disproportionate to the aims pursued. Finally, adequate safeguards need to be put in place within the legislation to guarantee the protection of those rights from abuse.

While the Directive and Article 17(4) specify the limitations placed on the freedom of expression, they do not specify the means through which those limitations should be implemented, only that platforms need to use their 'best efforts' in doing so. Although, as set out above, the Charter requires that such limitations be specified, the Court noted that they can be formulated in terms that are sufficiently open to be able to keep pace with changing circumstances.

The Court also mentioned that the legislation, in Article 17(7) (and similarly in Article 17(9)), requires that expression which does not infringe copyright must not be limited, a specific obligation that goes well beyond mere "best efforts". The Court accepted that Articles 17(7) and (9) together adequately protect the right to freedom of expression and information of users of online content-sharing services, and confirmed that this struck an adequate balance between the interests of users and rightsholders. Looking at the liability mechanism itself, the Court agreed that it was not only appropriate but also necessary to meet the need to protect intellectual property rights.

Also, the platforms are somewhat protected by the provision due to the notification requirements it sets: without notification, broadly speaking, they would not be held liable for any infringing content. Article 17(8) also specifically excludes a general monitoring obligation on the platforms. Any notification has to contain sufficient information to allow the infringing content to be removed easily. Article 17(9) contains several other procedural safeguards that should protect the right to freedom of expression and information of users where the service providers erroneously or unjustifiably block lawful content.

In summary, the Court set out that "...the obligation on online content-sharing service providers to review, prior to its dissemination to the public, the content that users wish to upload to their platforms, resulting from the specific liability regime established in Article 17(4)... and in particular from the conditions for exemption from liability... has been accompanied by appropriate safeguards by the EU legislature in order to ensure... respect for the right to freedom of expression and information of the users of those services... and a fair balance between that right, on the one hand, and the right to intellectual property, protected by Article 17(2) of the Charter".

The decision is undoubtedly correct and, although a bitter pill to swallow for the many platforms that will be impacted by it, the aim of the legislation seems not to be to unduly burden them with monitoring obligations or to stifle the freedom of expression, but to balance those concerns with the protection of legitimate copyrights held by other parties. It remains to be seen what filtering measures will be considered adequate by the EU courts if they are challenged in the future, but one can imagine sufficient measures already exist, such as Content ID on YouTube. The DSM is the first big step in the new IP regime in Europe, with more changes set to be made in the near future.

12 April, 2022

Data in the Clouds - CJEU Accepts the Use of the Private Copying Exception in Relation to Cloud Storage

The use of cloud storage is near ubiquitous these days, with both companies and consumers using it in increasing amounts to store private data and even multimedia content, making it accessible without having all of the data copied onto a given device at all times. However, even though it's not something that most people will consider at the time (or at all), the copying of copyright-protected content onto cloud storage could potentially infringe the rights of rightsholders, which could result in payments being due for the same. With that in mind as a starting point, could you use a private copying exception to avoid liability for such copying? Luckily, the CJEU has recently handed down its decision on the matter, clarifying the position for current and prospective copiers alike.

The case of Austro-Mechana Gesellschaft zur Wahrnehmung mechanisch-musikalischer Urheberrechte GmbH v Strato AG concerned Austro-Mechana, a copyright collecting society in Austria acting in a fiduciary capacity for the interests of its members and asserting the legal rights they have in their works on their behalf. Austro-Mechana made an application in court for the invoicing of, and payment of remuneration for, "storage media of any kind" by Strato, which provides its customers with a service called 'HiDrive' that allows for the storage of files through cloud computing. Strato contested this and the matter progressed through the Austrian courts, ultimately ending up with the CJEU.

The CJEU was faced with two questions on the matter, the first asking "...whether Article 5(2)(b) of Directive 2001/29 must be interpreted as meaning that the expression ‘reproductions on any medium’ referred to in that provision covers the saving, for private purposes, of copies of works protected by copyright on a server on which storage space is made available to a user by the provider of a cloud computing service". In short, does the exception under Article 5(2)(b) cover private copying onto cloud storage?

The first consideration, according to the CJEU, was whether a 'reproduction' could include copying onto cloud storage. The Directive and its recitals make it amply clear that the phrase should be interpreted very broadly. The Court noted that copying onto cloud storage involves a reproduction both when a file is uploaded for storage and when that file is subsequently accessed by the user and downloaded onto a device. This means that copying onto cloud storage indeed constitutes 'reproduction' under Article 5(2)(b).

The second consideration was whether 'any medium' covers the provision of cloud computing servers for the storage of files in the cloud. In the broadest sense, the phrase includes all media on which a protected work may be copied, which includes servers used in cloud computing.

Additionally, the Directive's purpose is to create a general and flexible framework at EU level in order to foster the development of the information society and to create new ways to exploit protected works. This is underpinned by technological neutrality. The same applies to the protection of copyright-protected works: the legislation is intended not to become obsolete and to apply to as much new technology as possible in the future.

The CJEU therefore concluded that the concept of ‘any medium’ includes a server on which storage space has been made available to a user by the provider of a cloud computing service. 

In summary, the CJEU set out their answer to the first question that "...Article 5(2)(b)… must be interpreted as meaning that the expression ‘reproductions on any medium’... covers the saving, for private purposes, of copies of works protected by copyright on a server on which storage space is made available to a user by the provider of a cloud computing service".

The second question posed to the CJEU asked "...whether Article 5(2)(b)… must be interpreted as precluding national legislation that has transposed the exception... and that does not make the providers of storage services in the context of cloud computing subject to the payment of fair compensation in respect of the unauthorised saving of copies of copyright-protected works by natural persons, who are users of those services, for private use and for ends that are neither directly nor indirectly commercial". In short, the question asks whether national legislation that transposes the exception, but does not impose a payment of royalties on cloud storage providers for such unauthorized copies, is precluded in relation to non-commercial private copying.

This refers expressly to the exception to the reproduction right under Article 2 of the Directive, which covers reproductions made by natural persons for non-commercial purposes, provided that the rightsholder is fairly compensated.

The CJEU noted that when Member States decide to transpose the exception into their national legislative framework, they are required to provide for the payment of fair compensation to rightholders. It also noted that the copying of copyright-protected works by individuals can cause harm to rightsholders, which the compensation is intended to remedy.

As was already answered above, the phrase 'reproductions on any medium' includes cloud computing, but Member States still have wide discretion as to the compensation of rightsholders in relation to copying that is covered by the exception, including who pays and how the monies are collected. Additionally, case law has set out that it is "...in principle, for the person carrying out private copying to make good the harm connected with that copying by financing the compensation that will be paid to the copyright holder", which in this case would be the users of the cloud storage systems.

However, there are practical difficulties in identifying individual users and obliging them to pay any requisite fees, especially since the potential harm suffered may be minimal and may therefore not give rise to an obligation for payment. Member States can therefore impose a private copying levy to cover this more broadly. Even so, this will have an impact on both the users and the fees for the cloud storage services they may be purchasing, engaging the 'fair balance' requirement set out in the Directive. It is left to the Member States and the national courts to ensure that this balance is struck.

The CJEU therefore concluded that the answer to the second question is that "Article 5(2)(b)... must be interpreted as not precluding national legislation that has transposed the exception... and that does not make the providers of storage services in the context of cloud computing subject to the payment of fair compensation in respect of the unauthorised saving of copies of copyright-protected works by natural persons, who are users of those services, for private use and for ends that are neither directly nor indirectly commercial, in so far as that legislation provides for the payment of fair compensation to the rightholders".

The decision gets cloud storage providers off the hook for any compensation that might be payable to rightsholders, and it is up to Member States to impose a levy, if any, for such copying by private individuals. This writer has never heard of such levies on cloud storage even being considered, and any such levies could deter innovation in this space given the public's likely resistance to the added costs. If you are not a consumer of cloud storage services, you would undoubtedly not be happy to pay for them either. Nevertheless, this is the CJEU's proposed position, which leaves the option very much on the table.

15 March, 2022

Yet Another Rejection - US Copyright Office Rejects Registration of Copyright in AI Created Artwork

How works created by artificial intelligence, or AI, are treated by various jurisdictions has been a hot topic recently, including on this very blog (for example here and here). It seems that, despite a briefly celebrated win in Australia, the technology is seeing setback after setback on whether AI-created works will be covered by IP, which seems more unlikely every day. The US Copyright Office delivered the latest blow in the battle for IP protection of AI-created works, which leaves the technology in a bit of a bind, and it featured a familiar individual yet again.

The US Copyright Office's Review Board recently released a decision on the copyright protection of a work, A Recent Entrance to Paradise, created by an AI program called the Creativity Machine, developed by Stephen Thaler. While the Creativity Machine was the author of the work, Mr Thaler was listed alongside it as the claimant and owner. The work was autonomously created by the Creativity Machine, but Mr Thaler wanted to register the work as a work-for-hire to the owner of the program, himself.

The US Copyright Office initially rejected the registration of the copyright in the work in 2019 as it "lacks the human authorship necessary to support a copyright claim". Mr Thaler subsequently appealed this decision, arguing that the "human authorship requirement is unconstitutional and unsupported by either statute or case law". Despite this, the Copyright Office rejected the registration on the same basis as before, noting that Mr Thaler had not provided any evidence of sufficient creative input or intervention by a human author in the work and that the Copyright Office won't abandon its longstanding interpretation of this matter, which is based on precedent from the courts, including the Supreme Court.

Being ever-persistent, Mr Thaler then yet again appealed, arguing in a familiar fashion that the Office’s human authorship requirement is unconstitutional and unsupported by case law. Mr Thaler also argued that there is a public policy dimension to this, under which the Copyright Office 'should' register copyright in machine-created works because doing so would "further the underlying goals of copyright law, including the constitutional rationale for copyright protection". Mr Thaler again rejected the Copyright Office's reliance on supporting case law.

At the beginning of its decision, the Review Board accepted that the works were indeed created by AI without any creative contribution from a human actor. However, the Board noted that copyright only protects "the fruits of intellectual labor that are founded in the creative powers of the [human] mind". Mr Thaler was therefore tasked with either proving that the work was created by a human, which it hadn't been, or somehow convincing the Board to depart from long-established precedent.

The Review Board then moved on to discussing the precedent relevant to the matter at hand. 

Section 306 of the US Copyright Compendium sets out that "the Office will refuse to register a claim if it determines that a human being did not create the work". As such, human authorship is a prerequisite to copyright protection in the US.

Similarly, 17 USC 102(a) affords copyright protection to "original works of authorship", and, while the phrase is very broad in its definition, it still requires human authorship. This has been supported by US Supreme Court decisions in Burrow-Giles Lithographic Co. v Sarony and Mazer v Stein where the Supreme Court required human authorship for copyright protection. In addition to the highest court in the US, the lower courts have also followed these decisions. 

Although the courts haven't directly decided on AI authorship in relation to copyright just yet, Mr Thaler has featured in a recent decision in the case of Thaler v Hirshfeld (the very same individual as in this decision) where the court decided that, under the Patent Act, an ‘inventor’ must be a natural person and cannot be an AI. 

The Board also discussed a recent report by the USPTO, which examined IP issues around AI. In that report, the USPTO noted that "existing law does not permit a non-human to be an author [and] this should remain the law".

The Board also briefly touched on Mr Thaler's secondary argument, which was that AI can be an author under the law because the work-made-for-hire doctrine allows for non-human, artificial persons, such as companies, to be authors. The Board rejected this argument since the work was not made for hire. This was due to the requirement for such a work to be prepared by, for example, an employee or by one or more parties who agree that the work is one made for hire, and no such agreement was in place between Mr Thaler and the Creativity Machine. Even outside of this, the doctrine merely addresses ownership and not copyright protection, so human authorship is, yet again, required.

The Board, therefore, rejected the appeal and refused to register the work. 

The decision by the Board is by no means surprising, as the status of AI-created works as a no-go seems to become more and more established as the years go by. While copyright protection could be possible in some jurisdictions, as discussed in the above blog articles in more detail, it seems highly unlikely to be possible in the US without legislative intervention. As AI develops, the law will have to address this and potentially grant AI creations some level of protection; however, this will take some time to come about.

11 January, 2022

Just in the Middle - CJEU Decides on Intermediary Liability for Copyright Infringing User Content

Intermediary liability for copyright infringement has been a thorny issue in recent years, and the case involving YouTube (discussed more here) in this regard has been trundling through the EU courts for what feels like decades (exacerbated by the pandemic, no doubt). However, the European courts reached their decision last summer, giving some needed guidance on the matter and setting the scene for intermediary liability in the future. This writer is woefully behind the times in writing about this decision, but it does merit belated discussion and is a very important decision to keep in mind in relation to intermediaries.

The case of Frank Peterson v Google LLC (heard together with another case, Elsevier v Cyando) concerns Nemo Studio, a company owned by Mr Peterson, a music producer. In 1996, Nemo Studio entered into a worldwide artist contract with Sarah Brightman covering the use of her audio and video recordings. Further agreements were entered into in the subsequent years, including one with Capitol Records for the exclusive distribution of Ms Brightman's recordings and performances. One of her albums, "A Winter Symphony", was released in 2008 along with an accompanying tour that year. That same year, the album and private recordings from her tour were uploaded onto YouTube. Mr Peterson attempted to have the materials removed from the platform, including through cease-and-desist letters, and Google subsequently blocked access to the offending videos. However, even after this the content could be accessed on YouTube again, and Mr Peterson initiated legal proceedings against Google in Germany for copyright infringement, with the matter ultimately ending up with the CJEU.

The other case, very briefly put, involved Elsevier, an international specialist publisher, and Cyando, which operates a website where users can upload files for hosting and sharing. Users had uploaded onto the Cyando website copies of three works in which Elsevier owns the copyright, which were then shared on other websites via web links. Elsevier also brought legal proceedings against Cyando in Germany for copyright infringement, after which the CJEU combined both actions into one.

The first question posed to the court concerned "…whether Article 3(1) of the Copyright Directive must be interpreted as meaning that the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, itself makes a ‘communication to the public’ of that content".

As a primer, Article 3(1) provides the exclusive right to copyright holders to communicate their works to the public, which also gives them the right to prevent said communications by other parties without authorization. Additionally, what amounts to a 'communication to the public' includes all communication to the public not present at the place where the communication originates, so giving a very broad right to copyright holders. However, a balance has to be struck between the interests of the copyright holder and the interests and fundamental rights of users, in particular their freedom of expression and of information.

To dive a bit deeper into the meaning of the phrase 'communication to the public', this comprises two criteria, namely: (i) an act of communication of a work; and (ii) the communication of that work to a public.

In terms of the first criterion, an online platform performs an 'act of communication' where "…it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not, in principle, be able to enjoy the broadcast work". For the second criterion, a 'public' comprises "…an indeterminate number of potential recipients and implies, moreover, a fairly large number of people".

Finally, in addition to the above criteria, the courts have determined that for an act to be a 'communication to the public' "…a protected work must be communicated using specific technical means, different from those previously used or, failing that, to a ‘new public’, that is to say, to a public that was not already taken into account by the copyright holder when it authorised the initial communication of its work to the public".

One thing to note in the context of both YouTube and Cyando is that both platforms are driven by content uploaded by their users, who act autonomously from the platform, decide whether the content will be publicly available and are responsible for their own actions. For file-hosting services like Cyando, there is the additional requirement that the users have to share a link to the content for it to be accessed and/or downloaded, which is up to them to do. While the users might very well be carrying out an 'act of communication to the public', it remains unclear whether the intermediaries are doing the same through this.

The Court noted that the platforms play an indispensable part in making potentially illegal content available, but this is not the only relevant criterion; the platform's intervention needs to be deliberate and not just incidental. This means that the platform has to intervene in full knowledge of the consequences of doing so, with the aim of giving the public access to protected works, for there to be an infringement.

The Court highlighted that, in order to determine this, one has to "…take into account all the factors characterising the situation at issue which make it possible to draw, directly or indirectly, conclusions as to whether or not its intervention in the illegal communication of that content was deliberate". These circumstances can include: "...[i] the circumstance that such an operator, despite the fact that it knows or ought to know that users of its platform are making protected content available to the public illegally via its platform, refrains from putting in place the appropriate technological measures that can be expected from a reasonably diligent operator in its situation in order to counter credibly and effectively copyright infringements on that platform; and [ii] the circumstance that that operator participates in selecting protected content illegally communicated to the public, that it provides tools on its platform specifically intended for the illegal sharing of such content or that it knowingly promotes such sharing, which may be attested by the fact that that operator has adopted a financial model that encourages users of its platform illegally to communicate protected content to the public via that platform".

In short, this means that platforms will have to take positive steps to prevent the sharing of infringing content. Mere knowledge that this is being done isn't sufficient to conclude that a platform intervenes with the purpose of giving internet users access to that content, unless the platform has been specifically warned by rightsholders of this activity. Making a profit from this activity will also not, in itself, make a platform liable.

The Court summarized its answer to the question as: "… Article 3(1)… must be interpreted as meaning that the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, does not make a ‘communication to the public’ of that content… unless it contributes, beyond merely making that platform available, to giving access to such content to the public in breach of copyright. That is the case, inter alia, where that operator has specific knowledge that protected content is available illegally on its platform and refrains from expeditiously deleting it or blocking access to it, or where that operator, despite the fact that it knows or ought to know, in a general sense, that users of its platform are making protected content available to the public illegally via its platform, refrains from putting in place the appropriate technological measures that can be expected from a reasonably diligent operator in its situation in order to counter credibly and effectively copyright infringements on that platform, or where that operator participates in selecting protected content illegally communicated to the public, provides tools on its platform specifically intended for the illegal sharing of such content or knowingly promotes such sharing, which may be attested by the fact that that operator has adopted a financial model that encourages users of its platform illegally to communicate protected content to the public via that platform".

The Court then turned to the second and third questions together, which it posed as asking: "…whether Article 14(1) of the Directive on Electronic Commerce must be interpreted as meaning that the activity of the operator of a video‑sharing platform or a file-hosting and -sharing platform falls within the scope of that provision, to the extent that that activity covers content uploaded to its platform by platform users. If that is the case, that court wishes to know, in essence, whether Article 14(1)(a) of that directive must be interpreted as meaning that, for that operator to be excluded, under that provision, from the exemption from liability provided for in Article 14(1), it must have knowledge of specific illegal acts committed by its users relating to protected content that was uploaded to its platform".

Under Article 14(1), Member States have to ensure that providers of information society services are not liable for the information stored at the request of the recipient of the services, i.e. users, provided that the provider does not have actual knowledge of illegal activity or information and is not aware of facts or circumstances from which the illegal activity or information is apparent, or that the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove the information or to disable access to it. What is key here is that the provider has to be an ‘intermediary service provider’, meaning one whose conduct is "…of a mere technical, automatic and passive nature, [i.e.] that service provider has neither knowledge of nor control over the information which is transmitted or stored".

The above assessment can be made in light of Article 3(1): if a provider contributes, beyond merely providing its platform, to giving the public access to protected content in breach of copyright, it won't be able to rely on the above exemption (although this isn't the sole determinant of the exemption). Also, the fact that a provider takes technological measures to prevent copyright infringement, as discussed above, doesn't give the provider actual knowledge of the infringing activity or bring it outside the scope of Article 14(1). Knowledge or awareness can, however, be derived through internal investigations by the providers or through notifications by third parties.

Summarizing its position in relation to these questions, the Court determined that "…Article 14(1)… must be interpreted as meaning that the activity of the operator of a video-sharing platform or a file-hosting and -sharing platform falls within the scope of that provision, provided that that operator does not play an active role of such a kind as to give it knowledge of or control over the content uploaded to its platform".

The Court then moved on to the fourth question, which asked "…whether Article 8(3) of the Copyright Directive must be interpreted as precluding a situation where the rightholder is not able to obtain an injunction against an intermediary whose services are used by a third party to infringe the rights of that rightholder unless that infringement has previously been notified to that intermediary and that infringement is repeated".

To put it in simpler terms, the question asks whether rightsholders are prevented from seeking an injunction against an intermediary whose services are used to infringe their rights unless the intermediary has been notified of the infringement beforehand.

The Court considered that national courts have to be able to stop current and/or future infringements under the Article. The process and any requirements are up to Member States to determine (but must not interfere with other adjacent EU provisions), which can include a requirement of notification before an injunction can be granted. However, such a requirement is not strictly imposed by the Article itself.

The Court did note that, pursuant to Article 15(1) of the E-Commerce Directive, Member States cannot impose a monitoring obligation on service providers over the data that they store. With that in mind, any obligation to monitor content in order to prevent any future infringement of intellectual property rights is incompatible with this Article.

In light of its considerations on the question, the Court summarized its position as: "…Article 8(3) of the Copyright Directive must be interpreted as not precluding a situation under national law whereby a copyright holder or holder of a related right may not obtain an injunction against an intermediary whose service has been used by a third party to infringe his or her right, that intermediary having had no knowledge or awareness of that infringement, within the meaning of Article 14(1)(a) of the Directive on Electronic Commerce, unless, before court proceedings are commenced, that infringement has first been notified to that intermediary and the latter has failed to intervene expeditiously in order to remove the content in question or to block access to it and to ensure that such infringements do not recur. It is, however, for the national courts to satisfy themselves, when applying such a condition, that that condition does not result in the actual cessation of the infringement being delayed in such a way as to cause disproportionate damage to the rightholder".

The fifth and sixth questions were the only ones remaining, but the Court decided that they didn't need to be answered as the first and second questions were answered in the negative.

As is apparent, the case is a huge milestone for the regulation of intermediaries and copyright on the Internet. It sets fairly clear guidelines and expectations for service providers and seems to attempt to strike a proper balance between freedom of expression and the interests of service providers and rightsholders. This, however, is very much subject to change with the introduction of the Digital Copyright Directive, which, under Article 17, now imposes an obligation on service providers, under threat of liability, to make 'best efforts' to obtain authorization and to act in a similar way when infringing content is present on their services. The case and the new Directive set the scene for service providers, who will have to stay on their toes given the prevalence of infringing content online these days.

18 October, 2021

An Unusual Inventor - Australian Federal Court Gives Green Light for AI Inventors of Patents

Whether artificial intelligence could be deemed to be an inventor was discussed on this blog over a year ago, with the UK, the US and the EU currently not allowing AI to be considered an 'inventor', meaning that any inventions created by an AI would not be patentable. While this position seems to remain in those jurisdictions, an interesting development has emerged in Australia which seems to run counter to what the above jurisdictions have decided.

The case of Thaler v Commissioner of Patents (VID108/2021) concerned a patent application by Stephen Thaler for two inventions created by DABUS (Mr Thaler's AI system): an improved beverage container and a flashing beacon to be used in emergencies. The case focused on whether the AI system could be an inventor under the Patents Act 1990. The Deputy Commissioner of Patents initially rejected the application as, in their view, it didn't comply with specific requirements and, under s. 15 of the Patents Act, an AI could not be treated as an inventor. Mr Thaler then sought judicial review of the decision, which ended up in the Federal Court.

The Court first discussed the issue generally, noting that: (i) there is no specific provision in the Patents Act that expressly refutes the proposition that an artificial intelligence system can be an inventor; (ii) there is no specific aspect of patent law, unlike copyright law involving the requirement for a human author or the existence of moral rights, that would drive a construction of the Act as excluding non-human inventors; (iii) the word "inventor" has not been expressly defined and under its ordinary meaning it can include any 'agent' which invents, including AI; (iv) the definition of an "inventor" can be seen to potentially evolve and change with the times, potentially to include non-human authors; and (v) the object of the Act is to promote the economic wellbeing of the country, which should inform the construction of the Act. 

To include AI as an inventor would, according to the Court, help promote innovation and economic wellbeing in Australia, which would further the objective of s. 2A of the Act as discussed above. The problem of who the ultimate legal owner of the rights is, however, still remains.

The Court highlighted the need for a human applicant, acting on behalf of the AI system, for any inventions that these systems might come up with; that applicant would then also hold the rights to any patents granted for the AI system's inventions.

The Court also discussed the notion of the "inventive step" under s. 18 of the Act, which is a legal requirement for patentability. Neither s. 18 nor s. 7, which elaborates on the meaning of the "inventive step", requires an inventor per se, nor that such an inventor, if even required, would have to be a legal person. The Court concluded that, for there to be an inventive step, a legal person isn't required as the inventor at all, even though the Act does refer to "mental acts" and "thoughts".

The Court then moved on to discussing the various dictionary definitions of what an "inventor" is. These were quickly dismissed as a ground for limiting inventors to legal persons, as the Court noted that the definitions have moved on from the historical meanings given to them and can't be limited in the same way.

The next issue was who the patent would be granted to under s. 15 of the Act (while still focusing on who could be classed as an "inventor"). The section includes four different classes of person to whom a patent can be granted.

The first is if the inventor is a person. The Court quickly determined that, as DABUS is not a person, this limb won't apply in this instance, but it also noted that this does not mean that the concept of an "inventor" is confined to that of a "person". An AI system could indeed be the inventor while not being capable of being granted a patent itself, as it doesn't fulfill the requirements of s. 15.

The second one is a person who would, on the grant of a patent for the invention, be entitled to have the patent assigned to them. This will be discussed in more detail following the other classes. 

The third one is a person who derives title to the invention from the inventor or a person mentioned in the second class. Again, this will be discussed further in the context of the second class. 

Finally, the fourth one is a person who is the legal representative of a deceased person in one of the earlier named classes, which, unsurprisingly, the Court determined to not apply here.

Returning to the question of the second and third classes, the Court first highlighted that Mr Thaler could indeed fall under the second class as the programmer and operator of DABUS, through which he could acquire title to any inventions over which patents may be granted. The title in the patent flows through to Mr Thaler automatically, so it wouldn't require the assignment of any rights by the inventor (here DABUS) to Mr Thaler. Additionally, the Court determined that s. 15 doesn't require an inventor at all, only that an applicant is entitled to have a patent assigned to them.

Also, Mr Thaler, according to the Court, would fall under the third class as, on the face of it, he has derived title to the invention from DABUS. Although the AI system cannot legally assign any inventions to Mr Thaler, the language of s. 15 does allow one to derive rights in an invention even outside a legal assignment of those rights. All of Mr Thaler's rights in any invention become his by virtue of his ownership and operation of the inventor, DABUS.

The Court succinctly summarised the matter as "generally, on a fair reading of ss. 15(1)(b) and 15(1)(c), a patent can be granted to a legal person for an invention with an artificial intelligence system or device as the inventor".

Ultimately what the case focused on is whether a valid patent application has been made, rather than who will own any patent that might be granted in the future. 

What the Court determined was that "...an inventor as recognised under the Act can be an artificial intelligence system or device. But such a non-human inventor can neither be an applicant for a patent nor a grantee of a patent". This still leaves the ownership of a patent in the air, but, focusing on what was discussed by the Court, it is more than likely that ownership in any AI inventions would automatically pass to the owner and operator of that AI system. 

The Australian Court's approach to this question departs hugely from the viewpoints of the US, the UK and the EU, where AI systems have been rejected as potential inventors. It will be very interesting to see how the law changes and evolves in this area in the years to come, in particular through any decisions on the granting of a patent to the owner of an AI system, but so far at least Australia seems to be at the vanguard of allowing AI systems to potentially push the envelope on innovation and to protect those innovations in the process.