03 May, 2022

Challenger Defeated - Article 17 of the Digital Single Market Directive Remains Valid in Light of the Freedom of Expression

The EU is currently working on, or passing, big legislative changes in order to address issues within the digital marketplace and in relation to "Big Tech". This includes the Digital Single Market Directive and the forthcoming Digital Services Act, which will make sweeping changes to how companies operate in the digital space. One hotly contested point within this package is Article 17 of the DSM, which was challenged by Poland in a keenly awaited case before the CJEU. Following an opinion from Advocate General Øe as far back as July last year, the Court very recently handed down its decision, which sets the scene for the application of the DSM in the near future.

The case of Republic of Poland v European Parliament concerned an action for annulment brought by Poland in relation to Article 17. The provision introduces liability for copyright infringement on the part of platforms that allow users to upload digital content, such as YouTube, where that user-uploaded material infringes copyright. However, platforms can avoid liability by making "best efforts" to either acquire a license or block the infringing content, and an obligation is placed on them to act expeditiously following any notification by rightsholders to remove or disable the content and to use "best efforts" to prevent any future uploads, for example through the use of content filtering.

As is clear, Article 17 is set to change the landscape for platforms like YouTube and introduces strict measures and steps that those platforms need to take in relation to infringing content. Poland challenged the provision on the grounds that it infringed Article 11 of the Charter of Fundamental Rights of the European Union, which enshrines the freedom of expression in EU law.

Poland specifically argued that, in order to be exempted from all liability for giving the public access to copyright-protected works or other protected subject matter uploaded by their users in breach of copyright, online content-sharing service providers are required to carry out preventive monitoring of all the content which their users upload. The imposition of these mandatory measures, without appropriate safeguards put in place, would breach the right to freedom of expression.

The Court discussed its decision in YouTube and Cyando, which sets out that "the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, does not make a ‘communication to the public’ of that content, within the meaning of that provision, unless it contributes, beyond merely making that platform available, to giving access to such content to the public in breach of copyright". In addition, such service providers will be specifically exempt from infringement provided that they do not play an active role of such a kind as to give them knowledge of or control over the content uploaded to their platforms.

Article 17, as discussed above, is set to change this position and introduce new liability provisions and obligations on platforms. 

As set out by the Court, Article 11 of the Charter and Article 10 of the European Convention on Human Rights guarantee the freedom of expression and information for everyone, including in relation to the dissemination of information, which encompasses the Internet as a key means of dissemination. Therefore, courts have to take due account of the particular importance of the internet to freedom of expression and information and ensure that those rights are respected in any applicable legislation.

In considering whether Article 17 has an impact on people's freedom of expression, the Court initially noted that the provision assumes that some platforms will not be able to get licenses from rightsholders for the content that is uploaded by users, and that rightsholders are free to determine whether and under what conditions their works and other protected subject matter are used. If no authorization is received, the platforms will only have to "...demonstrate that they have made their best efforts... to obtain such an authorisation and that they fulfil all the other conditions for exemption" in order to avoid liability.

The conditions include acting expeditiously when notified by rightsholders in order to disable access to, or to remove, the potentially offending content, putting in place appropriate measures to prevent the uploading of infringing content (e.g. through the use of automatic recognition and filtering tools), and preventing future uploads. The Court did accept that these preventive measures can restrict an important means of disseminating online content and thus constitute a limitation on the right guaranteed by Article 11 of the Charter. It therefore concluded that "the specific liability regime [under] Article 17(4)... in respect of online content-sharing service providers, entails a limitation on the exercise of the right to freedom of expression and information of users of those content-sharing services".

However, the next piece of the puzzle the Court addressed was whether the limitation could be justified. 

The Charter does allow for legislation to impact the freedoms it contains; however, any limitations of those rights must be proportionate and made "... only if they are necessary and genuinely meet objectives of general interest recognised by the [EU] or the need to protect the rights and freedoms of others". If there is a choice between several different measures, the least onerous one has to be chosen over the others, and the disadvantages caused must not be disproportionate to the aims pursued. Finally, adequate safeguards need to be put in place within the legislation to guarantee the protection of those rights from abuse.

While the Directive and Article 17(4) do specify the limitations placed on the freedom of expression, they do not specify the means through which compliance should be achieved, only that platforms need to use their 'best efforts' in doing so. Although, as set out above, the Charter does require that such limitations be specified, the Court noted that they can be formulated in terms that are sufficiently open to be able to keep pace with changing circumstances.

The Court also mentioned that the legislation, in Article 17(7) (and similarly in Article 17(9)), requires that expression which does not infringe copyright must not be limited, making the provision specific well beyond a mere "best efforts" standard. The Court accepted that both Articles 17(7) and (9) adequately protected the right to freedom of expression and information of users of online content-sharing services, and confirmed that this struck an adequate balance between the parties' interests. Looking at the liability mechanism itself, the Court agreed that it was not only appropriate but also necessary to meet the need to protect intellectual property rights.

Also, the platforms are somewhat protected by the provision due to the notification requirements it sets, and without notification, broadly speaking, they would not be held liable for any infringing content. Article 17(8) also specifically excludes a general monitoring obligation on the platforms. Any notification also has to contain sufficient information to allow the infringing content to be removed easily. Article 17(9) contains several other procedural safeguards that should protect the right to freedom of expression and information of users where the service providers erroneously or unjustifiably block lawful content.

In summary, the Court set out that "...the obligation on online content-sharing service providers to review, prior to its dissemination to the public, the content that users wish to upload to their platforms, resulting from the specific liability regime established in Article 17(4)... and in particular from the conditions for exemption from liability... has been accompanied by appropriate safeguards by the EU legislature in order to ensure... respect for the right to freedom of expression and information of the users of those services... and a fair balance between that right, on the one hand, and the right to intellectual property, protected by Article 17(2) of the Charter".

The decision is undoubtedly correct and, although a bitter pill to swallow for many platforms that will be impacted by it, the aim of the legislation is not to unduly burden them with monitoring obligations or to stifle the freedom of expression, but to balance that freedom against the protection of legitimate copyrights held by other parties. It remains to be seen what filtering measures will be considered adequate by the EU courts if they are challenged in the future, but one can imagine that sufficient measures already exist, such as YouTube's Content ID. The DSM is the first big step in the new IP regime in Europe, with more changes set to be made in the near future.

12 April, 2022

Data in the Clouds - CJEU Accepts the Use of the Private Copying Exception in Relation to Cloud Storage

The use of cloud storage is near ubiquitous these days, with both companies and consumers increasingly using it to store both their private data and even multimedia content, making it accessible without having all of the data copied onto a given device at all times. However, even though it's not something that most people will consider at the time (or at all), the copying of copyright-protected content even onto cloud storage could potentially infringe the rights of rightsholders, which could result in payments being due for the same. With that in mind as a starting point, could you use a private copying exception to avoid liability for such copying? Luckily, the CJEU has recently handed down its decision on the matter, clarifying the position for current and prospective copiers alike.

The case of Austro-Mechana Gesellschaft zur Wahrnehmung mechanisch-musikalischer Urheberrechte GmbH v Strato AG concerned Austro-Mechana, which is a copyright collecting society in Austria acting in a fiduciary capacity for the interests of its members and asserting on their behalf the legal rights they have in their works. Austro-Mechana applied to the court seeking the invoicing of, and payment of remuneration for, "storage media of any kind" from Strato, as Strato provides its customers with a service called 'HiDrive', which allows for the storage of files through cloud computing. Strato contested this and the matter progressed through the Austrian courts, ultimately ending up with the CJEU.

The CJEU was faced with two questions on the matter, with the first one asking "...whether Article 5(2)(b) of Directive 2001/29 must be interpreted as meaning that the expression ‘reproductions on any medium’ referred to in that provision covers the saving, for private purposes, of copies of works protected by copyright on a server on which storage space is made available to a user by the provider of a cloud computing service". In short, does the exception under Article 5(2) include private copying onto cloud storage?

The first consideration, according to the CJEU, was whether a 'reproduction' could include copying onto cloud storage. The Directive and its recitals make it amply clear that the phrase should be interpreted very broadly. The court noted that copying onto cloud storage includes both the reproduction through the uploading of a file for storage, and the reproduction made when a given file is subsequently accessed by the user and downloaded onto a device. This means that copying onto cloud storage indeed constitutes 'reproduction' under Article 5(2).

The second consideration was whether 'any medium' covers the provision of cloud computing servers for the storage of files in the cloud. In the broadest sense the phrase includes all media onto which a protected work could be copied, which includes cloud computing.

Additionally, the Directive's purpose is to create a general and flexible framework at EU level in order to foster the development of the information society and to create new ways to exploit protected works. This is underpinned by technological neutrality. The same applies to the protection of copyright-protected works, and the legislation is intended not to become obsolete but to apply to as much new technology in the future as possible.

The CJEU therefore concluded that the concept of ‘any medium’ includes a server on which storage space has been made available to a user by the provider of a cloud computing service. 

In summary, the CJEU set out their answer to the first question that "...Article 5(2)(b)… must be interpreted as meaning that the expression ‘reproductions on any medium’... covers the saving, for private purposes, of copies of works protected by copyright on a server on which storage space is made available to a user by the provider of a cloud computing service".

The second question posed to the CJEU asked "...whether Article 5(2)(b)… must be interpreted as precluding national legislation that has transposed the exception... and that does not make the providers of storage services in the context of cloud computing subject to the payment of fair compensation in respect of the unauthorised saving of copies of copyright-protected works by natural persons, who are users of those services, for private use and for ends that are neither directly nor indirectly commercial". In short, whether any national legislation that transposes the exception, and does not impose a payment of royalties for the unauthorized copies, is precluded in relation to non-commercial private copying.

This refers to the exception to the reproduction right under Article 2 of the Directive for reproductions made by natural persons for a non-commercial purpose, provided that the rightsholder is fairly compensated.

The CJEU noted that when Member States decide to transpose the exception into their national legislative frameworks they are required to provide for the payment of fair compensation to rightholders. It also noted that the copying of copyright-protected works by individuals can cause harm to rightsholders, which the compensation attempts to remedy.

As was already answered above, the phrase 'reproductions on any medium' includes cloud computing, but Member States still have wide discretion on the compensation of rightsholders in relation to copying that is covered by the exception. This includes who pays and how the monies are collected. Additionally, case law has set out that it is "...in principle, for the person carrying out private copying to make good the harm connected with that copying by financing the compensation that will be paid to the copyright holder", which in this case would be the users of the cloud storage systems.

However, there are practical difficulties in identifying individual users and obliging them to pay any requisite fees, especially since the potential harm suffered may be minimal and may therefore not give rise to an obligation for payment, so Member States can impose a private copying levy to cover this more broadly. Even so, this will have an impact on both the users and the fees for the cloud storage services they may be purchasing, engaging the 'fair balance' requirement as set out in the Directive. It is left to the Member States and their national courts to ensure that this balance is in place.

The CJEU therefore concluded that the answer to the second question is that "Article 5(2)(b)... must be interpreted as not precluding national legislation that has transposed the exception... and that does not make the providers of storage services in the context of cloud computing subject to the payment of fair compensation in respect of the unauthorised saving of copies of copyright-protected works by natural persons, who are users of those services, for private use and for ends that are neither directly nor indirectly commercial, in so far as that legislation provides for the payment of fair compensation to the rightholders".

The decision gets cloud storage providers off the hook for any compensation that might be payable to rightsholders, and it is up to Member States to impose a levy, if any, for such copying by private individuals. This writer has never heard of such levies even being considered, and their introduction could deter innovation in this space, given the public's likely resistance to the added costs. Those who are not consumers of cloud storage services would undoubtedly not be happy to pay for them either. Nevertheless, this is the CJEU's position, which leaves the option very much on the table.

15 March, 2022

Yet Another Rejection - US Copyright Office Rejects Registration of Copyright in AI Created Artwork

How works created by artificial intelligence, or AI, are treated in various jurisdictions has been a hot topic recently, including in this very blog (for example here and here). It seems that, despite a briefly celebrated win in Australia, the technology is seeing setback after setback on whether AI-created works will be covered by IP, and this seems more unlikely every day. The US Copyright Office dealt the latest blow in the battle for IP protection of AI-created works, which leaves the technology in a bit of a bind, and the case featured a familiar individual yet again.

The US Copyright Office's Review Board recently released a decision on the copyright protection of a work, A Recent Entrance to Paradise, created by an AI program called Creativity Machine, developed by Steven Thaler. While the Creativity Machine was the author of the work, Mr Thaler was listed as the claimant alongside it as its owner. The work was autonomously created by the Creativity Machine, but Mr Thaler wanted to register the work as a work-for-hire to the owner of the program, himself. 

The US Copyright Office initially rejected the registration of the copyright in the work in 2019 as it "lacks the human authorship necessary to support a copyright claim". Mr Thaler subsequently appealed this decision, arguing that the "human authorship requirement is unconstitutional and unsupported by either statute or case law". Despite this, the Copyright Office rejected the registration on the same basis as before, noting that Mr Thaler had not provided any evidence of sufficient creative input or intervention by a human author in the work and that the Office would not abandon its longstanding interpretation of the matter, which is based on precedent from the courts, including the Supreme Court.

Being ever-persistent, Mr Thaler then yet again appealed, arguing in a familiar fashion that the Office’s human authorship requirement is unconstitutional and unsupported by case law. Mr Thaler also argued that there is a public policy piece to this where the Copyright Office 'should' register copyright in machine-created works because doing so would "further the underlying goals of copyright law, including the constitutional rationale for copyright protection". Mr Thaler also again rejected the Copyright Office's assertions of supporting case law. 

At the beginning of its decision, the Review Board accepted that the works were indeed created by AI without any creative contribution from a human actor. However, the Board noted that copyright only protects "the fruits of intellectual labor that are founded in the creative powers of the [human] mind". Mr Thaler was therefore tasked with either proving that the work was created by a human, which it hadn't been, or somehow convincing the Board to depart from long-established precedent.

The Review Board then moved on to discussing the precedent relevant to the matter at hand. 

Paragraph 306 of the US Copyright Compendium sets out that "the Office will refuse to register a claim if it determines that a human being did not create the work". As such, human authorship is a prerequisite to copyright protection in the US.

Similarly, 17 USC 102(a) affords copyright protection to "original works of authorship", and, while the phrase is very broad in its definition, it still requires human authorship. This has been supported by US Supreme Court decisions in Burrow-Giles Lithographic Co. v Sarony and Mazer v Stein where the Supreme Court required human authorship for copyright protection. In addition to the highest court in the US, the lower courts have also followed these decisions. 

Although the courts haven't directly decided on AI authorship in relation to copyright just yet, Mr Thaler (the very same individual as in this decision) featured in the recent case of Thaler v Hirshfeld, where the court decided that, under the Patent Act, an ‘inventor’ must be a natural person and cannot be an AI.

The Board also discussed a recent report by the USPTO on IP issues around AI. In this report, the USPTO noted that "existing law does not permit a non-human to be an author [and] this should remain the law".

The Board also briefly touched on Mr Thaler's secondary argument, which was that AI can be an author under the law because the work-made-for-hire doctrine allows for non-human, artificial persons such as companies to be authors. The Board rejected this argument since the work was not made for hire. This was due to the requirement for such a work to be prepared by, for example, an employee or by one or more parties who agree that the work is one made for hire. No such agreement was in place between Mr Thaler and the Creativity Machine. Even outside of this, the doctrine merely addresses ownership and not copyright protection, so human authorship is, yet again, required.

The Board, therefore, rejected the appeal and refused to register the work. 

The decision by the Board is by no means surprising, as it seems more and more established with each passing year that AI-created works are a no-go. While copyright protection could be possible in some jurisdictions, as discussed in more detail in the blog articles mentioned above, it seems highly unlikely to be possible in the US without legislative intervention. As AI develops the law will have to address this and potentially grant AI creations some level of protection; however, this will take some time to come to pass.

11 January, 2022

Just in the Middle - CJEU Decides on Intermediary Liability for Copyright Infringing User Content

The issue of intermediary liability for copyright infringement has been a thorny one in recent years, and the case involving YouTube (discussed more here) in this regard has been trundling through the EU courts for what feels like decades (exacerbated by the pandemic no doubt). However, the European courts reached their decision last summer, giving some needed guidance on the matter and setting the scene for intermediary liability for the future. This writer is woefully behind the times in writing about this decision, but it does merit belated discussion and is a very important decision to keep in mind in relation to intermediaries.

The case of Frank Peterson v Google LLC (along with another case, Elsevier v Cyando) concerned Nemo Studio, a company owned by Mr Peterson, a music producer. In 1996, Nemo Studio entered into a worldwide artist contract with Sarah Brightman covering the use of her audio and video recordings. Further agreements were entered into in the subsequent years, including one with Capitol Records for the exclusive distribution of Ms Brightman's recordings and performances. One of her albums, "A Winter Symphony", was released in 2008 along with an accompanying tour that year. The same year, the album and private recordings from her tour were uploaded onto YouTube. Mr Peterson attempted to have the materials removed from the platform, including through cease-and-desist letters, and Google subsequently blocked access to the offending videos. However, even after this the content could be accessed on YouTube again, and Mr Peterson initiated legal proceedings against Google in Germany for copyright infringement, with the matter ultimately ending up with the CJEU.

The other combined case, very briefly put, involved Elsevier, an international specialist publisher, and Cyando, a website where users could upload files for hosting and sharing. Users had uploaded onto the Cyando website copies of three works for which Elsevier owns the copyright, which were then shared on other websites via web links. Elsevier also brought legal proceedings against Cyando in Germany for copyright infringement, after which the CJEU combined both actions into one.

The first question posed to the court concerned "…whether Article 3(1) of the Copyright Directive must be interpreted as meaning that the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, itself makes a ‘communication to the public’ of that content".

As a primer, Article 3(1) provides the exclusive right to copyright holders to communicate their works to the public, which also gives them the right to prevent said communications by other parties without authorization. Additionally, what amounts to a 'communication to the public' includes all communication to the public not present at the place where the communication originates, so giving a very broad right to copyright holders. However, a balance has to be struck between the interests of the copyright holder and the interests and fundamental rights of users, in particular their freedom of expression and of information.

To dive a bit deeper into the meaning of the phrase 'communication to the public', this encompasses two specific criteria, namely: (i) an act of communication of a work; and (ii) the communication of that work to a public. 

In terms of the first criterion, an online platform performs an 'act of communication' where "…it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not, in principle, be able to enjoy the broadcast work". For the second criterion, a 'public' comprises "…an indeterminate number of potential recipients and implies, moreover, a fairly large number of people".

Finally, in addition to the above criteria, the courts have determined that for an act to be a 'communication to the public' "…a protected work must be communicated using specific technical means, different from those previously used or, failing that, to a ‘new public’, that is to say, to a public that was not already taken into account by the copyright holder when it authorised the initial communication of its work to the public".

One thing to note in the context of both YouTube and Cyando is that both platforms operate through uploads by users, who act autonomously from the platform, decide whether the content will be publicly available and are responsible for their own actions. For file-hosting services like Cyando, there is the additional requirement that users have to share a link to the content for it to be accessed and/or downloaded, which is up to them to do. While the users might very well be performing an 'act of communication to the public', it remains unclear whether the intermediaries are doing the same through this.

The Court noted that the platforms play an indispensable part in making potentially illegal content available, but this is not the only criterion that you need to concern yourself with; the intervention of the platform needs to be deliberate and not just incidental. This means that the platforms have to intervene in full knowledge of the consequences of doing so, with the aim of giving the public access to protected works for there to be an infringement.

The Court highlighted that, in order to determine this, one has to "…take into account all the factors characterising the situation at issue which make it possible to draw, directly or indirectly, conclusions as to whether or not its intervention in the illegal communication of that content was deliberate". These circumstances can include: "...[i] the circumstance that such an operator, despite the fact that it knows or ought to know that users of its platform are making protected content available to the public illegally via its platform, refrains from putting in place the appropriate technological measures that can be expected from a reasonably diligent operator in its situation in order to counter credibly and effectively copyright infringements on that platform; and [ii] the circumstance that that operator participates in selecting protected content illegally communicated to the public, that it provides tools on its platform specifically intended for the illegal sharing of such content or that it knowingly promotes such sharing, which may be attested by the fact that that operator has adopted a financial model that encourages users of its platform illegally to communicate protected content to the public via that platform".

In short, this means that platforms will have to take positive steps to prevent the sharing of infringing content. Mere knowledge that this is being done isn't sufficient to conclude that a platform intervenes with the purpose of giving internet users access to that content, unless the platform has been specifically warned by rightsholders of this activity. Making a profit from this activity will also not, in itself, make a platform liable.

The Court summarized its answer to the question as: "… Article 3(1)… must be interpreted as meaning that the operator of a video-sharing platform or a file-hosting and ‑sharing platform, on which users can illegally make protected content available to the public, does not make a ‘communication to the public’ of that content… unless it contributes, beyond merely making that platform available, to giving access to such content to the public in breach of copyright. That is the case, inter alia, where that operator has specific knowledge that protected content is available illegally on its platform and refrains from expeditiously deleting it or blocking access to it, or where that operator, despite the fact that it knows or ought to know, in a general sense, that users of its platform are making protected content available to the public illegally via its platform, refrains from putting in place the appropriate technological measures that can be expected from a reasonably diligent operator in its situation in order to counter credibly and effectively copyright infringements on that platform, or where that operator participates in selecting protected content illegally communicated to the public, provides tools on its platform specifically intended for the illegal sharing of such content or knowingly promotes such sharing, which may be attested by the fact that that operator has adopted a financial model that encourages users of its platform illegally to communicate protected content to the public via that platform".

The Court then turned to the second and third questions together, which it posed as asking: "…whether Article 14(1) of the Directive on Electronic Commerce must be interpreted as meaning that the activity of the operator of a video‑sharing platform or a file-hosting and -sharing platform falls within the scope of that provision, to the extent that that activity covers content uploaded to its platform by platform users. If that is the case, that court wishes to know, in essence, whether Article 14(1)(a) of that directive must be interpreted as meaning that, for that operator to be excluded, under that provision, from the exemption from liability provided for in Article 14(1), it must have knowledge of specific illegal acts committed by its users relating to protected content that was uploaded to its platform".

Under Article 14(1) Member States have to ensure that service providers of information society services are not liable for the information stored at the request of the recipient of the services, i.e. users, provided that the provider does not have actual knowledge of illegal activity or information and is not aware of facts or circumstances from which the illegal activity or information is apparent, or that the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove the information or to disable access to it. What is key here is that the provider has to be an ‘intermediary service provider’, meaning one whose services are "…of a mere technical, automatic and passive nature, [i.e.] that service provider has neither knowledge of nor control over the information which is transmitted or stored".

The above assessment can be made in light of Article 3(1): if a provider contributes, beyond merely providing its platform, to giving the public access to protected content in breach of copyright, it won't be able to rely on the above exemption (though this isn't the sole determinant of the exemption). Also, the technological measures that a provider takes to prevent copyright infringement, as discussed above, don't by themselves give the provider actual knowledge of the infringing activity or bring it outside the scope of Article 14(1). Knowledge or awareness can, however, be derived through internal investigations by the provider or through notifications by third parties.

Summarizing its position in relation to the questions the Court determined that "…Article 14(1)… must be interpreted as meaning that the activity of the operator of a video-sharing platform or a file-hosting and -sharing platform falls within the scope of that provision, provided that that operator does not play an active role of such a kind as to give it knowledge of or control over the content uploaded to its platform"

The Court then moved on to the fourth question, which asked "…whether Article 8(3) of the Copyright Directive must be interpreted as precluding a situation where the rightholder is not able to obtain an injunction against an intermediary whose services are used by a third party to infringe the rights of that rightholder unless that infringement has previously been notified to that intermediary and that infringement is repeated".

To put it in simpler terms, the question asks whether rightsholders are prevented from seeking an injunction against an intermediary whose services are used to infringe their rights unless they have previously notified the intermediary of the infringement.

The Court considered that national courts must be able to stop both current and future infringements under the Article. The process and any requirements are for Member States to determine (so long as they don't interfere with other adjacent EU provisions), which may include a requirement of notification before an injunction can be granted; the Article itself, however, imposes no such requirement.

The Court did note that, pursuant to Article 15(1) of the E-Commerce Directive, Member States cannot impose a general monitoring obligation on service providers over the data that they store. With that in mind, any general obligation to monitor content in order to prevent future infringements of intellectual property rights would be incompatible with this Article.

In light of its considerations on the question, the Court summarized its position as: "…Article 8(3) of the Copyright Directive must be interpreted as not precluding a situation under national law whereby a copyright holder or holder of a related right may not obtain an injunction against an intermediary whose service has been used by a third party to infringe his or her right, that intermediary having had no knowledge or awareness of that infringement, within the meaning of Article 14(1)(a) of the Directive on Electronic Commerce, unless, before court proceedings are commenced, that infringement has first been notified to that intermediary and the latter has failed to intervene expeditiously in order to remove the content in question or to block access to it and to ensure that such infringements do not recur. It is, however, for the national courts to satisfy themselves, when applying such a condition, that that condition does not result in the actual cessation of the infringement being delayed in such a way as to cause disproportionate damage to the rightholder".

The fifth and sixth questions were the only ones remaining, but the Court decided that they didn't need to be answered as the first and second questions were answered in the negative.

As is apparent, the case is a huge milestone for the regulation of intermediaries and copyright on the Internet. It sets fairly clear guidelines and expectations for service providers while attempting to strike a proper balance between freedom of expression and the interests of service providers and rightsholders. This, however, is very much subject to change with the introduction of the Digital Copyright Directive, which, under Article 17, now imposes an obligation on service providers, under threat of liability, to make 'best efforts' to obtain authorization and to act in a similar way when infringing content is present on their services. The case and the new Directive set the scene for service providers, who will have to stay on their toes with the prevalence of infringing content online these days.

18 October, 2021

An Unusual Inventor - Australian Federal Court Gives Green Light for AI Inventors of Patents

Whether artificial intelligence could be deemed to be an inventor was discussed on this blog over a year ago, with the UK, the US and the EU all currently not allowing AI to be considered an 'inventor', meaning that any inventions created by an AI would not be patentable. While this position seems to remain in those jurisdictions, an interesting development has emerged in Australia which runs counter to what those jurisdictions have decided.

The case of Thaler v Commissioner of Patents concerned a patent application by Stephen Thaler (application VID108/2021) for two inventions created by DABUS (Mr Thaler's AI system): an improved beverage container and a flashing beacon for use in emergencies. The case focused on whether the AI system could be an inventor under the Patents Act 1990. The Deputy Commissioner of Patents initially rejected the application as, in their view, it didn't comply with specific requirements and, under s. 15 of the Patents Act, an AI could not be treated as an inventor. Mr Thaler then sought judicial review of the decision, which ended up in the Federal Court.

The Court first discussed the issue generally, noting that: (i) there is no specific provision in the Patents Act that expressly refutes the proposition that an artificial intelligence system can be an inventor; (ii) there is no specific aspect of patent law, unlike copyright law with its requirement for a human author and the existence of moral rights, that would drive a construction of the Act as excluding non-human inventors; (iii) the word "inventor" has not been expressly defined, and under its ordinary meaning it can include any 'agent' which invents, including AI; (iv) the definition of an "inventor" can be seen to potentially evolve and change with the times, potentially to include non-human inventors; and (v) the object of the Act is to promote the economic wellbeing of the country, which should inform the construction of the Act. 

To include AI as an inventor would, according to the Court, help promote innovation and economic wellbeing in Australia, which would indeed further the objective of s. 2A of the Act, as discussed above. The problem of who the ultimate legal owner of the rights would be, however, still remains. 

The Court highlighted the need for a human applicant, acting on behalf of the AI system, for any inventions these systems might come up with; that applicant would then also hold the rights to any patents granted for the AI system's inventions. 

The Court also discussed the notion of the "inventive step" under s. 18 of the Act, which is a legal requirement for patentability. Neither s. 18 nor s. 7, which elaborates on the meaning of the "inventive step", requires an inventor per se, nor that such an inventor, if even required, be a legal person. The Court concluded that, for there to be an inventive step, a legal person isn't required as the inventor at all, even though the Act does refer to "mental acts" and "thoughts".

The Court then moved on to discussing the various dictionary definitions of an "inventor". These were quickly dismissed as a ground for limiting inventors to legal persons, as the Court noted that the definitions have moved on from their historical meanings and can't be limited in the same way. 

The next issue was who the patent would be granted to under s. 15 of the Act (but focusing on still who could be classed as an "inventor"). The section includes four different classes of person to whom a patent can be granted. 

The first is where the inventor is a person. The Court quickly determined that, as DABUS is not a person, the section won't apply in this instance, but also noted that this does not demonstrate that the concept of an "inventor" is limited to that of a "person". An AI system could indeed be the inventor, yet not be granted a patent, as it doesn't fulfill the requirements of s. 15. 

The second one is a person who would, on the grant of a patent for the invention, be entitled to have the patent assigned to them. This will be discussed in more detail following the other classes. 

The third one is a person who derives title to the invention from the inventor or a person mentioned in the second class. Again, this will be discussed further in the context of the second class. 

Finally, the fourth one is a person who is the legal representative of a deceased person in one of the earlier named classes, which, unsurprisingly, the Court determined to not apply here.

Returning to the question of the second and third classes, the Court first highlighted that Mr Thaler could indeed fall under the second class as the programmer and operator of DABUS, through which he could acquire title in any inventions over which patents may be granted. Title in a patent would flow to Mr Thaler automatically, and so wouldn't require an assignment of rights by the inventor (here DABUS) to him. Additionally, the Court determined that s. 15 doesn't require an inventor at all, only that an applicant is entitled to have a patent assigned to them. 

Also, Mr Thaler, according to the Court, would fall under the third class as, on the face of it, he has derived title to the invention from DABUS. Although the AI system cannot legally assign any inventions to Mr Thaler, the language of s. 15 does allow one to derive rights in an invention even outside a legal assignment of those rights. All of Mr Thaler's rights in any invention become his by virtue of his ownership and operation of the inventor, DABUS. 

The Court succinctly summarised the matter as "generally, on a fair reading of ss. 15(1)(b) and 15(1)(c), a patent can be granted to a legal person for an invention with an artificial intelligence system or device as the inventor".

Ultimately what the case focused on is whether a valid patent application has been made, rather than who will own any patent that might be granted in the future. 

What the Court determined was that "...an inventor as recognised under the Act can be an artificial intelligence system or device. But such a non-human inventor can neither be an applicant for a patent nor a grantee of a patent". This still leaves the ownership of a patent in the air, but, focusing on what was discussed by the Court, it is more than likely that ownership in any AI inventions would automatically pass to the owner and operator of that AI system. 

The Australian Court's approach to this question departs hugely from the viewpoints of the US, the UK and the EU, where AI systems have been rejected as potential inventors. It will be very interesting to see the law change and evolve in this area in the years to come, in particular through any decisions on the granting of a patent to the owner of an AI system, but so far at least Australia seems to be in the vanguard of allowing AI systems to potentially push the envelope on innovation and of protecting those innovations in the process. 

10 August, 2021

Copying Not Allowed - Search Engines Copying Databases Infringes Database Rights If It Causes Financial Detriment, says CJEU

Following an earlier opinion from the Advocate General, the CJEU has set out its findings on the infringement of database rights by search engines indexing other websites' content. Discussed earlier on this blog, the decision is an important one in setting the potential boundaries between database rights and indexing, especially when indexing is so ubiquitous in how the Internet as we know it functions. The decision will leave some rightsholders wanting more, as the position now swings firmly in the direction of the indexing websites.

The decision in CV-Online Latvia SIA v Melons SIA concerned the website CV-Online, which included a database of job advertisements published by various employers. The website also included various metatags, or 'microdata', which, while not visible to the users of the website, contained key information to allow internet search engines to better identify the content of each page in order to index it correctly. These metatags included keywords like ‘job title’, ‘name of the undertaking’, ‘place of employment’ and ‘date of publication of the notice’.
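To make the mechanics concrete, here is a minimal sketch (not CV-Online's or Melons' actual code, and with purely hypothetical tag names) of how a specialised search engine might read such keyword meta tags from a job-ad page, using only Python's standard library:

```python
from html.parser import HTMLParser

# Illustrative job-ad page; the meta tag names below ("job_title" etc.)
# are assumptions for this sketch, not CV-Online's real markup.
PAGE = """
<html><head>
<meta name="job_title" content="Software Engineer">
<meta name="undertaking" content="Example Ltd">
<meta name="place_of_employment" content="Riga">
</head><body>Job advertisement text...</body></html>
"""

class MetaTagReader(HTMLParser):
    """Collects name/content pairs from <meta> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if "name" in attr and "content" in attr:
                self.tags[attr["name"]] = attr["content"]

reader = MetaTagReader()
reader.feed(PAGE)
print(reader.tags["job_title"])  # -> Software Engineer
```

An aggregator running something like this over many job sites could then build its own searchable index of the collected fields — which is precisely the kind of 'extraction' and 're-utilisation' the dispute concerns.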

Melons operate a separate website containing a search engine that specializes in job ads. The search engine allows users to search a number of websites containing job ads in one go according to specific criteria that they set. The Melons' website then produces results based on that search, where users can click on links that take them to that particular job website where the ad is located (including CV-Online). Unhappy about this indexing of their content, CV-Online took Melons to court for a breach of their 'sui generis' right under Article 7 of Directive 96/9. The case progressed through the Latvian courts, ultimately ending up with the CJEU this Summer. 

The CJEU was asked two questions, which the court decided together. 

The question posed to the court asked: "whether... the display, by a specialised search engine, of a hyperlink redirecting the user of that search engine to a website, provided by a third party, where the contents of a database concerning job advertisements can be consulted, falls within the definition of ‘re-utilisation’ in Article 7(2)... and... whether the information from the meta tags of that website displayed by that search engine is to be interpreted as falling within the definition of ‘extraction’ in Article 7(2)(a) of that directive".

The court first discussed 'sui generis' rights in general, which allow rightsholders to ensure the protection of a substantial investment in the obtaining, verification or presentation of the contents of a database. What is required for a database to be protectable is "...qualitatively and/or quantitatively a substantial investment in the obtaining, verification or presentation of the contents of that database". Without that investment, the courts will not protect a database under sui generis rights. The court assumed that this would be the case for CV-Online; however, the matter would ultimately be decided by the national courts following the CJEU's decision. 

For there to be an infringement of sui generis rights there has to be an ‘extraction’ and/or ‘re-utilisation’ within the meaning of the Directive, which, as summarised by the court, includes "...any act of appropriating and making available to the public, without the consent of the maker of the database, the results of his or her investment, thus depriving him or her of revenue which should have enabled him or her to redeem the cost of that investment".

The court discussed the applicability of the above to the operation of Melons' website and determined that such a search engine would indeed fall within the meaning of extraction and re-utilisation of the databases that the website copies its information from (including in indexing that content). However, this extraction/re-utilisation is only prohibited if it has the effect of depriving the database maker of income intended to enable him or her to redeem the cost of that investment. This is important, since without this negative financial impact the copying will be allowed under EU law.

The court also highlighted that a balance needs to be struck between the legitimate interest of the makers of databases in being able to redeem their substantial investment and that of users and competitors of those makers in having access to the information contained in those databases and the possibility of creating innovative products based on that information. Content aggregators, such as Melons, are argued to add value to the information sector through their acts and allow for information to be better structured online, thus contributing to the smooth functioning of competition and to the transparency of offers and prices.

The referring court would therefore have to look at two issues: (i) whether the obtaining, verification or presentation of the contents of the database concerned attests to a substantial investment; and (ii) whether the extraction or re-utilisation in question constitutes a risk to the possibility of redeeming that investment.

The court finally summarised its position on the questions as "...Article 7(1) and (2)... must be interpreted as meaning that an internet search engine specialising in searching the contents of databases, which copies and indexes the whole or a substantial part of a database freely accessible on the internet and then allows its users to search that database on its own website according to criteria relevant to its content, is ‘extracting’ and ‘re-utilising’ the content of that database within the meaning of that provision, which may be prohibited by the maker of such a database where those acts adversely affect its investment in the obtaining, verification or presentation of that content, namely that they constitute a risk to the possibility of redeeming that investment through the normal operation of the database in question, which it is for the referring court to verify".

The decision will, as said above, come as a big blow to database owners, as the added requirement of financial detriment will be a big hurdle for many to overcome in order to protect their databases from potential copying. The CJEU also notes the utility of indexing and the 'copying' of such databases by content aggregators as useful means of organizing the Internet, which leads to the question of where the limits actually are. It will remain to be seen how the decision will impact databases going forward, but one might imagine the more commercial value in a database the less likely the courts will allow copying of that database. 

08 June, 2021

Broken Down and Built Again - AG Szpunar Opines Whether the Decompilation of Computer Programs Infringe Copyright

Hot on the heels of the recent decision in Oracle v Google, copyright protection in computer programs is facing its latest hurdle, this time in the European courts. A big aspect of software maintenance is the fixing of ever-present errors, be they due to the implementation of the software or the software itself. When you encounter such bugs you might have to decompile the program, which, put simply, means converting an executable computer program back into source code, allowing you to fix errors and then recompile the program to enable its execution. While decompiling can be used for more nefarious purposes (such as enabling the 'cracking' of protected software), it can allow for the fixing of legitimate issues in licensed copies of software. But, because of these potential nefarious uses, would decompilation breach copyright? Luckily the CJEU is on the case, but before the main court looked at the issue, Advocate General Szpunar gave his two cents a little while ago.
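As a purely illustrative aside (nothing to do with the software at issue in the case), Python's standard `dis` module gives a flavour of what this kind of work involves: it disassembles a compiled function into a human-readable listing of its instructions, a simple cousin of the decompilation a developer would perform to locate an error in a program whose source they don't have:

```python
import dis
import io

def add(a, b):
    # A trivial function whose compiled form we will inspect.
    return a + b

# Disassemble the function's bytecode into a readable listing --
# turning the executable form back into something a human can read,
# much as a decompiler would for a full program.
buf = io.StringIO()
dis.dis(add, file=buf)
listing = buf.getvalue()

# The listing names the low-level operations the interpreter runs,
# such as loading the two arguments and returning their sum.
print(listing)
```

Real decompilers go further and reconstruct approximate source code, but the principle is the same: recovering a readable view of a program from its executable form so that errors can be found and fixed.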

The case of Top System SA v Belgian State concerned a number of applications developed by Top System for SELOR, the Belgian Selection Office of the Federal Authorities, which is in charge of the selection and orientation of future personnel for the authorities' various public services; these applications included SELOR Web Access and eRecruiting. Many of the applications developed by Top System contained tailor-made components built to SELOR's specifications. In February 2008 Top System and SELOR entered into a number of agreements, one of which concerned the installation and configuration of a new development environment as well as the integration of the sources of SELOR’s applications into, and their migration to, that new environment. As with many projects, there were issues which affected the various applications. Following these problems Top System took SELOR (and the Belgian state) to court for copyright infringement due to the decompilation of Top System's software. After the usual progression of the matter through the Belgian courts, it ultimately ended up with the CJEU, and also on the desk of the AG. The referring court asked the CJEU two questions. 

The first question asked "...whether Article 5(1) of Directive 91/250 permits a lawful acquirer of a computer program to decompile that program where such decompilation is necessary in order to correct errors affecting its functioning"

The AG first noted that, while computer programs have copyright protection against reproduction, that protection is limited by the nature of how computer programs work (i.e. they need to be reproduced in a computer's memory in order to run). Strict enforcement would impede consumers' use of computer programs and therefore isn't desirable. The same applies to the protection against alteration. The Directive does take account of this, setting out that the reproduction and alteration of computer programs "...do not require authorisation by the rightholder where they are necessary for the use of the program by the lawful acquirer, including for error correction". However, Article 5 does leave open the possibility of restricting these acts through contractual provisions. 

Even though that possibility of contractually restricting reproduction or alteration exists, it cannot extend to all such activities: the acts of loading and running a computer program that are necessary for its use may not be prohibited by contract. 

The AG then moved onto the matter of decompilation for the purposes of correcting errors. He agreed with the submissions made by the interested parties in that decompilation is covered by the author's exclusive rights under the Directive, even though there is no express 'decompilation' provision there. 


Although Top System argued otherwise, decompilation cannot, according to the AG, be excluded from those rights under Article 6 (which allows for the reproduction and translation of a computer program where indispensable for the operation of that program with other programs). The relevant provision therefore remains Article 5, and Article 6 has no bearing on the exclusion of decompilation; nor does Article 6 set out the only instances in which it is possible to decompile a computer program.

The AG therefore proposed that the first question be answered as "... Article 5(1)... is to be interpreted as permitting a lawful acquirer of a computer program to decompile that program where that is necessary in order to correct errors affecting its functioning"

He then moved onto the second question which asked "...in the event that Article 5(1)... were to be interpreted as permitting a lawful acquirer of a computer program to decompile that program where that is necessary in order to correct errors, that decompilation must satisfy the requirements laid down in Article 6 of that directive or, indeed, other requirements"

Article 6 permits, as discussed above, the decompilation of a computer program where that is necessary in order to ensure the compatibility of an independently created program with it. The AG discussed the specific requirements under Article 6, namely that: (i) they only apply to those who have lawfully acquired the program; (ii) the decompilation must be necessary in order for that program to be used in accordance with its intended purpose and for error correction; and (iii) the intervention of the user of the computer program must be necessary from the perspective of the objective pursued. 

In relation to the second condition, the AG noted that an 'error' should be defined as "...a malfunction which prevents the program from being used in accordance with its intended purpose". One has to note that this does not include the amendment and/or improvement of a program, but simply the enabling of the use of the program for its intended purpose. 

On the third condition, the AG specified that, while the decompilation of a program can be necessary, it isn't always. He determined that "...the lawful acquirer of a computer program is therefore entitled, under [Article 5(1)], to decompile the program to the extent necessary not only to correct an error stricto sensu, but also to locate that error and the part of the program that has to be amended". So, a lawful user can decompile a program both to locate and to fix any errors, which gives them considerable latitude in doing so. Nonetheless, the AG noted that the decompiled program cannot be used for other purposes. 

The AG therefore proposed the answer to the second question as "...Article 5(1)… is to be interpreted as meaning that the decompilation of a computer program... by a lawful acquirer, in order to correct errors in that program, is not subject to the requirements of Article 6 of that directive. However, such decompilation may be carried out only to the extent necessary for that correction and within the limits of the acquirer’s contractual obligations"

The CJEU's decision will be an interesting one and will undoubtedly impact the ability of programmers and companies to decompile computer programs to correct any errors they might find during the use of those programs. It would make sense to allow decompilation for this restricted purpose, since without that right major errors could render programs useless until the provider can, and decides to, fix them. It remains to be seen whether the CJEU will agree with the AG, but it would seem sensible for the court to follow his responses.