30 May, 2018

A New Intelligence - UK House of Lords Releases Report on Artificial Intelligence

Artificial intelligence is absolutely a hot-button issue these days, particularly with a focus on what's coming in the near and more distant future and AI's potential impact on people's lives (both good and bad). This writer is very interested in AI, having recently published an article discussing the same in relation to copyright, and in that vein was quite thrilled with the release of the House of Lords' report on AI last month. The report is very detailed and discusses many facets of the technology and its potential impact, but what recommendations does it give from a legal perspective?

The Report (titled AI in the UK: ready, willing and able?) considers the future development, implementation and impact of AI in the UK, and recommends that the government take certain actions sooner rather than later. This article will endeavour to focus on some of the more interesting, potentially IP-specific issues, and on general legal considerations.

A big point in the Report is the mitigation of risks associated with AI. Largely this relates to liability for an AI's actions, negligence or malfunction, which, as the law stands, is not covered. The Report recommends that "…the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area. At the very least, this work should establish clear principles for accountability and intelligibility". Without accountability, one could imagine AI skirting liability, causing potential issues for those wishing to pursue damages resulting from an AI's decisions.

The consideration of liability ties into the matter of whether AI should be treated the same as humans, or have separate legal or non-legal entity status. For example, should IP rights be awarded to the AI, or the AI's creators (more on which here)? Both paths have their issues, but the Report fails to address this aspect in detail.

Another point of contention in the Report is the amassing of data by large corporations, which would, in a way, allow them to control the development of AI and the marketplace for the same. The Report recommends that "…publicly-held data be made available to AI researchers and developers" to facilitate AI development by smaller players. Even so, it does highlight a need for "…legal and technical mechanisms for strengthening personal control over data, and preserving privacy" in the light of AI; however, this has arguably been largely achieved through the introduction of the GDPR this month.

A human future without TPS reports
In an attempt to curb any misuse of data, or its monopolisation, the Report recommends "…the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK". This could create a data competition framework, where companies with large swathes of data could be prevented from merging due to competition issues, or simply to prevent the centralisation of the data required for AI creation and/or operation.

The Report does discuss the IP issues arising from the development of AI in university settings, and the commercialisation of research. As was discussed in the Hall-Pesenti Report last year, licensing of technologies could be problematic, and both the Report and Hall-Pesenti have sought to mitigate this. In this vein, the Report recommends that "…universities should use clear, accessible and where possible common policies and practices for licensing IP and forming spin-out companies", and that a policy should be drafted for universities to enforce the same. Even so, universities should be able to protect the know-how relating to AI, but within a reasonable framework.

Although not discussed in the House of Lords' report, the Hall-Pesenti Report also highlights a need for the reform of copyright in relation to data published in research, the mining of which can currently infringe copyright. This is problematic since "[t]his restricts the use of AI in areas of high potential public value, and lessens the value that can be gained from published research, much of which is funded by the public". In terms of reform the report suggests that "…the UK should move towards establishing by default that for published research the right to read is also the right to mine data, where that does not result in products that substitute for the original works". For the development of AI this makes perfect sense, but when that data relates to human individuals, particularly in the light of the GDPR that recently came into force, we should be cautious about giving AI free rein over all data. A balance needs to be struck between the interests of individuals and the robust development of AI, provided the developments are for the public good.

As one can imagine, AI is a very difficult and potentially thorny subject, and there will without a doubt be many pitfalls that we fall into and have to play catch-up on. Nonetheless, the potential of the technology could well outweigh these issues, though maybe this writer is too optimistic about AI's future.
