It would be remiss of Wakefields not to jump on the bandwagon of discussing artificial intelligence – or ‘AI’ in its abbreviated form. From self-driving cars to ChatGPT, AI has succeeded in igniting the imagination of the world – with varying degrees of positivity.
Let’s get this out of the way: I am a lawyer, not a tech expert. My programming knowledge extends as far as setting up automatic signatures in Microsoft Outlook, so this blog will not cover any technical aspects of AI. Nor will we address the implications of AI replacing humans in the workforce, though we are certainly starting to see this already.
What we want to achieve in this article is to look at some of the legal complications and traps we can foresee in the AI world. If you intend to use AI in your workplace, or as a tool in your life, this blog is for you.
I want to first outline two of the main issues currently debated around AI:
Data Sets
Many open-source AI models have ‘scraped’ as much publicly available data from the internet as they can and have taught themselves from it. The internet is not the friendliest of places. It is certainly not unbiased or neutral.
Black Box AI
Much of the commonplace AI is ‘non-explainable’. The developers cannot tell us how, or why, an AI came up with an answer. AI does not make a habit of showing its working or process. These types of AI are called ‘black box AI’.
Thus, data sets and black box AI are two areas in particular where we can encounter some thorny legal issues.
Employment Law
AI is already being used in the recruitment space. In America, if the role you are applying for is popular, it is common for your CV never to be seen by a human. AI software can be purchased to sort through CVs and job applications and produce shortlists for the employer to review.
To see how these recruitment tools could be problematic, we can look at a famous example from the medical field: the ‘cancerous ruler’ model.[i] Dermatologist Roberto Novoa teamed up with Stanford University’s computer science department to see if they could train an AI model to diagnose skin cancer. The model was trained on 129,000 images of benign and already diagnosed malignant lesions, and had some success in correctly identifying the cancerous ones.
But…
When the researchers looked further into the AI’s processes (as they could do, given it was not a black box system), they found that it was far more likely to diagnose any picture that included a ruler as cancerous. The AI had taught itself that rulers were malignant, because medical images of cancerous lesions were more likely to include a ruler for scale.
I don’t think you need to be a lawyer to see how that could lead to some big problems in our employment law context, especially with black box AI. Under New Zealand law, when you are hiring an employee, you must not discriminate on the prohibited grounds set out in the Human Rights Act 1993 (age, sex, religious belief, sexual orientation and so on). However, you have no control over what a black box AI might teach itself, or whether it will discriminate on one or several of those grounds. Similarly, depending on the AI model, you may have no control over what data set it was trained on.
There is no doubt that it is time-consuming to go through multiple CVs and job applications. Some roles may have hundreds of eager applicants to assess. It can be tempting to think that an AI with the ‘right’ data set will produce fair and balanced results in a fraction of the time.
This kind of thinking is already commonplace in the USA. In January 2023, the chair of the USA’s Equal Employment Opportunity Commission, Charlotte Burrows, estimated that some 83% of employers, including 99% of Fortune 500 companies, used some form of automated tool as part of their hiring processes.[ii] In 2018, Amazon.com Inc had to scrap an AI recruiting tool it had been developing after realising it was teaching itself that male candidates were preferable.[iii]
If you are using AI, you open yourself to a huge risk that it will teach itself something you did not foresee. If that AI is a black box AI, it can be difficult even to realise this is happening, and fixing it can be near impossible.
There have not yet been any published cases in New Zealand of an AI recruitment tool landing an employer in hot water, but it is only a matter of time. An employer cannot abdicate their responsibility not to discriminate by handing the job of sorting candidates to a robot.
We would expect the Human Rights Commission or the Human Rights Review Tribunal to be rather unforgiving of the excuse that the employer didn’t mean for discrimination to occur, especially if black box AI was used.
If you are considering using AI in recruitment, our top tips are: know what data the tool was trained on, avoid black box systems whose decisions cannot be explained, and keep a human reviewing the shortlist and making the final decision.
Intellectual Property Law
We have already seen a large volume of discourse about AI and intellectual property law. The important thing to be aware of is that, under the current law, you will not be able to copyright or otherwise protect work generated by AI, and AI cannot legally be an “inventor”.[iv]
The data set issue comes into play here too. You need to be reasonably confident about what data the AI you are using has been trained on if you want to reproduce its work. Getty Images is currently suing Stability AI, the maker of an AI art generator, in the United States for allegedly using its content.
Privacy Law
The security of AI software also poses issues in the context of privacy law. Generative AI requires huge quantities of data to learn from, and as discussed above, this can include massive amounts of personal information.
The obvious concern is the potential for data breaches. Cybercrime is on the rise and becoming ever more sophisticated. The World Economic Forum cites cybercrime as one of the gravest threats facing businesses.[v]
If you are planning on using AI in your line of work in a way where you provide the machine with data and information, you will need to ensure that your terms and conditions are airtight. This is especially true if you host a website or collect customers’ personal information.
Technology is a difficult area to legislate, and we would be surprised if 2023 saw any new legislative proposals in this area.
This blog is not here to fearmonger, nor are we commenting on the ethics of AI.
Today’s AI is the worst it will ever be. It needs to be treated with caution and held to account for its conclusions and processes, especially if people plan to use it in the course of business or in a workplace.
If you are thinking of utilising AI in your workplace in ways that may have employment, intellectual property or privacy implications, we would encourage you to get in touch. We are experts in these areas of law (and more) and keep up to date with all the latest changes and developments. Don’t get yourself into trouble because of AI – use it effectively, legally and with caution. Contact the friendly team at Wakefields Lawyers today on (04) 970 3600 or email info@wakefieldslaw.com.
– Sam Wood (Solicitor)
This Article was not written with the aid of ChatGPT.
References
[i] Paper published in Nature, February 2017: https://www.nature.com/articles/nature21056.epdf
[ii] NPR, 31 January 2023: https://www.npr.org/2023/01/31/1152652093/ai-artificial-intelligence-bot-hiring-eeoc-discrimination
[iii] Reuters, 2018: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
[iv] Thaler v Commissioner of Patents [2023] NZHC 554
[v] World Economic Forum, 2022: https://www.weforum.org/agenda/2022/07/fraud-cybercrime-financial-business/