Can artificial intelligence be registered as a patent inventor? A landmark Australian court decision says it can

For decades, only human beings could be recognised as inventors by the patent office - but not anymore. 

In a landmark decision, an Australian court has set a groundbreaking precedent, deciding an artificial intelligence (AI) system can be legally recognised as an inventor in patent applications.

That might not sound like a big deal, but it challenges a fundamental assumption in the law: that only human beings can be inventors.

The AI machine called DABUS is an "artificial neural system" and its designs have set off a string of debates and court battles across the globe.

On Friday, Australia's Federal Court made the historic finding that "the inventor can be non-human".

It came just days after South Africa became the first country to defy the status quo and award a patent recognising DABUS as an inventor.

AI pioneer and creator of DABUS, Stephen Thaler, and his legal team have been waging a ferocious global campaign to have DABUS recognised as an inventor for more than two years.

They argue DABUS can autonomously perform the "inventive step" required to be eligible for a patent.

Dr Thaler says he is elated by the South African and Australian decisions, but for him it’s never been a legal battle.

"It’s been more of a philosophical battle, convincing humanity that my creative neural architectures are compelling models of cognition, creativity, sentience, and consciousness," he says.

"The recently established fact that DABUS has created patent-worthy inventions is further evidence that the system 'walks and talks' just like a conscious human brain."

Ryan Abbott, a British attorney leading the DABUS matter and the author of The Reasonable Robot: Artificial Intelligence and the Law, says he wanted to advocate for artificial inventorship after realising the law's "double standards" in assessing behaviour by an AI compared to behaviour by a human being.

"For example, if a pharmaceutical company uses an AI system to come up with a new drug … they can't get a patent, but if a person does exactly the same thing they can," Dr Abbott says.


How it works

Short for "device for the autonomous bootstrapping of unified sentience", DABUS is essentially a computer system that's been programmed to invent on its own.

Getting technical, it is a "swarm" of disconnected neural nets that continuously generate "thought processes" and "memories", which over time independently generate new and inventive outputs.
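
Dr Thaler's actual implementation is proprietary and far more elaborate, but the general idea described above (many small networks generating candidate outputs, with only the sufficiently novel ones retained as "memories") can be sketched in a few lines of Python. Everything in the toy sketch below, from the network sizes to the novelty threshold, is invented purely for illustration and is not drawn from DABUS itself.

    import random
    import numpy as np

    random.seed(0)
    rng = np.random.default_rng(0)

    def tiny_net(n_in, n_out):
        # One member of the "swarm": a small, randomly initialised neural net.
        w1 = rng.normal(size=(n_in, 8))
        w2 = rng.normal(size=(8, n_out))
        return lambda x: np.tanh(np.tanh(x @ w1) @ w2)

    # A swarm of independent nets, each able to transform a shared "idea" vector.
    swarm = [tiny_net(16, 16) for _ in range(50)]
    memories = []  # outputs kept so far, standing in for "memories"

    def novelty(candidate, memories):
        # Score a candidate by its distance from everything already remembered.
        if not memories:
            return float("inf")
        return min(np.linalg.norm(candidate - m) for m in memories)

    # Generate "thoughts" by chaining a few random nets; keep only novel results.
    for step in range(200):
        idea = rng.normal(size=16)
        for net in random.sample(swarm, 3):
            idea = net(idea)
        if novelty(idea, memories) > 1.0:  # arbitrary novelty threshold
            memories.append(idea)

    print(f"kept {len(memories)} 'novel' outputs out of 200 attempts")

In this toy version a "memory" is simply a vector that differs enough from anything generated before; the claim made for DABUS is that chaining such processes at much larger scale can yield genuinely new and useful designs.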

In 2019, two patent applications naming DABUS as the inventor were filed in more than a dozen countries and the European Union.

The applications list DABUS as the inventor, but Dr Thaler would still own any patents granted, meaning the team is not advocating for property rights for AI.

The first invention is a design for a container based on "fractal geometry", which is claimed to be the ideal shape for stacking and for handling by robotic arms.

The second application is for a "device and method for attracting enhanced attention", which is a light that flickers rhythmically in a specific pattern mimicking human neural activity.

The DABUS applications sparked months of deliberation in intellectual property offices and courtrooms around the world.

The case went to the High Court in the UK, where the appeal was dismissed, with the same result in US and EU proceedings.

Justice Jonathan Beach of the Australian Federal Court has become the first to hand down a judgement in favour of Dr Thaler, ruling "an inventor … can be an artificial intelligence system or device".

Dr Abbott says: "This is a landmark decision and an important development for making sure Australia maximises the social benefits of AI and promotes innovation."

Dr Thaler's Australian representatives at the Allens law firm say they're delighted with the result.

"The case has not been successful in any other parts of the world except South Africa, which was an administrative decision that didn't involve this sort of judicial consideration," Richard Hamer, the Allens partner running the case, says.

For this reason, he says, Justice Beach's comprehensive 41-page judgement will certainly set a precedent as international jurisdictions continue to deliberate on the issue.

"AI aiding [inventions] has been overtaken by AI actually making the inventions and it's critical that those inventions are able to be patented because in the future they are going to be such an important part of innovation and the aim of the patent system is to encourage innovation … and encourage inventions to be published in patent specifications", Mr Hamer says.

IP Australia says the Commissioner of Patents is considering the decision and won't comment further at this stage.

Testing the boundaries of AI

Dr Thaler's legal team says its aim is to test the boundaries of the patent system and instigate reform.

"It isn't a good system because as technology advances we're going to move from encouraging people to invent things to encouraging people to build AI that can invent things," Dr Abbott says.

"In some fields AI may have a significant advantage over a person when it comes to inventing, for example when it requires vast uses of data or very extensive computational resources."

Already the current system has prevented numerous patents from being registered because the inventions were generated autonomously by AI, and this is causing uncertainty in AI investment.

Take technology company Siemens as an example: In 2019 it was unable to file a patent on a new car suspension system because it was developed by AI.

Its human engineers would not list themselves as inventors because they could not claim to have had input in the inventing process, and the US imposes criminal penalties for knowingly naming the wrong inventor on a patent application.

"We want a patent system that adequately encourages people to make AI that develops socially valuable innovations", Dr Abbott says.

He says the team will keep appealing unfavourable decisions, and he thinks the legal proceedings could drag on for up to a decade in some jurisdictions.

We're all training AI

The DABUS case is part of a larger debate about how existing and emerging AI technologies are regulated.

The law can be notoriously slow to reform and accommodate new technologies, but with innovation picking up at an increasingly rapid pace, many argue politicians should be more open to change and not be limited by laws made when such advancements could not have been contemplated.

Most modern AI is based on machine learning, which means AI systems are trained by teams of people and learn from the data they're fed.

Because an AI system keeps accumulating "knowledge" and doesn't forget the way humans do, its learning potential compounds over time.
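
To make "learning from the data they're fed" concrete, the toy Python sketch below fits a simple model to a hidden rule by repeatedly nudging its parameters to reduce prediction error. It illustrates supervised machine learning in the abstract, not any particular commercial system, and every number in it is made up for the example.

    import numpy as np

    # Toy "training data": the system is only as good as what it is fed.
    # The hidden rule here is y = 3x + 2, plus a little noise.
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=200)
    y = 3 * x + 2 + rng.normal(scale=0.1, size=200)

    # The model starts out knowing nothing: two parameters set to zero.
    w, b = 0.0, 0.0
    learning_rate = 0.1

    # "Training" means repeatedly nudging the parameters to shrink the gap
    # between the model's predictions and the data it is fed.
    for epoch in range(500):
        predictions = w * x + b
        error = predictions - y
        w -= learning_rate * np.mean(error * x)  # gradient step for the slope
        b -= learning_rate * np.mean(error)      # gradient step for the intercept

    print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges towards 3 and 2

Commercial systems apply the same principle with vastly more parameters and vastly more data, which is where the information we generate by scrolling, tagging and talking to our devices comes in.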

AI trainers are everywhere now. Some countries even have "AI sweatshops" where thousands of employees train algorithms.

And it's not just workers training these systems — we all are.

Social media platforms use AI to curate our feeds, suggest content and ads, recognise and remove harmful content, and use facial recognition to help suggest people to tag in our photos, or, in TikTok's case, to monitor our emotions and personality traits.

And it's not just social media — have an Amazon Alexa? You're an AI trainer.

The vast amounts of data we feed into these everyday AI systems just by scrolling or engaging with them help them get more intelligent, for better or worse.

As former Google design ethicist Tristan Harris said in Netflix documentary The Social Dilemma: "If you are not paying for the product, you are the product."

AI's capacity to rival human creativity and inventiveness is already closer to reality than conjecture, with AI systems now inventing, creating artworks and producing music.

In 2018, an AI-generated artwork on auction at Christie's sold for more than $600,000. Since then the AI art industry has been drawing in a steady stream of interest and income, made even more lucrative with the arrival of NFTs (non-fungible tokens).

But under current laws, AI-generated artworks can't be protected by copyright, which automatically protects original creative works.

AI has already proven itself capable of outgunning the human brain's analytical capability: IBM's Watson famously beat human champions at Jeopardy! a decade ago, and Deep Blue defeated world chess champion Garry Kasparov back in 1997.

Perhaps giving computers recognition as creators and inventors is the final frontier in recognising the creation of the true artificial intelligence envisioned by Alan Turing.


Could AI create harmful inventions?

One of the arguments for allowing AI systems to be listed as inventors or creators is that it facilitates accountability.

Patents, for example, once accepted and registered, are published on a public register, so anyone can look up details about the invention.

Although the two DABUS inventions are useful, as autonomously inventing technologies become more commonplace there's certainly potential for the development of less beneficial and potentially harmful inventions.

Commentators suggest patent offices develop common guidelines to govern AI generally and any inventions it produces.

With the unknown possibilities of AI, the attribution of inventorship incentivises the full disclosure of AI-generated inventions.

Dr Abbott vehemently denies that artificial inventors or creators would give rise to any question of legal personhood, or of recognising a machine as a person under the law.

Similarly, Justice Beach said in his judgement that in the discussion of AI he was "leaving to one side any possible embodiment of awareness, consciousness or sense of self".

In 2017, Saudi Arabia controversially granted citizenship to a robot called Sophia, sparking ethical discussions around giving AI legal personhood and questions about sentient machines.

But Bruce Baer Arnold, associate professor of law at the University of Canberra, says Sophia's citizenship was purely a publicity stunt and that true artificial intelligence, or "sentient machines" with consciousness, is a long way off.

However, he says it is important we have legal and ethical discussions around the potential of AI.

"As a community, we need a meaningful public discussion about [AI] and [to] start preparing for some of the difficult questions that might come up," Dr Arnold says.

Dr Arnold also says there's no reason to panic about intelligent machines because academics are "just having fun with ideas" and pushing the boundaries of what personhood, human rights and machine rights might look like in the future.

He says this decision to recognise an AI system as an inventor does not mean the AI systems in your devices are going to end up with the right to vote.

"All countries are grappling with this," says Dr Arnold, from politicians and academics to AI developers, but the reality of sentient AI is, perhaps thankfully, one we don't have to face — just yet.
