Will an artificial intelligence system be recognized as the inventor of new ideas? Some academics are testing this question with two patents filed on behalf of an AI named Dabus.
The AI has designed interlocking
food containers that are easy for robots to grasp and a warning light that
flashes in a rhythm that is hard to ignore.
Patent offices insist that innovations are attributed to humans, to avoid the legal complications that would arise if corporate inventorship were recognized. That stance could see them refusing to assign any intellectual property rights for AI-generated creations.
As a result, two professors from the
University of Surrey have teamed up with the Missouri-based inventor of Dabus
AI to file patents in the system's name with the relevant authorities in the
UK, Europe and US.
Dabus was previously best known for creating surreal art
thanks to the way "noise" is mixed into its neural networks to help
generate unusual ideas.
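The article does not explain the mechanism in any detail, but the general technique it alludes to, injecting random noise into a trained network so that it produces outputs it was not explicitly trained to generate, can be sketched in a few lines. The toy network, weights and noise scale below are purely illustrative assumptions, not Dabus's actual design:

```python
import numpy as np

# Minimal sketch: a tiny feed-forward "generator" whose hidden
# activations are perturbed with Gaussian noise. The noise nudges the
# network away from the outputs it would normally produce, which is one
# generic way to encourage unusual combinations of learned features.
# (Hypothetical illustration only; not Dabus's architecture.)

rng = np.random.default_rng(0)

W1 = rng.normal(size=(16, 8))   # stand-ins for trained weights
W2 = rng.normal(size=(8, 4))

def generate(seed_vector, noise_scale=0.0):
    hidden = np.tanh(seed_vector @ W1)
    hidden = hidden + rng.normal(scale=noise_scale, size=hidden.shape)
    return np.tanh(hidden @ W2)

seed = rng.normal(size=(16,))
print(generate(seed, noise_scale=0.0))   # "faithful" output
print(generate(seed, noise_scale=0.5))   # noisier, more unusual output
```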
Unlike some machine-learning systems, Dabus has not been
trained to solve particular problems.
Instead, it seeks to devise and develop new ideas -
"what is traditionally considered the mental part of the inventive
act", according to creator Stephen Thaler
The first patent describes a food container that uses fractal designs to create pits and bulges in its sides. One benefit is that several containers can be fitted together more tightly, helping them to be transported safely. Another is that it should be easier for robotic arms to pick them up and grip them.
The second describes a lamp designed to flicker in a rhythm
mimicking patterns of neural activity that accompany the formation of ideas,
making it more difficult to ignore.
Law professor Ryan Abbott told BBC News: "These days,
you commonly have AIs writing books and taking pictures - but if you don't have
a traditional author, you cannot get copyright protection in the US.
"So with patents, a patent office might say, 'If you
don't have someone who traditionally meets human-inventorship criteria, there
is nothing you can get a patent on.'
"In which case, if AI is going to be how we're
inventing things in the future, the whole intellectual property system will
fail to work."
Instead, he suggested, an AI should be recognised as being
the inventor and whoever the AI belonged to should be the patent's owner,
unless they sold it on.
However, Prof Abbott acknowledged lawmakers might need to
get involved to settle the matter and that it could take until the mid-2020s to
resolve the issue.
A spokeswoman for the European Patent Office indicated that
it would be a complex matter.
"It is a global consensus that an inventor can only be
a person who makes a contribution to the invention's conception in the form of
devising an idea or a plan in the mind," she explained.
"The current state of technological development
suggests that, for the foreseeable future, AI is... a tool used by a human
inventor.
"Any change... [would] have implications reaching far
beyond patent law, ie to authors' rights under copyright laws, civil liability
and data protection.
"The EPO is, of course, aware of discussions in
interested circles and the wider public about whether AI could qualify as
inventor."
The UK's Patents Act 1977 currently requires an inventor to
be a person, but the Intellectual Property Office is aware of the issue.
"The government believes that AI technology could
increase the UK's GDP by 10% in the next decade, and the IPO is focused on
responding to the challenges that come with this growth," said a
spokeswoman.
Predictive healthcare AI ‘breakthrough’
On a related subject, DeepMind, the Google-owned UK AI research firm, has published a research letter in the journal Nature discussing the performance of a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury (AKI).
The company says its model is able to accurately predict
that a patient will develop AKI “within a clinically actionable window” up to
48 hours in advance.
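The paper itself is not reproduced here, but the shape of the task the article describes, producing at every timestep a probability that AKI will occur within the next 48 hours, can be illustrated with a deliberately simplified sketch. Everything below (the single lab feature, window size and weights) is a hypothetical placeholder rather than DeepMind's published model:

```python
import numpy as np

# Hedged sketch of continuous risk prediction: at each timestep of a
# patient's record, emit the probability that AKI occurs within a fixed
# look-ahead window. A sliding window of recent lab values feeds a
# simple logistic score. (Illustrative only; not the published model.)

rng = np.random.default_rng(1)

def aki_risk_over_time(lab_values, weights, bias, window=6):
    """Return one risk score per timestep from a sliding window of labs.
    window=6 stands in for 48 hours of 8-hourly observations."""
    risks = []
    for t in range(len(lab_values)):
        start = max(0, t - window + 1)
        features = np.zeros(window)
        recent = lab_values[start:t + 1]
        features[-len(recent):] = recent            # pad missing history with zeros
        logit = features @ weights + bias
        risks.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid -> probability
    return np.array(risks)

creatinine = np.array([0.9, 1.0, 1.1, 1.4, 1.9, 2.6])  # a rising lab trend
weights = rng.normal(size=6)
print(aki_risk_over_time(creatinine, weights, bias=-1.0))
```

In the published work the prediction comes from a far richer model trained on large volumes of electronic health record data; the sketch only shows the continuous, sliding-window character of the prediction the article describes.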
In a blog post trumpeting the research, DeepMind couches it
as a breakthrough — saying the paper demonstrates artificial intelligence can
predict “one of the leading causes of avoidable patient harm” up to two days
before it happens.
“This is our team’s biggest healthcare research breakthrough
to date,” it adds, “demonstrating the ability to not only spot deterioration
more effectively, but actually predict it before it happens.”
“This research is just the first step,” a DeepMind spokeswoman confirmed. “For
the model to be applicable to a general population, future research is needed,
using a more representative sample of the general population in the data that
the model is derived from.
“The data set is representative of the VA [US Department of Veterans Affairs] population, and we
acknowledge that this sample is not representative of the U.S. population. As
with all deep learning models it would need further, representative data from
other sources before being used more widely.
“Our next step would be to work closely with [the VA] to
safely validate the model through retrospective and prospective observational
studies, before hopefully exploring how we might conduct a prospective
interventional study to understand how the prediction might impact care
outcomes in a clinical setting.”
DeepMind already has an app, called Streams, which makes use of an NHS algorithm for detecting AKI and has been deployed in several NHS hospitals. And, also today, DeepMind and the NHS trust it partnered with to develop the app are releasing an evaluation of Streams’ performance, led by University College London.
The results of the evaluation have been published in two papers, in Nature Digital Medicine and the Journal of Medical Internet Research.
Posted by Dr. Rob Long