AI could predict which patients require emergency treatment by analysing their medical histories
Artificial intelligence may determine which hospital patients need emergency care.
That’s according to research from The George Institute for Global Health at the University of Oxford, which suggests that scanning electronic health records could accurately flag up priority cases.
The study analysed records from 4.6 million patients, collected between 1985 and 2015 through the UK’s Clinical Practice Research Datalink.
The findings, published in the journal PLOS Medicine, suggest health officials could effectively monitor patients’ needs and avoid unplanned admissions, which are a major source of healthcare spending.
‘Our findings show that with large datasets which contain rich information about individuals, machine learning models outperform one of the best conventional statistical models,’ said Fatemeh Rahimian, former data scientist at The George Institute UK, who led the research.
‘We think this is because machine learning models automatically capture and “learn” from interactions between the data that we were not previously aware of.’
A wide range of factors was taken into account for each patient, including age, sex, ethnicity, socioeconomic status, family history, lifestyle factors, diseases, medication and marital status, as well as the time since first diagnosis, last use of the health system and latest laboratory tests.
By using more variables combined with information about their timing, the machine learning models were found to provide a more robust prediction of the risk of emergency hospital admission than any models used previously.
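As a rough illustration of this kind of risk model — not the study's actual pipeline; the features, the synthetic data and the choice of a gradient-boosted classifier below are all invented for the sketch — a model can be trained on patient-level variables to output a per-patient admission risk score:

```python
# Illustrative sketch only: a toy admission-risk model on synthetic records.
# Feature names and model choice are assumptions, not the study's methods.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic patient-level features loosely mirroring those in the article.
age = rng.integers(18, 95, n)
deprivation = rng.integers(1, 6, n)           # socioeconomic quintile
n_conditions = rng.poisson(1.5, n)            # number of diagnosed diseases
days_since_last_visit = rng.integers(0, 1000, n)

# Synthetic outcome: admission risk rises with age and comorbidity.
logit = -5 + 0.04 * age + 0.6 * n_conditions - 0.001 * days_since_last_visit
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, deprivation, n_conditions, days_since_last_visit])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]        # per-patient admission risk
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

In a real setting the timing information the researchers highlight — time since first diagnosis, last contact with the health system — would be encoded as additional features like those above.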
‘There were over 5.9 million recorded emergency hospital admissions in the UK in 2017, and a large proportion of them were avoidable,’ Rahimian said.
‘We wanted to provide a tool that would enable healthcare workers to accurately monitor the risks faced by their patients, and as a result make better decisions around patient screening and proactive care that could help reduce the burden of emergency admissions.’
Over £200 million is spent treating patients for unplanned admissions owing to diabetes emergencies, with an average cost per patient of £4,477, says the Medical Technology Group.
The study comes just weeks after an NHS hospital trust became the first to use AI robots as secretaries in an effort to cut costs.
Ipswich Hospital, run by East Suffolk and North Essex NHS Trust, has ‘employed’ three virtual workers to free up staff from ‘mundane and repetitive tasks’, such as submitting scans and blood test results.
This is thought to give real medical secretaries more time to focus on patient care; the trust claims the system is eight times more productive than human staff.
The system has been in place since July and has saved more than 500 hours of work, according to the trust’s deputy director of ICT, Darren Atkins.
Built by the automation technology company Thoughtonomy, the software monitors incoming referrals from GPs throughout the day.
It deciphers the reason for referral and supporting information, such as blood test results.
The robots then put all this information in a single document, which is flagged to the lead consultant for review.
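The collation step described above might look something like the following sketch — the field names, class and wording are assumptions for illustration, not Thoughtonomy's actual system:

```python
# Hypothetical sketch of collating a GP referral into a single summary
# document for consultant review. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class Referral:
    patient_id: str
    reason: str
    blood_results: dict = field(default_factory=dict)

def build_summary(referral: Referral) -> str:
    """Combine the referral reason and supporting results into one document."""
    lines = [f"Patient {referral.patient_id}",
             f"Reason for referral: {referral.reason}"]
    for test_name, value in referral.blood_results.items():
        lines.append(f"  {test_name}: {value}")
    lines.append("Flagged for lead consultant review.")
    return "\n".join(lines)

r = Referral("NHS-001", "suspected anaemia", {"haemoglobin": "9.8 g/dL"})
print(build_summary(r))
```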
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
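The ‘teaching’ process described above can be sketched at its smallest scale: a single artificial neuron, repeatedly shown examples, gradually adjusts its weights until it recognises a pattern — here the logical AND:

```python
# A minimal illustration of "learning from examples": one artificial neuron
# trained by gradient descent to recognise the logical AND pattern.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)    # AND: only (1,1) is positive

w = np.zeros(2)
b = 0.0
lr = 1.0

for _ in range(10000):                     # repeated exposure to the data
    z = X @ w + b
    pred = 1 / (1 + np.exp(-z))            # sigmoid activation
    grad = pred - y                        # error signal
    w -= lr * X.T @ grad / len(X)          # nudge weights toward the answer
    b -= lr * grad.mean()

print((1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int))  # → [0 0 0 1]
```

Real ANNs stack many thousands of such units in layers, but the principle — adjust weights to shrink the error on examples — is the same.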
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time-consuming and is limited to one type of knowledge.
A newer approach, generative adversarial networks (GANs), pits two AI bots against each other, allowing them to learn from one another.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.