Today, The Lancet’s EBioMedicine journal will publish a study led by scientists from IBM Research-Australia and the University of Melbourne marking important progress in personalized seizure forecasting with AI. The findings, described in a paper titled ‘Epileptic Seizure Prediction using Big Data and Deep Learning: Toward a Mobile System,’ present new results in epileptic seizure prediction using deep learning algorithms deployed on a brain-inspired, mobile processor.
By rerunning 10 patient cases using data from a previous clinical study, the researchers demonstrate the feasibility of using this technology as part of a wearable seizure-warning system. Investigators found that the AI algorithm successfully predicted an average of 69 percent of seizures across patients, including patients who previously had no prediction indicators. The tested AI algorithms also had no knowledge of future data, which allowed the researchers to simulate how the system would operate in a real-life scenario. Previously published research had not reported a forward-looking approach, meaning it could not demonstrate how such systems would perform for a real patient in a clinically relevant environment.
In a survey by the American Epilepsy Society, patients selected the unpredictability of seizures as a top issue, with many writing about the fear of not knowing when a seizure will occur or what will trigger it. Of the 65 million people worldwide living with epilepsy, one third have uncontrollable seizures and do not respond to available treatment. These staggering numbers have not fallen in decades, even with more than 14 new treatments introduced since 1990, making seizure prediction technology an important area of research that could potentially improve the lives of many patients.
Research in this area has historically been limited by low volumes of data. Through a previous study by the University of Melbourne, however, this research was able to draw upon long-term iEEG data recordings from ten epilepsy patients. This is the largest and most comprehensive epilepsy iEEG dataset in the world, gathered from electrodes implanted in the brain and offering an average of 320 days of continuous brain-activity recordings per patient.
Given the uncertain nature of epilepsy, there are many hurdles to creating a viable seizure-warning system, but new advances in AI offer great potential to help clinicians. To date, much of the research has been limited to training algorithms on general patterns for seizures (Karoly et al., Cook et al.): for example, doctors manually selected signs and patterns that could pre-empt seizures, which were then used to train prediction algorithms. However, these approaches were limited in their ability to reliably predict seizures across all patients over the long term, given that brain-activity patterns are not only specific to an individual but also change over time. New deep learning techniques have helped us improve on previous results, allowing the system to automatically identify seizure patterns for individual patients and adapt to changing brain signals over time, without additional human involvement.
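The idea of per-patient, continually updated learning can be sketched in miniature. The snippet below is purely illustrative and not the published system: a simple online logistic-regression model stands in for the deep network, and the synthetic "windows" stand in for iEEG feature windows. The class and variable names (`PerPatientModel`, `predict_risk`) are hypothetical.

```python
# Hypothetical sketch of per-patient adaptive learning: a model trained
# only on one patient's own labelled signal windows, updated online as
# new data arrives so it can track that patient's changing activity.
# (A toy logistic regression stands in for the deep network.)
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class PerPatientModel:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_risk(self, window):
        """Seizure-risk probability for one feature window."""
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, window)) + self.b)

    def update(self, window, label):
        """One online gradient step on a newly observed (window, label) pair."""
        error = self.predict_risk(window) - label
        self.w = [wi - self.lr * error * xi for wi, xi in zip(self.w, window)]
        self.b -= self.lr * error

# Toy data: pre-seizure windows (label 1) have higher mean activity.
random.seed(0)
model = PerPatientModel(n_features=4)
for _ in range(2000):
    label = random.randint(0, 1)
    base = 1.0 if label else -1.0
    window = [base + random.gauss(0, 0.5) for _ in range(4)]
    model.update(window, label)
```

Because the model only ever sees this one patient's stream and keeps updating, it adapts to that individual rather than relying on population-wide, hand-picked signatures.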
Our published system used an initial 60 days of data per patient for AI algorithm training before it was put into prediction mode. The system was then retrained periodically and tested continuously on individual patient data in a strictly forward-looking manner. Results on portions of the same dataset reported by Cook et al. and Karoly et al. were achieved using substantially fewer inference days, limiting their ability to report on the long-term performance and real-life applicability of these types of systems. Our results mean that in the future, a prediction system could theoretically be put into use only two months after implantation, adapting to changes in a patient’s brain activity.
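The strictly forward-looking protocol described above can be sketched as a simple scheduling function. This is an assumption-laden illustration, not the paper's actual pipeline: the 30-day retraining interval and the function name `walk_forward_schedule` are hypothetical; only the 60-day initial training window and the ~320-day recording length come from the text.

```python
# Illustrative sketch of a strictly forward-looking ("walk-forward")
# evaluation schedule: train on an initial window, predict on the days
# that follow, and periodically retrain using only data observed so far.
def walk_forward_schedule(total_days, initial_train_days=60, retrain_every=30):
    """Return (train_end, predict_start, predict_end) windows.

    The model is only ever trained on days < train_end and evaluated on
    days in [predict_start, predict_end), so no future data can leak
    into training.
    """
    windows = []
    train_end = initial_train_days
    while train_end < total_days:
        predict_end = min(train_end + retrain_every, total_days)
        windows.append((train_end, train_end, predict_end))
        train_end = predict_end  # next retraining sees all data so far
    return windows

schedule = walk_forward_schedule(total_days=320)
# Every prediction window starts exactly where its training data ends.
```

The key property is that each retraining step consumes only the past, which is what lets such an evaluation stand in for how the system would behave for a real patient.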
Example illustration of research system monitoring and measurement
In designing a seizure prediction device, we must also consider a patient's preference for how and when they wish to be alerted. For example, while sleeping a patient may wish to 'turn down the dial', so that the system alerts them only when they are at very high risk of a seizure, if at all. Similarly, when driving a car or socialising, a patient may prefer a more sensitive alert system for safety reasons. This has been an important consideration in our system, which offers the ability to adjust the seizure advisory system to individual preferences.
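One simple way such adjustable sensitivity could work is a per-context threshold on the model's risk output. The sketch below is a hypothetical illustration, not the system's actual interface; the context names and threshold values (`ALERT_THRESHOLDS`, `should_alert`) are made up for the example.

```python
# Illustrative sketch of user-adjustable alert sensitivity: the model
# produces a seizure-risk probability, and a per-context threshold
# chosen by the patient decides whether to raise an alert.
ALERT_THRESHOLDS = {
    "sleeping": 0.9,   # 'dial turned down': alert only on very high risk
    "driving": 0.3,    # more sensitive, for safety
    "default": 0.6,
}

def should_alert(risk_probability, context="default"):
    """Return True if the risk crosses the patient's chosen threshold."""
    threshold = ALERT_THRESHOLDS.get(context, ALERT_THRESHOLDS["default"])
    return risk_probability >= threshold
```

The same underlying risk estimate then triggers different behaviour depending on context: a moderate risk would alert a driver but let a sleeping patient rest.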
Deploying the system on IBM's neuromorphic computing chip, which takes inspiration from how the brain processes data and thus lets us run deep learning algorithms in an extremely power-efficient way, also opened new opportunities for taking our technique out of the lab. Previous epilepsy prediction research has relied on high-power computers, but with a chip the size of a postage stamp operating on the power budget of a hearing aid, we are breaking new ground towards an intelligent wearable.
There is much to be excited about in the field of epileptic seizure prediction research. Today's paper moves us beyond the restrictions of conventional machine learning, towards a deep learning system with the potential to offer greater insight for medical decision-makers in epilepsy management and treatment. Our partners at the University of Melbourne continue to advance the way data is collected, most recently using sensors outside the skull, an approach that would be less invasive and far more scalable to more patients. While an external setup does not offer as rich a data source as the intracranial electrodes used in today's study, training our algorithms on such data could bring us even closer to a clinically relevant prediction system.
The study 'Epileptic Seizure Prediction using Big Data and Deep Learning: Toward a Mobile System' will be published in The Lancet's EBioMedicine, and was showcased at the December 2017 Annual Meeting of the American Epilepsy Society (AES) in Washington, DC, where the presentation received an 'honorable mention', marking it as one of the top three at the conference.