Guest blogger: Zoya Yeprem, Senior Technical Solution Specialist Intern
IBM Summit Program, IBM Federal
Computer vision is a technology that acquires, processes, and analyzes images, automating tasks that once required human visual analysis. With recent advances in this field, we are witnessing its use in our personal and professional lives more than ever. Such systems depend heavily on complex deep learning models called Deep Neural Networks (DNNs). These models can successfully perform complex tasks like object detection, image classification, and segmentation without being explicitly programmed.
Many of the applications and systems we will use in the future, such as self-driving cars, will rely on deep learning models behind the scenes to perform high-level cognitive tasks. Beyond day-to-day use, computer vision can be extremely useful for government agencies. It can help agencies categorize and analyze images and gain high-level cognitive insights automatically in real time. For instance, surveillance systems in airports can be integrated with computer vision systems that help detect any abnormal activity that may be considered a threat. And when a threat is detected, it can help track down the person responsible by identifying other sightings of similar-looking individuals. Another use case is in geospatial technologies. Geospatial imagery encompasses a wide range of graphical products that convey information about natural phenomena and human activities occurring on Earth's surface. This technology uses computer vision to provide complex insights in real time, delivering crucial information to humanitarian and disaster relief agencies where accuracy and timeliness are top priorities.
While deep learning models used in computer vision systems are normally very accurate, they are vulnerable to special attacks that use adversarial examples. Adversarial examples are input images that have carefully crafted noise added to them. While these images appear identical to the originals, they are completely misclassified by the DNN. A simple example below demonstrates how a stop sign image is misclassified as an “Ahead Only” sign when certain noise is added to it.
Adversarial attacks pose a real threat to the deployment of AI systems in security critical applications. Virtually undetectable manipulations of images, video, speech, and other data have been crafted to confuse these systems. Such manipulations can be crafted even if the attacker doesn’t have exact knowledge of the architecture of the deep learning model or access to its parameters (Black-Box attacks). Even more worrisome, adversarial attacks can be launched in the physical world: researchers have proven that instead of manipulating the pixels of a digital image, adversaries could defeat visual recognition systems in autonomous vehicles by sticking patches to traffic signs, or they can fool facial recognition systems by wearing specially designed glasses. Therefore, it is crucial to protect our deep learning models against such attacks.
IBM Research in Ireland has released the Adversarial Robustness Toolbox (ART), which provides protection against adversarial attacks on DNNs. ART is an open source library written in Python that supports the most popular deep learning frameworks, such as TensorFlow, Keras, and PyTorch. ART can provide protection to a DNN in three stages:
First, we check whether the DNN model is vulnerable to adversarial attacks, as not all DNNs are equally vulnerable. ART has implementations of state-of-the-art attacks, which can be used to craft an adversarial image and feed it to the DNN. Then, by recording the loss of accuracy on adversarially altered inputs, you can detect how vulnerable your model is to that specific attack. Other approaches measure how much the internal representations and the output of a DNN vary when small changes are applied to its inputs.
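As an illustration, here is a minimal sketch of this first stage using ART's Python API (module and class names follow ART 1.x and may differ in other releases; the toy Keras model and random data are placeholders for your own trained classifier and test set):

```python
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in for a trained traffic-sign classifier (43 classes, as in
# the German Traffic Sign data); substitute your real model and test set.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(43, activation="softmax"),
])
x_test = np.random.rand(16, 32, 32, 3).astype(np.float32)
y_test = np.eye(43)[np.random.randint(0, 43, 16)].astype(np.float32)

classifier = TensorFlowV2Classifier(
    model=model, nb_classes=43, input_shape=(32, 32, 3),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
    clip_values=(0.0, 1.0),
)

# Craft adversarial versions of the test images with the Fast Gradient
# Sign Method, one of the attacks implemented in ART.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

# The drop from clean to adversarial accuracy measures vulnerability.
def accuracy(x):
    preds = np.argmax(classifier.predict(x), axis=1)
    return np.mean(preds == np.argmax(y_test, axis=1))

print("clean accuracy:", accuracy(x_test))
print("adversarial accuracy:", accuracy(x_adv))
```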
Second, after confirming vulnerability to a certain type of attack, a given DNN can be “hardened” to make it more robust against adversarial inputs. Common approaches are to preprocess the inputs of a DNN, to augment the training data with adversarial examples, or to change the DNN architecture to prevent adversarial signals from propagating through the internal representation layers.
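A sketch of the adversarial-training approach, reusing `classifier` and `attack` from the sketch above (`AdversarialTrainer` and its arguments follow ART 1.x naming and should be checked against your installed version):

```python
from art.defences.trainer import AdversarialTrainer

# Placeholder training data; substitute your real training set.
x_train = np.random.rand(64, 32, 32, 3).astype(np.float32)
y_train = np.eye(43)[np.random.randint(0, 43, 64)].astype(np.float32)

# Mix adversarial examples into half of each training batch.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, batch_size=32, nb_epochs=10)
```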
Finally, runtime detection methods can be applied to flag any inputs that an adversary might have tampered with. During this stage, ART acts somewhat like an antivirus application: it checks the inputs and flags the ones that are adversarial, protecting the DNN.
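A rough sketch of this detection stage follows. It assumes ART's `BinaryInputDetector`, which wraps a second two-class classifier trained to separate clean from adversarial inputs; treat the exact class name and return values as assumptions to verify against the ART documentation:

```python
from art.defences.detector.evasion import BinaryInputDetector

# A second, two-output network (clean vs. adversarial) wrapped as an
# ART classifier, built the same way as `classifier` above.
detector_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
detector_net = TensorFlowV2Classifier(
    model=detector_model, nb_classes=2, input_shape=(32, 32, 3),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
)
detector = BinaryInputDetector(detector_net)

# Train the detector on a labeled mix of clean and adversarial samples.
x_mix = np.concatenate([x_test, x_adv])
y_mix = np.eye(2)[[0] * len(x_test) + [1] * len(x_adv)]
detector.fit(x_mix, y_mix, nb_epochs=5)

# At runtime, screen inputs before they reach the DNN (assumed to
# return a per-input adversarial flag).
_, is_adversarial = detector.detect(x_adv)
```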
In conclusion, any new technology comes with strengths and weaknesses. Take e-mail for instance: it provided a fast and convenient way of communicating and dramatically reduced the need for hard-copy documents. In the beginning, however, users were extremely vulnerable to the worms and viruses spread across mailboxes. Through years of use, we learned how to mitigate those vulnerabilities while enhancing e-mail's positive capabilities. The same goes for visual recognition technology. No one can deny the good these systems have brought us, but to embrace them it is crucial first to find possible vulnerabilities and second to have tools to protect our systems against adversaries, and that is exactly where the Adversarial Robustness Toolbox can help.
ART in Action
An open source demo can be found here: ART Demo.
This implementation contains an attack on, and a defense of, a model trained on the German Traffic Sign dataset. Full documentation of each step of the implementation is included in the notebook file.
ART open source library
To install ART and start using it, check out the open-source release under Adversarial Robustness Toolbox. The release includes extensive documentation and tutorials to help researchers and developers get started.
Sharif et al. 2016, “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition”
Eykholt et al. 2018, “Robust Physical-World Attacks on Deep Learning Visual Classification,” arXiv:1707.08945v5 [cs.CR]
You can contact the author at email@example.com
AI & Quantum – Tie the Knot
Guest Post By IBM Summit Trainees Sophie Nguyen, Justin Miller, Kavita Dhallan, & Megan Clifford.
“Quantum computing holds various promises.” – Bob Sutor
Quantum computing. From the outside looking in, the subject may come off as intimidating, especially for a team of new Summit hires from backgrounds in Marketing, Finance, Law, and Biomolecular Science. The event itinerary listed presenters like Steve Margolis, PMP, CISSP (not to mention an IBM Q Ambassador), Aaron Potler (a Distinguished Engineer focusing on High Performance Computing, and also an IBM Q Ambassador), Bob Sutor, Ph.D. (IBM’s VP for IBM Q Strategy & Ecosystem), and Kenneth Wood (the Global Business Development Lead for the IBM Q Network), all IBM Q subject matter experts. Were we going to understand what quantum computing is? Where it’s going? How, as future IBM sellers, would we be able to do it justice when speaking on quantum computing? Soon into the presentation, however, we realized there was as much for us to gain from the event as there was for the IBM clients invited to the A3 Center that day in Washington, D.C.
Bob started the presentation by asking us to think of the words “open mind” differently. Yes, we’re constantly asked to think outside the box in the realm of technology, but he means it literally. “You’re thinking too classically,” he explained of the world of computers. “Don’t do it today.” That’s exactly what he meant when he proposed that quantum computing holds various promises. To see those promises, we need to stop thinking classically.
Thinking Non-Classically. Steve Margolis dove into this after Bob. Today’s classical computer works on classical bits. For those of you who don’t know what a bit vs. a qubit is (like the people writing this blog), a bit is short for “binary digit,” the basic unit of logic that sits as 0 or 1. A qubit, or quantum bit, is the fundamental unit of quantum information, just as bits are for classical computing. Qubits are able to exist in a combination of the two states 0 and 1, based on the principles of entanglement and superposition. Just remember: quantum computers work on qubits. This video was a great resource for understanding qubits.
Quantum Computing Promises and Possibilities. What are these “various promises” the presenters speak of? We’re talking about a computer whose computational state space can exceed the number of atoms on planet Earth. We’re talking about applying that sort of ability to new application areas such as chemistry (material design, oil & gas, drug discovery), AI (classification, machine learning), and even financial services (asset pricing, risk analysis, rare event simulation). The takeaway was that unlike the classical computer (for example, today’s servers), quantum computing has room for exponential growth that can do more to help these application areas.
Current State of Quantum Computing. Currently, IBM has multiple quantum computers and a growing network of users (the IBM Q Network). Yes, they admit that naming the machines after global cities ended up being confusing; for instance, the IBM Q Tokyo and IBM Q Melbourne are actually located in Yorktown, NY. One name that you probably won’t get confused by is the latest, IBM Q System One. It is a beauty! Not only is it going to pave the way for quantum computing possibilities, it is astonishingly beautiful. We’re talking the whole 9 yards, or should we say feet (it’s 9x9 ft). IBM built it with a dream team of engineers, mathematicians, and industrial designers who worked on all its nuances, such as sound sensitivity, while also considering its visual design.
Future State of Quantum Computing. We still have research and development to do. We still have kinks to sort out, including error rates, as the goal is to create a universal fault-tolerant quantum computer. Right now we are in the “Quantum Ready” stage. That is, we are beyond the early stage of quantum science, which was about making qubits work reliably and making them “last” longer (their coherence time). In the Quantum Ready stage, IBM and members of the IBM Q Network are developing algorithms for this new type of computing and building the infrastructure to run quantum computers in commercial data centers.
How do we continue along this path toward demonstrating a true advantage over classical systems for commercial and scientific applications? That’s where our next speaker, Aaron Potler, comes in. He explained the networks and quantum computing platforms available to support continued growth and collaboration. Anyone can register to use the public, cloud-based 5- and 16-qubit IBM Q Experience systems. More than 110,000 people have run more than 7 million experiments and published more than 130 research papers using the IBM Q Experience. The IBM Q Experience, as well as the commercial IBM Q systems, uses Qiskit, an open-source quantum computing framework for programming today’s quantum processors for research, education, and business. There have been articles and educational pieces published that lead us to ideas and theories about where quantum computing is going. Research articles include “Quantum Risk Analysis” by Stefan Woerner, “Scientists Prove a Quantum Computing Advantage over Classical” by Sergey Bravyi, and an article from MIT Technology Review, “Machine Learning, Meet Quantum Computing.” You can find these papers by following the link at the end of this article.
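For a feel of what Qiskit code looks like, here is a tiny illustrative sketch of our own (not from the presentation) that puts one qubit into superposition, entangles it with a second, and samples the results on a simulator, using the Qiskit API current as of this writing:

```python
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)      # Hadamard gate: put qubit 0 in an equal superposition of 0 and 1
qc.cx(0, 1)  # CNOT gate: entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11'; never '01' or '10'
```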
“First movers can accrue substantial value,” Kenneth Wood stated as he took his turn in the presentation. The world, including the US, is investing heavily in quantum research. Congress unanimously passed the National Quantum Initiative Act in late 2018, providing $1.275 billion in research funding from 2019 to 2023. Moreover, according to Gartner, “Within five years, analysts estimate that 20 percent of organizations will be budgeting for quantum computing projects and, within a decade, quantum computing may be a USD15 billion industry.” At CES 2019, Ginni Rometty commented, “Quantum does not replace every kind of computer, it’s for a certain kind of problem. And it’s the kind of problem where the world doesn't realize how many things are approximated out there.” That is why we are ahead of the game in the number of quantum computers available to the public and why we have built the IBM Q Network. The missions of the IBM Q Network are to accelerate research, develop commercial applications, and educate and prepare. Its offerings are technology, enablement, collaboration, and a business framework.
By the end of the presentation, we had ourselves thinking about the very questions Bob asked when he started, “How can you do more? How can you learn more?”
If interested, check out more about IBM Q and the articles mentioned here.
Find out more about the IBM Center for Analytics, Automation, and AI solutions at ibm.com/a3center
GUEST POST BY Mike Byerly, IBM Client Representative
Report on Thursday, September 27th: A3 Center Event: Using Artificial Intelligence to Transform Government
How have US federal, state, and local governments already leveraged artificial intelligence to drive outcomes for their constituents? What have they learned? What has worked well, and what hasn’t? What lessons can other agencies learn? These were the questions under discussion at Using Artificial Intelligence to Transform Government, an event held on September 27th.
Located in the heart of Washington, DC, IBM’s A3 Center (which stands for Analytics, Automation, and AI solutions) stands in a unique position to bring together the public and private sector for thoughtful discussion. In the discussion, senior leaders from the US federal government, municipal government, the private sector, and IBM discussed some of the key successes and challenges that individual leaders have encountered when leveraging AI technologies to improve customer (and citizen) outcomes.
The conversation was wide-ranging and touched upon a number of different topics; the entire recording and PDFs can be found here. To keep this blog post to a reasonable length, I have highlighted a few of the topics discussed and gone into more detail on a few select remarks.
The first panel, entitled “The Future Has Begun: Using AI to Transform Government,” was moderated by Mallory Bulman (of the Partnership for Public Service) and included Camron Gorguinpour (Principal, Woden LLC), Alex Holsinger (Criminal Justice Coordinator for Johnson County, KS), and Maureen Rajaballey (IT Manager, Miami-Dade County, FL). All the panelists made interesting points. Maureen Rajaballey discussed how her county had begun to leverage call center solutions to improve its bill-collecting capabilities. One of the first questions she was asked by her workers was: is this technology going to replace me? Is it going to take my job?
Maureen discussed the importance of emphasizing how technology can play a role in augmenting and improving the lives of call center workers. What call center worker wants to work at 3 AM, or on holidays? By focusing on the positive aspects of the call center technology, and involving those workers in the discussion about its implementation, she was able to create advocates who embraced the technology. The initiative has proven a huge success. A new analytics dashboard allows Miami-Dade to monitor how many calls come in per hour, how many are taken by AI, how many are resolved, and how many gas vs. electric bills are being paid. By continually reinforcing the “wins” of the program, Maureen has been able to expand its success.
The second presentation, by Kevin Desouza of Queensland University of Technology, discussed his thoughtful report “Delivering Artificial Intelligence in Government.” Desouza frames the governmental opportunity in three key areas: technology and data, workforce, and risk management. In his view, government agency leaders are already taking the first steps needed to take advantage of artificial intelligence solutions: upgrading existing IT infrastructure to support AI systems, identifying data-intensive applications that can benefit from AI, establishing data governance, and enabling their workforce to use AI (through agile implementations and redesigned work processes). By being aware of these challenges and addressing them thoughtfully, government officials are more likely to achieve a successful implementation of AI that will be embraced and yield results.
The final presentation, “Delivering AI in Government: Challenges and Opportunity” was moderated by Claude Yusti (of IBM), and included Franz Gayle (Science and Technology Advisor within the Marine Corps), Joe Greenblott (Acting Director, Analysis Division, Office of Planning, Analysis and Accountability, EPA), Jose Arrietta (Associate Deputy Assistant Secretary, Division of Acquisition, HHS), Mallesh Murugesan (Founder & CEO, Abeyon) and Armita Soroosh (of TSA).
Again, all of the panelists made interesting points. Jose Arrietta of Health and Human Services discussed how his team more effectively managed $24 billion per year of government contract spending. By ‘building the limbic system of the enterprise’ (an indexed taxonomy of buying behavior across departments), HHS was able to push pricing information directly to its agents, show them the best prices, and enable them to negotiate better rates with suppliers, all in real time. By beginning with a small pilot project, building buy-in as the project increased in scope, and eventually rolling out the solution to all agents, HHS was able to entirely transform its buying behavior. The cost savings were dramatic.
Franz Gayle, of the USMC, discussed the broad range of interest that the DoD has expressed in artificial intelligence and its potential. For example: while the Marine Corps has traditionally been characterized as a “follower” within the DoD in terms of innovation (behind the Army and Air Force), current leadership understands that this is no longer a viable strategy. For this reason, the USMC is currently working with AI technology firms to fund projects that “can fail.” By carefully implementing AI solutions similar to those that have been tried and tested in the private sector, risk to the USMC can be reduced. Creating useful military applications should therefore be possible and relatively straightforward. The DoD recognizes the importance of continuing to evolve its AI capability.
While the government is still in the early stages of leveraging the full potential of artificial intelligence technologies, the discussions made it clear: there are government agencies realizing benefits from AI today. Change will continue, and it will only accelerate. IBM is working hand in hand with government customers, using best practices learned from the private sector (and other government customers), to adopt AI successfully. Public agencies are still in their relatively early days of experimenting with AI, and these efforts are bound to intensify.
To help government innovators progress in this area, the A3 Center will continue to hold events to discuss these important topics. Visit the A3 Center website to see upcoming events and register to attend.
GUEST BLOG BY MICHELLE HUCHETTE:
My name is Michelle Huchette and I am a rising fourth year at the University of Virginia studying Computer Science and Statistics. This summer I was fortunate enough to be a part of the IBM Summit Program as a Technical Sales Intern. In this role I was able to experience what it is like to be an IBM seller by attending customer events and working on various tasks and projects over the course of 11 weeks. A few weeks ago I was challenged with creating a Proof of Technology lab that would interest customers in the field of machine learning. This is a brief overview of the creation and utilization of the model I created to diagnose breast cancer tumors.
The data set used for the lab came from the UC Irvine Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29) and contains information used to predict the diagnosis of breast cancer tumors as malignant or benign. The data set contains 10 measurements of each cell nucleus, captured from images of cell nuclei gathered during a fine needle aspiration (FNA) procedure. The average, standard error, and extreme values across all nuclei in the tumor sample were calculated for each of the following features:
- Radius (mean of distances from center to points on the perimeter)
- Texture (standard deviation of gray-scale values)
- Perimeter
- Area
- Smoothness (local variation in radius lengths)
- Compactness (perimeter²/area – 1)
- Concavity (severity of concave portions of the contour)
- Concave points (number of concave portions of the contour)
- Symmetry
- Fractal dimension (“coastline approximation” – 1)
After finding a data set, a machine learning model could be created to diagnose breast cancer tumors. To do so, we set up a Watson Studio account on the IBM Cloud platform (https://console.bluemix.net/registration/). Within Watson Studio (https://console.bluemix.net/catalog/services/watson-studio) we created a Jupyter Notebook, which was used to write Python code to work with the data set, create a model, and make predictions, all using Apache Spark as the analytics engine.
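Here is a minimal sketch of that setup step (the file name is illustrative; in Watson Studio the data asset is typically inserted into the notebook from the project's data panel, and the SparkSession is usually pre-created):

```python
from pyspark.sql import SparkSession

# A no-op in Watson Studio, where `spark` already exists; included so
# the sketch runs elsewhere too.
spark = SparkSession.builder.getOrCreate()

# Load the UCI data with a header row and automatic type inference.
df = spark.read.csv("breast_cancer_wisconsin.csv",
                    header=True, inferSchema=True)
df.printSchema()  # the 'diagnosis' label plus the numeric feature columns
```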
The first step in creating the machine learning model was determining which type of model would best fit the data. Research showed that Spark supports several model types; Naïve Bayes, Decision Trees, Random Forests, and Regression Models are the most common. Because Naïve Bayes requires a strong independence assumption between the features, that type of model was ruled out. Ultimately, a Logistic Regression Model was chosen, since it is often used for binary categorical outcomes (exactly what we’re dealing with when diagnosing a tumor as malignant or benign) and it is good at measuring the relationship between the labels and features.
To start out, the logistic regression model was given the default parameters so that an initial model could be created and improved upon if needed. Once the model was defined, a pipeline was set up containing a sequence of stages to be run in a specific order. Within a pipeline, each stage is either a transformer, which converts a dataframe into another dataframe, or an estimator, which calls fit() and trains a model. There are many different options you can include in a pipeline, such as tokenizers, hashers, and normalizers. For this dataset, and for the sake of creating an easy-to-follow lab, our pipeline started by using StringIndexer to turn the label (diagnosis) into a form that SparkML could use, encoding the input column into a column of indices based on frequency. Then a VectorAssembler combined the list of feature columns into a single vector column to be used in training the model. A Normalizer was added to normalize each vector into a standard form to improve the algorithm. Lastly, our defined logistic regression instance was added, and IndexToString was used to convert the results of the model back into human-readable form.
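Concretely, the pipeline looked roughly like the following sketch (column names are assumptions based on the data set's usual layout; `df` is the DataFrame loaded above):

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import (StringIndexer, VectorAssembler,
                                Normalizer, IndexToString)
from pyspark.ml.classification import LogisticRegression

# Every column except the label (and any ID column) is a feature.
feature_cols = [c for c in df.columns if c not in ("id", "diagnosis")]

# Encode the string label ('M'/'B') as a numeric index by frequency.
label_indexer = StringIndexer(inputCol="diagnosis", outputCol="label").fit(df)
# Combine the feature columns into a single vector column.
assembler = VectorAssembler(inputCols=feature_cols, outputCol="raw_features")
# Normalize each feature vector (default p=2 norm).
normalizer = Normalizer(inputCol="raw_features", outputCol="features", p=2.0)
# Logistic regression with default parameters for the first pass.
lr = LogisticRegression(labelCol="label", featuresCol="features")
# Map numeric predictions back to human-readable labels.
label_converter = IndexToString(inputCol="prediction",
                                outputCol="predicted_diagnosis",
                                labels=label_indexer.labels)

pipeline = Pipeline(stages=[label_indexer, assembler, normalizer,
                            lr, label_converter])
```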
Following the definition of the pipeline, the logistic regression model and pipeline could be used to train and test the model. The data set was split with the standard 70/30 split into training and test data sets, respectively. The training data set was used to fit the pipeline and train the model, and the accuracy of the model was tested using the area under a Receiver Operating Characteristic (ROC) curve for binary classifiers. This value is calculated by plotting the true positive rate (recall/probability of detection) against the false positive rate (fall-out/probability of a false alarm) at various thresholds. An ROC value close to 1 suggests that the model performs very well, whereas a value close to 0.5 is about as good as flipping a coin. Once the model was trained and evaluated, the test data set was used to make predictions. The logistic regression model created in the steps described above achieved a value of 0.989, meaning it was able to predict the diagnoses of tumors very well.
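A sketch of that training and evaluation step (the 0.989 figure comes from the actual lab run, not from this sketch):

```python
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Standard 70/30 split into training and test data.
train, test = df.randomSplit([0.7, 0.3], seed=42)

model = pipeline.fit(train)
predictions = model.transform(test)

# Area under the ROC curve: ~1.0 is excellent, ~0.5 is a coin flip.
evaluator = BinaryClassificationEvaluator(labelCol="label",
                                          metricName="areaUnderROC")
print("Area under ROC:", evaluator.evaluate(predictions))
```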
Even though the model had already proven able to diagnose tumors accurately, it could still be improved upon. Hyperparameter tuning uses model selection tools that test different parameter values for the pipeline and find the best possible combination. There are two main model selection tools in Spark: CrossValidator and Train-Validation Split. For this project we used a CrossValidator: even though it can be more expensive for larger data sets, it is more reliable when the data set isn’t sufficiently large, because it evaluates each parameter setting k times rather than just once.
A CrossValidator first splits the data set into “folds,” which are used as separate training and test data set pairs. We set the number of folds for this project to 10, so the CrossValidator generated 10 training/test data set pairs, all of which were used to test the parameters. Performance for each parameter setting is averaged across the 10 folds and compared with the other settings tested. We defined a paramGrid stating the values to be tested for the parameters within the pipeline. For this pipeline we could define values for maxIter, elasticNetParam, and regParam, which are parameters of the logistic regression model, or for the normalizer parameter of the pipeline. Included in our paramGrid for this lab were values for elasticNetParam, which must be between 0 and 1. This is an important parameter because it can make the model closer to a Lasso regression model (irrelevant coefficients are set to 0) with a value close to 1, or a Ridge regression model (the impact of irrelevant coefficients is minimized without setting them to 0) with a value close to 0. The values to test for elasticNetParam were therefore set to 0, 0.5, and 1 to see which type of regression model would be best for the data. The second parameter defined in the paramGrid was the normalizer from the pipeline. The normalizer ensures the algorithm runs correctly, and the value set for this parameter represents the p-norm used for normalization. The default value of 2 was used previously, so within the paramGrid the values to be tested were set to 1 and 3.
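In code, the grid and cross-validation look roughly like this, reusing `lr`, `normalizer`, `pipeline`, `evaluator`, and `train` from the sketches above:

```python
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Candidate values: Ridge-like (0), mixed (0.5), and Lasso-like (1)
# elastic-net mixing, plus p-norms of 1 and 3 for the normalizer.
param_grid = (ParamGridBuilder()
              .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
              .addGrid(normalizer.p, [1.0, 3.0])
              .build())

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=param_grid,
                    evaluator=evaluator,
                    numFolds=10)  # 10 training/test fold pairs

cv_model = cv.fit(train)  # evaluates every combination, keeps the best
best_model = cv_model.bestModel
```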
After using the CrossValidator to find the best paramMap, the resulting model was trained on the training data set and evaluated on the test data set. Hyperparameter tuning improved the model slightly, raising the area under the ROC curve from 0.989 to 0.995.
With an almost perfect predictive model defined, the last step was to grab the undiagnosed tumors from the original data set and use the model to predict their diagnoses with a high level of confidence.
The creation of this highly accurate model shows the power of machine learning to better lives worldwide. It allows for the augmentation of breast cancer diagnosis and helps ensure that doctors see the patients in dire need of medical attention. Models such as these can help detect cancer earlier, and in more individuals, than doctors can alone. Machine learning is already being implemented in oncology to diagnose tumors, in pathology to analyze bodily fluids, and in detecting rare genetic diseases using facial recognition and deep learning. Machine learning serves many purposes, from chatbots to augmenting the medical diagnostic process, and with continued advancements in technology and AI its applications are sure to expand even more.
I gave the keynote address to George Washington University’s DATA Conference on December 2. This is what I told the students. Please reply with your thoughts and ideas to extend the conversation on how to make the world a better place through data.
Think about how you can use data and data science to make the world a better place. We are at a unique time in history: we now have huge amounts of data being collected by all the digitized systems in the world (almost 1 ZB, or 10²¹ bytes), and data science techniques are becoming more powerful and easier to use. These two factors will give you the ability to do more to improve the lives of your fellow students, your professions, and society at large than has ever been possible before.
Data Science innovation will be central to solving humanity's grand challenges by capitalizing on this unprecedented quantity of data now being generated on human behavior and attitudes, human health, commerce, communications, migration and more. You can help to accelerate and advance the development and democratization of Data and Data Science solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and others.
To help stimulate your imagination, I will present several examples from our work at IBM. The key is to combine your growing expertise in data science with your passions. At IBM, we are encouraging students to combine data science studies with other disciplines, such as the natural sciences, social sciences, and healthcare: the problem domains where data science can be put to work.
For the first example of “Doing Good,” I’d like to tell you about IBM Fellow Chieko Asakawa. She became blind at the age of 14 and as a result has devoted her professional life to building solutions that allow her and other blind people to access the world and regain their independence. Chieko has developed an object recognition solution so she can “see” ordinary objects in her home and at stores, allowing her to pick out wine or read the directions on a package, all using machine learning. She has also developed an indoor navigation system that helps her easily get from place to place at work. Both use smartphones as the user interface. See these links for more details on Chieko’s inventions: Image recognition: https://www.youtube.com/watch?v=RNp4OpToAdQ (many interesting solutions; Chieko’s is featured at minute 17); Nihonbashi Tokyo NavCog: https://www.youtube.com/watch?v=mlGcutE2t2A ; TED talk: http://www.ted.com/talks/chieko_asakawa_how_new_technology_helps_blind_people_explore_the_world .
The second example is from IBM’s Cognitive Build competition. Two IBM employees, Karibi and Jenn, proposed and prototyped a solution to help children with autism. The solution, dubbed Pino after Karibi’s nephew, uses the Watson Conversation service to help children with autism communicate more independently by providing real-time verbal prompts. It can also be used with other conditions that affect communication ability, such as stroke and Alzheimer's disease. I met Jenn a few weeks ago. She told me, “At a birthday party a couple of years ago, I saw how upset my son was when he didn't receive a cupcake because he couldn't say 'yes' when offered one. He needs a therapist or caregiver to prompt him to answer basic questions. He has a communication device that can help him speak, but it requires him to know he needs to respond… When Cognitive Build started, I thought it would be great if my son's communication device could be cognitive so it could help him to be more independent when I'm not around.” Learn more at this link: https://medium.com/cognitivebusiness/addressing-autism-project-pino-3741ce13d39
The third example is about the opioid epidemic, which has become one of the worst health crises in US history. In 2015, more than 90 Americans died every day from opioid overdoses, a number comparable to deaths in car accidents and projected to have risen further in 2016 and 2017. The Centers for Disease Control and Prevention (CDC) estimate the total economic burden of prescription opioid abuse to be $78.5 billion a year, including healthcare costs, lost productivity, and criminal justice involvement.
For many addicts, the problem often begins with legitimate healthcare treatment in which opioid painkillers are first prescribed, such as for surgeries or chronic back pain. During treatment, some patients become addicted and go on to suffer the well-documented consequences of addiction, while others do not, even if they become long-term users. To combat the epidemic, it is vital to understand the exact circumstances under which medically sanctioned treatments can devolve into addiction. That’s where data science comes into play.
This summer, we took the first steps in tackling this question in a project within our Science for Social Good program. The team, led by Bhanu Vinzamuri, focused on analyzing the relationship between factors surrounding an initial opioid prescription and a subsequent diagnosis of addiction. We found that receiving an initial prescription for more than 7 days correlates significantly with long-term usage, as does receiving a synthetic opioid prescription. We also confirmed that days of supply matters much more for addiction than the quantity prescribed per day (e.g., in milligrams of morphine equivalent). Other factors positively correlated with long-term use, which doctors should consider when prescribing opioids, were age, certain regions of the country, rural location, healthcare utilization, and depression, osteoarthritis, or diabetes. See more projects at http://www.research.ibm.com/science-for-social-good/#projects
Because of the power that data science and data are bringing to humans, we need to be sure they are a force for good and not for evil. IBM and the XPRIZE Foundation believe artificial intelligence (and the data science algorithms it uses) will be central to solving humanity's grand challenges. Solutions to pressing problems related to health and wellbeing, education, energy, the environment, and other domains important to humanity can potentially be found by capitalizing on the unprecedented quantities of data and recent progress in emerging AI technologies. That’s why IBM is putting up $5 million for the Watson AI XPRIZE. See https://ai.xprize.org/ for more details.
But even if you are not up for competing for the AI XPRIZE, there is a lot you can do. Find a societal problem that you are passionate about. It all starts with a problem or need, like Chieko’s blindness, Jenn’s child with autism, or the opioid crisis. Then come up with an idea or approach. There is a lot of data now available. Our Data Science Experience is out there on the web for you to play with. It is designed to allow data scientists, business analysts, stakeholders, and programmers to work together on a data project. It’s easy to use, and there are tutorials to guide you. It is at https://datascience.ibm.com/ . Don’t just study the problem and write a school paper; create a solution that helps people. Your university’s office of entrepreneurship can help you build a business case for your solution. Finally, consider pitching your idea at one of the many pitchfests out there. One I’m familiar with that exposes your ideas to corporate sponsors such as IBM is NCET2, at https://ncet2.org/. Go ahead and make the world a better place!
Do you have Super Powers? Would you like to have Super Powers? I was recently invited to give a talk about AI at the Escape Velocity science fiction conference (https://escapevelocity.events/) put on by the Museum of Science Fiction in Washington, DC. I focused the talk on how our advances in AI technology and augmenting human intelligence (Intelligence Augmentation = IA) are starting to provide humans with Super Powers once only the realm of sci-fi writers. I’ve seen a lot of technology come and go, and what we are now developing has the most potential of any technology I’ve experienced to help people do more, do it faster, and do things we couldn’t do before. IA is going to have more impact on individuals, our professions, and society than all the previous advancements in computers to date.
John Campbell, the famous editor of Astounding Science Fiction, who published the likes of Asimov and Arthur C. Clarke, pushed his writers to create heroes and foes with cognitive abilities better than humans’, or with different attributes. So too, the comic books of the era featured heroes with unique powers, some cognitive and some around endurance and strength. As you know, this vision of achieving superhuman capabilities has existed for most of recorded history. And it has been a dream of computer scientists for as long as computer scientists have existed.
We haven’t made a lot of progress in the non-fiction world of creating people that are different (evolution is a VERY slow process). And while the world’s knowledge keeps increasing, people think at pretty much the same speed, with the same memory limitations as before. Therefore, my talk focused on how we can use technology and data to help us achieve superhuman capabilities.
Just as we’ve created assembly lines full of machines for our factories, we are starting to create tools to help those of us called Knowledge Workers do our jobs more efficiently and create results that haven’t been possible in the past. Technology will redefine our professions and our jobs within our fields. These changes won’t just provide marginal improvements; they will deliver significant new capabilities that bring higher productivity to our employers, enhance our own well-being, and solve significant problems facing society.
We will know what customers are looking at in every store in the world, what they pass over, and what they buy. Some might call that omnipresence. We will be able to predict who will click on which ad on the web, who will buy which product, who will get which diseases, and which drugs will work for which individual. Is that precognition or clairvoyance? We’ll be able to instantly recognize a face in a crowd of thousands and see through objects. Our cars will help us see black ice on the road and around corners. Our super-hearing will not only hear from a distance, but will allow detection of emotional stress and mental health issues that others might be facing, probably before they themselves realize it. Even better than Superman!
In the government space, the super power of Super Vision will enable us to spot terrorists’ pictures among the millions of videos and images collected, as well as detect illegal fishing and logging operations. Precognition will allow us to predict the outbreak of a potential pandemic early enough to mount a robust public health defense, and predict weather events in time to evacuate and prepare emergency operations. In the cybersecurity world, we'll have the super power to detect threat patterns quickly, predict the fast-fluxing techniques likely to be used by intruders, and provide rapid advice for response teams.
These Super Powers come from taking all the data the world is now generating, most of which goes to waste, and analyzing it to find patterns and answers to questions that we couldn’t answer in the past. We’re now creating almost 10 zettabytes per year, and the amount is increasing exponentially. Analyzing all that data will give us these Super Powers, and as the data grows, so too will the powers. Analyzing all this data requires very sophisticated technology, which IBM and others in the IT industry are intensely developing using machine learning, natural language processing (NLP), and reasoning.
My goal in the talk was not to dwell on the technology but to show how far we have come in creating superhuman capabilities. I talked about some of the applications of IA – Intelligence Augmentation – to businesses, professions, and society. See the slides for some of the examples I used: https://www.slideshare.net/frankibm/getting-your-super-powers-with-watson-and-ia
I concluded with some discussion on how humans and machines can complement each other so that we can accomplish more together. It is my belief that we will need this collaboration to solve some of society’s hard problems such as climate change, supporting all the people that will soon be on planet earth, and even protecting us from incoming asteroids. The final slide shows 2 famous quotes regarding the value of the combination of people and machines:
- “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” – J.C.R. Licklider, 1960, professor at MIT
- “The computer is incredibly fast, accurate, and stupid. Man is unbelievably slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.” – Leo Cherne, Presidential Advisor
Write to me at: firstname.lastname@example.org
The six years since IBM ushered in the new era of Cognitive Business have witnessed several pivotal transitions. The massive system of servers and disk drives that beat Jeopardy! using an advanced orchestration of machine learning, natural language processing, and statistical reasoning has evolved into a sophisticated set of services delivered through a world-class cloud infrastructure. To help you understand the direction of these enhancements and their impact on Cognitive Business, the IBM Analytics Solution Center was pleased to have Rob High, IBM Fellow, VP and CTO Watson Solutions, present on the future of cognitive augmented intelligence.
Rob started by taking us back to the Jeopardy Challenge in 2011, reminding us how hard it is for a machine to answer a question correctly, but also how good people, like Ken Jennings, are at answering questions. What changed that allowed Watson to win at the game of Jeopardy? IBM took a different approach than classical AI which focused on semantics, ontologies, and rules -- IBM focused on linguistics and the use of machine learning to help uncover signals to the right answer.
Rob cited the consulting firm IDC’s FutureScape report, which said that “by 2018, half of all consumers will regularly interact with services based on cognitive." Why this remarkable adoption of cognitive technologies? We are collectively generating so much data today that we can’t consume and make sense of it all. Doctors can't read everything in their field; they would need to spend 150 hours a week to read it all, leaving no time for doing their job, or for sleeping. Every one of us is in a similar situation.
What are Cognitive systems? Cognitive systems have 4 characteristics – they understand, reason, learn, and interact with people. Rob explained that these systems are different from traditional rule-based systems because they are taught based on data rather than programmed. This training data impacts how the system will answer questions – customers will use training data specific to their organization and thus create cognitive systems that conform to their organization’s business approach, and more broadly, its philosophy.
Since the Jeopardy Challenge, IBM has been very active in enhancing the technology and providing new Cognitive offerings. Rob focused on IBM’s latest work on Conversation services. Conversations are much broader than just answering fact-based questions. Conversations, whether between two people, or people and machines, should engage the user, understand the user’s concerns, build on an idea, and leave the user inspired and satisfied at the end of the conversation. In the best conversations, each party comes away from the conversation with new thoughts that were generated within the conversation. It will be hard to develop such a sophisticated Conversation service but this is our goal.
Rob then showed a video of a future Cognitive Mergers & Acquisition Advisor named Celia that responded to questions from two people analyzing acquisition targets. Celia could understand the conversation between the two people and then interrupted to ask, “It sounds like you are discussing the work we did last week, would you like me to bring up the results from that session?” Imagine a cognitive assistant that could participate in your conference calls, recalling previous action items, checking to see if the items had been accomplished, or performing analysis that it deems pertinent to the discussion.
One of the crowd-pleasers at the Seminar was the demo of “Embodied Cognition” using a Pepper humanoid robot (from Softbank Robotics) connected to Watson Conversation service. Besides answering questions, Pepper would turn to face the speaker, gesture with her (?) hands, and provide inflection in her voice. Pepper can also use the Watson Visual Recognition Service to recognize individuals and Watson Tone Analyzer to understand the user’s emotional state. Although the answers were no different than what Watson could provide without the Pepper embodiment, the human-like interactions were a strong draw to the humans attending the seminar!
Rob’s slides are available at www.ibm.com/ascdc under the May 31 event. Or email me if you’d like more information: email@example.com
My work this year has taken me from Big Data and Analytics towards Cognitive Computing and what IBM is now dubbing Cognitive Businesses (or Cognitive Government in our case). Cognitive businesses leverage cognitive computing technology (think Watson) to enhance, scale, and accelerate the expertise of their personnel. Below is the summary of the first part of a symposium I co-chaired last week. I'm happy to answer any questions you may have.
The AAAI Fall Symposia on November 12-14 included tracks on AI for Human-Robot Interaction, Cognitive Assistance, Deceptive and Counter-Deceptive Machines, Embedded ML, Self-Confidence in Autonomous Systems, and Sequential Decision Making for Intelligent Agents. This post will provide my general impressions of the Cognitive Assistance symposium.
Jerome Pesenti, IBM VP of Watson Core Development, provided the first day's keynote. He started with the great quote from Fred Jelinek (Cornell/IBM/JHU) that “Every time I fire a linguist, the performance of the speech recognizer goes up.” He then talked about how deep learning is allowing recognition systems to approach or surpass human performance. This led to a lively discussion with the audience on the universality of learning algorithms and whether the machines were learning in the same manner that humans learn (no). Jerome finished with some applications of Watson, including the Oncology Advisor, citizen support (e.g., tax questions), and security (finding relationships between data).
The rest of the morning was filled with examples of cognitive assistance for legal tasks such as filing a protective order (Karl Branting), human-computer co-creativity in the classroom (Ashok Goel), and a tool to help SMEs define their vocabulary to find the most relevant content on the web (Elham Khabiri).
During lunch, many of the symposium attendees ate together and a lively discussion ensued on cognitive assistance. One topic that I found interesting was ultimate chess, where human-machine teams compete. While these teams have beaten computer-only teams in the past, Murray Campbell noted that advancements in chess-playing computers are decreasing the value-add of the humans on the team.
The afternoon session of Day 1 started with two interesting talks on cognitive assistance for those with cognitive disabilities. Madelaine Sayko described Cog-Aid, which would include a cognitive assessment, a recommender system (based on the assessment), and an intelligent task status manager for starters. Then Daniel Sonntag described the Kognit technology program, which includes tracking dementia patients' behavior using eye tracking, and mixed reality displays to assist patients in performing activities of daily living. Kevin Burns presented a sense-making approach that could be used by an intelligence analyst to help understand and refine prior and posterior probability calculations as new evidence is added; this could eventually be embodied in a cognitive assistant. Next came a presentation on capturing cybersecurity operational patterns to facilitate knowledge chaining, by Keith Willett.
The final session of the day was a panel discussion of workforce issues associated with cognitive assistants, led by Murray Campbell. Erin Burke of Fordham University Law School talked about how legal education must transition and how she is working at the intersection of law, big data, and cognitive computing. Jim Spohrer, Director of IBM’s University Programs, offered some predictions, including that by 2035 everyone will be a manager and will have at least one cognitive assistant working for them. A lively discussion ensued with the audience about our forthcoming relationship with Cogs, including whether we could trust them, unintended consequences, whether we can build common sense into a Cog, and whether our brains will atrophy as we come to depend on Cogs.
I’ll cover Day 2 in the next blog post.
In medieval times, Alchemists hoped to convert base metals into the noble metal gold through the use of a Philosopher's Stone. Today, in the field of information science, we talk about Information Alchemy: converting data into information and then into knowledge. Some people even add a 4th stage of converting knowledge into wisdom[i], but that will be for another blog post.

Data is defined as the raw characters or numbers, whereas information is defined as the processing of that data into various relationships so they have some meaning. Dr. Eisenberg at the University of Washington describes knowledge as the “collected, combined, organized, processed information for a purpose.” Over time, it is thought that accumulated and refined knowledge leads to wisdom.
This year, the total of all digital data created is forecast to reach close to 4 zettabytes, or 4×10²¹ bytes, according to IDC[ii]. This is nearly four times the 2010 volume, and it is growing rapidly. All of this data should let us make a smarter and better planet. However, today we’re drowning in all this data, because we don’t have the time as individuals to process all this information, and we don’t have computer systems that can turn this data into insight. But soon that will change.
We are entering a new era in computing which IBM is calling Cognitive Computing. The first of these systems is the IBM Watson system, which debuted on the Jeopardy! show 2 years ago. Traditional computing systems have done a great job of handling data, including storing it and manipulating it into information. So now we have lots of financial, inventory, customer, and all sorts of other, mostly numerical, data.

We also have lots of unstructured information such as text, audio, graphics, and video. We used to say that 80% of the new bytes being created today were associated with unstructured data, but that number is probably closer to 90% given all the video being created these days. This text and multimedia information is human-readable; in fact, it is designed by humans for humans to understand, but it is not easily understandable by today’s computers.
And that is a considerable problem. Today, the transformation of information into knowledge is primarily done in people’s heads. Not just by scientists, engineers, or financial analysts, but by everyone who reads an article or watches a video. The time available for people (some would say skilled people) to analyze information to gain insights (knowledge) is the limiting factor in the production of new knowledge today. To say this another way, we are now information-rich, but knowledge-poor.

The goal of the cognitive computing efforts is to remove this limitation by designing computer systems that can take this abundance of information, much of it in human readable/viewable formats, and convert it into knowledge. For example, in the Jeopardy! IBM Challenge, the Watson computer system analyzed its deep information stores to find the answer that best matched the clue and the category. It did this feat by utilizing many different algorithms to attempt to “understand” the text information, and a machine learning (artificial intelligence) scoring system to select the best response.
In a more significant effort, IBM is working with Memorial Sloan-Kettering and WellPoint (a major BC/BS licensee) to use cognitive computing technology to assist doctors by helping to identify individualized treatment options for patients with cancer. It is, in effect, creating knowledge of the appropriate treatment options from information about the patient’s condition and medical history, and information from clinical trials and best practices on cancer treatment.

While the field of cognitive computing is just beginning, I believe that over the next several years we will learn how to perform “Information Alchemy,” and we’ll see how this newly created knowledge can benefit our organizations and our lives.

As the quintessential information-based organization, government agencies may be in the biggest need for “Information Alchemy.” Do you see this need? Do you see opportunities for Cognitive Computing at your agency?
Director of IBM’s Analytics Solution Center
[i] Eisenberg, Mike, “Information Alchemy: Transforming Data and Information into Knowledge and Wisdom,” March 30, 2012, http://faculty.washington.edu/mbe/Eisenberg_Intro_to_Information%20Alchemy.pdf
Derechos, droughts, the hottest July on record, shattered high-temperature records, a melting Greenland ice sheet. Just what is going on with the weather these days? Is this weather really abnormal, or does it just seem to be that way? Is this part of a trend? Does global climate change mean we’ll have more of these extreme weather events? Being a data and analytics person, I started looking to see what data analysis had been done on this subject.

The US Climate Extremes Index[i] provides a measure to track the occurrence of extreme weather events (although it doesn’t take into account derechos and other severe wind events). The trend of the index (smoothed) has been on the rise since 1970 and is now at an all-time high, as shown below. The index was at a record high of 46% during the January-July period, over twice the average value and surpassing the previous record CEI of 42%, which occurred in 1934. Extremes in warm daytime temperatures (83 percent) and warm nighttime temperatures (74 percent) both covered record large areas of the nation, contributing to the record high year-to-date USCEI value.
This index is compiled by combining measurements from throughout the country (the 1,218-station US Historical Climatology Network) that show the percentage of the country impacted by extreme weather in terms of: maximum temperatures much above or below normal; minimum temperatures much above or below normal; the percentage of the country in severe drought or severe moisture surplus; the percentage of the country with a much greater than normal proportion of precipitation derived from extreme 1-day events; and the percentage of the country with a much greater than normal number of days with precipitation.
The U.S. Global Change Research Program in 2009 published a study that documented the changing climate and its impact on the United States[ii]. The study uses 3 standard forms of data analysis: 1) reports on observations, 2) predictions based on the observed trends, and 3) modeling to better predict future climate changes based on various assumptions about the amount of heat-trapping gases in the atmosphere. While the first two types are based on large quantities of collected data, they use only U.S. observations. The modeling, however, must be done on a global basis, which substantially increases the amount of data that must be crunched.

Here are some of the findings as they relate to extreme weather:
Overall Warming of the Climate

Temperatures, on average, in the 1993-2008 period are 1-2ºF higher than in the 1961-79 baseline. By the end of the century, the average U.S. temperature is projected to increase by approximately 7-11ºF under a high emissions scenario and by approximately 4-6.5ºF under a lower emissions scenario. The temperature observations show more frequent and warmer warm days and warm nights, and less frequent and warmer cold days and cold nights, in most areas.
More intense, more frequent, and longer-lasting heat waves

In the past several decades, there has been an increasing trend in high-humidity heat waves, characterized by extremely high nighttime temperatures. Parts of the South that currently have about 60 days per year with temperatures over 90ºF are projected to experience 150 or more days a year above 90ºF under a higher emissions scenario. In addition to occurring more frequently, at the end of this century these very hot days are projected to be about 10ºF hotter than they are today.
Increased extremes of summer dryness and winter wetness, with a generally greater risk of droughts and floods

Trends in drought have strong regional variations. Over the past 50 years, with increasing temperatures, the frequency of drought in many parts of the West and Southeast has increased significantly. Models show that the Southwest, in particular, is expected to experience increasing drought as the dry zone just outside of the tropics expands northward with global warming.
Precipitation coming in heavier downpours, with longer dry periods in between

While average precipitation over the nation as a whole increased by about 7% over the past century, the amount of precipitation falling in the heaviest 1% of rain events increased nearly 20%. One of the outputs of the climate modeling is to project the probability of certain events. For example, heavy downpours that are now a “1 in 20 year occurrence” are projected to occur about “once every 4-15 years” by the end of the century. These heavy downpours are also expected to be 10-25% heavier by the end of the century than they are now. This will likely cause more flooding events (flooding depends both upon the weather and the susceptibility of the area to flooding).
More intense but fewer severe storms

Reports of severe weather such as tornadoes and severe thunderstorms have increased during the past 50 years. However, the climate study indicates that much of this may be due to better monitoring technologies, changes in population areas, and increasing public awareness. Climate models do project an increase in the frequency of environmental conditions favorable to severe thunderstorms. But the report notes, “the inability to adequately model the small-scale conditions involved in thunderstorm development remains a limiting factor in projecting the future character of severe thunderstorms and other small-scale weather phenomena.”[iii] Advances in modeling and big data analytics, as well as improved monitoring networks, are likely to reduce this limitation in the future.

The June derecho that hit the Washington metropolitan area shows the current state of the art in forecasting a severe storm. The Storm Prediction Center of NOAA was able to provide approximately 4 hours advance warning of the storm. Longer term predictions would require additional data about the atmospheric instability that propelled the derecho from Iowa to the Washington metro area, as well as better real time modeling.
Shift of storm tracks towards the poles

Cold season storm tracks have been shifting northward over the last 50 years, with a decrease in the frequency of storms in mid-latitude areas. The northward shift is projected to continue, and strong cold season storms are likely to become stronger and more frequent, with greater wind speeds and more extreme wave heights.

The climate changes will have an interesting effect on the so-called “lake effect.” Over the past 50 years, there is a record of increased lake-effect snowfall near the Great Lakes. As the climate has warmed, there is less ice on the Great Lakes, which has allowed greater evaporation from the surface, resulting in heavier snowstorms. Eventually, temperatures are expected to rise sufficiently that much of the precipitation will fall as rain, reducing the snow totals.
While trending of individual elements such as temperatures is useful, accurate predictions require consideration of the interactions between the climate elements. For example, there is a mutual enhancement effect between droughts and heat waves: heat waves enhance soil drying, and drier soil heats the air above it more, since no energy goes into evaporating soil moisture. Big data modeling can show the results of this escalating cycle of warming on the future climate.

The New Normal

So it seems that all this abnormal weather we are seeing will become the new normal. Forewarned is forearmed.

Analytics Solution Center, Washington, DC
[ii] Thomas R. Karl, Jerry M. Melillo, and Thomas C. Peterson (eds.), Global Climate Change Impacts in the United States, Cambridge University Press, 2009