Just before Christmas this year, some colleagues from IBM Emerging Technology and I attempted a two-day project, because we thought we might have the tech to tackle a big problem: cell phones in prisons. We know prisoners have them, but they are rarely tracked down. The UK government is currently planning to spend £7 million on this problem. We thought we could do it for fifty quid.
Cell phones broadcast over radio frequencies much like traditional radios, and for our project we decided to home in on two major UK telephone networks: Vodafone and Telefonica (O2/GiffGaff).
Our first big win was to get a copy of Vodafone’s licence agreement with Ofcom, the UK radio spectrum regulator.
The biggest nugget here was finding the frequencies on which Vodafone operates. The spectrum is purchased in 5 MHz chunks, and here are the ones the company uses for its network.
Time to get some hardware. Here’s what we used, and how much it cost:
Raspberry Pi, £20
SDR (Software Defined Radio), £30
And that’s it!
Then we had a little peek around the spectrum:
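The core of that peek is a single number: how much power is in the samples the SDR hands back. As an illustrative sketch (the amplitudes and the simulated burst here are hypothetical, not real Vodafone captures), this is how you would turn a block of IQ samples into a signal-strength reading in decibels:

```python
import cmath
import math
import random

def rss_dbfs(iq):
    """Relative signal strength of a block of IQ samples, in dB
    relative to full scale: mean power, converted to a log scale."""
    power = sum(abs(s) ** 2 for s in iq) / len(iq)
    return 10 * math.log10(power + 1e-12)  # epsilon avoids log(0)

# Simulate quiet air versus a burst from a nearby handset
# (amplitudes are made up for illustration)
random.seed(0)
noise = [complex(random.gauss(0, 0.01), random.gauss(0, 0.01))
         for _ in range(1024)]
burst = [s + 0.5 * cmath.exp(2j * math.pi * 0.1 * n)
         for n, s in enumerate(noise)]

print(round(rss_dbfs(noise)), round(rss_dbfs(burst)))
```

A call burst shows up as a jump of tens of dB over the noise floor, which is exactly the spike we were watching for.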
As you can see from the image above, when we looked in the lower parts of the Vodafone spectrum, we could see a big spike in the RSS (Received Signal Strength) whenever we made a call out.
The next thing to do was to see if we could get more exact information. We won’t bother you with the day we spent fiddling with gain, frequency and granularity settings, trying to dial in on what was happening. Instead, we’ll just present you with our best chart:
This gave us the RSS for both sets of traffic every time a cell phone call was made. The downlink ended up being very useful, as the cell tower signal was a lot stronger and therefore easier for us to pick up and work with.
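Splitting the traffic into uplink (handset to tower) and downlink (tower to handset) amounts to summing spectral power over two different frequency ranges. As a minimal sketch, with entirely hypothetical bin positions and tone amplitudes standing in for the real bands:

```python
import cmath
import math
import random

def dft_power(iq):
    """Naive DFT power spectrum (fine for a few hundred samples)."""
    n = len(iq)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(iq))) ** 2 / n
            for k in range(n)]

def band_power_db(spectrum, lo_bin, hi_bin):
    """Total power in a range of frequency bins, in dB."""
    return 10 * math.log10(sum(spectrum[lo_bin:hi_bin]) + 1e-12)

# Hypothetical capture: a strong "downlink" tone at bin 20
# (the tower) and a weaker "uplink" tone at bin 60 (the handset)
random.seed(1)
n = 256
iq = [complex(random.gauss(0, 0.01), random.gauss(0, 0.01))
      + 1.0 * cmath.exp(2j * math.pi * 20 * i / n)   # tower
      + 0.2 * cmath.exp(2j * math.pi * 60 * i / n)   # handset
      for i in range(n)]

spec = dft_power(iq)
print(band_power_db(spec, 10, 30), band_power_db(spec, 50, 70))
```

The downlink band dominates, which matches what we saw in practice: the tower is transmitting at far higher power than the handset, so it is the easier signal to lock onto.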
Bringing in AI
Now that we could see the signals from the mobile phone (although not read them, as the GSM traffic that cell phones send is encrypted), we could watch how they changed as I moved around the room.
We divided our space into seven chunks: our big room was split into six pieces, and the nearby kitchen made up the seventh. Then I made a lot of phone calls. And I really do mean A LOT. Altogether, my team and I racked up over 150 missed calls over a period of two days. But this meant we had training data, so we could show an AI things like “this is what the signal looks like by the sofa”, and “here is what it looks like by the projector”.
For example, when there were no phones active we saw RSS values of:
(Base was our name for “no call active”)
And when I was in the kitchen we would see values of:
(We designated the kitchen Cell six)
See the difference? Well, we trained an AI system, using Watson Machine Learning, to be an expert at telling the difference.
With only six “wanders around the room” worth of data, we managed to hit around 75% accuracy with our models. In the real world we would push this up with more and better sensors, and of course way more data. BIG DATA.
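Our real models ran in Watson Machine Learning, but the idea can be shown with something much smaller. The sketch below uses a nearest-centroid classifier over made-up RSS signatures (two frequency bins per reading, labels and values all hypothetical): each room cell gets an average signature from its training calls, and a new reading is assigned to whichever cell's signature it sits closest to.

```python
import math

# Hypothetical training data: RSS in dB for two frequency bins,
# a few readings per room cell ("cell6" was our kitchen)
training = {
    "cell1": [[-42, -55], [-43, -54], [-41, -56]],
    "cell6": [[-60, -38], [-61, -40], [-59, -39]],
}

def centroid(rows):
    """Per-bin average of a cell's training readings."""
    return [sum(col) / len(col) for col in zip(*rows)]

centroids = {label: centroid(rows) for label, rows in training.items()}

def classify(sample):
    """Nearest centroid: the cell whose average signature is closest."""
    return min(centroids, key=lambda lbl: math.dist(sample, centroids[lbl]))

print(classify([-44, -53]))  # near cell1's signature
print(classify([-58, -41]))  # near the kitchen's signature
```

With only a handful of calls per cell this is roughly what any classifier has to work with, which is why more training calls is the cheapest route to better accuracy.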
Getting a better baseline
The problem with our working environment was that it was a hackathon, with tons of mobile phones, people, Fitbits and little electronic bits and bobs everywhere. To solve this, it was time to go underground… to our anechoic chamber, a room where almost no electromagnetic radiation can enter or leave.
This allowed us to get much better “base signals”, and we are looking to add this into our machine learning system soon.
All of this work allowed us to build a prototype system for detecting where a cell phone was in the conference room in which we were working.
This was all built in Node-RED, a visual programming tool that came out of IBM Emerging Tech over the last few years.
First we had a basic debug interface, which allowed us to see the RSS values we were interested in at any given moment.
Next we made a basic map of our testing area, with each square showing one of the “cells” of the room. A square would light up red when a phone call happened there, and display our confidence, as a percentage, that we were correct.
You can see in our first screenshot here that it lights up when I make a phone call in the top left of the room, where our team was working.
Then, as I walked further down the room towards the kitchen, the system could see that movement and display it.
The confidence scores are low, but this is to be expected; we had only made six phone calls from each location to train the machine learning system. Furthermore, this is one of the fastest and easiest wins to improve the system: just teach it more!