30/06/2017 | Written by: Vicky Bunyard
This piece first appeared in 'Preview – the magazine of the International Council on Systems Engineering UK Chapter'. In this context, Systems Engineers are those actively engaged on large Capital Projects – Critical Infrastructure, Aerospace and Defense, and Oil and Gas, for example.
When I first started to hear about the Internet of Things, one thing that went through my mind was that the traditional triangle of forces that we all draw around project and product development – 'Cost, Quality, and Time' – was no longer sufficient, and I began to think in terms of a new set of opposing forces – 'Security, Privacy, and Interoperability'.
(Original exploration published on developerWorks, 2014)
I could clearly see that, when considering the Internet of Things, these three forces constantly pull us in different directions: the need for interoperability compromises security and privacy. More interesting still is the play of security versus privacy – just because something is secure does not automatically mean it is private, and ensuring privacy may actually compromise security.
I feel somewhat vindicated in this view by the sheer number of scare stories that pass across my Twitter feed every day – for example, the recent Mirai botnet or the Dallas hacker 'prank'.
As a Systems Engineer by background, I believe that Systems Engineers are strongly placed to fully understand the worst-case scenarios that these types of threats could lead to. They are the people working on critical infrastructure projects, Aerospace and Defense projects, roads, railways, chemical plants, oil rigs and so on, and one of the things that occupies our minds almost constantly is: how do we make sure our systems are safe and secure?
We tend to think of technology as ordered, structured and manageable. As Systems Engineers, complexity is in our blood, but our goal is always to produce structured technologies that solve complex problems. Those technologies may themselves be complex, but traditionally we, as the systems engineers, control that structure. We also, traditionally, control the human interactions with those technologies.
We saw that change as we increased the use of software in our systems, software being notoriously resistant to boundary conditions and the laws of physics. The Internet then added a level of emergence that we were probably not ready for, and the Internet of Things has effectively upped the stakes by several orders of magnitude. We can no longer assume control over the human interactions with those technologies.
It is critical that we understand the Internet of Things for what it is – a living, breathing organism that takes on the behaviors of those engaged in it. In CyberSecurity we are seeing a movement away from trying to solve problems from a purely technical perspective and towards adding aspects of Behavioural Science – see, for example, the National CyberSecurity Institute, but also a trend towards CyberSecurity elements being included in Behavioural Science degrees, and even forming entire fields of study.
This change in thinking is needed. I strongly believe that 'we cannot solve our problems with the same thinking we used when we created them'. Yes, it is an overused quote, but clichés get that way for a reason. The sticky part is that we often trot out the quote without really changing anything we do – so let's think about this for a moment:
We (and by ‘we’ I mean engineers and technologists in general) seeded the problem by being our cheerfully creative selves and building lots of great technical stuff without necessarily seeing the consequences of plugging it all together, releasing it into the wild, and giving access to anyone, anywhere on the planet.
We can’t fix this by continuing to think in terms of individual systems, or even in terms of integrated or interoperable systems, and we certainly cannot ignore the nature of the beast. On a positive note, we probably can achieve a lot by continuing to be our cheerfully creative selves and building lots of great technical stuff – hooray!
Considering the beast for a moment – if we think of this as a living, breathing organism, made up of all the organisms that contribute to its existence (basically us), then perhaps we should be adopting recent Cybersecurity thinking: putting our security in terms of immune-system responses rather than reactive fixes; in terms of healthcare rather than patches and fixes.
I find this way of thinking utterly compelling, at least in part because I see traditional approaches to threat, characterized by 'Identify, Assess, Respond, Monitor', becoming increasingly ineffective due to the speed and volume of attacks. Even in cybersecurity, the 'identify' phase is still most often characterized by experts watching news feeds, performing internet searches, subscribing to Threat Intel providers and doing a huge amount of manual work to understand what they should be looking for – which means we respond too slowly, and often too late. Couple this with some realistic thinking from my own colleagues at IBM Security:
(Extract from IBM Security Point of View: Internet of Things Security)
We have to conclude that we cannot simply 'fix everything and be done', and we need to acknowledge that now, because doing so allows us to stop fooling ourselves and start engaging in meaningful collaborations towards success.
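To make the gap concrete, here is a minimal, hypothetical sketch of what automating just the 'identify' step might look like – matching indicators from a threat-intelligence feed against a device's connection log, rather than waiting for an expert to spot the story in a news feed. The feed format, addresses, and log fields are invented purely for illustration:

```python
# Hypothetical sketch: automating the 'identify' step by matching
# indicators of compromise (IoCs) from a threat feed against local logs.
# All addresses, device names, and field names are invented examples.

threat_feed = {          # e.g. refreshed hourly from a Threat Intel provider
    "198.51.100.23",     # a known command-and-control address
    "203.0.113.99",
}

connection_log = [
    {"device": "camera-01", "dest_ip": "192.0.2.10"},
    {"device": "camera-02", "dest_ip": "198.51.100.23"},  # matches the feed
]

def identify_threats(log, feed):
    """Return log entries whose destination appears in the threat feed."""
    return [entry for entry in log if entry["dest_ip"] in feed]

alerts = identify_threats(connection_log, threat_feed)
for alert in alerts:
    print(f"ALERT: {alert['device']} contacted known-bad host {alert['dest_ip']}")
```

Even a toy like this runs continuously and reacts in seconds; the manual equivalent reacts in days.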
Taking the immune system and healthcare as our primary analogy for security in the IoT, I like to think about two extremes: I cut my finger, versus I have a deep wound to my abdomen. If I cut my finger, I pretty much leave it to heal itself; I know there is risk, but the risk is low and my immune system understands how to fix it without intervention. If I have a deep wound to my abdomen, I am going to call for help, and expect that help to plug the leak, provide painkillers, and provide antibiotics to ensure I do not get an infection.
Achieving systems that are 'self-healing' for the most part, but able to perform self-diagnosis and 'call the doctor' when necessary, is not beyond us at this time, and for the Internet of Things it is going to be essential.
The emerging cognitive technologies will be fundamental to this type of approach, enabling each 'thing' to understand what is normal for its own context, spot the abnormal, and fix it or request help. This is the only way we will be able to keep up with the threats of tomorrow – or even slightly later this afternoon!
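As a sketch of that idea – purely illustrative, with an invented metric and thresholds – a 'thing' could learn its own baseline behaviour and then decide between leaving a small deviation alone, healing itself, and calling for help, mirroring the cut finger versus the abdominal wound:

```python
# Illustrative sketch of the 'immune system' idea: a device learns what is
# normal for its own context, absorbs small deviations itself, and
# 'calls the doctor' for large ones. The metric (outbound requests per
# minute) and the 2-sigma / 5-sigma thresholds are invented assumptions.

from statistics import mean, stdev

class SelfHealingDevice:
    def __init__(self, baseline_samples):
        # Learn 'normal' from an observed baseline of the metric.
        self.mu = mean(baseline_samples)
        self.sigma = stdev(baseline_samples)

    def check(self, observed):
        """Classify a new observation: ignore, heal locally, or escalate."""
        deviation = abs(observed - self.mu)
        if deviation <= 2 * self.sigma:
            return "normal"            # the cut finger: leave it alone
        if deviation <= 5 * self.sigma:
            return "self-heal"         # e.g. throttle traffic, restart a service
        return "call-the-doctor"       # the deep wound: escalate to operators

device = SelfHealingDevice([10, 12, 11, 9, 10, 11])  # requests per minute
print(device.check(11))    # within normal variation
print(device.check(500))   # wildly abnormal, so escalate
```

The interesting engineering is, of course, in choosing the metrics and thresholds per context – which is exactly where the cognitive technologies come in.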
At this point, if I were speaking to you face to face, I would expect one of two phrases to be on almost everyone's lips:
- So what is the problem – the technologies are available, we just need to adapt and adopt.
- IT'S SKYNET! We're doomed, doomed I tell you! (This happens way more often than I ever expected!)
And therein lies the problem. Whilst we are prepared to trust the apps on our phones with all kinds of personal data, whilst we happily skip about on the internet, doing hobby projects connecting things to other things, using all kinds of untested stuff and plugging it into other exciting new untested stuff, and whilst we rapidly adopt new technologies to deliver new capabilities in our systems, we seem extremely reluctant to adopt these new technologies in the security space.
I understand that, I really do, but the speed at which those who would do harm can adopt new technologies far outpaces our own, and that puts us in a very risky position.
Factor in a recent panel discussion I heard amongst security professionals, where the premise was that we should put more responsibility on the end user for security in the Internet of Things. To me, that way madness lies – we could be talking about anyone from a kid with a phone to my Granny and her health-monitoring device that links straight to the hospital records. All of this leads me to conclude that there are four fundamentals we must pay attention to in order to ensure our IoT is secure, and therefore safe:
- We must think holistically, not just about our own things, but everyone else’s things.
- We must define ownership and responsibilities; whilst aspects of these must lie with end users, we cannot give away responsibility where there is no control.
- We must find faster ways to adopt new technologies effectively, and that probably means we need to collaborate more.
- We must enshrine into the culture of systems design and development the same 'Design for' mentality for 'Security' that we have always applied to 'Safety'.
Now if that doesn’t sound like fundamentals of Systems Engineering I don’t know what does!
Please feel free to find me on Twitter: @VBunyard
National CyberSecurity Institute – How to Combat Insider Threat Using Behavioral Science: http://www.nationalcybersecurityinstitute.org/general-public-interests/how-to-combat-insider-threat-using-behavioral-science/
Western Sydney University – Bachelor of Cyber Security and Behaviour: https://www.westernsydney.edu.au/future/future_students_home/ug/policing/bachelor_of_cyber_security_and_behaviour