IT and InfoSec professionals have been playing catch-up with users since the beginning of time (as long as you consider the first computer the beginning of time, like I do). This is at least partly caused by a pervasive misunderstanding that is rarely noticed and has certainly never been remediated.
The systems we have come to rely on so much for conducting business and most of the rest of our day-to-day lives were not designed with typical human behavior in mind. What I am hoping to do is introduce the science of human consciousness and decision making to the world of information security. The goal: create a more effective, efficient, and enjoyable InfoSec experience for all.
Cultural Reliance on Technology is NOT New
Humanity is incredibly reliant on technology. I am not just talking about the smartphones we must have in our pockets (or, more likely, hands), or the computers we use to conduct virtually all transactions. I am also talking about cars, tools, plumbing, electricity, and a million other things, but I do have a word limit so I won’t attempt to list them all.
We humans are incredibly reliant on two things: technology and luck. To illustrate this, let me point out a couple of things that most of you smart readers probably already know. Why did Google beat out Yahoo!? One was not empirically better than the other; instead, "the market chose", meaning that for (presumably) unknown reasons people overwhelmingly chose Google.
This same thing plays out every day – VHS vs. Betamax, iOS vs. Android, Facebook vs. Myspace. PC makers, car makers, banks, and every other vertical have stories like this. The reason most of this actually happened is luck, pure and simple. Products and services that beat their competitors are almost always just "in the right place at the right time". There is little reason or logic as to why one thing beats another in the court of public opinion. However, there are some insights as to why, and those insights can help us all do our jobs better.
Risk and Our Inability to See It
Let us begin with what is often considered one of the most boring topics in InfoSec – risk. Risk, risk assessments, risk tolerance … all of these are terms we hear thrown around constantly in the enterprise. Do they really mean anything? Or are we just paying lip service and living off of luck and best intentions? Sadly, I think it is mostly the latter.
We don't really see risk, especially when the person trying to understand it has any stake whatsoever in what is at risk. Numerous academic studies have shown that our ability to understand a situation and its risks, and to make a conscious, sound decision, is tenuous at best. In fact, decision research has shown that logic and reason are rarely part of the actual moment of decision, and that a coin flip predicts many choices about as accurately as algorithms and logic do.
Decision Theory and the Story of Economics
Economists spent decades trying to understand this. Initially, they did the exact same thing that most other industries (including IT and InfoSec) are still doing to this day. They developed all of their models on the assumption (alarm bells should be going off here) that people are rational. Yes, I said rational. People are not rational, but for some reason all too many of the things we do in InfoSec are still wholeheartedly reliant on these exact assumptions.
The sea change came to the world of economics in the 1970s in the form of an unlikely duo. Two research psychologists in Israel began to wonder why these assumptions existed and why they seemed so wrong. Daniel Kahneman and Amos Tversky were the unlikely pair who made unintentional waves in economics – so much so that Kahneman was eventually awarded the Nobel Memorial Prize in Economic Sciences.
With decision theory and prospect theory, Kahneman and Tversky turned the assumptions that had been ruling the world's economies on their heads. They were able to show convincingly that people are not good at making sound decisions, especially when they have a stake in the outcome, and that the rational-actor assumption had no place in economics (or anywhere else, for that matter).
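The asymmetry Kahneman and Tversky documented can be made concrete. A minimal sketch of their prospect-theory value function, using the median parameters they estimated in their 1992 follow-up work (these specific constants are my illustrative choice, not something from this article):

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 parameters).
# Gains show diminishing sensitivity; losses are amplified by the
# loss-aversion coefficient, so losses "loom larger" than gains.

ALPHA = 0.88    # curvature for gains
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss aversion: losses weigh ~2.25x more than gains

def value(x: float) -> float:
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A $100 loss hurts far more than a $100 gain feels good:
print(f"gain of $100 feels like: {value(100):.1f}")
print(f"loss of $100 feels like: {value(-100):.1f}")
```

This is why "you could lose your data" lands harder than "you could keep your data safe" – a point worth remembering when writing awareness material.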
So, we know a little about decisions and hopefully you, the ever-intelligent reader, are going to take this with you. Begin questioning decisions, or use this knowledge to just jump and not worry so much. Either way, relying on our rationality is out of the question now. But, it doesn’t stop there – we can’t trust our memories either.
Memory is an Illusion
Dr. Julia Shaw is the latest in a long line of neurologists, psychologists, and other academics who have studied human memory and reached surprising conclusions about it. Dr. Shaw released a book late last year called "The Memory Illusion" that wonderfully ties up decades of research and study into an entertaining and mind-blowing (excuse the pun) read.
I know you probably won't read her book today, so I will bring you up to speed on some of the conclusions she and her colleagues have come to. The most important is that our memories are not to be trusted. In several studies conducted by Dr. Shaw and others, researchers have been able to simply implant false memories into study participants.
No, they did not use some memory drug or anything like hypnosis. Instead, they simply told people that something that never happened (and could not have happened) took place at some point in their past. Sometimes they used doctored photos, but that wasn't even necessary. All they had to do was tell a convincing story, and the subjects overwhelmingly came to believe they had done something they never had, and likely never would have, done.
I can already hear you asking me (in your head): What does this have to do with InfoSec? Well, I am glad you asked.
How is This Applicable to InfoSec?
I think it is safe for me to say that computers were not designed with the humans using them in mind. Not beyond a method for us to interact (keyboard, mice, and voice) and a means by which to see the results (monitors and printers). Beyond this, how much thought has been given to the hard truths about human nature? Not much, I think.
Passwords rely on memory, and as we now know, memory is not reliable. Risk assessments, penetration tests, and audits are all based on human decision making, which we have seen is seriously flawed at best. The management, networking, use, and administration of systems are not intuitive, are not designed to accommodate humans, and are so incredibly complex that it is hard not to mess something up.
The overwhelming majority of breaches, malware infections, ransomware incidents, and exploited vulnerabilities trace directly back to human error. People forget their passwords, so we get simple passwords or easily tricked "forgotten password" features. Malware and ransomware authors have it easy, since tricking someone into clicking a link seems to be getting easier despite the growth of user awareness training. Training itself is typically a slide deck and a few multiple-choice questions. This is not an effective way to teach people.
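The memory-driven weakness of passwords is easy to demonstrate. Here is a minimal, illustrative sketch of the kind of check an attacker (or a defender vetting new passwords) might run; the tiny wordlist and the helper function are my own stand-ins, since real attackers use leaked-credential lists with millions of entries:

```python
# Hedged sketch: flagging the kinds of passwords that memory-driven
# users tend to pick. The wordlist below is illustrative only; real
# cracking lists contain millions of leaked passwords.

COMMON_PASSWORDS = {
    "123456", "password", "qwerty", "letmein",
    "welcome1", "summer2017", "iloveyou",
}

def is_memorable_but_weak(candidate: str) -> bool:
    """Heuristic check for passwords chosen for easy recall."""
    lowered = candidate.lower()
    return (
        lowered in COMMON_PASSWORDS   # on a known common-password list
        or len(candidate) < 8         # too short to resist guessing
        or lowered.isalpha()          # a single dictionary-style word
    )

print(is_memorable_but_weak("summer2017"))   # True: on the common list
print(is_memorable_but_weak("Tr0ub4dor&3"))  # False by these simple rules
```

The point is not that this check is sophisticated – it is that it doesn't need to be, because human memory pushes people toward exactly these patterns.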
Stories, AI, and Pragmatism … Oh My
Storytelling, better algorithms, and good old-fashioned pragmatism are the best ways we can combat this major flaw. I think that storytelling is the most effective tool in our inherited toolset. Story has the power to influence far beyond any rational facts or concepts. Story is the method by which humans learn and have learned for millennia.
If you go back to the earliest signs of humankind, you find cave paintings put there to tell stories. Moving forward through time, you find that for most of human history, learning and the passing on of lessons and knowledge were done almost exclusively by story. Story has the power to illustrate complex concepts and make the abstract feel important. A short story can teach better than weeks of classroom or online learning.
Making the subject matter personally applicable is another way to work with human nature rather than against it. If awareness training wants to live up to its moniker, then those designing it need to be more aware of human nature. By combining the clever use of story and personalization, we can make awareness training actually work.
For example, instead of telling people to look out for phishing emails or scams, tell them a story about how these threats have affected other people they identify with, not other corporations. Teach people to protect their personal phones, their children, their loved ones and they will automatically do those same things at work.
Other solutions are emerging, and I will go into much more detail on them in the upcoming whitepaper. But for now, we have user and entity behavior analytics (UEBA). This essentially lets artificial intelligence (AI) and machine learning record, analyze, and understand behavior on a network. AI has an advantage over people in that it does not share our cognitive biases, does not make our assumptions, and looks at the whole picture without emotion of any kind.
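The core UEBA idea – baseline normal behavior, then flag sharp deviations – can be sketched in a few lines. Real UEBA products model many features with machine learning; this toy example uses a single feature (login hour) and a simple z-score, both my own illustrative simplifications:

```python
# Hedged sketch of the core UEBA idea: learn a user's baseline
# behavior, then flag activity that deviates sharply from it.
# Real products model many features with ML; a single feature
# (login hour) and a z-score stand in here for illustration.

import statistics

def flag_anomalies(history, new_events, threshold=3.0):
    """Return login hours deviating > threshold std devs from baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [hour for hour in new_events
            if abs(hour - mean) / stdev > threshold]

# Baseline: a user who reliably logs in around 9 a.m.
baseline = [8, 9, 9, 10, 9, 8, 9, 10, 9, 9]
print(flag_anomalies(baseline, [9, 10, 3]))  # a 3 a.m. login stands out
```

The machine never gets tired of watching, never assumes "it's probably fine", and never forgets what normal looked like last month – which is exactly where humans fail.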
I know this is still a little vague on the details, and it is meant to be. However, the hyperlinks throughout the article will give you plenty to read, and I am actively working on a lengthy whitepaper that goes into even more detail. Plus, you can see and hear more at B-Sides LV and in the Social Engineering Village at DEF CON. The associated whitepaper is set to be released at the end of the month to tie in with the DEF CON and B-Sides Las Vegas talks. Maybe, if enough of you are interested, I will have to write a book and really explore this.