Creating AI Without Bias

March 17, 2020 | 20 min read

Listen to this podcast on Apple Podcasts, SoundCloud or wherever you find your favorite audio content.

Bias happens. Human beings are preconditioned to make decisions based on experience and expectations. In an enterprise security setting, however, this poses a risk of minimizing key concerns or overreacting to smaller events. Artificial intelligence (AI) leverages code rather than craniums to make tough decisions, but as podcast hosts Pam Cobb and David Moulton discover, that doesn’t mean it is unaffected by bias.

On this week’s SecurityIntelligence podcast, Pam and David are joined by Aarti Borkar, vice president of Offering Management at IBM Security, who offers insight on breaking bias and activating the AI advantage.

Get the Whole Picture

When Borkar began working on AI with IBM, she quickly noticed that human bias “based on our backgrounds, upbringing, ecosystems and environments actually impacted the programmers as they built AI systems.” It makes sense: Using knowledge gained through both personal and work experience, programmers have created AI tools predisposed to respond in familiar ways to familiar threats — and potentially miss novel attacks.

For Borkar, the goal is creating “an AI environment that is holistically trained based on geographic differences, educational differences and patterns, gender, geography, years of experience, etc.” By reducing inherent bias, AI solutions help companies get the entire cybersecurity picture.

Seek Out Blind Spots

As noted by Borkar, people often describe themselves as “unconsciously biased” — they can recognize bias but don’t perceive it happening in the moment. For AI, Borkar says the goal should be the opposite: “consciously unbiased.” In effect, tools and systems must be built and trained with the ability to both recognize and eliminate bias when it occurs.

Accomplishing this goal means both seeking out blind spots and correcting for them before AI goes live. But what does this look like in practice? Borkar highlights three factors:

  • Correcting code — Teams must actively search for and correct bias in code through regular testing (a minimal sketch of one such check follows this list).
  • 360-degree data — According to Borkar, “diversity of data” is critical to reducing AI bias. Diverse data sources with multiple vantage points help improve AI performance.
  • Cognitive diversity — It’s critical to employ experts across multiple security fields with different backgrounds to get the whole picture and examine all potential outcomes.
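
To make the first factor a bit more concrete, here is a minimal sketch of what a recurring bias check might look like. It assumes a detection model whose predictions have already been scored against ground truth and grouped by an attribute the team cares about (region, in this hypothetical data); the records, group names and tolerance threshold are invented for illustration and do not reflect IBM's actual testing process.

```python
# Minimal bias regression test (hypothetical data and threshold).
# Compares a model's false negative rate across analyst-defined groups
# (here, the region an alert originated from) and flags any group whose
# miss rate drifts too far from the overall rate.

from collections import defaultdict

# Each record: (group, true_label, predicted_label); 1 = malicious, 0 = benign.
# In practice these would come from a held-out evaluation set.
records = [
    ("emea", 1, 1), ("emea", 1, 0), ("emea", 0, 0),
    ("apac", 1, 0), ("apac", 1, 0), ("apac", 0, 0),
    ("amer", 1, 1), ("amer", 1, 1), ("amer", 0, 1),
]

def false_negative_rate(rows):
    """Share of truly malicious records the model marked benign."""
    positives = [r for r in rows if r[1] == 1]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if r[2] == 0)
    return missed / len(positives)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

overall_fnr = false_negative_rate(records)
MAX_GAP = 0.20  # assumed tolerance; a real team would set this deliberately

for group, rows in sorted(by_group.items()):
    fnr = false_negative_rate(rows)
    status = "OK" if abs(fnr - overall_fnr) <= MAX_GAP else "REVIEW"
    print(f"{group}: FNR={fnr:.2f} (overall {overall_fnr:.2f}) -> {status}")
```

Run as part of regular testing, a check like this turns "are we missing a region's attacks?" into a question the build pipeline can answer before the model ships.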

For Borkar, this trifecta helps enterprises aim for the “holy grail” of AI: “knowing the authenticity and the fairness of the outcome we are going for.”

Build the Right Team

On a small scale, bias may have minimal impact. On a large scale, it could be disastrous. Borkar highlights the story of an AI recruiting tool that contained a historical bias against resumes with the word “women” in them. According to Borkar, miscalculated algorithms have also created “issues in lending and mortgages where historical data patterns and badly designed algorithms have prevented certain categories of people from getting loans.”

Add in the rapid evolution of advanced AI, which now lends a measure of trust to these decisions because they’re made by supposedly unbiased devices, and there’s real potential for long-tail problems.

In security, this could mean ignoring a specific set of threats, using only a single set of data or relying on expertise from only one field, in turn exposing systems to substantial risk that AI tools won’t flag as potentially dangerous. For Borkar, breaking the bias problem means building a diverse team that includes three key groups:

  • Computer science experts — This group includes architects, product experts, product managers and offering managers.
  • Subject matter experts — This group both understands key AI functions and will use the tool on a daily basis.
  • Research and design — This group focuses on developing the best-fit AI model, designing core features and testing decision-making outcomes.

Start With People

People remain the paradox of artificial intelligence. Human beings are both the bringers of bias and the most familiar with its impact, making us the barrier to and the springboard toward improved infosec intelligence.

Ultimately, Borkar suggests bias isn’t unbeatable. For Borkar, “the more careful we are as a market, as an industry, as we build and as we consume that AI, the faster the positive side effects of AI will be visible to us, the good side — the good side that’s trying to protect everybody from the issues and the offenses that will come by.”

Episode Transcript

David: Pam, have you ever had a personal experience with bias?

Pam: I have. Actually, in my hobby, I’m a quilter by trade and that’s typically an activity that’s thought of for older women that are maybe in retirement. And this one time I was in my mid-30s, and I was in a quilt shop, and I was looking at this pattern that I wanted to buy and it had a lot of small pieces to it and a lot of different fabrics. And I pulled the pattern off the rack, and the worker at the shop, who was an older woman, I would guess in her 60s, looked at me and said, “Oh, honey, that one’s super hard, are you sure you want to try it?” Like a 30-something-year-old wouldn’t be able to sew some fabric together. And at that point, I’d been quilting for 15 years, so I was like, “I’m pretty sure I do, thank you.”

David: This is my jam.

Pam: I know some stuff about some fabric.

David: I remember years ago, kind of similar, I walked into an IT help desk at the university I worked at and, you know, summertime, wearing a tee shirt and some cargo shorts as one does. And I was trying to get my computer repaired and the guys behind, you know, the desk just looked at me and were like, “Come back later.” Pretty much blew me off. But I also worked for the office of the president and had different meetings and had to go suit up, and came back later suited up. I don’t think they recognized the difference, and it was immediately, “Oh, yes, sir, we’ll take care of that.” And the only thing that was really different was, you know, some comb on the hair and, you know, a suit.

And I thought, “Wow, you know, same problem, same guy, same day, same team, treated completely different just off appearance.” It’s one of those things that we all know, you know, humans have inherent bias but for many reasons expounded on by today’s guest, we need to be conscious about scrubbing bias out of AI.

Pam: Exactly. This is the SecurityIntelligence Podcast, where we discuss cybersecurity industry analysis, tips and success stories. I’m Pam Cobb.

David: And I’m David Moulton. I was thrilled to get to talk to Aarti Borkar, vice president of Offering Management at IBM Security about a topic that she’s very passionate about. Here’s our conversation about AI and bias. So today we’re gonna have a conversation about ethics and AI. And I have with me Aarti Borkar from IBM Security. Hi, Aarti. Thanks for being on the podcast with us today. Let me start off with maybe a broader question around AI and it’s an easy one. So when you think about it, what does bias in AI reveal about bias in ourselves as humans?

Aarti: Hi, David. Thanks for having me here on the podcast. It is the fundamental question, what you asked, and I think it is how I started getting involved in the topic in the first place. As a data geek who spent many years in AI, I had the privilege of working with our HR and neurosciences team to look at taking AI to solve problems in a very human aspect around HR. And that brought together a group of people that had neuroscience backgrounds and human backgrounds with hardcore geeks that were working in AI and ML and more.

And what we realized as we were doing some of these initial experiments is that human behavior, which in general has a lot of biases and is entirely based on our backgrounds, upbringing, our ecosystems and environments, actually impacted the programmers as they built AI systems. So what does that mean? Now bias, as much as it sounds like a negative word, it’s just a word, it’s who we are. Every human being has some sort of bias. I think there are, you know, over 80 known human biases documented in the world of neuroscience, my favorite one is called the ostrich bias, by the way, and you can imagine what that is.

When you start looking at those biases in our behaviors, they make us decide if certain things are good or bad, certain things are okay or not. And when you write code, those good, bad, safe, unsafe, true, untrue biases start impacting the way we program for the outcome we’re looking for. Now, in a non-security kind of construct, if you’re using AI for health, you might start making decisions that treat symptoms occurring in a particular race, gender or age group as the expected norm when that may not be the case, and you may not have the data to prove yourself right or wrong. The same behaviors start affecting AI in the world of security as well, which is probably where you’re gonna lead me next.

David: What do we know about a set of biases in our security technology and how diversity impacts those biases?

Aarti: That’s a great question. So continuing on the thought we were on, if you think about security professionals, not computer scientists building software for security, but SOC analysts and teams that are out there handling breaches, etc., a lot of their knowledge base comes from the surroundings they are in, the types of attacks that they solve for, and the geographies that they work in.

I had a unique conversation a few months ago with a banking leader in the Benelux. And one of the things he started talking about was that they monitor attacks that start north of Europe because there’s a pattern in where attacks take place in Europe and the types of environments that they attack. And they’ve gotten nearly comfortable with the idea of, “Hey, if somebody gets hit in Scandinavia, the next person is gonna get hit here, then here and so on, so forth.”

To me, as I was having that conversation with that expert, it just showed me a bias. Now, if that same person sat down and started writing code and programs, that inherent bias from the geographies that are attacked and that influence them is an issue. Now, how computer science is taught is a different story. If you grew up in eastern Europe or Asia-Pacific, you learn a different set of attack patterns than you would if you were learning computer science and security at a West Coast college in the United States. Each of these patterns starts influencing how you look at attacks, how you look at defending against them.

Now you asked specifically about diversity. So I’ll give you a very basic thing, and when we say it, it sounds natural, but at a very, very basic level, men and women attack differently. I’m not talking about computer science, I’m just talking about, you know, if you’re watching a sport or if you’re watching an actual battle of some sort, male and female patterns of attack are different. The same patterns flow through in the world of gaming, in the world of computer science in more ways than one, and thus security. If you’ve been in the industry for a few years, decades maybe, you have a different set of experiences that might allow you to attack and defend differently. Now, each of these patterns is useful.

And so if you’ve got an AI environment that is holistically trained based on geographic differences, educational differences and patterns, gender, geography, years of experience, etc., you start getting an AI system that is far more holistically-trained. The opposite of that is missing one of these and making someone feel like one of these areas is safe when it’s not.

David: And I guess that leads me to the next question, which is: how do you start to uncover those blind spots in your AI? You know, acknowledging, I suppose, upfront that we have some of these biases and they can help us or hurt us, what are you doing, or what are teams doing, to make sure that you don’t have those risks?

Aarti: So a few things that we’re all doing. So let me start by saying, apart from the cognitive diversity, which is the way to describe the need for experts of different kinds on a particular AI outcome problem, the diversity of the data also helps. So, all of these entities that we talked about and the true geographic coverage of offenses that are coming in and incidents coming in allow us to have, you know, a true 360-degree coverage. So it’s experts as well as diverse data.

Now, at a fundamental level, IBM has been working on bias in AI algorithms for years. So we spend a lot of time looking at bias at a base, algorithmic level. The next couple of steps is ensuring we do have a 360-degree data foundation. Each of these avenues provides a different vantage point to data that makes a collective data store that much stronger. And the third thing that is really important for us is the cognitive diversity of our teams themselves, and taking the time to look at every aspect of the outcome that we’re going after as we build this.

But the holy grail that ties all of these three things together is knowing the authenticity and the fairness of the outcome we are going for. We define and we publish, cleanly and clearly, all the time, what outcome this particular set of capabilities will provide. And that outcome defines what is right and wrong, what is ethical and not. And then these three aspects behind it, around the data, and the people, and the core algorithms, kind of round out an ethical and as-unbiased-as-possible set of technology for our customers.

David: Is there an AI that goes and looks for AI bias?

Aarti: A lot of algorithms themselves…even before we had artificial intelligence set up like we do with Watson and everybody else, algorithms had the ability to be biased. You could make mistakes in algorithms that created negative impact for classes of people or classes of, you know, outcomes, etc. And we worried about them, but they never actually reached the scale that you can reach with AI. Because AI expands everything and exponentially provides, you know, a loudspeaker for problems and solutions that exist. So computer science has known ways to trigger and find the gaps and blind spots in algorithms for a while. We have an army of people who’ve now used AI to kind of find those lapses and blind spots in the core algorithms and protect against them.

David: And so, what you’re talking about there is the amplification of a problem that at a small scale may be contained to a small group but with the scale and, you know, frequency of use of AI becomes massive?

Aarti: Right. I think most programs that we’ve had, most computer software programs, affect only the people or a small set of things that the program’s written for. With AI, even if it’s supervised in its learning process, it eventually has an impact on hundreds of thousands of people, accounts, customers, countries, the public sector, etc. And that type of impact needs to be controlled with a lot more serious thought than we’ve historically given it in the pre-AI era, I would say.

David: Okay. And I was just about to ask you if you could describe the risk or the business impact of biased AI, and it sounds like it could be multi-pronged. Do you have a couple of examples of that that come to mind?

Aarti: I’ll give you one security thought and one non-security thought because biased AI can have a lot of issues overall. And this one stuck with me just because it was very human and something everybody gets. There was an article in the newspaper not too long ago about a company that was using AI for recruiting; it was looking at people they had hired before and systematically did not like the word “women” in resumes because of some historical bias, right? Now that has a negative impact.

Now this company found the issue early enough and it supposedly never affected any of their actual practices, but it did make it to the press. And so, a miscalculated algorithm can cause a lot of human impact over time. It’s been seen to have issues in lending and mortgages, where historical data patterns and badly designed algorithms have prevented certain categories of people from getting loans, as an example.

David: That’s right. I saw an article along those lines that if you’d gone to a historically black school, you were precluded or excluded, I guess, from getting a mortgage, and it just seemed like, you know, what might’ve been a small thing but scaled up became something that affected a massive part of the population.

Aarti: Exactly. And it comes from probably a preconceived bias in either the data or the people writing the algorithms or something to that effect, right? And so, there are lots of issues with AI that isn’t properly planned. And there’s a sense of inherent trust because a computer came up with the answer, but eventually, a human being programmed that system to at least guide that system in the general direction as you do with AI.

Similarly, in the world of security, if we aren’t thinking about this carefully, we might just ignore a set of offenses because they haven’t been malicious in the past, or you just looked at data from one particular area, or the experience of the subject matter expert that the computer scientists had talked to didn’t mention something, right? And so having data, cognitive diversity with human beings, and algorithmic checks on the biases are all nearly required to make sure you don’t make mistakes of that kind. Because we’ll only catch it after the breach happens, right? And I don’t think we can take that risk.

David: When we work with AI infused with diversity, right, with this idea of being consciously unbiased, how does that help us overcome some of the security challenges that we face?

Aarti: So, I use the words consciously unbiased because human beings tend to use the excuse of being unconsciously biased all the time, right? Very often, somebody will make a comment saying, “Oh, that was unconscious. I have a bias, it’s unconscious, I didn’t notice it.” When we build AI technology for mission-critical applications like security, as vendors, as customers, as scientists and technologists, we don’t get to use that excuse of being unconsciously biased. We have to systematically be consciously unbiased in our approach. And so knowing our blind spots as we walk into a problem statement is nearly step one.

For any given problem, especially in the security domain, the outcome that you’re looking for and a true discussion in the team about the blind spots we might have is nearly an essential step as you start building the solution to that problem. Because then, you are paranoid about looking for every gap you might have and solving for that gap across data, and, you know, human expertise, client feedback and then, obviously, algorithmic support. And so that paranoia, that biases might impact your algorithms, and your solutions, and your outcomes needs to just be part of the culture of the team that’s building AI.

David: So when you talk about that team, who are the key players, and especially on the security team, who should be tasked with ensuring diversity in our AI?

Aarti: That is actually a great question. The number one group that will help is the subject matter experts. And we tend to have a mix of people involved in technology as it’s built for AI. You have the computer science team, which is architects, product experts, product managers, offering managers that have gone out and talked to a variety of people. Then you have a second set of people who are just subject matter experts, who would be doing this job on a day-to-day basis if there were no AI, or for whom the AI is going to be the sidekick or the supporting actor; those individuals need to be very involved in the AI being developed.

And then, obviously, you’ve got levels of research and design involved in that process as well, because interaction with an AI system is actually very critical. Now, A/B testing is a common phenomenon. The right thing, very often, is to let a set of experts or individuals solve a problem, and then have the computer-generated system, the AI system, solve the same problem and start getting side-by-side results for a prolonged period of time to kind of check the authenticity level of the solution.

Now, we did this with some of our solutions, and we realized there was a tipping point where we crossed over and started becoming more accurate than the human beings, because the speed of both processing as well as being able to collate and correlate information was faster with the AI than the human beings could ever achieve. But that kind of constant testing and interaction with subject matter experts, who come not from only one school of thought but multiple schools of thought, is critical. That nails the answer to your question, which is: who can help figure this out? It is the tandem interaction between the computer science teams and the subject matter experts who would ideally do that job on a daily basis with or without AI.

David: So it’s a bit of a delightful paradox where people are both part of the problem and part of the solution. Aarti, talk to me about how you would go about training people to monitor their bias?

Aarti: I have a point of view on how people would monitor their bias; I am, by no means, the expert on this. But having heard, you know, neuroscientists and behavioral scientists talk about this for very long, and working with them through the initial stages of AI, step one is nearly knowing what outcome you want to reach, the ethics of that core outcome. And what I mean by that is when you decide, this is the problem I’m gonna solve, solving that problem has an ethical connotation. It doesn’t always need to be good for everybody involved. Solving a problem sometimes could be bad for somebody. Knowing that upfront and caring about that upfront is step one, right?

And then the second part is actually realizing what blind spots you have as a team. Now, an individual very rarely is going to be evolved enough to realize all their blind spots. So, the word team becomes important here, because as a team you need to know, “Hey, here’s what I’m missing.” If you look at universities, especially the math departments, they sometimes have the most diversity you can ever find, more than any corporate world.

And it is for a specific reason: if you want to solve a complex math problem, you need opposing forces and opposing thoughts, because opposing mistakes cancel each other out and get you closer to the solution. That realization is just part of the ethos of education, especially in the math department. That kind of realization needs to be part of the team. The reason most of the time we run into issues in AI training is not because someone maliciously trained the AI badly, but because they were just unaware of the ramifications, the negative ramifications, of making mistakes and then didn’t go through a process of figuring out what those mistakes could be. It sounds simple, but when you put a team together, it’s a pretty complex process to get there.

David: No, it actually doesn’t sound simple to me, it actually sounds like it’s one of those things that, you know, you’ve got to go beyond that point where you don’t recognize that you have a bias to one where you do have a recognition of it and a willingness to be consciously unbiased. When you think about what empowerment is needed for practitioners to be a part of the change, you know, where do people get started? What is, I guess, the first step for those teams that really want to close those gaps?

Aarti: The first one is knowing the lineage of the data you’re using and making sure you have the breadth of data. You know, just because you have a ton of data, that data may or may not be accurate. So the lineage of that information, and the fact that it was ethically obtained and is usable in the way that it should be, is nearly step one, along with the fact that you have the breadth of data. The second thing is knowing who trained that AI, and if you’re forming a team, knowing that you have all the skills in the team.

Very often, we’ll download an app or buy a solution and you’ll see the marketing story around how this AI is very cool. Would you hire a bunch of people in your SOC that you’ve never met and allow them to solve your toughest problems? The answer is unequivocally no. So then, how are you allowing an AI into your environment without knowing who trained it? So ask the question: who trained the system? How was it trained? And make sure that can be explained by whoever it is you’re buying the system from, or enhancing the system, if, you know, you bought it to build on top of it.

And the third part of it is knowing where the outcomes end up. So, know who trained your system, know what data was used to train your system, and then figure out what the outcome is and what happened to your information after they gave it to you; at the very least, start there. Here’s the absolute truth: even in the world of security, AI can provide a higher-fidelity response and better security than human beings ever could, so it’s going to happen. But make sure that you’re using environments and technology that benefit you, are positive, and are providing an added level of security beyond what you already had. Because just as it could be a very powerful force for good, it can be a force for bad as well. And so, it’s worth knowing which force you are letting in through your doors.

David: Right. It sounds just like you wouldn’t let somebody come onto your team and get access to your data and make decisions for you without vetting them, because they’re a form of intelligence. You wouldn’t want to make the mistake of not vetting the intelligence that you bring in through, you know, different systems, tools, data, algorithms, that type of thing. So, shifting a little bit here: how can CXOs leverage their pull to ensure their companies are doing everything in their power to prevent AI bias?

Aarti: Now I’m definitely in the phase of providing you an opinion based on everything I’ve seen rather than giving you IBM’s perspective on it, but I’m going to have a point of view anyway.

David: That’s what we love about you, Aarti. You always have a very strong point of view, but that’s what we need. I think this is a really important topic to have a strong point of view around. And, you know, if our listeners are out there wondering what it is that they’re responsible for as leaders, you know, I think this is that moment where they could use that leadership from somebody like yourself who is so deeply and richly educated in this particular space.

Aarti: Thanks, David. I’m gonna talk to you a lot more often, it makes me feel good about what I do and who I am. But going back to the question on companies and AI bias, I think I’ll start by saying fear is not helpful here, because AI is gonna be part of your system whether you like it or not. It is in most of the apps on your cell phone, and it is in technology that is being consumed by your marketing teams and your HR teams and a variety of other teams; it’s just part of, like, the SaaS software you are using, etc. So it’s happening to you whether you like it or not.

Though, the thing to be careful about is ensuring that you are vetting the technology, especially if it is getting access to critical data from you. So, this is part of data security, and it’s part of just good behavior with AI in an environment. I think the C-level executive team needs to know what AI is being used and for what, and is data from your company getting used to train that AI?

So, if there’s an AI tool in the cloud, and your data of choice from your environment is being copied over to some fabulous cloud storage environment so this AI tool of choice can be trained and give you some very wise words, think twice. Do you want all your data to be going somewhere else so that it can train an AI system that will give you some help but might help a whole lot of other people with your data? That’s a problem, right? That’s your crown jewels being used to train and share information with someone else, sometimes your competitors. Step one, like, make sure you know what to do before someone goes off and buys an AI solution; that’s if you are buying.

And if you are building AI: in the world of security, we talk about insider threat very often, and we talk about threats that are generated by insiders because they’re misusing data, because they’re misusing information for malicious reasons. Here’s the thing: you don’t even have to be malicious to use AI incorrectly and have some of those same impacts. So the same focus you have on insider threat, have that same focus on the usage of data, the creation of AI with that data, and the outcomes it causes. Don’t let it happen to you; make sure you’re watching these environments.

So, again, when you’re building, make sure you know what is being built for what reason and what data is getting used. And if you are buying, be doubly sure that you’re not giving away your crown jewels and data to somebody else to do something with the hope and aspiration that they will give you some valuable insights back with their AI. I think being paranoid and, you know, nearly employing the zero-trust philosophy that we use around security in everything to do with AI is a good idea.

David: I like that idea of being able to move up to a zero-trust mindset, beyond the control levels in security to the business models and the different types of endeavors that we’re going after. So, Aarti, one last question for you, and it’s a really simple one, just like my opening question was a real lightweight question. Looking into the future, what does AI look like in cybersecurity?

Aarti: So I am going to use my optimistic self here. I think AI in cybersecurity is gonna be the norm; it’s highly required for us to go from the very reactive security model that exists in the market today to a far more proactive model, which is expected if the next generation of security is to be what we want it to be, and AI will be predominant in that storyline. The more careful we are as a market, as an industry, as we build and as we consume that AI, the faster the positive side effects of AI will be visible to us, the good side, the good side that’s trying to protect everybody from the issues and the offenses that will come by.

David: So, Aarti, thank you so much for being on the podcast with us today. As always in talking to you, I’ve learned a bit and look forward to our next conversation.

Aarti: Thanks, David. Thanks again for having me here, letting me talk about something I care about a whole lot, and look forward to our next chat as well.

Pam: So that was a great conversation with Aarti. And I have to say, David, it really reminded me of a previous podcast and some of the findings from the X-Force Threat Intelligence Index and the idea of the inadvertent insider and just so much trouble from just whoopsies out there.

David: I think you’re keying in on a theme that we heard. You know, Aarti said that you don’t have to be malicious to use AI incorrectly. And it goes back to this idea of being conscious about what you’re doing, understanding that your behavior, whether you meant it to be for something good, can actually have an outcome that isn’t so great, and it can harm people, and certainly in security it can expose you to some risk. That awareness is really critical in what we’re working on with AI in our industry and, frankly, broadly.

Pam: So user education is gonna be the thing that saves us again, huh?

David: Maybe.

Pam: Good old people. The best and the worst thing about cybersecurity. So, in the vein of good nudge, do you have any good news for us, David?

David: My real thought here is the Open Cybersecurity Alliance (OCA). The more I look at this idea of open security, of vendors, smart people coming together, you know, all ships rise together, it strikes me that breaking the norms of closed systems, of closed cultures, in security is the opportunity to flip the asymmetrical battle that we face as defenders, as enterprises or businesses keen to protect the things that we care about, you know, our data, our people, our customers, our employees, that sort of thing. And the Open Cybersecurity Alliance is a bright spot. Every time I read about it and see the different groups that are a part of what the OCA is doing, you know, I’m encouraged, because it’s showing that there is a way to work together and there’s an opportunity to really unlock innovation, you know, when we work in this way rather than going in and closing people out.

Pam: Well, that’s it for this episode. Thanks to Aarti for joining us as a guest.

David: We’re on Apple Podcasts, Google Podcasts, SoundCloud and Spotify. Subscribe wherever you get your podcasts. And thanks for listening.

Douglas Bonderud
Freelance Writer

A freelance writer for three years, Doug Bonderud is a Western Canadian with expertise in the fields of technology and innovation.
