
Robocops can’t tackle online crime without human assistance


Cybercrime is on the rise, and organizations across a wide variety of industries, from financial institutions and insurers to healthcare providers and large e-retailers, are rightfully worried. In the first half of 2017 alone, over 2 billion records were compromised. After stealing PII (personally identifiable information) in these breaches, fraudsters can gain access to customer accounts, create synthetic identities, and even craft phony business profiles to commit various forms of fraud. Naturally, companies are frantically looking to beef up their security teams. But there's a problem.

A large skills gap is making hiring difficult in the cybersecurity industry. So much so that the Information Systems Audit and Control Association (ISACA) found that fewer than one in four candidates who apply for cybersecurity jobs are qualified. ISACA predicts that this lack of qualified applicants will lead to a global shortage of two million cybersecurity professionals by 2019.

In response, many companies are turning to artificial intelligence to pick up the slack. This raises a very important and expensive question: Are robocops ready for the job?

Training & supervision are paramount

AI seems purpose-built to take authentication work off human hands. Monitoring implicit data points, e.g., a user's environment (geolocation), device characteristics (metadata of the call), biometrics (heartbeat), or behavior (typing speed and style), to validate someone's identity is something AI can do faster and more effectively than the human eye.
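As a rough illustration, an authentication engine might fold these implicit signals into a single risk score. The sketch below is hypothetical: the signal names, weights, and thresholds are invented for the example and are not drawn from any particular vendor's system.

```python
# Hypothetical sketch: combining implicit session signals into one risk score.
# Signal names and weights are illustrative assumptions, not a real product's.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    geo_matches_history: bool   # caller's location fits their usual pattern
    device_seen_before: bool    # call/device metadata matches a known device
    typing_speed_zscore: float  # deviation from the user's usual typing rhythm
    voice_match_score: float    # 0.0-1.0 similarity to an enrolled voiceprint

def risk_score(s: SessionSignals) -> float:
    """Return a 0-1 risk estimate; higher means more likely fraudulent."""
    score = 0.0
    if not s.geo_matches_history:
        score += 0.3
    if not s.device_seen_before:
        score += 0.2
    score += min(abs(s.typing_speed_zscore) * 0.1, 0.2)
    score += (1.0 - s.voice_match_score) * 0.3
    return min(score, 1.0)

# Known location, unfamiliar device, slightly unusual typing, strong voice match
print(risk_score(SessionSignals(True, False, 1.5, 0.9)))  # roughly 0.38
```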

Companies are already seeing great results from AI, as illustrated by FICO's newest Falcon consortium models, which have improved card-not-present (CNP) fraud detection by 30 percent without increasing the false positive rate.

While AI's ability to authenticate may outstrip a human's, cybercrime is too intricate a problem to solve without strategic direction from a human to overcome the cold-start problem. Given the complexity of a cybersecurity environment and the lack of a proper foundation for where to start, unsupervised cyber sleuthing from robocops gets us nowhere. Identifying patterns in big data is an impressive feat for AI, but those analyses by themselves are ill-equipped to fight the war on fraud and fix an inefficient customer experience (CX).

On the other hand, supervised machine learning techniques depend on human-supplied, labeled examples to train their algorithms. As an analogy, instead of trying to reinvent the wheel, a supervised algorithm is just figuring out the best tire circumference for given car models and weather conditions. Supervised learning can find patterns in big data, but more than that, it can provide actionable intelligence.
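To make the analogy concrete, here is a minimal, hypothetical example of supervised fraud detection using scikit-learn: humans supply the labels (which transactions turned out to be fraud), and the algorithm learns the decision boundary. The features and data are made up purely for illustration.

```python
# Minimal sketch of supervised fraud detection: humans supply the labels,
# the model learns the boundary. Features and data are invented examples.

from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, new_device (0/1), foreign_ip (0/1)]
X = [
    [50, 0, 0], [20, 0, 0], [5000, 1, 1], [3000, 1, 0],
    [75, 0, 1], [9000, 1, 1], [40, 0, 0], [4500, 0, 1],
]
# Human-supplied labels: 1 = confirmed fraud, 0 = legitimate
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Estimated probability that a new $6,000 transfer from a new device
# on a foreign IP is fraudulent (the exact number is illustrative only)
print(model.predict_proba([[6000, 1, 1]])[0][1])
```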

AI and machine learning can analyze massive quantities of data and identify patterns within that data that humans could never distill. Human direction is still needed to lay the foundation for what the machine is learning and to set AI off on the right foot in its pursuit of fraud and great customer service.

Readying AI for first contact

When artificial intelligence encounters a new data point that doesn't fit its induction-based models, a human decision may be necessary to resolve that specific situation and to train the algorithm on how to react in the future.
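In practice, that feedback loop can be as simple as folding the analyst's ruling back into the training data. The sketch below is a hedged illustration: it assumes any classifier with a scikit-learn-style fit() method, and the function name is hypothetical.

```python
# Illustrative human-in-the-loop feedback: an unfamiliar case is decided by
# a person, and that decision becomes a new labeled example for retraining.
# `model` is assumed to expose a scikit-learn-style fit() method.

def learn_from_human_decision(model, X, y, new_case, human_label):
    """Fold an analyst's ruling on an unfamiliar case back into the model."""
    X.append(new_case)
    y.append(human_label)    # 1 = fraud, 0 = legitimate, decided by a person
    model.fit(X, y)          # retrain so similar cases are handled next time
    return model
```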

To understand why, consider the military metaphor of commander's intent. In war, the saying goes that "no plan survives first contact." You're probably going to make contact, so does that mean you give up before the battle begins? No, you follow the commander's intent: the 'why' behind the details of your plan and its execution. So even if your plan falls apart, you can still accomplish the mission if you know why you're fighting.

Similarly, in authentication, you have an enemy (fraudsters) actively trying to best your protections. He hits you high, you put your hands up, and he finds a new gap in the gut of your omni-channel defenses. This is in contrast to many common applications of machine learning. For example, meteorologists' machine learning algorithms have substantially improved prediction accuracy over the past several years. Hurricanes, however, aren't actively trying to fool meteorologists' models; they're acting naturally, albeit perhaps more intensely thanks to climate change.

Authentication AI needs to be able to adapt to fraudsters’ new methods. And without an understanding of the cybersecurity commander’s intent, AI will not adapt in many cases. Hence, a human element is needed to constantly guide and refine these powerful algorithms.

But what about GANs, you say? Generative adversarial networks are a relatively new concept in machine learning in which you have two machine learning algorithms: Algorithm A is doing a job, and Algorithm B is actively trying to poke holes in how A does that job.

For example, take a GAN-style image-processing setup. A is trying to identify whether a given image contains a bird. As A sees more and more pictures, it improves its ability to differentiate bird-filled pictures from bird-less ones. Meanwhile, B is working to create pictures that trick A into misclassifying whether a bird is present. Applied to authentication, A represents the AI authentication, and B represents white-hat hackers trying to poke holes in your system. Effectively implemented, GANs have been shown to outperform traditional techniques and would allow authentication AI to actively prevent future criminal cyber activity.
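For readers who want to see the A-versus-B dynamic in code, here is a toy GAN written with PyTorch. It learns a simple one-dimensional distribution rather than anything security-related, but it shows the adversarial loop: the discriminator (A) tries to tell real samples from generated ones while the generator (B) tries to fool it. Everything here, from the network sizes to the target distribution, is an assumption made for the demo.

```python
# Toy GAN sketch (PyTorch): G learns to produce samples that D cannot
# distinguish from a simple "real" distribution. Illustrates the A-vs-B
# dynamic described above; this is not an authentication system.

import torch
import torch.nn as nn

real_dist = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # "real" data, mean ~4
noise = lambda n: torch.randn(n, 1)                   # generator input

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train D: real samples should score 1, generated samples 0
    real, fake = real_dist(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: try to make D label generated samples as real
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # should drift toward roughly 4.0
```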

However, even with GANs, the algorithm would not understand the cybersecurity commander’s intent. And that’s where the overseeing human element comes into play – once again.

Preventing false positives

On the back end of authentication, certain fringe cases will never fit even the best algorithms, because those algorithms are based solely on inductive decision-making and past experience. For exceptions to those inductive rules, we need a human eye as an examiner. Otherwise, seemingly innocent customer interactions can go very badly. Imagine your company addressing a customer differently because of their gender or skin color.

Machine learning algorithms are analyzing massive amounts of data, and doing it well. But in the end, the conclusions drawn are probabilistic, and there will always be exceptions to the rules. Just as we cannot identify fraudsters 100 percent of the time, even after drawing up endless contingency plans, some customers who check all the boxes for fraud may actually be real customers in extenuating circumstances.

Take this example: A customer, Jose, who frequently calls from Houston is using a VoIP connection from Mexico. He’s nervous and fidgety on the phone, and your biometric behavioral sensors pick up on it. Additionally, he’s trying to activate a $5,000 wire transfer from his account. Most machine learning algorithms – even if supervised – would flag this as fraud. However, Jose explains that he went to Mexico to live with his family after his house was flooded from Hurricane Harvey. He needs the money for hospital bills for his grandmother in Mexico who never told the family how bad her health had become.

What do you do? It’s a tricky situation because if you reject the request, you may be contributing to a PR nightmare, and worse, indirectly harming your customer’s grandmother. But fraudsters often take advantage of disasters. For these situations, the algorithm cannot give a hard answer.
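One way to encode that reality is to let the model decide only the clear-cut cases and route everything in between, like Jose's transfer, to a person. The sketch below is illustrative; the thresholds and function name are arbitrary assumptions, not a recommendation for any specific system.

```python
# Illustrative three-way policy: the model's score drives clear-cut decisions,
# and ambiguous scores escalate to a human agent. Thresholds are assumptions.

def route_transfer(p_fraud: float, amount_usd: float) -> str:
    if p_fraud >= 0.9:
        return "decline"               # overwhelming evidence of fraud
    if p_fraud <= 0.2 and amount_usd < 10_000:
        return "approve"               # clearly routine activity
    return "escalate_to_human"         # the algorithm cannot give a hard answer

print(route_transfer(0.55, 5_000))     # -> "escalate_to_human"
```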

While it’s possible that in the near future cybersecurity forces will consist mostly of bots, today humans remain critical in the fight against fraud and the pursuit of great customer experience. Only we can recognize the ‘why’ behind cybersecurity, define key metrics to monitor faults in our algorithms, and make game-time decisions on fringe-case false positives that don’t fit our AI models.

Ian Roncoroni is CEO of Next Caller, a Y Combinator-backed provider of authentication and fraud detection technology.
