The ever-evolving landscape of Insider Risk programs can be a confusing place to navigate, especially now that Artificial Intelligence (AI) is on the scene. In this article, we delve into the intricate dynamics of Insider Risk management, shedding light on emerging trends and the potential pitfalls that lie ahead. In the process, we separate fact from fiction, uncovering three undeniable truths and one dangerously deceptive misconception about Insider Risk programs. These valuable insights can empower your organization’s approach to Insider Risk in the age of AI.

Truth: Governance Matters

The perception may be that governance is merely a line on a compliance checklist, but the truth is that effective governance forms the bedrock of any robust insider risk program. It ensures alignment with organizational objectives and regulatory requirements. By establishing clear policies, procedures, and accountability structures, organizations can mitigate potential risks. 

A critical element of building the governance foundation is a case management solution. It can help drive leadership and business-unit buy-in by surfacing insights, guiding activities, collecting valuable information, and justifying the investment with measurable results. Program managers should be able to demonstrate actual business impact – proving that governance is not just a set of guidelines but an ongoing activity that creates real value.

Data-driven assessments and decisions are key to maturing the program and sustaining business-unit and senior leadership support. Organizations must demonstrate the value of current efforts and justify future investment. The program should answer questions such as “How did we do this year? Are we demonstrating value to our business stakeholders?” and “We want additional investment – do we have the data to show which organizational or technical controls are leading to adverse impacts?”

Truth: AI Without Value is Money Down the Drain

While AI has the potential to augment your insider risk program, the mere adoption of AI tools does not guarantee success. The true measure of AI's efficacy lies in its ability to scrutinize multiple data sets to deliver actionable insights that empower analysts to make informed decisions. But offerings that deploy machine learning on multiple data sets as a comprehensive AI for Insider Risk solution simply don’t exist…yet. 

We are seeing progress in AI solutions focused on single elements of behavioral insight, such as sentiment analysis in communications or behavioral signals drawn from public information data sources. Still, there have been no recent breakthroughs in insider risk behavioral science driving AI advances that make a dramatic difference in holistic insider risk detection. This is why we continue to partner closely with the organizations driving the science behind detection and mitigation.

To realize ROI on your AI investments today, prioritize gathering clean and relevant component-level datasets that add additional context for analysts. By collating disparate data sets, applying proven data science methodology, and leveraging accurate risk-scoring models, analysts can establish a baseline of user behavior and more easily spot out-of-norm activities. Organizations can realize the power of AI for Insider Risk now through more precise scoring to highlight earlier signals of risk. 
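To make the baselining idea concrete, here is a minimal, hypothetical sketch of flagging out-of-norm activity against a user's historical baseline using a simple z-score. This is an illustration only, not any vendor's actual model; real risk-scoring approaches combine many more signals and far richer statistics:

```python
from statistics import mean, stdev

def baseline_and_flag(history, today, threshold=3.0):
    """Flag today's activity count if it deviates more than
    `threshold` standard deviations from the user's baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Flat baseline: any change at all is out-of-norm.
        return today != mu
    z = (today - mu) / sigma
    return abs(z) > threshold

# Example: a user who normally downloads ~10 files per day.
history = [9, 11, 10, 12, 8, 10, 11, 9, 10, 10]
print(baseline_and_flag(history, 10))   # typical day, not flagged
print(baseline_and_flag(history, 250))  # sudden spike, flagged
```

The point of the sketch is the workflow, not the math: establish a per-user baseline from clean historical data, then surface only the deviations, so analysts focus on earlier, higher-precision signals of risk.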

Truth: MITRE ATT&CK Framework Mapping isn’t Enough

It’s time to debunk a common myth: that mapping to MITRE ATT&CK is enough. While the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework provides a solid input for pressure-testing potential threat models, it is only one ingredient in building an effective Insider Risk program. The MITRE database and its model of cyber adversary behavior were built primarily to help defend against external attackers. 

While the model is effective and useful for cybersecurity organizations to collaborate, share information, and improve defenses, it was not designed to holistically model insider threats and the risk they pose to your critical assets. It is possible to “stretch and pull” elements of the model to make it somewhat fit, but I could do that almost as well by mapping the problem to my mom’s spaghetti sauce recipe. 

Lie: Insider Risk is a Cyber Problem with a Human Component

Framing insider risk as a cyber problem with a human component distorts the nature of insider risk. While cyber behavior plays a role, discounting the human dimension limits our ability to devise comprehensive mitigation strategies, leaving organizations vulnerable to evolving threats.

Mitigation strategies for Insider Risk programs should start with a strong business-driven foundation, such as use cases that have been analyzed for organizational and technical controls. Then, assess the role of behavioral data collection and analysis. Communication and activity across email, chat, or discussion board channels can reveal intentions before any cyber action is taken. After the fact, cyber controls and documentation can tell us what happened but not why. Insight into intention is crucial to truly understanding risk. We need to look at this as a human problem with a cyber component. 

Conclusion

A final truth: Insider risk sits at the crossroads of behavioral science, organizational leadership, and cybersecurity, and the operational model must involve organizational and technical controls that holistically address human behavior and adverse outcomes. These models are still under development, and we work in the very spaces where data is being gathered and research is being conducted. Still, we can't wait for those models, so today's programs rely on a blend of art and science. However, as with everything cyber-related, we expect this to shift quickly over the months and years ahead. Contact Everfox to discuss how we can help you roll out an Insider Risk program ready to tackle whatever comes.