Explainable Threat Intelligence: Moving Beyond "Black Box" Threat Convictions

  • Tuesday, 31 Mar 2020 10:30AM EDT (31 Mar 2020 14:30 UTC)

Bridging the Gap between Human Analysts and Machine Learning Classifications

The cyberthreat landscape has outpaced our ability to detect and respond manually; hence, more and more security solutions are leveraging today's compute capacity to automate analysis through techniques like machine learning. Sounds good, right? However, most machine learning-powered classifications are NOT designed for the humans who need to act on this information. They are often 'black box' technologies whose outputs lack sufficient context to be actionable. There's a presumption that analysts will trust these conclusions and somehow push forward. But this puts these individuals in even more stressful situations, where they are obliged either to react blindly and face the consequences, or to do their own research, which is time-consuming and often highly specialized. This only exacerbates the security skills gap and undermines efforts to retain these professionals.

What if today's security analysts had access to the most timely and relevant threat intelligence, delivered in a consumable, easy-to-understand form that is interpretable, verifiable, and explainable?

Join our webinar as we examine the next generation of 'explainable' threat intelligence solutions and how ReversingLabs has taken a fresh look at machine learning classification.

In this session, we'll discuss:

  • How contemporary malware is challenging security teams, and why destructive object insights are so relevant;
  • How new explainable machine learning models are improving analyst malware knowledge and SOC productivity over time;
  • How the concept of 'transparency' and being able to defend a classification decision is empowering the SOC team and facilitating cross-functional collaboration;
  • How this new threat intelligence integrates with existing environments (e.g., SIEM, SOAR) and maps to common attack frameworks such as MITRE ATT&CK (see the sketch after this list).
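
To make the SIEM/SOAR integration point concrete, here is a minimal sketch of what an 'explainable' verdict might look like compared to a bare malicious/benign flag: each indicator carries a human-readable description and a MITRE ATT&CK technique ID, and the whole verdict is flattened into a single JSON event a SIEM could ingest. The field names, the verdict schema, and the to_siem_event helper are illustrative assumptions for this example, not ReversingLabs' actual output format or API.

```python
import json

# Hypothetical "explainable" classification verdict: instead of a bare
# malicious/benign flag, each indicator is human-readable and mapped to a
# MITRE ATT&CK technique ID. This schema is an illustrative assumption.
verdict = {
    "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "classification": "malicious",
    "threat_name": "Win32.Ransomware.Example",
    "indicators": [
        {
            "description": "Deletes volume shadow copies to inhibit recovery",
            "attack_technique": "T1490",  # Inhibit System Recovery
        },
        {
            "description": "Creates a Run-key registry entry for persistence",
            "attack_technique": "T1547.001",  # Boot or Logon Autostart Execution
        },
    ],
}


def to_siem_event(v: dict) -> str:
    """Flatten a verdict into one JSON event a SIEM could ingest.

    A real integration would use the SIEM's own ingestion API or a SOAR
    connector; emitting one JSON line per sample is just a stand-in here.
    """
    return json.dumps({
        "source": "threat-intel",
        "sha256": v["sha256"],
        "classification": v["classification"],
        "threat_name": v["threat_name"],
        # Aggregate ATT&CK technique IDs so analysts can pivot on them.
        "mitre_attack": sorted({i["attack_technique"] for i in v["indicators"]}),
        # Keep the human-readable evidence so the decision can be defended.
        "evidence": [i["description"] for i in v["indicators"]],
    })


if __name__ == "__main__":
    print(to_siem_event(verdict))
```

The design point the sketch illustrates: because the evidence and ATT&CK mapping travel with the classification, an analyst reviewing the event in a SIEM can verify and defend the verdict rather than trusting an opaque score.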