Is the “perfect AI” a replica of human intelligence? Not according to F-Secure. Such thinking limits the scope of what AI can actually achieve. Project Blackfin aims to challenge our views of what AI should be and take it to a new level – and it has already notched its first successes.
Why AI will be Inhuman
Cyber security provider F-Secure has launched a new research project to further develop the decentralized artificial intelligence (AI) mechanisms currently used in its detection and response technologies. The initiative, dubbed Project Blackfin, aims to leverage collective intelligence techniques, such as swarm intelligence, to create adaptive, autonomous AI agents that collaborate with each other to achieve common goals.
According to F-Secure Vice President of Artificial Intelligence Matti Aksela, there’s a common misconception that “advanced” AI should mimic human intelligence – an assumption Project Blackfin aims to challenge.
Aksela, head of F-Secure’s Artificial Intelligence Center of Excellence, says:
“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do. Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do. We created Project Blackfin to help us reach that next level of understanding about what AI can achieve.”
Project Blackfin is a research initiative conceptualized by Aksela’s cross-disciplinary team of artificial intelligence and cyber security researchers, mathematicians, data scientists, machine learning experts, and engineers.
Taking inspiration from patterns of collective behavior found in nature, its overarching theme is to use collective intelligence techniques – such as the swarm intelligence seen in ant colonies or schools of fish – to power fleets of distributed, autonomous, adaptive machine learning agents. The project aims to develop these intelligent agents to run on individual hosts. Instead of receiving instructions from a single, centralized AI model, the agents would be intelligent and powerful enough to communicate and work together toward common goals.
Using such an approach, the agents learn to protect systems based on what they observe from their local hosts and networks and are augmented further by observations and emergent behaviors learned across different organizations and industries. Local agents then get the benefit of the visibility and insights of a vast information network without requiring them to share full data sets.
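To make this idea concrete, here is a minimal, hypothetical sketch of the pattern described above: each agent learns a baseline from its own host's data and shares only summary statistics (counts, means, variances) with its peers, so the fleet gains collective visibility without any host exposing its raw observations. The class and field names are illustrative assumptions, not F-Secure's actual implementation.

```python
import random
import statistics

class LocalAgent:
    """Hypothetical on-host agent: learns from local observations and
    shares only aggregate statistics with peers, never raw data."""

    def __init__(self, name):
        self.name = name
        self.observations = []  # raw data stays on this host

    def observe(self, value):
        self.observations.append(value)

    def summary(self):
        # Only these aggregates leave the host.
        return (len(self.observations),
                statistics.fmean(self.observations),
                statistics.pvariance(self.observations))

    def is_anomalous(self, value, fleet_summaries, z=3.0):
        # Pool peer summaries into a fleet-wide mean and variance,
        # then apply a simple z-score test against the pooled baseline.
        n = sum(c for c, _, _ in fleet_summaries)
        mean = sum(c * m for c, m, _ in fleet_summaries) / n
        var = sum(c * (v + (m - mean) ** 2)
                  for c, m, v in fleet_summaries) / n
        return abs(value - mean) > z * max(var ** 0.5, 1e-9)

# Usage: three agents learn typical activity locally, then one agent
# flags an outlier using the fleet's pooled knowledge.
random.seed(1)
fleet = [LocalAgent(f"host-{i}") for i in range(3)]
for agent in fleet:
    for _ in range(200):
        agent.observe(random.gauss(100, 5))  # baseline traffic volume

summaries = [a.summary() for a in fleet]
print(fleet[0].is_anomalous(103, summaries))  # within baseline -> False
print(fleet[0].is_anomalous(250, summaries))  # clear outlier  -> True
```

The design choice mirrors the article's privacy point: because agents exchange only sufficient statistics rather than full data sets, each one benefits from fleet-wide insight while confidential host data never leaves its origin.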
“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone.”
Not only does this help increase the performance of an organization’s IT estate by saving resources, but it also helps organizations avoid sharing confidential, potentially sensitive information via the cloud or product telemetry.
While the project is expected to require several years before realizing the full extent of its potential, it has experienced some early success. On-device intelligence (ODI) mechanisms developed by Project Blackfin are already being incorporated into F-Secure’s breach detection solutions.
But the potential applications of Project Blackfin’s research go beyond corporate security solutions, and even beyond the cyber security industry. F-Secure Chief Research Officer Mikko Hypponen sees the project’s line of research as a way to challenge people to rethink the role AI can play in our lives.
“Looking beyond detecting breaches and attacks, we can envision these fleets of AI agents monitoring the overall health, efficiency, and usefulness of computer networks, or even systems like power grids or self-driving cars. But most of all, I think this research can help us see AI as something more than just a threat to our jobs and livelihoods.”