The security issues that organizations encounter are inevitably changing along with the digital landscape. According to the most recent Dark Reading readership survey, there is some agreement about what security teams should prioritize in 2026.

Surprisingly, agentic AI risk is at the top of the list. The poll draws inspiration from last month's final Reporter's Notebook videocast discussion of the year, in which cybersecurity experts Rob Wright of Dark Reading, David Jones of Cybersecurity Dive, and Alissa Irei of TechTarget's Search Security analyzed a compiled list of the insights people sent Dark Reading reporters in December about their predictions for security operations in 2026.

The results of our reader poll on the top four predictions (the ones most frequently mentioned by the experts who contacted the Dark Reading news team) were insightful: it's evident that agentic AI is widely regarded as the next big target for cybercrime, and people don't have much faith that the poor password situation too many organizations are facing will change anytime soon.

## Agentic AI & Autonomous Systems Become the Main Targets of Cyberattacks

Nearly half (48%) of respondents think that nation-state threats and cybercriminals will use agentic AI as their primary attack vector by the end of 2026.

However, this also exponentially increases attack surfaces, including access points with non-human identities. Despite all of that, Geoffrey Mattson, CEO of SecureAuth, thinks the poll's findings highlight a crucial flaw in AI security theory. "While everyone is concerned about AI systems being attacked, the real vulnerability is what those AI agents can access once they're compromised," he emphasizes. "Prompt injection defenses and traditional guardrails are not working well.

Because of this, the real battleground for safeguarding autonomous systems is becoming authentication and access control rather than AI safety features." He adds, "You can't LLM your way out of an LLM problem," alluding to attempts to solve AI-driven security problems with more AI.

Because AI agents operate without direct human supervision, minor mistakes or malicious injections have the potential to escalate into significant security incidents. "With it comes the opportunity for the board and executives to implement safety and security measures designed specifically to address agentic AI threats and vulnerabilities, which requires budget and foresight," she continues, adding that the threat is real and imminent. Even non-AI-related system outages and data loss from cyberattacks are still major operational concerns, according to Omdia's Marks, and they probably ought to remain board-level priorities.

"To ensure success and support business growth, security leaders must collaborate closely with other teams to align on business goals and plans for technology adoption," she explains.

## Passkey Adoption & Password Elimination

Lastly, in the also-ran category, just 10% of Dark Reading respondents believe that passkey adoption and password elimination will become commonplace this year. According to Adam Etherington, practice leader for cybersecurity at Omdia, "stronger forms of authentication are taking a backseat when it comes to investment and focus." This is not to say that security teams are ignoring password protections.