Best Practices Archives - Best Information Security SIEM Tools, Software, Solutions & Vendors
https://solutionsreview.com/security-information-event-management/category/best-practices/
Tue, 05 Aug 2025 15:33:35 +0000

Why Trust Is the Future of CX: A Human + Tech Security Strategy for Digital Leaders
https://solutionsreview.com/crm/2025/07/29/why-trust-is-the-future-of-cx-a-human-tech-security-strategy-for-digital-leaders/
Tue, 29 Jul 2025 15:25:54 +0000

Why Trust Is the Future of CX

Ljubiša Velikić, the VP of Trust & Safety at TELUS Digital, explains why digital leaders must develop a security strategy that combines human expertise with technology to stay competitive in the future of CX. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

As generative AI (GenAI) continues to reshape digital experiences in all areas of our lives, trust is increasingly becoming the key factor driving sustainable business growth and success. Today’s brands are under mounting pressure to deliver customized experiences, even as consumers become more conscious of the inherent privacy risks associated with sharing their personal data. For enterprise decision-makers, balancing these competing demands requires an approach that fuses human oversight with technological safeguards.

Forward-thinking digital leaders are no longer treating trust as a compliance issue but elevating it to a core driver of customer loyalty, market differentiation, and sustained revenue growth through stronger retention and brand reputation. As digital threats grow more complex and regulations evolve, business leaders are investing in trust and safety not just to manage risk, but to strengthen customer relationships and sharpen their competitive advantage.

Recent findings from TELUS Digital’s Safety in Numbers report, conducted in partnership with Ryan Strategic Advisory, underscore this shift.

From Risk Management to Value Creation

Trust, safety, and security can no longer be confined to the IT department. These capabilities must now span every team that shapes the customer experience, including product, engineering, marketing, sales, and support, as they play a critical role in shaping brand perception and driving business value.

Whether it’s onboarding, identity (ID) verification, fraud detection, or content moderation, every touchpoint, visible or behind the scenes, contributes to how customers perceive and trust a brand. The report findings confirm this, with leaders citing ID verification (73 percent), fraud detection (69 percent), and Know Your Customer (KYC) processes (64 percent) as top investment priorities for their companies in 2025.

Top Barriers to Trust and Safety in Digital CX and How Leaders are Solving Them

While the strategic importance of establishing and maintaining customer trust is widely recognized, putting it into practice at scale has its challenges. Insights from the report show that organizations are grappling with several obstacles, the top three being the often prohibitively high cost of building, implementing, monitoring, and updating trust, safety, and security solutions; regulatory complexity; and a lack of internal expertise. However, resourceful leaders are overcoming these hurdles in the following ways:

1) Investing strategically: starting small, scaling wisely 

Among the surveyed CX leaders, cost emerged as the most frequently cited barrier to implementing trust and safety initiatives, with 27 percent identifying it as their greatest obstacle. To address this, organizations are moving away from building end-to-end programs all at once, in favor of modular, phased approaches. Businesses can demonstrate quick wins, validate ROI, and build momentum by targeting high-impact areas first, such as ID verification or fraud prevention.

Over time, these focused investments can be expanded and integrated, enabling a comprehensive trust strategy that evolves alongside the organization’s needs. This staged method manages spending more effectively and ensures agility in responding to emerging threats and customer demands.

2) Embracing hybrid models that combine human expertise and AI 

GenAI and automation have become powerful enablers of scaling trust and safety operations, but human oversight remains essential. According to our report, the majority of enterprises (65 percent) still rely on humans, either entirely or as part of a hybrid approach, when managing risks across ID verification (79 percent), KYC (61 percent), fraud detection (66 percent), and content moderation (48 percent).

Even in areas where automation is highly advanced, such as fraud detection, more than half of respondents (56 percent) said they still maintain human-in-the-loop safeguards. These findings suggest that trust shouldn’t be fully delegated to AI; it requires a thoughtful balance with human judgment to mitigate bias, ensure accuracy (especially for edge cases), and keep pace with evolving regulations.

3) Partnering for speed, scale, and specialized insights

Building robust trust and safety capabilities from the ground up requires significant time, investment, and expertise. Our Safety in Numbers report reveals a clear gap between these demands and the resources organizations have, with leaders pointing to internal skill shortages, regulatory complexity, and integration hurdles as major barriers to progress.

Strategic partnerships with specialized providers can help close these gaps. Third-party experts bring turnkey tech stacks, access to expansive partner ecosystems, and domain expertise in global privacy regulations, cross-border data policies, and the rapidly evolving landscape of AI governance and security standards. Many also provide built-in regulatory support as part of their tools and services, helping enterprises stay compliant with changing laws without adding pressure on internal teams.

Trust is no longer a byproduct of delivering reliable service. It’s a measurable, strategic differentiator and competitive advantage that must be proactively built into every customer touchpoint. In today’s high-risk, high-expectation environment, leading organizations are wisely investing in smarter, more scalable trust and safety models.

By combining AI with human oversight, forging partnerships with trusted providers, and focusing investments on business-critical areas, leaders will deepen customer loyalty, protect their brand reputation, and build foundational resilience to future-proof their operations. This will enable them to compete successfully in a trust-driven economy.


Evolving Zero Trust for the Age of AI
https://solutionsreview.com/security-information-event-management/evolving-zero-trust-for-the-age-of-ai/
Fri, 25 Jul 2025 14:46:15 +0000


Evolving Zero Trust for the Age of AI

Stephen Douglas, the Head of Market Strategy for Spirent Communications, explains how companies can evolve their zero trust initiatives for an evolving world of AI. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

Earlier this year, OpenAI publicly accused DeepSeek, a Chinese startup and competitor, of a new type of intellectual property theft: unauthorized “distillation” of its AI models. OpenAI claims that DeepSeek improperly extracted large amounts of GPT-4o output data, using carefully structured queries to train DeepSeek’s own smaller, less mature AI model. DeepSeek denies the accusation.

Even if DeepSeek did engage in distillation, does that actually constitute theft? The question is more complicated than it sounds. However, enterprises can take one clear lesson from this episode: if AI is becoming an important part of your business strategy, you should take steps to protect it. The same confidential models and datasets you’re counting on to give you a competitive edge represent an attractive new attack surface for cyber-criminals targeting your business.

Inside the AI Threat

As AI takes on a larger role in your business operations, distillation is just one of several potential threats that should be on your radar. Others include:

Model extraction and inversion

This is similar to distillation, but rather than using a proprietary model’s output data to train another AI, here, attackers attempt to copy the model itself. For example, imagine a tech startup that develops a proprietary, AI-based dynamic pricing application that it licenses to hotel chains so that they can adjust pricing based on real-time inventory and customer information. A competing firm could repeatedly query the AI model via fake user accounts and reconstruct its internal weights and parameters to duplicate it.

Data Poisoning

Rather than stealing someone else’s proprietary data, attackers could also target an organization’s AI to try to make it less effective, either to harm the target company or enable other types of crimes. For instance, a malicious actor seeking to damage a self-driving car-maker could attempt to manipulate the manufacturer’s AI training data, uploading altered or misclassified images of traffic signs to confuse the model. Similarly, criminals seeking to manipulate a bank’s AI-driven fraud detection could repeatedly classify fraudulent transactions as legitimate, ultimately training the AI model to overlook certain types of fraud.

Lateral movement attacks

These cyber-attacks aim to breach an organization’s internal networks and data using AI as a launchpad. For example, a consumer technology company might use an AI chatbot to provide customer-facing product and technical support. To provide more intelligent and personalized assistance, the AI needs access to the company’s internal engineering systems and customer databases. But this also means that if attackers gain access to the chatbot server (such as through an API that the company exposes to retail partners), they can pivot to other systems and business applications to exfiltrate data.

Safeguarding AI with Zero Trust

These are just a few examples of the new generation of cyberthreats targeting the AI attack surface, which every business should be thinking about. Fortunately, even as AI attack strategies evolve, we can continue to employ the same overarching cybersecurity strategy that companies have relied on for years: zero trust.

The basic principles of zero trust security were first introduced in 2009:

  • Never trust, always verify before allowing access.
  • Apply “least privilege access” to prevent users and devices from even seeing resources they’re not explicitly authorized for.
  • Restrict lateral movement to reduce the damage a successful breach can cause.

Within a decade, these principles became industry best practice. But while the zero trust model is far from new, its tenets are as applicable to AI models as they’ve been for conventional networks and systems. Indeed, in a modern IT landscape dominated by mobile workers, cloud services, and applications that autonomously communicate with each other over APIs, zero trust is more relevant than ever.

Here are some examples of ways to apply zero trust to the growing AI attack surface:

Stopping model extraction with continuous authentication

Attacks seeking to recreate AI models might be new, but the remedy is the same one used for decades to guard against proprietary data theft: stringent authentication and authorization. Use zero trust principles to build strict access control policies around your AI applications, including requiring multi-factor authentication and rate-limiting user queries. Depending on the application, you may also want to consider techniques like differential privacy, which add subtle randomness to model outputs to make it more difficult for attackers to reconstruct private or confidential training data.
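To make the output-perturbation idea behind differential privacy concrete, here is a minimal sketch (the function names and noise scale are illustrative assumptions, not a production recipe): Laplace noise is added to model confidence scores before they are returned to callers, so repeated queries reveal less about exact model behavior.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_scores(scores: list[float], scale: float = 0.05) -> list[float]:
    # Perturb each confidence score, then clamp back into [0, 1] so the
    # API contract is preserved while exact model outputs stay hidden
    return [min(1.0, max(0.0, s + laplace_noise(scale))) for s in scores]
```

The scale parameter is the privacy/utility dial: a larger scale makes reconstruction harder for attackers but also makes the scores less useful to legitimate consumers.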

Preventing data poisoning with least privilege access

Again, the key to protecting proprietary datasets is ensuring no unauthorized party can access them. Use strict authorization mechanisms like role-based access control (RBAC) to narrowly define who can modify training data, and use context like user device, location, role, and time of day to detect suspicious access attempts. Conduct ongoing data integrity verification checks to spot anomalies. And use micro-segmentation to isolate datasets from less secure parts of your environment.
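A minimal sketch of the pattern described above, combining an RBAC permission check with contextual signals; the role names, permission strings, and policy thresholds here are illustrative assumptions:

```python
from dataclasses import dataclass

# Each role maps to the only actions it is explicitly granted (least privilege)
ROLE_PERMISSIONS = {
    "ml-engineer": {"read_training_data", "write_training_data"},
    "soc-analyst": {"read_training_data"},
}

@dataclass
class AccessRequest:
    role: str
    action: str
    device_managed: bool  # request came from a managed corporate device
    hour_utc: int         # hour of day, 0-23

def authorize(req: AccessRequest) -> bool:
    # RBAC: deny anything the role does not explicitly grant
    if req.action not in ROLE_PERMISSIONS.get(req.role, set()):
        return False
    # Context: modifications to training data only from managed
    # devices during business hours; everything else is denied
    if req.action == "write_training_data":
        return req.device_managed and 8 <= req.hour_utc < 18
    return True
```

In practice these context rules would come from a policy engine rather than hard-coded conditions, but the shape is the same: role grants set the ceiling, and context narrows it further.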

Blocking lateral movement attacks with micro-segmentation

Micro-segmentation functions the same way as traditional network segmentation to restrict lateral movement, but it takes the concept a layer deeper, isolating individual system processes and workloads. Make sure you’re using it for sensitive AI models and datasets. And consider applying AI-driven defenses like anomaly detection tools, which can detect and shut down suspicious activity in queries and data access patterns.
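One simple form of the anomaly detection mentioned above is a statistical baseline over per-client query rates. This sketch flags a rate that deviates sharply from recent history, which might indicate a scripted extraction attempt; the three-sigma threshold is an illustrative assumption:

```python
import statistics

def is_anomalous(history: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    # Flag a query rate more than `threshold` standard deviations
    # away from the recent baseline
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold
```

Real deployments layer richer signals (query similarity, time-of-day patterns, account age) on top of this kind of baseline, but the principle of alerting on deviation from learned normal behavior is the same.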

Identifying the Right Approach

As with traditional cybersecurity, don’t expect zero trust AI defenses to come without tradeoffs. Adding extra layers of authentication could slow down model training, potentially adding unexpected costs and delays. In certain real-time use cases, such as AI fraud detection, the latency added by new security controls could make applications less effective. Micro-segmentation, too, must be carefully designed and implemented, or you risk breaking legitimate data flows.

In all cases, you must find the right balance for your business. Depending on the level of security your applications and data demand, you may also want to consider applying AI-enabled defenses to your AI attack surface. For example, new AI-driven security orchestration tools can dynamically adjust access control levels to protect AI models and datasets without adding unnecessary latency or delays.

Whichever interventions you choose, ensure you have a concrete strategy to protect your AI attack surface from rapidly evolving threats. And if you’re wondering how to begin, “never trust, always verify” remains a great place to start.

Closing the Gap Between Perception and Proven Cybersecurity Capabilities
https://solutionsreview.com/security-information-event-management/closing-the-gap-between-perception-and-proven-cybersecurity-capabilities/
Mon, 21 Jul 2025 16:11:11 +0000

Closing the Gap Between Perception and Proven Cybersecurity Capabilities

Keatron Evans, the VP of Portfolio and Product Strategy at Infosec, explains how companies can close the skills gap between perception and proven cybersecurity capabilities. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

Cybersecurity is the backbone of digital trust, from safeguarding critical data and user identities to defending systems in an increasingly connected landscape. But there’s a problem: the industry is in dire need of skilled workers; in fact, it needs 4.8 million more workers globally. Highly qualified talent is primed to successfully address challenges like the rapid progress of artificial intelligence (AI) and ongoing security breaches. 

To close the long-standing cybersecurity skills gap, a growing number of companies are offering various certificates, credentials, and training programs that help current employees and job seekers enhance their industry knowledge. However, it’s becoming overwhelming for hiring managers to keep track of all the different learning programs available and to determine which ones build the real-world skills their organizations need.

For organizations to truly understand a candidate’s skillset and experience, they have to go beyond credentials and ensure candidates can actually perform the required skills. So, let’s dive further into why validating security skills has been a difficult process, the risks employers face if they don’t understand the level of expertise candidates bring to the table, and the different approaches employers can take to verify said qualifications. 

Validating Cybersecurity Skills is a Challenging Process 

Employers generally agree on the specific skills that are important for the security field, like penetration testing, human risk management, and incident detection. However, not every course or certificate on the market teaches the same depth, level of skilling, or type of content. Some may just focus on understanding key concepts, whereas others dive deeper into applying, analyzing, and evaluating those same concepts. 

We’re seeing hiring managers continue to rely on tried-and-true standardized certifications like CompTIA Security+ and ISC2 CISSP to gauge a candidate’s knowledge. Now, as companies also require accreditations for specific products and solutions, such as Fortinet’s NSE (Network Security Expert) certifications or Microsoft’s SC-200, hiring managers must be more wary about the type of assessments these learning pathways entail and how many of those skills candidates are actually proficient in.

Additionally, as technology rapidly evolves, so do the tactics of those who exploit it. Threat actors are ahead of the curve, adopting tactics like deepfakes and generative AI and finding new ways to abuse existing tools, as in a recent campaign that exploited Google Calendar, to evolve their social engineering, phishing, and scamming operations. This means that security candidates’ skills must also keep pace, and employers must evaluate the skills of new hires and current employees on an ongoing basis.

Making Skills Validation Frictionless  

According to Fortinet, more than half of technology leaders believe that some of the leading causes of security breaches stem from employees’ lack of necessary skills, training, and awareness, which is another example of why validating candidates’ skills is essential to keeping the cybersecurity workforce healthy.

Thankfully, there are several tactics employers can leverage in skills verification, including:   

Utilizing AI to dive deeper into credentials

Every organization requires its workforce to have unique skills and experience. However, it can be challenging to narrow down what that expertise should look like to meet their security and business needs. Thankfully, there are AI solutions on the market that help organizations determine the top skills they need their employees to have, and flag existing cybersecurity skills gaps so that employees can embark on the right training opportunities. AI can play a pivotal role in validating those skills. For example, AI can look at how employees approach problem-solving and task-completion, and use real-world data to judge how employees perform and utilize their skillset. 

Understanding the expertise needed

Hiring and cybersecurity managers need to assess what the real need is: which priorities are short-term versus long-term for their teams, and what specific expertise those priorities demand of candidates. That expertise needs to be clearly defined, as titles and responsibilities usually vary between organizations. For example, SOC analysts in smaller companies may be well versed in open-source tools or limited SIEM setups, while those working at the enterprise level would be adept at more complex platforms and environments (e.g., diverse cloud environments or more in-depth skills with platforms like Splunk or QRadar).

Offering hyper-personalized training

In addition to researching the courses and learning experiences cybersecurity candidates tout on their resumes, hiring managers can use AI tools to map out highly specific skilling pathways for new hires. This ensures employees can immediately receive training that meets their needs and individual goals, and it helps companies hire people who are also highly trainable for their unique needs. The process goes beyond the interview by using data about a candidate’s current skillset and the responsibilities the open role requires. Hiring managers can use AI to determine what the candidate will need to learn if they step into the job and what learning resources their employer has readily available.

Risks of Skipping Skills Validation 

As hiring managers scramble to fill open positions, skills verification often slips into an optional “nice to have.” This needs to change.  

Skills validation must be seen as an essential part of the hiring process because overlooking it can put the wrong people in charge of handling issues that are too complex or niche for their level of experience. In fact, forgoing skills validation can lead to several workplace and security challenges, including: 

  • Underequipped workforce: Placing candidates in roles for which they lack the skills or expertise to succeed can sap productivity and morale, waste time, and potentially tarnish an organization’s reputation for competence. 
  • Increased security issues: Undertrained employees tend to let red flags and suspicious activity fall by the wayside. The outcome? Far more breaches with greater reach. 
  • Setting a bad hiring precedent: If companies don’t develop a process for skills validation soon enough, they risk cementing poor hiring processes that will be harder to untangle later, wasting money, time, and other resources on preventable issues. 

Amid ongoing labor market volatility, increasingly expensive data breaches, and cybersecurity skills gaps in the industry, hiring managers must place a greater emphasis on skills verification and its ability to attract higher-quality talent.  

While skills verification may introduce an extra layer to the hiring process, it plays a critical role in reducing the risk of mis-hires. By validating candidates’ knowledge before they enter a role, organizations can build stronger teams and decrease the risk of potential security issues while contributing to a more resilient security industry. 


How to Assess the AI Readiness of Your Information Security Team
https://solutionsreview.com/security-information-event-management/how-to-assess-the-ai-readiness-of-your-information-security-team/
Wed, 09 Jul 2025 19:53:06 +0000

How to Assess the AI Readiness of Your Information Security Team

To help companies remain competitive amidst changing markets, the Solutions Review editors have outlined how companies can assess the AI readiness of their information security teams.

Integrating artificial intelligence into information security operations is the latest in a long line of fundamental shifts in how organizations defend against threats. However, the success of these new and developing AI-driven security initiatives depends heavily on the readiness of the teams responsible for implementing and managing these technologies. Assessing AI readiness requires a systematic evaluation across multiple dimensions beyond technical competency alone.

Understanding AI Readiness in Security Contexts

AI readiness in information security encompasses the organizational capacity to effectively deploy, operate, and optimize AI-powered security tools while maintaining human oversight and decision-making authority. That readiness manifests across cognitive, technical, operational, and cultural dimensions that determine whether an AI initiative will enhance or hinder security outcomes.

The cognitive dimension involves the team’s understanding of AI systems’ capabilities, limitations, and failure modes. Security professionals must develop intuition about when AI recommendations should be trusted, questioned, or overridden. This requires an in-depth familiarity with concepts like model drift, adversarial attacks against AI systems, the statistical nature of AI decision-making, and the potential pitfalls of failing to address AI ethics.

Technical readiness extends beyond basic AI literacy and expands into practical skills in data engineering, model validation, and system integration. Security teams must understand how to prepare data for AI consumption, evaluate model performance in security contexts, and integrate AI outputs into existing security workflows without introducing new vulnerabilities.

Meanwhile, operational readiness encompasses the processes, procedures, and governance structures needed to deploy AI technologies responsibly within security operations. This includes incident response procedures when AI systems fail, processes for continuous model monitoring, and frameworks for maintaining human accountability in AI-assisted decision-making.

Cultural readiness is the final step and involves the team’s willingness to embrace AI as a force multiplier rather than a replacement for human expertise. Here, Empathetic AI (EAI) frameworks become crucial, as they help organizations overcome the natural resistance to automation while maintaining healthy skepticism about AI capabilities and outfitting security professionals with the training and support they need to use the technology effectively.

Let’s dive a bit deeper into each of those stages.

Cognitive Assessment Framework

The cognitive assessment should begin with evaluating the team’s understanding of AI fundamentals within security contexts. This goes beyond general AI awareness to focus on security-specific applications and challenges.

To get started, you must test the team’s grasp of supervised versus unsupervised learning in security contexts. Can they articulate when each approach is appropriate for different threat detection scenarios? Do they understand the implications of training data quality on model performance? A mature team should recognize that anomaly detection models require evaluation criteria different from the classification models used for traditional malware detection. Here’s a breakdown of the key areas to examine:

  • Assess understanding of AI attack vectors and defensive considerations. Security teams working with AI must understand how adversaries can target AI systems through techniques like model poisoning, adversarial examples, and data poisoning attacks. Evaluate whether team members can identify potential attack surfaces introduced by AI systems and develop mitigation strategies.
  • Examine the team’s ability to interpret and act on AI-generated insights. Present scenarios where AI systems provide recommendations with varying confidence levels and assess how the team responds. A ready team should demonstrate sophisticated judgment about when to act on AI recommendations, when to seek additional validation, and when to override AI suggestions based on contextual factors the model cannot process.
  • Evaluate understanding of bias, fairness, and ethical considerations in security AI applications. Can the team identify potential sources of bias in threat detection models? Do they understand how historical security data might perpetuate biased response patterns?

Technical Competency Evaluation

Technical assessments must address both foundational skills and security-specific AI applications. The evaluation should be practical and scenario-based rather than purely theoretical to ensure teams have the know-how required to get the most value from the new technologies. Here are the pillars companies should be targeting in their technical competency evaluations:

  • Data engineering capabilities: Security AI systems require massive amounts of high-quality, properly formatted data, and teams must be able to identify relevant data sources, clean and normalize security data, and create appropriate training datasets.
  • Model selection and validation skills: Present real security use cases and assess the team’s ability to select appropriate AI approaches. Can they articulate why deep learning might suit some threat detection scenarios while traditional machine learning approaches might work better for others? Do they understand the trade-offs between model complexity and interpretability in security contexts?
  • Integration and deployment capabilities: Evaluate the team’s ability to integrate AI systems with existing security infrastructure, manage model versioning and updates, and maintain system performance under production loads. This includes understanding containerization, API design, and real-time processing requirements.
  • Model monitoring and maintenance skills: Security environments change rapidly, and AI models must adapt accordingly. Assess the team’s ability to detect model drift, evaluate ongoing performance, and implement model updates without disrupting security operations.
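One common way to operationalize the drift detection mentioned in the last point is the Population Stability Index (PSI), which compares the score distribution a model was validated on against what it sees in production. A minimal sketch follows; the bucket edges and the widely cited 0.25 alert threshold are conventional rules of thumb, not fixed standards:

```python
import math

def psi(expected: list[float], actual: list[float],
        breakpoints: list[float]) -> float:
    # Population Stability Index: sum over score buckets of
    # (actual_frac - expected_frac) * ln(actual_frac / expected_frac)
    def frac(sample, lo, hi):
        n = sum(1 for x in sample if lo <= x < hi)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, lo, hi) - frac(expected, lo, hi))
        * math.log(frac(actual, lo, hi) / frac(expected, lo, hi))
        for lo, hi in zip(breakpoints, breakpoints[1:])
    )
```

Values near zero suggest the production distribution still matches the validation baseline; a PSI above roughly 0.25 is commonly treated as a signal to investigate or retrain.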

Operational Readiness Assessment

Operational readiness evaluations focus on the processes, procedures, and governance structures that enable effective AI deployment and management. These assessments determine whether technically sound AI initiatives will succeed or fail once deployed in a production environment. It’s less about the technology or the user’s ability and more about the operation’s capability to handle the new tools and processes. Companies should focus their examinations on these areas:

  • Incident response procedures for AI system failures: Traditional incident response focuses on external threats, but AI systems introduce new categories of internal failures that teams must know how to manage.
  • Change management processes: Managing change in AI systems is significantly different from handling traditional software updates, as AI model updates can alter system behavior in difficult-to-predict ways. Teams should assess whether they have appropriate testing procedures, rollback capabilities, and validation processes for AI system changes.
  • Documentation and knowledge management practices: Models can behave unexpectedly, and institutional knowledge about model behavior, edge cases, and workarounds must be captured and maintained so teams can document AI system behavior, maintain runbooks for everyday issues, and transfer knowledge about AI system operations.
  • Compliance and audit readiness: AI systems in security contexts are subject to regulatory requirements or internal audit processes, so teams must be able to document AI decision-making processes, maintain audit trails, and demonstrate compliance with all relevant standards.
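To make the change-management and rollback points above concrete, a model update can be gated behind explicit validation criteria, with the known-good version serving as the automatic fallback. This is a hypothetical sketch; the metric names and thresholds are illustrative assumptions, not a recommended configuration:

```python
def approve_model_update(current, candidate,
                         min_recall=0.90, max_fpr_increase=0.10):
    """Gate an AI model update behind explicit validation criteria.

    `current` and `candidate` are dicts of offline evaluation metrics,
    e.g. {"recall": 0.93, "false_positive_rate": 0.04}. Returns the
    model that should serve traffic, so a failed check is an automatic
    rollback to the known-good version.
    """
    if candidate["recall"] < min_recall:
        return current                      # missed-threat risk too high
    if candidate["false_positive_rate"] > \
            current["false_positive_rate"] * (1 + max_fpr_increase):
        return current                      # alert-fatigue regression
    return candidate

current   = {"recall": 0.91, "false_positive_rate": 0.050}
good      = {"recall": 0.94, "false_positive_rate": 0.052}
regressed = {"recall": 0.95, "false_positive_rate": 0.080}

assert approve_model_update(current, good) is good
assert approve_model_update(current, regressed) is current
```

Note that the regressed candidate is rejected even though its recall improved: encoding trade-offs like alert fatigue into the gate is exactly the kind of procedure an operational readiness assessment should look for.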

Cultural and Organizational Factors

While often overlooked, assessing the cultural and organizational readiness for AI adoption is one of the most important things a business can do. AI is a tool designed to help humans, so if a company’s human workforce is unable or ill-equipped to utilize that tool, they won’t use it. Companies can avoid these common failure points for AI initiatives by assessing and tracking the following factors:

  • Resistance to automation: Security professionals often have strong opinions about automated decision-making, particularly in high-stakes environments. As such, you should assess the team’s attitudes toward AI assistance versus replacement, their comfort with AI-generated recommendations, and their willingness to cede certain decisions to automated systems.
  • Human-AI collaboration: Effective collaboration between humans and AI involves understanding when to rely on AI recommendations, when to apply human oversight, and when to intervene in AI decision-making.
  • Learning agility: Assess the team’s willingness and ability to continuously update their skills, adapt to new AI capabilities, and incorporate emerging AI techniques into their security practices.
  • Risk tolerance and decision-making under uncertainty: AI systems operate probabilistically, and security teams must become comfortable making decisions based on confidence levels and statistical likelihood rather than deterministic rules. This represents a significant cognitive shift for many security professionals.

Methodologies for AI Readiness Assessments

Theoretical knowledge assessment alone provides an incomplete picture of AI readiness. Security teams may demonstrate a strong conceptual understanding of AI yet fail to apply the appropriate techniques in high-pressure operational environments. Practical assessment methodologies must therefore simulate real-world conditions, time constraints, and decision-making pressures that characterize actual security operations.

These assessments should reveal not just what team members know, but how they perform when AI systems behave unexpectedly, when data quality degrades, or when AI recommendations conflict with human intuition. The most effective assessments combine multiple evaluation approaches to create a comprehensive picture of individual competencies and team-level collaboration patterns. Teams should focus on these types of assessment methodologies:

  • Scenario-based evaluations provide the most realistic assessment of AI readiness. Build security scenarios that require interaction with AI systems, covering normal operations, edge cases, and failure modes.
  • Hands-on technical challenges should test practical skills with real security datasets and AI tools. Rather than administering theoretical knowledge tests, give participants access to security data and AI platforms, then evaluate their ability to develop, deploy, and maintain AI solutions for specific security challenges.
  • Peer review and collaborative assessments reveal team dynamics and knowledge gaps. Have team members evaluate each other’s AI-related work and provide feedback on AI implementation approaches. This can demonstrate individual competencies and team-level readiness factors.
  • External benchmarking against industry standards and peer organizations provides objective comparison points. While specific metrics may vary, comparing assessment results against established AI readiness frameworks helps identify areas for improvement.
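One simple way to combine the evaluation approaches above is to roll per-dimension results into a single weighted score and surface the weakest dimension, which is where remediation should start. A minimal sketch, where the dimension names mirror this article's four assessment areas and the equal weighting is an assumption for illustration:

```python
def readiness_score(scores, weights=None):
    """Aggregate per-dimension assessment results (0-100 each) into a
    single weighted readiness score plus the weakest dimension."""
    weights = weights or {k: 1.0 for k in scores}   # default: equal weight
    total_w = sum(weights[k] for k in scores)
    overall = sum(scores[k] * weights[k] for k in scores) / total_w
    weakest = min(scores, key=scores.get)           # remediation priority
    return round(overall, 1), weakest

team = {"cognitive": 72, "technical": 66, "operational": 54, "cultural": 80}
overall, weakest = readiness_score(team)

assert weakest == "operational"
assert overall == 68.0
```

An organization could weight dimensions differently per role, e.g. weighting "technical" more heavily for ML security engineers than for SOC analysts.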

How to Address Identified Gaps

Gap remediation requires targeted interventions based on assessment results. Since different skill gaps require different approaches and timelines, each remediation process should be strategic rather than reactive, recognizing that AI readiness gaps often interconnect in complex ways that demand coordinated interventions.

For example, a knowledge gap in understanding model bias may be compounded by cultural resistance to AI automation and procedural gaps in model validation processes. Attempting to address these gaps independently might prove ineffective, as unresolved cultural issues undermine technical training efforts, while inadequate processes can negate improved individual competencies. The following are some recommended ways to address and resolve potential gaps in the workforce:

  • Targeted training programs can address knowledge gaps, but these must be practical and security-focused rather than generic AI education. Develop training that combines AI concepts with real security use cases and hands-on experience with security-specific AI tools.
  • Skill gaps require hands-on practice and mentorship. Pair experienced team members with those developing AI skills and create opportunities for practical application of AI techniques in low-risk scenarios.
  • Process gaps need new procedures and governance structures. Developing these structures may require collaboration with other organizational functions, such as compliance, legal, and risk management, to ensure AI governance aligns with broader organizational requirements.
  • Cultural gaps often require the most time and careful attention. Address concerns about AI’s impact on job security, communicate clearly about AI’s role as a tool rather than a replacement, and create opportunities for team members to experience AI benefits firsthand.

Future Considerations

The integration of AI into security operations will likely accelerate, making AI readiness assessment increasingly critical. As the technologies evolve, future assessments may also need to address emerging capabilities like large language models for security analysis, autonomous response systems, and AI-powered threat hunting platforms. Organizations should prepare for potential regulatory scrutiny of AI security implementations if they hope to stay agile.

Ultimately, assessing AI readiness in information security teams requires a comprehensive approach encompassing cognitive, technical, operational, and cultural dimensions. Success hinges on an organization’s ability (and willingness) to move beyond superficial AI awareness and develop deep, practical competencies that enable effective AI deployment and management in security contexts.

The assessment process itself should be viewed as an opportunity for team development rather than simply an evaluation exercise. By identifying specific gaps and developing targeted remediation strategies, organizations can build AI-ready security teams that enhance rather than compromise security outcomes. As AI becomes increasingly central to security operations, the teams that invest in systematic readiness development will gain significant advantages in the current and future information security marketplace.


Want more insights like this? Register for Insight Jam, Solutions Review's enterprise tech community, which enables human conversation on AI. You can gain access for free here!

The post How to Assess the AI Readiness of Your Information Security Team appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

Navigating the AI Revolution: Fostering Team Resilience in a New Era of Intelligent Threats https://solutionsreview.com/endpoint-security/navigating-the-ai-revolution-fostering-team-resilience-in-a-new-era-of-intelligent-threats/ Wed, 25 Jun 2025 19:20:33 +0000 https://solutionsreview.com/security-information-event-management/navigating-the-ai-revolution-fostering-team-resilience-in-a-new-era-of-intelligent-threats/ Laura Ellis, VP of Data and AI for Rapid7, explains why companies must foster team resilience in their security efforts if they want to defend against increasingly intelligent threats. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI. The cybersecurity industry is at an inflection point as we […]

The post Navigating the AI Revolution: Fostering Team Resilience in a New Era of Intelligent Threats appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.


Fostering Team Resilience in a New Era of Intelligent Threats

Laura Ellis, VP of Data and AI for Rapid7, explains why companies must foster team resilience in their security efforts if they want to defend against increasingly intelligent threats. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

The cybersecurity industry is at an inflection point as we barrel into the AI era, driven not only by increasingly sophisticated attackers but also by the rise of smarter, more advanced tools. Much like the Industrial Revolution mechanized the physical world, artificial intelligence is rapidly transforming the cognitive landscape. For security teams, we need to redefine the rules of defense and engagement.

Defenders are firsthand witnesses to how generative AI and autonomous systems dramatically increase the speed, scale, and complexity of attacks. However, our security postures are also evolving with access to a new generation of tools that enhance automation and strengthen defenders’ efforts. We find ourselves at a pivotal moment that demands new mindsets, guardrails, and definitions of what it means to lead during this era of intelligent threats.

The Rise of AI-Powered Threats: Faster, Not Always Smarter

From sophisticated social engineering attacks to deepfakes and voice cloning that make impersonation more realistic than ever, AI has undeniably transformed the threat landscape. Malicious actors no longer need to be expert coders to execute successful phishing campaigns, lowering the barrier to entry for cyber-attacks. While these threats may not be more intelligent at their core, they are significantly more scalable and harder to detect using traditional methods. They’re faster, cheaper, and everywhere.

This shift undermines longstanding assumptions in security operations. Traditional threat detection methods are becoming less reliable, as many current systems are built on the assumption that intelligent threats can be detected through static signatures or heuristic patterns. But AI-generated attacks can morph quickly, often bypassing those conventional defenses. To keep up, security teams must prioritize verification over mere identification. Multi-factor authentication, for example, must now be the norm: validating identities through multiple channels, enforcing second-factor callbacks, and even using designated safe words in verbal communications. These strategies treat trust as something to be actively earned, not passively assumed.
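As one concrete example of treating trust as earned rather than assumed, a time-based one-time password ties identity validation to a shared secret plus the current time, so a stolen password alone is not enough. A minimal RFC 6238 sketch, checked against the RFC's published test vector (the secret below is the standard test key "12345678901234567890" in base32):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s window)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: T = 59 s -> 8-digit code 94287082
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8) == "94287082"
```

In production you would of course use a vetted library and allow a window of adjacent time steps for clock skew; the point here is that verification is computed independently on both sides rather than asserted by the requester.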

Agentic AI and the Changing Shape of Security Teams

AI in cybersecurity is no longer confined to a passive role. We’ve entered the era of agentic AI, in which tools go beyond decision-making to take autonomous action. AI agents are evolving from narrow-task assistants to active collaborators. Agents can triage alerts, enrich data with threat intelligence, and even initiate semi-automated protocols to isolate compromised systems or block malicious traffic. With AI performing these tasks, human capital can be used to focus on high-impact strategy and forensics.

But this leap in autonomy brings new challenges. If security leaders can’t explain where and how AI agents are making decisions, they’re neither collaborating with AI nor leveraging it—they’re surrendering responsibility to the technology. Governance is what keeps them accountable and in control. That starts with understanding where generative AI services live across the environment, enforcing security best practices specific to AI and ML workflows, and maintaining unified visibility that helps teams quickly distinguish what’s risky from what’s routine.

Guardrails must be built into every stage of deployment, from structured review cycles to escalation and continuous oversight. These systems must be transparent and held to the same accountability standards as their human counterparts. It’s not just about adding AI to the team but also ensuring it remains a tool, not a threat.
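One way to build the guardrails described above is to allow low-impact actions to run autonomously while gating high-impact ones behind a named human approver, emitting an audit record either way so decisions stay explainable. The action names in this sketch are hypothetical, not drawn from any particular product:

```python
AUTONOMOUS_ACTIONS = {"enrich_with_intel", "tag_duplicate"}   # low impact
GATED_ACTIONS = {"isolate_host", "block_traffic"}             # need approval

def dispatch(action, alert, approved_by=None):
    """Execute an agent-chosen action only inside its guardrail.

    High-impact actions require a named human approver, and every
    decision is returned as an audit record so the "why" behind each
    agent action can always be reconstructed.
    """
    if action in AUTONOMOUS_ACTIONS:
        status = "executed"
    elif action in GATED_ACTIONS and approved_by:
        status = "executed"
    elif action in GATED_ACTIONS:
        status = "escalated_for_approval"
    else:
        status = "rejected_unknown_action"
    return {"alert": alert, "action": action,
            "approved_by": approved_by, "status": status}

assert dispatch("enrich_with_intel", "ALERT-1")["status"] == "executed"
assert dispatch("isolate_host", "ALERT-2")["status"] == "escalated_for_approval"
assert dispatch("isolate_host", "ALERT-2",
                approved_by="analyst.kim")["status"] == "executed"
```

The escalation path is the accountability standard in code form: the agent can recommend isolating a host, but the action only fires once a human puts their name on it.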

Talent and Expertise in an AI-Driven World

With AI accelerating every aspect of security operations, the definition of “expertise” is shifting under our feet. On the one hand, AI-assisted coding tools enable developers with minimal training to produce functional code, helping smaller engineering teams achieve significant productivity gains through “vibe coding” while also making experienced coders more efficient.

On the other hand, there’s a growing risk that overreliance on automation will erode the deep, contextual knowledge that’s critical in a crisis. Consider aviation: a junior pilot trained on autopilot systems may fly a commercial route with ease until a system fails midair, and manual expertise is required to land the plane safely. In cybersecurity, the same principle applies. AI can support but not replace deep domain understanding.

Security teams need to remodel and realign their workflows in the era of AI. Developers, expected to maintain a broad knowledge of the attack surface, need to shift towards becoming increasingly specialized in training and architecting models. Analysts must evolve into skilled operators of AI who can direct, question, and refine systems to deliver trusted, actionable outcomes. Security leaders need to invest not just in advanced tools, but also in training. That means upskilling teams on AI fundamentals, fostering fluency in emerging technologies, and building a culture that prioritizes continuous learning over static job descriptions.

Accountability Without Ownership: Governing AI Responsibly

It’s easy to assume that AI governance starts and ends with the teams building the models, but that’s only part of the picture. In reality, some of the most significant responsibilities lie with those applying AI in day-to-day operations. Even if an organization only uses pre-built AI tools, those usage choices have real-world consequences, from escalating energy consumption to reinforcing biased datasets. AI risks are business risks, and governance isn’t about authorship but impact.

Think of it this way: you may not run an oil company, but if you drive a gas-powered car, your choices still contribute to emissions. Similarly, security teams using AI have a responsibility to assess and mitigate their downstream effects. This is an opportunity for forward-thinking security leaders to model what ethical AI adoption can look like. Establishing internal AI use guidelines, conducting bias audits, and demanding transparency from vendors are just a few ways to lead with integrity, even in a landscape where regulation is still catching up.

Looking Ahead: Navigating Leadership in an Era of Change

Like any major technological shift, the rise of AI has sparked both enthusiasm and skepticism. Regardless of where you stand, whether as a critic, an advocate, a defender, or even a threat actor, AI is embedded in the fabric of cybersecurity. This transformation has no finish line, only acceleration. For security leaders, success will hinge less on fixed protocols and more on the ability to evolve.

It’s not enough to secure today’s systems; you have to be ready to change your approach tomorrow. That means embracing modular architectures, automating routine tasks while maintaining human oversight and continuously reassessing how emerging technologies shape the threat surface. Above all, it means rejecting complacency. The age of intelligent threats demands intelligent leadership grounded in ethics, driven by curiosity, and unafraid to rewrite the playbook. We’re witnessing the birth of a new security paradigm. The question is: will your team be ready to lead it?

Empathetic AI is the Key to a Successful AI Risk Management Framework https://solutionsreview.com/identity-management/empathetic-ai-is-the-key-to-a-successful-ai-risk-management-framework/ Fri, 13 Jun 2025 14:32:36 +0000 https://solutionsreview.com/security-information-event-management/empathetic-ai-is-the-key-to-a-successful-ai-risk-management-framework/ To help companies remain competitive amidst changing markets, the Solutions Review editors are exploring how an empathy-first approach to AI risk management can transform a company’s ability to adopt and utilize AI technology successfully. Implementing artificial intelligence (AI) into your company is as much about integrating the technology itself as managing the potential ripple effects […]

The post Empathetic AI is the Key to a Successful AI Risk Management Framework appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

Empathetic AI is the Key to a Successful AI Risk Management Framework

To help companies remain competitive amidst changing markets, the Solutions Review editors are exploring how an empathy-first approach to AI risk management can transform a company’s ability to adopt and utilize AI technology successfully.

Implementing artificial intelligence (AI) into your company is as much about integrating the technology itself as managing the potential ripple effects it could have on the business. As the National Institute of Standards and Technology (NIST) explains, as many benefits as AI can provide—economic growth, improved productivity, boosted agility, etc.—it can also “pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet.” That’s where the value of an AI Risk Management Framework comes into play.

If these frameworks aim “to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems,” as the NIST says, empathy must be an essential part of any risk management strategy. With that in mind, this article will examine the crucial role AI risk management plays in today’s evolving world, specifically focusing on how valuable an empathetic AI (EAI) policy is to an AI risk management framework.

Addressing the Empathy Gap in Current AI Risk Frameworks

If you didn’t already know, the most widely adopted and recognized AI risk framework is the NIST AI Risk Management Framework (AI RMF), released in January 2023. However, much has changed in the few years since. According to a report McKinsey & Company released in 2025, “78 percent of respondents say their organizations use AI in at least one business function, up from 72 percent in early 2024 and 55 percent a year earlier.” That’s a significant increase since NIST released its AI RMF, and the landscape has changed.

While the NIST’s AI RMF remains the standard, and rightfully so, public perception of what it means to have a risk management strategy for AI adoption seems to lack the proper focus on empathy. Most AI risk management frameworks being deployed treat risks as quantifiable variables that can be addressed through technical controls and governance processes. That approach makes sense, since companies require a methodology that can be replicated and deployed as easily as possible. However, it can also create what you might call an “empathy gap,” resulting in AI systems failing to account for the emotional, contextual, and relational dimensions of human decision-making.

Consider the case of AI-powered customer service systems that function correctly but cause brand damage by failing to deliver the correct tone during customer interactions. While these systems could technically pass a traditional risk assessment, they fail in practice, harming consumers, users, and the company. Studies of AI’s ability (or lack thereof) to exercise empathy in settings such as medical care largely reach the same conclusion: despite AI’s growing capabilities, it cannot replicate the lived empathy humans draw on every day.

Consequently, empathy must be a top priority in developing or deploying an AI risk management framework. With an EAI mindset, we believe companies can transform how they create and use AI technologies to maximize business potential and support their human workers. It’s like the NIST’s framework says: “AI risks–and benefits–can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.”

The Business Case for Empathetic AI Risk Management

Unlike traditional AI metrics that focus on speed or accuracy, empathetic AI focuses on sticky, differentiated value propositions that are inherently difficult for competitors to replicate because they require deep integration of emotional intelligence, cultural sensitivity, and contextual awareness across entire product ecosystems. To get specific, the business case for empathetic AI in risk management rests on the premise that traditional risk frameworks catastrophically underestimate human-centric failure modes by treating users as rational actors rather than complex emotional beings.

An EAI-centric risk management strategy recognizes that the most disruptive AI failures often emerge not from technical malfunctions but from misaligned human-AI interactions where systems fail to understand user emotional states, cultural contexts, or unstated needs. By shifting to an empathy-first approach, companies can move their risk assessment from purely probabilistic models toward dynamic, relationship-aware frameworks that can predict and even prevent the social and reputational damages that emerge when AI systems inadvertently cross a line.

A study from 2021 explains, “AI lacks a helping intention towards another person as the basis of its attentional selection, because it does not have the appropriate motivational and inferential structure.” That lack does not mean AI is incapable of being helpful or acting empathetically. However, it does necessitate that humans adopt an empathy-first mindset when designing AI or giving it directions. Failing to do so can result in empathy failures that generate negative publicity that affects market capitalization, far exceeding the technical infrastructure investments.

EAI risk management can help your brand avoid that negativity by providing early warning systems that the technology and its users identify by continuously monitoring emotional sentiment, cultural alignment, and relationship quality metrics that traditional risk systems ignore entirely.
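As an illustrative sketch of such an early-warning signal, a rolling average over per-interaction sentiment scores can flag a souring customer relationship before individual complaints escalate. The -1 to +1 scoring scale and the alert threshold here are assumptions for illustration, not an established metric:

```python
def sentiment_early_warning(scores, window=20, threshold=-0.3):
    """Rolling-average check over per-interaction sentiment scores
    (-1 = very negative .. +1 = very positive). Fires when the recent
    average drops below the threshold, i.e. before any single
    interaction would look like an incident on its own."""
    recent = scores[-window:]
    avg = sum(recent) / len(recent)
    return {"avg_sentiment": round(avg, 2), "alert": avg < threshold}

healthy = [0.2, 0.4, -0.1, 0.3, 0.1]
souring = [0.1, -0.4, -0.5, -0.6, -0.3]

assert sentiment_early_warning(healthy)["alert"] is False
assert sentiment_early_warning(souring)["alert"] is True
```

The per-interaction scores would come from whatever sentiment model the organization trusts; the point is that the trend, not any single score, drives the alert.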

These AI risk management frameworks take time and investment, requiring companies to collect extensive training data about human emotional states, cultural norms, and psychological vulnerabilities—information that presents massive privacy and security risks. Yet, even with the complexity, an EAI risk management strategy is still worth exploring, especially since it means getting in “on the ground floor” for an emerging methodology already sending ripples throughout the enterprise technology marketplace.

The Competitive Advantage of Empathetic Risk Management

Organizations that successfully integrate empathetic AI into their risk management frameworks are developing sustainable competitive advantages that extend beyond traditional operational metrics. The ability to understand and respond to human emotional contexts creates differentiation opportunities in customer experience, employee engagement, and stakeholder relations that are difficult for competitors to replicate. It will also show employees that company decision-makers are taking AI seriously and not viewing it as a quick fix, which can improve employee trust. And the more trust employees have in the business, the easier it will be for them to adapt to the changes AI will inevitably introduce.

More strategically, empathetic AI capabilities position organizations to better navigate the increasing regulatory focus on human-centric AI governance, which is already a crucial part of AI risk management strategies. As regulations evolve to require more consideration of human factors in AI systems, organizations with mature empathetic AI frameworks will face lower compliance costs and faster regulatory approval processes. Organizations that recognize this and invest accordingly will position themselves as leaders in the next generation of AI-powered enterprises.

The question for enterprise leaders isn’t whether to integrate empathetic AI into risk management frameworks, but how quickly they can develop the capabilities necessary to do so effectively while avoiding the significant pitfalls that await unprepared implementations.


Want more insights like this? Register for Insight Jam, Solutions Review's enterprise tech community, which enables human conversation on AI. You can gain access for free here!

What Will the AI Impact on Cybersecurity Jobs Look Like in 2025? https://solutionsreview.com/endpoint-security/what-will-the-ai-impact-on-cybersecurity-jobs-look-like-in-2025/ Tue, 20 May 2025 15:03:00 +0000 https://solutionsreview.com/security-information-event-management/what-will-the-ai-impact-on-cybersecurity-jobs-look-like-in-2025/ The editors at Solutions Review summarize some of the most significant ways AI has impacted cybersecurity jobs, hiring, skillsets, and more. Regardless of your job title or industry, artificial intelligence (AI) has likely impacted your company’s internal and external processes. This can be especially true for cybersecurity professionals, as AI has changed how threat actors […]

The post What Will the AI Impact on Cybersecurity Jobs Look Like in 2025? appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

What Will the AI Impact on Cybersecurity Jobs Look Like in 2025?

The editors at Solutions Review summarize some of the most significant ways AI has impacted cybersecurity jobs, hiring, skillsets, and more.

Regardless of your job title or industry, artificial intelligence (AI) has likely impacted your company’s internal and external processes. This can be especially true for cybersecurity professionals, as AI has changed how threat actors plan and execute attacks and introduced new ways to combat potential and active threats. What is less clear is the specific impact AI has had on cybersecurity and whether these professionals have cause for concern.

As AI is integrated into cybersecurity operations at unprecedented levels, the form and function of a company’s cyber team will continue to undergo rapid changes. To keep track of those changes, the Solutions Review editors have outlined some of the primary ways AI has changed cybersecurity, what professionals can do to remain agile during those evolutions, and what the future may hold for them and the technologies they use.

Note: These insights were informed through web research using advanced scraping techniques and generative AI tools. Solutions Review editors use a unique multi-prompt approach to extract targeted knowledge and optimize content for relevance and utility.

How Has AI Changed the Cybersecurity Workforce?

In just a few years, the impact of AI on cybersecurity has dramatically restructured the industry’s roles, responsibilities, and required skill sets. This transformation has been freeing for many, as AI technologies have streamlined user workloads and empowered teams to focus on more specialized, high-value tasks and projects. For comparison’s sake, the global market for AI in cybersecurity is projected to reach USD 133.8 billion by 2030, up from a reported USD 14.9 billion in 2021. These technologies are exploding, and they’re not going anywhere.

However, it’s not uncommon for cybersecurity professionals to feel uneasy about the rapid adoption of these technologies, as they have already proven capable of rendering some tasks and roles nearly obsolete. Here are some of the job roles and processes that have been impacted the most by AI:

AI-Powered Automation and Analysis

AI is reshaping how cybersecurity analysis happens by expanding its scope and compressing its cognitive overhead. Traditionally, analysis involved hours of log inspection, correlation of alerts, and cross-referencing of threat intel feeds. However, with AI, especially those using machine learning (ML) and natural language processing (NLP), companies can automate those time-consuming processes to reduce alert fatigue and allow analysts to focus on the highest-risk threats.

For example, consider how leading cybersecurity platforms like Microsoft Defender XDR or IBM QRadar use ML models to correlate log entries and contextualize hundreds of alerts into real-time attack narratives. These streamlined analyses can dramatically reduce workloads by streamlining the process of identifying probable causes, unlocking cross-functional insights, and deploying that data to defend against future threats.
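The correlation step itself can be approximated without ML: grouping alerts that share an entity and fall within a time window is a rule-based stand-in for what the platforms above do with learned models. A minimal sketch, with field names and signatures that are purely illustrative:

```python
from collections import defaultdict

def correlate(alerts, window=300):
    """Group raw alerts into candidate incidents by shared host,
    merging alerts that land within `window` seconds of each other.
    Each alert is a (timestamp_seconds, host, signature) tuple."""
    by_host = defaultdict(list)
    for ts, host, sig in sorted(alerts):
        by_host[host].append((ts, sig))

    incidents = []
    for host, items in by_host.items():
        current = [items[0]]
        for ts, sig in items[1:]:
            if ts - current[-1][0] <= window:
                current.append((ts, sig))      # same burst of activity
            else:
                incidents.append((host, current))
                current = [(ts, sig)]
        incidents.append((host, current))
    return incidents

alerts = [
    (100, "srv-01", "brute_force"),
    (250, "srv-01", "new_admin_account"),
    (5000, "srv-01", "unrelated_scan"),
    (120, "ws-07", "phishing_click"),
]

incidents = correlate(alerts)
assert len(incidents) == 3   # two srv-01 incidents plus one ws-07 incident
```

ML-based platforms generalize this idea, learning which entities and time relationships actually belong together instead of relying on a fixed host key and window.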

AI might be evolving what “analysis” looks like in cybersecurity, but it’s not ready to fully replace the necessity of human intervention. With AI handling the workload of detecting and aggregating information, human analysts will commit their time and expertise to interpretation, intent modeling, and escalation decision-making.

Threat Hunting and Adversarial Behavior Modeling

For years, traditional threat hunting has been hypothesis-driven: an analyst suspects that a particular tactic—e.g., credential stuffing or lateral movement—is occurring and searches logs or telemetry for artifacts that confirm or debunk that suspicion. However, this process is often narrow and human-biased, which is where AI can help. With its unsupervised learning and clustering capabilities, AI can identify and track patterns without preconceptions.

AI has essentially made “continuous hunting” possible. Some of the leading cybersecurity tools already use AI and behavioral models to proactively surface deviations, such as beaconing new domains or unusual SMB shares accessed at odd hours. Since AI can run 24/7, threat hunts no longer have to be ad hoc. It also adds a new data engineering dimension to threat hunting, as cybersecurity professionals are now encouraged (if not outright expected) to have AI-specific skills around curating telemetry, labeling behavior, and tuning features.
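The beaconing example above can be reduced to a statistic: command-and-control implants often call home on a near-fixed interval, so the relative spread of the gaps between connections is suspiciously low compared to human-driven traffic. A hypothetical sketch, where the 15 percent jitter cutoff and minimum event count are assumptions rather than industry standards:

```python
import statistics

def looks_like_beaconing(timestamps, min_events=6, max_jitter=0.15):
    """Flag near-periodic outbound connections to a single domain.

    Computes the coefficient of variation of inter-connection gaps;
    low relative spread suggests machine-driven, periodic check-ins.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return False
    cv = statistics.pstdev(gaps) / mean   # relative spread of intervals
    return cv <= max_jitter

periodic = [0, 60, 121, 180, 241, 300, 360]   # ~60 s beacon, small jitter
human = [0, 12, 300, 340, 1500, 1540, 4000]   # bursty browsing

assert looks_like_beaconing(periodic) is True
assert looks_like_beaconing(human) is False
```

A continuous hunt would run a check like this per destination domain over rolling telemetry, which is exactly the kind of always-on pattern search that ad hoc, hypothesis-driven hunting cannot sustain.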

There’s no denying that AI is a double-edged sword for cybersecurity—cyber-criminals launched 36,000 malicious scans per second in 2024, according to Fortinet, and there’s been a 1,200 percent surge in phishing attacks since the rise of GenAI in late 2022. However, if companies want to keep up with the volume of attacks, they need the support that AI-boosted cybersecurity tools provide.

The Emergence of AI-Centric Cybersecurity Roles

The rise of AI in cybersecurity has not only affected existing workflows—it has spawned entirely new job categories, restructuring the profession around data-centric and model-centric competencies. These AI-centric cybersecurity roles represent a convergence of disciplines: traditional security, data science, ML operations (MLOps), and even behavioral psychology. Meanwhile, traditional roles like “blue team analyst” and “SOC engineer” are being supplemented or outright replaced by titles like AI Threat Analyst, ML Security Engineer, and Adversarial ML Red Teamer.

It’s also possible that the future of cybersecurity jobs will start to resemble AI safety roles more than traditional InfoSec. This would involve an increased focus on validating agent boundaries, applying RLHF to constrain behavior, and building sandboxed testbeds for threat simulations. While there’s potential in that future, active and aspiring professionals should be wary, as that trend could result in a skills bar that leaves traditional network defenders behind unless they retrain aggressively.

The meta-trend here is becoming clear: Cybersecurity is evolving into a data science problem, and the workforce is shifting accordingly. The people who can reason statistically, build or probe AI systems, and think adversarially will define the next generation of cybersecurity leadership. Conventional roles will likely persist but may increasingly resemble operational support for AI-first tooling. Regardless, as LinkedIn’s Skills on the Rise report says, AI literacy will continue to be the skill that “professionals are prioritizing and companies are increasingly hiring for.”

Upskilling for the Future

AI isn’t a new technology, but it’s hitting the cybersecurity job market fast and hard. According to Cybersecurity Ventures, there will be 3.5 million unfilled jobs in the cybersecurity industry through 2025, a 350 percent increase from the one million open positions reported in 2013. If professionals want to keep their jobs—or future-proof themselves from potential displacement—they must equip themselves with AI-centric skills as soon as possible.

To reinforce that urgency, look at IBM’s Cost of a Data Breach Report, which shows that half of the organizations encountering security breaches also face high security staffing shortages. Even with 1 in 5 organizations using some form of generative AI, that skills gap remains a real challenge. Companies across industries need professionals fluent in adversarial and algorithmic logic, as that expertise will empower them to stay relevant regardless of the future. Mike Arrowsmith, the Chief Trust Officer at NinjaOne, puts it like this: “The best way to rein in AI risks is with more employee training. People have to know what to look out for, especially as AI technology evolves.”

One area professionals can focus on is soft skills. A recent study by Skiilify demonstrated that 94 percent of tech leaders believe soft skills—like curiosity, resilience, tolerance of ambiguity, perspective-taking, relationship-building, and humility—are more critical than ever. Soft skills can also help cybersecurity professionals understand how models can fail, how attackers exploit statistical assumptions, and how to wrap AI systems in resilient human oversight.

With Gartner predicting that, by 2028, “the adoption of GenAI will collapse the skills gap, removing the need for specialized education from 50 percent of entry-level cybersecurity positions,” it’s more crucial than ever for cybersecurity professionals to find and refine the skills that make them unique.

Will AI Replace Cybersecurity Professionals?

“AI won’t replace cybersecurity professionals, but it will transform the profession,” says Chris Dimitriadis, the Chief Global Strategy Officer at ISACA. The cybersecurity marketplace is already changing in response to AI tools and threats, but the transformation is far from finished. Even if the profession itself doesn’t go away, there’s a chance that current cybersecurity practitioners will be left behind as their job evolves into something they’re no longer equipped for.

In the longer term, AI will likely reshape cybersecurity professionals into decision supervisors. Their responsibilities will be less focused on making decisions and will instead emphasize overseeing, calibrating, and intervening in AI-driven decision-making as necessary. It’s a subtler shift, but if the current workforce doesn’t upskill in preparation, they may find that their expertise isn’t quite as valuable as it used to be.

According to Sam Hector, Senior Strategy Leader at IBM Security, AI will “fundamentally shift the skills we require. Humans will focus more on strategy, analytics, and program improvements. This will necessitate continuous skills development of existing staff to pivot their roles around the evolving capabilities of AI.” The future of cybersecurity will be charted by practitioners who expand their perspective, prioritize their professional growth, engage with their peers, and collectively learn how to improve their AI-centric skills and literacy.


Want more insights like this? Register for Insight Jam, Solutions Review’s enterprise tech community, which enables human conversation on AI. You can gain access for free here!

The post What Will the AI Impact on Cybersecurity Jobs Look Like in 2025? appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

]]>
The Benefits of On-Premises AI: Regaining Control in the Era of Data Sovereignty https://solutionsreview.com/security-information-event-management/the-benefits-of-on-premises-ai-regaining-control-in-the-era-of-data-sovereignty/ Thu, 15 May 2025 16:11:00 +0000 https://solutionsreview.com/security-information-event-management/?p=5817 Praveen Jain, the SVP/GM of AI Clusters and Data Center at Juniper Networks, outlines how on-premises AI can help companies regain control in this era of data sovereignty. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI. A decade ago, the public cloud promised enterprises greater flexibility and […]


]]>

The Benefits of On-Premises AI

Praveen Jain, the SVP/GM of AI Clusters and Data Center at Juniper Networks, outlines how on-premises AI can help companies regain control in this era of data sovereignty. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

A decade ago, the public cloud promised enterprises greater flexibility and lower costs. Today, many realize the reality is far more complex, and we are witnessing a significant shift back to on-premises solutions, especially for enterprises deploying AI workloads. This shift stems from mounting challenges with public cloud deployments, from unpredictable GPU costs and security vulnerabilities to vendor lock-in concerns. Organizations are increasingly recognizing that the promise of simplified cloud deployments often comes with hidden complexities and costs that can impact long-term success.

To illustrate this shift, a recent survey found that nearly 50 percent of IT decision-makers are now equally considering on-premises and public cloud solutions for new applications in 2025, marking a significant departure from the “cloud-first” mindset.

Data Sovereignty and Security: Bringing AI Workloads Home

In today’s digital landscape, where data breaches can easily cost organizations millions, security cannot be an afterthought.

The challenge becomes particularly acute when training large language models (LLMs) using private data in public cloud environments. On-premises AI infrastructure provides organizations with complete control over their security protocols and data governance—a crucial advantage for complying with regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This control extends beyond mere compliance: it enables organizations to implement custom security measures that align precisely with their risk tolerance and operational requirements.

Consider the financial services sector, where institutions process millions of customer transactions daily. When AI models are trained and deployed on-premises, these organizations maintain full data sovereignty while significantly reducing breach risks due to the more direct visibility into all hardware, software, and in-house security measures. There’s no guesswork, no hoping a third-party provider has things locked down. This autonomy becomes even more critical when considering that GDPR non-compliance fines, for example, typically range from $10M to $22M.

The ability to maintain complete control over sensitive data while running sophisticated AI workloads has become a competitive necessity in heavily regulated industries. However, it’s important to note that on-premises benefits extend beyond data sovereignty alone.

The Economics and Technical Advantages of AI: Cost Efficiency and Control

While short-term projects—like a specific research study or business analysis—might find temporary solace in the lower cost of entry offered by public cloud solutions, the long-term cost implications for AI are often overlooked. The truth is, the substantial recurring costs associated with running resource-intensive GPUs in the cloud quickly add up.

In contrast, private AI data centers, while requiring a more significant upfront investment, ultimately deliver substantial savings in terms of total cost of ownership (TCO) and operational expenditures (OpEx). This economic advantage is further compounded by the technical control gained from on-premises deployments.

In the automotive industry, for instance, companies developing autonomous vehicles are producing massive data volumes, presenting a unique challenge. Original Equipment Manufacturers (OEMs) and their suppliers find that the bandwidth costs alone for moving massive datasets to and from the cloud can be prohibitive. Moreover, these developers of software and interoperable hardware require real-time processing capabilities to support critical functions like over-the-air updates and rapid iteration in AI model development. Latency introduced by cloud data transfers can severely hinder these operations.

By deploying on-premises AI infrastructure, automotive companies and OEMs reduce bandwidth costs and gain the necessary control to fine-tune their infrastructure for specific workload requirements. This leads to better cost predictability and often results in lower TCO for sustained AI workloads. Recent analysis finds a 35 percent TCO savings and 70 percent OpEx savings over five years for private AI data centers compared to public cloud offerings, primarily due to the high recurring costs associated with public cloud services.
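The arithmetic behind such comparisons is straightforward. The calculation below uses made-up figures (it does not reproduce the analysis cited above) to show how recurring cloud GPU spend can overtake a larger upfront on-premises investment over a five-year horizon:

```python
# Illustrative five-year TCO comparison; every dollar figure here is hypothetical.
years = 5

cloud_gpu_monthly = 250_000            # assumed recurring cloud GPU spend per month
cloud_tco = cloud_gpu_monthly * 12 * years

onprem_capex = 6_000_000               # assumed upfront cluster build-out
onprem_opex_monthly = 60_000           # assumed power, cooling, and staffing per month
onprem_tco = onprem_capex + onprem_opex_monthly * 12 * years

savings = 1 - onprem_tco / cloud_tco
print(f"cloud: ${cloud_tco:,}, on-prem: ${onprem_tco:,}, savings: {savings:.0%}")
# -> cloud: $15,000,000, on-prem: $9,600,000, savings: 36%
```

The crossover depends entirely on utilization and the assumed inputs: sustained, GPU-heavy workloads amortize the capex, while short-lived projects may never reach the break-even point, which matches the article’s framing.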

These advantages extend beyond pure economics, however, as organizations also gain the ability to fine-tune their infrastructure for specific workload requirements, optimize performance for certain AI models, and maintain complete visibility into their entire AI stack.

The Future of AI Infrastructure: Automation and Optimization 

Looking ahead, there is little doubt that AI and machine learning are crucial for modern, reliable, and secure end-user experiences, underscoring the importance of optimizing the underlying infrastructure. Modern on-premises solutions are evolving to incorporate advanced capabilities in high-performance networking and GPU clusters, specifically designed for complex tasks like LLM training. The focus is shifting toward automation that directly enhances control and efficiency.

To that end, advancements in automation are being adopted to directly address the need for greater efficiency:

  • Automated Resource Scaling: Systems can automatically adjust computing resources based on real-time demand, ensuring optimal performance without manual intervention.
  • Intelligent Workload Placement: AI-driven tools can analyze workload requirements and dynamically allocate them to the most efficient resources, maximizing utilization.
  • Proactive Performance Maintenance: Automated monitoring and optimization tools maintain consistent performance levels, minimizing downtime and ensuring smooth operations.

Organizations can achieve cloud-like flexibility by focusing on these key automation capabilities while retaining the essential control and security benefits of on-premises AI infrastructure.
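As a sketch of the first capability, automated resource scaling, consider a simple threshold-based policy. All thresholds, node counts, and the utilization metric are hypothetical; real schedulers weigh many more signals, but the core decision loop looks like this:

```python
# A toy threshold-based autoscaler for a GPU cluster (all parameters hypothetical).
def scale_decision(current_nodes, gpu_utilization, min_nodes=2, max_nodes=16,
                   scale_up_at=0.80, scale_down_at=0.30):
    """Return the new node count based on average GPU utilization (0.0-1.0)."""
    if gpu_utilization > scale_up_at and current_nodes < max_nodes:
        return min(current_nodes * 2, max_nodes)   # double capacity under load
    if gpu_utilization < scale_down_at and current_nodes > min_nodes:
        return max(current_nodes // 2, min_nodes)  # halve capacity when idle
    return current_nodes                           # inside the comfort band: no change

print(scale_decision(4, 0.92))   # -> 8   (hot: scale up)
print(scale_decision(8, 0.15))   # -> 4   (idle: scale down)
print(scale_decision(4, 0.55))   # -> 4   (steady state)
```

The min/max bounds are what keep this "cloud-like flexibility" compatible with a fixed on-premises footprint: capacity flexes within the hardware you own rather than against an open-ended bill.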

The Path to Efficient AI Operations

While cloud services will continue to play a role, on-premises AI infrastructure remains essential for organizations serious about building sustainable, scalable capabilities, particularly those requiring fully optimized data and computing resources. The decision between cloud and on-premises AI infrastructure isn’t just about hardware—it’s all about aligning IT priorities with long-term business objectives and operational realities.

As organizations mature in their AI journey, many are searching for the optimal balance of control, security, and cost predictability to launch large-scale AI deployments efficiently. By opting for on-premises AI infrastructure, organizations can build a strong foundation that keeps their data and workloads secure, compliant, and cost-effective in the long term.


The post The Benefits of On-Premises AI: Regaining Control in the Era of Data Sovereignty appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

]]>
Enhancing Security with Microsoft’s Expanded Cloud Logs https://solutionsreview.com/security-information-event-management/enhancing-security-with-microsofts-expanded-cloud-logs/ Wed, 07 May 2025 12:20:16 +0000 https://solutionsreview.com/security-information-event-management/?p=5802 Botond Botyánszki, the Founder, CEO, and CTO at NXLog, examines how Microsoft’s expanded cloud logs can help companies enhance their security. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI. Nation-state-sponsored hacking stories are a big part of everyone’s favorite Hollywood movies. Until it becomes a real-life […]


]]>

Botond Botyánszki, the Founder, CEO, and CTO at NXLog, examines how Microsoft’s expanded cloud logs can help companies enhance their security. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

Nation-state-sponsored hacking stories are a big part of everyone’s favorite Hollywood movies. That is, until they become the real-life story of our compromised personal or corporate sensitive data ending up on the dark web or in hackers’ hands. In real life, the activities of cyber espionage groups trigger stringent security enforcement, starting in the government sector: government standards gradually shift and come to dictate industry norms, since vendors must meet them to sell into government contracts.

This is the case with the recently announced Microsoft Expanded Cloud Logs Implementation Playbook, issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It all started in July 2023, when the Chinese cyber espionage group Storm-0558 exploited a vulnerability in Microsoft’s Outlook email system to gain unauthorized access to email accounts belonging to U.S. government agencies and other organizations. The attackers bypassed security measures by using a stolen Microsoft security key to forge authentication tokens. In fact, most attacks use Business Email Compromise (BEC) as the entry point in their attack vectors. Why? Because it works.

The fallout in 2023 resulted in Microsoft expanding free logging capabilities for all Purview Audit Standard users, among other changes. Now, realizing the necessity for further strengthening defenses, CISA has emphasized the transformative potential of Microsoft’s expanded cloud logs for proactive threat detection and provided guidance in the playbook.

Introducing Microsoft’s Expanded Cloud Logs in Microsoft Purview

Microsoft teamed up with CISA in October 2023 and eventually created guidance for government agencies and enterprises on using cloud logs and extending cloud log data sources. Microsoft Purview Audit has now raised the bar with its expanded logging capabilities, empowering organizations to monitor thousands of events across Exchange, SharePoint, and Teams. These newly added logs provide deeper insights into user and admin activities. The expanded logging was originally recommended by CISA as a way to mitigate advanced intrusion techniques.

Without collecting and utilizing Microsoft’s newly added logs, organizations would miss an opportunity to see what is happening in their IT systems’ “blind spots.”

These are the types of logs you would be able to collect:

  • Microsoft Exchange audit logs
  • Microsoft SharePoint audit logs
  • Microsoft Teams audit logs
  • Microsoft Viva Engage audit logs
  • Microsoft Stream audit logs

Challenges in Operationalizing the New Log Data

Challenges with data volume

As with every log type, collecting, processing, normalizing, and shipping cloud logs are not without challenges. Organizations may face notable challenges when trying to operationalize these logs. Without an effective solution, they risk being overwhelmed by the sheer volume of audit events, incurring high storage costs, and struggling to filter relevant data for usable and actionable insights.
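One common mitigation for the volume problem is filtering and normalizing audit records before they reach the SIEM, so storage and licensing costs track security value rather than raw event counts. The sketch below assumes Purview-style field names (CreationTime, UserId, Operation, Workload) and a hand-picked allowlist of operations purely for illustration; a real pipeline would drive the allowlist from detection requirements.

```python
# A minimal pre-SIEM filter over hypothetical Purview-style audit records.
HIGH_VALUE_OPS = {"MailItemsAccessed", "SearchQueryInitiated", "MessageSent"}

def filter_and_normalize(records):
    """Keep only security-relevant operations and flatten to a compact schema."""
    for rec in records:
        if rec.get("Operation") in HIGH_VALUE_OPS:
            yield {
                "time": rec["CreationTime"],
                "user": rec["UserId"],
                "op": rec["Operation"],
                "workload": rec.get("Workload", "unknown"),
            }

raw = [
    {"CreationTime": "2025-05-07T12:00:01Z", "UserId": "alice@contoso.com",
     "Operation": "MailItemsAccessed", "Workload": "Exchange"},
    {"CreationTime": "2025-05-07T12:00:02Z", "UserId": "bob@contoso.com",
     "Operation": "PageViewed", "Workload": "SharePoint"},  # low value: dropped
]

print(list(filter_and_normalize(raw)))  # only the MailItemsAccessed event survives
```

Filtering this early in the pipeline is a trade-off: it cuts cost and noise, but every dropped operation is a potential blind spot, so the allowlist itself becomes a security artifact that needs review.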

Adaptation with existing SIEMs

The need to adapt the SIEM configurations appropriately to process, display data, and trigger alerts based on the newly available logged events is critical. Without logs on security issues, organizations lack real-time alerts for incidents and the ability to trace problems back to their source. Don’t forget: SIEMs are optimized for analytics, but analytics can only be as good as the data sources provided. Failing to incorporate essential data sources leads to incomplete and unreliable analytics.

Filtering relevant data

CISA’s playbook, the Microsoft Expanded Cloud Logs Implementation Playbook, covers Splunk and Microsoft’s own SIEM offering, Microsoft Sentinel. It explains how to use these logs, which eases the burden for organizations using those SIEM technologies. Yet the playbook does not solve every organization’s problems, and many must seek solutions themselves.

The effort required to adapt existing configurations and systems to handle and extract value from the newly available log events can be overwhelming. Without an accurate understanding of the new log data and appropriate tooling, financial and human IT resources can be exhausted.

Tackling the Challenges with Microsoft’s Expanded Cloud Logs

What about those outside of the Microsoft Sentinel and Splunk SIEM ecosystems?

If your organization uses Microsoft Sentinel or Splunk, you may already have support for these logs, but the reality is often more complex. These are just two of many SIEM solutions available, and most organizations still need to find ways to add these additional data sources and extract meaningful value from their log data. Every organization eventually needs to handle logs effectively, requiring a solution tailored to its requirements.

These challenges underline the need for a solution beyond the capabilities of native SIEM integrations. This is where a multi-platform logging solution can come into play. Organizations need the widest data source collection capabilities—from legacy systems through BEC data to cloud apps—that can simplify collecting, filtering, and normalizing logs from Microsoft technologies, helping them get the most out of cloud logs.

Real-World Benefits of a Cross-Platform Logging Platform

A solution with advanced log collection and seamless processing can help organizations efficiently correlate events across Microsoft 365 and beyond, regardless of their preferred SIEM solution. This empowers faster identification of unauthorized email access, unusual searches, and potential insider threats. This proactive approach safeguards organizations against advanced cyber threats and can help when it comes to compliance with regulatory requirements.

For example, imagine a mid-sized enterprise dealing with a sudden spike in phishing attempts. With a cross-platform logging platform, they can collect and process logs with Microsoft Purview Audit to identify unusual email access patterns and flag a potential security breach in near real-time. This proactive approach could prevent further damage and strengthen their overall security posture.
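A simplified version of that detection logic might flag mailbox access from an IP address a user has never used before. The field names mirror Purview-style audit records but are assumptions for illustration, and the cold-start behavior (the first sighting of any IP is flagged) is a deliberate simplification; a real deployment would warm up from historical logs first.

```python
from collections import defaultdict

# Toy near-real-time check: flag access from IPs new to a given user.
seen_ips = defaultdict(set)

def check_event(event):
    """Return True if this access comes from an IP this user has never used."""
    user, ip = event["UserId"], event["ClientIP"]
    is_new = ip not in seen_ips[user]
    seen_ips[user].add(ip)  # remember it for subsequent events
    return is_new

stream = [
    {"UserId": "alice@contoso.com", "ClientIP": "10.0.0.5"},
    {"UserId": "alice@contoso.com", "ClientIP": "10.0.0.5"},
    {"UserId": "alice@contoso.com", "ClientIP": "203.0.113.77"},  # never seen: flag
]

flags = [check_event(e) for e in stream]
print(flags)  # -> [True, False, True]: first sighting of each IP is flagged
```

Even this crude rule illustrates the article’s point: the detection is only possible because the underlying access events are being collected in the first place.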

CISA acknowledges that implementation may be somewhat costly for small and mid-sized organizations, but it is likely that these recommendations will become mandatory requirements over time. There will always be new log sources in an organization’s IT security journey, so organizations that adopt this approach now will be ahead of the curve.

Conclusion

CISA’s latest guidance, combined with Microsoft’s expanded logging features, marks a significant advancement in addressing cybersecurity challenges. Integrating these logs with a cross-platform logging solution helps organizations stay proactive against evolving threats while maintaining strong compliance and eliminating security gaps that otherwise make an organization vulnerable to cyber-attacks.


The post Enhancing Security with Microsoft’s Expanded Cloud Logs appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

]]>
World Password Day Quotes from Industry Experts in 2025 https://solutionsreview.com/identity-management/world-password-day-quotes-from-industry-experts-in-2025/ Thu, 01 May 2025 19:17:09 +0000 https://solutionsreview.com/security-information-event-management/world-password-day-quotes-from-industry-experts-in-2025/ For World Password Day 2025, the editors at Solutions Review have compiled a list of comments from some of the leading industry experts. As part of this year’s World Password Day, we called for the industry’s best and brightest in Identity and Access Management and the broader cybersecurity market to share best practices, predictions for […]

The post World Password Day Quotes from Industry Experts in 2025 appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.

]]>

For World Password Day 2025, the editors at Solutions Review have compiled a list of comments from some of the leading industry experts.

As part of this year’s World Password Day, we called for the industry’s best and brightest in Identity and Access Management and the broader cybersecurity market to share best practices, predictions for the future of passwords, and personal anecdotes. The experts featured represent some of the top influencers, consultants, and solution providers with experience in these marketplaces, and each projection has been vetted for relevance and ability to add business value. The list is organized alphabetically by company name.

World Password Day Quotes from Industry Experts in 2025


Tim Eades, CEO and Co-Founder at Anetac

“As we recognize World Password Day, it’s time to acknowledge a fundamental matter in identity security. Credentials are the keys to the castle. Passwords alone cannot safeguard our digital identities in today’s complex, hybrid environments. Identity-based vulnerabilities have become the primary attack vector for modern breaches.

“Our research reveals alarming statistics across industries: passwords unchanged for 15+ years in financial institutions, 74 percent of healthcare credentials unchanged for more than 90 days, and widespread credential sharing in critical infrastructure. The basics are critical. Without proper cyber hygiene, enterprises across the globe will continue to be victims of bad actors.

“Weak or unchanged passwords across human and non-human identities create a dangerous, often overlooked security gap that can quickly go from a headache for security teams to a full-blown breach. A dormant service account or an orphaned human account with an old or weak password is a bad actor’s most exciting find. Utilizing complex passwords, refreshing them every 3 months, using multifactor authentication when available, and investing in modern identity security solutions are necessary to minimize the likelihood of a breach.

“That’s why password hygiene remains a cornerstone of effective identity security. The ability to detect and assess credential age, behavioral anomalies, and lifecycle blind spots across all identities is critical. Identity security isn’t just about who has access—it’s about how that access is managed, monitored, and secured over time. Not only this, you need the tools to actually know the identity behind the account and that they are who they say they are.

“Passwords aren’t disappearing, but their importance in our security strategies must be properly acknowledged within the broader identity ecosystem. It may be an aging technology, but they remain a top attack vector and we need to treat them, and the accounts they protect, with the same seriousness we give to any other security asset.”


Arun Shrestha, CEO and Co-Founder at BeyondID

“Passwords are old news, and World Password Day—once a reminder of cybersecurity best practices—now underscores the importance of phasing out the very authentication method it once championed. With stolen credentials topping the breach origin charts and phishing attacks up 4,151 percent since the launch of ChatGPT, it’s clear that traditional passwords are no longer sufficient. Modern threats call for passwordless authentication—not just for stronger security, but for a frictionless user experience. It’s time to answer the phone.”

Read on for more.


Randolph Barr, CISO of Cequence

“World Password Day is a great time to remind people about the importance of maintaining good password practices. Passwords are the most important line of defense for organizational and personal information, which means they are also a top target for threat actors.

“The easiest way to keep attackers at bay is to make strong, unique passwords for each account. One of the most common attack tactics is a brute force attack, which is an authentication-related attack that takes advantage of people who use either generic or shared passwords. By exploiting this weakness, cyber-criminals can access an entire organization with one faulty password.

“Multi-factor authentication is an additional preventive measure that can help protect information; many banking and fintech enterprises make use of the safeguards it brings. Password managers are also helpful, as they store multiple passwords across separate accounts, all protected by one ultra-strong master password.

“While password hygiene and multi-factor authentication remain essential today, the cybersecurity community is clearly moving toward a passwordless future. Even the strongest passwords can be phished or exposed, which is why many Fortune 100 technology companies have transitioned large portions of their workforce to passwordless authentication using mobile authenticators, device-based login, and biometric verification. Additionally, global financial institutions are enabling passkey support and app-based logins, while Fortune 500 retail and consumer platforms are deploying passwordless login options to reduce fraud and improve user experience.

“To prepare for this future, organizations should begin testing passwordless flows within internal environments, choosing identity platforms that support passkeys and FIDO2 standards. On the individual level, users can explore these capabilities already available on major devices, such as Android, Google, iOS, and MacOS (to name a few).”


Art Gilliland, CEO at Delinea

“Passwords still are the gatekeepers of our digital identities, but relying on traditional passwords is simply not enough. Cyber-criminals are getting smarter when attacking passwords, especially those tied to privileged accounts, to breach networks and access sensitive data. With 80 percent of security breaches involving the misuse of privileged credentials, it’s clear that organizations must adopt a Privileged Access Management (PAM) approach, combined with Zero Trust principles for data protection.

“It’s essential to use World Password Day as a reminder that password security alone isn’t enough. We must never assume trust, especially privileged accounts, and always verify every access request. By taking control of who has access to what, when, and how, organizations can significantly reduce the risk of breaches. Smart identity security starts with Zero Trust and PAM, because data safety begins with stronger, verified access.”


Tony Ball, President of Payments and Identity at Entrust

“For decades, passwords have been the weak link in cybersecurity–outdated, overused, and increasingly ineffective. But now, organizations are making a clear shift. Multi-factor authentication and sign-in links have emerged as the primary methods for user authentication across the US, UK, and globally, overtaking passwords.

“This step change comes as over half of business and IT decision-makers report higher fraud attempts with username and password alone compared to other methods. We’re at a cybersecurity inflection point: passwords are no longer sufficient. Modern, layered authentication methods, such as facial biometrics, device recognition, or generated codes, are stepping in.

“Rather than forcing users to create longer, more complex passwords, it’s time for organizations to embrace a passwordless future where customers and employees can prove their identity conveniently and securely using their biometrics. This approach reduces risk, streamlines access, and meets the expectations of today’s digital-first users.”


Joel Burleson-Davis, Chief Technology Officer at Imprivata

“This World Password Day, it seems appropriate to shift the discussion from securing and managing passwords to the demise of the password. Passwords have served us well (sort of), and we’ve long been talking about ditching the traditional, complex password because of its burden and unintentional insecurity. However, with every second mattering in critical work, now more than ever, passwordless authentication has become business-critical.

“There are signs of good adoption of both passwordless strategies and shunning our old password-burdened ways in mobile devices, which are built with and extensively leverage facial recognition for security purposes, but some of our most critical technologies in our most critical sectors have been reluctant to implement similar solutions in their operations. As life- and mission-critical industries like healthcare and manufacturing cope with staffing challenges while being increasingly targeted, it’s time they reconsider access management and their relationship with the password paradigm.

“In health care delivery, for example, a 17-character password is not practical for clinicians treating patients who need rapid and frequent access to Electronic Health Records (EHRs) in all kinds of situations. Entering a complex password for these users only creates barriers that delay patient care, eat up clinician time, and exacerbate burnout.

“Passwordless solutions, particularly biometrics-based ones, offer a tailored and frictionless experience that enables everyone from healthcare providers to manufacturing operators to maintain the highest security standards while empowering them to deliver timely, critical work without unnecessary barriers. I look forward to a World Password Day in the future that is full of cheering and celebration because we’ve finally released ourselves from the burden of putting memorized, complex strings into a little prompt box for the sake of security.”


Erich Kron, Security Awareness Advocate at KnowBe4

“Reusing passwords across different websites and services can be a catastrophic mistake. If there is a data breach at a website and bad actors are able to steal the passwords, they use a technique called credential stuffing: trying those usernames and passwords against various popular websites such as credit card portals, retail websites, or banking accounts. This is how a password stolen from a hobby forum could lead to a bank account being compromised.

“Multifactor authentication, also known as MFA or two-step authentication, can significantly increase a login’s security. While not foolproof, it makes it much tougher for cybercriminals to log into an account even if they steal your credentials. These options are available on most shopping, credit card, and bank websites, as well as social media accounts.”
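The one-time codes behind the MFA Kron recommends are typically time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only Python's standard library, checked against the RFC's published test vectors (the `totp` helper name is ours, not a library API):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now=None) -> str:
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59))  # → 287082
```

The authenticator app and the server each hold the shared secret and compute the same short-lived code, so a stolen password alone is not enough to log in.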


Stephanie Schneider, Cyber Threat Intelligence Analyst at LastPass

“World Password Day is a great reminder for every organization that identity access management is the foundation of effective company security. Abusing legitimate credentials is one of the easiest and most common ways hackers gain unauthorized access to systems. Given the rise of infostealers over the last few years, which frequently target credentials and other sensitive data to resell on underground marketplaces, acquiring these is easier than ever. Credentials and session cookies stolen from employees’ personal devices can be used to breach corporate networks.

“A key aspect of stealers is their heavy reliance on the ‘spray-and-pray’ tactic: rather than directly targeting corporate networks, they count on individuals having weaker security on their personal devices while using their work credentials on them. The time from infection via stealer malware to the time that information is posted to the dark web can be speedy, especially with automation tools. Organizations must monitor for exposed credentials and change them as quickly as possible to disrupt breaches and attacks before they can occur. In a world where hybrid work has blurred the lines between personal and professional devices, businesses can’t afford to be casual about credential management.

“Using strong, unique passwords is just the tip of the iceberg when protecting your identity access. Reusing passwords across services is still one of the most common mistakes employees make—and one of the easiest ways for attackers to gain access. Requiring multi-factor authentication (MFA) should be standard for every business account, and it is a good idea for personal accounts, too.

“This World Password Day, take a look at your access policies. Are you protecting your company or making it easier for someone else to break in?”


“Leverage passkeys as the primary authentication method whenever possible. While passkeys are not immune to cyber-attacks, they are significantly more secure and phishing-resistant because they are linked to a device or leverage biometric authentication. Plus, they’re a whole lot easier to manage than constantly juggling new password combinations.”


Anthony Cusimano, Solutions Director at Object First

“I believe the death of the password is just around the corner. Passwords are no longer a secure method of authentication and should not be treated as secure. So, I’ll share the advice I have taken up in the last year: use a password manager, app-based or browser-based (either works!).

“Password managers securely store your passwords in a locked vault and come with convenient browser extensions that autofill logins. They can also generate unique, complex passwords for every account. Many of these tools allow you to customize password requirements according to your preferences, including specifying length and incorporating symbols, numbers, and mixed case. Additionally, password managers can alert you to duplicate or weak passwords and often suggest optimal times for changes.

“The password alone is NOT a secure authentication method; that’s why I have given up trying to maximize their security and left the brainwork to someone else. It’s 2025—let an app do the password legwork for you, and here’s to hoping that passwords become a thing of the past sooner rather than later.”
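The generation feature Cusimano describes (configurable length, with symbols, numbers, and mixed case guaranteed) can be sketched with Python's stdlib `secrets` module; this is a simplified illustration, not any particular manager's algorithm, and the function name and symbol set are ours:

```python
import secrets
import string

def generate_password(length: int = 16, symbols: str = "!@#$%^&*") -> str:
    """Generate a random password containing lower, upper, digit, and symbol characters."""
    if length < 4:
        raise ValueError("length must accommodate all four character classes")
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, symbols]
    # Guarantee one character from each class, then fill from the combined pool.
    chars = [secrets.choice(pool) for pool in pools]
    chars += [secrets.choice("".join(pools)) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)  # avoid a predictable class ordering
    return "".join(chars)

print(generate_password(20))
```

`secrets` draws from the operating system's CSPRNG, which is what makes the output suitable for credentials, unlike Python's general-purpose `random` module.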


Nicolas Fort, Director of Product Management at One Identity

“Passwords have come a long way, from punch-tape reels in 1961 to the world of multi-factor authentication and fingerprint identification we inhabit today. The next leap is already happening—passkeys tied to devices, one-time AI-generated tokens, and even blockchain-backed session receipts. It’s no accident that password technology is constantly evolving.

“Cyber-attacks are more frequent, threat actors have more sophisticated tools at their disposal, and as businesses continue to store more and more sensitive data online, regulators are rightly demanding that they keep up. The EU’s NIS2, the UK’s Cyber Resilience Act, DORA, HIPAA, and countless other rules and regulations now demand rock-solid control over user accounts at every touchpoint. That means audited sessions, behavioral analytics, rotating passwords, and just-in-time credentials—so that no matter how hard attackers try, there’s simply nothing there to steal.”


“World Passkey Day is a reminder that the future of authentication is here—and it’s passwordless. Passwords have long been a point of vulnerability, often leading to breaches and user frustration. Passkeys represent a meaningful step toward improving both security and usability, moving us closer to a more resilient digital infrastructure. They’re especially valuable in securing high-risk interactions like financial transactions, where strong, phishing-resistant authentication is critical.

“FIDO passkeys take traditional authentication a step further by using cryptographic credentials stored on a user’s device, ensuring identity verification and security. This method strengthens authentication across desktops and mobile devices, creating a more secure digital environment. As the adoption of passkeys grows, I’m confident they will be key to transforming how we protect our most sensitive online interactions.”


Drew Perry, Chief Innovation Officer at Ontinue

“As positive a day as World Password Day is, I look forward to the day it no longer exists or is at least renamed! With the rise of passkey support across major platforms and devices, we’re finally seeing a shift towards more secure and user-friendly authentication. Passkeys are cryptographic credentials that eliminate the need for passwords entirely, offering phishing-resistant, biometric-based access. It’s time we moved beyond passwords, which are too often reused, weak, or compromised. Simpler identity protection is needed so we, as humans, don’t just pick a random string of characters that we will never remember!”

“We have come a long way. Password manager adoption is rising, multi-factor authentication is available for most critical online services, and people are reusing the same passwords less. But still, hackers are succeeding in their attacks. We have been saying since the early 2010s that “hackers don’t hack in, they log in,” and as time goes on, it becomes even more true.

“Stolen credentials overtook email phishing as the second most frequently observed initial infection vector in 2024 during intrusions into businesses. At Ontinue, we have witnessed first-hand the rise of sophisticated infostealer malware, which captures passwords as they are entered by users during login. This enables attackers to simply log in if no other secondary authentication methods are enabled, which, sadly, is often the case.

“Awareness is key. Enable passkeys where possible. I suggest we lay the password to rest and embrace the passwordless future.”


“Passwords have long been a security crutch; in today’s digital landscape, they’re quickly becoming a liability. Users continue to rely on weak, repurposed credentials, making them easy targets for sophisticated cyber-attacks fueled by AI. Recent data shows that 87 percent of consumers are concerned about identity fraud, yet many still depend on outdated methods to secure their most sensitive data. Even worse, 48 percent of IT leaders admit they’re not confident their current defenses can withstand AI-driven attacks. That should be a wake-up call. With the rise in phishing, credential stuffing, and deepfake scams, it’s time for organizations to retire traditional passwords altogether.

“In the spirit of World Password Day, we must double down on access solutions that eliminate the guesswork and the risk. Passwordless authentication, like biometrically protected passkeys and secure device-based login, not only strengthens security but also improves the user experience. Organizations must embrace a future where identity is both frictionless and fundamentally more secure.”


Denny LeCompte, CEO of Portnox

“World Password Day serves as an annual reminder of a universal truth: passwords are a pain. Despite being a cornerstone of our digital lives, they consistently fall short. From the widespread practice of password reuse—a virtual invitation to cyber-criminals—to the ease with which they can be compromised through social engineering or simple guessing, the inherent weaknesses of password-based authentication are undeniable.

“While Multi-Factor Authentication (MFA) has been lauded as a critical security layer, our recent findings indicate a growing unease among security leaders. A staggering 99 percent of CISOs worry that MFA alone doesn’t adequately protect their organizations, with concerns amplified in younger companies. The consensus is clear: 100 percent believe MFA struggles to keep pace with the evolving threat landscape.

“This reality is driving interest in passwordless authentication methods. With compromised passwords implicated in a significant majority (81 percent) of breaches, the appeal of eliminating them entirely is obvious. While only a small fraction (7 percent) of organizations have fully embraced passwordless solutions, a substantial number (32 percent) have begun or completed implementation, and a further 63 percent are actively planning or open to adoption.

“The benefits are compelling: over half of CISOs anticipate stronger access control and an improved employee experience. However, challenges such as cost, complexity, and potential user resistance need to be addressed for widespread adoption.

“The journey towards a more secure, passwordless future requires a strategic approach. Organizations must prioritize robust identity verification processes, such as certificate-based authentication, and embrace a Zero Trust security model. Continuous risk assessment, employee education, and a strong security culture are also crucial components.

“While passwords may not disappear overnight, the momentum towards passwordless authentication is building. World Password Day is an opportune time to acknowledge the password headache and explore and embrace the promising alternatives that can truly enhance our digital security. The future of access is increasingly looking less like a complex string of characters and more like a seamless, secure experience.”


Melissa Bischoping, Head of Security Research at Tanium

“On this World Password Day, it’s worth reflecting on how far we’ve come, and how far we still need to go in securing our digital identities. The humble password has been a cornerstone of how we access data and technology since 1961, when MIT’s Compatible Time-Sharing System (CTSS) became the first system to leverage modern passwords for safeguarding access to private files. In the 64 years since, passwords have evolved in length, complexity, and character requirements, but despite these advancements, they’ve also introduced layers of complexity to the user experience, resulting in a more burdensome method of securing identity and file access.

“Today, the average user manages 80-100 passwords, more than most of us can possibly keep track of. As a result, we’ve entered the era of password managers, in other words, one ‘super password’ to secure all the others. On the surface, this is a major step forward in usability (and an essential method to encourage users to use complex, unique passwords for every account), but we’re still not getting it quite right when it comes to password security. Here are a few key tips to strengthen password security.

For software providers:

  • MFA should be mandatory and not locked behind a premium subscription tier.
  • All apps should enable single-sign-on (SSO) by default for easier management of secure accounts.
  • Don’t make it unnecessarily difficult to update or change credentials; this will make the user more likely to stick to the outdated, weaker password.
  • Software providers should spend more time on meaningful user experience research and design for password management.

For technology users:

  • Secure your primary password with additional levels of protection like robust, phishing-resistant MFA.
  • Use at least one form of MFA; for most users, any MFA is better than none.
  • For better security, use passkeys or hardware tokens (like Yubikeys) over passwords paired with SMS-based MFA.
  • Take advantage of password manager features like password audits, reuse detection, and breach alerts.
  • Review your cell phone provider’s offerings for additional layers of security to prevent a SIM-swapping attack.
  • Review your email provider’s additional security features that can be enabled; this is especially important since email accounts are often used as a password recovery option for OTHER accounts.
  • Using more secure alternatives, like passkeys, in modern operating systems and apps can help less-technical family and friends adopt stronger data protections.
  • Regularly check the security of SSO accounts used for logging into platforms like Google, Facebook, and Apple ID. An attacker can use these individual accounts as the ‘keys to the kingdom,’ so they warrant additional protections.”
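The ‘super password’ model Bischoping describes, one master secret protecting a vault of others, typically rests on a key-derivation function that stretches the master password into an encryption key. A minimal sketch with Python's stdlib PBKDF2 (the function name and iteration count are illustrative, not any vendor's actual scheme):

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a master password into a 32-byte vault encryption key (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # stored next to the vault; it need not be secret
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32
# Same password + same salt: same key. A fresh salt yields an unrelated key.
assert key == derive_vault_key("correct horse battery staple", salt)
assert key != derive_vault_key("correct horse battery staple", os.urandom(16))
```

The high iteration count is the point: it makes offline guessing of the one master password expensive, which is why vault formats pair it with a deliberately slow KDF rather than a plain hash.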

Carla Roncato, VP of Identity at WatchGuard

“Today, it’s not just careless password reuse or weak combinations that pose a threat—it’s the industrial-scale theft and sale of login data. Credentials are harvested through phishing, malware, and breaches, then packaged, sold, and exploited at astonishing speed. A single leaked password doesn’t just unlock one account; it can be a skeleton key to an entire digital identity.

“Dark web marketplaces function with the efficiency of e-commerce platforms, complete with customer service and user reviews. For as little as a few dollars, attackers can purchase verified credentials tied to financial services, corporate VPNs, or personal email accounts. Once inside, they move laterally, escalate privileges, and often remain undetected for weeks or months.

“On this World Password Day, the question is no longer ‘Are your passwords strong enough?’ but ‘Do you know if your credentials are already out there?'”

“Organizations must treat credential exposure as a threat to be hunted and mitigated, not just a hygiene issue. That means proactive monitoring of the dark web, real-time alerting on compromised credentials, and an incident response plan that assumes breach, not just tries to prevent it. Cyber-criminals have evolved. It’s time our mindset around password security evolves, too.”


Munu Gandhi, President of IT Solutions at Xerox

“On World Password Day, I encourage every organization to prioritize strong password protocols as a critical part of cybersecurity. At Xerox, we’re committed to Zero Trust principles—using multi-factor authentication, regular updates, and user education to protect data wherever it’s accessed. Strong passwords aren’t just good practice, they’re essential to keeping your business secure.”


Kern Smith, VP of Global Solutions at Zimperium

“World Password Day is a timely reminder: passwords are only as strong as the device they’re stored on. As cyber-criminals adopt a mobile-first attack strategy, mobile devices have become the front door to corporate access—and a primary target. Through mishing (mobile-targeted phishing), malware, and other tactics, attackers steal credentials by compromising the mobile endpoint. Strong passwords matter, but without securing the device, they’re not enough. Organizations need mobile-specific protection to detect and stop threats before credentials and critical data are exposed.”


The post World Password Day Quotes from Industry Experts in 2025 appeared first on Best Information Security SIEM Tools, Software, Solutions & Vendors.
