2024 State of Cybersecurity Survey Report from ISACA on AI Policy

Organizations adopting a generative AI policy can ask themselves a set of key questions to ensure they are covering their bases, according to ISACA.

According to the recently released 2024 State of Cybersecurity survey report from ISACA, a global professional association advancing trust in technology, only 27 percent of cybersecurity professionals or teams in India are involved in developing policy governing the use of AI technology in their enterprise, and half (50 percent) report no involvement in the development, onboarding, or implementation of AI solutions.

Security teams in India responded to new questions in the annual study, sponsored by Adobe, which reflects the views of more than 1,800 cybersecurity professionals worldwide on topics related to the cybersecurity workforce and threat landscape.

They reported that they primarily use AI for:

  • Endpoint security (31 percent)
  • Automating threat detection/response (29 percent)
  • Automating routine security tasks (27 percent)
  • Fraud detection (17 percent)

Examining the Most Recent Advances in AI

Beyond the findings of the 2024 State of Cybersecurity survey report, ISACA has been developing AI resources to help cybersecurity and other digital trust professionals navigate this transformative technology:

  • EU AI Act white paper: Enterprises need to be aware of the timeline and action items involved with the EU AI Act, which puts requirements in place for certain AI systems used in the European Union and bans certain AI uses—most of which will apply beginning 2 August 2026. ISACA’s new white paper, Understanding the EU AI Act: Requirements and Next Steps, recommends some key steps, including instituting audits and traceability, adapting existing cybersecurity and privacy policies and programs, and designating an AI lead who can be tasked with tracking AI tools in use and the enterprise’s broader approach to AI.
  • Authentication in the deepfake era: Cybersecurity professionals should be aware of both the advantages and risks of AI-driven adaptive authentication, says the new ISACA resource, Examining Authentication in the Deepfake Era. While AI can enhance security through adaptive authentication systems that adjust to each user’s behavior, making it harder for attackers to gain access, AI systems can also be manipulated through adversarial attacks, are susceptible to algorithmic bias, and can raise ethical and privacy concerns. Other developments, including research into integrating AI with quantum computing that could have implications for cybersecurity authentication, should be monitored, according to the paper.
  • AI policy considerations: Organizations adopting a generative AI policy can ask themselves a set of key questions to ensure they are covering their bases, according to ISACA’s Considerations for Implementing a Generative Artificial Intelligence Policy—including “Who is impacted by the policy scope?”, “What does good behavior look like, and what are the acceptable terms of use?” and “How will your organization ensure legal and compliance requirements are met?”

Advancing AI Knowledge and Skills

ISACA also has added to its education and credentialing options to help the professional community keep pace with the changing AI and cybersecurity landscape: 

  • Machine Learning: Neural Networks, Deep Learning, Large Language Models— ISACA’s latest on-demand AI course, which joins the recent Machine Learning for Business Enablement course, as well as others on topics such as AI essentials, governance, ethics and audit, can be accessed through ISACA’s online portal at the learner’s convenience and offers continuing professional education (CPE) credits. The courses are available at isaca.org/ai.
  • Certified Cybersecurity Operations Analyst— As emerging technologies like automated systems using AI evolve, the role of the cyber analyst will become more critical in protecting digital ecosystems. ISACA’s upcoming Certified Cybersecurity Operations Analyst certification, launching in Q1 2025, focuses on the technical skills to evaluate threats, identify vulnerabilities, and recommend countermeasures to prevent cyber incidents.

Key Comments

“In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring,” says Jon Brandt, ISACA Director, Professional Practices and Innovation. “But cybersecurity leaders cannot singularly focus on AI’s role in security operations. It is imperative that the security function be involved in the development, onboarding and implementation of any AI solution within their enterprise – including existing products that later receive AI capabilities.”

“AI is promising for enhancing cybersecurity operations, but for the benefits to be fully realized, cybersecurity teams must be integrated in the AI governance process. The fact that only 27 percent of these teams in India are currently involved in AI policy-making is a missed opportunity to ensure that AI is implemented securely and responsibly,” says RV Raghu, director, Versatilist Consulting India Pvt Ltd, and ISACA India Ambassador. “There is an urgent need for organizations to rethink how they integrate cybersecurity professionals in AI decision-making. The strategic importance of collaboration between AI and cybersecurity experts must not be overlooked by organizations.”

A complimentary copy of ISACA’s 2024 State of Cybersecurity survey report can be accessed here.
