Cybersecurity and Artificial Intelligence Regulation


Cybersecurity regulation has evolved steadily over the years, a necessity as technology keeps advancing and being put to new, and sometimes alarming, uses. Just as the Health Insurance Portability and Accountability Act (HIPAA) dictates who may view patient information and how that information must be protected digitally, cybersecurity regulation tells administrators how to properly protect and manage information technology and computer systems. With the emergence of Artificial Intelligence (AI), it is worth reflecting on how these regulations have evolved over time so that organizations can stay proactive in maintaining their security posture.

A recent regulatory change that affects many companies came from the U.S. Securities and Exchange Commission (SEC). The rule requires companies to file a cybersecurity incident disclosure form within four business days of determining that an incident is material (1). A material incident is one that has a significant impact on a company's operations, finances, or reputation; for example, an unknown user exfiltrating company data with stolen company credentials would qualify.
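To make the four-business-day window concrete, here is a minimal Python sketch of the deadline arithmetic. It is an illustration, not legal guidance: it counts only weekdays, and the function name and the holiday handling (omitted here) are assumptions rather than anything specified by the SEC rule.

```python
from datetime import date, timedelta

def disclosure_deadline(determination: date, business_days: int = 4) -> date:
    """Count forward the given number of business days (Mon-Fri) from
    the date an incident is determined to be material. Federal holidays
    would also pause the clock in practice; omitted for brevity."""
    deadline = determination
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return deadline

# An incident deemed material on Thursday, Sept. 5, 2024 would need to
# be disclosed by the following Wednesday, since the weekend doesn't count.
print(disclosure_deadline(date(2024, 9, 5)))  # 2024-09-11
```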

This imposes a strict deadline on the cybersecurity professionals affected by the change: response, triage, analysis, and reporting must all be completed within a tight timeframe to address the incident. The overall goal of the rule is to inform shareholders of incidents that affect company value. It also benefits consumers, since more organizations must now disclose material incidents to their shareholders, and those disclosures are available to the public.

AI will play an important role in how we respond to material cybersecurity incidents. AI is not yet as heavily regulated as other facets of computing, and its use has become widespread, even among cybercriminals. One malicious application was a "ChatGPT"-style clone trained on malware-focused data (3), which was then used to spread phishing and malware attacks with little effort. Some tools for detecting AI-generated content are already publicly available, and the White House has called for protecting Americans from AI-enabled fraud by establishing standards and best practices for detecting AI-generated content and authenticating official content (2). This helps set the baseline for future regulation and law governing the use of AI.
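To illustrate the "authenticating official content" idea, here is a minimal sketch of integrity tagging using a shared-secret HMAC. This is a simplified stand-in of my own, not anything mandated by the executive order: the key name and helper functions are hypothetical, and real provenance schemes would use asymmetric signatures and published content-credential standards rather than a shared secret.

```python
import hmac
import hashlib

# Hypothetical key material; real schemes would use an asymmetric
# keypair so verifiers never hold the signing secret.
SECRET_KEY = b"agency-signing-key"

def tag_official_content(message: bytes) -> str:
    """Attach an integrity tag so recipients can check that the content
    originated from the holder of the signing key and was not altered."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_official_content(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_official_content(message), tag)

release = b"Official statement: systems restored as of 09:00 UTC."
tag = tag_official_content(release)
assert verify_official_content(release, tag)          # authentic content
assert not verify_official_content(b"tampered", tag)  # altered content fails
```

The point of the sketch is the workflow, not the primitive: official content is published alongside a verifiable tag, so AI-generated forgeries that lack a valid tag can be flagged.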

1. U.S. Securities and Exchange Commission. "Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure." 19 Apr. 2024, www.sec.gov/corpfin/secg-cybersecurity.

2. The White House. "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence." 30 Oct. 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. Accessed 6 Sept. 2024.

3. Toulas, Bill. "Cybercriminals Train AI Chatbots for Phishing, Malware Attacks." BleepingComputer, 1 Aug. 2023, www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/. Accessed 6 Sept. 2024.
