NYDFS’s Adrienne Harris Issues Warning on AI Security Risks

Adrienne Harris
New York State Superintendent of Financial Services Adrienne Harris says banks need to ensure their use of AI complies with all applicable cybersecurity regulations.

Christophe Goodney/Bloomberg

On Wednesday, the New York State Department of Financial Services, or NYDFS, issued guidance highlighting four of the main risks AI poses to cybersecurity in the financial services industry and the mitigation measures businesses must put in place under state regulations.

The four risks highlighted by the department relate both to how malicious actors can use AI against businesses (AI-based social engineering and AI-enhanced cybersecurity attacks) and to the threats posed by the use of and reliance on AI (exposure or theft of non-public information and increased vulnerabilities due to supply chain dependencies).

The six mitigation examples the department highlighted in its letter to industry will be familiar to many cybersecurity and risk professionals, among them risk-based programs and policies, vendor management, security and access controls, and cybersecurity training. However, their mention in the guidance is noteworthy because the department has directly linked these practices to the requirements set out in its regulations.

Adrienne A. Harris, the department’s superintendent, acknowledged that while the guidance focused on the risks of AI, the technology also offered financial institutions an opportunity to improve their cybersecurity.

“AI has enhanced the ability of businesses to improve their threat detection and incident response strategies, while simultaneously creating new opportunities for cybercriminals to commit crimes at a larger scale and faster,” Harris said in a press release. “New York will continue to ensure that as AI-based tools become more prolific, security standards remain rigorous to protect critical data, while allowing the flexibility to meet various risk profiles in a constantly evolving digital landscape.”

How bad actors are using AI against banks

Social engineering, which relies on manipulating people to break into a system rather than exploiting more technical vulnerabilities, has long been a concern in the cybersecurity field. Many companies, including KnowBe4, Fortinet, the SANS Institute and others, offer security awareness training programs that aim to mitigate the threat of social engineering by teaching employees to recognize the signs that they are being targeted by such a campaign.

One of the factors that separates the most dangerous social engineering campaigns from the rest is how realistic a campaign appears, and interactivity is one of the keys to that realism. AI has improved threat actors' ability to present a more convincing facade through deepfakes, according to the NYDFS guidance.

One example cited in the guidance occurred in February, when an employee working for the Hong Kong branch of a multinational company transferred $25 million to fraudsters after being tricked into joining a video conference in which every other participant was an AI-generated deepfake, including one posing as the company's CFO. As a result, the employee made 15 transfers to five local bank accounts, according to local media reports.

According to the NYDFS guidance, AI can also enhance the technical capabilities of threat actors, enabling less technically proficient actors to launch attacks on their own and improving the effectiveness of those who are more skilled, for example by accelerating the development of malware. In other words, AI can help threat actors at almost every stage of an attack, including in the middle of an intrusion.

“Once inside an organization’s information systems, AI can be used to perform reconnaissance to determine, among other things, how best to deploy malware and access and exfiltrate non-public information,” the guidance says.

How Banks’ Reliance on AI May Pose a Threat

A malicious actor does not need to infiltrate a bank's IT systems to steal its data; it can also steal data from third parties to which the bank has entrusted it. Indeed, this is a tactic bad actors have increasingly used to steal consumer data, even apart from the rise of AI.

So-called third-party risks and supply chain vulnerabilities are a common concern among banks and regulators, and AI amplifies these concerns.

“AI-based tools and applications rely heavily on the collection and maintenance of large amounts of data,” the NYDFS guidance states. “The process of collecting this data often involves working with third-party vendors and service providers. Each link in this supply chain introduces potential security vulnerabilities that can be exploited by malicious actors.”

Due to the vast amounts of data that banks and third parties must collect to enable and improve their AI models, NYDFS has also highlighted exposure or theft of these vast troves as a risk of relying on AI.

“Retaining non-public information in large quantities poses additional risks to covered entities that develop or deploy AI, because they need to protect significantly more data and malicious actors have a greater incentive to target these entities to extract non-public information for financial gain or other nefarious purposes,” the guidance states.

Six strategies to mitigate risks

The NYDFS guidance highlighted the need for financial services companies to put into practice the principle of defense in depth, which is jargon for layered security. This practice ensures that where one control fails or insufficiently reduces risk, another control can provide the necessary protection.

From a compliance perspective, the first and most important measures that banks operating in New York can implement are cybersecurity risk assessments. This is one of the most critical aspects of the NYDFS Cybersecurity Regulation, also known as Part 500, which the department last amended in November 2023.

The Cybersecurity Regulation requires banks to maintain programs, policies and procedures based on these risk assessments, which, according to the guidance, “must take into account the cybersecurity risks faced by the covered entity, including deepfakes and other threats posed by AI, to determine what defensive measures they should implement.”

The cybersecurity regulation also requires banks operating in the state to “establish, maintain, and test plans containing proactive measures to investigate and mitigate cybersecurity events,” such as data breaches or ransomware attacks. Again, NYDFS guidance states that AI risks should be considered in these plans.

Second, the NYDFS “strongly recommends” that each bank consider, among other factors, the threats its third-party service providers face from the use of AI and how those threats could be exploited against the bank itself. Efforts to mitigate these threats could include requiring third parties to take advantage of available enhanced privacy, security and confidentiality options, according to the guidance.

Third, banks must implement multi-factor authentication, which the Cybersecurity Regulation requires all banks to use by November 2025. The department has said previously that multi-factor authentication is “one of the most effective ways to reduce cyber risks.” Indeed, just as multiple layers of security protect a bank's systems, multiple layers of authentication protect accounts (whether user accounts or employee accounts) from unauthorized access.
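The guidance does not prescribe any particular implementation, but as a rough sketch of what a time-based second factor looks like in code, the Python example below uses the open-source pyotp library to generate and verify one-time passcodes. The enrollment flow and secret handling shown here are simplified assumptions for illustration, not requirements drawn from the NYDFS rules.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# In practice the per-user secret would live in a secure credential store,
# not in application memory as shown here.
import pyotp


def provision_user_secret() -> str:
    """Generate a new base32 secret to enroll a user in TOTP."""
    return pyotp.random_base32()


def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted code matches the current time window."""
    totp = pyotp.TOTP(user_secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)


if __name__ == "__main__":
    secret = provision_user_secret()
    current_code = pyotp.TOTP(secret).now()
    print("Current code:", current_code)
    print("Verified:", verify_second_factor(secret, current_code))
```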

Fourth, the department reminded banks of the need to provide “cybersecurity training for all staff” at least once a year, and that training must cover social engineering, another requirement set out in the Cybersecurity Regulation. This ensures that bank staff know how threat actors can use AI to improve their campaigns.

“For example, training should address the need to verify the identity of a requester and the legitimacy of the request if an employee receives an unexpected money transfer request by phone, video or email,” the guidance states.

Fifth, covered entities “must have a monitoring process in place” that can quickly identify new security vulnerabilities so they can be addressed promptly. The guidance reminds banks that the Cybersecurity Regulation requires them to monitor user activity (mainly that of employees), including email and web traffic, in order to block malicious content and protect against the installation of malicious code.

“Covered entities that use AI-enabled products or services, or that allow personnel to use AI applications such as ChatGPT, should also consider monitoring for unusual query behavior that might indicate an attempt to extract NPI, and blocking queries from personnel that could expose NPI to a public AI product or system,” the guidance states.
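As a loose illustration of that kind of monitoring, the sketch below screens a staff prompt for a few hypothetical NPI patterns (Social Security, account and routing numbers) before it would be handed off to an external AI service. The patterns, threshold and blocking logic are illustrative assumptions, not anything specified in the guidance.

```python
# Illustrative pre-submission filter for staff prompts to a public AI tool.
# The regexes below are simplified stand-ins for real NPI detection.
import re

NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
    "routing_number": re.compile(r"\b\d{9}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of NPI patterns found in a prompt."""
    return [name for name, pattern in NPI_PATTERNS.items() if pattern.search(prompt)]


def submit_if_clean(prompt: str) -> bool:
    """Block and log prompts that appear to contain NPI; allow the rest."""
    hits = screen_prompt(prompt)
    if hits:
        print(f"Blocked: prompt appears to contain {', '.join(hits)}")
        return False
    print("Prompt allowed")  # hand off to the AI service here
    return True


if __name__ == "__main__":
    submit_if_clean("Summarize our Q3 fraud trends")
    submit_if_clean("Why was account 123456789012 flagged?")
```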

Sixth and finally, the guidance recommends that banks use effective data management practices. One important example is disposing of data when it is no longer needed for business operations. This practice is required by the department's regulations, and from November 2025 banks will also have to maintain and update data inventories. These inventories “should” include identifying all information systems that rely on or use AI, according to the guidance.
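The guidance leaves the format of such inventories open, but a minimal sketch of an inventory record that flags AI-reliant systems and NPI holdings might look like the following. All field names here are hypothetical illustrations, not a schema drawn from Part 500.

```python
# Illustrative data-inventory record; field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SystemInventoryEntry:
    system_name: str
    data_owner: str
    contains_npi: bool
    uses_ai: bool                  # flags systems that rely on or use AI
    retention_review_date: date    # next check that retained data is still needed
    third_parties: list[str] = field(default_factory=list)


inventory = [
    SystemInventoryEntry(
        system_name="fraud-scoring-model",
        data_owner="Risk Analytics",
        contains_npi=True,
        uses_ai=True,
        retention_review_date=date(2025, 11, 1),
        third_parties=["cloud-ml-vendor"],
    ),
]

# Example query: which AI-reliant systems hold NPI and need a retention review?
ai_npi_systems = [e.system_name for e in inventory if e.uses_ai and e.contains_npi]
print(ai_npi_systems)
```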