Financial services institutions have strong reasons to ensure that the software code they use is secure. An obvious motivation, of course, is the need to protect their data from a breach, which can be costly in terms of leaked PII, cleanup expenses, fines and restitution, and damage to an organization’s reputation. Another ongoing reason is the need to stay in compliance with an array of industry and government regulations.
Because they handle so much sensitive personal and financial information, as well as people’s money, financial services is a highly regulated industry. Depending on the services they provide, companies must comply with a mix of rules and requirements.
The Payment Card Industry Data Security Standard (PCI DSS) is well-known for its rules on protecting cardholder data, for instance. Requirements in the Sarbanes-Oxley Act govern financial records management. Companies that operate internationally are familiar with the Digital Operational Resilience Act (DORA), a binding risk management framework, and the global standards for fund transfers established by the Society for Worldwide Interbank Financial Telecommunication, known as Swift.
And laws such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR) set requirements for protecting the privacy and personal information of customers. There are others, such as regulations set by the U.S. Office of the Comptroller of the Currency (OCC) and the European Central Bank (ECB).
If that’s not enough, the National Cybersecurity Strategy states among other things that software makers should be held responsible for ineffective software security. And the Cybersecurity and Infrastructure Security Agency (CISA), alongside 17 U.S. and international partners, has issued guidance on Secure-By-Design principles for software development.
A common thread for financial services companies is that secure code is a critical element in meeting the goals of those regulations, and it can make demonstrating compliance with them easier. It also underscores why developers need training and upskilling to ensure that security is applied to code from the start of the development process.
As an example of how that works, let’s look at the newest version of PCI DSS.
Secure coding (and developer training) is at the core of PCI DSS 4.0
PCI DSS 4.0, which became mandatory as of April 1, 2024, includes several substantial updates to PCI DSS 3.2.1—not least of which is an emphasis on the role developers play in ensuring secure software code.
PCI has long recognized the importance of secure software. In 2017, for example, it released guidance for developers on ensuring secure transactions on mobile devices. Version 4.0 now emphasizes applying security best practices to software development and includes specific guidance on developer training.
The requirements are often broadly stated, though companies may want to implement a more thorough approach.
For example, one requirement from Version 4.0 states that the “processes and mechanisms for developing and maintaining secure systems and software are defined and understood.” A good way to ensure that is the case is for developers to receive targeted training in the programming languages and frameworks they’re using, filling any knowledge gaps.
Another requirement states that developers working on bespoke and custom software must receive training at least once every 12 months, covering:
- Software security relevant to their job function and development languages.
- Secure software design and coding techniques.
- How to use security testing tools—if they are being used—to detect vulnerabilities in software.
In reality, however, once a year isn’t frequent enough to address core security issues and break bad coding habits. Training should be continuous and measured, with a skills verification process to ensure it is being put to good use.
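To make “secure software design and coding techniques” concrete, here is a small sketch of one classic lesson such training covers: SQL injection, and the parameterized query that prevents it. The table and function names are illustrative, using Python’s built-in sqlite3 module:

```python
import sqlite3

def find_account_unsafe(conn, account_id):
    # Vulnerable: user input is concatenated into the SQL string,
    # so input like "1 OR 1=1" matches every row in the table.
    query = "SELECT owner, balance FROM accounts WHERE id = " + str(account_id)
    return conn.execute(query).fetchall()

def find_account_safe(conn, account_id):
    # Safe: a parameterized query binds the input strictly as data,
    # so it can never change the structure of the SQL statement.
    query = "SELECT owner, balance FROM accounts WHERE id = ?"
    return conn.execute(query, (account_id,)).fetchall()

# Hypothetical sample data for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, "alice", 100.0), (2, "bob", 250.0)])

print(find_account_unsafe(conn, "1 OR 1=1"))  # leaks every account
print(find_account_safe(conn, "1 OR 1=1"))    # returns nothing
```

Recognizing patterns like the first function, and reaching for the second by habit, is exactly the kind of skill that frequent, hands-on training builds in a way an annual session cannot.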
PCI DSS 4.0 includes more than a half-dozen other requirements that address areas such as preventing and mitigating various types of attacks, documenting third-party software components, identifying and managing vulnerabilities, and other security steps. In every case, organizations would be wise to thoroughly pursue those measures. Training on attack vectors should be frequent, rather than yearly. Third-party components, frequently a source of vulnerabilities, should be carefully inventoried through a Software Bill of Materials (SBOM) program. And organizations should be sure to have clearly defined roles for managing vulnerabilities.
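As a sketch of what an SBOM actually records, here is a minimal CycloneDX-style component inventory assembled in Python. The component names and versions are hypothetical, and in practice SBOMs are generated by build tooling rather than written by hand:

```python
import json

# Minimal CycloneDX-style SBOM: one inventory entry per third-party
# component. Names and versions below are illustrative only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "log4j-core", "version": "2.17.1"},
    ],
}

print(json.dumps(sbom, indent=2))
```

An inventory like this is what lets an organization answer, quickly, whether a newly disclosed vulnerability in a third-party library affects any of its software.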
The new version also gives organizations some flexibility in meeting its requirements, focusing on outcomes rather than checking boxes, while adding new requirements for authentication controls, password lengths, shared accounts and other factors.
Compliance starts at the beginning of the SDLC
What these regulations have in common is that they attempt to set high standards for protecting data and transactions in the financial services industry. And, as the latest PCI DSS version shows, they increasingly emphasize the importance of secure code at the beginning of the software development lifecycle (SDLC). The National Cybersecurity Strategy and CISA’s Secure by Design principles likewise place responsibility for security with the makers of software, before it ships, so that even companies not directly governed by financial services regulations are expected to build security in.
Organizations need to bridge the gap between DevOps teams (who are focused on speed of development) and AppSec teams (who are rushing to bring security into the process) by training developers to make security inherent to their approach. Building that skill set requires more than yearly training sessions or static, one-size-fits-all educational programs can provide. Training should be continuous and agile, built to engage learners with active, real-world scenarios and delivered in small bursts that fit their work schedules.
Financial services companies need to ensure security both to protect against breaches and to stay compliant with increasingly stringent regulations. The costs of non-compliance can be as damaging as a breach, in terms of both reputation and money. IBM’s Cost of a Data Breach Report 2023, for example, found that organizations with a high level of non-compliance faced, on average, fines and other costs totaling $5.05 million, more than half a million dollars above the average cost of an actual data breach.
The continued growth of cloud-based environments and digital transactions has made the security of software in the financial services industry of paramount importance, something that regulations such as PCI DSS recognize. The best way to ensure secure software is through secure coding at the start of the SDLC. And the way to get there is by effectively training developers in secure coding practices.
For a comprehensive overview of how secure coding can help ensure success, security, and profits for financial services companies, read the newly released Secure Code Warrior e-book: The ultimate guide to security trends in financial services.
Check out the Secure Code Warrior blog pages for more insight about cybersecurity, the increasingly dangerous threat landscape, and to learn about how you can employ innovative technology and training to better protect your organization and your customers.