Spotlight on regulatory reporting data quality: industry challenges and regulatory expectations May 2017
Focus on data quality
In focus:
• Regulatory reporting expectations and requirements continue to evolve, as evidenced in regulatory guidance (CCAR ROPE, SR 15-18, BCBS 239, CFO Attestation requirements, and recent FR Y-14Q/M horizontal reviews and CCAR feedback).
• Increased regulatory scrutiny has been placed on the completeness and accuracy of data, including conformance with reporting requirements.
• Transaction testing to support data quality continues to be a recurring theme in regulatory feedback across major reports (e.g., FR Y-14Q/M, FR Y-9C, FFIEC 009), including data governance and accountability.
• This brief focuses on data quality regulatory expectations, challenges and key considerations regarding ongoing transaction testing programs as part of an overall data quality framework.
Regulatory expectations

The 2008 financial crisis brought into focus the importance of ongoing transparency and monitoring of financial systems infrastructure. While the standards for regulatory reporting have evolved significantly since the crisis, over the last two years regulators have placed even more reliance on data-driven supervision, with increased granularity, complexity and frequency of regulatory reporting requirements. Regulatory reporting provides the data necessary for regulatory agencies to monitor the safety and soundness of individual financial institutions, monitor systemic risk and capital conditions of the broader banking and financial system, establish and conduct monetary policy, and monitor cross-border financial activity.

Given the regulators' reliance on this information, they have increased their expectations and scrutiny regarding data accuracy and completeness. Regulators are beginning to assess a financial institution's capital adequacy based on the quality of its internal controls, data, governance and effective challenge across the three lines of defense, as highlighted in the CCAR ROPE Guidance (2013), SR 15-18, CFO Attestation requirements, and FR Y-14Q/M horizontal reviews and CCAR feedback. Recently implemented CFO Attestation requirements for large, complex firms, for example, have incorporated SOX-like internal control frameworks, increasing regulatory reporting control standards, accountability and applicability across the reporting process. Additionally, recent horizontal reviews for the FR Y-14Q/M and other regulatory reports have become more targeted in focusing on a bank's data quality. Exam feedback for many institutions has been significant, requiring that banks complete a comprehensive review of their reported data, establish ongoing data monitoring or transaction testing programs, and improve the control environment by focusing on data onboarding/origination.

Challenges

As a result of increased data quality expectations, banks face multiple challenges as they continue to rely on antiquated and disparate systems and processes resulting from multiple mergers and acquisitions. The infrastructure at these firms was not designed to serve regulatory reporting needs, and many operational processes and systems require significant transformation and manual intervention to improve data quality.

Additionally, many regulatory reports require reporting of nonfinancial data (e.g., FR Y-14Q/Ms) as well as granular instrument/transaction-level detail. While established SOX frameworks have provided control standards and mechanisms for monitoring select regulatory reporting data (financial data), this level of rigor is not fully implemented at most banks for much of their nonfinancial data and may not exist through the end-to-end reporting process at the element level. Where banks have established regulatory reporting control frameworks/standards, it is often observed that these are not harmonized across the full suite of risk and regulatory reports, and multiple standards/frameworks have been created in isolation. Last, given the size of many banks and the number of stakeholders involved in regulatory reporting, clear accountability is not always established for reporting data, controls and processes. As reporting data issues persist, it is often difficult to identify and hold responsible stakeholders accountable. Lack of a control framework and accountability can lead to multiple issues, including limited to no challenge around whether reported data is fit for purpose (e.g., regulatory reporting, financial reporting, etc.). It is often observed that banks incorrectly or inconsistently report data across multiple external reports.
Response to improve data quality

To improve regulatory reporting data quality, most banks acknowledge that a multifaceted approach is required. Many are implementing regulatory reporting data quality control frameworks that establish standards across regulatory reporting controls (control standards, data dictionaries, etc.), data governance (data lineage, data profiling, etc.), accountability and training (accountability policy, attestation structure/process), and ongoing testing (control, data lineage and transaction testing). Data quality control frameworks are typically predicated on BCBS 239 principles; however, given the long-term implementation life cycle of such an initiative, the control environment at most banks is still evolving to provide sufficient coverage over the end-to-end production process. Establishing the necessary controls and data quality at the point of origin and at the various transformations that occur throughout the data supply chain has been a continuing focus of the industry and regulators over the past several years. Additionally, CFO Attestation requirements for larger, complex banks have required an accelerated implementation of various aspects of the data quality control framework, including the accountability and assurance from lines of business and other stakeholders that source and own data to support executive attestations.

While all aspects of the framework are critical for improved data quality, recent regulatory feedback has placed an increased emphasis on the need for ongoing testing and monitoring programs. Many banks have existing regulatory reporting monitoring and testing programs; however, most of these programs have been established for a singular purpose (e.g., controls) or concentrated on specific reports. Most recently, regulators have required banks to establish ongoing transaction testing programs to identify existing data quality issues (an effective detective control) and provide ongoing monitoring over reported regulatory data.
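As an illustration of the kind of ongoing, automated monitoring such frameworks call for, the minimal sketch below runs simple completeness and validity checks over a reported data extract and summarizes exceptions. The field names, rules and thresholds are hypothetical; in practice they would be driven by the report instructions and the bank's own data dictionary.

```python
# Minimal sketch of an automated data quality "detective control" over a
# reported data extract. Field names and rules are hypothetical; in practice
# they come from the report instructions and the bank's data dictionary.

REQUIRED_FIELDS = ["loan_id", "origination_date", "committed_amount", "facility_type"]
VALID_FACILITY_TYPES = {"REVOLVER", "TERM_LOAN", "LETTER_OF_CREDIT"}  # illustrative

def check_record(record: dict) -> list[str]:
    """Return a list of data quality exceptions for a single reported record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"{field}: missing value")
    amount = record.get("committed_amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("committed_amount: negative value")
    if record.get("facility_type") not in VALID_FACILITY_TYPES:
        issues.append(f"facility_type: unexpected value {record.get('facility_type')!r}")
    return issues

def profile_extract(records: list[dict]) -> dict:
    """Summarize exceptions across the extract so trends can be monitored over time."""
    exceptions = {}
    for i, rec in enumerate(records):
        key = rec.get("loan_id") or f"row_{i}"
        issues = check_record(rec)
        if issues:
            exceptions[key] = issues
    return {"records_tested": len(records),
            "records_with_exceptions": len(exceptions),
            "exceptions": exceptions}

if __name__ == "__main__":
    sample_extract = [
        {"loan_id": "L001", "origination_date": "2016-07-01",
         "committed_amount": 2_500_000, "facility_type": "REVOLVER"},
        {"loan_id": "L002", "origination_date": "",
         "committed_amount": -100, "facility_type": "BRIDGE"},
    ]
    print(profile_extract(sample_extract))
```

Run on a recurring schedule against each reporting cycle's extract, checks of this kind provide the ongoing monitoring element of the framework; transaction testing, discussed next, goes a step further by tracing reported values back to source documentation.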
Regulatory reporting transaction testing

Transaction testing overview
Transaction testing is a mechanism for identifying and monitoring inaccurate or misclassified financial and nonfinancial data and has been part of regulatory exams for at least the last decade. Mechanically speaking, transaction testing compares reported data back to underlying source documentation (e.g., loan contracts, trade tickets and other documentation produced at the inception of the bank's transaction) and identifies discrepancies in reported data elements or in the classification of the underlying transaction in conformance with the report's instructions. The transaction testing approach may vary depending on the type of report and information tested. Many institutions, on their own initiative or prompted by regulatory feedback, have established or are in various stages of establishing ongoing transaction testing programs.
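As a minimal illustration of the comparison described above, the sketch below traces reported data elements back to values evidenced in source documentation and flags discrepancies. The element names, tolerance and sample transaction are hypothetical and are intended only to show the shape of the test, not any particular report's requirements.

```python
# Minimal sketch of the core transaction testing comparison: reported data
# elements are traced back to values captured from source documentation
# (e.g., the loan agreement or credit memo) and discrepancies are logged.
# Element names, tolerance and the tested transaction are hypothetical.

from dataclasses import dataclass

@dataclass
class TestResult:
    transaction_id: str
    element: str
    reported: object
    source: object
    match: bool

def test_transaction(transaction_id: str, reported: dict, source: dict,
                     amount_tolerance: float = 0.01) -> list[TestResult]:
    """Compare each reported element to the value evidenced in source documents."""
    results = []
    for element, reported_value in reported.items():
        source_value = source.get(element)
        if isinstance(reported_value, (int, float)) and isinstance(source_value, (int, float)):
            match = abs(reported_value - source_value) <= amount_tolerance
        else:
            match = reported_value == source_value
        results.append(TestResult(transaction_id, element, reported_value, source_value, match))
    return results

if __name__ == "__main__":
    reported = {"committed_amount": 1_000_000, "maturity_date": "2021-06-30",
                "collateral_type": "CRE"}
    source = {"committed_amount": 1_000_000, "maturity_date": "2022-06-30",
              "collateral_type": "CRE"}
    for r in test_transaction("L003", reported, source):
        status = "OK" if r.match else "EXCEPTION"
        print(f"{r.transaction_id} {r.element}: reported={r.reported} source={r.source} [{status}]")
```

In practice, the "source" side of this comparison is assembled manually or semi-automatically from retrieved documents, and exceptions feed root cause analysis and issue management, as discussed later in this brief.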
Data issues identified by transaction testing

Transaction testing assists in identifying internal control issues, including those related to operational processes and systems. Using loans and leases as an example, data generally enters the bank through underwriting or origination systems via a team of credit officers or relationship managers who may have little knowledge or awareness of the downstream regulatory reporting implications. While documentation created at the time of origin (e.g., loan agreements, credit memos, appraisal documentation, etc.) may contain all the information needed for complete and accurate reporting, any number of the following challenges and limitations may prevent the data from reaching its final destination:

• Retention of incomplete or insufficient origination or other supporting documentation
• Limited preventive and detective controls (at the point of data entry or elsewhere throughout the data supply chain)
• Outdated systems with limitations on capturing transaction data
• Limited regulatory reporting knowledge and accountability (especially among bank underwriting/operational resources)
• Errors in data transformations or system handoffs as the data moves downstream
• Misinterpretation of regulatory reporting requirements and associated data sourcing/report logic

Transaction testing results can often serve as valuable input into the prioritization of the firm's tactical and strategic initiatives and infrastructure investment. In addition to identifying discrepancies on regulatory reports, effective end-to-end transaction testing can help surface US GAAP, compliance, legal and operational issues that may be systemic in nature and pose significant risk to the firm.
Transaction testing implementation considerations

Banks are facing many challenges as they begin to implement ongoing transaction testing programs. Despite regulators' reliance on transaction testing as part of regulatory reporting examinations during the past decade, only a few banks have developed an in-house transaction testing program. As a result, there is limited experience from which the industry can draw to design testing standards, establish risk prioritization frameworks and assign ownership. Included below are key implementation considerations and industry observations noted as banks begin to develop ongoing transaction testing programs:
• Functional ownership: No clear trend has emerged within the industry with regard to transaction testing program ownership. Often, ownership has been determined based on where available capacity and the necessary skills and subject matter knowledge existed to complete testing. In other instances, the capacity or skills may not have existed and had to be established or “purchased.” Ownership has ranged from residing in quality assurance-type groups within first-line regulatory reporting risk or finance functions (1.5 line functions), to second-line compliance or operational risk functions, to third-line internal audit groups. In some cases, the testing program was established in one functional group (given the urgency to quickly stand up a program) but later transitioned to another area within the bank. As banks face the challenge of transaction testing ownership, they should consider the testing resources, skill sets, infrastructure and access to bank systems and data that stakeholders need for sustainable testing.

• Risk prioritization: Similar to other testing programs, regulators expect an ongoing transaction testing program to prioritize testing at a level and frequency commensurate with the bank's materiality framework, complexity, report criticality and existing control environment. To accomplish this objective, many banks have established a transaction testing risk prioritization framework that serves as a mechanism to collect input and risk rank regulatory reports, schedules, line items and data elements. Prioritization frameworks should account for multiple criteria, including balance sheet/income statement impact, known system or data issues and deficiencies, input from regulatory reporting and line of business stakeholders, key business elements (KBEs), and other prioritization criteria (an illustrative scoring and sampling sketch follows this list).
• Cross-report testing strategy: Where already established, current transaction testing programs typically focus on a few key reports; however, banks should consider evolving testing programs toward multi-report testing strategies (e.g., capital, liquidity, consolidated, stand-alone, cross-border exposure, etc.). Cross-report testing strategies would identify common elements by product and those that drive report classification across reports. A common test approach would test multiple reports with a single set of supporting documents and allow for efficiencies across multiple aspects of the testing process (e.g., document retrieval, engaging key stakeholders, executing testing and conducting root cause analysis/impact assessments). This approach would also allow for data remediation to occur at the point of entry for all reports instead of ad hoc remediation.

• Transaction sampling: Once prioritized reports, schedules and line items have been identified and a testing approach has been defined, a set of transaction samples should be selected. Sample selection, especially in the early stages of a transaction testing program, should be predicated on reviewing a wide range of data; it requires a judgment-based approach that considers criteria such as product and borrower type, source system, origination date, dollar amount and other criteria that may be unique to a firm. Over time, as the transaction testing error rate stabilizes, statistical sampling plus some level of judgmental sampling can be considered (see the sketch following this list).

• Document collection/retrieval: Document retrieval is one of the most critical components of an ongoing transaction testing program. Depending on the number of elements tested, a single transaction may require multiple sets of documents (e.g., loan agreement, credit memo, appraisal, income documentation, etc.). Due to legacy acquisitions, most firms have a large range of document formats, types and imaging systems with minimal document standardization and centralization. Significant up-front planning and engagement with key businesses and other stakeholders is necessary to review the testing scope and identify the appropriate documentation needed to validate the reported data. Obtaining a proxy for a small set of transactions prior to collecting the entire sample has proven effective in reducing the time and burden spent retrieving this documentation and in ensuring the appropriate documentation will be provided by the stakeholders.

• Resourcing: Another critical component of a transaction testing program is identifying internal resources that possess the combination of requisite knowledge and skill sets to execute testing while maintaining independence from the functional area that owns the regulatory reporting data. Resources must be able to define and develop report- and schedule-specific testing methodologies and understand key regulatory reporting requirements, instructions and underlying product and source documentation. Many banks do not readily have these resources available, requiring short- to medium-term external support along with rigorous training programs to establish sustainable internal testing resources. Some banks have also considered supplementing transaction testing teams with nearshore or offshore resources (internal and/or external) to reduce the overall cost associated with these programs.
• Technology and automation: As firms seek to scale transaction testing for a particular report or extend testing programs across multiple regulatory reports, more testing efficiencies and automation will be required to make testing sustainable and cost-effective. Some banks have already begun to utilize existing, or implement new, test management platforms to replace manual Excel- or Access-based testing tools, allowing banks to take advantage of workflow and other platform capabilities. Some firms are considering the use of optical character recognition (OCR) or robotics technology to automate certain aspects of transaction testing. These technologies are relatively new and unproven for this purpose across the financial industry; however, select firms are beginning pilot programs focused on portfolios for which contracts and data are relatively standard.

• Linkage to issue management: The final step in the transaction testing process is completing root cause analysis on any identified issues. Root cause analysis allows a bank to determine the source of data quality issues (e.g., insufficient controls, system limitations, errors in reporting logic, etc.) as well as the broader effect on other report types. Errors identified as part of FR Y-14Q/M transaction testing, for example, may affect CCAR loss forecasting models and also affect the accuracy of the FR Y-9C and the FFIEC 101. Banks are often tempted to remediate one-off transaction testing errors without engaging other impacted stakeholders; engaging those stakeholders is necessary for thoughtful, strategic and enterprise-level remediation.
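To make the risk prioritization and transaction sampling considerations above more concrete, the minimal sketch below scores line items against weighted criteria and then draws a judgment-weighted sample from a transaction population. The criteria, weights, line item labels and sample size are hypothetical; in practice they would be driven by the bank's materiality framework, known-issue inventory and stakeholder input.

```python
# Minimal sketch of a transaction testing risk prioritization score and a
# judgment-weighted sample draw. Criteria, weights, labels and sample sizes
# are hypothetical and would come from the bank's materiality framework.

import random

# Illustrative prioritization criteria (each scored 1-5) and their weights.
CRITERIA_WEIGHTS = {
    "balance_sheet_impact": 0.35,
    "known_data_issues": 0.25,
    "stakeholder_input": 0.20,
    "key_business_element": 0.20,
}

def priority_score(scores: dict) -> float:
    """Weighted score used to rank reports, schedules and line items for testing."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0) for c in CRITERIA_WEIGHTS)

def judgmental_sample(transactions: list[dict], sample_size: int, seed: int = 0) -> list[dict]:
    """Draw a sample weighted toward larger balances and flagged source systems.

    Note: random.choices samples with replacement; a production approach would
    typically sample without replacement and layer in statistical sampling
    once error rates stabilize.
    """
    rng = random.Random(seed)

    def weight(txn: dict) -> float:
        w = max(float(txn["balance"]), 1.0)
        if txn.get("source_system_flagged"):
            w *= 3  # oversample source systems with known issues
        return w

    weights = [weight(t) for t in transactions]
    return rng.choices(transactions, weights=weights, k=min(sample_size, len(transactions)))

if __name__ == "__main__":
    # Hypothetical line items with criteria scores.
    line_items = {
        "wholesale loans - committed balance": {"balance_sheet_impact": 5, "known_data_issues": 4,
                                                "stakeholder_input": 3, "key_business_element": 5},
        "consumer loans - other": {"balance_sheet_impact": 2, "known_data_issues": 1,
                                   "stakeholder_input": 2, "key_business_element": 2},
    }
    ranked = sorted(line_items, key=lambda k: priority_score(line_items[k]), reverse=True)
    print("Testing priority:", ranked)

    population = [{"id": f"L{i:03d}", "balance": b, "source_system_flagged": i % 4 == 0}
                  for i, b in enumerate([5e6, 1e5, 2.5e6, 7e5, 9e6, 3e4], start=1)]
    print("Sample:", [t["id"] for t in judgmental_sample(population, sample_size=3)])
```

Over time, as the observed error rate stabilizes, the judgment-weighted draw could be supplemented or replaced with a statistical sampling approach, consistent with the transaction sampling consideration above.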
As regulatory reporting expectations continue to evolve, firms should consider implementing or enhancing ongoing data quality review or transaction testing functions. This brief is intended to provide a regulatory and industry perspective on this topic. For information on transaction testing or any other regulatory reporting topic, please contact one of the practitioners below.
How we see it

Transaction testing can be complex and often requires a different testing approach depending on the report, schedule or product. For example, the testing approach required for the FR Y-14Q Operational Risk schedule is different from the testing approach required for the FR Y-9C or the FFIEC 009. Proper planning and engaging the right stakeholders across a bank's regulatory reporting, line of business and operational teams are critical to the overall success of the transaction testing program. The time required to complete transaction testing and the number of affected parties within the bank should not be underestimated.
Transaction testing should not be viewed as a one-time data review exercise. The evolving regulatory expectation is that banks establish ongoing transaction testing programs across multiple reports and schedules. Not too dissimilar from traditional audit programs, transaction testing programs should establish risk-based frameworks that drive the level of testing and frequency. Annual testing plans should be established with the necessary staff who have the subject matter experience to appropriately execute transaction testing.
Ernst & Young LLP contacts Anita Bafna Partner +1 212 773 3938
[email protected]
Gagan Agarwala Partner +1 212 773 2646
[email protected]
Vadim Tovshteyn Executive Director +1 212 773 3801
[email protected]
Joe Zurenko Senior Manager +1 312 879 2554
[email protected]
Steven Bopp Executive Director +1 212 773 7259
[email protected]
Patricia Maone Senior Advisor +1 212 773 5675
[email protected]
We appreciate contributions from the following individuals who assisted in the development of this report: Rui Bao, Greg Cannella, Lauren Byrnes, Joe Dudley, Harsh Vasavada, and Hannah Wiles
EY | Assurance | Tax | Transactions | Advisory

About EY
EY is a global leader in assurance, tax, transaction and advisory services. The insights and quality services we deliver help build trust and confidence in the capital markets and in economies the world over. We develop outstanding leaders who team to deliver on our promises to all of our stakeholders. In so doing, we play a critical role in building a better working world for our people, for our clients and for our communities.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. For more information about our organization, please visit ey.com.

Ernst & Young LLP is a client-serving member firm of Ernst & Young Global Limited operating in the US.

© 2017 Ernst & Young LLP. All Rights Reserved.
1704-2284522
SCORE No. 03751-171US
ED None

This material has been prepared for general informational purposes only and is not intended to be relied upon as accounting, tax or other professional advice. Please refer to your advisors for specific advice.
ey.com