Increasingly, organizations throughout Europe are expected to comply – and to demonstrate to auditors that they’re complying – with complex, risk-based corporate governance requirements and a range of information-related regulations. This has significant implications for IT governance. Demonstrating compliance at business process and application level is a lot simpler than doing so at the operating system and IT infrastructure level. However, compliance depends on – and can only be assured by – effective security in the IT infrastructure.
So how do companies go about ensuring they have the right processes and levels of security in place to deliver and prove compliance? Let’s first look at the frameworks within which organisations must operate.
The legal & regulatory background
EU-based organizations are subject to an extensive range of information and IT-related regulatory requirements. These fall into four distinct groups: corporate governance requirements (the UK’s Combined Code on Corporate Governance and the US Sarbanes-Oxley Act of 2002); financial requirements (the Financial Services Authority’s Rule Book and associated regulations); Basel 2 requirements; and sector-specific requirements such as the Payment Card Industry (PCI) Standard.
Corporate governance regimes
The UK’s Combined Code is probably the most evolved corporate governance regime in the EU, and companies listed on the London Stock Exchange must comply with it. In addition, companies with listings in the US will also have to comply with the Sarbanes-Oxley Act of 2002 (“SOX”). The Combined Code is a principles-based governance regime which requires listed companies to comply with its provisions or to provide an explanation for not doing so. SOX, on the other hand, is a rules-based statutory regime, which requires adherence to its provisions on risk of penalty for both the corporation and its officers.
Both regimes require organizations to develop, implement and maintain an internal control framework that will be suitable for and effective in enabling the board to manage risk – primarily, but not exclusively, financial risk - throughout the enterprise.
The Turnbull Guidance, which provides guidance to directors on the risk management aspects of the Combined Code, explicitly requires boards, ‘on an ongoing basis, to identify, assess, and deal with significant risks in all areas, including in information and communications processes.’ SOX requires US listed companies (and by implication their major suppliers) to annually assess the effectiveness of their internal controls, and places a number of other significant governance burdens on executive officers, including the section 409 requirement that companies notify the SEC ‘on a rapid and current basis such additional information concerning material changes in the financial condition or operations of the issuer.’
FSA Rule Book
Distilling the 8,800 pages in the FSA’s current Handbook is no easy task, but principles which emerge include:
• Prin 2.1.2 – “A firm must conduct its business with due skill, care and diligence”, and
• Prin 2.1.3 – “A firm must take reasonable care to organize and control its affairs responsibly and effectively, with adequate risk management systems.”
In its chapter on Senior Management Arrangements, the Handbook sets out requirements in respect of processes and systems, including SYSC 3A.7.1: “A firm should establish and maintain appropriate systems and controls for managing operational risks that can arise from inadequacies or failures in its processes and systems (and, as appropriate, the systems and processes of third party suppliers, agents and others).”
Encouraging firms to take appropriate steps to control operational risks through the development of their IT systems, the Handbook says: ‘Automation (of processes) may reduce a firm’s exposure to some 'people risks' (including by reducing human errors or controlling access rights to enable segregation of duties), but will increase its dependency on the reliability of its IT systems.’
Basel 2 and MiFID
Basel 2, implemented throughout the EU via the Capital Requirements Directive (“CRD”), applies to banks, building societies and some investment firms and starts coming into effect from January 2007. In the UK, BIPRU 6 contains the FSA’s rules on evidencing an effective operational risk management approach. This approach, which must be robust and externally validated, must also integrate with the firm’s overall approach to risk management.
MiFID, the Markets in Financial Instruments Directive, comes into force from November 2007. The FSA, responsible for implementing MiFID as well as the CRD, has still to finalise those modules of the Handbook that will relate to these directives. It does intend, however, to “create a common platform in SYSC of systems and controls requirements to comply with both the CRD and MiFID.” In other words, the control requirements identified earlier will be extended to apply to both these regulations as they come into force.
EU organizations are also subject to a range of other data-related legislation and standards, ranging from the Data Protection Directive (incorporated into UK legislation as the Data Protection Act 1998) through to the Payment Card Industry Data Security Standard (“PCI”). PCI contains very specific information security requirements (all of which have been mapped to ISO/IEC 27001:2005, the international standard specification for Information Security Management Systems), with which merchants and service providers must conform.
There are significant overlaps between many of these detailed requirements and the outcomes of operational risk management decisions that are made in complying with the FSA Handbook or Corporate Governance requirements. The common theme that emerges is the requirement for organizational internal control frameworks to manage risk effectively throughout the organization, for these frameworks to take into account the risks in IT systems, and to ensure the confidentiality, availability and integrity of information. Organizations are, in other words, required to assess the risks to their business and to develop an internal control framework appropriate to those risks. Crucially, they must also be able to evidence their decision-making process, and the effectiveness of their control decisions.
An appropriate internal control framework must take account of the need for monitoring the reliability of IT systems, and for information security.
The FSA requirements in respect of information security are specific. Firms must ensure:
1. confidentiality: information should be accessible only to persons or systems with appropriate authority, which may require firewalls within a system, as well as entry restrictions;
2. integrity: safeguarding the accuracy and completeness of information and its processing;
3. availability and authentication: ensuring that appropriately authorised persons or systems have access to the information when required and that their identity is verified;
4. non-repudiation and accountability: ensuring that the person or system that processed the information cannot deny their actions.
These information security controls are part of what are known as ‘general controls,’ which ensure the proper development and implementation of applications, and the integrity of program and data files and of computer operations.
ISO 27001 is an externally auditable standard for information security management systems, and it deals explicitly with the general control requirements identified above. Development, implementation and maintenance of an ISO 27001-certified system is clearly a logical first step for organizations seeking to demonstrate compliance with those requirements. Whether or not an organization chooses to follow this route, it will still have to identify an effective way of dealing with a number of issues: the complexity of today’s business networks and IT infrastructures, the volume of data processed through those systems, the changing universe of persons authorised to access that data, and the rapidly evolving, mutating information security threat environment.
The changing threat environment is of particular importance for today’s businesses. Attacks are increasingly blended ones, in which the techniques of hackers, spammers and virus writers are co-ordinated to breach corporate defences. These attacks might be large scale, or they might be focused on only a small number of targets. Where these attacks depend on internal involvement, they are even harder to defend against.
The 2005 FBI Computer Crime Survey reported that:
• 87% of organizations experienced a security incident in 2005;
• Viruses, worms, Trojans and spyware formed a ‘non-stop barrage’;
• While antivirus, antispyware, firewalls and antispam are almost universal, they did little to stop malicious insiders;
• 44% of attacks were from within the organization;
• 25% of attacks were from both inside and outside the organization.
Neither the threat level nor the range and sophistication of attacks is likely to decline any time soon. Unless organisations choose to disconnect from the Internet and return to the pre-networked period, they will have to take – and regulatory authorities will expect to see them take – a more sophisticated approach to countering attacks.
Compliance – the issues
The traditional approach to achieving compliance in these areas includes monitoring access (and attempted access) information at the level of the individual event – each device and each access attempt. Anomalous events can then be investigated, and unauthorised intrusion attempts identified and countered. There are, however, a number of practical barriers to the effectiveness of such an approach.
Volume and range of technical devices and appliances
Most networks today contain a significant number of devices, including workstations, remote access devices (e.g. PDAs, remote laptops), servers, routers, switches and communications devices that often support more than one operating system, a wide range of applications and services, both internal and external. Organizations that have grown by acquisition often contain networks that differ in detail. Every single one of these devices is likely to have its own Access Control List, and to generate a log of information which will include details about accesses. Few organizations, though, have the technical expertise to gather this information – which typically runs to tens of thousands of events per hour – so that they can analyse it adequately.
The first challenge is to differentiate between authorised and unauthorised access attempts. While access attempts that follow a specific path are relatively easy to identify, every incorrect entry of a user name and/or password could appear to be an unauthorised access attempt, even if it was only the result of user error, memory failure or device error. Each of these errors (‘false positives’) must be eliminated before possible attacks can be identified and, by the time they are, it may be too late to counter the attack.
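To make this concrete, here is a minimal sketch of one common way such filtering is done – counting failures per source within a sliding time window, so that an isolated mistyped password is treated as probable user error while a burst of failures from one source is escalated. The event format, threshold and window are illustrative assumptions, not the behaviour of any particular product:

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative values only; real deployments tune these per environment.
FAILURE_THRESHOLD = 5           # failures from one source before alerting
WINDOW = timedelta(minutes=10)  # sliding window for counting failures

def flag_suspect_sources(events):
    """Separate probable user error (isolated failures) from possible
    attacks (repeated failures from one source within the window)."""
    failures = defaultdict(list)  # source -> timestamps of failed attempts
    alerts = []
    for event in sorted(events, key=lambda e: e["time"]):
        if event["outcome"] != "failure":
            continue
        history = failures[event["source"]]
        history.append(event["time"])
        # Drop failures that have aged out of the sliding window.
        while history and event["time"] - history[0] > WINDOW:
            history.pop(0)
        if len(history) >= FAILURE_THRESHOLD:
            alerts.append((event["source"], event["time"]))
    return alerts
```

Even this simple rule illustrates the trade-off: a low threshold multiplies false positives, while a high one delays detection until an attack may already have succeeded.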
Volume of data unlinked to specific controls
When this data is analysed, the specific control to which it relates is not usually clear. For instance, an authorised user seeking access to an unauthorised application might be evidence of an attempted insider attack, or it might be evidence of a delay in amending user access rights. A different response is usually called for in each case but, unless some immediate information is available on the nature of the violation, it is difficult for a security administrator to identify the right response.
Data can speak to more than one control requirement
Similarly, an attempted security breach might be evidence of the violation of more than one control, each of which will need to be addressed in order to eliminate loopholes. An individual application attack, carried out by someone authorised to access the IT system, is likely to be an ‘authorised use’ violation; it might also be a violation of a ‘time-of-day’ control, and a breach of ‘segregation of duties’. Uncertainty about which control has been breached – and therefore about what counter-action to take – can inhibit the security response.
Individual events that might be innocuous on their own can, when linked together with other events, identify a serious control breakdown. For instance, an authorised user accessing the system is unlikely to be identified as an attack. An attempt to transmit information through email is also unlikely to be identified as a security event. However, if the user who had logged in had also recently resigned, was logging in outside normal hours, and was transmitting information – including client databases – to a hotmail account, then a serious security breach is clearly under way.
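The resigned-employee scenario above can be sketched as a simple correlation rule. The field names and the HR status feed are assumptions made purely for illustration; a production system would draw on many more event sources:

```python
# Hedged sketch: three individually innocuous facts combine into one
# high-severity alert. Field names and the HR feed are invented here.
OFFICE_HOURS = range(8, 19)  # 08:00-18:59 counts as normal working hours
WEBMAIL_DOMAINS = {"hotmail.com", "gmail.com"}

def correlate_session(user, session_events, hr_status):
    """Link login context, HR status and outbound mail into one finding."""
    off_hours_login = any(
        e["type"] == "login" and e["hour"] not in OFFICE_HOURS
        for e in session_events
    )
    external_mail = any(
        e["type"] == "email"
        and e["recipient"].split("@")[-1] in WEBMAIL_DOMAINS
        for e in session_events
    )
    if off_hours_login and external_mail and hr_status == "resigned":
        return {"user": user, "severity": "high",
                "reason": "off-hours login + webmail transfer by a leaver"}
    return None
```

The point is that none of the three inputs is alarming alone; the alert exists only because the rule links them within a single session.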
In-house programming and monitoring expertise
Traditionally, organizations that attempt to monitor individual events at a level of granularity that will enable them to arrive at better-quality decisions about many of these scenarios rely on third-party software whose configuration and operation depend on developing significant in-house programming and monitoring expertise. This is expensive and is not an option open to all organizations; it also raises the challenge of ensuring the effectiveness of the software.
The compliance solution
Ideally, the organization’s Chief Security Officer would have a software solution that continuously monitors all activity (human and device) across the network, compares events to the organization’s own security policy and controls, and which gives a report that says, in effect, ‘Everything’s green, apart from x, y & z’. In considering solutions that might deliver such an outcome, there are a number of key criteria that should be considered.
Critically, any solution must deal with more than just perimeter security. While it is important to gather and analyse perimeter security events (at firewalls, VPN servers, etc), it is even more important to monitor events throughout the network, and particularly at the network’s core, where corporate information and intellectual property is created, modified, accessed or stored.
Inclusion of this core security monitoring alongside the traditional perimeter security monitoring means that any solution must be capable of managing a very high volume of events, and of tracking user and device activity at all stages of any session.
Experience shows that a typical correlation of events to alerts is in the order of 10:1, or one alert for every ten events. This means that an ideal solution must:
• provide end-to-end session tracking that supports both real-time security event monitoring and subsequent security audits;
• have very fast real-time correlation, processing (depending on the size of the network) between 5,000 and 20,000 events per second;
• authenticate and encrypt events that are transmitted;
• provide graphical reports to a central office that identify high-risk alerts and that cover the entire period since the system was first deployed;
• have sufficient online storage for at least 3 months of events;
• be capable of easily adding new event sources (whether new applications, new devices, or new occurrences of existing devices);
• be capable of easy updating for new scenarios and emerging threats;
• have high availability and a tested disaster recovery capability.
Of course, any solution today also needs to be easy to deploy and manage; analyst and system administrator training should require no more than, say, one to two days, and it should not be necessary to develop and maintain an expensive internal skill bank to support the security solution.
This type of solution is referred to as Security Information and Event Management (SIEM).
In conclusion, then, the ideal SIEM solution monitors critical business assets, behaviours and sessions, collecting events of all kinds from perimeter and core devices across the network and processing them in high volumes. These events should be stored in a database and standardized to enable the correlation and comparison of different events on different devices.
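Standardization of this kind is usually called event normalisation: raw log lines in each device’s native format are parsed into one common schema before storage and correlation. The two log formats below are invented purely for illustration:

```python
import re

# Sketch of event normalisation: raw lines from different devices are
# mapped into one common schema so they can be correlated together.
FIREWALL_RE = re.compile(r"DENY src=(?P<src>\S+) dst=(?P<dst>\S+)")
UNIX_RE = re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)")

def normalise(device, raw_line):
    """Map a raw log line to the common event schema, or None if unparsed."""
    if device == "firewall":
        m = FIREWALL_RE.search(raw_line)
        if m:
            return {"device": device, "action": "deny",
                    "source": m.group("src"), "target": m.group("dst")}
    elif device == "unix-server":
        m = UNIX_RE.search(raw_line)
        if m:
            return {"device": device, "action": "auth-failure",
                    "source": m.group("src"), "target": m.group("user")}
    return None  # unrecognised lines go to a review queue, not the bin
```

Once every event shares the same fields, a firewall denial and a failed server login can be compared, counted and correlated side by side.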
Events and event sequences should be automatically compared to pre-determined scenarios (a scenario contains a sequence of events linked to a specific control implementation that will evidence compliance or non-compliance with a pre-determined corporate policy), and those that warrant attention should be flagged in real time in a dashboard-style report.
Ease of configuration should also be considered essential: new threats and vulnerabilities are constantly emerging, and speed and flexibility in creating new scenarios against which they can be assessed are vital.
Finally, a comprehensive suite of auditable reports must be available to demonstrate to external auditors and to regulators that the organization has taken – and continues to take – appropriate and effective steps to monitor and manage, in real time, all the risks to its information and IT systems.
VP of Marketing
Jason Halloway is VP of marketing for ExaProtect, a global player in intelligent security management that offers organizations unified control of multi-vendor network and security systems with a ‘view & do’ approach. You may contact him at firstname.lastname@example.org.