

Designing internal control systems

by Matthew Leitch, 18 January 2003


Why design?
- data quality meltdown
- persistent waste
- regulations
- stress
What to expect
- shortage of information and time
- resistance born of desperation
What not to do
How to do it right
- getting into the project plan
- how to do high level design
- proposing work packages
- near "Go Live!"
Tips on some key control mechanisms
- process monitoring
- ergonomics
- comparing totals
- validation and edit checks
- segregation of duties
Finally

Why design?

This paper is about designing internal control systems (precautions we take to guard against error, fraud, or other perils) for business processes, such as billing, purchasing, and treasury management.

But why recognise internal controls design as a discipline in its own right? Why not assume it is done by everyone in the normal course of their jobs?

When a business creates these processes for the first time, or makes significant changes, internal controls will be established. This usually happens because people involved know from their education or past experience that bad things happen and they need to take precautions. For example, the IT people will worry about hackers, viruses, and disasters like fire and flood in the data centre. Accountants will be looking for reconciliations and approvals. Managers will want reports. And so on.

This organic, decentralised process works pretty well but it does have flaws, and these lead to some serious risks and inefficiencies.

Data quality meltdown

Almost all organisations have a huge investment in data - data about customers, suppliers, products, employees, and so on - gathered, checked, and stored in databases and files. Consequently, one of the most costly problems with new systems or processes is the data quality meltdown. Here is how it happens.

At the start of the implementation project, task lists are drawn up and lots of good ideas about quality assurance and internal control are usually captured and put into the plan. However, as time goes on and things take a bit longer than expected, the pressure builds. As people become stressed, their focus narrows. Meetings are held at which people ask "What do we really, really need to do?" Little by little, the quality assurance and internal control tasks get de-scoped and eliminated. Go-live weekend arrives (three months late, but it still seems like a triumph) and the champagne is opened. For a while everything seems to be going well, though people are struggling with the unfamiliar way of working.

Then the first evidence of problems starts to emerge. Someone runs a suspense report for the first time and discovers thousands of errored transactions have already built up. More checks are done and more problems emerge. Time to panic. More temps are hired and a crisis team is formed. Overtime is already huge, possibly with shift working.

But by this time the vicious cycle has taken hold. People often make mistakes when they try to correct mistakes. Reference data is already contaminated with errors and is generating more and more incorrect transactions. The extra work of correcting mistakes is leaving people tired and stressed, so they make more errors, especially when trying to correct errors, late at night, for the third time.

Recovering from this sort of meltdown can cost more than the original implementation. It is better to apply expertise to internal controls from the outset to minimise the risk of meltdown.

Persistent waste

Fortunately, data quality meltdowns are not common. However, wasted time and damaged customer goodwill are almost universal effects of not designing internal controls.

The most easily quantified and understood effects are external quality costs. In other words, situations where errors cost cash. For example, paying the same invoice to a supplier twice, or failing to bill a customer for all the goods and services provided.

In telecoms, "revenue assurance" is a well known buzzword. It refers to searching for and correcting system and process errors whereby customers are not being billed correctly for all the services they receive. In many telecom companies this has been found to be several percentage points of revenue - a vast amount of money, and well worth doing something about. Certain other industries will probably find the same.

There are also internal quality costs, principally from the extra labour involved in finding and fixing errors.

This persistent waste occurs because most organisations leave internal control to a natural, organic process of development and decay. Although this process is surprisingly effective it has enough flaws to justify deliberate design. The natural process works like this:

The first controls are put in place because individuals think of them - usually prompted by bad past experiences. This starting point is not ideal, but from time to time things go wrong, prompting improvements which are normally specific responses to the issue. Over time, a set of internal controls grows. Occasionally, rising workload and/or staff cuts and turnover cause controls to fall away. Sometimes this is a good thing because they are no longer needed, but sometimes it is a bad thing, leading to renewed error and fraud, perhaps not directly affecting the person who should have been operating the control.

Overall this process is surprisingly effective and controls tend to be more developed where they are needed most. However, there are some problems:

One reason for designing internal controls in a deliberate, skilled way is to reduce the routine waste that results from the organic, natural process.

Regulations

Some very important corporate governance regulations affecting listed companies say that directors are responsible for "designing" internal controls. The Sarbanes-Oxley Act of 2002 affects companies listed in the USA, even if they are based elsewhere. Section 302 states that the CEO and CFO must certify, among other things, that they have "designed" the internal controls to achieve various objectives. There is also a need to be able to report on significant changes to internal controls or other factors that could significantly affect internal controls. This implies a degree of monitoring of such factors, as will be described later in this paper.

Stress

The most compelling reason for designing internal controls deliberately, and with skill, is to reduce stress. Internal control problems cause stress for everyone involved, from the clerk working late to clear billing problems to the CFO explaining to investors how his project to consolidate accounting in one global centre caused a £25m billing backlog. The stress stretches out to irate and frustrated customers, exasperated suppliers, and horrified investors.

What to expect

Many people given the job of designing controls for a new or revised process, and joining a large project, get into difficulties because the project is a lot rougher than they expected. Most people who get involved with internal controls are auditors or accountants, unused to large projects and unprepared to deal with some of the more aggressive groups they are likely to meet. Events unfold more quickly than expected, and the timing of project milestones seems to make success impossible.

Every project is different, but typically you can expect to deal with the following groups:

Shortage of information and time

Because the software people work down to the wire on most deliverables, agreed process descriptions and system designs are not available until it is too late to respond to them from a controls point of view.

Resistance born of desperation

Also, the software people will often be reluctant to accept some security and control requirements because they see them as non-essential and as getting in the way of delivering on time. The more desperate they get, the greater the resistance. Sometimes only the steering committee can get them to accept control requirements as necessary.

What not to do

If you want to achieve nothing as a controls designer on a big project the steps you should take are as follows:

How to do it right

To succeed at controls design on a big implementation project you need to:

The following subsections discuss these in more detail.

Getting into the project plan

You can get into the project plan in two stages.

Initially, you will have only a hazy idea of what controls will be needed, but you still have to see the project manager and make sure that everything you can say is in the plan. Your initial work will be to produce the high level design and propose packages of work to design and implement key components of that scheme. Explain this, and that you will provide more certainty as soon as you can. You may agree to put some general-sounding tasks into the plan as place-holders.

Once you have the high level scheme and work packages you can go back to the project office and provide more detail about what you will do and how much work is likely to be involved. There may also be implications for other people on the project.

At all stages, the timing of your work will be determined by the timing of the work of the bigger, more important teams on the project.

How to do high level design

This is probably the most important step and the one requiring the most skill. This is how you will demonstrate the value your work provides and begin to get the support for internal controls you will need to survive the pressure to de-scope that will probably arise later on.

High level design must be done quickly, but convincingly. It should take no more than 10% of your total elapsed time to do it. Adjust the level of detail to fit the time and resources available. Do not wait for system and process details to be agreed.

The basic technique is to look at the factors that will drive the scheme of controls, and deduce what those factors tell you about how a typical, vanilla-flavoured scheme of controls should be adapted and tailored to fit the specific circumstances faced. Some key points are:

Data and deduction

It is helpful to organise the initial thinking in a table where you list the facts observed and the implications for internal controls. Divide the table into sections using sub-headings, one for each group of observations/factors. Divide the implications into five columns, containing the implications for controls that relate to:

It takes knowledge and practice to become fluent at this. You need to build a repertoire of things you know to be common drivers and be able to recall quickly their typical implications. As a starting point, here are the sort of factors to look for and some of their most common implications. Asterisks mark the factors that are almost always key ones.

The drivers/observations below are each followed by some of the possible implications.

Control performance requirements (from competitive strategy)

- Driver: Very quick processing is required.
  Implications: Need controls to catch delayed items. Try to get controls off the critical path. Replace pre-transaction checks with post-transaction reviews where possible. Automated controls preferred.

- Driver: Very flexible processing is required.
  Implications: Probably complex and allows easy adjustments, so easier to defraud. Probably more parts of the process are manual. Control mechanisms also need to be flexible.

- Driver: Low hassle to the customer is crucial.
  Implications: Fewer opportunities to impose controls on the customer. Must avoid errors affecting the customer.

- Driver: Exact timing is needed.
  Implications: Controls are needed to catch delayed items, manage fluctuating workload, etc.

- Driver: Reliable service to the customer is essential.
  Implications: Mistakes affecting customers to be reduced.

- Driver: Very economical processing is required.
  Implications: A subtle one, because errors create quality costs, so a balance is needed. Probably need automated controls if the processing is on a large scale.

- Driver: Highly regulated activity (e.g. selling pensions).
  Implications: Covering compliance risk will take more work. Need to monitor forthcoming regulations/laws and start changing processes in good time.

Cultural features

- Driver: Behavioural norms encourage fraud/theft. Patterns of crime already established.
  Implications: High risk of fraud/theft. Collusion a possibility. Could be social/staffing problems if established fiddles are tackled strongly without supporting action by top management.

- Driver: Company wishes to empower its people.
  Implications: Old fashioned controls undermine empowerment. Try to make teams work and give people the information they need to make good decisions themselves for the company. Talk of quality rather than control.

- Driver: Functional silos.
  Implications: Functional silos are a problem for process level monitoring controls, so a cross-functional committee is needed.

- Driver: Weak control environment.*
  Implications: Undermines manual controls and may hinder controls design activity. Monitoring is unlikely to work well unless the control environment is improved. Expect control weaknesses at all levels.

Data features

- Driver: Data is standing data.*
  Implications: Data errors accumulate unless cleared. Individual data items are typically more sensitive. Data entry workload is probably uneven with few users trained to do it, so may need to pre-book work to ensure staff are available.

- Driver: Data is transaction data.*
  Implications: Data errors probably will not accumulate. Individual items typically less sensitive. Data entry workload typically more even but much higher.

- Driver: Data very complex.*
  Implications: Difficult to enter correctly because of complex data entry screens. Will require more emphasis on usability engineering. Harder to write software correctly, so more software quality assurance needed.

- Driver: Very high volumes of transactions.*
  Implications: Tiny error percentages still produce many exceptions to be corrected. Often need system tools to prioritise, investigate, and clear errors. Manual controls unlikely to be efficient.

- Driver: Highly predictable data values.
  Implications: Easier to control and to monitor. Favours computer filtering and assisted review.

- Driver: Transactions can be divided into sub-populations which are highly predictable or at least have very common characteristics.
  Implications: Look to split transactions into separate streams, each with its own controls.

- Driver: Population contains some very high value and high risk items.
  Implications: Requires either a very reliable process or a special approach to controlling the bigger items.

- Driver: Data about individuals is held.
  Implications: Privacy legislation applies, and confidentiality breaches could seriously damage customer confidence.

- Driver: Very abstract business based on rules, definitions, possibilities (e.g. insurance, derivatives).*
  Implications: Higher error risks. Trouble can arise through not understanding the business clearly. Small actions can have big effects, perhaps not immediately visible to those involved.

Process features

- Driver: The process is highly complex.*
  Implications: This has different implications depending on whether the process is automated or manual. Complexity is one of the top drivers of controls development effort.

- Driver: Customers capture data (e.g. type it into a web site).
  Implications: Error prone, especially if the product is complex, e.g. insurance, mortgages. Companies assume their customers are interested in, and understand, their products; often this is wrong. Rarely possible to provide training to customers, so usability is key, as are edit checks. Customers could include professional fraudsters.

- Driver: Suppliers capture data (e.g. type it into a web site).
  Implications: Again, you can rarely train, so the software must be usability tested, with lots of edit checks.

- Driver: Process is highly automated.*
  Implications: Probably more reliable, but risk of systematic errors. More stress on IT controls.

- Driver: Process is highly manual.*
  Implications: Probably less reliable, especially if work is boring, flexible, complex, or under time pressure. Manual controls are vulnerable to boredom and fatigue.

- Driver: The assets are easy to dispose of if stolen.
  Implications: Raises risk of theft/fraud.

- Driver: High values of money are paid out.
  Implications: High fraud risk, including sophisticated computer attacks, and also risk of interest from money launderers.

- Driver: Multilingual.
  Implications: Communication difficulties. Multiple versions of software, perhaps. Particularly high risk of misleading field names on data input screens and forms, so usability testing is needed.

- Driver: International or geographically distributed.
  Implications: Different sets of regulations to comply with. Harder to control small, distant offices. Nationalist distrust is possible. Cultural differences in attitudes to fraud. Potential for misunderstandings.

- Driver: Many separate databases and interfaces.*
  Implications: More places for interface failures and opportunities for databases to get out of agreement. More chances to mis-map fields in one database to fields in another. Recoverability is more complex.

- Driver: Existing business process controls are very good or very poor/there is no existing process.
  Implications: If there is an existing process with at least some good controls, many people will be doing the right thing already. Otherwise, there will be more work on controls to do.

- Driver: Immediate environment of the process is inside the organisation.
  Implications: Look to protect the process from messy inputs.

Workload features

- Driver: Workload is rapidly rising/falling.*
  Implications: Staffing problems because many staff are new, or because they are insecure and disgruntled.

- Driver: Workload is highly variable/constant.
  Implications: High or low proportion of temporary staff, affecting error risks and IT security.

- Driver: Continuous work is required vs periodic work only vs slow response only.
  Implications: Determines the need for business continuity, among other things.

- Driver: Environment is very fast changing or very stable.
  Implications: Affects the choice between very refined, automated controls and quick-and-dirty manual controls.

- Driver: Many changes in processes, systems, or people.*
  Implications: Lower inherent reliability is to be expected, so error rates will be greater and controls and monitoring are more important than ever.

- Driver: Very high/very low proportion of work in the existing process is controls.
  Implications: Affects the decision over how much effort to invest in refining controls in detail.

Project features

- Driver: Poor project health (e.g. uncertain sponsorship, politics, unclear or shifting requirements, over-ambitious objectives, and impossible timetables).*
  Implications: Expect delays, frantic efforts to meet deadlines, and pressure to ignore controls. Expect low reliability software and lack of adequate training of staff. Compensate with powerful monitoring controls from go-live onwards.

If you've never really tried this technique before you will be amazed at how much you can predict from minimal facts, and how accurate your predictions can be. There's nothing particularly clever about the inferences, but as they build up many things become clearer.

There is no need to wait for the processes and systems to be agreed before doing this. The vast majority of your predictions and design decisions will be correct, with only a few changes being needed once you see the final system and process details.

Finding out the facts involves talking to people, looking at product literature, strategy documents, spreadsheets, and indeed anything relevant you can get your eyes on. It is not necessary to understand all this material to use it. For example, if the regulatory compliance manual giving rules for selling a particular product is 5cm thick, printed on flimsy paper in microscopic letters, and written in legalese you know regulatory compliance is going to take time!

A common error is to try to think of risks directly, rather than starting with drivers.  This tends to lead to lists of risks that are theoretical rather than likely.
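To make the driver-to-implication habit concrete, here is a minimal sketch in Python. It is not part of any formal methodology; the driver names and implication wording are illustrative, taken loosely from the table above. The repertoire is treated as a simple lookup, with a helper that collects the implications for whatever drivers have been observed.

    # A toy repertoire: drivers mapped to some of their typical implications.
    REPERTOIRE = {
        "very high transaction volumes": [
            "tiny error percentages still produce many exceptions",
            "need system tools to prioritise, investigate, and clear errors",
            "manual controls unlikely to be efficient",
        ],
        "many separate databases and interfaces": [
            "more places for interface failures",
            "databases can get out of agreement, so plan reconciliations",
        ],
        "poor project health": [
            "expect pressure to de-scope controls",
            "compensate with powerful monitoring controls from go-live onwards",
        ],
    }

    def implications_for(observed_drivers):
        """Collect the typical implications for a list of observed drivers."""
        implications = []
        for driver in observed_drivers:
            implications.extend(REPERTOIRE.get(driver, []))
        return implications

    print(implications_for(["very high transaction volumes", "poor project health"]))

The real skill lies in tailoring the implications to the circumstances, but even a crude lookup like this shows how quickly a few observations accumulate into the outline of a design.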

Summarising the controls scheme

It is helpful to pull together the main conclusions of the initial deductions into one short document. This is also where you can see the multiple layers of the control scheme. The shortest summary omits the reasons for each control/type of control. You will also need to be able to explain the scheme with reasons, so you may have to do two summaries.

You may also want to do a separate summary for controls over loading initial data into a new system. This might be a big exercise if the new system is replacing an old one and there are perhaps millions of records to be copied, checked, reformatted, re-checked, and loaded.

Here's the multi-layer model I like and recommend for controlling financial cycles, starting at the top:

Proposing work packages

Now you know something about what you want to build and where the detailed work is most likely to be complex and time consuming, you can propose work packages for internal controls development and implementation during the rest of the project, with estimates for timing and resources.

These will mostly be work packages for the controls specialist(s) but you may need to propose work for others too. The controls specialist(s) will work on important areas not already the responsibility of others, assist others on key controls, and review progress in other areas. It may be necessary to introduce other specialists to cover work not previously recognised in the overall project plan.

The high level design gives you the ammunition for these proposals. It allows you to say why the work is needed and what the deliverables will be.

Near "Go Live!"

The periods just before and just after going live with a new system or process are very interesting and important.

Just before going live people are usually working hard on user acceptance testing and loading data. These activities have a big effect on the initial error rate and workload. User acceptance testing usually involves people who are going to be real users once the system/process is live so this is a rehearsal and learning opportunity for them as well as a last chance to find problems.

The user acceptance testing should be as realistic as possible, and that includes people carrying out controls, such as checks on data and reconciliations, just as they will when they are live. Unfortunately, it doesn't always happen. Here are some of the warning signs:

Just after going live is a high risk period because the inherent reliability of systems and people is at its lowest. Systems typically contain several times more bugs than they will in a year or two's time, while users are still unfamiliar with their new ways of working.

Clearly, it is vital to be checking data and processes and monitoring closely the health of the process. Statistics on error rates and backlogs are vital.

However, there is a further danger. With the best available knowledge and techniques it is still very difficult to estimate most error and fraud rates to within an order of magnitude. They could be ten times less, or more, than you expected so controls need to be refined as quickly as possible so that they are efficient and effective during the early months.

Tips on some key control mechanisms

Process monitoring

To manage the reliability and performance of a process in an organisation you need to know what is going on. It is helpful to hear from people how they are coping, but it is vital to measure the health of your process by collecting statistics and presenting them in a regular report.

Since most processes of any significance cut across departmental boundaries it is usually necessary to form a cross-departmental management committee to receive the reports and agree actions.

The reports should show:

Too many reports just show workload and resources, which is not very helpful.

The ideal report will also contain information about forthcoming changes and challenges, such as trends in workload and new software releases, so that the process owners can take action in advance to manage the risks involved. People who measure the health of their process learn that they must manage in advance to keep their numbers looking good.

This kind of monitoring is extremely useful in meeting the requirements of section 302 of the Sarbanes-Oxley Act, as it helps meet the requirement for notifying changes affecting controls, and the stats themselves are powerful evidence of the effectiveness of internal controls, which also helps with section 404 of that Act.

One of the most important objectives is to improve inherent reliability and so reduce original error rates. This is the only feasible, economic strategy for most really large scale processes. To do this the report should also show breakdowns of errors into error types, showing them in descending order of their impact so that actions can be prioritised.
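As a minimal sketch of such a breakdown, assuming error records that each carry a type and an impact value (the field names and figures are illustrative only):

    from collections import defaultdict

    errors = [  # illustrative data only
        {"type": "wrong VAT code", "impact": 120.0},
        {"type": "duplicate invoice", "impact": 5400.0},
        {"type": "wrong VAT code", "impact": 80.0},
        {"type": "missing customer reference", "impact": 300.0},
    ]

    # Total the impact per error type, then list types biggest first so the
    # most damaging error types are tackled before the merely annoying ones.
    impact_by_type = defaultdict(float)
    for e in errors:
        impact_by_type[e["type"]] += e["impact"]

    for error_type, impact in sorted(impact_by_type.items(), key=lambda kv: -kv[1]):
        print(f"{error_type}: {impact:.2f}")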

The report takes time to compile. To minimise that time follow these guidelines:

Ergonomics

Ergonomics is the most overlooked, yet most important subject in internal controls design.

Almost all errors arise, directly or indirectly, from human error. Mis-coded transactions, bugs in software, a wrong VAT code entered - all human error. Even a computer hardware failure comes from the mistakes of the engineers who designed the robot that built the component. Training helps, but ergonomic improvements are more effective and far more cost-effective.

Some human errors are outside your control because they happened too long ago, are outside the company, or are caused by something you cannot change. However, there are many errors you can reduce by paying attention to ergonomics. It is also vital to consider ergonomics when designing the details of internal controls.

The main tool in ergonomics is usability testing.

The following information comes from Thomas K Landauer’s book “The trouble with computers” and is derived from a series of studies of usability testing in practice:

These results, and experience as well, indicate that usability testing can reduce the difficulty and time for development while contributing dramatically to quality.

In “Usability engineering”, Jakob Nielsen surveys a wide range of usability testing techniques. These do not include releasing a beta test version and going ahead if nobody complains bitterly enough! The most important techniques include:

Most work on usability is concerned with the design of new software. However, this is only one area where usability improvements are an important control. Here are some others:

If monitoring stats show errors arising then the most important action is usually to find out exactly where and why the errors occur. Typically, confusing design of something is the culprit and the cure is to improve the design so it helps people get things right instead of tricking them into getting it wrong.

Individual controls need to be designed with human factors in mind. For example, imagine a control that calls for someone to read computer reports looking for items that look suspicious and check them. If the report is long and suspicious items are very rare even the most motivated and highly trained person will glaze over after a while and miss items they should have noticed. The control is ergonomically infeasible. It could be improved by designing a report that searches for suspicious items, or sorts items in a particular way that makes the search easier.
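Here is a minimal sketch of that improvement, with an assumed "unusually large amount" rule standing in for whatever definition of "suspicious" applies in practice:

    transactions = [  # illustrative data only
        {"id": 1, "amount": 120.0, "supplier": "Acme Ltd"},
        {"id": 2, "amount": 98000.0, "supplier": "Acme Ltd"},
        {"id": 3, "amount": 45.0, "supplier": "Bloggs & Co"},
    ]

    SUSPICION_THRESHOLD = 10000.0  # assumed rule for illustration

    # Filter to the rare candidates first, then sort biggest first, so the
    # reviewer checks a short, ordered list instead of scanning a long report.
    suspicious = [t for t in transactions if t["amount"] >= SUSPICION_THRESHOLD]
    suspicious.sort(key=lambda t: -t["amount"])

    for t in suspicious:
        print(f"Review transaction {t['id']}: {t['supplier']} {t['amount']:.2f}")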

A very common mistake is to rely on people to spot errors in situations where they don't have enough time or information at hand to do it reliably.

Comparing totals

Controls that involve comparing totals can be broken into three groups:

These controls are often good for detecting the presence of an error, but do not directly help you identify what it is or how to correct it. For that, further investigation is needed.

Despite this, comparisons are a vital component of most control schemes because they are often very cost effective and can provide strong evidence that there are no errors, or that the errors are small. Key comparisons should be identified in high level design and specified even at that early stage. If there are good opportunities for comparisons other controls are less important, but if comparisons cannot be used it is vital to compensate with stronger controls elsewhere.

There are some errors that many people make when working with comparisons. One is to talk as if only one number is involved, not two. For example, "We need a control account reconciliation." is vague as to what the control account is being reconciled to, whereas "We need to reconcile the control account to the sub-ledger." is clear.

Another error is to overestimate the power of analytics. Analytics are good at revealing problems that arise suddenly and are of high value. Analytics are not good at revealing:

The resolving power of analytics can sometimes be improved by using statistical techniques to work out exactly how unusual a particular fluctuation is, or to improve the graphics used to help people search for anomalies. For example, rather than comparing today's figures with the figures for the same day of the week last week, it may be better to build an average weekly profile, adjusted with a seasonal fluctuation, and built on a long term trend to provide a more precise benchmark.
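A minimal sketch of the day-of-week profile idea, leaving out the seasonal and trend adjustments mentioned above and assuming a simple history of past values per weekday:

    from statistics import mean

    # history[day_of_week] -> past values for that weekday (illustrative)
    history = {
        "Mon": [1010, 990, 1005, 1001],
        "Tue": [1502, 1480, 1516, 1495],
    }

    def benchmark(day_of_week):
        """Average weekly profile for one weekday; trend/seasonal terms omitted."""
        return mean(history[day_of_week])

    # Compare today's figure with the profile rather than one figure from
    # last week, giving a steadier benchmark for spotting anomalies.
    today_value = 1720
    expected = benchmark("Tue")
    deviation = (today_value - expected) / expected
    print(f"Expected {expected:.0f}, got {today_value}, deviation {deviation:.1%}")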

One very interesting application of comparisons is in the so-called "end to end reconciliation". This is a misnomer because they are almost always sets of interlocking reconciliations. For example, end to end reconciliations are sometimes used to help control billing for telephone calls by telecoms companies. The fact that a customer has made a call is initially recorded on a switch in the telecoms network. That record is sent across a data network to the telco's "mediation system", which passes it on to the billing system (which itself may have more than one stage), which generates data for posting to the company's general ledger and data for producing the bills themselves. Typical reconciliations making up the "end to end reconciliation" are:

At each stage there may be various reconciling amounts, some of which can be accounted for precisely, such as numbers of records rejected by mediation.
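A minimal sketch of such interlocking checks, with assumed stage names and illustrative record counts; each stage's output is compared with the next stage's input, allowing for reconciling amounts that can be accounted for precisely, such as rejected records:

    stages = [  # (name, records_in, records_rejected) - illustrative values
        ("switch", 1_000_000, 0),
        ("mediation", 1_000_000, 1_200),
        ("billing", 998_800, 300),
    ]

    # Pair each stage with the next and flag any unexplained difference.
    for (name, count_in, rejected), nxt in zip(stages, stages[1:]):
        expected_out = count_in - rejected
        unexplained = nxt[1] - expected_out
        print(f"{name} -> {nxt[0]}: expected {expected_out}, "
              f"received {nxt[1]}, unexplained difference {unexplained}")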

The great difficulty in designing reconciliations is finding comparable figures. Timing differences are one of the most common barriers. If they cannot be accounted for exactly it is possible to track the cumulative difference to find out if there are small but persistent problems.
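Tracking the cumulative difference is simple; here is a minimal sketch with illustrative daily figures:

    daily_differences = [3.0, -2.0, 4.0, 3.5, -1.0, 5.0]  # illustrative values

    cumulative = 0.0
    for day, diff in enumerate(daily_differences, start=1):
        cumulative += diff
        print(f"Day {day}: difference {diff:+.1f}, cumulative {cumulative:+.1f}")

    # A cumulative total drifting steadily in one direction suggests a small
    # but persistent problem rather than random timing differences.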

Validation and edit checks

Computer people talk about "validation". The system will perform "validation" on user input or when loading data from an external file. Since the system has "validated" the data, it is "valid", right? Wrong!

"Valid" in computer-speak just means the data conform to some basic requirements that allow the software to process them. For example, fields that should contain numbers are checked to make sure they have numbers. Text fields that should not be longer than a certain length are validated for length and to remove unprintable characters and trailing spaces. Field values that should match those of another record are matched. Validation often gets more subtle than this, for example to check that invoice detail lines add up to the invoice total, or that a person's date of birth is before their date of death, and so on.

"Valid" in computer-speak does not mean the data are the correct values or that they are genuine. You could enter your name as "Mickey Mouse" and expect it to be accepted as valid. You could claim to have been born in 1853 and most systems would be happy.

"Validation" does help filter out data entry errors, but be aware of the limitations and examine the exact rules being applied before you decide what control the software is giving you.

Segregation of duties

Segregation of duties is a way of making fraud more difficult. It involves preventing any one person from doing all the things necessary to pull off a fraud. Segregation of duties should be done sparingly and in conjunction with other fraud controls.

Segregation of duties is a very traditional control that has become even more common in the computer age. Almost all accounting software packages let you set up a profile for each user showing what they can and cannot do on the system. In the leading ERP packages (i.e. packages that do just about everything like SAP and Oracle Applications) it is possible to set up fantastically detailed and complicated profiles, though this takes a long time and is difficult to maintain.
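A minimal sketch of checking user profiles for conflicting abilities; the profile format and the conflicting pairs are assumptions for illustration, not the configuration model of any particular package:

    # Pairs of abilities that one person should not hold together.
    CONFLICTS = [
        ("create supplier", "approve payment"),  # could invent and pay a supplier
        ("enter invoice", "approve own invoice"),
    ]

    profiles = {  # user -> abilities granted on the system (illustrative)
        "alice": {"create supplier", "enter invoice"},
        "bob": {"create supplier", "approve payment"},
    }

    for user, abilities in profiles.items():
        for a, b in CONFLICTS:
            if a in abilities and b in abilities:
                print(f"Segregation conflict for {user}: '{a}' + '{b}'")

Reviews like this are worth repeating periodically, since profiles tend to accumulate abilities as people change jobs.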

The downside of segregation is that it can make processes less efficient. One of the most common strategies in business process reengineering is to let individuals do everything and so minimise hand-offs between people and departments. Segregation of duties can be inconvenient and frustrating, especially in small organisations.

There are a number of bases for segregation and some can be used in combination. These are the most common, written in a notation suited to high level design work:

It is rarely appropriate to apply all the bases at the same time. Choose the most appropriate and vary the tightness depending on the risks and scope for alternative controls. When designing controls in detail interpret the rules according to the job roles that exist or are being considered.

Finally

Internal controls for processes in organisations, especially big processes, should be designed with skill rather than allowed to evolve. The key to doing that is to be able to design the controls at a high level, sculpting something that fits the circumstances and needs of the process and organisation rather than applying "best practice".

This paper has provided an introduction to that skill. If you have any ideas, questions, or concerns please feel free to contact me. I normally reply within a couple of days.


Words © 2003 Matthew Leitch





About the author: Matthew Leitch is an independent consultant, researcher, and author specialising in internal control and risk management. He is the author of www.workinginuncertainty.co.uk and www.internalcontrolsdesign.co.uk and has written two breakthrough books. Intelligent internal control and risk management is a powerful and original approach including 60 controls that most organizations should use more. A pocket guide to risk mathematics: Key concepts every auditor should know is the first to provide a strong conceptual understanding of mathematics to auditors who are not mathematicians, without the need to wade through mathematical symbols. Matthew is a Chartered Accountant with a degree in psychology whose past career includes software development, marketing, auditing, accounting, and consulting. He spent 7 years as a controls specialist with PricewaterhouseCoopers, where he pioneered new methods for designing internal control systems for large scale business and financial processes, through projects for internationally known clients. Today he is well known as an expert in uncertainty and how to deal with it.
