
March/April 2009 - By Ed Petry

The Limitations of Ethics Surveys (Part II)

In an earlier article (The Limitations of Surveys and How They Can be Used Most Effectively: Part I, March/April, 2008) I raised concerns about relying too heavily on employee opinion surveys to assess corporate culture, identify ethics and compliance risk areas, and evaluate the success of ethics and compliance programs. While there is no doubt that surveys can be a useful tool, the article examined instances where surveys have been badly designed, poorly implemented and even manipulated by managers and employees. As a consequence of these all-too-common problems, employee opinion surveys may not be as reliable as often assumed and they need to be balanced by other information-gathering methods.

In this article I turn to some of the broader implications of over-reliance on both surveys and benchmarking, and I suggest ways to address the potential problems. Even when surveys and benchmarking are done well, if they are the principal measurement and assessment tools there may be unexpected and unwanted consequences. For example:

  • When inaccurate survey data or simplistic benchmarks are used as part of the risk assessment process, does this result in the misallocation of limited resources?
  • If ethics officers are designing surveys to facilitate benchmarking with one another, does this tend to accelerate both a leveling of our programs (excellence becomes defined in terms of the mean) as well as sameness (we are led to employ more or less the same management systems to control more or less the same risks)? Are leveling and the drift toward sameness increasing the likelihood that we will be unprepared for change and our programs will be ill-suited for our company’s unique risks?

Before continuing, an important distinction needs to be made. In this and the previous article, I have focused on the problems that arise from an over-reliance on employee opinion surveys. My critique is not meant to call into question all types of metrics; far from it. In fact, the best way to avoid the problems cited is to make use of multiple information-gathering methods in addition to opinion surveys, and to benchmark based on a wide range of factors, both quantitative and qualitative. You have little reason to worry if, in addition to employee opinion surveys, you also use other information-gathering and assessment methods such as analyzing helpline call data, conducting focus groups, tracking investigation trends, combing through audit results, conducting onsite visits and interviews, using third parties or others to provide a more objective perspective, and trusting your own perceptions and experience. On the other hand, if surveys are your principal tool for assessing your program or culture at the expense of a balanced, multi-faceted approach, then you may have reason to be concerned.

There is another important point to make at the outset. As Joe Murphy has noted, employee opinion surveys only measure employee perceptions—not actual misconduct. “You can get a great ‘ethics score’ and still have a felony-level price-fixing conspiracy that has been going on for years.” His point underscores the importance of a balanced approach that includes objective data gathering in addition to opinion surveys.

Surveys and the misallocation of resources

Consider the following example:

In an effort to better allocate limited resources, a company has increased its reliance on employee opinion surveys to help identify ethics and compliance risk areas. It now uses the survey to help identify underperforming business groups. If a business site scores lower than expected on a question such as: “In his/her actions, my manager lives up to our Value of Respect—Yes or No?”—the managers at the underperforming location might receive a visit from the ethics office and perhaps additional training.

In Part I, we discussed how some employees and managers, knowing that a “wrong” answer to this question could mean unwanted attention from corporate, try to influence the responses in a positive direction. An unfortunate consequence of this can be a false sense that all is well. Another consequence is that business units that answer the question truthfully and candidly may appear to be outliers when they are not. In such cases, over-reliance on the survey can result in steering resources in precisely the wrong direction.

There are many steps that can be taken to address this problem. For example, unannounced surveys limit the opportunity for inappropriate survey “prepping” by managers. After the survey results are in, rather than taking the data as the final word, consider it as a starting point for additional fact finding. A few phone calls or a site visit will help provide context for both the surprisingly positive and the off-the-chart negative results. If employees and managers come to realize that the survey is not their only chance to provide information, it will help alleviate the pressure to manipulate the process.

A misallocation of resources can also occur when our reliance on surveys and benchmarking leads us to miss the big picture. Take for example the ethics officer at a utility company who “knows” that the principal problem in their culture is a fear of retaliation.

The ethics officer “knows” this to be true based on anecdotal evidence, his years of experience in the field, and formal and informal benchmarking with peer companies. For these reasons, year after year the employee survey includes a question on this topic and every year 40 to 50 percent of employees confirm his beliefs. Benchmarking has led him to conclude that these results are “typical.” Based on these findings, training and communications plans have been designed to combat retaliation.  

Unfortunately, the attention given to the survey results and the comfort the ethics officer has taken from benchmarking have resulted in the company missing the bigger picture. The ethics officer is unaware that while employees at the company do fear retaliation, in even greater numbers they also worry about “the ethics office getting involved and making everything into a big deal.” In fact, a significant number of employees believe it is the overreaction of the ethics office that is the principal cause of the subsequent retaliation. In addition, because of the repeated emphasis on retaliation in training and communications, the company grapevine has become hyper-sensitive to the topic and is always ready to accept a story of retaliation, whether true or not. Increasingly, these stories turn up on websites and blogs frequented by dissatisfied employees and angry investors, as well as job seekers. The relentless focus on retaliation has created a culture that assumes and expects retaliation. The survey, rather than helping to get to the bottom of the problem, has instead fueled a vicious cycle and a self-fulfilling prophecy.

In this case, the survey is part of a broader system that is predisposed to look for answers in only one direction. Sometimes it’s necessary to step back and ask the extra questions that may uncover a new line of reasoning. Site visits, keeping an open mind, and broadening your sources of information might open new leads and give you a new perspective.  

In this scenario the incomplete analysis and the misallocation of resources could have been avoided if the ethics officer had asked more or different survey questions or if he had relied on other information-gathering methods. Focus groups or onsite interviews may have been especially helpful. More generally, the best solution to this problem is to reach out to others who may have a different perspective. Using third parties may be a good approach, but don’t ignore the untapped input from your colleagues including line managers, audit personnel, human resources (HR), and even previous ethics and compliance managers. Too often some of the biggest blunders are made by ethics officers who mistakenly assume they need to go it alone and make their own determinations. 

Varying the survey questions over time would also help, but there are practical obstacles to doing so. The realities of administering surveys usually mean that you can only ask a limited number of questions. In addition, in order to facilitate benchmarking, there is an advantage in repeating the exact same questions year after year.

In the example above, the ethics officer felt that if the questions changed, he could not easily track progress nor could he compare results with other companies. In this sense, the desire to benchmark limited his ability to get to the truth and accurately assess the root problem that was undermining the program’s effectiveness. Remember, the goal of assessments is to improve program effectiveness, not to facilitate benchmarking. Benchmarking is a means to an end, not the end itself. There can be other unintended consequences of benchmarking as well.  

Problems with benchmarking: Leveling and sameness

A manufacturing company was in the third year of its new ethics and compliance program. Helpline call volume had risen to 1.8 percent of the overall employee population. The company’s ethics officer was asked by his Board to comment on this finding, and he confidently noted that, based on multiple benchmarking studies, the company’s call volume was within the normal range. Fewer calls might indicate a lack of trust, but more calls could indicate an unacceptable level of actual offenses. Pleased that he had hit the target and was within the median range—and since this was considered “best practice”—he felt comfortable shifting his focus to other matters while maintaining the current call volume.

The ethics officer was surprised by the Board members’ questions: “Why is the mean considered best practice? Shouldn’t our goal be to have a call volume that increases over time and approaches our actual level of wrongdoing? The best practice call volume is no indication that incidents are in fact at an acceptable level—is it? Do we actually have any way to know what the actual level of wrongdoing is?”

Another Board member asked: “Is the benchmark we use cross-industry or is it specific to our industry? Is the call volume consistent with companies that have our global footprint? Does the call volume include reports that come to our attention through all channels or just the Helpline?”

And a third added: “We’ve invested a lot in training about the Helpline, shouldn’t the business units that received the training exceed the mean—do they? What’s the relationship between our call volume and the rate of substantiated cases—that is, are the calls productive?”

The Board members wanted context and were asking the ethics officer to relate the call volume not just to a benchmark that represented companies in general but to more specific measures that corresponded to their industry and to their unique circumstances. The lesson here is that too often benchmarking becomes a matter of “hitting the magic number.” In the case of this manufacturing company, 1.8 percent call volume may be much too low given their recent training efforts. And further, as long as there are a significant number of incidents that are unreported, it seems odd to settle for an acceptable call volume that may be far below the incident level.  

In order to answer questions such as those posed by the Board, the ethics officer needs a far more subtle benchmarking process than he currently uses. He needs to be able to place his data in context based on relevant internal information from audits and similar sources as well as comparable data from similar companies. In addition, the ethics officer needs to make qualitative judgments and not just rely on quantitative analysis. Providing context begins with being able to ask the right questions, and it also requires access to a robust database. In short, simplistic, quantitative benchmarking alone cannot provide an adequate assessment standard. 
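The Board’s questions amount to asking for the call volume in context: per-unit rates, the effect of training, and whether calls are productive. As a purely illustrative sketch (all figures and unit names are invented, not taken from the article), the basic arithmetic behind those questions might look like this:

```python
# Hypothetical sketch of contextual helpline metrics. The unit names and all
# numbers below are invented for illustration only.

def helpline_metrics(calls, employees, substantiated):
    """Return call volume as a percent of headcount, and the substantiation rate."""
    call_volume_pct = 100.0 * calls / employees
    substantiation_rate = substantiated / calls if calls else 0.0
    return call_volume_pct, substantiation_rate

# Per-business-unit data: (calls, employees, substantiated cases)
units = {
    "plant_a_trained":   (45, 1500, 18),  # this unit received helpline training
    "plant_b_untrained": (12, 1400, 2),
}

for name, (calls, employees, substantiated) in units.items():
    vol, sub = helpline_metrics(calls, employees, substantiated)
    print(f"{name}: {vol:.1f}% call volume, {sub:.0%} of calls substantiated")
```

Even this toy comparison surfaces the Board’s point: a company-wide 1.8 percent figure that matches a cross-industry mean says nothing about whether trained units report more, or whether the calls that do come in are productive.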

Providing adequate industry and historical context is a challenge, especially for ethics officers who are new to their positions. Many may not know what questions need to be asked, or they may have difficulty accessing the best databases for benchmarking. Fortunately, peer associations as well as third parties can provide help, including assistance from experienced practitioners, current and former ethics officers, and broad-based databases.

Surveys and benchmarking pose other problems as well. As noted above, in most cases, surveys are designed with benchmarking in mind, which means many surveys have considerable overlap and often include similar if not identical questions. In addition, as in the case of the utility company in our first example, many companies ask the same questions year after year to facilitate benchmarking, tracking and trend analysis. Questions are also shared at association conferences or through multi-company surveys to enable comparisons. While this is certainly understandable and has its benefits, it also carries a risk.

There has been an increasing tendency to develop and measure our programs against other companies rather than shape each program to match emerging risk areas and the changing ethics sensibilities of our particular constituencies. The inward-looking bias of our professional associations and conferences has exacerbated sameness among compliance programs. Today it is more likely for an ethics and compliance program to be deemed “effective” and a “best practice program” if it meets benchmarks than if it actually addresses the company’s unique reputational risk profile.

I have raised several concerns in this and the previous article that stem from an over-reliance on, and misuse of, surveys and benchmarking. Two solutions have been repeatedly mentioned throughout. The first is that surveys and benchmarking are not ends in and of themselves but should be tools to help determine a company’s risk areas and assess its effectiveness in addressing those risks. Identifying and assessing risk is one of the true measures of effectiveness.

Surveys and benchmarking also should serve as tools to help provide information to enhance communications and training and engage employees to do the right thing and build relationships of trust. And, survey data must be balanced with other sources of information. Perhaps most important are the insights gained from conversations with colleagues and employees.

Becoming back room analysts?

This brings us to our final topic, which comes from observing the way many companies are using surveys and benchmarking. Today, when more and more companies outsource their helplines, rely on computer-based training, and otherwise employ technology to improve efficiencies, are ethics officers becoming less available and less visible to employees? Are ethics and compliance professionals becoming backroom analysts, crunching data and preparing reports? Are you spending too much time managing numbers instead of issues or people?

These are questions that go to the very foundation of what it means today—and in the years to come—to be an ethics and compliance officer. While they can best be answered by our professional associations in the years ahead, in the meantime each of us should answer them on our own, for they can have profound consequences for our role in our companies, how others perceive us and, ultimately, our success.

Ed Petry is Vice President of the Ethical Leadership Group, a Global Compliance company. He is former executive director of the Ethics and Compliance Officer Association.


