Showing posts with label measurement. Show all posts

Sunday, 5 July 2015

Measuring customer effort: Is it time we moved on from Net Promoter Score?



CORMAC TWOMEY 

If customer satisfaction is directly linked to minimising effort, is NPS still a valid metric?

For many years, Net Promoter Score (NPS) has been a core metric that businesses use to quantify and monitor customer satisfaction levels and to understand how satisfaction affects future behaviour. However, as with any metric, the measurement is, by itself, just a means to an end. It is vital that brands use NPS in the right way in order to deliver the most value to the business.
Ultimately, NPS measures brand loyalty, and is based on the fundamental perspective that every company’s customers can be divided into three categories: Promoters, Passives, and Detractors.  The percentage of detractors is then deducted from the percentage of promoters to give the organisation its Net Promoter Score.
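The arithmetic is simple enough to sketch in a few lines. A minimal illustration in Python, using the standard 0-10 "would you recommend us?" scale (9-10 promoters, 7-8 passives, 0-6 detractors); the sample ratings below are invented:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'would you recommend us?' ratings.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    Returns %promoters minus %detractors, a value from -100 to +100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30.0
```

Note that passives still count in the denominator: a survey full of 7s and 8s yields a score of zero, which is one reason the metric pushes organisations toward "delighting" customers rather than merely satisfying them.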
Effort equals experience
It is important to remember, in many scenarios, customers end up calling contact centres after failing to get the resolution they seek in-store, online or through other self-service options. This leaves the contact centre staff to face the brunt of the customers’ dissatisfaction, and having to rely on their professionalism, systems and efficiency to allay customer frustrations before turning a challenging situation into a positive experience.
NPS is undoubtedly a useful metric, but by its very nature, leads organisations into focusing their resources on delighting the customer by exceeding their expectations. This message has been emphasised for years: delight the customer and they will become your brand advocate and an asset that will help to secure potential future customers and growth. While this may be true, investing all your resources on creating more promoters could leave you vulnerable to the negative effects detractors can have on your brand.
Recent studies have shown that the impact of promoters and detractors is not equal, suggesting that one promoter will not make up for the damage one detractor can cause. A recent Convergys study found that eight in ten dissatisfied customers told others about their recent interaction, compared with just five in ten extremely satisfied customers. 
With many contact centres now measuring Transactional Net Promoter Score (tNPS), whereby customers are invited to rate their last interaction or transaction and not the whole brand experience, a wealth of valuable data is being captured that can be transformed into actionable insight to pinpoint root causes and drive improvements that will help to minimise the number of detractors.
When customers are asked why they are not satisfied with a recent interaction, the biggest drivers of dissatisfaction relate to effort: had to repeat myself; had to call multiple times; took too long to resolve, and so on. The majority of top answers to what matters when receiving customer service also relate to customer effort – that is, how easy it is to resolve a question or issue, or to conduct a transaction, using a service channel. This begs the question: are we using the right metrics to drive the improvements that will minimise customer effort?
Expectations are rising
The damage that detractors can do to your business is increasing as consumer expectations around customer service continue to rise, widening the margin for creating dissatisfied customers. The increased use of technology and social media is a key factor. Customers now expect a different type of service, and quite often organisations are falling short. In the Convergys survey, 20% of consumers reported that they started on a company’s website but were dissatisfied and abandoned it, with 38% trying the web again and 23% switching to the phone. When digital deployment is dysfunctional, consumers often switch channels to get resolution, which can increase the number of interactions without reducing effort or improving satisfaction.
This means that service is expected through multiple channels, and traditional channels such as the call centre are now being avoided. In fact, 47% of customers say they will do anything to avoid calling a contact centre. People are looking for a seamless service experience and for brands that can solve their problems quickly with a minimum of fuss.
The focus for brands should therefore be on ‘closing-the-loop’ and identifying where the root causes of customer dissatisfaction lie. Often, the real drivers of customer effort sit upstream from the agents in poorly optimised self-service channels, policies that make it hard to do business, and siloed technology.  It is important that organisations have the technology and skills to deliver an efficient personalised experience, as this is critical to minimising effort by resolving customer issues as quickly and efficiently as possible in their channel of choice.
Prioritise to maximise ROI
While NPS will remain an essential tool for measuring loyalty, brands should consider how they are allocating resources in order to have the best possible impact on the bottom line. Given the ever-increasing expectation for multichannel communication and the larger influence of the detractor, a smarter approach to NPS might be required.
Delighting customers will always be the ideal; however, with detractors 60% more likely to talk about your brand than promoters, the penalty for not meeting expectations is greater – and is only likely to increase. Often, the best way to add value to the business will be to focus on minimising customer effort, which should lead to a drop in the number of detractors. In turn, placing a greater emphasis on tracking the number of detractors will enable you to gauge how well you are achieving this.
As brands and customer service teams look to adapt to the new realities of this multichannel world, they must make sure they are factoring these trends into their metrics. After all, as the saying goes, what gets measured gets done!

Tuesday, 5 May 2015

7 Proven Ways to Boost Customer Happiness in 2015

If you want to boost customer happiness in 2015, you have to take purposeful action.


VP of marketing, When I Work


Happy customers are something that no business should ever take for granted, yet few companies make this metric a priority. If you want great business results in 2015, you can't focus on things like revenue and conversion rates alone--you need to be proactive about keeping your customers happy. Here are seven ways to do it:

1. Learn How to Measure Customer Satisfaction

You can't improve something you can't measure, so your first step should be learning to measure customer satisfaction. There are several ways to understand how customers feel about your company and service. For example, you could implement a telephone survey at the end of every service call, monitor online feedback, or create a web-based customer satisfaction survey to send to all of your clients. Keep in mind, though, that only those with strong opinions will take the time to answer, so you'll want to account for potential bias in the responses you receive.

2. Be the Expert

Your customers need your help--that's why they hired you or purchased your product. So take the time to position yourself as the expert in your chosen field. This does NOT mean spending all your time telling customers how amazing you are--it means honestly educating and empowering your customers to understand your industry in a new way. If you come across as an expert they can trust, your customers will be thrilled to work with you.

3. Use Live Chat to Offer Excellent Customer Support

Customers hate phone trees. It's a fact of business, and offering an alternate way to contact you can make a big difference in the level of customer happiness you see in 2015. One of the best ways to offer real-time customer support is to offer online chat straight from your website. This allows a customer to bypass all the button pushing and get directly to an agent who can help them. If you don't already offer live chat, put it on your agenda for 2015.

4. Make Sure Your Entire Team is Managing Customer Satisfaction

It's a mistake to think that only the customer service department impacts customer happiness. Everyone on your team--from the accounting department to HR to sales--plays a role in making sure the customers are happy. Remind your staff of this frequently, and make sure all of them understand their role in boosting customer satisfaction in 2015.

5. Keep Employees Engaged

Keeping your employees engaged and inspired is the only way you can count on them to do the same for your customers. A recent Gallup study demonstrated that workgroups with higher levels of employee engagement had a higher number of highly satisfied and loyal customers. In the federal agency studied, this equated to $1 million in additional revenue. Never forget that your team is the heartbeat of your organization--and if they aren't happy, they won't give great service to your customers.

6. Build Key Performance Indicators (KPIs) Around Customer Satisfaction

One way to keep your team focused on your mission to make customers happy is to tie their goals to it in a tangible way. This can be done with "a stick"--that is, with punishment for poor results. However, you may find that your employees respond better to "a carrot," or a reward for meeting a customer satisfaction goal. Whether it's a financial bonus, a special parking place or some other treat, find a way to reward employees who have excellent customer satisfaction results. Doing so will encourage others to strive for greatness as well.

7. Follow Up on Customer Satisfaction Measures Often

If you identify issues from your customer calls, satisfaction surveys or online reviews that you want to address, take the time to follow up and make sure that customers are satisfied with the changes you've implemented as a result. Measuring and responding to the key drivers of customer satisfaction is an ongoing process, not a one-time event.
If you want to boost customer happiness in 2015, you'll have to take purposeful action. Determine how you want to measure customer satisfaction, be the expert, offer live chat, get your team on board and keep them engaged and accountable. In the end, remember that the pursuit of customer satisfaction is a process. As long as you implement a plan and stay the course, you should see consistent improvements in your brand's reputation.

Thursday, 5 March 2015

How to measure content marketing ROI



With content marketing budgets rising rapidly, the onus is on marketing directors to demonstrate that these dollars are being designated to the right discipline.
However, research from the Content Marketing Institute has revealed that less than a third of organisations believe they are successful at measuring the return on investment of their content marketing efforts.
Of course, this problem isn’t unique to content marketing - ROI measurement is problematic across the entire marketing discipline, despite growing demands for measurability. As Richard Armstrong, CEO and co-founder of Kameleon, notes: “Measuring campaigns is a difficult concept. If there was a set of numbers that truly showed marketers how well they were doing in their campaigns it would revolutionise the industry. Revenue may show the short-term impact, but it doesn't take into account the long-term effects such as consumer loyalty.”  
However, there is no question that there are some challenges that are specific to content marketing measurement. And as it is rising in prominence, these need to be addressed.
“Many businesses are using content marketing to raise their profile and build awareness and this has historically been a difficult thing to measure,” says Sarah Gavin, European marketing director at Outbrain. “Despite a huge increase in the volume of content marketing, it still remains a fairly new form of marketing for many brands. As such there are no defined metrics for how to accurately measure the ROI of content. The content marketing industry has also been fairly fragmented, with different tools and technologies making the implementation and tracking of campaigns overly complicated.”
If content marketing is to continue to gain prominence in the marketing mix, businesses will have to get a better grasp of ROI measurement in order to ensure that money is being wisely allocated.
So, with that in mind, what advice can the experts share?
Costs
To accurately evaluate your content marketing ROI, you first have to be able to calculate your costs. Content marketing can be divided into five areas where costs are accrued:
  • Planning. These are the strategic costs associated with establishing business goals and generating personas. These tend to be one-off costs.
  • Ideation. These are the costs associated with generating a flow of ideas such as mapping buyer journeys and identifying the kinds of assets that will be required. Again, these are largely one-off costs.
  • Production. These are the costs associated with creating the content and the editorial plan. These are recurrent costs that will be incurred on every new content campaign.
  • Distribution. These are the costs associated with generating traffic, leads and sales through the promotion and nurturing of the content. Again, these will be recurrent costs.
  • Measurement. These are the costs associated with the evaluation and optimisation of the content.
With production and distribution being the two main contributors to costs, it is worth drilling down into these particular areas.
Stephen Bateman, director and co-founder of Wise Up Media and Concentric Dots, has outlined some useful ways of how to evaluate the costs of producing content.
 “Let’s assume your company decides to publish three solid, 600-900 word blog posts per week, sourced either from experts or from members of the senior team. A solid blog post takes an average of five hours to research, write, and edit.
“Let’s assume that the average annual earnings of a solid blogger (external or on the senior team) are £45,000 per annum. If that person works 45 weeks in the year and 48 hours per week, their hourly cost is £45,000/2,160 hours = £20.83 per hour.
Hourly expense of solid blogger = £20.83 (NB this includes no charges or overhead). Therefore, the cost of one blog post is £20.83 x 5 = £104.15. The annual cost of blogging at this frequency is £104.15 x 3 posts per week = £312.45 x 45 weeks = £14,060.25 per annum (not including overhead).”
Having produced the content, there is also the matter of promoting and nurturing it, whether via activity on social media or through paid promotion via the likes of Google AdWords. Bateman has the following advice for working out promotion costs.
“For the sake of this costing exercise, let’s assume two members of your team each spend 90 minutes on content promotion per workday. Let’s assume that the average annual earnings of a social media / community manager (internal or agency) are £40,000 per annum.
Hourly cost of that person is £40,000/2,160 hours = £18.52 per hour (without charges or overhead). Therefore, the cost of promoting content is £18.52 x 1.5 x 2 (people) x 225 days = £12,501 per annum (in this case there is no paid content promotion).”
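Bateman's two worked examples reduce to a single hourly-cost formula applied twice. A sketch in Python using the figures quoted above (the function name and the 45-week/48-hour assumptions come from the example itself; note that keeping full precision in the hourly rate, rather than rounding to £20.83 first, gives an annual blogging cost of about £14,062 rather than £14,060):

```python
def hourly_cost(annual_salary, weeks=45, hours_per_week=48):
    """Hourly cost of an employee, excluding charges and overhead."""
    return annual_salary / (weeks * hours_per_week)

# Production: three blog posts per week, five hours each, 45 weeks a year
blogger_rate = hourly_cost(45_000)                # ~£20.83/hour
annual_blogging = blogger_rate * 5 * 3 * 45       # ~£14,062 per annum

# Promotion: two people, 90 minutes each per workday, 225 workdays a year
promoter_rate = hourly_cost(40_000)               # ~£18.52/hour
annual_promotion = promoter_rate * 1.5 * 2 * 225  # ~£12,500 per annum

print(f"blogging: £{annual_blogging:,.0f}, promotion: £{annual_promotion:,.0f}")
```

The same formula extends naturally to the planning, ideation and measurement cost buckets listed earlier: estimate the hours each consumes, multiply by the relevant hourly rate, and sum.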
Using these approaches to evaluating costs in production and promotion, as well as adding in costs for measurement and reporting and so on, you can then arrive at a good idea of what costs you are accruing and what activity you need to produce in terms of leads, sales, etc to ensure that you get a positive return. And this brings us to the next stage.
Setting up analytics to estimate your returns
At its most basic level, the principle behind identifying the value of content marketing is identifying and segmenting users who have viewed particular content, categorising their previous familiarity with your brand, and then understanding those users’ behaviour by comparing it to that of users who have not been exposed to the content. While this sounds straightforward, this type of analysis can be difficult if you’re not sure what you’re looking for and how to set up analytics to determine that outcome.
“If you want to measure your content marketing ROI but in a way that gets you the most actionable results, you should plan how you’re going to distinguish the users exposed to your content, then track and compare them with users who haven’t,” advises Tom Wood, director at onebite.
“Google Analytics (GA) can help with this; however, most people believe that GA setup finishes once you have implemented the JavaScript. This is untrue - social media links, campaigns, blog posts and other web collateral on your website need to be tagged properly, or viewers of the content will appear as direct traffic, which does not give you insight into how well a piece of content is performing. Additionally, content URLs need to be stripped of parameters, otherwise they appear multiple times – making the data much more difficult to collate.”
“To counter this, before beginning any campaign or uploading content onto your site, spend a few hours configuring Google Analytics and ensure that all pages and links are properly tagged and tracked – the results will be well worth the investment of your time.”
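Wood's point about stripping parameters can be illustrated with a small helper. A hypothetical sketch in Python (the URLs are invented; in GA itself this is typically handled with view settings such as excluded query parameters rather than code, so this only demonstrates why un-stripped URLs fragment your data):

```python
from collections import Counter
from urllib.parse import urlsplit, urlunsplit

def canonical_path(url):
    """Strip the query string and fragment so that tagged variants of
    one page (utm_source, utm_medium, etc.) collate as a single entry."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path.rstrip("/") or "/", "", ""))

# Three raw pageview URLs that are really the same piece of content
pageviews = [
    "https://example.com/blog/post-1?utm_source=twitter",
    "https://example.com/blog/post-1?utm_source=newsletter&utm_medium=email",
    "https://example.com/blog/post-1/",
]
print(Counter(canonical_path(u) for u in pageviews))
```

Without canonicalisation each variant would be counted separately, understating the performance of every individual piece of content.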
By conducting analytics into who is viewing your content, you are able to see not only the quantity of people viewing your content but also make qualitative judgments as to whether you’re targeting the right people. This makes judging the ROI of your content much more reliable, says Wood.
“If you want to make the most out of analytics, use content grouping on GA. This is a fantastic tool that allows you to create buckets of site pages and analyse them together rather than separately – making it quicker and easier to make a judgment. It’s pretty straightforward to set up, but depending on the size of your site you may have to implement some additional coding to get the content grouped properly.
“If you’re an ecommerce site then, on a basic level, you can group category pages, product pages, and content pages rather simply. However, if your content is generated at a campaign level, then this allows you to put together content groups for upcoming campaigns in advance. Grouping categories, landing pages or blog pages that form the basis of your content marketing campaigns means you can single out the performance of the content and measure it across benchmark categories that are not linked to your campaign.”
Wood emphasises that to fully understand the analytics you generate from your content, you must first have a complete understanding of who you are trying to target, and most importantly, all the steps that a user might take before becoming a fully-fledged customer. This means not just measuring the bigger pieces of content but also the smaller content interactions that happen on your site, such as newsletter signups, content engagements or any other user actionable functions on your site.
“Once you’ve started to measure these actions, you are able to set goals and start building a more accurate view of what your user looks like.”
Holistic approach
In terms of a more holistic approach, Danyl Bosomworth has composed a simple matrix of ideas to help set KPIs or metrics, providing a useful framework for selecting the best type of KPIs for different markets. This model covers the full range of measures, from hard sales to softer engagement metrics.
“Using these different types of measures is even more important for companies with long purchase or repeat purchase cycles, such as automotive, furniture or maybe PCs – and certainly for B2B businesses,” he suggests. “The longer buy cycles require the need to demonstrate ongoing engagement.”
The three measures it covers are:
  • Commercial measures: These are the harder business or commercial measures, and usually take the longest to become demonstrable. These are the measures for senior managers, although they may well also need to know about Likes! Think audience share, sales, leads or at least clear indicators from people, such as satisfaction ratings or the percentage who fed back. Remember that these need to be incremental and ideally attributable to your content marketing. 
  • Tactical measures: These include the views, clicks and interactions with your content – so they involve social shares such as Likes and Tweets. You might also use link-shortening tools to help measure. These are hard indicators that your content is visible and worth sharing – making them key.
  • Brand measures: These are easier for bigger brands or where there’s less competition, simply because the tools seem to work better in that space. Think brand or key-phrase mentions, sentiment, share of market mentions over competitors and certainly site traffic. These are the bigger needles to get moving and often require a bit more momentum.

Armstrong provides some final brief rules of thumb for ROI measurement.
“In a nutshell, marketers need to put a number on it,” he says. “By clarifying the objectives that need to be achieved prior to planning the creative and distribution, they will ensure that the overall campaign is quantifiable.
“Don’t wait until the end of the campaign, real time insight is vital to optimise and make adjustments whilst the campaign is still live.
“Ensure all communication activity is trackable against the objectives. On social for instance, free tools such as Google, Facebook and Twitter analytics provide a wealth of information on how your content is being consumed and whether those targets are being hit.
“Measurement needs to be tailored according to the objectives of the content. If a content programme targeted at brand awareness is implemented, brands need to be looking beyond content consumption metrics such as video views and page views.”

Sunday, 7 December 2014

Metrics for Assessing Human Process on Work Teams




by John W. Bing, Ed.D.

Management has been defined simply as "getting things done through others."1 Managing and leading complex organizations is challenging, at least in part because the most utilized tools to assess management's approaches are end-point financial measures. Reviewing quarterly profit/loss statements as a guide to management's skills is a bit like measuring a doctor's skills by the apparent health of the patient rather than reviewing the full set of analytic results which more effectively predict health or illness. We know too well that reliance on short-term profit results is no certain indicator of the long-term health of a company. We need other measures, measures that assess both the application of specific managerial approaches and policies in addition to the output measures of financial returns. In so doing, we increase the opportunities open to managers to understand and improve the effects of their policies and approaches.
I suggest in this article that measurement and monitoring of work teams over time is a crucial way for organizational leaders to both support improved team performance and to measure, through the aggregation of human performance metrics on a digital dashboard, changes in team performance and, in turn, to relate these measures to bottom line, financial changes.
CHARACTERISTICS OF WORK TEAMS
Work teams are the backbone of contemporary work life. Executive teams run corporations. Project teams create new products and services; matrix teams are involved in the development of everything from pharmaceuticals to the delivery of services in consulting firms and charitable agencies. Marketing and sales teams deliver products and services to customers. Except in the most traditional of organizations, for example sometimes in governmental organizations in which highly structured departments remain, teams are essential to the way organizations carry out their work.
Global Teams are a special genre of teams. Most of the examples in this article are drawn from global teams. The notion of global teams would have been thought at best unlikely until the last two decades. Within that time, communications infrastructures began to provide efficient support for synchronous and asynchronous contacts between distant employees, and corporations began to realign their workflow through those individuals and those geographical areas most likely to increase efficiencies and productivity. Thus, virtual teams and global teams were created to allow companies to improve competitive advantage. Such teams, however, provide managerial challenges, as the imperative to "get work done through others" is different when the "others" are not found around the proverbial water cooler but are embodied in bits and bytes on computers and telephone exchanges. With increasing distance between team members, it is much more difficult to build trust, which underpins successful teams.2
Cultural and linguistic differences also play significant roles in mediating communications on global teams. Although the business of most global teams is conducted in English, there is typically more than one language natively spoken by members of a global team. These members may have more difficulty expressing themselves in spoken or written English than do their native-English-speaking colleagues.
Cultural differences are more subtle intermediaries. In one project monitored by ITAP International involving a pharmaceutical team located both in the United States and in Spain, there were numerous complaints from the American team members because their Spanish colleagues copied e-mail messages to their bosses, which the Americans perceived as undermining forthright communications. In a meeting convened to work through issues uncovered in measurements of team process,3 the Spanish members of the team asserted that it was their responsibility to inform their supervisors of the progress made by their team; to do otherwise would be negligent.
Dutch pioneer Geert Hofstede measured cultural differences through a five-dimensional system.4 The dimensional set which emerged from his research - Power Distance, Uncertainty Avoidance, Individualism/Collectivism, Masculinity/Femininity, and Long/Short-Term Orientation - is the most researched body of quantitative cross-cultural research in the literature. The Spanish scores for one of these dimensions, Power Distance, a measure in part of "subordinates' fear of disagreeing with superiors and of superiors' actual decision-making styles,"5 are significantly higher than those for the United States. That specific difference was the root cause of a number of communications problems within this team. As another example of Power Distance issues, the American department head, who supervised the team, thought that the Spanish supervisor controlled the Spanish contingent too tightly; the two leaders had a number of meetings before this issue was at least understood, if not resolved. In circumstances in which one culture dominates another by virtue of the authority of the home company, such differences are often hidden or suppressed but no less significant, and are surfaced through the application of process metrics or, less fortuitously, through subsequent team malfunctions.
MEASURING TEAMS
Teams are a bit like people in their complexity and types; and although progress has been made in measuring individuals with respect to their type, development and capabilities, the science of team metrics is in its infancy. This is all the more notable in that teams could well be the natural units by which top management might most efficiently determine the effectiveness of their policies and leadership; yet management has rarely attempted such measures. Reorganizations and restructurings are common; yet how many have been implemented in which appropriate pre- and post-measures were taken to determine the specific areas that require redevelopment and restructuring and, most important, whether such efforts have achieved the desired results?
Indeed, much of the reorganization and restructuring of organizations, which is enormously expensive in terms of time and other resources, might well have been avoided had diagnosis of problems been undertaken and smaller-scale changes made. In addition, in the contemporary organization, teams are perhaps the most appropriate unit around which to make complex and ongoing measures of both human process and productivity. As Jones and Moffett write, "an effective measurement system gives work teams the same kind of business data once used only to manage entire organizations."6
Lacking such measures, it seems quixotic at best and malpractice at worst for management to reorganize work units.
The lack of team output measures also accounts for the difficulties that management has in rewarding teams. Although we have 360-degree measurement (typically in Western corporations) to measure the development of managers, very few team measures exist to support rewards systems for teams. Most team rewards are based on individual assessments rather than team effectiveness.
Management's inattentiveness to such matters is, I believe, a constraint on improving the capacity of complex organizations to become more effective as learning organizations. For if management is not focusing on measuring the effectiveness and productivity of their organizations beyond financial measures, it is difficult to determine which parts of the organization are functioning well, and which are not, beyond a gut reaction. Such measures have the additional value of providing specific diagnostics, and therefore appropriate interventions, when teams falter.
TYPES OF MEASURES
There are two types of team measures to be discussed here, those of team process and those of team output or team productivity.
Productivity Measures
Some types of teams lend themselves to such measures more easily than others, and some measures are more typically applied than others. For example, a top management team might be measured appropriately by shareholders by means of the general productivity of the company in terms of its profit or loss over time; at the same time, however, measurements of the company as a learning organization are also important indicators of its capability to maintain or increase productivity, and these are more difficult to make. At the shop floor level, measures are usually easier and can be made in terms of the number of units produced at specific levels of quality by work teams, as well as the unit's safety record over time, absenteeism, and so on.
Jones and Moffett, in a case study on measurement on work teams, list four common measures on such teams: productivity, quality, cycle time, and on-time delivery. They then note that:
To establish ownership, teams customize their measurement system in four ways. First, they can add a measure of their choosing that reflects the team strategy. Second, within limits they can determine the weighted importance of the measures to reflect their own thinking about strategy. Third, within limits they can set their own performance standards for each measure. Fourth, in some cases they can influence how a measure is calculated so that it comes more under their control.7
However, measurement often stops at the factory level.
While it may be more difficult to define appropriate metrics, various human resources department teams could be measured: recruitment, on the length of time required to fill specific positions, matched with supervisor satisfaction with the successful candidates and the costs involved; payroll, on the cost of payroll per employee; and training, on supervisor satisfaction with the skills provided to subordinates.
Human Process Measures
Human process measures are likely to be precursors to productivity changes. Why? If communications fail or are marginal on a team, it is likely that productivity drops will not be far behind. If objectives are not clear, then how can a team reach them? If roles and responsibilities are muddy, how can the team efficiently carry out its work? If trust is lacking, how can a team work together? An American general who commanded troops in the first Gulf War commented: "We had an unusually strong team, and trust was a major factor.... You need people schooled and trained in their own type of warfare, and then you need trust in each other."8
Although the statements above have face validity, management is often reluctant to carry out measures to determine comparative effectiveness in these basic areas; in such resistance there may be many missed opportunities.
COMPARING HUMAN PROCESS WITH OUTPUT ON TEAMS
Although this article is not focused on the measurement of team productivity, or outputs, human process metrics should be combined with team output measures to show how changes in team human processes correlate with changes in output. Of course, correlations do not prove causation, but repeated correlations over time and across different teams will eventually build a solid base for viewing process measures and interventions as fundamental to improving team productivity.
A decade ago, a Swiss-based pharmaceutical company asked ITAP International to develop a method for measuring process performance on three global teams. The teams were composed of scientists from Europe, the Americas and Japan. They met four times over a period of two years, and continued their team responsibilities during the intervening periods. Their purpose was to create procedures to reduce research and development time in three drug delivery areas: oral, skin and subdermal. Because the teams were given similar assignments and had similar composition, they became ideal candidates for studying differences in human processes on global teams and for comparing results across the three teams.
A questionnaire was developed to measure human process on these teams, and was administered six times over the two-year period that these teams met together. At the end of the two years, specific questions from this questionnaire, now called the Global Team Process Questionnaire™ (GTPQ), were compared with peer rankings provided by the participants. These correlated positively; in other words, the highest peer-ranked team also had the highest GTPQ results on the questions tested.9
In Figures 1 and 2, process questions are correlated with the peer-ranked productivity of each team. In Figure 1, team objectives were compared with quality of output; in Figure 2, perceived communications levels were compared with peer-ranked outputs. In both cases, there were straight-line correlations between process and output.
Figure 1. Team Objectives vs. Quality of Output.
Figure 2. Perceived Communications Levels.
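The straight-line relationships in Figures 1 and 2 are simple correlations between a process score and a ranked output. As an illustrative sketch (the article's underlying data are not published, so the numbers below are hypothetical), such a comparison can be computed as a Pearson correlation:

```python
# Illustrative only: hypothetical per-team averages for a GTPQ process
# question (1 = best, 6 = worst) compared with peer-ranked output
# (1 = best, 3 = worst). A strong positive r means worse process scores
# track worse peer rankings, i.e. process and output move together.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gtpq_scores = [1.8, 2.6, 3.4]   # hypothetical teams A, B, C
peer_rank   = [1, 2, 3]

r = pearson_r(gtpq_scores, peer_rank)
print(f"r = {r:.3f}")
```

Repeating such a computation over many teams and iterations is what builds the evidence base for the process/output link described above.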


Team Process Examples
Let's review examples of the measurement of human process on teams. The teams listed were all (except in one case) measured by the Global Team Process Questionnaire™, mentioned above, whose dimensions have been construct-validated and which has proven reliable in repeated applications.10
The following examples are all taken from assessments of teams within the pharmaceutical industry, which makes extensive use of global teams in basic research, statistical analysis, product development, clinical development, regulatory affairs, operations, and marketing and sales. During various stages of drug development and testing, many of these functional areas are represented on cross-functional coordinating teams supported by single-function teams; each functional team may support a number of coordinating teams. Many of these teams have member representation from more than one country.
FAILING TO APPROPRIATELY CHARTER A TEAM
One of the most important factors in a team's success is the initial chartering of a team. Chartering is, essentially, providing the internal and external objectives and structure for newly created teams and providing team members with an understanding of their roles and responsibilities. If this step is not appropriately provided, many problems can develop in the future activities of the team.
Figure 3 is the executive summary for a team that had not been appropriately chartered. The executive summary contains five dimensions; this is the domestic version of the GTPQ, called the Organizational Team Process Questionnaire™ (OTPQ), which lacks the GTPQ's additional dimension, Culture and Language.11 Note that in all five dimensions this team lags the pharmaceutical industry averages that we maintain.12 In this summary, 1 is the "best" score and 6 the "worst," so the lower the score the better the outcome.
Figure 3. Executive Summary for an Inappropriately Chartered Team.
Figure 4 is a spidergram, in which each letter on the diagram represents a team member's score for this specific question - "Are the objectives of your team clear?" Only one of the nine team members indicated an understanding of the objectives; the remaining eight did not. This is another symptom of failed chartering and indicates that the team must go through a chartering process if it is to form the basis for teamwork.
Figure 4. A Spidergram.
ADDITION OF NEW MEMBERS INTO AN EXISTING TEAM
Over time, productive teams develop a sense of trust and a common approach to work, which can be disrupted when members transfer out or in. The larger the percentage of team members lost or gained, the larger the consequence.
Figure 5 shows data on four questions taken after the induction of 11 new members and a new team leader (with only three prior members remaining). All four questions show a distinct difference in perspective between the old team members and the new members on the issues of team leadership, clarity of objectives, communications and trust. (In this table, 1 = best possible score; 6 = worst possible score.) This is clearly a case in which, based on the data, a new team leader combined with a large influx of new members requires a team rechartering effort; otherwise, the old members will remain a disaffected group within the larger team.
Figure 5. Team Scores after Induction of New Team Members and Leader
Question                            Avg. Score,    Avg. Score,    Avg. Score,    Difference
                                    All Members    Old Members    New Members
Effectiveness of team leadership    2.50           3.33           2.22           -1.11
Clarity of objectives               2.67           3.33           2.44           -0.89
Communications                      3.00           3.67           2.78           -0.89
Trust                               3.00           3.67           2.78           -0.89
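The "Difference" column in Figure 5 is the new-member average minus the old-member average for each question; a negative value means the new members rate the team more favorably (remember that lower scores are better). A minimal sketch of that arithmetic, using the values from the table:

```python
# Recomputing the "Difference" column of Figure 5.
# Values are from the article's table; 1 = best possible score, 6 = worst.
averages = {
    # question: (old-member average, new-member average)
    "Effectiveness of team leadership": (3.33, 2.22),
    "Clarity of objectives":            (3.33, 2.44),
    "Communications":                   (3.67, 2.78),
    "Trust":                            (3.67, 2.78),
}

for question, (old_avg, new_avg) in averages.items():
    diff = round(new_avg - old_avg, 2)
    # Negative: new members perceive the team more favorably than old members.
    print(f"{question}: {diff:+.2f}")
```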

DEGREE OF MANAGEMENT SUPPORT
Management support is an external but powerful force that can either buttress or retard the productivity of teams. Figure 6 is a comparison of two teams, one with perceived management support, and one without such support.
The top spidergram is tight, indicating a team that acknowledges and appreciates management support. The diagram on the bottom shows a team that as a whole believes that management support is lacking, although there are strong differences of opinion. In an intervention based on this data, a facilitator would probe the team for the reasons behind the perceived ambivalence of management support. Management, in turn, would have data indicating that their support was either not communicated or not sufficiently provided.
Figure 6. Comparison of Teams With and Without Management Support.
THE EFFECT OF MERGERS ON TEAMS
There are other outside influences on teams. In an early version of this measuring instrument, three teams were tested over six iterations on a variety of team process issues. On the first five iterations, there were changes in the process effectiveness of the teams, although team B was clearly the leader overall. However, on the sixth iteration, all three teams fell (see Figure 7). At first, we were puzzled. We had expected that our interventions would improve team process and performance. Were our interventions faulty? Then we realized that the steep decline in all three teams was caused by the effects of a merger with a larger pharmaceutical company, announced just after the fifth iteration of the questionnaire had been administered. As is often the case in mergers, team members expressed concern for their jobs and positions within the smaller firm with which they were associated. This provided support for the notion that the metrics were reflecting the impact of "real world" events.13
Figure 7. Effects of Mergers on Teams.
KEEPING A TEAM ON TRACK
A corporate leadership team, which had consistently improved over four iterations, showed a noticeable fall-off in its fifth-iteration GTPQ results. At a team meeting, a facilitator helped the team discuss the issues that had drawn the most marked negative responses and the biggest drops from the previous iteration (see Figure 8). The team was then able to identify common themes - direction, ownership, deployment, communication and cooperation - which it wove into the strategic planning discussions held throughout the remainder of the off-site meeting. The question shown in Figure 8 is only one of the 28 in the GTPQ, but the aggregated team results showed the same fall-off in human process effectiveness. Again, this illustrates both the kind of results obtained and how the information can be used to quickly investigate performance declines.
Figure 8. A Specific Question from the GTPQ.
CONCLUSION
Process and outputs are inextricably linked on teams.14 Future research will indicate whether human process measures can forecast productivity changes or whether these factors are correlated in other ways. In either case, metrics that allow managers to drill down to specific problems with processes on teams will allow interventions in the most timely and efficient manner, and thus improve or maintain superior team performance.
Similarly, when collective measurements from many teams are aggregated and compared, for example as part of digital dashboards, managers will be able to look both at process changes on individual teams and at cumulative data for departments and divisions.
This will lead to improvements at the individual team level, as well as to cumulative changes across teams, as a way of assessing management performance. It offers an alternative to reorganizing departments or companies, without pre- and post-data, in the hope that such changes will improve financial performance. Large-scale change is not always beneficial, although it is sometimes necessary for strategic business purposes. Knowing the difference between change for its own sake and change for a specific set of objectives may save companies both time and money and increase shareholder value.
ENDNOTES
1 Hofstede, Geert. "Business Cultures" in UNESCO Courier, April 1994, V. 47, p. 12.
2 Asherman, Ira; Bing, John W., and Laroche, Lionel, "Building Trust Across Cultural Boundaries" originally published in Regulatory Affairs Forum and available online here.
3 The metrics approach used in the cases illustrated in this article were provided through the Global Team Process Questionnaire™, created and utilized by ITAP International.
4 The Hofstede cultural data has been taken from Geert Hofstede, Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations across Nations, Second Edition. Sage Publications, 2001.
5 Hofstede, page 79.
6 Jones, Steve, and Moffett, Richard G., "Measurement and Feedback Systems for Teams," in Sundstrom, Eric, and Associates, Supporting Work Team Effectiveness: Best Management Practices for Fostering High Performance. Jossey-Bass, 1999, p. 157.
7 Jones and Moffett, op. cit., p. 159.
8 Quoted in Mathieu, John E., and Day, David V., "Assessing Processes within and Between Organizational Teams: A Nuclear Power Plant Example," in Brannick, Michael T., Salas, Eduardo, and Prince, Carolyn (eds.), Team Performance Assessment and Measurement: Theory, Assessment, and Applications. Lawrence Erlbaum Associates, 1997.
9 Bing, John W. "Developing a Consulting Tool to Measure Process Change on Global Teams: The Global Team Process Questionnaire™" (Page 4), proceedings of the 2001 national conference of the Academy of Human Resource Development and available online here.
10 Bing, op.cit.
11 The dimensions were created based on literature research and factor analysis of question responses.
12 The database is maintained by ITAP International and is the collection of responses from over 100 teams, many followed through multiple measurements.
13 Bing, op.cit., p. 7.
14 For other examples of the relationship of process and productivity of teams, see Bing, John W., "The Relationship between Process and Performance on Teams".
John Bing is the chairman of ITAP International, a consulting firm with operations in Europe, the U.S. and the Asia Pacific region. His consulting experience spans the Americas, Europe and Africa; the pharmaceutical, consumer product, and information technology industries; and United Nations agencies. He is the designer of ITAP International's Team Process Questionnaire family of consulting instruments and is developing a new version of the Culture in the Workplace Questionnaire™, originally created by Geert Hofstede. He has published papers and provided presentations at numerous professional conferences, including the American Society for Training and Development (ASTD) and the Academy of Human Resource Development, and recently co-edited (with Darren C. Short) a volume entitled "Shaping the Future of HRD" in the Advances in Developing Human Resources series. Mr. Bing was a founding member of SIETAR (the Society for Intercultural Education, Training and Research). He is a member of the Research to Practice Committee of ASTD and is the recipient of ASTD's International Practitioner of the Year Award. A graduate of Harvard College, Bing received his Ed.D. from the Center for International Education, University of Massachusetts. He speaks Afghan Farsi and is an avid hiker whose goals include hiking to three of Colorado's 14,000-foot peaks each summer.
This article originally appeared in the International Association for Human Resource Information Management's IHRIM Journal, November/December 2004, Vol. VIII, Number 6.