Teams use the data in the assessment to generate conversations about what is working and not working on the team. The language of the assessment is common sense, accessible language that every team member understands.
Teams frequently tell us, “You speak our language.” They don’t have to learn a new language set, terminology, nomenclature, or complex model; they see the results and can get right to work. Too often with other assessments, it takes valuable time just to understand what the data means, and the language and meaning are quickly lost.
With the TDA, if Trust or Decision Making scores low, everyone knows what the terms mean and it’s easy to remember where the focus is. “What gets measured gets done,” as the old saying goes. It’s still true. Fundamentally, the TDA is a measurement tool. It assesses the team against 14 key scales. The assessment provides metrics, a baseline, and a common language for team members. The TDA gives a portrait of the team: how they interact and how they perform as a collaborative, interdependent group. It is a self-portrait drawn by the team.
The assessment gives the team and team leader (and organization) an answer to the question: “How is your team doing?” Most teams are underperforming. In our initial data sample of 200 teams, fewer than 10% scored themselves as high performing, which leaves 90% of the teams in the organization not reaching their potential.
The TDA was used very successfully with a project team at Johnson & Johnson as the team first began the project. In fact, the first time all 14 team members were in the same room was the day they saw the results of their assessment. The assessment provides a baseline, a starting point for team development. Almost any time in a team’s life cycle is a good time to measure the team against these 14 criteria. It gives the team a place to start the crucial conversations. Even if they haven’t operated as a team yet, they still have some ideas about what they’ve heard, what they assume, or what they want for themselves. The assessment is a baseline, not a judgment of the team. Continuous improvement is a hallmark of high-performing teams and companies. The goal of assessments is to focus on areas for improvement, not on attaining perfect scores.
About a week is long enough. No kidding. If the new team member has attended one team meeting, they have a feel for the dynamics of the team. Also remember: as a new team member, they have a valuable voice to contribute as the “newbie,” which is part of the Values Diversity emphasis.
- Each item on the assessment is a best practice, and therefore it provides training about great teams
- It's a great way to on-board new members to the team
- When everyone participates in the assessment it creates a common shared experience across the team
- It will help new team members participate fully because their answers are included in the results
- No answer is wrong; a new team member can make assumptions based on the job interview process, the reputation of the company, or what s/he has heard about the team, even without personal experience of it
- The data is not a judgment; it simply creates a new conversation about new topics in a new way
Here we use the word “team” in two ways. First, a narrow definition: a small group of interdependent members with a clear common purpose. That purpose would not be accomplished without the contribution of all team members. Typical team size is 6-9. When teams get larger than about 10-12, look for “nested” teams, or sub-teams within a larger team. You may decide to do the assessment with the large team or do assessments on each of the sub-teams. It depends on where you want team members to have their attention; they will answer the assessment items with only one team perspective in mind. You choose where you want the focus. The assessment has been used with meta-teams (very large “teams”) of as many as 54. Clearly there were nested teams within, but the intention was to have all 54 see themselves as part of the larger whole, rather than the smaller segments.
There was a time when team building activities, especially off-site adventures, were very popular. Teams went to the woods for ropes courses, had a powerful experience, and then came back to the office and nothing changed. The work we do is change over time, not a one-time event.
Our coaching methodology improves team performance by 20%. A simple way to measure ROI is to take the total payroll and expenses of your team, multiply that by 20%, and compare that number to the total cost of your team coaching engagement. This may seem simple, but thousands of teams have done this and been extremely satisfied with the answer.
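The back-of-the-envelope ROI calculation above can be sketched in a few lines. The dollar figures here are purely hypothetical placeholders; substitute your own team's actuals.

```python
# Hypothetical illustration of the ROI estimate described above.
# All dollar amounts are made-up placeholders, not real client data.
team_payroll_and_expenses = 1_500_000  # annual payroll + expenses (USD)
expected_improvement = 0.20            # the 20% performance improvement claim
coaching_cost = 75_000                 # total cost of the coaching engagement

# Value of the improvement: 20% of the team's total payroll and expenses.
estimated_value = team_payroll_and_expenses * expected_improvement

# Compare that value to what the engagement cost.
roi_multiple = estimated_value / coaching_cost

print(f"Estimated value of improvement: ${estimated_value:,.0f}")
print(f"Return per dollar spent on coaching: {roi_multiple:.1f}x")
```

With these placeholder numbers, a $1.5M team budget yields a $300,000 estimated improvement against a $75,000 engagement, a 4x return; the comparison scales linearly with whatever figures you plug in.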
Every time we start a team engagement we challenge the team to measure their improvement as a team against a business measure that is important to them: “How will you measure success?” We take a snapshot of the business measure at the beginning of the engagement and at the end, and have the team work on it during the coaching process. While the team is working on their business measure, the team factors will come into play, and the coach can create awareness with the team about how they are showing up. The bottom line is that we want to improve the team factors for the sake of improving business measures.
It is likely that the answer is “yes.” However, we may not have a case study, and we may not have a reference client example. That’s because we keep the company and team information confidential with each facilitator. In rare cases we can ask a facilitator who has worked with clients in the same industry if it is possible to get a company name or even a reference, but permission is typically slow (too slow) and many companies simply decline. In short: don’t count on it. The assessment has been used by more than 1,000 teams worldwide, teams of all types and sizes from every industry, with outstanding results.
The answer is “yes”: the Team Diagnostic assessment has passed the statistical, psychometric analysis for both validity and reliability. Reliability addresses the question, “Is this a reliable measure?” In other words, is it a metal ruler or an elastic one? Does the assessment produce consistent results? Validity addresses the question, “Does it actually measure what it says it measures?”
There are many excellent assessments available on the market today. You probably know the names of many: MBTI, DISC, FIRO-B, the Leadership Circle 360, Five Dysfunctions of a Team, Belbin Team Roles, and the list goes on.
The distinguishing question: is this a true team assessment, one that assesses from the team point of view, or is it a collection of individual assessments of team members aggregated into a team view? Knowing their personal preferences, decision-making styles, and so on is useful information for team members, and it will help them work together more effectively. However, an aggregate of individual assessments is not the same as an assessment of the team as a system.
The Team Diagnostic assessment and report gives the true team view, one that they have never seen before. Many facilitators use a combination of individual assessments with the TDA to give team members the benefit of both. An easy way to check: do the items in the assessment ask for an individual perspective or a team perspective? All of the items in the TDA are written from the team point of view. For example: “On our team we have clear goals and strategies to achieve them.” In the Team Diagnostic model we include this awareness of individual differences, and how it impacts collaboration, under “Values Diversity.”
It’s important that team members appreciate the strengths of diversity on the team, and it is one aspect of Positivity. Most assessments focus on what we would call the Positivity side of the equation: interpersonal relationships, emotional intelligence, social intelligence, or leadership attributes. These qualities are important, obviously; that’s why there is a Positivity side to the Team Diagnostic model. How team members interact has enormous impact on the culture of the team and on the team’s ability to achieve its goals. One of the distinguishing differences between the TDA and other assessments is that we put equal weight on the Productivity side of team performance, because teams do!
When a team asks the question, “How are we doing?” they want to know how they’re doing in the areas of team performance they are most familiar with and where they are often measured: alignment around mission and purpose, accountability, ability to make good decisions, manage resources, and so on. From the team’s perspective, these attributes are just as important, more familiar, and may even be seen as the more important ones. It’s easier to talk about “our decision-making process” than about “how we engage, or don’t engage, in conflict on our team.” And yet our experience shows that whatever the team chooses, even if they choose to work on team accountability on the Productivity side, all of the 14 factors will be in play.
The Team Diagnostic model uses the everyday language that teams already use. It is common sense, accessible, and allows teams to see their results and get right to work. With other assessments, they need to learn a coded language first and understand its distinctions before they can talk about the essentials. One of the things we often hear from teams is, “You speak our language.” One difference often overlooked: the Team Diagnostic assessment can be customized for individual teams using the open-ended questions at the end of the assessment, so the assessment serves as a targeted survey of the team.