FAQ for Administrators

The Human Patterns Inventory offers a comprehensive, personally tailored consultation by a trained administrator about an individual's preferences and interests.


Help and support for Administrators


Who administers Human Patterns?

The Human Patterns Inventory is available to coaches trained in psychometrics and to Human Resources professionals within organizations. Prices are set by the professionals delivering interpretations, as part of their own configurations of consultation and fees. For more information, contact us.



What are the benefits of being an Administrator?

Knowledge! The Human Patterns Inventory is a comprehensive assessment. Our training is continuous and provides hands-on, case-based practicums in interpretation. Administrators also gain access to other consultants with broad and deep experience in coaching, development, and Human Resources.



How do I customize the report with my logo and/or tagline?

To customize HPI reports with your branding, your logo is uploaded when your account and dashboard are set up in our system. JPG, GIF, or BMP are acceptable formats. Contact us for more information or with any questions.



How do I merge one report with another?

Merging reports is currently under development. In the meantime, reports are published as .pdf files; convert your other reports to .pdf, then combine and save them as a single report.



How do I customize the results report?

To customize HPI reports with your branding, your logo is uploaded when your account and dashboard are set up in our system. JPG, GIF, or BMP are acceptable formats. This option is currently in development; contact us for more information or with any questions.



How do I send an assessment?

Log in through the Administrator login. If you have forgotten your password, click here (under the login) to retrieve or reset it. Once you are logged in, click “Applicant,” select “Add Applicant,” and enter the applicant’s name and email address. Click Submit and the assessment will be launched. The applicant will receive an email with a personal login to complete the inventory. You can also contact us and we will forward a link on your behalf.



Where can I access educational materials for the HPI?

Educational materials support your interpretations.  You can access documentation and support materials as well as white papers and other reference information in the Resource section of the website.  Additional materials are available directly to administrators upon request.



Are volume discounts offered?

Not usually. For non-profits we sometimes make an exception on a case-by-case basis. Please contact client services for more details. Email: info@humanpatterns.com



Can I give away free reports?

We do work with Administrators to offer sample administrations for prospective customers; the HPI sells itself with your delivery. For more information on offering complimentary reports, please contact client services. Email: info@humanpatterns.com



How do I learn more about the Human Patterns Inventory?

Please call 919.740.5010 to speak with a Human Patterns support person, or CLICK HERE to learn more about becoming an Administrator.



What is benchmarking?

Benchmarking gives businesses the ability to compare current and new team members against job profiles and against work group or “culture” markers for a company. To learn more about job benchmarking, view our Resource section.



What is talent management?

Talent management is the process of attracting and developing the right people for jobs in order to meet current and future business objectives.  The responsibility of talent management falls across the entire organization. Coaches play a special role in developing talent.



Which report should I use?

HP currently has four reports. Most administrators use the Administrator’s Number Sheet to prepare for coaching sessions and frame coaching interventions. The HPI Full Report is a 42-page report in which the person who completes the instrument views their results as a series of graphs, with explanations of the terms under each graph. The General Overview is a short summary page, but it is the least specific and calibrated because its sentences are produced by cut-off scores. The Sequence Report presents the standard deviations in the order of the graphs and also includes specialized category names for experts in FIRO, MBTI, and DISC. Coaches streamline their presentation of the data depending upon the needs of the client – coaching, selection, placement, mediation of conflicts, or talent management.



What is Full Circle?

Full Circle is a proprietary talent management (human capital management) process provided to organizations that elect a complete talent management and development program. Identifying key success factors and culture markers is the first stage of the process. A competency and capability analysis is then done for each high potential. Next, Human Patterns is administered, followed by an Assessment Center. Each participant in a Full Circle is then assigned a mentor and a coach. The process develops bench strength across an organization to ensure a sustainable culture, proper assignment, and fully leveraged employees.



What are Human Patterns Support Hours?

Human Patterns offers support Monday through Friday from 8am to 5pm EST. The office is closed on major holidays.



How do I stay updated?

Join our Human Patterns Users Group on LinkedIn.



We are a data driven organization. How can we build a data bank of employee resources, profiles for exemplars for some of our positions, and so forth?

Contact us and we will be happy to design data driven systems and processes for your Human Capital agenda.



What are the degrees of confidence in Human Patterns data reporting?

There are distinct degrees of confidence a coach needs to consider in an interpretation. These are listed in order of confidence.

a. The subject’s actual response to one of the 250 items
b. A single directly measured factor score (standard deviation) derived from the item scores; there are 67 of these, appearing on the left side of the administrator’s number sheet up to the Leadership Behaviors set (excluding the set for Type of Information Attended To)
c. A “proactive” or “reactive” score on a directly measured factor, reflected by selection of a “most” descriptor within an item (proactive) or a “least” descriptor within an item (reactive)
d. A directly measured proactive or reactive score in relation to a grouped set of factors related to a discernable model
e. A “proactive” or “reactive” score on an indirectly measured factor/label
f. A sentence in a General Report produced by a formula applied to a single directly measured factor with a cutoff score
g. A sentence in a General Report produced by a formula applied to a single indirectly measured factor with a cutoff score
h. A sentence in a General Report produced by a formula applied to multiple directly measured factors with a cutoff score
i. A sentence in a General Report produced by a formula applied to a combination of multiple directly measured factors and indirectly measured factors with a cutoff score
j. A cluster definition (for Motivation, for Direction of Energy, and for Interpersonal Relations) produced by a formula based on relative values of subsets of factors

For Human Patterns, the data array in a number sheet is the best approach for discerning overall patterns of preferences. The graphic output is the best format for opening up discussion about a topic where the intersection of the scores, the question, and the context of the coachee’s job role, aspirations, or concerns is uppermost. The only reason to refer to the “Average” on a number sheet is to get a sense of how the ordinal rank for a label was calculated; it is just the arithmetic average of the “proactive” and “reactive” numbers for the factor/label.
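
To make that arithmetic concrete, here is a minimal, hypothetical sketch. The factor names and scores are invented; only the calculation itself (average the proactive and reactive numbers, then rank the labels) comes from the description above.

```python
# Hypothetical factor scores in standard-deviation units.
factors = {
    "Sales Interest":    {"proactive": 1.2, "reactive": 0.4},
    "Analytical Detail": {"proactive": 0.6, "reactive": 0.9},
    "Social Contact":    {"proactive": -0.3, "reactive": 0.1},
}

# The "Average" column is simply (proactive + reactive) / 2 for each factor/label.
averages = {name: (s["proactive"] + s["reactive"]) / 2 for name, s in factors.items()}

# The ordinal rank of a label is then derived from these averages.
for rank, (name, avg) in enumerate(
        sorted(averages.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: average = {avg:+.2f}")
```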

The Sequence sheet is just another ordering of the data and includes some of the cluster labels. It is merely supporting information and most interpreters ignore it.

The General Overview Report is the least reliable information and can be ignored in most cases. When a coach needs to offer language to a third party, perhaps when handing off a coachee to a trainer, the General Overview Report can be useful for someone who has no familiarity with Human Patterns but wants some generic information about the coachee, provided the coachee has agreed for the first coach to provide information to the second coach.



What is the history of Human Patterns adoption and use?

Human Patterns has been used for over 25 years in both the private sector and the public sector. Our administrators selected their own clients and their clients tended to be in sectors they themselves were from. Administrators also tended to be skewed toward folks with connections to academic institutions and boutique consulting companies. Because of the cost and the need for a thorough debrief and follow-up consultation or coaching, client organizations tend to be well capitalized and vested in careful selection and development of employees. Since we were pioneers in ipsative instrumentation we worked primarily with academics who were interested in our approach and folks who were already consulting clients of our small network that liked our approach to coaching and employee development. We marketed only through word of mouth until 2013.

In 2006, my interest in marketing Human Patterns took a nose dive because I was fully engaged in developing the framework for what is now called the “data supply chain”– a term I introduced in my first patent application. Distribution of HP then became restricted to folks who were already certified – with very little activity. We were never very widely distributed because use of the instrument was limited to a narrow group of consultants and coaches who completed 3 full days of training and 10 practice administrations before we would allow them to purchase. We also have never had a marketing or sales effort. Much of our training for administrators was in coaching methods themselves. Now that ipsative instrumentation is gaining traction and there are credible credentialing programs for coaches we are more willing to open up distribution channels.



How is it used in employee selection?

We have done over a dozen profiles for particular job roles. Our process is to collect Human Patterns from 30-50 top performers in a role, as measured by other performance indicators such as 360 surveys and performance appraisals (over 3 years). We also collect 10 profiles from folks who are “at risk” but still in the role. We then compare the populations against our baseline scores to determine whether there actually is a profile. We apply algorithms used in “market basket” research, pivoting around the external data points, and take the factors that are correlated with role success or failure to set up cutoff scores or range scores. For instance, we found that profitable VPs with business unit budget responsibility in large construction firms needed to be between -0.5 and 1.5 on “sales interest” to succeed in the role: those whose scores were greater than 1.5 would oversell their projects and have low profit margins, and those whose scores were under -0.5 would undersell and have too few projects on hand. This kind of study is expensive, and we tend to discourage organizations from relying solely upon the results of the study and the instrument – even when a person is significantly out of range – because the need for outliers in a work unit, and for outliers when adapting to changing circumstances, can override the need for clones. Instead we teach internal HR folks a process for “Selecting Naturals” that includes an analysis of the work group, the role, and the candidate pool.
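
As an illustration of how such range scores might be applied in screening, here is a minimal, hypothetical sketch. The role profile, the extra “Planning” factor, and the candidate scores are invented; only the -0.5 to 1.5 range on “sales interest” is taken from the construction-VP example above.

```python
# Hypothetical role profile: factor label -> (low cutoff, high cutoff) in SD units.
ROLE_PROFILE = {
    "Sales Interest": (-0.5, 1.5),
    "Planning": (0.0, 2.0),
}

def screen_candidate(scores):
    """Return, per profiled factor, whether the candidate falls inside the range."""
    return {
        factor: low <= scores.get(factor, 0.0) <= high
        for factor, (low, high) in ROLE_PROFILE.items()
    }

candidate = {"Sales Interest": 1.8, "Planning": 0.7}
print(screen_candidate(candidate))
# {'Sales Interest': False, 'Planning': True}
# An out-of-range score is a flag for discussion, not an automatic rejection.
```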

Human Patterns has found a niche in leadership development programs and as part of a selection process for leadership roles. Our older clients still use it for line employees, but we believe that is because of inertia. Human Patterns becomes part of the process when a developmental agenda that includes self-awareness and coaching support is available to incumbents in a role. We have also built a method that uses a computer program (Winsteps) applying Rasch methods to evolve ordinal scales that might correlate with cultural factors or with key performance indicators, but that work is not yet concluded. If a role is very clearly defined – a bank teller, for example – and the advertisement clearly telegraphs the work assignment, we found that we could reduce turnover from 24-28% to 11-16% (based on 3 years of data).



How was the baseline dataset of 5000 individuals derived?

We sampled two global corporations: a “Big Pharma” company and a global software company. We did not find much difference between the subsets of the sample based on size of organization. We did find differences within the sample based on job roles/assignments. We compensated for the differences in roles by including all members of an entity (except for our subsets of the global companies where we sampled single sizable business units with an independent operating budget). Our sample was collected in Raleigh, NC over 8 years, so it is tilted toward English speakers in a metropolitan environment with Federal, State, and Municipal employees as well as the range of service, manufacturing, research, agricultural, and technology sectors. With a sample size of 5000, we expect to be reasonably representative of the US work force – but we could definitely be better. We have collected subsets of native French and native Chinese speakers. There are cultural differences in the aggregate, but individual variability is retained across cultures. We have a report in Chinese, but we have chosen not to distribute it because we are not confident that the labels or the descriptions of the labels are sufficiently congruent across Chinese and English. This problem emerges for any psychometric that uses language.



Are the standard deviations actual standard deviations?

The numbers reported are actual standard deviations. A reasonable case can be made for normalizing our output, but we chose to keep the actual standard deviations. This has resulted in our bell curves being flatter, and the resulting standard deviation numbers for the population being greater than they would be for a normalized dataset, but it is the individual's relationship to the factor that we most want to report in an array.
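
As a rough, hypothetical illustration of what reporting in actual standard-deviation units means, here is a minimal sketch. The baseline values and the subject's raw score are invented, and the contrast with a percentile-based alternative is generic, not a description of our internal processing.

```python
import statistics

# Hypothetical raw factor scores from a baseline sample, and one subject's raw score.
baseline = [3, 4, 4, 5, 6, 7, 9, 11, 14, 18]
subject_raw = 12

mean = statistics.mean(baseline)
sd = statistics.pstdev(baseline)

# Reported value: how many actual standard deviations the subject sits from the baseline mean,
# without reshaping (normalizing) the baseline distribution first.
reported = (subject_raw - mean) / sd
print(f"reported score: {reported:+.2f} standard deviations from the baseline mean")
```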



Is there a standard debriefing process?

We have a lot of material on the website and in our manual relating to this, but we trust the coach and the coachee to discover and develop their own best process. After all, if preferences matter, both coaches and coachees who operate congruently with their preferences will do better coaching, and respond better to coaching, if the debriefing process is aligned with those preferences. The biggest trap in debriefing is getting into the “weeds” with factors that have low scores (less than a standard deviation). Being able to separate the “wheat” from the “chaff” by having an overview of the patterns is best done by taking in the array of data, the “gestalt” of a Number Sheet. However, for folks who are visual learners and responders, the graph image of their pattern can take precedence.



How malleable are preferences and interests?

Changes in scores of as much as .5 standard deviations are quite common (approximately 40%), even between administrations with little separation in time. Changes between a half and one standard deviation are less common (we have not reviewed this in some time, but it was roughly 12%). As the size of the initial standard deviation number increases, the order of preference of one factor over another – the hierarchy – tends to remain constant for over 75% of the features. Additionally, a person’s “self-definition” scores may flatten or increase. We hypothesize this reflects overarching developmental processes that lead some people toward more definitive preferences and others toward more flexible preferences. As would be expected, decreases or increases in the initial standard deviations will reflect the trend in overall self-definition.

People can and do “decide” to change a feature or preference and engage in therapy or training to change it. Our anecdotal data on chosen change is that it depends on environmental support and the quality of the coaching or therapy offered. Our increasingly strong conviction is that preference and interest patterns manifest themselves clearly after adolescence and then remain relatively constant. We hypothesize that many preferences and interests are a reflection of brain chemistry and that much of brain chemistry is correlated with the action of specific genes. We do note that trauma and crisis can induce major changes in preferences and interests, but even in these cases the change is related to a subset, not the entire array. Answers to this question also require an overarching theory of personality – which we do not have. Our own research leads us to believe that the fundamental distribution of preferences and interests is best reflected by “the big 5” factors, but that there are significant intervening and qualifying factors such as “physicality,” and environmental factors such as a history of successful trust relationships, that impact even “the big 5.” Identifying supplemental, nuanced, and context-specific patterns is necessary to enable a coach or clinician to provide useful information, and Human Patterns is intended to provide that. For coaches who want to learn how other factors that we have not been able to measure through our inventory impact behavior and development, we can offer additional webinars.



Reliability of Human Patterns

1. Cronbach’s Alpha (a measure of consistency, related to both validity and reliability) for the directly measured grouped factors is 81% – but do be aware that the calculation was done with a Bonferroni Correction.
2. Test-retest. We get overall reliability of over 63% for all factors within .5 standard deviations. We get hierarchical reliability (a result counts as positive even if one factor is out of place in the ordinal sequence) of over 78% for all directly measured labels.
3. Parallel forms 1. Dividing the 250-item instrument in half, we get 89% agreement within 1 standard deviation per directly measured label (see the sketch after this list).
4. Parallel forms 2. We get 88% consistency when we divide the instrument into 2 sets of items – one set of the “most” choices, and one set of the “least” choices.
5. Inter-rater. We did run a study of our results against key performance indicators generated via performance appraisal systems. There were interesting and promising implications; however, we were unable to correlate the constructs for key performance indicators and our factor labels sufficiently to have confidence we can claim any degree of validity or reliability for these.
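
As an illustration of the split-half idea in item 3, here is a minimal, hypothetical sketch. The factor names and the scores from each half are invented; only the agreement criterion (within 1 standard deviation per directly measured label) comes from the list above.

```python
# Hypothetical factor scores computed separately from each half of the instrument.
half_a = {"Sales Interest": 1.1, "Planning": 0.4, "Social Contact": -0.6}
half_b = {"Sales Interest": 0.7, "Planning": 0.5, "Social Contact": -1.8}

# Count how many factors agree within 1 standard deviation between the two halves.
within_1_sd = sum(1 for f in half_a if abs(half_a[f] - half_b[f]) <= 1.0)
consistency = within_1_sd / len(half_a)
print(f"{consistency:.0%} of factors agree within 1 standard deviation between the two halves")
```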



Validity of Human Patterns

Caveat – APA has set standards for measurement methods to be used to test for validity and reliability. We have used some of these and are aligned with instruments that are commonly called valid and reliable. However, administrators of Human Patterns should be aware that vastly more complex algorithms implemented by computers capable of machine learning are being developed that will collate Biodata and actual behavior with preferences and interests – IBM’s Watson is an example. These advances will make many current instruments obsolete. Our real and lasting value is in our approach to mental models and offering a data array to invoke a dialogue between a coach and a coachee.
1. Face validity by subject. Of 500 subjects rating the accuracy of the factor labels in relation to their own perceptions, 85% rated the statement “The proportional numbers in this report are accurate” at better than 8.5 on a 10-point scale. 69% also reported that the ordinal rank of the factors within a group of factors was accurate (“The order of my preferences in this report is accurate”).
2. Face validity by external observer. Of 128 observers rating the order of the factor hierarchies, 92% concurred, at better than 8.5 on a 10-point scale, with the statement: “This report correlates with my knowledge and experience of this person over the last 3 years.”
3. Criterion validity has not been tested – not for lack of trying to find a way with our particular design.
4. Construct validity via orthogonal factor-analytic rotations for directly measured factors ranges between 63% and 76%.
5. Construct validity via the constructs offered by the instruments we derived our factors from is 72%, but note that there are only 22 of these among the set of 67 directly measured factors.
6. Note: The instrument has not been revalidated since 1993. Populations change and preferences and interests may have cultural valences that will surface different values when we do a revalidation. We expect the validity itself to remain consistent in large part.



Why is Human Patterns tedious and frustrating to complete?

It is tedious, redundant, and frustrating. The redundancy and length are intended to reduce the risk that a given person might adjust their responses to their beliefs about what a third party or assessing organization might “prefer.” Remembering responses is quite difficult after the first 100 or so items. The level of frustration, and the use of uncommon words, is intended to induce a sense of uncertainty and create a context where the person completing the instrument operates more instinctively – thus giving us a better map of their consistency in proactive and reactive situations. We do not have comparative data to “prove” that the tedious and frustrating nature of the items has the intended impact. There is evidence that redundant presentation of one term, when that term is posted into a different item comprised of different terms that load onto different factors, will surface an implicit hierarchy for the set of terms. We posit that this hierarchy reflects interests and preferences, and it can be thought of as a projection of the individual test subject’s patterns of interests and preferences.



What were the development assumptions and parameters for Human Patterns?

We developed the instrument using proprietary software packages and algorithms that had not been available until the mid-1980s. Our fundamental assumption is that individual differences, reflected in a test subject’s selection of descriptors that sufficiently correlate with a factor, will surface a hierarchy of factors if the test subject is presented with sufficient opportunities for selection. A second assumption is that factors themselves can be grouped, clustered, or aligned with factors used by and within constructs that have their own independent theoretical bases.
1. Selecting models: We looked for current and common models that had been reasonably well tested independently (70% reliability, sufficient for the Mental Measurements Yearbook) and explored which of these models had an implicit hierarchy, such that there would be a “lead” factor and other factors would typically be less strongly endorsed or selected, but well distributed. A scan of the psychometric marketplace of the early eighties telegraphs the instruments that were common at the time. We evaluated the theoretical models that underlay the instruments and attempted to get at the “root” question the theoretician was trying to address through his model. For example, occupational preferences are self-evident, but we had to bypass Geier and get into the weeds with Marsden to conclude that he was exploring what we call “Motivation.”
2. Discovering terms: We then looked for hundreds of terms or descriptors that lent themselves to iterative groupings to take advantage of the implicit hierarchies within each of the models. This was primarily done through dictionary and thesaurus searches.
3. Discovering and collating descriptors: We then surveyed business owners and employee selection specialists for lists of features and descriptors of employees, functions, and roles, and asked them to group these. We also had them rate the descriptors we had turned up. In addition to these groupings, we ran analyses of the lists using a product from Angoss Software called KnowledgeSeeker, which used Ken Ono’s “market basket” measurement tools to estimate the likelihood that one descriptor would be associated with each of the others. A review of the product and its features, compared with SAS’s product, is available here: http://predictive-analytics.softwareinsider.com/compare/1-3/SAS-Enterprise-Miner-vs-KnowledgeSEEKER. Our version of KnowledgeSeeker was much more barebones in the late eighties, but the dimensionality and model-creation capacity was largely in place. Another tool we used was called TextSmart, a standalone product that was later absorbed into SPSS: http://www2.senama.cl/comunicaciones2/spss/spss_11.5/SPSS%20Products%20and%20Services/SPSS%20Product%20Catalog/Spec%20Sheets/TextSmart.pdf
TextSmart ran clustering algorithms on text files to produce a spatial map of correlated terms. This enabled us to choose degrees of overlap and independence of individual terms or descriptors within items. (For those of you who want to develop your own items, there is a simple little online tool called Visual Thesaurus that can give you baseline ideas.) A simplified sketch of the co-occurrence idea appears after this list.
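
Here is a minimal, hypothetical sketch of the general co-occurrence question described in step 3. The descriptor groupings are invented, and this is not the KnowledgeSeeker or TextSmart algorithm itself, only a simple pairwise count of how often descriptors appear in the same grouping.

```python
from collections import Counter
from itertools import combinations

# Each inner list is one rater's grouping of descriptors (made-up data).
groupings = [
    ["persuasive", "outgoing", "energetic"],
    ["persuasive", "outgoing", "decisive"],
    ["methodical", "precise", "decisive"],
    ["methodical", "precise", "persuasive"],
]

# Count how often each pair of descriptors appears together in a grouping.
pair_counts = Counter()
for group in groupings:
    for a, b in combinations(sorted(set(group)), 2):
        pair_counts[(a, b)] += 1

# Pairs that co-occur often are candidates for loading onto the same factor;
# pairs that rarely co-occur help keep items independent of one another.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```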



How is Certification arranged, priced, and delivered?

Certifications are provided by a select group of certified administrators who have at least 10 years of experience using Human Patterns. The fee for certification depends upon the method and process (tutorial or group) and whether the certification is for an individual professional or an organization (corporate or academic). Please contact us at info@humanpatterns.com to schedule and structure your training parameters. You can also email our Training Coordinator directly at EBroer@humandimension.org or call 303-887-4598. Following certification, you will be issued an ID and will be able to forward links to the people you select to complete the inventory.



Testimonials

A. Laffoley

Academic Program Director, Raleigh-Durham, NC

I used the Human Patterns Inventory in the development program for high potential senior leaders and recommend it as an effective tool as part of any comprehensive employee development program. In my opinion an important differentiator of this tool is the light it shines on the switches that may occur in our behavior when we are in reaction mode (e.g. in a stressful situation). Bringing awareness to where this occurs is invaluable to an individual’s personal development.

K. Jobe

Executive Recruiter, Charlotte, North Carolina Area

I have used the Human Patterns as an internal recruiter as well as during client “coaching” engagements. It is the most comprehensive psychometric test that I have ever worked with. I highly recommend this tool to any organization that is committed to talent optimization.

F. Christian

Managing Director, Chicago, IL

Human Patterns is a rare exception among assessment tools. Most are simplistic and slipshod, more mirrors of their creators' craniums than windows into one's own. Human Patterns has a richness that allows me to start meaningful conversations with the hidden high potentials I work with, who after years of severe underemployment have lost sight of themselves and their unique ways of working with the world. I'm so enthusiastic I now require it for new clients to shortcut to solutions.