Recent Articles on our Methodology

Ipsative Measurement


Forced-Choice Assessment of Work-Related Maladaptive Personality Traits: Preliminary Evidence From an Application of Thurstonian Item Response Modeling.
Guenole, N., Brown, A., & Cooper, A.

Abstract
This article describes an investigation of whether Thurstonian item response modeling is a viable method for assessment of maladaptive traits. Forced-choice responses from 420 working adults to a broad-range personality inventory assessing six maladaptive traits were considered. The Thurstonian item response model's fit to the forced-choice data was adequate, while the fit of a counterpart item response model to responses to the same items but arranged in a single-stimulus design was poor. Monotrait heteromethod correlations indicated corresponding traits in the two formats overlapped substantially, although they did not measure equivalent constructs. A better goodness of fit and higher factor loadings for the Thurstonian item response model, coupled with a clearer conceptual alignment to the theoretical trait definitions, suggested that the single-stimulus item responses were influenced by biases that the independent clusters measurement model did not account for. Researchers may wish to consider forced-choice designs and appropriate item response modeling techniques such as Thurstonian item response modeling for personality questionnaire applications in industrial psychology, especially when assessing maladaptive traits. We recommend further investigation of this approach in actual selection situations and with different assessment instruments.
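
To make the data side of this concrete: before a Thurstonian item response model is fitted, a respondent's choices within each forced-choice block are typically recoded into binary outcomes, one per item pair. The sketch below (plain Python, with hypothetical item labels and a fully ranked block) illustrates that recoding step in general form; it is not the instrument or code used in the study.

```python
from itertools import combinations

def block_to_pairwise(ranking):
    """Recode one forced-choice block into binary pairwise outcomes.

    `ranking` maps item labels to ranks (1 = most preferred). A block of
    n items yields n * (n - 1) / 2 binary outcomes: for each pair (i, k),
    outcome 1 means item i was preferred to item k.
    """
    return {
        (i, k): 1 if ranking[i] < ranking[k] else 0
        for i, k in combinations(sorted(ranking), 2)
    }

# Hypothetical block of four items, fully ranked by one respondent.
ranking = {"item_A": 2, "item_B": 1, "item_C": 4, "item_D": 3}
print(block_to_pairwise(ranking))
# {('item_A', 'item_B'): 0, ('item_A', 'item_C'): 1, ...}
```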

Item Response Models for Forced-Choice Questionnaires: A Common Framework.
Brown, A.

Abstract
In forced-choice questionnaires, respondents have to make choices between two or more items presented at the same time. Several IRT models have been developed to link respondent choices to underlying psychological attributes, including the recent MUPP (Stark et al. in Appl Psychol Meas 29:184-203, 2005) and Thurstonian IRT (Brown and Maydeu-Olivares in Educ Psychol Meas 71:460-502, 2011) models. In the present article, a common framework is proposed that describes forced-choice models along three axes: (1) the forced-choice format used; (2) the measurement model for the relationships between items and the psychological attributes they measure; and (3) the decision model for choice behavior. Using the framework, fundamental properties of forced-choice measurement of individual differences are considered. It is shown that the scale origin for the attributes is generally identified in questionnaires using either unidimensional or multidimensional comparisons. Both dominance and ideal point models can be used to provide accurate forced-choice measurement, and the rules governing accurate person score estimation with these models are remarkably similar.
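
As a rough illustration of how the measurement and decision axes combine: in a Thurstonian dominance model, each item has a latent utility t = mu + lambda * eta + error, and item i is chosen over item k when t_i - t_k > 0, which makes the choice probability a normal ogive in the utility difference. The sketch below (Python, hypothetical parameter values) computes that probability for two items loading on different attributes; it follows the general form of the Thurstonian IRT model, though exact parameterizations vary across papers.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def choice_probability(eta_a, eta_b, mu_i, mu_k, lam_i, lam_k, psi2_i, psi2_k):
    """P(item i preferred to item k) under a Thurstonian dominance model.

    Item utilities are t_i = mu_i + lam_i * eta_a + e_i and
    t_k = mu_k + lam_k * eta_b + e_k, with independent normal errors of
    variance psi2_i and psi2_k. The choice i > k occurs when t_i - t_k > 0.
    """
    diff_mean = (mu_i + lam_i * eta_a) - (mu_k + lam_k * eta_b)
    diff_sd = sqrt(psi2_i + psi2_k)
    return normal_cdf(diff_mean / diff_sd)

# Hypothetical values: equally attractive items, respondent higher on
# attribute a than on attribute b, so item i is the likely choice.
print(choice_probability(eta_a=1.0, eta_b=-0.5, mu_i=0.0, mu_k=0.0,
                         lam_i=0.8, lam_k=0.8, psi2_i=0.5, psi2_k=0.5))
# ~0.885
```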


How IRT Can Solve Problems of Ipsative Data in Forced-Choice Questionnaires.
Brown, A., & Maydeu-Olivares, A.

Abstract
In multidimensional forced-choice (MFC) questionnaires, items measuring different attributes are presented in blocks, and participants have to rank order the items within each block (fully or partially). Such comparative formats can reduce the impact of numerous response biases often affecting single-stimulus items (aka rating or Likert scales). However, if scored with traditional methodology, MFC instruments produce ipsative data, whereby all individuals have a common total test score. Ipsative scoring distorts individual profiles (it is impossible to achieve all high or all low scale scores), construct validity (covariances between scales must sum to zero), criterion-related validity (validity coefficients must sum to zero), and reliability estimates. We argue that these problems are caused by inadequate scoring of forced-choice items and advocate the use of item response theory (IRT) models based on an appropriate response process for comparative data, such as Thurstone's law of comparative judgment. We show that when Thurstonian IRT modeling is applied (Brown & Maydeu-Olivares, 2011), even existing forced-choice questionnaires with challenging features can be scored adequately and that the IRT-estimated scores are free from the problems of ipsative data.
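
The zero-sum property that drives these distortions is easy to verify numerically. In the simulated sketch below (Python with NumPy, made-up scores), every respondent's scale scores sum to the same constant, as traditional forced-choice scoring forces; each row of the resulting covariance matrix then sums to approximately zero, which is exactly the constraint on construct and criterion-related validity described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Traditional forced-choice scoring: every block awards a fixed number of
# points in total, so each respondent's scale scores sum to a constant
# (here, 30 points spread over 3 scales).
n_people, n_scales, total = 200, 3, 30
raw = rng.random((n_people, n_scales))
scores = raw / raw.sum(axis=1, keepdims=True) * total

print(scores.sum(axis=1)[:5])  # every total is 30.0: ipsative data
cov = np.cov(scores, rowvar=False)
print(cov.sum(axis=1))         # each row of the covariance matrix sums to ~0
print(cov.sum())               # so the covariances as a whole sum to ~0
```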

Testimonials

A. Laffoley

Academic Program Director, Raleigh-Durham, NC

I used the Human Patterns Inventory in the development program for high-potential senior leaders and recommend it as an effective tool for any comprehensive employee development program. In my opinion, an important differentiator of this tool is the light it shines on the switches that may occur in our behavior when we are in reaction mode (e.g., in a stressful situation). Bringing awareness to where this occurs is invaluable to an individual’s personal development.

K. Jobe

Executive Recruiter, Charlotte, North Carolina Area

I have used Human Patterns as an internal recruiter as well as during client “coaching” engagements. It is the most comprehensive psychometric test that I have ever worked with. I highly recommend this tool to any organization that is committed to talent optimization.

F. Christian

Managing Director, Chicago, IL

Human Patterns is a rare exception among assessment tools. Most are simplistic and slipshod, more mirrors of their creators' craniums than windows into one's own. Human Patterns has a richness that allows me to start meaningful conversations with the hidden high potentials I work with, who after years of severe underemployment have lost sight of themselves and their unique ways of working with the world. I'm so enthusiastic that I now require it for new clients as a shortcut to solutions.