An Abbreviated Version of TRI 2.0


Charles L. Colby, Chief Methodologist, Rockbridge Associates, Inc.
A. Parasuraman, Professor, University of Miami

  In 2000, Parasuraman published the first Technology Readiness Index (TRI) scale, a tool for measuring “technology readiness,” the propensity to adopt and embrace cutting-edge technology at home and in the workplace.  While the TRI has been widely used in academic and commercial contexts, a common concern cited by users was that the instrument was long, consisting of 36 items.  In 2015, Parasuraman and Colby published a more concise scale consisting of 16 items.  The streamlined scale, called TRI 2.0 (and also referred to as TechQual™), includes measures of overall technology readiness (TR) as well as its individual components (optimism, innovativeness, discomfort, and insecurity).  Based on TRI 2.0 scores on the index’s four individual components, Parasuraman and Colby also derived a segmentation scheme that categorizes people into five technology adoption segments (Explorers, Pioneers, Skeptics, Hesitators, and Avoiders).  TRI 2.0 is a copyrighted instrument and requires written permission and a license from the authors to use.

Though TRI 2.0 has fewer than half the number of items of the original TRI, there is interest in an even more concise version for researchers who seek a reliable measure of overall technology readiness and/or a way to recreate the technology segments, but who have no interest in measuring the individual components of technology readiness.  A 10-item version of the index was developed from the original 36-item TRI and used extensively over the decade and a half before TRI 2.0 was introduced.  This paper presents an abbreviated 10-item index based on TRI 2.0, including evidence of its reliability and validity.

Methodology.  The authors believe that a concise version of the TRI 2.0 should (a) include items from each of the four scale components, (b) be balanced between “motivators” of TR (optimism and innovativeness) and its “inhibitors” (discomfort and insecurity), (c) meet thresholds for reliability, (d) demonstrate validity by correlating with behavior and behavioral intent regarding cutting-edge technology, and (e) identify membership in the TR segments with a high level of accuracy.  The concise index was developed from the same data set used to develop TRI 2.0, the 2012 National Technology Readiness Survey in the U.S.  The first step consisted of identifying attributes that could be dropped from the initial list of 16 items with the least reduction in overall reliability, measured by Cronbach’s alpha.  Items were iteratively dropped until 8 remained, with the requirement that at least 2 items from each of the four TR sub-scales be present.  To improve the reliability of the scale, 1 “motivator” item (from the Innovativeness sub-scale) and 1 “inhibitor” item (from the Insecurity sub-scale) were added back to the list of 8; the rule for this selection was to identify the single motivator and the single inhibitor that produced the greatest lift in reliability.  A total of 10 items was judged to provide the right balance between reliability and conciseness as a research tool.  As an additional check, a discriminant analysis was used to measure the ability of the 10 items to predict the five technology segments described in the TRI 2.0 paper.  As discussed below, the predictive accuracy of the 10 items is high and close to that of using all 16 items.
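The first step of the procedure above (iteratively dropping items with the least reduction in Cronbach’s alpha, while keeping at least 2 items per sub-scale) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors’ code: the response matrix, item-to-sub-scale assignment, and random seed are all hypothetical.

```python
# Sketch of the alpha-guided item-reduction step on hypothetical data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Hypothetical 16-item response matrix (500 respondents, 5-point scale)
latent = rng.normal(size=(500, 1))
data = np.clip(np.round(3 + latent + rng.normal(scale=1.0, size=(500, 16))), 1, 5)
# Hypothetical assignment of the 16 items to the four sub-scales
subscale = ["OPT"] * 4 + ["INN"] * 4 + ["DIS"] * 4 + ["INS"] * 4

keep = list(range(16))
while len(keep) > 8:
    best = None
    for i in keep:
        trial = [j for j in keep if j != i]
        # Enforce at least 2 items per sub-scale
        if any(sum(subscale[j] == s for j in trial) < 2
               for s in ("OPT", "INN", "DIS", "INS")):
            continue
        a = cronbach_alpha(data[:, trial])
        if best is None or a > best[0]:
            best = (a, i)  # drop the item whose removal hurts alpha least
    keep.remove(best[1])

print(len(keep), round(cronbach_alpha(data[:, keep]), 3))
```

The second step described in the text (adding back the single motivator and single inhibitor that most lift reliability) would reuse the same `cronbach_alpha` function over the dropped candidates.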

The resulting index included the following 10 items (actual wordings of these items appear in the second article in the references section at the end of this white paper):

Motivator Statements:

  • Optimism: OPT2, OPT4
  • Innovativeness: INN1, INN2, INN4

Inhibitor Statements:

  • Discomfort: DIS2, DIS3
  • Insecurity: INS1, INS2, INS3

Reliability of the 10-item index.  Reliability is supported in the most recent wave of data on the TRI 2.0, the 2015 U.S. National Technology Readiness Survey, which is based on an online panel of U.S. adults fielded in October 2015.  The Cronbach’s alpha for the 10 scale items is .808, above the threshold of .7 recommended for a reliable measure of a construct.  The alpha for the 5 motivator items alone is .828, and the alpha for the 4 inhibitor items is .724.  Thus, the 10-item index proves to be a reliable measure of TR and can also potentially be used to measure “motivators” and “inhibitors” separately.

Validity of the 10-item index.  To demonstrate validity, the 10-item index was correlated with measures of technology behavior and acceptance, which TR, by definition, should be able to predict and explain.  The 16-item and 10-item indexes were correlated with two such measures: (a) the number of behaviors a respondent engaged in online in the past 12 months (ranging from 0 to 25), and (b) the mean perceived desirability of 6 types of service robotics applications (on a scale from 1, very undesirable, to 7, very desirable).  The first measure included items such as e-commerce, online banking, online travel booking, and downloading content, all areas associated with more “techno-ready” consumers.  The robotics applications consisted of items that are not yet commercially available, and therefore cutting-edge technologies, including delivery drones, driverless vehicles and transports, and robots that would perform household work and serve in restaurants.  The relationships were tested using a linear regression model.  As summarized below, the 10-item index is a significant predictor of both dependent variables and performs just as well as the 16-item index.
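The validity test above amounts to a simple linear regression of a behavioral measure on the index score. The sketch below shows the shape of that check on synthetic stand-in data; the variable names, coefficients, and sample are hypothetical and are not the published survey results.

```python
# Minimal sketch of the regression-based validity check on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
tri_score = rng.normal(3.5, 0.6, size=n)        # hypothetical mean 10-item TR score
# Hypothetical count of online behaviors (0-25), driven partly by TR
online_behaviors = np.clip(
    np.round(5 * tri_score - 8 + rng.normal(0, 3, n)), 0, 25)

# OLS fit: behaviors = b0 + b1 * tri_score
X = np.column_stack([np.ones(n), tri_score])
b0, b1 = np.linalg.lstsq(X, online_behaviors, rcond=None)[0]

resid = online_behaviors - X @ np.array([b0, b1])
r2 = 1 - resid.var() / online_behaviors.var()
print(f"slope={b1:.2f}, R^2={r2:.2f}")
```

The same regression would be run twice in practice, once with the 16-item score and once with the 10-item score, to compare their explanatory power.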

Ability to Identify TR Segments.  The baseline study in 2012 used to develop TRI 2.0 included 5 technology segments derived with Latent Class Analysis (LCA).  The segments are widely used by academic and commercial researchers because they identify individuals with common beliefs about technology that do not necessarily fit a simple spectrum from low to high technology readiness.  For example, one segment, “Pioneers,” is defined as having high levels of both motivators and inhibitors to using technology (a “love-hate relationship” with technology).

Segments were created using Fisher classification coefficients derived from a discriminant analysis in the baseline study.  The resulting classification scheme with the original 16 TRI 2.0 items is capable of predicting the original LCA segments with 93.9% accuracy.  Accuracy varies by segment, but the lowest of any of the five is 88.1%.  Generally, this is a high level of accuracy.  When using the 10 items in the abbreviated index, it is possible to predict segment membership with 80.3% accuracy.  The segment with the lowest level of accuracy is 72.2%.  In real-world applications of segmentation schemes, this level of accuracy for an abbreviated question list is considered good, and segmentations with this level of accuracy are capable of predicting differences in behavior and intentions.
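Classification with Fisher coefficients works by giving each segment a linear score (an intercept plus a weight per item) and assigning the respondent to the segment with the highest score. The sketch below illustrates that mechanic only; the coefficients are randomly generated placeholders, since the licensed TRI 2.0 coefficients are not reproduced here.

```python
# Illustrative segment assignment via Fisher-style linear classification
# functions, with made-up coefficients.
import numpy as np

segments = ["Explorer", "Pioneer", "Skeptic", "Hesitator", "Avoider"]
# Hypothetical coefficients: one intercept plus 10 item weights per segment
rng = np.random.default_rng(2)
intercepts = rng.normal(size=5)
weights = rng.normal(size=(5, 10))

def classify(responses: np.ndarray) -> str:
    """responses: length-10 vector of item scores (1-5 Likert scale)."""
    scores = intercepts + weights @ responses   # one linear score per segment
    return segments[int(np.argmax(scores))]     # highest score wins

x = np.array([4, 5, 3, 4, 5, 2, 1, 2, 3, 2], dtype=float)
print(classify(x))
```

Prediction accuracy is then measured by comparing these assignments against the original LCA segment memberships.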

Another test of the predictive power of a segmentation is to examine the percentage of variance (eta-squared) that can be explained by the segmentation categorical variable.  The original segmentation scheme developed by LCA was able to explain 76% of the variance in the TRI 2.0 measure.  A segmentation created with a predictive algorithm using the 10 items explains 72% of the variance in TRI 2.0, which shows that the explanatory power of the segmentation does not drop markedly when going from 16 to 10 items.
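Eta-squared for a categorical predictor is the between-group sum of squares divided by the total sum of squares. A minimal sketch of that computation, on synthetic data rather than the survey sample:

```python
# Eta-squared: share of variance in a continuous score explained by a
# categorical segment variable. Data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
segment = rng.integers(0, 5, size=500)               # 5 segments, coded 0-4
tri = 2.5 + 0.4 * segment + rng.normal(0, 0.3, 500)  # hypothetical TR scores

grand_mean = tri.mean()
ss_total = ((tri - grand_mean) ** 2).sum()
ss_between = sum(
    tri[segment == g].size * (tri[segment == g].mean() - grand_mean) ** 2
    for g in range(5))
eta_sq = ss_between / ss_total
print(f"eta-squared = {eta_sq:.2f}")
```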

Conclusions.  Researchers can use the 10-item abbreviated TR Index when they are interested in measuring overall TR, are not interested in measuring its individual components, and wish to leave room for other questions on their questionnaires.  The 10-item index is also capable of predicting TR segment membership with a high degree of accuracy.


References

Parasuraman, A. (2000), “Technology Readiness Index (TRI): A Multiple-item Scale to Measure Readiness to Embrace New Technologies,” Journal of Service Research, 2 (May), 307-320.

Parasuraman, A. and Charles L. Colby (2015), “An Updated and Streamlined Technology Readiness Index: TRI 2.0,” Journal of Service Research, 1 (February), 59-74.