The goal of our research was to construct a survey that could reliably measure the Straight Talk® Communication Styles that were developed by Eric Douglas.
The research focused on developing a tool to measure the Director, Expresser, Harmonizer, and Thinker communication styles. We considered both what Eric Douglas' theory told us about the different styles and the responses of participants in the study. It was important to consider both the theory and the data because using one without the other could increase the probability that chance alone would affect the final choice of statements. We knew that we wanted to end up with a final communication styles survey – or inventory – of between 30 and 40 statements. With this goal in mind, we developed 98 statements that we thought could diagnose the four different styles. Roughly a quarter of these statements were written to measure the Director style, a quarter the Expresser style, and so forth. After randomizing these 98 statements and compiling them into a survey, we gave it to 237 individuals to complete on their own.
After we collected the data, we summed the individual response items for each style. Next, we correlated each of the 98 items with all four style totals to see which statements were most strongly related to each style (Jackson, 1970). For each of the four styles, we then chose the 12 items that correlated most strongly with that style and whose correlation with it differed by at least .05 from their correlations with the other three styles. We also discarded any item that correlated highly with a communication style it was not intended to measure. Thus, our second-generation inventory was winnowed down to 48 statements.
The next step was to subject this refined inventory to factor analysis, which allowed us to measure the underlying traits responsible for the ways people respond to the statements on the inventory (Tabachnick & Fidell, 1996). Factor analysis also helps confirm that each statement measures its intended underlying trait: that it is maximally related to a single subscale and minimally related to the other subscales.
A scree plot of the results showed us that four factors (or communication styles) had eigenvalues greater than one (2.88 to 13.26). This confirmed that the data are best explained by four communication styles, rather than, say, three or five.
Based on this preliminary factor analysis, we narrowed our survey to eight statements for each communication style. We conducted a final factor analysis, which showed that each of these eight statements “loaded” on its relevant style at .3 or above and loaded below .3 on the other three styles. We were very pleased with these results because they confirmed that our survey instrument met established professional standards.
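The .3 loading criterion can be expressed as a simple check. The loading matrix below is hypothetical, invented for illustration; it is not the study's actual factor solution.

```python
# The four styles, in the column order of the loading matrix.
STYLES = ["Director", "Expresser", "Harmonizer", "Thinker"]

# Hypothetical factor loadings: one row per item, one column per style.
loadings = {
    "D1": [0.62, 0.12, 0.05, 0.21],   # a Director item
    "E1": [0.18, 0.55, 0.22, 0.10],   # an Expresser item
    "H1": [0.04, 0.28, 0.47, 0.25],   # a Harmonizer item
}

def meets_criterion(item, intended, cutoff=0.3):
    """True if the item loads >= cutoff on its intended style
    and below cutoff on the other three styles."""
    idx = STYLES.index(intended)
    own = loadings[item][idx]
    others = [v for i, v in enumerate(loadings[item]) if i != idx]
    return own >= cutoff and all(v < cutoff for v in others)

print(meets_criterion("D1", "Director"))
print(meets_criterion("E1", "Director"))
```

An item passing this check for its intended style, and failing it for every other style, is what the "ideal factor analysis pattern" amounts to.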
Note: Letters after items indicate the communication style for which they were created: E=Expresser, T=Thinker, H=Harmonizer, D=Director.
We then conducted correlation analyses of the four subscales and the Social Desirability Scale (SDS; Crowne & Marlowe, 1964). We focused on the Social Desirability Scale because we were interested to see how the four subscales related to a scale that measured the tendency to present oneself in a favorable or socially desirable light. We found that the SDS was meaningfully related to the Harmonizer (r = .60) and Thinker (r = .37) styles (see Table 2). These relationships were predicted by the theory. Furthermore, we found strong relationships between the Expresser and Director subscales (r = .44) and the Thinker and Harmonizer subscales (r = .34). These relationships are also predicted by Douglas’ theory.
Note: * denotes significance at the .001 level. N = 237
The purpose of the next set of analyses was to investigate the internal consistency of the Douglas Communication Styles Inventory (DCSI). This would confirm that each item within the same subscale measures the same underlying communication style. Alpha reliability coefficients of .70 and above are considered acceptable. Our results revealed that the DCSI subscales showed good internal consistency (see Table 3).
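Cronbach's alpha, the internal-consistency statistic referred to above, can be computed directly from its definition: alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). The subscale data below are hypothetical, not the DCSI's actual responses.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a subscale.
    item_scores: one inner list of responses per item,
    all over the same participants."""
    k = len(item_scores)
    # Each participant's total score across the k items.
    totals = [sum(vals) for vals in zip(*item_scores)]
    item_var = sum(pvariance(col) for col in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical subscale: three items answered by five participants.
subscale = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 3, 4],
]
print(round(cronbach_alpha(subscale), 2))  # 0.89, above the .70 benchmark
```

When the items move together across participants, the variance of the totals dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0.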
We next investigated the stability of the Douglas Communication Style Inventory subscales. We gave the DCSI to 64 college students to complete. Two weeks later, our volunteer participants completed the DCSI once again. We then correlated each DCSI subscale at Time 1 with the same subscale at Time 2 (see Table 4).
The Director, Expresser, Harmonizer, and Thinker subscales showed excellent test-retest reliability (acceptable coefficients are equal to or greater than .70). Overall, the DCSI had an average test-retest reliability of .83, which indicates that this scale is stable across time.
We also conducted t-tests for correlated groups to see whether average subscale scores changed significantly from Time 1 to Time 2 (see Table 5).
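A t-test for correlated groups works on the within-person differences: t = d̄ / (s_d / √n), with n − 1 degrees of freedom. The Time 1 and Time 2 scores below are hypothetical, chosen only to show the computation.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(time1, time2):
    """t statistic and degrees of freedom for correlated (paired) samples."""
    diffs = [a - b for a, b in zip(time1, time2)]
    n = len(diffs)
    # Sample standard deviation of the differences.
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical Time 1 / Time 2 subscale scores for six participants.
t1 = [24, 30, 22, 27, 25, 29]
t2 = [25, 29, 22, 28, 24, 30]
t, df = paired_t(t1, t2)
print(round(t, 2), df)
```

A small |t| relative to the critical value at the chosen significance level indicates no reliable change between the two administrations, which is the outcome reported for the DCSI subscales.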
Note: Significance levels equal to or less than .05 are considered meaningful. T1 = Time 1; T2 = Time 2; SD = standard deviation; df = degrees of freedom.
These analyses showed that average scores did not change from Time 1 to Time 2, providing further confirmation that the subscales are stable across time.
Next, we correlated each of the DCSI subscales to determine their inter-relationships (see Table 6).
Note: * denotes significance at the .01 level. N = 64.
Again, we found the expected high relationship between scores on the Expresser and Director subscales (r = .62), but not the Thinker and Harmonizer subscales (r = .22). We also found significant relationships between the Thinker and Director subscales (.38), the Harmonizer and Expresser subscales (.42), and the Thinker and the Expresser subscales (.46). We believe, however, that the correlations from the first phase more accurately reflect the relationships among variables because they are based on a larger number of people (237 vs. 64), and therefore are more reliable.
The purpose of the final set of analyses was to confirm the internal consistency of the DCSI subscales. Our results revealed that the DCSI subscales were internally consistent (see Table 7) except for the Harmonizer subscale. As before, the alpha reliability coefficients from Phase 1 were given more weight because they were based on a larger sample (see Table 3).
In summary, we were very pleased with the results of the first phase of the research. The Douglas Communication Styles Inventory (DCSI) showed an ideal factor analysis pattern and good internal consistency. Moreover, the expected relationships among the DCSI subscales and the Social Desirability Scale were found. We were also pleased with the reliability of the survey instrument. The DCSI showed good average test-retest reliability (.83) and good internal consistency. Together, these results confirmed that the DCSI was a reliable instrument for measuring the four discrete styles of communication.
© 1997 by Lisa Bohon, Ph.D. All rights reserved.
Reproduced by permission of the author.