Using alternative methodologies for evaluating patient medication leaflets

https://doi.org/10.1016/S0738-3991(01)00171-9

Abstract

A variety of direct and indirect methods have been used to evaluate written medication information; however, no published research has validated assessment tools or presented direct consumer assessment of patient information leaflets (PILs) provided in US community pharmacies (CP). We report on two new instruments: the medication information design assessment scale (MIDAS), an indirect measure of design quality administered by the investigators, and the consumer information rating form (CIRF), a direct measure of comprehensibility, utility, and overall design quality applied by a consumer panel. These were used to assess two types of PILs: 36 CP-PILs obtained from community pharmacies and 3 Model-PILs incorporating recommended design characteristics. The validity of the MIDAS was demonstrated in two ways. First, as predicted, the Model-PILs were rated more positively by consumers. Second, we found a significant positive correlation between the number of design criteria incorporated in a CP-PIL (as measured by the MIDAS score) and the consumers’ rating of design quality (CIRF). In conclusion, we confirmed the importance of design characteristics in the production of written medication information and developed and validated two easy-to-use tools for assessing written medication information.

Introduction

High-quality written medication information is now considered a key component of patient education in the US and other countries. In the US there is disagreement about whether there should be federal regulations requiring patient package inserts or information leaflets for all prescription drugs [1]. However, three issues clearly must be addressed to ensure optimal provision of written information: the information must be readily available or disseminated to patients; the content must be comprehensive, accurate, and specific enough to be useful to patients; and the information must be designed or formatted in a way that is easily read and understood by patients [2], [3].

On the issue of availability, US surveys show a clear upward trend in the proportion of patients receiving computer-generated or pre-printed patient information leaflets (PILs) from pharmacists over the past two decades. This proportion increased from 15% in 1982 to 52% in 1994 [1] and to 87% in the most recent study conducted in 1999 [4]. While this trend is encouraging, there are continuing concerns about the content and design of existing PILs. In 1996, the US Department of Health and Human Services requested a steering committee of professionals and lay persons to draft an “Action Plan” for evaluating and improving the usefulness of written medication information (hereafter referred to as the 1996 Action Plan). In the 1996 Action Plan, the steering committee recommended a number of criteria for evaluating written medication information. On the issue of content, it recommended that information be: scientifically accurate and timely, unbiased, and sufficiently comprehensive and specific to be useful to patients. On the issue of leaflet design, it recommended that information meet recognized guidelines for enhancing the comprehensibility and legibility of printed information for the elderly and general public (i.e. avoid small print and certain fonts; provide adequate space between lines, margins, paragraphs; use good ink/paper contrast, upper/lower case letters, true headings and bullet points).

Experts may be in the best position to judge the scientific accuracy, timeliness, and comprehensiveness of written medication information [4], [5]. However, consumers can and should be consulted about how easy or hard it is for them to read and understand the information (perceived comprehensibility), how personally relevant or useful the information is from their perspective (perceived utility), and their views on other attributes such as leaflet organization, attractiveness, print size, and spacing (perceived design quality).

A variety of direct and indirect methods have been used to evaluate written medication information from the consumer’s perspective. Indirect methods include readability tests, which compute scores from formulae based on word and sentence length to predict the reading comprehension level a person must have to comprehend a piece of written information [5]. Because this method is relatively simple to apply and yields an objective score, it is one of the most widely used methods for assessing patient information leaflets [6], [7], [8], [9], [10], [11], [12]. However, it has been argued that readability assessments may be of limited use in evaluating written medication information because medical terminology may artificially inflate readability scores [6] and because they do not take into account all of the variables that can influence the difficulty of a particular piece of text [12]. Others have warned about wide variation in readability estimates for the same text, suggesting validity or reliability problems when applying these methods to medication information [10].
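The study does not tie this critique to a particular formula, but as a concrete illustration of how word- and sentence-length formulae work, the sketch below computes the widely used Flesch-Kincaid grade level with a naive vowel-group syllable heuristic. It is a minimal example, not the method used in the papers cited above.

```python
# Minimal illustration of a word/sentence-length readability formula
# (Flesch-Kincaid grade level); the syllable heuristic is deliberately crude.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

leaflet_excerpt = "Take one tablet by mouth twice daily. Do not drink alcohol while taking this medicine."
print(f"Estimated US grade level: {flesch_kincaid_grade(leaflet_excerpt):.1f}")
```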

A second indirect method is the design assessment tool, which scores leaflets on design characteristics identified by researchers as enhancing comprehension in specific populations and the general public [6], [7]. Basara and Juergens formulated a “user-friendliness” index based on subjective characteristics such as print size, graphics, color printing, amount of white space, and paper quality [6]. Their evaluation of 63 patient leaflets revealed that many had limited white space and very small print size, features which might limit their usefulness for elderly and sight-impaired consumers. Kirkpatrick and Mohler used the eight-item readability assessment instrument (RAIN), which is based on characteristics such as global and local coherence, unity, audience appropriateness, adjunct questions, writing style, illustrations, and typography [7]. Of seven leaflets evaluated, only one met 80% of the recommendations. A more objective scoring system, the Baker able leaflet design (BALD) score, considers 16 characteristics. In applying this system to consumer product information (CPI) in Australia, Baker observed that no CPI document scored more than 22 out of 32 points and that the scale discriminated between the worst and best examples [8].
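None of the actual items or weights from the “user-friendliness”, RAIN, or BALD instruments are reproduced here, but the sketch below shows the general mechanics of such checklist tools: a leaflet earns points for each design criterion it satisfies, and the total is compared with the maximum possible score. The criterion names and point values are hypothetical.

```python
# Hypothetical sketch of a checklist-style design assessment; the criteria
# and point values below are illustrative, not the actual BALD or MIDAS items.
DESIGN_CRITERIA = {
    "print_size_at_least_10pt": 2,
    "adequate_line_spacing": 2,
    "uses_bullet_points": 2,
    "upper_and_lower_case_text": 2,
    "true_headings": 2,
    "good_ink_paper_contrast": 2,
}

def design_score(leaflet_features: dict) -> int:
    # Sum the points for every criterion the leaflet satisfies.
    return sum(points
               for criterion, points in DESIGN_CRITERIA.items()
               if leaflet_features.get(criterion, False))

example_leaflet = {
    "print_size_at_least_10pt": False,
    "adequate_line_spacing": False,
    "uses_bullet_points": True,
    "upper_and_lower_case_text": True,
    "true_headings": False,
    "good_ink_paper_contrast": True,
}
print(design_score(example_leaflet), "of", sum(DESIGN_CRITERIA.values()), "possible points")
```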

One obvious advantage of the “user-friendliness”, BALD, and RAIN tools is that they take into account a larger number of variables that can influence the comprehensibility and usefulness of a PIL. However, researchers have not established the validity of these tools; for example, it is not known whether patient information leaflets that score higher on such tools are actually viewed more favorably by consumers.

Researchers have used a variety of direct methods for studying consumers’ evaluation of written prescription information, including focus groups, individual interviews, and self-administered questionnaires [10], [14], [15], [16], [17], [18]. These methods have provided insights into consumer perceptions, beliefs, comprehension, recall, and behavior. However, they also have limitations. Focus groups are efficient as they obtain qualitative information from several respondents at once, but small sample sizes make it difficult to generalize and develop norms. Individual interviews allow researchers to gather quantitative data, but are very labor intensive and expensive. Self-administered questionnaires are less labor intensive and less expensive, but may be subject to response bias and variable exposure of respondents to test materials [5]. Finally, we found no published research comparing various direct and indirect methods of assessing written medication information.

The purpose of this pilot study was to develop two new instruments for evaluating patient information leaflets distributed in community pharmacies and to compare the results obtained with them. The first instrument, called the medication information design assessment scale (MIDAS), enables researchers to quantify the extent to which a given leaflet meets various design characteristics recommended in the 1996 Action Plan [1] and several attributes adapted from the Baker scale [8]. The second instrument, called the consumer information rating form (CIRF), provides a more direct method of quantifying consumers’ perceptions of the comprehensibility, utility, and overall design quality of a leaflet. The two instruments were used to rate 36 PILs obtained from Wisconsin pharmacies and 3 Model-PILs designed by the investigators to meet criteria recommended in the 1996 Action Plan. The specific research questions were:

  1. To what extent do PILs obtained from community pharmacies meet the design characteristics recommended in the 1996 Action Plan, as measured by MIDAS?

  2. How do consumers rate the comprehensibility, utility, and overall design quality of existing PILs versus Model-PILs, as measured by CIRF? We hypothesize that Model-PILs will receive more positive consumer ratings than existing PILs on each variable (comprehensibility, utility, overall design quality).

  3. What is the correlation between MIDAS and CIRF scores? We hypothesize a significant positive correlation, suggesting concurrent validity of the MIDAS scale (see the sketch after this list).
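As a concrete illustration of the concurrent-validity check in question 3, the sketch below correlates hypothetical per-leaflet MIDAS totals with hypothetical mean consumer CIRF design ratings using a Pearson correlation (a rank-based Spearman correlation would be a reasonable alternative for ordinal ratings). The numbers are invented for demonstration and are not the study's data.

```python
# Illustrative concurrent-validity check: correlate per-leaflet MIDAS design
# scores with mean consumer CIRF design ratings. Values are made up.
from scipy.stats import pearsonr

midas_scores = [9, 12, 14, 17, 18, 21, 23, 26]            # hypothetical MIDAS totals
cirf_design = [2.1, 2.4, 2.8, 3.0, 3.3, 3.6, 3.9, 4.2]    # hypothetical mean CIRF ratings

r, p_value = pearsonr(midas_scores, cirf_design)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```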

Section snippets

Consumer panel

Using a snowballing method, researchers recruited a convenience sample of 24 individuals to serve on a consumer evaluation panel in November 1999. The consumers were recruited in two midwestern states and included 20 females and 4 males with a mean age of 54.8 (S.D.±17.9) years. One-half of the consumers had some college education, and the remaining one-half had a high school education or less (mean=14.7 [S.D.±3.0] years of education). They included 22 whites, one African American, and one

Assessment of leaflets using MIDAS

The results for the MIDAS scores comparing pharmacy and model leaflets are presented in Table 1. These data clearly show that the specific language and format criteria recommended in the 1996 Action Plan have not been adopted universally by designers of the pharmacy leaflets. The majority of leaflets collected from community pharmacies did not meet criteria for line spacing, margins, line length, use of bullet points, bolding/box or summary to highlight important points, upper and lower case

Discussion

From the consumer’s perspective, the medication information sheets currently provided by pharmacists with new prescriptions are of variable comprehensibility, utility, and design quality. Deficits identified by the consumer panel raise important questions about whether a prospective medication user will fully read, understand, remember, keep, and benefit from the information. Particular problems in design centered on the common use of small type face, poor ink contrast, minimal white space and

Acknowledgements

The study was supported, in part, by the William S. Apple Research Fund awarded to Dr. Svarstad. The University of Sydney provided support to Dr. Krass for her sabbatical leave.

References (24)

  • L.R. Basara et al. Patient package insert readability and design. Am. Pharm. (1994)
  • L. Mallet et al. Readability evaluation of nine patient drug education sources. Am. Pharm. (1988)
  • Steering committee for the collaborative development of a long-range action plan for the provision of useful...
  • L.A. Morris et al. Counseling patients about prescribed medication: 12-year trends. Med. Care (1998)
  • M.L. Buck. Providing patients with written information. Ann. Pharmacother. (1998)
  • Svarstad BL, Bultman D. Evaluation of written prescription information provided in community pharmacies: an eight-state...
  • US Department of Health and Human Services. Pretesting in health communication: methods, examples, and resources for...
  • M.F. Kirkpatrick et al. Using the readability assessment instrument to evaluate patient medication leaflets. Drug Inf. J. (1999)
  • S.J. Baker. Who can read consumer product information? Aust. J. Hosp. Pharm. (1997)
  • S. Liguori. Quantitative assessment of the readability of PPIs. Drug Intell. Clin. Pharm. (1978)
  • L.A. Morris et al. Application of the readability concept to patient oriented drug information. Am. J. Hosp. Pharm. (1980)
  • D.C. Sparado et al. Assessing readability of patient information materials. Am. J. Hosp. Pharm. (1980)