Evaluating informatics applications—some alternative approaches: theory, social interactionism, and call for methodological pluralism
Introduction
Informatics researchers and practitioners have been developing systems to support clinical care and decision making for half a century. Information technologies are now a standard fixture in clinical settings. Nevertheless, there is a general consensus that a variety of systems for clinical decision support are little used, even though their potential benefits have been demonstrated repeatedly [1]. A review of evaluations of clinical decision support systems (CDSSs) indicates that these studies are designed in ways that preclude findings useful for understanding why CDSSs may or may not be effective. This omission could result in less informed decisions about these technologies and, by extension, other medical informatics applications. The review reports that most studies use an experimental or randomized controlled clinical trial (RCT) design to assess system performance or to focus on changes in clinical performance that could affect patient care. Few studies involve field tests of the CDSS, and almost none use a naturalistic design in clinical settings with real patients. Most focus on physicians rather than on other clinical users or patients. In addition, there is little theoretical discussion, although papers are permeated by a rationalist perspective that excludes contextual issues related to how and why systems are used. Further, CDSS evaluation studies appear to be insulated from evaluations of other medical informatics applications [1].
Although RCTs and other experimental designs are excellent for assessing system performance or specific changes in clinical practice behaviors, some authors have pointed out that these methods are not well suited for studying other research questions. Consequently, other approaches have been developed: simulation, usability testing, cognitive studies, record and playback techniques, network analysis, ethnography, and social interactionism among them. However, when these are used under controlled conditions, it can be difficult to investigate a variety of human, contextual, and cultural factors that affect system acceptance in actual use. Further, a focus on pre-specified outcome measures precludes examining processes of actual system use during daily activities [2].
This paper builds on that literature review [1] to argue for expanding evaluation approaches so as to increase understanding of the many influences on informatics applications development and deployment, and thereby improve these processes and their connection with patient care. The paper draws on a wide range of literature identified through a combination of manual and automated literature searches, as described in [1]. Although this literature comes from fields that often remain separated, here it is linked to illustrate the need for evaluation designs that go beyond RCTs and experiments, that focus on a variety of individuals and on organizational concerns rather than primarily on physicians’ behavior, and that are informed by evaluations in many areas of medical and health informatics. To further these aims, the paper discusses evaluations that take into account social, organizational, professional, and other contextual considerations. It also proposes: (1) a theoretical base for evaluation; and (2) methodological pluralism that incorporates multiple methods and addresses a variety of evaluation issues. Specifically, the paper suggests a social interactionist approach that draws on social science theory and meets both of these aims. This approach is illustrated by an example of a CDSS evaluation in psychiatry based on Kaplan's 4Cs, which focus on communication, control, care, and context [3], [4]. Although the discussion concerns CDSSs, a social interactionist approach also is useful when evaluating clinical practice guideline implementation or other medical informatics applications.
Limitations of RCT/experimental evaluation approaches
Although RCTs are the ‘gold standard’ of clinical research, this hierarchy of research design is not adhered to in other scientific disciplines and recently has been questioned in medicine [5], [6]. With respect to CDSSs, as early as 1987, Lundsgaarde reported that evaluations of expert systems often ignored context, such as culture, organization, and work life. He observed that this lack might help explain why so few systems have moved from laboratory to clinic [7]. A few years later, Forsythe
The need for an alternative
Thus, it appears as though the standards of RCTs and other experimental approaches, while excellent for measuring clinical behavior changes or system performance, may be resulting in an impoverished understanding of issues pertaining to system use. Whether or not an informatics system works depends upon social and cognitive processes as well as on technological ones [27]. Because RCT and experimental evaluation approaches do not include social, organizational, cultural, cognitive, or contextual
‘Fit’
The CDSS literature seems not to be informed by studies of other systems, such as hospital information systems (HISs), computer based patient records (CPRs), or ancillary care systems (e.g. laboratory, radiology) that, like studies of guideline compliance, could provide useful insights into issues that may be relevant to acceptance and use of CDSSs [1]. One area that has received some attention when considering acceptance or rejection of CDSSs, or, any clinical information system, may be
Social interactionist theory
Situated action or cognition, actor-network theory, sociotechnical theory, constructionist approaches, and Classic Diffusion Theory each provide theoretical bases that could inform informatics studies, as could ethnography, cognitive studies, and theories of artifacts as embedded models of psychological theories. These theoretical bases have in common that they are, in some way or another, social interactionist approaches [3], [17], [93], [94], and a number of the researchers cited have social
Example
Drawing on this social interactionist tradition, Kaplan's 4Cs evaluation framework is based both on a social interactionist theoretical perspective and on empirical evaluation research studies of a variety of medical and health informatics applications. It can be used for studying and influencing processes of design, implementation, adoption, and use in natural settings where a clinical application is introduced. The framework calls attention to four of the many interrelated areas when
Conclusion
The long history of difficulties in achieving clinical use of some kinds of clinical informatics applications, such as CDSSs and clinical guidelines, suggests a need for deeper understanding of reasons for this recurring pattern. When staff revolt or boycott new hospital information systems where they are required to do order entry [49], [61], [98]; when users sabotage systems [99]; when despite years of effort it has been notoriously difficult to get physicians to use clinical guidelines; when
Acknowledgements
I am grateful to Dr Richard Spivack of the US National Institute of Standards and Technology for his invaluable assistance in the automated literature search for literature reviewed here.
References
Evaluating medical expert systems, Soc. Sci. Med. (1987)
Patient care information systems and health care work: a sociotechnical approach, Int. J. Med. Inform. (1999)
The contextual nature of medical information, Int. J. Med. Inform. (1999)
Considerations for sociotechnical design: experiences with an electronic patient record in a clinical context, Int. J. Med. Inform. (1998)
Cognitive evaluation of decision making processes and assessment of information technology in medicine, Int. J. Med. Inform. (1998)
The evaluation of expert diagnostic systems—How to assess outcomes and quality parameters?, Artif. Intell. Med. (1994)
Using a descriptive model of change when implementing large-scale clinical information systems to identify priorities for future research, Int. J. Med. Inform. (1999)
The VATAM guidelines, Int. J. Med. Inform. (1999)
A methodology for evaluation of knowledge-based systems in medicine, Artif. Intell. Med. (1994)
Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations, Lancet (1993)
Electronic communication and collaboration in a health care practice, Artif. Intell. Med.
The behavioral side of information technology, Int. J. Med. Inform.
Information systems evaluation and subjectivity, Int. J. Med. Inform.
Organizational issues in health informatics: a model approach, Int. J. Med. Inform.
Lessons from a failed information systems initiative: issues for complex organisations, Int. J. Med. Inform.
Cultural aspects of information technology implementation, Int. J. Med. Inform.
Addressing organizational issues into the evaluation of medical systems, J. Am. Med. Inform. Assoc.
Randomized, controlled trials, observational studies, and the hierarchy of research designs, New Engl. J. Med.
Comparison of observational studies and randomized controlled trials, New Engl. J. Med.
Philosophies for the design and development of clinical decision-support systems, Meth. Inform. Med.
Evaluating information technology in health care: barriers and challenges, Br. Med. J.
Medical informatics and the science of cognition, J. Am. Med. Inform. Assoc.
Building a virtual network in a community health research training program, J. Am. Med. Inform. Assoc.
The medical computing ‘lag’: perceptions of barriers to the application of computers to medicine, Int. J. Technol. Assess. Health Care
A plea for controlled trials in medical informatics, J. Am. Med. Inform. Assoc.
Current evaluations of information technology in health care are often inadequate, Br. Med. J.
Users’ guides to the medical literature: XVIII. How to use an article evaluating the clinical impact of a computer-based clinical decision support system, J. Am. Med. Assoc.
Towards an informatics research agenda: key people and organizational issues, J. Am. Med. Inform. Assoc.
Why don't physicians follow clinical practice guidelines?: a framework for improvement, J. Am. Med. Assoc.
Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines, Can. Med. Assoc. J.
Implementing clinical practice guidelines: social influence strategies and practitioner behavior change, QRB
Potential benefits, limitations, and harms of clinical guidelines, Br. Med. J.
Information technology and three studies of clinical work, ACM SIGBIO Newsl.
The computer based patient record: a strategic issue in process innovation, J. Med. Syst.