Developing a Clinical Trials Infrastructure in the United States

By Paul Eisenberg, Petra Kaufmann, Ellen Sigal, Janet Woodcock
May 11, 2012 | Discussion Paper

Clinical trials in the United States have a rich history of involving academic, industry, and government institutions (e.g., the National Institutes of Health [NIH]) to address important medical questions. Nonetheless, over time, clinical trials in the United States have become too expensive, difficult to enroll, inefficient to implement, and ineffective in supporting the development of new medical products under modern evidentiary standards. The nation’s capacity for conducting clinical trials is also inadequate to provide the evidence needed for rational clinical practice. For instance, clinical practice guidelines are routinely issued by professional societies, but only a small proportion of these guidelines are based on high-quality evidence from randomized trials (Lee and Vielemeyer, 2011; Tricoci et al., 2009). Furthermore, there is growing recognition that patients and other stakeholders need to be partners in the development, conduct, and interpretation of clinical trials. Importantly, community medical practitioners and other key caregivers (for example, clinical psychologists) provide an avenue for partnering with patients, but they largely do not participate in clinical trials as investigators or as practitioners willing to refer their patients to trials. This diminishes patient access to trials and decreases the generalizability, or “real-world” relevance, of the trials that are conducted.