Structured Abstract
Background:
With nearly 15.5 million cancer survivors in the United States, gains in survival have created a need for high-quality posttreatment care. Cancer survivors suffer from late and long-term effects of their disease and its treatment, and the health care system must be able to address these issues in a way that reflects patients' preferences for care and their priorities for long-term outcomes. Survivorship care, which encompasses the follow-up care, services, resources and monitoring that cancer survivors receive after treatment has ended, is an important aspect of the cancer care trajectory that can improve patients' health outcomes and quality of life (QOL) over the long term. It focuses on the health and life of a cancer patient posttreatment and addresses physical, psychosocial and economic issues, including the lasting effects of treatment, patients' self-efficacy and ability to manage their care posttreatment, and issues related to insurance and financial challenges resulting from care.
Objectives:
The primary objective of the study was to compare the quality of 3 models of survivorship care and their impact on survivorship care-related patient-centered outcomes (PCOs), using a newly developed survivorship care quality framework (SCQF), its accompanying survivorship care quality index (SCQI), and validated measures of patient-prioritized PCOs.
Methods:
We completed 3 phases of work. (1) We developed the SCQF based on data from focus groups with 170 breast, colorectal and prostate cancer survivors and created a measurement tool, the SCQI, using exploratory and confirmatory factor analysis. The SCQI comprises 9 factors of quality with 42 total metrics. (2) We identified 3 models of survivorship care based on data from an environmental scan of 232 cancer centers with survivorship care services. The models are: (a) Specialized Consultative Care, (b) Specialized Longitudinal Care, and (c) Oncology Embedded Care. (3) We conducted comparative effectiveness research (CER) assessing the quality of care across the 3 survivorship models using the SCQI as our quality instrument. We also examined the impact of the models on patient-reported PCOs: cancer-related QOL; self-efficacy, defined as patients' confidence in managing symptoms and health care concerns related to their health posttreatment; health care utilization of 4 provider types; and patient satisfaction with care. The CER employed a longitudinal, observational study design. We recruited 32 high-performing cancer centers to enroll survivors into the study. We collected data on quality and PCOs over 6 months at 4 points in time: (1) T0a, occurring immediately before the initial survivorship care visit; (2) T0b, at 1 week following the survivorship care visit; (3) T1, at 3 months; and (4) T2, at 6 months. We had a final sample of 991 patients at the first point of data collection and 777 patients at month 6.
Results:
The formative work contributes substantive research to the field in the form of the SCQI, which can be used as both a quality metric and a toolkit for improvement, and the identification of 3 models of survivorship care currently in use in the field. The CER compares the quality and impact of survivorship care across the 3 models, providing ground-breaking results that can substantively change cancer survivorship care. In the CER, bivariate analyses show that cancer centers are struggling to provide high-quality survivorship care in most of the 9 quality of care factors, with most scores registering in the 50% to 70% range (on a scale of 0% to 100%). Multivariable model results show the Specialized Consultative Model is associated with better quality of care in cross-sectional analyses at week 1, but the Oncology Embedded Model is associated with better quality of care over time. For QOL, patients receiving care in the Specialized Consultative Model had significantly fewer physical concerns than their counterparts receiving care in the other 2 models and significantly fewer emotional/social concerns than patients in the Specialized Longitudinal Model. For self-efficacy, survivorship care appears to have a significant impact regardless of care model: bivariate analyses show significant improvement from baseline to month 6 within each model, but there were no differences across models over time. For health care utilization over time, patients in the Specialized Longitudinal Model had significantly fewer visits to their primary care providers (PCPs) than those in the consultative model. Finally, patients in all 3 models were very satisfied with their care, with satisfaction rates in the high 90% range.
Conclusions:
Survivorship care clearly provides benefits to survivors, but different models perform better on different outcomes. The Oncology Embedded Model has higher patient-reported quality scores over time, suggesting the importance of being able to develop long-term, trusting relationships with providers. Patients in the Specialized Consultative Model reported better QOL scores, suggesting a highly specialized approach can address specific QOL concerns better than the longitudinal models in the immediate weeks posttreatment. In both cases, results reinforce that giving cancer centers guidance on how to organize care can improve outcomes and provide blueprints for how care can be developed within specific contexts and patient populations.
Limitations:
The study faces several limitations. Results may not be generalizable to a broader population of cancer survivors because participants were disproportionately White and privately insured. Moreover, the cancer centers in the study were high performers, which may reduce generalizability to all cancer centers. The research design was a quasi-experimental, longitudinal design and may not control for unknown confounders. In addition, our model comparators had some overlap in characteristics, which may confound the examination of differences across models.
Background
A cancer diagnosis creates a significant burden on those suffering from the consequences of the disease and its treatment. Americans diagnosed with cancer today have a nearly 7 in 10 chance of surviving at least 5 years,1 largely due to improvements in early detection and treatment. With approximately 15.5 million cancer survivors in the United States,1 gains in survival also create new needs for quality care posttreatment. Cancer survivors face late and long-term effects of cancer and its treatment including physical problems (eg, heart damage, fatigue, new cancers), emotional issues (eg, fear of recurrence, anxiety, depression) and practical concerns (eg, employment difficulties, insurance challenges, financial issues), as well as challenges managing these effects.2
Posttreatment survivorship care is increasingly recognized as an important and distinct phase of the cancer care trajectory, and it comprises all the follow-up care, services, informational resources and monitoring that cancer survivors receive after their treatment has ended. Its focus shifts from treating the disease to managing the health and quality of life (QOL) of cancer patients over the long term, considering the physical, psychosocial and economic issues they encounter, such as the lasting physical and emotional effects of the disease and treatment, patients' self-efficacy and confidence that they can manage their health and care posttreatment, and issues related to employment, health insurance and financial challenges resulting from treatment.3
In its landmark 2005 report, the Institute of Medicine (IOM) identified 4 key components the health care system should include in the delivery of survivorship care: (1) prevention of new cancers and any late effects; (2) surveillance for cancers, and medical and psychosocial late effects; (3) intervention for consequences of cancer and its treatment; and (4) coordination between different providers of care.2 In 2012, LIVESTRONG issued a report recommending and defining 20 essential elements (EEs), organized into 3 priority tiers, for delivering high-quality survivorship care.4 The American Society for Clinical Oncology (ASCO) has also developed resources to help cancer centers develop, implement, and sustain survivorship care, including a needs assessment, evaluation tools, and reimbursement guidance.5 In response to these efforts, the American College of Surgeons Commission on Cancer (CoC) has established a new accreditation standard requiring that a survivorship care plan, which includes a brief treatment summary and a follow-up care plan customized for each survivor, be delivered to all eligible patients by January 2019.6,7 This requirement, however, operationalizes only 1 of many recommendations, and other guidelines for quality care have yet to be successfully translated into standard practice.
The IOM report, CoC standards and LIVESTRONG EEs have spurred the development of new survivorship care models as part of a fundamentally new approach to caring for cancer survivors. However, no standards for posttreatment follow-up care exist and little research has been done to examine effective survivorship care. In the absence of standards and the widespread translation of evidence into practice, tremendous variation in survivorship programs has emerged.8-11
Researchers have taken a number of approaches to describing program models and care characteristics that are conducive to providing high-quality follow-up care to survivors of adult-onset cancers. Based on Oeffinger and McCabe's original taxonomy,12 scholars have broadly described survivorship approaches in community cancer settings as integrated or shared care models, in which survivors' oncologists and primary care physicians (PCPs) work together to provide appropriate care, or multidisciplinary models, in which care engages a larger team of providers, often through referrals, coordinated by a nurse practitioner or care manager.13 These community cancer center approaches are differentiated in the literature from the more resource- and research-intensive approaches of academic cancer centers, where a more comprehensive and integrated approach is often provided through a team of multidisciplinary providers within the academic medical center system.11 Models in both types of cancer centers may focus on disease-/organ-specific survivorship or all-cancer approaches to survivorship.12
These approaches to survivorship care may be further characterized by service delivery traits such as whether care is consultative or longitudinal, whether risk stratification guides survivors' transitions between oncology providers and PCPs, and whether an individual provider, often a nurse navigator, coordinates care and serves as the patient's point of contact.14,15 As summarized in the Agency for Healthcare Research and Quality's evidence review, survivorship programs are commonly distinguished by (1) types of cancer survivors served, (2) care setting, (3) providers of care, and (4) purpose/frequency of care.16
While the models described in the literature exist as descriptors of how care can be provided, little research has been conducted to systematically examine the structure and processes of survivorship programs as they exist in their local context. Moreover, few studies have been conducted examining the quality of care provided by different models of care and the impact these models have on patient outcomes.
The gap in research on survivorship care quality is widely recognized, and experts have identified the need for such work as essential for addressing problems with the delivery of survivorship care in diverse settings.8,17-20 However, the concept of quality care is complex and subjective, and the literature varies substantially in how it defines and operationalizes quality care depending on health condition and health care setting. Research nonetheless suggests that existing measures of quality universally lack the patient perspective and that patient-centeredness is increasingly recognized by stakeholders across disciplines and contexts as essential to quality care.21,22
Given the complex nature of cancer survivorship, a patient-centered approach to providing quality posttreatment care is essential. While work on oncology quality care is increasingly addressing patient-centeredness, effort has primarily focused on the active treatment stage rather than the entire cancer care continuum.23-27 Research is needed to define and operationalize high-quality survivorship care in a way that incorporates the patients' perspective. Moreover, studies are needed to examine the quality of survivorship care and its impact over time on outcomes that are most meaningful to patients.11,16,27,28 The comparative effectiveness research study Evaluating Cancer Survivorship Care Models addresses these gaps. The specific aims of the study are as follows:
- Specific aim 1. To examine patients' needs during survivorship care and develop a survivorship care framework and measurement instrument (named the Survivorship Care Quality Framework and Index [SCQF&I]) that represent patients' priorities for high-quality survivorship care and their most important health and welfare outcomes.
- Specific aim 2. To examine and describe how survivorship care is organized and delivered, and identify specific models of survivorship care to compare in a comparative effectiveness research study.
- Specific aim 3. To compare the effectiveness of 3 models of survivorship care on patient-centered outcomes (PCOs) identified in specific aim 1 as important priorities for patients. These PCOs are: (1) quality of survivorship care, using the newly developed SCQI; (2) cancer-related QOL, which consists of 3 components: physical concerns, social/emotional concerns, and practical concerns; (3) self-efficacy, defined as patients' confidence in managing symptoms and health care concerns related to their health posttreatment; (4) health care utilization of specific types of providers; and (5) satisfaction with the health care experience. While this research is highly exploratory, in this specific aim we hypothesize that differences will exist in these outcomes across the 3 models of survivorship care.
Participation of Patients and Other Stakeholders
This project was developed with patient and other stakeholder engagement at its core, in particular around the development of the newly created Survivorship Care Quality Framework and Index (SCQF&I), which operationalizes the patient's perspective in the measurement of quality of care during the comparative effectiveness research (CER). The development of the SCQF and SCQI engaged survivors of breast, prostate, and colorectal cancer via focus group (FG) discussions and a national survey to ensure that their priorities were central to the concept of quality survivorship care. The SCQF&I are principal developments of the research project and are poised for broader use as both a quality metric and a toolkit for improvement in clinical practice.
Our most sustained and intensive engagement occurred with the study's survivor-led advisory board (AB). The 18-member AB included 8 cancer survivors, 5 health care providers, 3 patient advocacy organization representatives (Cancer Support Community [CSC], the American Cancer Society [ACS], and LIVESTRONG), a representative from the CoC, and 1 survivorship disparities researcher. Survivor AB members were recruited through a public application process and selected to maximize variation across cancer type and other traits. Organizational partners were strategically recruited based on their role in the survivorship field. Provider AB members were purposively invited to represent a range of professional roles, specialties, and cancer types served. AB members met with the study team biannually via a web-based platform and through additional ad hoc email, phone, and in-person interactions. The AB:
- Ensured that study questions were relevant and reflected stakeholders' interests. AB members contributed to the development of the proposal and ensured that it responded to the needs of survivors and providers, and the state of the field.
- Improved the efficiency and uptake of research activities. CSC, ACS, and LIVESTRONG provided critical assistance in the formative phase by identifying ideal FG sites and recruiting FG participants, and by recruiting cognitive testers for the national survivor survey. CoC's distribution of the Environmental Scan survey to its over 1400 accredited institutions resulted in a response rate of 43.1%. AB members also provided helpful ideas and tangible assistance when we faced challenges recruiting cancer centers for the CER.
- Strengthened the validity, readability, and ease of use of study tools. AB members provided feedback on study tools including the FG guide; survivor survey; the clinician survey instrument used in the CER; and, with other survivor stakeholders, the SCQ Framework and Index. AB members also helped the research team develop the Environmental Scan tool used to examine survivorship care services in CoC-accredited cancer centers as a way to more meaningfully define the models of care.
- Bolstered the quality and transparency of research processes. An AB workgroup reached consensus on key design decisions, such as the algorithm for identifying high-performing institutions, the criteria to guide cancer center selection for the CER, and the descriptions of the 3 models of care. The AB contributed to the interpretation of results at each phase and to the development of the Dissemination Plan.
- Identified study limitations and recommended next steps to address them. The AB noted that study recruitment strategies had not resulted in a diverse patient sample. This observation led the research team to propose a focused project examining survivorship care among underserved patients who seek care in the safety net, in which a number of AB members and sites from the completed study have expressed interest in participating.
Finally, the AB's “sign on” to the study lends credibility to the results. Among other effects, the endorsement of patients, providers, and experts has helped garner attention to the study from leaders in the field, which will help facilitate the incorporation of evidence into practice.
Another group of stakeholders whose engagement tangibly influenced the research was study site collaborators. The 32 CER site coordinators provided valuable feedback to improve patient recruitment, enrollment, and retention that not only improved processes in the completed project but will also inform our approach in subsequent work. Study site personnel, including research coordinators, providers, and administrators, also improved the validity of results and our adherence with PCORI's dissemination guidance by participating in our series of interactive Evidence Assessment webinars (n = 84 participants), where we presented preliminary analyses and solicited feedback on initial results.
Methods
Study Overview
The Evaluating Cancer Survivorship Care Models project comprised 2 distinct phases with 3 specific aims. Phase 1 comprised our formative phase, where we completed specific aims 1 and 2, and phase 2 was the comparative effectiveness arm. In phase 1 specific aim 1, we set out to develop a patient-prioritized quality framework and a measurement instrument for high-quality survivorship care using a mixed-methods approach. In specific aim 2, our objective was to define survivorship models of care through a web-based survey of all cancer centers, and in-depth interviews and surveys with high-performing institutions. In phase 2 specific aim 3, we evaluated the comparative effectiveness of 3 survivorship care models to determine how these varying models impact PCOs.
Study Design
In phase 1 (specific aim 1), we developed a patient-prioritized SCQF&I to understand how patients define quality survivorship care and what they prioritize as the most important aspects of high-quality survivorship (October 2013-August 2014), and to generate a patient-centered tool to measure quality of care in the CER. We conducted 22 FGs with breast, prostate, and colorectal cancer survivors (n = 170) across 6 sites and a virtual platform, and a national survivor survey (n = 1278), to develop the SCQF&I. We used inductive coding to develop the SCQ Framework, which we then validated using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) of survivor survey data. A detailed description of the SCQF is presented in Figure 1 later in this report. EFA/CFA results were used to develop the SCQI (see Results: specific aim 1 and Appendix D for a description of the methodology and results).
Also in phase 1 (specific aim 2), we conducted an Environmental Scan (January 2014-March 2015) to understand how survivorship care is organized and delivered and to develop a new taxonomy for categorizing survivorship programs. We disseminated the survey via the CoC to 1407 accredited cancer institutions and received 628 responses. Of these responses, 232 cancer centers met our criteria for providing survivorship care and completed more than 1 section of the survey. The study team also conducted 32 in-depth interviews with high-performing survivorship care programs (see the definition of high performers below), along with a detailed organizational survey, to understand in more detail how survivorship care is organized, supported and delivered to cancer survivors. We used the environmental scan data to examine the prevalence of delivery system characteristics identified in the literature as important to the effectiveness of survivorship care and the provision of the LIVESTRONG EEs of Survivorship Care.16 We then examined the distribution of the co-occurrence of these characteristics among all institutions (N = 142) and high-performing institutions that responded to the scan (n = 96) to identify the most common models of care emerging in the field. Based on a review of the literature, we identified additional adaptable components that are important to the delivery of survivorship care and analyzed data from the organizational interviews and characteristics survey of the high performers to further define differences across the models. A detailed description of the models is presented in the section Results: specific aim 2.
In phase 2 (specific aim 3), we conducted the CER (February 2015 to December 2016) to evaluate the quality and effectiveness of survivorship care across the 3 models over a 6-month period. We included only high-performing cancer centers in the study to ensure the programs participating in the study were providing a certain threshold of survivorship care. To identify high performers, we developed a survivorship program performance index by examining the provision of LIVESTRONG tier I, II and III EEs (weighted by importance) across institutions that participated in the environmental scan. We defined "high performers" as institutions that scored in the top 50% of the Performance Index range. We recruited 32 high-performing CoC-accredited cancer centers as CER site partners (see Appendix E for a list of participating institutions). We conducted organizational surveys and interviews of each cancer center to classify it into 1 of the 3 models of care and to better understand the contextual attributes of the models, thereby bolstering external validity. We enrolled a total of 991 survivors across the 32 institutions to examine the quality and impact of survivorship care on key outcomes over a 6-month period. The longitudinal design of the study allowed us to examine the ongoing and lasting impact of survivorship care on PCOs.
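As a hedged illustration of how such a weighted index might be expressed (the report does not specify the exact weights or scoring rule, so the notation below is ours, not the study's), institution $j$'s score could be written as

$$\mathrm{PI}_j = \sum_{t=1}^{3} w_t \sum_{e \in \mathrm{tier}\ t} x_{je},$$

where $x_{je} = 1$ if institution $j$ reported providing essential element $e$ and 0 otherwise, and $w_t$ is the importance weight assigned to tier $t$. Under the tiering described above, institutions whose scores fell in the top 50% of the observed range of the index were classified as high performers.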
A clinician survey was also completed by all clinicians who had a patient participating in the CER study. The clinician survey collected information on the services provided during the initial survivorship visit. In the study, we defined the initial survivorship care visit as the first clinic visit where the patient and clinician discuss explicit post-cancer treatment care instructions, including follow-up services, information resources and monitoring needs. The timing of when this occurred in a patient's cancer care trajectory varied by survivorship care model and institution.
Study Setting and Participants
We used a stratified sampling approach to form the study cohort. As noted above, we recruited 32 high-performing CoC-accredited cancer centers as our CER site partners. Cancer centers were invited to participate in the study if they met the following criteria: they provided survivorship care and they met our high performer threshold (see Methods: Study Design). Using the environmental scan to identify our sampling frame of sites, we invited a total of 55 institutions to participate in the study. In total, 34 sites met our eligibility criteria and agreed to participate in the study. Two sites withdrew from the study.
Each participating cancer center was responsible for recruiting a convenience sample of 30 cancer survivors into the study. The George Washington University (GW) research team gave each cancer center explicit inclusion and exclusion criteria for screening and enrolling participants. Adult survivors of nonmetastatic breast, prostate and colorectal cancer were considered eligible for participation if they had completed all active treatment, would be receiving survivorship care, and had not yet had an initial survivorship-specific visit for that diagnosis and treatment. Patients could still be receiving planned hormonal or maintenance treatment. Patients were not considered eligible for participation in the study if they were undergoing active treatment for their cancer, including surgery, chemotherapy, targeted therapy, immunotherapy, and radiation.
Interventions and Comparators
The CER is an observational study and did not implement an intervention. However, we compare outcomes across 3 models of survivorship care, which serve as our study comparators. The models are defined by 3 core components: the approach to survivorship care (a specialized survivorship clinic vs care embedded in standard oncology follow-up); the duration of care (a 1-time consultative visit vs multiple ongoing visits in longitudinal care); and access to care (receiving services onsite in the cancer center vs through a referral). The 3 models are:
- Specialized Consultative Model with direct access. This model provides consultative survivorship care after treatment has ended through a 1-time-only visit dedicated specifically to survivorship (limited to a single encounter for a specific purpose). The majority of tier 1 EEs, including the SCP, psychosocial care, health promotion services and symptom management, are provided through direct access during the survivorship visit.
- Specialized Longitudinal Model with direct access. This model provides ongoing survivorship care through a series of survivorship-specific visits scheduled at predetermined intervals and as needed by the patient. The majority of tier 1 EEs are provided through direct access to services onsite rather than through a referral, including the SCP, surveillance for impacts of treatment, screenings for new cancers/recurrences, health promotion education, psychosocial care and symptom management.
- Oncology Embedded Model with varied access. This model provides survivorship care as part of standard treatment and posttreatment follow-up care. Because this model integrates survivorship with standard follow-up care, it is considered a longitudinal model, though institutions also report that survivorship care is not always provided as systematically as in Model 2, but rather on an as-needed basis. Institutions with the embedded model are more likely than either of the other 2 models to use referrals to deliver some of the key survivorship services.
Please see Results: specific aim 2 for full descriptions of the 3 models that serve as comparators.
Study Outcomes
We collected data on 5 primary patient-reported PCOs that relate to 2 aspects of survivorship: the quality of the survivorship care provided and the impact the care has on patients' health and wellness. The quality of care is examined using 2 outcomes: quality of survivorship care as measured by the SCQI and patient satisfaction with the care received. The impact of care is examined using 3 outcomes: cancer-related QOL, self-efficacy, and health care utilization. We selected outcomes to capture the breadth of survivors' perspectives on their priorities for high-quality survivorship care, as well as their posttreatment experiences related to everyday life. When selecting outcomes, we also considered relevance for health care providers, health systems, and stakeholders in the survivorship field. All the outcome variables measure complex concepts. Thus, these variables comprise multiple metrics, some of which we have aggregated into composite index scores. See Table 1 for a description of each primary outcome, including methods for measuring and calculating aggregate scores, where appropriate.
Time Frame for the Study
We followed study participants for 6 months to assess the ongoing and lasting impact of survivorship care models on survivors, who often experience late and long-term effects of their cancer and treatment for months, even years, after completion of their cancer treatment. We collected data at 4 points in time, consisting of 2 time points for baseline data and 2 follow-up time points. Please see Diagram 1 for a description of the time points at which each outcome was collected, as not all outcomes were collected at each time point. T0a and T0b together comprise the baseline for the study. T0a was our enrollment time point, where we enrolled patients into the study and collected demographic data, as well as baseline data on patients' QOL, management of health and health care utilization. These data were collected prior to patients' initial survivorship care visit. T0b was considered our baseline for outcomes related to the quality of the care provided during the survivorship care visit and patients' satisfaction with that visit. This time point occurred 1 week following the initial survivorship visit. T1 was the first follow-up time point and occurred 3 months following the initial survivorship visit. We collected QOL, management of health and health care utilization outcomes at this time point. T2 was the second follow-up time point and occurred 6 months following the initial survivorship visit. We collected data for all outcome variables at this time point: QOL, management of health, health care utilization, quality of care, and satisfaction with care.
Data Collection and Sources
Initial patient data were collected for the CER in person via a secure iPad at enrollment, which occurred prior to the initial survivorship care visit, that is, before the first instance in which a survivor spent time with any provider to facilitate access to survivorship care and services. All follow-up patient data were collected by the GW research team by telephone. We made a significant effort to maximize the follow-up rate. We incentivized participants by sending a $15 gift card after completion of each follow-up survey (a total of 3). In addition, during the enrollment survey, we asked participants to identify days and times that were convenient for them to receive GW research staff's follow-up calls. If participants were not available when GW researchers called, staff left voicemails that included a phone number the survivor could use to call back. The study team left 10 voicemails before reporting a survivor as lost to follow-up. On occasion, survivors participating in the study were deemed ineligible for further follow-up based on screening questions related to their cancer status. If a survivor's cancer had returned or he or she had been diagnosed with a new cancer, the survivor was dropped from the study and recorded as lost to follow-up. Finally, if we contacted a survivor who then asked to be removed from the study, we asked for a reason and then recorded the survivor as lost to follow-up. We tracked all reasons for loss to follow-up when possible (see Table 12 later in this report).
Analytical and Statistical Approaches
The data set used to conduct the CER analysis links survivors' (patient) data across the 4 time periods described above, data from the survivorship clinicians who provided care to participating survivors during the initial survivorship visit, and organizational characteristics at the cancer center level (captured via the organizational survey). Patient-level data at all 4 time periods were collected in REDCap. The clinician data, which provide information on services provided to individual survivors, were also collected in REDCap. Clinician data were then linked to patient data via a unique numeric identifier assigned to each participant. Data from the organizational level were linked to patient-level data using an organizational variable in Stata 12.
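As a hedged illustration of this linkage logic (the report states the organizational merge was performed in Stata 12; the sketch below expresses the same logic in SAS for consistency with the other examples in this report, and all data set and identifier names, such as patient_long, clinician, org_chars, patient_id, and site_id, are hypothetical placeholders):

/* Illustrative sketch of the data linkage described above; all names are placeholders */
proc sort data=patient_long; by patient_id; run;
proc sort data=clinician;    by patient_id; run;

data patient_clin;                          /* attach clinician-reported services to each patient */
  merge patient_long(in=in_pat) clinician;
  by patient_id;
  if in_pat;                                /* keep every enrolled patient record */
run;

proc sort data=patient_clin; by site_id; run;
proc sort data=org_chars;    by site_id; run;

data cer_analysis;                          /* attach cancer center characteristics to each patient */
  merge patient_clin(in=in_pat) org_chars;
  by site_id;
  if in_pat;
run;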
We used χ2 tests for categorical variables and analysis of variance for continuous variables to test for differences across the 3 care models. We used mixed models to perform longitudinal regression, accounting for within-institution and within-individual correlations over time, and to adjust for potential confounders. Potential covariates were identified a priori based on the available literature and known factors hypothesized to be associated with the outcomes of interest. All of the final mixed models controlled for patient demographics (age, gender, race, ethnicity, marital status, education and income), primary provider of survivorship care, and time since diagnosis. In all mixed models, we used an unstructured covariance structure. We tested the effects of changes by model over time on the main outcomes of interest: QOL, quality of care, self-efficacy, health care utilization, and satisfaction. We did not correct for multiple comparisons in this study. All outcomes were compared over time from baseline across models.
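To make the modeling approach concrete, the following is a minimal SAS sketch of a longitudinal mixed model of the kind described above (the report notes that SAS PROC MIXED with an unstructured covariance structure was used); the data set and variable names (cer_long, qol_physical, care_model, patient_id, institution_id, and so on) are illustrative placeholders, not the study's actual variables.

/* Hedged sketch of one longitudinal mixed model; all names are hypothetical */
proc mixed data=cer_long method=reml;
  class care_model timepoint patient_id institution_id
        gender race ethnicity marital_status education income provider_type;
  /* Outcome as a function of care model, time, and their interaction,
     adjusting for the covariates listed in the report */
  model qol_physical = care_model timepoint care_model*timepoint
                       age gender race ethnicity marital_status education income
                       provider_type months_since_dx / solution ddfm=kr;
  /* Unstructured covariance for repeated measures on patients nested within sites */
  repeated timepoint / subject=patient_id(institution_id) type=un;
run;

The care_model*timepoint interaction carries the comparison of interest: whether change from baseline to month 6 differs across the 3 models.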
Recent studies have shown that multiple imputation does not improve the performance of longitudinal mixed models; therefore, missing data were not imputed.29 The SAS procedure PROC MIXED handles missing data under the assumption that any data missing after baseline are missing at random. To assess whether data were missing at random, we compared basic demographic characteristics between those who completed the 6-month assessment and those who dropped out (please see Appendix F). Those who completed the study were more likely to be female, non-Hispanic White, breast cancer survivors, and married. The participants did not differ by other demographic characteristics. There was very little item nonresponse due to the follow-up methods described above. For the SCQI items, nonresponse was <1.00%, with nonresponse occurring only at baseline and not at any other time point. For QOL, the item nonresponse rate was <4.00% for all items at all survey points. For self-efficacy, the item nonresponse rate was <1.00% for all items at all survey points, and for health care utilization it was <2.00% for all items at all survey points. For satisfaction, the item nonresponse rate was 2.70% at its highest point, which was at T2, our final survey time point. Analyses were done in SAS 9.4 and Stata 12.
For the purposes of the CER, the analyses are powered for the primary hypothesis. In terms of potential exploratory subgroup analyses, a model by cancer type was the only analysis considered, but given the current state of cancer survivorship care, we did not anticipate that we could achieve the required sample size to examine treatment effects by cancer type. We did not examine heterogeneity of treatment effects (HTE) because our sample was fairly homogeneous and we did not have statistical power to analyze effects by any subgroup.
Our sample is sufficiently powered to detect main effects larger than f = 0.15 in the longitudinal models. Assuming a small-to-medium effect size (f = 0.15), α = .05, correlations within individuals over time of r = 0.5 or higher, 3 models, and a minimum of 2 observations per subject, the power to detect an effect of this magnitude is 0.904, provided the design effect (DE)* is not greater than 2.
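For context, the design effect referenced here is conventionally approximated from the average cluster size and the intraclass correlation; the expression below is the standard formula (our notation, not reproduced from the report's own power calculation), where $m$ is the average number of enrolled survivors per cancer center and $\rho$ is the intraclass correlation coefficient:

$$\mathrm{DE} = 1 + (m - 1)\rho, \qquad n_{\mathrm{effective}} = \frac{n}{\mathrm{DE}}$$

Under this approximation, a design effect no greater than 2 implies that the effective sample size is at least half the nominal enrolled sample.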
Conduct of the Study
The GW IRB approved the original study protocol on October 22, 2013, and in each subsequent year thereafter. We made 2 changes to the study protocol over the course of the project. First, we changed our approach to the national survivor survey used to develop the SCQI. Initially, we had proposed conducting a survey using a convenience sample, recruiting survivors through GW's cancer registry, CSC's registry of ≥3500 cancer survivors, LIVESTRONG's social media, and ACS's Cancer Survivors Network. However, to increase the generalizability of the survey, we replaced the convenience sampling approach and contracted with GfK Knowledge Networks to field a probability-based internet survey. This change required an IRB modification, which was approved on July 3, 2014.
The second change to the original study protocol was related to the institutional and patient sample size in the CER. Originally, we had proposed to recruit 15 institutions per model for a total of 45 institutions, and approximately 30 survivors per institution for a total of 1350 survivors. Due to a variety of recruitment challenges and delays, however, we revised the institutional target to 32 institutions. This change to our protocol did not require an IRB modification; however, it did require a contract modification to our PCORI contract, which was approved June 24, 2016.
Results
Specific Aim 1
The purpose of this phase of the project was to develop a patient-prioritized framework and measurement instrument representing high-quality survivorship care. The study team conducted extensive formative work on patients' priorities for survivorship care, including analysis of FGs with survivors and a national survivor survey (see Table 2 for FG and survivor survey participant demographics and Table 3 for participant eligibility and enrollment information). Based on this work, we developed the SCQF&I, which comprises 3 domains and 9 factors of high-quality survivorship care, as prioritized by patients' wants and needs.
Survivorship Care Quality Framework and Index
The patient-prioritized SCQF&I served as the guiding principles for the development of one of our primary outcomes: quality of care. The SCQF was developed specifically for cancer survivorship care, with survivors' priorities at the forefront of its development. The framework represents survivors' perspectives on their care posttreatment, focusing on how the health care system can be improved and what their expectations are for long-term outcomes. The identification and organization of the framework were accomplished through a proximity matrix and qualitative cluster analysis of themes that emerged from the FG discussions, and then validated using EFA/CFA of data from the national survey of cancer survivors. Based on the final CFA, we developed the SCQI instrument as our quality of care outcome measure (see Appendix G).
The framework depicts 3 important influences at different levels of the health care system that, when organized well, can support the goal of providing high-quality, patient-centered care and improve survivors' ability to manage the lasting effects of cancer and advance their overall health. The 3 domains are organized around intrapersonal, interpersonal and organizational influences that reflect how high-quality survivorship care should be organized: (1) with resources to help survivors become informed and grounded; (2) with an emphasis on productive partnerships between clinicians and patients; and (3) with coordinated organizations that support health and wellness. The framework further specifies factors of care within each of these domains that survivors identified as essential for high-quality survivorship care. See Figure 1 for a full description of the SCQF domains and factors.
The study team used the national survivor survey to confirm generalizability, validate the conceptualization of the qualitative framework, and operationalize the 9 factors with individual metrics identified as "high priorities" by participants (items that 40% or more of participants rated as either very important or absolutely essential). After identifying 42 high-priority items, we performed EFA (n = 639) and CFA validation (n = 639) analyses. Results indicated all factors had an α >.75 (most >.86), demonstrating that each factor has a patterned relationship among its items. For the CFA, the model was specified based on the 42-item EFA results. Standard fit statistics and modification indices were examined to determine fit. The root mean square error of approximation (RMSEA) is an absolute measure of fit that presumes the best-fitting model has a fit of 0; a value <.10 is considered acceptable. The comparative fit index (CFI) is an incremental measure of fit based directly on a noncentrality measure. The nonnormed fit index (NNFI) is another incremental measure of fit that includes a penalty for adding parameters to the model. For the CFI and NNFI, a value between .90 and .95 is considered marginal, above .95 indicates a good-fitting model, and below .90 indicates a poor-fitting model. The model achieved acceptable fit (RMSEA <0.10, CFI and NNFI ≥0.9) without modifications to the EFA specification of items per factor, but 14 error variances were allowed to covary to improve fit of the final CFA model. The fit statistics for the models tested and additional detail on the methodology and results of the EFA/CFA are shown in Appendix D.
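For reference, common forms of these fit indices are given below (standard definitions, not reproduced from the report), where $\chi^2_M$ and $df_M$ refer to the specified model, $\chi^2_0$ and $df_0$ to the baseline (independence) model, and $N$ is the sample size:

$$\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_M - df_M,\; \chi^2_0 - df_0,\; 0)}, \qquad
\mathrm{NNFI} = \frac{\chi^2_0/df_0 - \chi^2_M/df_M}{\chi^2_0/df_0 - 1}$$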
Based on the final CFA, we developed the SCQI, a measurement tool of quality survivorship care, to operationalize the conceptual framework as an instrument for the CER. The SCQI comprises 9 quality factors that map to the SCQF and a total of 41 individual items that measure specific services within each quality factor (see Table 4 for a description of domains, factors and items).* The 9 factors are organized into the domains according to the SCQF: A) Informed and Grounded Patient domain – A1. emotional and social support; A2. information and resources about expectations posttreatment; B) Patient-Provider Alliance domain – B3. empowered and engaged patients; B4. supportive and prepared clinicians; B5. meaningful communication between patients/providers; and C) Supportive Health and Wellness System domain – C6. care coordination and transitions; C7. provision of full spectrum of care; C8. practical life support; C9. providing a medical home.
Specific Aim 2
The purpose of specific aim 2 was to examine and describe how survivorship care is organized, and identify specific models of survivorship care to compare in the CER. The study team identified 3 models of survivorship care based on analysis of our Environmental Scan of 232 cancer centers (See Table 5 for eligibility and enrollment numbers) as well as in-depth interviews and a characteristics survey of 32 high-performing cancer centers.
Based on a literature review and input from stakeholders, we identified 3 delivery system characteristics that have widespread acceptance in the field for how survivorship care should be organized to ensure the provision of the LIVESTRONG EEs: (1) the approach to survivorship care: whether it is provided through a specialized clinic vs embedded as part of standard oncology follow-up care; (2) the duration of care: whether care is provided in a single consultative visit or via multiple visits on an ongoing basis; and (3) access to services: whether the majority of provided tier 1 and tier 2 EEs are available directly through a survivorship care visit or indirectly through referral. Based on our analysis of the environmental scan data, these 3 characteristics of care are those most commonly included in survivorship care programs (see Table 6). These characteristics serve as the principal components comprising our survivorship care models.
To examine how often these characteristics are organized together, we created a co-occurrence matrix that resulted in 12 model permutations, all of which would have an expected frequency of 8.3% if the models were distributed evenly in our sample. Specific permutations, however, emerged as more common both among the sample of all institutions in the scan and high-performing institutions (see Table 7). These permutations became the framework for the 3 models used in the CER. Qualitative analysis of in-depth interviews and organizational surveys of the 32 high-performing institutions participating in the CER identified additional adaptable components that are also important to the delivery of survivorship care and that vary depending on model. These characteristics are included in our model descriptions.
Cancer Survivorship Care Models
The final results of specific aim 2 are the definition of the 3 models, which are specified below. See Table 8 for more detail on the models.
- Specialized Consultative Model ("Model 1") provides survivorship care after treatment has ended through a 1-time visit dedicated specifically to survivorship. Care is limited to a single encounter for a specific purpose. Model 1 provides the majority of tier 1 EEs through direct, onsite access during the survivorship visit, including the SCP, psychosocial care, health promotion services and symptom management.
- Specialized Longitudinal Model ("Model 2") provides ongoing survivorship care after treatment has ended through a series of survivorship-specific visits scheduled at predetermined intervals and as needed by the patient. Model 2 provides the majority of tier 1 EEs through direct onsite access during the survivorship visit, such as the SCP, psychosocial care, health promotion services, symptom management, surveillance for impacts of treatment, and screenings for new cancers/recurrences.
- Oncology Embedded Model (“Model 3”) provides survivorship care that is delivered by the oncology treatment team as part of standard treatment and posttreatment care. Survivorship care tends to be delivered throughout the treatment course; thus, it is considered a longitudinal model. In this model, survivorship is not typically delivered systematically over a course of visits set aside specifically for this type of care, but rather as needed or requested by the patient. Model 3 is more likely than the other 2 models to use referrals to deliver key survivorship services.
While the 3 models have overlapping features, they also include characteristics that clearly differentiate them in terms of how care is organized and delivered and what services are offered. These factors and how they are integrated in a model of care are key contributors to the overall quality of a survivorship program and are critical for the comparison of effectiveness in our study, particularly as they relate to our PCOs of interest.
Specific Aim 3
The main objective of specific aim 3 was to compare the effectiveness of 3 models of survivorship care on the PCOs deemed of highest priority for patients. These PCOs are: (1) quality of survivorship care, using the newly developed SCQI; (2) cancer-related QOL, which consists of 3 components: physical concerns, social/emotional concerns, and practical concerns; (3) self-efficacy, defined as patients' confidence in managing symptoms and health care concerns related to their health posttreatment; (4) health care utilization of specific types of providers; and (5) satisfaction with the health care experience. Given the exploratory nature of the study, we chose to report results where the probability of type I error is <.1 (P < .1). In the following section, we present both cross-sectional bivariate and longitudinal multivariable results, examining each of the 5 PCOs by model of survivorship care. The 3 models of survivorship care we are comparing are (1) the Specialized Consultative Model; (2) the Specialized Longitudinal Model; and (3) the Oncology Embedded Model. Our main objective is to identify any differences in the 5 PCOs across the models, which may suggest that the model, or how care is delivered within the model, impacts the outcomes.
We have organized this section by PCO in the following order: (1) quality of care as measured by the Survivorship Care Quality Index (SCQI) and presented using composite scores of the 9 principal factors; (2) QOL, presented using composite scores of the 3 components; (3) self-efficacy, presented as a single composite measure; (4) health care utilization, presented as utilization of 4 types of health care providers; and (5) satisfaction with care, presented as 3 individual metrics. For more information on each PCO, please refer to Table 1, which provides a description of each primary outcome, including methods for measuring and calculating aggregate scores, where appropriate.
Each outcome section is organized in the same way to provide a consistent format for interpreting results. First, we present bivariate analyses to examine differences in outcomes across the groups of patients receiving care in each of the 3 models of survivorship care. These results are cross-sectional, providing data at both baseline and month 6, and help determine whether an empirical relationship exists between the survivorship care model (ie, receiving survivorship care from a cancer center that organizes care according to a particular model) and the outcomes of interest. Second, we present longitudinal, multivariable analyses for each outcome. The multivariable analyses examine the relationship between the survivorship care model and the outcomes of interest over time, controlling for confounding factors. The primary objective of these analyses is to identify whether changes over time for an outcome vary across the 3 models. For example, we look at whether there is an improvement in QOL from baseline to 6 months and whether the magnitude of improvement differs across the 3 survivorship care models. These analyses compare the effectiveness of each model of survivorship care on the PCOs over a 6-month period. Finally, we summarize the key findings for each outcome at the end of the outcome's section.
Demographics of Study Participants
A total of 991 patients enrolled in the study at baseline (see Table 9 for additional demographics of study participants).
Our final sample size at 6 months was 777 patients, yielding a 78% follow-up rate. (See Tables 10, 11, and 12 for additional information on participant eligibility and enrollment, and lost-to-follow-up numbers).
Quality of Care
Quality of care was measured using the newly developed SCQI, created by the study team. The SCQI is a 42-item questionnaire asking about survivorship care services. These items measure 9 factors that represent specific aspects of quality survivorship care. The factors and underlying item structure of the index were confirmed as valid constructs of quality by the EFA/CFA (see Methods, Study Design and Appendix D for more information). The 9 factors are organized into a framework consisting of 3 larger domains that encapsulate the intrapersonal, interpersonal and organizational levels of quality care. These domains are (1) being an informed and grounded patient (intrapersonal); (2) having a patient-provider alliance (interpersonal); and (3) having a supportive health and wellness system (organizational). We report statistically significant results for each of the 9 factors at the bivariate, cross-sectional level at both baseline and 6 months, and then longitudinally over the 6-month period, controlling for confounding factors. We have organized the presentation of results by the framework domains to show impact at the different levels of quality care. Within the 9 factors, each item is measured on a 3-point performance scale, where 0 = the item is not at all met, 1 = somewhat met, and 2 = definitely met. The reported percentages represent the number of items a patient rated as "yes, definitely" met divided by the total number of items in the factor.
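As an illustration of this scoring rule, the following is a minimal SAS sketch for a single factor; the data set and item names (scqi_items, emo1-emo5) are hypothetical, and the factor is assumed to contain 5 items coded 0, 1, or 2 as described above.

/* Hedged sketch of the SCQI factor scoring rule; names and item counts are hypothetical */
data scqi_scores;
  set scqi_items;                     /* one record per patient per time point */
  array emo_items[5] emo1-emo5;       /* items coded 0 = not met, 1 = somewhat met, 2 = definitely met */
  definite_count = 0;
  do i = 1 to dim(emo_items);
    if emo_items[i] = 2 then definite_count = definite_count + 1;  /* count "yes, definitely" responses */
  end;
  /* Factor score: share of items in the factor rated "definitely met", expressed as a percentage */
  emo_support_score = 100 * definite_count / dim(emo_items);
  drop i definite_count;
run;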
Bivariate analysis
In looking at the quality of care outcome, the bivariate data suggest that cancer centers are struggling to provide high-quality survivorship care, even among high-performing institutions (see Table 13). Scores on the 9 factors of quality care range from a low of 9.3% for the practical life support factor to a high of 89.3% for the meaningful communication factor. Most scores registered in the 50% to 70% range. Of particular concern, cancer centers are failing to provide services and resources to support survivors' posttreatment emotional and social issues, with low scores across all 3 models (Model 1 = 33.1%, Model 2 = 30.7%, and Model 3 = 20.2% of services reported as "yes, definitely" provided at baseline). When looking at differences in scores reported by patients receiving care in each survivorship care model, we observed significant differences across care models in 6 of the 9 quality factors at week 1 but only 3 of the 9 factors at month 6 (unadjusted analyses). Significant results are reported below by factor at both the week 1 and month 6 time points.
In the intrapersonal domain (Informed and Grounded Patient), composite scores for both factors (getting emotional and social support, and having adequate information and resources to understand survivorship) are significantly different across patients in each care model, with patients receiving care from the Specialized Consultative Model and the Specialized Longitudinal Model showing higher scores than those in the Oncology Embedded Model on both factors. For emotional and social support, patients receiving care in either Specialized model (the Specialized Consultative Model or the Specialized Longitudinal Model) report higher quality scores than those in the Oncology Embedded Model by 10 to 13 percentage points (unadjusted analysis: Model 1 = 33.1%, Model 2 = 30.7%, and Model 3 = 20.2%) (P < .001). At the 6-month time point, emotional and social support scores are significantly different across the 3 care models, with survivors receiving care in the Specialized Longitudinal Model reporting the highest scores. However, despite the significant differences across care models in emotional support services at month 6, the scores are very low (unadjusted analysis: Model 1 = 14.8%, Model 2 = 18.9%, and Model 3 = 12.6%) (P < .029), suggesting these services are not being provided long term.
For the factor information and resources, patients in either Specialized model (the Specialized Consultative Model or the Specialized Longitudinal Model) report scores approximately 8 to 10 percentage points higher than patients in the Oncology Embedded Model (unadjusted analysis: Model 1 = 75.5%; Model 2 = 73.9%; and Model 3 = 65.9%) (P < .001). Little difference exists in the composite score between patients in the 2 specialty models.
In the interpersonal domain (Patient-Provider Alliance), composite scores for 2 of the 3 factors showed significant differences across patients receiving care in each of the 3 care models at week 1, with patients in the Specialized Consultative Model and the Specialized Longitudinal Model reporting higher scores than patients in the Oncology Embedded Model. For the empowered and engaged factor, the composite score shows that roughly 60% of participants in the 2 Specialized models reported this factor's metrics as definitely met in their survivorship care (Model 1 = 59.0% and Model 2 = 60.7%), compared with only 50.5% of participants in the Oncology Embedded Model (P < .001). The factor "meaningful communication" had the highest scores in this domain, with unadjusted scores of 88.4% for Model 1, 89.3% for Model 2, and 84.2% for Model 3 (P = .003). While statistically significant, the differences for this factor are small and may not be meaningful to patients. We found no significant differences in patient scores across models at the 6-month time point for any of these factors.
In the organizational domain (Supportive Health and Wellness System), composite scores for 2 of the 4 factors are significantly different across patients in the 3 care models. For the care coordination and transitions in care factor, patients in the Specialized Consultative and Specialized Longitudinal Models reported scores of 53.1% and 54.8%, respectively (P < .008), showing very little difference between the 2 patient groups. Patients in the Oncology Embedded Model reported scores of 50.3%, which is statistically significantly lower than the scores reported in the other 2 models. While the difference between patients in the 2 Specialized models and the Oncology Embedded Model seems small for this factor, examination of the individual metrics (see data in Appendix H) shows that large differences in certain measures are potentially driving the difference. For the full spectrum of care factor, the composite score again is higher for patients in the 2 Specialized models than for those in the Oncology Embedded Model, by 10 to 11 percentage points (unadjusted analysis: Model 1 = 45.4%, Model 2 = 44.8%, and Model 3 = 34.6%) (P < .001). This pattern is also seen for this factor at the 6-month mark, though statistical significance is marginal (unadjusted analysis: Model 1 = 41.4%, Model 2 = 45.9%, and Model 3 = 39.7%) (P < .061). Scores reported by patients for practical life support are not significantly different by care model at baseline but are significantly different at 6 months, with higher scores reported by patients receiving care in the Specialized Longitudinal Model and the Oncology Embedded Model (unadjusted analysis: Model 1 = 8.0%, Model 2 = 13.2%, and Model 3 = 14.8%) (P < .009). While these data show important potential differences across the patient groups in each of the survivorship care models, they are unadjusted findings and may not reflect differences due to the model, but instead differences across the patients in the 3 groups.
Multivariable analysis
Cross-sectional, multivariable analyses that control for important demographic variables across the models support the cross-sectional bivariate findings: at week 1, patients in the Oncology Embedded Model report lower quality of care scores in 3 of the 9 factors (analyses not shown*). Patients in this care model report scores for the provision of a full spectrum of care factor and the emotional and social support factor that are 5 and 7 percentage points lower, respectively, than scores in the Specialized Consultative Model (P < .01). In addition, scores for the meaningful communication factor are 2 percentage points lower than in the Specialized Consultative Model, although this difference is only marginally significant (P < .1).
Multivariable, longitudinal results comparing all 9 quality of care factors across the survivorship care models are summarized in Table 14. Compared with the Specialized Consultative Model, the Specialized Longitudinal Model showed greater improvement between week 1 and month 6 in 2 factors: practical life support and health insurance issues, by 8 percentage points (P = .018), and emotional and social support, by 8 percentage points (P = .030). The Oncology Embedded Model showed greater improvement than the Specialized Consultative Model in 7 quality of care factors. Most notably, participants in the Oncology Embedded Model reported 9 to 12 percentage points greater improvement for provision of a full spectrum of care (0.105, P < .001), practical life support/health insurance issues (0.086, P = .003), emotional and social support (0.118, P < .001), and being empowered and engaged patients (0.091, P = .002).
Summary of quality of care results
In summary, the unadjusted, bivariate analyses suggest that the quality of care reported by patients in the Specialized Consultative and Specialized Longitudinal Models is higher than that reported by patients in the Oncology Embedded Model. This pattern holds at week 1 for 6 of the 9 quality factors. At month 6, we see a similar trend, with the 2 Specialized models showing higher patient scores than the Oncology Embedded Model, but for only 3 of the 9 quality factors. The multivariable, longitudinal results show an improvement in quality scores from baseline to 6 months that is larger for the Oncology Embedded Model than for the 2 Specialized models; in 6 of the quality factors, the magnitude of improvement is significantly larger than in the Specialized Consultative Model, the reference group.
Quality of Life
Quality of life was measured using the Quality of Life-Breast Cancer Survivors30 instrument and the Functional Living Index-Cancer.31 Forty-one items were measured across 3 domains: (1) physical concerns; (2) social and emotional concerns; and (3) practical concerns. Responses for each metric were dichotomized as reporting any concern vs reporting no concern. The 3 summary domain measures comprise "top concerns," which we identified based on the prevalence of concerns reported by 40% or more of the study population. The summary domain scores thus indicate the average percentage of top concerns reported, ranging from 0 to 1 and presented as a percentage. For this outcome, a lower score indicates fewer concerns, so we examined which models had the lowest average percentage of concerns. We report statistically significant results for each domain at both baseline and 6 months.
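As an illustration of this scoring approach, the minimal sketch below identifies the top concerns in one hypothetical domain and computes the resulting domain score; the item names, data, and the exact handling of the 40% threshold are assumptions for illustration, not the study's analysis code.

```python
import pandas as pd

# Hypothetical dichotomized QOL items for one domain:
# 1 = any concern reported, 0 = no concern. Names are illustrative only.
physical_items = pd.DataFrame({
    "fatigue": [1, 0, 1, 1],
    "pain":    [1, 1, 0, 1],
    "sleep":   [0, 0, 1, 0],
})

# Step 1: "top concerns" are items reported as a concern by >= 40% of the
# study population (here applied to the sample itself for illustration).
prevalence = physical_items.mean()
top_concerns = prevalence[prevalence >= 0.40].index

# Step 2: each participant's domain score is the share of top concerns they
# reported, a 0-1 value presented as a percentage (lower = fewer concerns).
domain_score = physical_items[top_concerns].mean(axis=1)

print(list(top_concerns))             # ['fatigue', 'pain'] for these data
print((domain_score * 100).round(1))  # per-participant percentage of top concerns
```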
Bivariate analysis
At baseline, bivariate results for QOL show significantly different scores across the 3 care models for physical concerns and social/emotional concerns (P < .05), but not for practical concerns (unadjusted analyses; see Table 15). In the physical concerns domain, patients in the Specialized Consultative Model had the highest composite scores, with an average of 56.5% of top concerns reported, compared with 52.6% for Specialized Longitudinal Model patients and 50.7% for Oncology Embedded Model patients. There were no significant differences across patients in the care models at 6 months.
We saw a similar trend in the social and emotional domain, where patients in the Specialized Consultative and Specialized Longitudinal Models had more reported concerns (61.0% and 61.1%, respectively) than those in the Oncology Embedded Model (53.6%) at baseline (unadjusted analysis; P = .01). We also saw significant differences by model for this domain at the 6-month time point, with patients in the Oncology Embedded Model having the lowest percentage of social/emotional concerns (54.2%), compared with the highest percentage among patients in the Specialized Longitudinal Model (63.6%) and 58.1% among those in the Specialized Consultative Model (P = .028).
For the practical concerns domain, we saw no significant differences across models at either baseline or 6 months. However, within the Oncology Embedded Model, we saw a significant decrease in average percentage of issues reported as a concern from 44.5% at baseline to 33.7% at 6 months (P < .001). As with the quality of care outcome, the differences reported in these domains are based on unadjusted analyses and may be showing differences across groups of patients rather than differences due to the models themselves.
Multivariable analysis
Table 16 presents impact estimates of longitudinal results for the QOL domain scores across the 3 models over time, controlling for confounding variables. Compared with the Specialized Consultative Model, patients in the Specialized Longitudinal Model reported an increase in physical concerns (0.086, P < .001) and emotional concerns (0.055, P = .057), while patients in the Oncology Embedded Model reported an increase in physical concerns (0.037, P = .083) and emotional concerns (0.037, P = .074) and a decrease in practical concerns (−0.068, P = .039). Although several of these differences reach or approach statistical significance, they are small and may not be meaningful to patients.
Summary of QOL results
The results of the bivariate analysis show statistically significant differences in patients' QOL across the care models. For physical concerns, patients in the Oncology Embedded Model reported the lowest average percentage of concerns at baseline; they also reported the lowest average percentage of social/emotional concerns at both baseline and month 6. The multivariable, longitudinal analysis, however, showed a different pattern over time. Compared with the Specialized Consultative Model, patients in the Specialized Longitudinal Model reported a statistically significant increase in physical concerns from baseline to 6 months, and patients in the Oncology Embedded Model reported a marginal increase. Patients in the Specialized Longitudinal Model also reported a marginal increase in social/emotional concerns over time compared with the Specialized Consultative Model.
Self-Efficacy
Self-efficacy was measured at all time points using the Breast Cancer Survivor Self-Efficacy Scale32 and the Chronic Disease Self-Efficacy Scale.33 Patients reported their confidence in managing their health and health care across 8 metrics, on a continuous scale from 1 = not at all confident to 5 = totally confident. The overall mean self-efficacy metric is a single composite score, calculated by dividing the sum of the 8 items (on the 1-5 scale) by the total number of responses. We present statistically significant differences in self-efficacy scores by model at 4 points in time—baseline, week 1, month 3, and month 6—to show change over both the short and long term.
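A minimal sketch of this composite score follows, using hypothetical responses; how skipped items were handled in the study is not stated, so treating the denominator as the number of responses actually given is an assumption.

```python
import numpy as np

# Hypothetical responses from one participant: 8 items rated 1-5,
# with np.nan marking a skipped item (missing-data handling is assumed).
items = np.array([4, 5, 3, 4, np.nan, 5, 4, 3])

# Composite self-efficacy = sum of answered items divided by the number of
# responses given (ie, the mean over non-missing items).
self_efficacy = np.nansum(items) / np.count_nonzero(~np.isnan(items))
print(round(self_efficacy, 2))  # 4.0
```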
Bivariate analysis
In unadjusted bivariate analyses, an interesting pattern of self-efficacy emerged. Within each of the 3 models, self-efficacy improves significantly from baseline (before the initial survivorship care visit) to month 6 (P < .001) (see Figure 2). However, the highest self-efficacy score is seen at week 1, 1 week following the initial survivorship care visit, with scores of 4.26 for the Specialized Consultative and Oncology Embedded Models and 4.25 for the Specialized Longitudinal Model. Scores drop at month 3 and then increase again at month 6, reaching 4.20 for the Oncology Embedded Model, 4.09 for the Specialized Consultative Model, and 4.07 for the Specialized Longitudinal Model.
Unadjusted analyses also show differences approaching significance across patients in the 3 care models with those in the Oncology Embedded Model reporting self-efficacy scores higher than those in either of the 2 Specialized models at baseline (P = .06) and month 6 (P = .08). These findings are based on unadjusted analyses and may be presenting differences across the patients in each model rather than differences due to the model of care.
Multivariable analysis
Table 17 presents results from the longitudinal multivariable analysis showing changes in self-efficacy across the 3 models from baseline to 6 months. Self-efficacy between baseline and week 1 for the Oncology Embedded Model changed at a lower rate compared to the Specialized Consultative Model (−0.120, P = .008). There were no other significant changes over time in self-efficacy across the 3 models.
Summary of self-efficacy results
At the bivariate level, patients' self-efficacy scores differ across models at both baseline and month 6, with patients in the Oncology Embedded Model reporting the highest scores and patients in the Specialized Longitudinal Model reporting the lowest scores at month 6. The longitudinal results show that, while improvement over time occurs within each model from baseline to month 6, differences across models in the magnitude of the 6-month improvement are not significant.
Health Care Utilization
Health care utilization was measured as the frequency of visits to different providers in the prior 3 months. Possible responses were 0, 1, 2 to 3, 4 to 5, or 6+ visits. To create continuous measures, we assigned the midpoints of these categories and calculated separate responses for primary care providers, oncologists, hospital care (inpatient, outpatient, and emergency room care), and other specialists (including mental health and other specialists). At the bivariate level, we report the mean number of visits at baseline, from baseline to 3 months, and from 3 months to 6 months to examine whether usage continuously increased or decreased or changed direction over this period. At the multivariable level, we examine overall change in health care usage from baseline to 6 months by model.
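The category-to-midpoint recoding can be written as a simple lookup, sketched below; the value assigned to the open-ended 6+ category is an assumption for illustration, as the report does not state it.

```python
# Map visit-frequency categories to midpoints to obtain continuous visit
# counts; the value for the open-ended "6+" category is assumed.
VISIT_MIDPOINTS = {
    "0": 0.0,
    "1": 1.0,
    "2-3": 2.5,
    "4-5": 4.5,
    "6+": 6.0,  # assumption for illustration
}

def visits_to_continuous(category: str) -> float:
    """Return the midpoint-based continuous visit count for a reported category."""
    return VISIT_MIDPOINTS[category]

# Example: mean PCP visits over a small hypothetical group of survivors.
responses = ["0", "2-3", "1", "4-5"]
mean_visits = sum(visits_to_continuous(r) for r in responses) / len(responses)
print(mean_visits)  # 2.0
```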
Bivariate analysis
We did not see any significant differences in the number of PCP or oncologist visits across models at any time point (excluding pre-baseline, end-of-treatment to baseline utilization rates) (see Table 18). We did see significant or near-significant differences in the mean number of specialist visits (P = .058) and hospital visits (P = .007) across patients in the 3 care models for the 3- to 6-month period. Patients in the Oncology Embedded Model had the lowest mean number of specialist visits (1.13 visits per patient), compared with a high of 1.71 among patients in the Specialized Longitudinal Model. For hospital visits, patients in the Specialized Longitudinal Model had the fewest visits (0.27) and those in the Oncology Embedded Model the most (0.58). These numbers are quite low across the board and come from unadjusted analyses; they may therefore not be clinically meaningful despite the statistical significance.
Multivariable analysis
Longitudinal effects on health care utilization are presented in Table 19. Compared with the Specialized Consultative Model, patients in the Specialized Longitudinal Model reported a decrease in the average number of visits to PCPs (−0.273, P = .079) and oncology providers (−0.545, P = .017) between week 1 and month 6. There were no other significant changes in health care use over time across the 3 models.
Summary of health care utilization results
Results at the bivariate level show significant differences in the average number of hospital and specialty care visits across the 3 models from month 3 to month 6 only: patients in the Specialized Longitudinal Model had the lowest mean number of hospital visits and the highest mean number of specialty care visits, while patients in the Oncology Embedded Model had the highest mean number of hospital visits and the lowest mean number of specialty care visits. No other indicator or time period showed significant differences across models at the bivariate level. The longitudinal analysis showed that, compared with the reference group (the Specialized Consultative Model), patients in the Specialized Longitudinal Model had a significant decrease in oncology visits and a marginally significant decrease in PCP visits from week 1 to month 6.
Satisfaction With Care Experience
A common PCO reported for health care experiences is satisfaction with care. Patient satisfaction was measured using 3 questions from the Patient Satisfaction with Cancer-Related Care Index34: (1) "Are you satisfied with the care you received?" measured on a continuous 5-point scale from 1 = strongly disagree to 5 = strongly agree; (2) "Would you recommend the care you received?" measured on a continuous 3-point scale where 1 = no and 3 = yes, definitely; and (3) "Please rate your care," measured on a continuous scale where 0 = the worst possible care and 10 = the best possible care.
Bivariate analysis
Across the board, we saw very high levels of satisfaction with little difference across patients in the 3 survivorship care models in unadjusted analyses (see Table 20). At week 1, 95% to 97% of survivors reported they were satisfied with their care and 95% to 96% reported they would recommend their care to family and friends. The mean rating of satisfaction on a scale from 0-10 was 10 for all 3 models. Satisfaction with care declined marginally over time but was still high across the board.
Multivariable analysis
The multivariable longitudinal results for satisfaction are presented in Table 19. For this analysis, patient satisfaction with care was measured by the item "would recommend care to a friend or family member with cancer" (on a 3-point continuous scale). Compared with patients in the Specialized Consultative Model, patients in the Specialized Longitudinal Model showed a 7-percentage-point increase over time in responding "yes, definitely" that they would recommend their care to a friend or family member with cancer (0.071, P = .049). There were no significant longitudinal effects for the Oncology Embedded Model.
Summary of satisfaction with patient experience results
Patients receiving survivorship care are extremely satisfied with their patient experience across all 3 models with no significant differences. Over time, patients' willingness to recommend their survivorship care clinic significantly increased for those receiving care in the Specialized Longitudinal Model compared with the Specialized Consultative Model, which was the reference group.
Discussion
Context for Study Results
Results from the 3 aims of the Evaluating Cancer Survivorship Models of Care project make significant contributions to the field of survivorship care, as well as to the field of patient-centered quality of care and its impact on PCOs. Based on a comprehensive review of the literature, we found that the patient perspective is missing from current assessments of how high-quality survivorship care should be delivered, which may help explain why much of this care does not adequately address patient-identified needs.2,35,36 While a number of measures of cancer care quality exist, most focus on clinical outcomes such as morbidity, mortality, or progression-free survival, rather than on outcomes deemed important by survivors themselves.8,9,35,36 Our study is noteworthy in that it focuses on both the quality of survivorship care and its impact on outcomes through the lens of survivors' preferences and what they have identified as priorities.
The SCQ framework and index, developed for aim 1, address this gap in the field of survivorship care by creating a way to define and measure quality so that patient-centeredness is the priority. Using the SCQI to measure the quality of care provided can help establish a care standard that is aligned and responsive to patient preferences and improve the survivorship care delivery system so that it more directly addresses survivors' needs.
Aim 2 of the study focused on identifying models of survivorship care emerging in the field. Of note is the new approach to defining survivorship care models in our study. While we used the literature and stakeholder input to guide our conceptualization of the models, we relied on the results of the environmental scan to identify the specific characteristics that comprise survivorship care and the combinations of characteristics most commonly in use in the field. The scan identified approach, duration, and access as the most common characteristics defining survivorship care, and our 3 models—the Specialized Consultative Model, the Specialized Longitudinal Model, and the Oncology Embedded Model—as the predominant permutations emerging in the field. Prior to this evidence-based approach, research describing models of survivorship care was based on individual case studies or hypothetical constructs.
Unlike the body of scholarship on survivorship care models and the original taxonomy defining this field,12 we found little evidence that shared-care models are being organized in the field. In addition, very few cancer centers in our environmental scan reported using a formalized risk-stratified approach based on survivors' risk of long-term adverse outcomes associated with their cancer and treatment. We found some evidence of multidisciplinary models, as seen in the use of NP/PAs and nurse navigators to help coordinate care; this is reflected in both of our Specialized models, consultative and longitudinal. The incongruence between how the literature defines models and what we found in the field may reflect the nascency of the field, the lack of evidence demonstrating what works best, and the absence of standards to help guide development in cancer centers.
The main contribution of the study is the CER results (aim 3). The comparison of the 3 models over a 6-month period provides important information both to survivors who are deciding whether and where to seek survivorship care services and to cancer centers that are trying to establish and improve their SCPs. Survivorship care is critical to cancer patients' long-term health and wellness, and the CER design allowed the study team to examine the lasting impact of survivorship care on PCOs over time.
The 3 models of survivorship care examined in the CER study all provide benefits to survivors in different aspects of quality care and QOL, confirming that survivorship care is important and beneficial to patients. Based on the adjusted, multivariable results, it appears that longitudinal models of survivorship, particularly the Oncology Embedded Model, are associated with higher quality of care for survivors over time than the Specialized Consultative Model. These data may suggest the importance patients place on cultivating a trusting, ongoing relationship with a provider who, they feel, delivers customized care responsive to their unique needs, whether that provider is an oncologist, as in the Oncology Embedded Model, or an NP/PA, as in the Specialized Longitudinal Model. This interpretation is supported by the bivariate analyses, which show the highest quality scores for meaningful communication between providers and survivors and for having a medical home.
However, the strong evidence demonstrating the higher quality of the Specialized Consultative Model at week 1 in the cross-sectional adjusted multivariable analyses should not be discounted. These results could be explained by a division of patients based on need.
Patients who have short-term survivorship care needs—focused on getting highly specialized information and resources about survivorship and their posttreatment care (a factor that also gets high marks from survivors)—may rank this type of care highly in the immediate weeks following the end of their treatment regimen, whereas patients with longer-term, more complex issues may place greater value on the care they receive in a longitudinal model, especially the Oncology Embedded Model. This model, in which patients receive survivorship care from their treatment team as part of standard follow-up care, may be better equipped and organized to address chronic or more complicated survivorship issues. These results may also suggest the importance of personalizing survivorship care and considering the unique needs of each patient, which is more easily done in a longitudinal model than in a 1-time consultative visit. While we did not set up our analysis to examine these differences in patient need and complexity, they are worth further exploration.
The QOL results, which demonstrate better outcomes over time for the Specialized Consultative Model, may, in contrast, reflect the importance of care that is dedicated specifically to survivorship and of receiving specialized information, resources, and support to help patients address their survivorship care needs. They may also support the hypothesis that patients with short-term needs are getting those needs met and, as a result, have higher QOL and fewer concerns. If more complex patients are being seen in the longitudinal models, then it is conceivable that their QOL scores would be worse (reporting higher numbers of concerns).
The health care utilization results may also support the notion that the longitudinal survivorship care models are supporting care for more complex survivors. Patients receiving survivorship care in the Specialized Longitudinal Model had fewer PCP visits but more specialty care visits, which could mean they are relying on survivorship clinicians and other specialists for their care rather than transitioning back to their PCPs.
These results also highlight the importance of self-efficacy and of a survivorship care model that supports and improves patients' ability and confidence to effectively manage their survivorship. While the multivariable longitudinal analyses did not show significant differences across models over time, the bivariate results were compelling in that they suggest self-efficacy improves following a survivorship care visit, regardless of model. This is an important finding, as much of survivorship care relies on survivors feeling empowered and able to manage their health and health care effectively.37-39 If the health care system can help support this by providing information and resources, access to psychosocial support, positive alliances between patients and providers, and a system of care that is better coordinated and provides a range of services within a medical home—all aspects of high-quality survivorship care—then survivors will likely feel more in control of their health and have better QOL.
Generalizability of the Findings
The research conducted in this study included survivors of breast, prostate, and colorectal cancers—the 3 cancers that patients most frequently survive—in an effort to increase the generalizability of the study. This patient composition was the goal in all aspects of the study—the qualitative FGs, the national survivorship survey, and the CER. However, the study lacks generalizability in important respects, which is a limitation of the research. In the formative work, the data used to develop the SCQI were based on a study population that was disproportionately White, privately insured, and composed of breast cancer survivors. As a result, the SCQI may not be generalizable to a more diverse patient population. Additional studies should be conducted to determine whether the quality priorities represented in the SCQI are relevant to a more diverse and underserved population of cancer survivors.
Generalizability of the CER results is also limited by our institutional and patient recruitment. Participation by cancer centers was voluntary, and those that agreed to participate may be more invested in their survivorship programs, which could overstate results and limit generalizability. Moreover, the inclusion of only high performers may limit the generalizability of the study results to cancer centers that have established SCPs. We also used a convenience sample of survivors who volunteered to participate in the CER. Survivors willing to participate in the study may have been predisposed to respond better to survivorship care, and study volunteers are more often non-Hispanic White and more educated than the general population. Moreover, despite our efforts to recruit breast, prostate, and colorectal cancer patients, female breast cancer survivors were the majority of respondents in the CER, which limits extrapolation of the results to other cancer survivors. Still, the study included SCPs at 32 institutions across the country that vary in terms of the local and contextual issues involved in cancer care. The identification of models of survivorship across these institutions, based on common core features, increases the ability of this work to be transferred to other cancer centers.
Implementation of Study Results
The results of this study have important implications for both clinical practice and patient decision making. The SCQI is an important tool that cancer centers can use to measure and track the quality of survivorship care to ensure they are providing services that meet the survivorship needs of their patient populations. Moreover, the SCQI can be translated into materials that educate patients on what survivorship care is, how it should be organized, and what they should expect from their providers. This information can help patients advocate for themselves and increase their self-efficacy. The SCQI, however, was developed as a research tool and may be unwieldy in typical clinical practice, where time is short. The index should be shortened and validated for clinical use.
The survivorship care model development and CER results also have important implementation potential. Survivorship is a rapidly growing field of care, and without evidence-based standards, cancer centers often take a trial-and-error approach to organizing this important care. Identifying models and their impact on PCOs can provide helpful guidance to cancer centers as they work to implement and improve survivorship care within the confines of their resources and the needs of their patient populations. For example, a large academic medical center may have the resources and high-risk patient population that warrant implementation of the Specialized Longitudinal Model, whereas a community cancer center may determine that the Specialized Consultative Model is a better fit given its patients and resources, and a small rural hospital without significant specialized care may consider the Oncology Embedded Model the best option. Moreover, understanding the shortcomings of each of these models with respect to PCOs, as reported in this study, can alert cancer centers to how the models can be improved and where they should focus their efforts as they build survivorship services within any given model of care.
Subpopulation Considerations
The study was not powered to conduct subpopulation analyses.
Study Limitations
While the research team made every effort to design a rigorous CER study, there are important limitations to the research. The CER is limited by study design. Given the nature of health system and health services research, it was not possible to conduct a randomized controlled study. Instead, the CER utilizes a quasi-experimental, longitudinal design that compares outcomes across the 3 models of care identified as our comparators, and only in high-performing cancer centers. The longitudinal models control for key demographic and health care system variables and allow for random effects at the institutional level. However, there may be unknown confounders, related to both patient- and institutional-level characteristics, that were not controlled for. In addition, given the observational nature of our data, we can demonstrate only associations between survivorship care model and changes in PCOs rather than causality.
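As a rough, hypothetical illustration of this analytic approach (not the study's actual code), the sketch below fits a linear mixed model with a random intercept for cancer center, a care-model-by-time interaction, and demographic covariates; all variable names and the file layout are assumed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data set: one row per patient per time point, with
# an outcome score, care model, months since baseline, demographic covariates,
# and the enrolling cancer center. File name and columns are assumed.
df = pd.read_csv("cer_long.csv")

# Linear mixed model: random intercept for cancer center; the care-model-by-time
# interaction captures differences in change over time relative to the
# reference care model, adjusting for demographics.
model = smf.mixedlm(
    "quality_score ~ C(care_model) * months + age + C(cancer_site) + C(education)",
    data=df,
    groups=df["cancer_center"],
)
result = model.fit()
print(result.summary())
```

In a specification of this kind, the interaction coefficients play the role of the longitudinal model comparisons reported above, with the Specialized Consultative Model as the reference group.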
The longitudinal design of the study also increases the risk of attrition, which can impact the internal validity of the study. Participants who remained in the study may include only those survivors who are faring better in terms of their health. Over the course of the study, we had an attrition rate of over 20%. To better understand differences in our study population, we compared baseline characteristics between completers and noncompleters and observed that females, breast cancer survivors, and White participants were more likely to complete the study. Thus, it may be more difficult to understand if our results apply to other populations.
In this study we examined multiple outcomes. We consider this to be an exploratory, or hypothesis-generating study, and thus did not correct the P values for multiple comparisons. Thus, some of the observed associations may be due to chance.
The overlap of characteristics across the 3 models of care used in the CER as the study's comparators may also confound differences seen across the models. Future studies should examine individual factors within each of the models, as well as the models in the aggregate, to better understand what aspects of the models are driving quality and effectiveness.
Future Research
The issues related to generalizability of the research are important limitations that future research should address. The SCQI could be an important clinical tool, and it should be studied in a more socioeconomically and racially diverse patient population to determine whether that population's priorities are similar to those used to develop the SCQI. In addition, the SCQI has not been formally validated beyond the CFA used in its initial construction. A validation study should be conducted to ensure measurement accuracy and reliability.
A study examining the impact of survivorship care and comparing different models of survivorship should be conducted in low-resourced and/or safety net hospitals with diverse patient populations. A study in this setting would help determine how survivorship care is organized, which models are most common, whether the models are effective in these institutions and how to improve care and patients' health and wellness.
In addition, this study is highly exploratory, providing only suggestions about differences across the 3 models of survivorship care. Thus, it is difficult and ill-advised to make conclusive recommendations about which model is the "better model." As is the case with most health care, survivorship care must account for the unique context and considerations of patients and institutions, making it imprudent to implement a 1-size-fits-all approach. However, understanding how survivorship care is organized, the quality of care that is provided, and its impact on PCOs can support the process of developing standards and guidelines that identify elements of care important to the overall quality and impact of survivorship. To get to this point, additional research is needed to examine other factors that may be contributing to outcomes, including patient complexity, the number and types of services provided, and who is providing the care.
Finally, we do not include analyses or interpretation of clinically meaningful differences in our outcomes, which may be more useful than statistical differences when considering clinical operational change and patient decision-making. Future studies should test specific findings to generate a larger body of evidence around models of survivorship care and examine both clinically meaningful change as well as statistical difference.
Conclusions
Our results are groundbreaking in that they are the first to define and assess the quality of different types of survivorship care models over time and the first to generate empirical evidence on whether and how well different models of survivorship care affect the PCOs of cancer survivors. In aim 1, we developed a quality tool focusing on survivors' priorities for care. This tool draws on evidence from survivors themselves about what is essential for high-quality survivorship care and can be used both as a clinical aid to help patients advocate for critical services and as a set of standards for cancer centers as they implement SCPs.
In aim 2, we identified models of survivorship care using empirical evidence demonstrating which factors are most important in survivorship care and how they can be organized to provide high-quality care. The resulting 3 models—the Specialized Consultative, Specialized Longitudinal, and Oncology Embedded Models—provide a new, evidence-based way of looking at how survivorship care can and should be organized. The models serve as frameworks for the delivery of care and can be used to help reduce the "trial and error" approach that many cancer centers have taken to developing survivorship care.
In aim 3, we demonstrated the influence survivorship care has on PCOs and the lasting impact it has for survivors. We also identified the need for much improvement in some key areas of quality, such as psychosocial support, coordinating services and transitions in care. These findings can provide guidance to cancer centers on how SCPs generally fall short, and where they should look to shore up services. In addition, the results from the CER provide important information on the effectiveness of different models of survivorship. Our work suggests the longitudinal care models may be better suited to patients with more substantial needs, while the consultative model may work better for patients with short-term needs. These findings can be used to increase the quality of survivorship care by providing “blueprints” that can be used and adapted by different types of institutions depending on their need and resources.
Our study could have a substantial impact on how survivorship care evolves. With the findings from this work as guidance, cancer centers can better target the allocation of resources and more easily implement survivorship programs. This, in turn, will lead to better quality and effectiveness.
References
- 1.
- Miller KD, Siegel RL, Lin CC et al. Cancer treatment and survivorship statistics, 2016. CA Cancer J Clin. 2016;66(4):271-289. [PubMed: 27253694]
- 2.
- Institute of Medicine and National Research Council; Hewitt M, Greenfield S, Stovall E, eds. From Cancer Patient to Cancer Survivor: Lost in Transition. National Academies Press; 2006.
- 3.
- National Cancer Institute at the National Institutes of Health. NCI Dictionary of Cancer Terms. Accessed June 4, 2018. https://www.cancer.gov/publications/dictionaries/cancer-terms
- 4.
- Rechis R, Beckjord EB, Arvey SR, Reynolds KA, Mcgoldrick D. The essential elements of survivorship care: a LIVESTRONG brief. LIVESTRONG. Published December 2011. Accessed August 22, 2016. https://www.livestrong.org/sites/default/files/what-we-do/reports/essentialelementsbrief.pdf
- 5.
- American Society of Clinical Oncology. Survivorship Compendium. Accessed July 19, 2017. https://www.asco.org/practice-guidelines/cancer-care-initiatives/prevention-survivorship/survivorship/survivorship-compendium [Link no longer works.]
- 6.
- Accreditation Committee Clarifications for Standard 3.3 Survivorship Care Plan. American College of Surgeons' Commission on Cancer. Published September 9, 2014. Accessed January 14, 2016. https://www.facs.org/publications/newsletters/coc-source/special-source/standard33 [Link no longer works.]
- 7.
- American College of Surgeons' Commission on Cancer. Cancer program standards 2012: ensuring patient-centered care. Accessed December 12, 2012. http://www.facs.org/cancer/coc/cocprogramstandards2012.pdf
- 8.
- Jacobs LA, Shulman LN. Follow-up care of cancer survivors: challenges and solutions. Lancet Oncol. 2017;18(1):e19-e29. [PubMed: 28049574]
- 9.
- Mayer DK, Shapiro CL, Jacobson P, McCabe MS. Assuring Quality Survivorship Care: We've Only Just Begun. American Society of Clinical Oncology 2015 Educational Book. American Society of Clinical Oncology; 2015:e583-e591. http://meetinglibrary.asco.org/record/104147/edbook#fulltext [PubMed: 25993226]
- 10.
- Alfano CM, Smith T, de Moor JS, et al. An action plan for translating cancer survivorship research into care. J Natl Cancer Inst. 2014;106(11):dju287. [PMC free article: PMC4184343] [PubMed: 25249551]
- 11.
- McCabe MS, Bhatia S, Oeffinger KC, et al. American Society of Clinical Oncology statement: achieving high-quality cancer survivorship care. J Clin Oncol. 2013;31(5):631-640. [PMC free article: PMC4878049] [PubMed: 23295805]
- 12.
- Oeffinger KC, McCabe MS. Models for delivering survivorship care. J Clin Oncol. 2006;24(32):5117-5124. [PubMed: 17093273]
- 13.
- Oeffinger KC, Argenbright KE, Levitt GA, et al. Models of Cancer Survivorship Health Care: Moving Forward. 2014 ASCO Educational Book. American Society of Clinical Oncology; 2014. http://meetinglibrary.asco.org/record/88976/edbook#fulltext [PubMed: 24857078]
- 14.
- McCabe MS, Partridge AH, Grunfeld E, Hudson MM. Risk-based health care, the cancer survivor, the oncologist, and the primary care physician. Semin Oncol. 2013;40(6):804-812. [PMC free article: PMC4465133] [PubMed: 24331199]
- 15.
- O'Malley D, Hudson S, Nekhlyudov L, et al. Learning the landscape: implementation challenges of primary care innovators around cancer survivorship care. J Cancer Surviv. 2017;11(1):13-23. [PMC free article: PMC5145775] [PubMed: 27277895]
- 16.
- Viswanathan M, Halpern M, Evans TS, et al. Models of survivorship care. Technical Brief No. 14. 2014. Agency for Healthcare Research and Quality (AHRQ). Accessed August 22, 2016. https://www.ncbi.nlm.nih.gov/books/NBK196214/pdf/Bookshelf_NBK196214.pdf [PubMed: 24783311]
- 17.
- Jacobsen PB, Rowland JH, Paskett ED, et al. Key gaps in cancer survivorship research: findings from the American Society of Clinical Oncology Survey. JCO Oncol Pract. 2016;12(3):190-193. [PubMed: 26907451]
- 18.
- Halpern MT, Argenbright KE. Evaluation of effectiveness of survivorship programmes: how to measure success?. Lancet Oncol. 2017;18(1):e51-e59. [PubMed: 28049577]
- 19.
- Halpern MT, Viswanathan M, Evans TS, et al. Models of cancer survivorship care: overview and summary of current evidence. JCO Oncol Pract. 2014;11(1):e19-e27. [PubMed: 25205779]
- 20.
- Malin JL, Schneider EC, Epstein AM, et al. Results of the National Initiative for Cancer Care Quality: how can we improve the quality of cancer care in the United States? J Clin Oncol. 2006;24(4):626-634. [PubMed: 16401682]
- 21.
- Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press; 2001. [PubMed: 25057539]
- 22.
- Epstein RM, Street RL. The values and value of patient-centered care. Ann Fam Med. 2011;9(2):100-103. [PMC free article: PMC3056855] [PubMed: 21403134]
- 23.
- American Society of Clinical Oncology. Quality Oncology Practice Initiative (QOPI) measures overview. Accessed July 19, 2017. http://www.instituteforquality.org/qopi/measures
- 24.
- Basch E, Abernethy AP, Mullins CD, et al. Recommendations for incorporating patient-reported outcomes into clinical comparative effectiveness research in adult oncology. J Clin Oncol. 2012;30(34):4249-4255. [PubMed: 23071244]
- 25.
- Garcia SF, Cella D, Clauser SB, et al. Standardizing patient-reported outcomes assessment in cancer clinical trials: a patient-reported outcomes measurement information system initiative. J Clin Oncol. 2007;25(32):5106-5112. [PubMed: 17991929]
- 26.
- Ouwens M, Hermens R, Hulscher M, et al. Development of indicators for patient-centered cancer care. Support Care Cancer. 2010;18(1):121. [PMC free article: PMC2778774] [PubMed: 19387693]
- 27.
- National Quality Forum. Towards a comprehensive cancer measure set: value-based episodes of care workshop summary. 2008. Accessed July 19, 2017. https://www.qualityforum.org/Projects/c-d/Cancer_Measure_Set_Value-Based_Episodes_of_Care/Comprehensive_Cancer_Measure_Set Value-Based_Episodes_of_Care.aspx [Link no longer works.]
- 28.
- Aiello Bowles EJ, Tuzzio L, Wiese CJ, et al. Understanding high-quality cancer care. Cancer. 2008;112(4):934-942. [PubMed: 18181099]
- 29.
- Twisk J, Boer MD, Vente WD, Heymans M. Multiple imputation of missing values was not necessary before performing a longitudinal mixed-model analysis. J Clin Epidemiol. 2013;66(9):1022-1028. doi:10.1016/j.jclinepi.2013.03.017. [PubMed: 23790725] [CrossRef]
- 30.
- Azuero A, Su X, McNees P, Meneses K. A Revision of the Quality of Life-Breast Cancer Survivors (QOL-BCS) Instrument. Res Nurs Health. 2013;36(4):423-434. doi:10.1002/nur.21541 [PMC free article: PMC3796941] [PubMed: 23512217] [CrossRef]
- 31.
- Schipper H, Clinch J, McMurray A, Levitt M. Measuring the quality of life of cancer patients: the Functional Living Index-Cancer: development and validation. J Clin Oncol. 1984;2(5):472-483. [PubMed: 6374052]
- 32.
- Beckham JC, Burker EJ, Burker EJ, et al. Self-efficacy and adjustment in cancer patients: a preliminary report. Behav Med. 1997;23(3):138-142. [PubMed: 9397286]
- 33.
- Lorig K, Stewart A, Ritter P, et al. Outcome Measures for Health Education and other Healthcare Interventions. Sage Publications; 1996:24-25, 41-45.
- 34.
- Jean-Pierre P, Cheng Y, Paskett E, et al. Item response theory analysis of the Patient Satisfaction with Cancer Related Care Measure: a psychometric investigation in a multicultural sample of 1296 participants. Support Care Cancer. 2014;22(8):2229-2240. [PMC free article: PMC4256077] [PubMed: 24664356]
- 35.
- National Coalition for Cancer Survivorship. Our history. Accessed December 12, 2012. https://canceradvocacy.org/about/our-history/
- 36.
- Wu AW, Snyder C, Clancy CM, Steinwachs DM. Adding the patient perspective to comparative effectiveness research. Health Aff (Millwood). 2010;29(10):1863-1871. [PubMed: 20921487]
- 37.
- McCorkle R, Ercolano E, Lazenby M, et al. Self-management: enabling and empowering patients living with cancer as a chronic illness. CA Cancer J Clin. 2011;61(1):50-62. [PMC free article: PMC3058905] [PubMed: 21205833]
- 38.
- Loh SY, Packer T, Chinna K, Quek KF. Effectiveness of a patient self-management programme for breast cancer as a chronic illness: a non-randomised controlled clinical trial. J Cancer Surviv. 2013;7(3):331-342. [PubMed: 23519654]
- 39.
- Schulman-Green D, Jaser S, Martin F, et al. Processes of self-management in chronic illness. J Nurs Scholarsh. 2012;44(2):136-144. [PMC free article: PMC3366425] [PubMed: 22551013]
Acknowledgements
We would like to acknowledge the hard work and support of the 32 cancer centers who agreed to participate in this study. Their contribution was critical to the success of this work. In addition, we would like to thank the cancer survivors who gave so willingly of their time. Their perspective on cancer survivorship care is invaluable to ensuring the health care system understands and meets their needs.
Research reported in this report was [partially] funded through a Patient-Centered Outcomes Research Institute® (PCORI®) Award (#IH-12-11-5255). Further information is available at: https://www.pcori.org/research-results/2013/evaluating-different-types-cancer-survivorship-care
Appendices
Appendix A.
Methodology and Results of EFA and CFA Used to Develop the SCQI (PDF, 344K)
Diagram 1. Path Diagram of Survivorship Care Quality Index Factors and Items (PDF, 147K)
Table 1. Results of Exploratory Factor Analysis by Component (PDF, 228K)
Table 2. Exploratory Factor Analysis Correlation Matrix (PDF, 173K)
Table 3. Confirmatory factor analysis model fit statistics (PDF, 172K)
Table 4. Confirmatory Factor Analysis Final Model by Quality Component (PDF, 194K)
Table 5. 14 covariance between items for final CFA model (PDF, 178K)
Appendix B.
High-Performing CoC-Accredited Cancer Centers Participating in CER: Name and Location (PDF, 210K)
Appendix C.
Difference in Baseline Demographics Between Final Sample and Lost to Follow-Up (PDF, 245K)
Appendix D.
Methodology and Results of EFA and CFA Used to Develop the SCQI (PDF, 329K)
Diagram 1. Path Diagram of Survivorship Care Quality Index Factors and Items (PDF, 147K)
Table 1. Results of Exploratory Factor Analysis by Component (PDF, 228K)
Table 2. Exploratory Factor Analysis Correlation Matrix (PDF, 173K)
Table 3. Confirmatory factor analysis model fit statistics (PDF, 172K)
Table 4. Confirmatory Factor Analysis Final Model by Quality Component (PDF, 194K)
Table 5. 14 covariance between items for final CFA model (PDF, 178K)
Appendix E.
32 High-Performing CoC-Accredited Cancer Centers Participating in CER: Name and Location (PDF, 210K)
Appendix F.
Difference in Baseline Demographics Between Final Sample and Lost to Follow-up (PDF, 246K)
Appendix G.
Week 1 Postbaseline Survey, Including the SCQI (PDF, 238K)
Appendix H.
Quality of Care Bivariate Data by Domain, With Individual Metrics—Week 1 and Month 6 (PDF, 201K)
Appendix I.
QOL Bivariate Data by Domain, With Individual Metrics—Baseline and Month 6 (PDF, 222K)
Footnotes
*Note: Because our research questions focused on longitudinal differences across models, we do not systematically present multivariable analyses at the cross-sectional level. However, because the bivariate analyses suggested consistently lower quality for the Oncology Embedded Model at week one, we conducted a multivariable model to see if this trend in the data persisted even when controlling for key variables.
*Note: Based on feedback from our AB, 2 items from the EFA/CFA were merged because of their close similarity in meaning. The items "preference for transitioning care to PCP is discussed" and "preference to remain under cancer clinician until ready to transfer care is discussed" were combined in the SCQI as "clinician discussed with patient about whether ready to transition care to primary care provider." This item is included in all SCQI analysis. In addition, the AB recommended the addition of 4 items to the CER patient survey after completion of the EFA/CFA analysis. These items are not included in the SCQI analysis or composite scores and were examined as separate outcomes. Thus, the CER patient survey includes 46 items across the 9 factors of quality, while the SCQI comprises 41 items across the 9 factors of quality.
Figures
Figure 2Self-Efficacy Over Time (Baseline, Week 1, Month 3, and Month 6), by Model (n = 777)
Mean score is a composite of 8 metrics, measured on a scale from 1 to 5, where 1 = not at all confident and 5 = totally confident.
*At baseline and month 6, significant differences exist across models in mean score of self-efficacy (1-5).
***Within each model, significant differences exist between baseline and month 6 at P < .001.
Tables
Table 1CER Primary Outcomes, Definitions, and Measurement
Outcome | Source instrument(s) | Definition and measurement of final metrics or score |
---|---|---|
Quality of care | Cancer survivor quality of care indexa | Definition: consists of 3 domains, 9 factors, and 41 total metrics. Informed & Grounded Patient Domain: emotional and social support (4 metrics); information and resources on survivorship care (4 metrics). Patient-Provider Alliance Domain: empowered and engaged patients (3 metrics); supportive and prepared clinicians (4 metrics); meaningful patient-clinician communication (5 metrics). Supportive Health & Wellness System Domain: care coordination and transitions in care (8 metrics); provision of full spectrum of care (5 metrics); practical life support (3 metrics); having a medical home (5 metrics). Measurement: 9 separate scales were created representing the 9 factors in the SCQI. We calculated the percentage of participants responding "yes, definitely" for each line item and, for each factor, summed the number of "yes, definitely" responses over the total number of items in the factor to calculate an overall 0-1 value (as a continuous metric) that could then be presented as a percentage. |
Cancer-related quality of life | QOL-BCS instrument,b The Functional Living Index-Cancerc | Definition: consists of 3 domains and 41 total metrics. Physical concerns (20 metrics); social and emotional concerns (14 metrics); practical concerns (7 metrics). Measurement: QOL responses were categorized as any concern/no concern for each line item within each domain. To focus on the performance of care models, we created a measure of "top concerns," identifying for each domain the concerns reported by more than 40% of the study population at baseline. For the physical concerns domain there were 8 top concerns, for the emotional/social concerns domain there were 6 top concerns, and for the practical concerns domain there were 2 top concerns. The overall domain scores for QOL thus indicate the average percentage of top concerns reported, ranging from 0-1 on a continuous scale and presented as a percentage. |
Self-efficacy | BCSESd Chronic Disease Self-Efficacy Scalee | Definition: consists of 8 self-efficacy items that ask participants to rate their confidence in managing their health and health care (eg, dealing with the lasting effects of cancer or navigating the health care system to address subsequent needs). Measurement: overall self-efficacy score was calculated as a sum of the 8 individual line items in the scale with response options ranging from 1 (not at all confident) to 5 (totally confident) on a continuous scale, and mean was calculated by dividing by the number of total responses. |
Appropriate use of health care | N/A | Definition: Frequency of visits to different health care providers is used as a proxy for appropriate health care usef,g Measurement: Participants selected frequency of visits to specific types of providers over the prior 3 mo from categories of none, 1, 2-3, 4-5, or ≥6 visits. To create continuous measures, we assigned the midpoints of these categories and calculated separate responses for primary care providers, oncologists, hospital care (inpatient, outpatient, and emergency room care), and other specialists (including mental health and other specialists). |
Patient experience/satisfaction | PSCCh | Definition: Consists of 3 items Item 1: Are you satisfied with the care you received? Item 2: Would you recommend the care you received? Item 3: Please rate your care Measurement: Each item is reported separately Item 1: Measured on a continuous 3-point scale where 1 = strongly disagree/disagree and 3 = agree/strongly agree Item 2: Measured on a continuous 3-point scale where 1 = no, definitely not and 3 = yes, definitely Item 3: Measured on a 0 to 10 continuous scale, where 0 = the worst possible care and 10 = the best possible care |
Abbreviations: BCSES, Breast Cancer Survivor Self-Efficacy Scale; CER, comparative effectiveness research; N/A, not applicable; PSCC, Patient Satisfaction with Cancer-Related Care; QOL-BCS, Quality of Life-Breast Cancer Survivors; SCQI, Survivor Quality of Care Index.
Notes:
aValidated as part of phase 1 of the study
b Azuero A, Su X, McNees P, Meneses K. A revision of the Quality of Life-Breast Cancer Survivors (QOL-BCS) instrument. Res Nurs Health. 2013;36(4):423-434. doi:10.1002/nur.21541 [PMC free article: PMC3796941] [PubMed: 23512217] [CrossRef]
cSchipper H, Clinch J, McMurray A, Levitt M. Measuring the quality of life of cancer patients: the Functional Living Index-Cancer: development and validation. J Clin Oncol. 1984;2(5):472-483 [PubMed: 6374052].
d Beckham JC, Burker EJ, Burker EJ, Feldman ME, Costakis MJ. Self-efficacy and adjustment in cancer patients: a preliminary report. Behav Med. 1997;23(3):138-142. [PubMed: 9397286]
eLorig K, Stewart A, Ritter P, González V, Laurent D, Lynch J. Outcome Measures for Health Education and other Healthcare Interventions. Sage Publications; 1996:24-25, 41-45.
f Earle CC, Neville BA. Under use of necessary care among cancer survivors. Cancer. 2004;101:1712-1719. doi:10.1002/cncr.20560 [PubMed: 15386307] [CrossRef]
gRajotte EJ, Heron L, Syrjala KL, Baker KS, Health care utilization among long-term cancer survivors. J Clin Oncol. 2016;34(3_suppl):22.
hJean-Pierre P, Cheng Y, Paskett E, Shao C, Fiscella K, Winters P. Item response theory analysis of the Patient Satisfaction with Cancer Related Care Measure: a psychometric investigation in a multicultural sample of 1296 participants. Support Care Cancer. 2014;22(8):2229-2240 [PMC free article: PMC4256077] [PubMed: 24664356].
Diagram 1Data Collection Time Points
Baseline (2 time points) | Follow-up (2 time points) | ||
---|---|---|---|
T0a: Presurvivorship visit: clinics had survivors complete a baseline survey to assess:
| T0b: 1-wk postvisit: survivors completed a baseline survey to assess:
| T1: 3 mo postvisit: survivors completed a follow-up telephone survey at 3 mo postvisit to assess:
| T2: 6 mo postvisit: survivors completed a follow-up telephone survey at 6 mo postvisit to assess:
|
Presurvivorship visit | Postsurvivorship visit |
Abbreviations: HC, health care; QOL, quality of life; SCQI, Survivorship Care Quality Index.
Table 2Characteristics of Study Participants Involved in Development of the Survivorship Care Quality Framework and Indexa
Characteristic | Focus group discussion N = 170 | National survey N = 1278 |
---|---|---|
Age, y, mean | 61 | 68 |
Gender, % female | 58 | 57 |
Cancer site, % | ||
Breast | 42 | 51 |
Prostate | 38 | 35 |
Colorectal | 23 | 14 |
Race | ||
White | 75 | 81 |
Black | 14 | 9 |
Other | − | − |
Hispanic ethnicity, % yes | 3 | 5 |
Insurance, % yes | 97 | 92 |
Private | − | 27 |
Medicare/Medicare supplement | − | 54 |
Medicaid | − | 2 |
Other form | − | 11 |
aPower estimations were calculated varying DE from 1.0 to 2.0, which is the standard range when considering clustered effects. That translates to an intraclass correlation coefficient between 0.005 and 0.034, which are typical in clinic studies.
Table 3Focus Group and Survivorship Survey Participant Eligibility and Enrollment
Research component | Planned sample size | Total screened | Total eligible | Total enrolled |
---|---|---|---|---|
Focus group | 150 (participants) | N/A | N/A | 170 |
Survivor quality framework survey | 1325 | 7566 | 1327 | 1327 (1278)a |
Abbreviation: N/A, not applicable.
aTotal enrolled includes GfK sample, cognitive interviews, time testers, and pilot testers. All analyses were conducted on a sample size of 1278.
Table 4Survivorship Care Quality Index Final Metrics for Comparative Effectiveness Researcha
Survivorship Care Quality Index |
---|
Domains, factors, and Individual Items |
Domain A: Informed and Grounded Patient |
Factor 1: Emotional and social support
|
Factor 2: Information and resources about expectations posttreatment
|
Domain B: The Patient-Provider Alliance |
Factor 3: Empowered and engaged patients
|
Factor 4: Supportive and prepared clinicians
|
Factor 5: Meaningful communication between patients and clinicians
|
Domain C: Supportive Health and Wellness System |
Factor 6: Care coordination and transitions across care providers
|
Factor 7: Provision of full spectrum of care
|
Factor 8: Health insurance issues/practical life support
|
Factor 9: Providing a medical home
|
Total = 41 items |
Abbreviations: AB, advisory board; EHR, electronic health record; FU, follow-up; OTC, over the counter; PCP, primary care provider; Rx, prescription.
aBased on feedback from our AB, 4 additional items were added to the CER patient survey after completion of the EFA/CFA analysis. These items are not included in the SCQI analysis or composite scores and will be examined as separate outcomes. Thus, the CER patient survey includes 46 items across the 9 factors of quality, while the SCQI comprises the 42 items resulting from the EFA/CFA analysis.
Table 5Environmental Scan Institutional Eligibility and Enrollment
Research component | Planned sample size | Total screened | Total eligible | Total enrolled |
---|---|---|---|---|
Institutional assessment/environmental scan | N/A (survey sent to 1407 institutions) | 628 | 297 | 232 |
Abbreviation: N/A, not applicable.
Table 6. Frequency Distribution of Key Components of Survivorship Care
Key components of care | All Institutions, % | High performers, % |
---|---|---|
Approach to care | (N = 189) | (N = 112) |
Specialized | 42.9 | 56.3 |
Embedded | 57.1 | 43.8 |
Frequency of care | (N = 171) | (N = 110) |
Consult | 29.2 | 22.7 |
Ongoing | 39.9 | 43.6 |
Ad hoc | 22.2 | 21.8 |
Access to care | (N = 219) | (N = 122) |
Direct | 43.4 | 53.3 |
Indirect | 45.2 | 39.3 |
Mixed | 11.4 | 7.4 |
Table 7. Co-occurrence Matrix of Key Components of Survivorship Care
Key components of survivorship care | All institutions (N = 142) | High performers (N = 96) | ||
---|---|---|---|---|
No. | % | No. | % | |
Specialized, consultative, direct access | 19 | 13% | 12 | 13% |
Specialized, consultative, indirect access | 14 | 10% | 7 | 7% |
Specialized, ad hoc, direct access | 7 | 5% | 6 | 6% |
Specialized, ad hoc, indirect access | 5 | 4% | 3 | 3% |
Specialized, ongoing, direct access | 17 | 12% | 15 | 16% |
Specialized, ongoing, indirect access | 9 | 6% | 7 | 7% |
Embedded, consultative, direct access | 2 | 1% | 1 | 1% |
Embedded, consultative, indirect access | 11 | 8% | 4 | 4% |
Embedded, ad hoc, direct access | 11 | 8% | 7 | 7% |
Embedded, ad hoc, indirect access | 15 | 11% | 8 | 8% |
Embedded, ongoing, direct access | 23 | 16% | 21 | 22% |
Embedded, ongoing, indirect access | 9 | 6% | 5 | 5% |
Table 8. Descriptions of Models of Survivorship Care
Model Name | Core Components | Adaptable Components |
---|---|---|
Model 1 Specialized consultative survivorship care with direct access |
|
|
Model 2 Specialized longitudinal survivorship care with direct access |
|
|
Model 3 Oncology embedded survivorship care with varied access |
|
|
Abbreviations: NP, nurse practitioner; PA, physician assistant; PCP, primary care provider; SCP, Survivorship Care Plan.
Table 9. CER Participant Demographics and Cancer-Related Characteristics (N = 991)a
Characteristic | Specialized consultative | Specialized longitudinal | Oncology embedded | Total No. (%) | P value |
---|---|---|---|---|---|
No. (%) at baseline | 412 (41.6) | 205 (20.7) | 374 (37.7) | 991 | |
No. (%) at 6 mo | 323 (41.6) | 159 (20.5) | 295 (38.0) | 777 (78.4) | |
Age, mean (SD) | 59.4 (11.6) | 59.7 (12.0) | 61.8 (11.0) | 60.3 (11.6) | .009 |
Time since diagnosis, mean (SD), mo | 11.8 (17.9) | 18.4 (32.6) | 18.3 (72.4) | 15.6 (48.3) | .112 |
Gender, No. (%) female | 354 (85.9) | 198 (96.6) | 305 (81.6) | 857 (86) | <.001 |
Cancer site | <.001 | ||||
Breast | 341 (82.8) | 195 (95.1) | 295 (78.9) | 831 (84) | |
Prostate | 48 (11.7) | 4 (2) | 60 (16) | 112 (11) | |
Colorectal | 23 (5.6) | 6 (2.9) | 19 (5.1) | 48 (5) | |
Race | .001 | ||||
White | 332 (80.6) | 193 (94.2) | 313 (83.7) | 838 (85) | |
Black | 46 (11.2) | 4 (2) | 34 (9.1) | 84 (8) | |
Other | 4 (1) | 1 (0.5) | 6 (1.6) | 58 (6) | |
Hispanic Ethnicity, No. (%) yes | 11 (2.7) | 2 (1) | 12 (3.2) | 25 (3) | .085 |
Education | <.001 | ||||
High school/GED grad or less | 93 (22.6) | 51 (24.9) | 131 (35) | 275 (28) | |
Some college/2-y degree | 127 (30.8) | 74 (36.1) | 126 (33.7) | 327 (33) | |
4-y college grad | 118 (28.6) | 51 (24.9) | 67 (17.9) | 236 (24) | |
Grad school degree | 74 (18) | 28 (13.7) | 47 (12.6) | 149 (15) | |
Income | <.001 | ||||
<$25 000 | 78 (18.9) | 25 (12.2) | 75 (20.1) | 178 (18) | |
$25 000-$74 999 | 147 (35.7) | 109 (53.2) | 163 (43.6) | 419 (42) | |
≥$75 000 | 145 (35.2) | 57 (27.8) | 94 (25.1) | 296 (30) | |
Marital status | .337 | ||||
Married/living with partner | 266 (64.6) | 137 (66.8) | 248 (66.3) | 651 (66) | |
Widowed | 113 (27.4) | 57 (27.8) | 95 (25.4) | 265 (27) | |
Divorced/separated | 33 (8) | 9 (4.4) | 29 (7.8) | 71 (7) | |
Employment status | .232 | ||||
Working (as paid employee) | 197 (47.8) | 96 (46.8) | 149 (39.8) | 442 (45) | |
Self-employed | 43 (10.4) | 20 (9.8) | 50 (13.4) | 113 (11) | |
Not working, retired, disabled | 171 (41.5) | 88 (42.9) | 175 (46.8) | 434 (44) | |
Stage of cancer at diagnosis | .153 | ||||
Stage 0 | 37 (9) | 11 (5.4) | 46 (12.3) | 94 (9) | |
Stage I | 146 (35.4) | 78 (38.1) | 133 (35.6) | 357 (36) | |
Stage II | 131 (31.8) | 66 (32.2) | 92 (24.6) | 289 (29) | |
Stage III | 48 (11.7) | 20 (9.8) | 37 (9.9) | 105 (11) | |
Stage IV | 5 (1.2) | 2 (1) | 4 (1.1) | 11 (1) | |
Don't know | 40 (9.7) | 25 (12.2) | 55 (14.7) | 120 (12) | |
Treatments completedb | |||||
Surgery | 349 (84.7) | 198 (96.6) | 303 (81) | 850 (86) | <.001 |
Chemotherapy, targeted therapy or immunotherapy | 202 (49) | 94 (45.9) | 138 (36.9) | 434 (44) | .002 |
Radiation | 325 (78.9) | 139 (67.8) | 315 (84.2) | 779 (79) | <.001 |
Other | 9 (2.2) | 13 (6.3) | 11 (2.9) | 33 (3) | .022 |
Receiving hormone therapy | .534 | ||||
No (have not and will not) | 126 (30.9) | 56 (27.5) | 125 (33.6) | 307 (31) | |
Currently receiving | 241 (59.1) | 125 (61.3) | 203 (54.6) | 569 (58) | |
Completed/will start | 32 (7.8) | 19 (9.3) | 31 (8.3) | 82 (8) | |
Recurrence or diagnosis of new cancer, No. (%) | |||||
At 3 mo | 5 (1.4) | 0 (0) | 5 (1.5) | 10 (1.1) | .279 |
At 6 mo | 6 (1.8) | 0 (0) | 8 (2.6) | 14 (1.8) | .123 |
Change in employment | |||||
At 3 mo | 19 (5.2) | 11 (6.3) | 22 (6.8) | 52 (6.0) | .674
At 6 mo | 14 (4.3) | 8 (5.0) | 13 (4.4) | 35 (4.5) | .933
Abbreviation: CER, comparative effectiveness research.
aAll demographic data are reported at baseline. Percentages may not sum to 100% due to missing values.
bSurvivors can select all responses that apply.
Table 10. CER Survivor Eligibility and Enrollment
Research component | Planned sample size | Total screened | Total eligible | Total enrolled |
---|---|---|---|---|
Institutional enrollment | 45 | 232 | 55 | 32 |
Survivor enrollment | 1150a | 1190b | 1189 | 991 |
Clinician enrollment | N/A | N/A | N/A | 101 |
Abbreviations: CER, comparative effectiveness research; N/A, not applicable.
aThe planned sample size for survivor enrollment is based on our June 2016 modification approved by PCORI.
bThe number of participants screened for CER enrollment includes 1 lung cancer patient who was screened into the study by a site but was later deemed ineligible by the GW research team.
Table 11. Total Survivors Enrolled Over Time by Model
Specialized consultative | Specialized longitudinal | Oncology embedded | Total enrolled |
---|---|---|---|---|
Baseline survey | 412 | 205 | 374 | 991 |
Survey at 1 wk | 393 | 187 | 346 | 926 |
Survey at 3 mo | 365 | 175 | 328 | 868 |
Survey at 6 mo | 323 | 159 | 295 | 777 |
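Retention at each wave can be read directly from the totals in Table 11; for example, the 777 survivors remaining at 6 months represent 78.4% of the baseline sample, matching the percentage reported in Table 9. A minimal check of that arithmetic:

```python
# Retention by survey wave, computed from the totals in Table 11.
enrolled = {"baseline": 991, "1 wk": 926, "3 mo": 868, "6 mo": 777}
baseline_n = enrolled["baseline"]
for wave, n in enrolled.items():
    print(f"{wave}: {n} survivors ({100 * n / baseline_n:.1f}% of baseline)")
# The 6-mo line prints 78.4%, the retention figure shown in Table 9.
```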
Table 12. Lost-to-Follow-up Across All CER Participants Over Timea
Reason | Baseline | 1 wk | 3 mo | 6 mo | Total |
---|---|---|---|---|---|
Unknown | 0 | 5 | 6 | 3 | 4b |
Unable to contact in allotted time | 0 | 6 | 9 | 30 | 45 |
Phone no longer working | 0 | 3 | 2 | 2 | 7 |
Secondary cancer diagnosis | 0 | 1 | 5 | 10 | 16 |
Asked to be removed (not interested) | 0 | 12 | 9 | 3 | 24 |
Called 10× or more | 0 | 39 | 21 | 36 | 96 |
Busy and did not receive gift card | 0 | 1 | 1 | 0 | 2 |
Transferred survivorship services to another facility that is not a “site” | 0 | 0 | 1 | 0 | 1 |
Not able to get a signed clinician consent form | 0 | 0 | 1 | 0 | 1 |
Site no longer participating | 0 | 3 | 4 | 1 | 8 |
Lung cancer | 1 | 0 | 0 | 0 | 1 |
Death | 0 | 0 | 2 | 0 | 2 |
Abbreviation: CER, comparative effectiveness research.
aReasons for lost-to-follow-up were not tracked by model.
bDuring data cleaning, we discovered record-keeping inconsistencies: 10 records marked as lost to follow-up for unknown reasons were in fact retained in the data set. These 10 records remain counted in the data collection periods where they were recorded but are excluded from the total; for example, the “Unknown” row shows 5 + 6 + 3 = 14 recorded instances but a total of 4 once the 10 misclassified records are removed.
Table 13. Quality of Care Factors at 1 Week (n = 926) and 6 Months (n = 777) After First Survivorship Visit, by Model
Survivorship care quality factors | % Responding “yes, definitely”a | ||||
---|---|---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | P value | ||
Domain A: Informed and Grounded Patient | |||||
Emotional and social support, 4 metrics | 1 wk | 33.1 | 30.7 | 20.2 | <.001 |
6 mo | 14.8*** | 18.9*** | 12.6*** | .029 | |
Information and resources, 4 metrics | 1 wk | 75.5 | 73.9 | 65.9 | <.001 |
6 mo | 60.2*** | 59.6*** | 60.6** | .951 | |
Domain B: The Patient-Provider Alliance | |||||
Empowered and engaged patients, 3 metrics | 1 wk | 59.0 | 60.7 | 50.5 | <.001 |
6 mo | 57.6** | 59.8** | 59.3 | .676 | |
Supportive and prepared clinicians, 4 metrics | 1 wk | 59.9 | 58.7 | 56.4 | .179 |
6 mo | 60.2 | 62.3 | 59.1 | .417 | |
Meaningful patient-provider communication, 5 metrics | 1 wk | 88.4 | 89.3 | 84.2 | .003 |
6 mo | 86.3 | 88.4 | 87.5** | .566 | |
Domain C: Supportive Health and Wellness System | |||||
Care coordination and transitions, 8 metrics | 1 wk | 53.1 | 54.8 | 50.3 | .008 |
6 mo | 57.3*** | 59.1** | 57.5*** | .48 | |
Provision of full spectrum of care, 5 metrics | 1 wk | 45.4 | 44.8 | 34.6 | <.001 |
6 mo | 41.4*** | 45.9 | 39.7 | .061 | |
Practical life support/health insurance issues (n = 315b), 3 metrics | 1 wk | 12 | 9.3 | 9.4 | .327 |
6 mo | 8** | 13.2 | 14.8** | .009 | |
Having a medical home, 5 metrics | 1 wk | 77.0 | 75.7 | 75.8 | .582 |
6 mo | 82.3 | 84.0 | 82.9 | .745 |
aWithin each model, significant differences exist between week 1 and month 6 where denoted with asterisks, as follows: *P < .01; **P < .05; ***P < .001.
bThe sample size for this factor is lower because many participants reported “not applicable” (N/A) to questions regarding practical life support/health insurance issues.
Table 14. Longitudinal Results of Quality of Care Scores Between Week 1 (n = 926) and Month 6 (n = 777), by Modela
Quality of care | Impact estimate by model over time between week 1 and month 6 (95% CI) | ||
---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | |
Domain A: Informed & Grounded Patient | |||
Emotional and social support | (Reference) | 0.081** (0.017-0.146) | 0.118*** (0.063-0.173) |
Information and resources | (Reference) | −0.008 (−0.064 to 0.079) | 0.091** (0.030-0.151) |
Domain B: The Patient-Provider Alliance | |||
Empowered and engaged patients | (Reference) | −0.012 (−0.081 to 0.056) | 0.091** (0.033-0.149) |
Supportive and prepared clinicians | (Reference) | 0.031 (−0.023 to 0.085) | 0.014 (−0.032 to 0.060) |
Meaningful patient-clinician communication | (Reference) | 0.019 (−0.027 to 0.065) | 0.053** (0.014-0.092) |
Domain C: Supportive Health and Wellness System | |||
Care coordination & transitions | (Reference) | 0.001 (−0.037 to 0.040) | 0.023 (−0.009 to 0.056) |
Provision of full spectrum of care | (Reference) | 0.055* (−0.003 to 0.114) | 0.105*** (0.056-0.155) |
Practical life support/health insurance issues | (Reference) | 0.082** (0.014-0.149) | 0.086** (0.028-0.143) |
Having a medical home | (Reference) | −0.001 (−0.047 to 0.045) | 0.009 (−0.030 to 0.048) |
aModels control for age, gender, race, ethnicity, marital status, education, income, primary provider of survivorship care, and time since diagnosis.
*P < .01; **P < .05; ***P < .001.
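Footnote a lists the covariates in these adjusted models, but the estimation approach itself is not shown in the table. One common way to produce adjusted model-by-time impact estimates of this kind is a linear mixed model with a care-model × time interaction and a random intercept per survivor; the sketch below illustrates that approach under those assumptions (the report's actual models may differ, for example by also accounting for clustering by clinic). The data file and all column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per survivor per time point (week 1, month 6),
# with a quality-of-care factor score and the covariates listed in footnote a.
df = pd.read_csv("scqi_quality_long.csv")  # hypothetical file name

# Care-model x time interaction: how each model's quality score changes between
# week 1 and month 6 relative to the Specialized Consultative reference group,
# adjusting for the listed covariates, with a random intercept per survivor.
formula = (
    "quality_score ~ C(care_model, Treatment('Specialized consultative')) * C(time_point)"
    " + age + gender + race + ethnicity + marital_status + education + income"
    " + primary_provider + months_since_diagnosis"
)
fit = smf.mixedlm(formula, data=df, groups=df["survivor_id"]).fit()
print(fit.summary())  # interaction terms correspond to the model-by-time impact estimates
```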
Table 15. Quality of Life at Baseline (n = 991) and 6 Months (n = 777) After First Survivorship Visit, by Model
Quality of life domains | Percentage of top concerns reporteda | ||||
---|---|---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | P value | ||
Physical concerns (8 reported among >40% of baseline population) | Baseline | 56.5 | 52.6 | 50.7 | .024 |
6 mo | 54.5 | 58.3 | 53.1 | .212 | |
Social and emotional concerns (6 reported among >40% of baseline population) | Baseline | 61.0 | 61.1 | 53.6 | .011 |
6 mo | 58.1 | 63.6 | 54.2 | .028 | |
Practical concerns (2 reported among >40% of baseline population) | Baseline | 42.7 | 41.5 | 44.5 | .725 |
6 mo | 38.7 | 42.1 | 33.7*** | .123 |
aTop concerns were defined as those reported by >40% of the study population across all models at baseline. For physical concerns, individuals received scores ranging from 0/8 to 8/8; for social/emotional concerns, individuals were scored 0/6 to 6/6; and for practical concerns, they were scored 0/2 to 2/2. The number presented is the average of those individual scores, converted into a percentage. Within each model, significant differences exist between baseline and month 6 as follows: *P < .01; **P < .05; ***P < .001.
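As a concrete illustration of the scoring described in footnote a, the sketch below computes a composite percentage for one domain: each survivor's count of endorsed top concerns is divided by the number of top concerns in that domain (8 physical, 6 social/emotional, 2 practical), and the mean of those proportions is expressed as a percentage. The survivor counts in the example are invented for illustration.

```python
# Composite "top concerns" percentage as described in footnote a of Table 15.

def composite_percentage(endorsed_counts: list[int], n_top_concerns: int) -> float:
    """Mean of per-survivor proportions of endorsed top concerns, as a percentage."""
    proportions = [count / n_top_concerns for count in endorsed_counts]
    return 100.0 * sum(proportions) / len(proportions)

if __name__ == "__main__":
    # Five hypothetical survivors endorsing 5, 4, 6, 3, and 4 of the 8 physical top concerns.
    print(round(composite_percentage([5, 4, 6, 3, 4], n_top_concerns=8), 1))  # 55.0
```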
Table 16. Longitudinal Results of Quality of Life Composite Score and Components Between Baseline (n = 991) and Month 6 (n = 777), by Modela
Quality of life domains | Impact estimate by model over time between baseline and mo 6 (95% CI) | |
---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | |
Physical concerns | (Reference) | 0.086*** (0.038-0.134) | 0.037* (−0.004 to 0.078) |
Social and emotional concerns | (Reference) | 0.055* (−0.002 to 0.112) | 0.036 (−0.012 to 0.085) |
Practical concerns | (Reference) | 0.059 (−0.017 to 0.136) | −0.068** (−0.133 to −0.004)
aModels control for age, gender, race, ethnicity, marital status, education, income, primary provider of survivorship care, and time since diagnosis.
*P < .01; **P < .05; ***P < .001.
Table 17. Longitudinal Results of Self-Efficacy (n = 777)
Self-efficacy | Impact estimate by model over time (95% CI)a | |
---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | |
Baseline to wk 1 | (Reference) | 0.019 (−0.086 to 0.124) | −0.120** (−0.210 to −0.031)
Baseline to 3 mo | (Reference) | 0.052 (−0.057 to 0.162) | −0.011 (−0.104 to 0.082)
Baseline to 6 mo | (Reference) | 0.045 (−0.070 to 0.162) | 0.029 (−0.068 to 0.129)
aModels control for age, gender, race, ethnicity, marital status, education, income, primary provider of survivorship care, and time since diagnosis. *P < .01; **P < .05; ***P < .001.
Table 18. Health Care Utilization in Average Number of Visits Over Time, by Model (n = 777)
Providers | Mean (SD) No. of visits to specific providers over time, by model | |
---|---|---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | P value | ||
Primary care physician | End of treatment – baseline | 0.71 (1.3) | 0.77 (1.4) | 0.78 (1.4) | .727 |
0-3 mo postvisit | 1.0 (1.2) | 0.90 (1.0) | 1.1 (1.3) | .321 | |
3-6 mo postvisit | 0.91 (1.3) | 0.76 (0.9) | 0.88 (1.1) | .377 | |
Oncology | End of treatment – baseline | 1.59 (1.8) | 2.22 (2.2) | 1.77 (2.1) | .001 |
0-3 mo postvisit | 1.15 (1.0) | 1.31 (1.4) | 1.32 (1.2) | .117 | |
3-6 mo postvisit | 0.97 (1.0) | 1.09 (1.3) | 1.09 (1.1) | .292 | |
Hospital carea | End of treatment – baseline | 0.49 (1.5) | 0.22 (0.71) | 0.54 (1.8) | .056
0-3 mo postvisit | 0.51 (1.5) | 0.34 (1.1) | 0.55 (1.2) | .224 | |
3-6 mo postvisit | 0.41 (1.1) | 0.27 (0.68) | 0.58 (1.2) | .007 |
Other specialistsb | End of treatment – baseline | 1.6 (2.7) | 2.0 (3.5) | 1.45 (2.5) | .184
0-3 mo postvisit | 1.94 (2.9) | 2.14 (3.2) | 1.73 (2.7) | .317 | |
3-6 mo postvisit | 1.33 (2.5) | 1.71 (2.9) | 1.13 (2.0) | .058 |
aIncludes inpatient, outpatient, and emergency department care.
bIncludes mental health care and other medical specialty care.
Table 19. Longitudinal Results of Satisfaction and Health Care Utilization Between Week 1 and Month 6, by Model (n = 777)
Impact estimate by model over time between wk 1 and mo 6 (95% CI)a | |||
---|---|---|---|
Specialized consultative | Specialized longitudinal | Oncology embedded | |
Satisfaction | |||
Recommend survivorship clinic to family members or friends who are cancer survivorsb | (Reference) | 0.062* (−0.005 to 0.128) | 0.006 (−0.050 to 0.063)
Health care use in average No. of visits | |||
Primary care physicians | (Reference) | −0.229* (−0.508 to 0.050) | −0.121 (−0.358 to 0.116) |
Oncologists | (Reference) | −0.487** (−0.844 to −0.131) | −0.074 (−0.377 to 0.229)
Hospital care | (Reference) | 0.199 (−0.097 to 0.496) | 0.187 (−0.064 to 0.439) |
Other specialists | (Reference) | 0.137 (−0.504 to 0.778) | 0.079 (−0.445 to 0.603) |
aModels control for age, gender, race, ethnicity, marital status, education, income, primary provider of survivorship care, and time since diagnosis. *P < .01; **P < .05; ***P < .001.
bThe satisfaction outcome is rated on a 3-point scale, where 1 = no, definitely not; and 3 = yes, definitely.
Table 20. Satisfaction With Survivorship Care Between Week 1 and Month 6, by Model (n = 777)
Satisfaction | Specialized consultative | Specialized longitudinal | Oncology embedded | |
---|---|---|---|---|
% Responding “yes”: satisfied with carea | 1 wk | 97 | 95 | 95
6 mo | 90 | 92 | 94 | |
% Responding “yes”: would recommend carea | 1 wk | 95 | 95 | 96
6 mo | 91 | 96 | 93 | |
Rating of satisfaction (0-10), median (IQR) | 1 wk | 10 (9-10) | 10 (9-10) | 10 (8-10)
6 mo | 9 (8-10) | 10 (8-10) | 10 (8-10) |
aThe data presented for these outcome variables represent the highest category on a 3-point continuous scale.
Suggested citation:
Mead KH, Raskin S, Arem H, et al. (2019). Evaluating Different Types of Cancer Survivorship Care. Patient-Centered Outcomes Research Institute (PCORI). https://doi.org/10.25302/7.2019.IH.12115255
Disclaimer
The views, statements, and opinions presented in this report are solely the responsibility of the author(s) and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors, or its Methodology Committee.