357 results for "kidding interval"


Relevance: 10.00%

Abstract:

- Background Exercise referral schemes (ERS) aim to identify inactive adults in the primary-care setting. The GP or health-care professional then refers the patient to a third-party service, with this service taking responsibility for prescribing and monitoring an exercise programme tailored to the needs of the individual.
- Objective To assess the clinical effectiveness and cost-effectiveness of ERS for people with a diagnosed medical condition known to benefit from physical activity (PA). The scope of this report was broadened to consider individuals without a diagnosed condition who are sedentary.
- Data sources MEDLINE, EMBASE, PsycINFO, The Cochrane Library, ISI Web of Science, SPORTDiscus and ongoing trial registries were searched (from 1990 to October 2009) and included study references were checked.
- Methods Systematic reviews of the effectiveness of ERS, of predictors of ERS uptake and adherence, and of the cost-effectiveness of ERS; and development of a decision-analytic economic model to assess the cost-effectiveness of ERS.
- Results Seven randomised controlled trials (UK, n = 5; non-UK, n = 2) met the effectiveness inclusion criteria: five compared ERS with usual care, two compared ERS with an alternative PA intervention, and one compared ERS with ERS plus a self-determination theory (SDT) intervention. In intention-to-treat analysis, compared with usual care, there was weak evidence of an increase in the number of ERS participants who achieved a self-reported 90-150 minutes of at least moderate-intensity PA per week at 6-12 months' follow-up [pooled relative risk (RR) 1.11, 95% confidence interval 0.99 to 1.25]. There was no consistent evidence of a difference between ERS and usual care in the duration of moderate/vigorous-intensity and total PA or in other outcomes, for example physical fitness, serum lipids and health-related quality of life (HRQoL). There was no between-group difference in outcomes between ERS and alternative PA interventions or ERS plus an SDT intervention. None of the included trials separately reported outcomes in individuals with medical diagnoses. Fourteen observational studies and five randomised controlled trials provided a numerical assessment of ERS uptake and adherence (UK, n = 16; non-UK, n = 3). Women and older people were more likely to take up ERS, but women, when compared with men, were less likely to adhere. The four previous economic evaluations identified suggest ERS to be a cost-effective intervention. Indicative incremental cost per quality-adjusted life-year (QALY) estimates for ERS under various scenarios were based on a de novo model-based economic evaluation. Compared with usual care, the mean incremental cost for ERS was £169 and the mean incremental QALY was 0.008, with the base-case incremental cost-effectiveness ratio at £20,876 per QALY in sedentary people without a medical condition, and a cost per QALY of £14,618 in sedentary obese individuals, £12,834 in sedentary hypertensive patients and £8414 in sedentary individuals with depression. Estimates of cost-effectiveness were highly sensitive to plausible variations in the RR for change in PA and the cost of ERS.
- Limitations We found very limited evidence of the effectiveness of ERS. The estimates of the cost-effectiveness of ERS are based on a simple analytical framework. The economic evaluation reports small differences in costs and effects, and the findings highlight the wide range of uncertainty associated with the estimates of effectiveness and the impact of effectiveness on HRQoL. No data were identified as part of the effectiveness review to allow for adjustment of the effect of ERS in different populations.
- Conclusions There remains considerable uncertainty as to the effectiveness of ERS for increasing activity, fitness or health indicators, or whether ERS are an efficient use of resources in sedentary people without a medical diagnosis. We failed to identify any trial-based evidence of the effectiveness of ERS in those with a medical diagnosis. Future work should include randomised controlled trials assessing the clinical effectiveness and cost-effectiveness of ERS in disease groups that may benefit from PA.
- Funding The National Institute for Health Research Health Technology Assessment programme.
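The base-case cost-effectiveness figures above follow the standard incremental cost-effectiveness ratio (ICER) arithmetic: incremental cost divided by incremental QALYs. A minimal sketch using only the numbers quoted in the abstract (the small gap to the reported £20,876 per QALY reflects rounding of the published inputs; the underlying decision model is not reproduced here):

    # Illustrative ICER arithmetic using figures quoted in the abstract.
    incremental_cost_gbp = 169    # mean incremental cost of ERS vs usual care
    incremental_qalys = 0.008     # mean incremental QALYs

    icer = incremental_cost_gbp / incremental_qalys
    print(f"ICER ~ GBP {icer:,.0f} per QALY")  # ~21,125; the report quotes 20,876 from unrounded inputs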

Relevance: 10.00%

Abstract:

- Background In the UK, women aged 50–73 years are invited for screening by mammography every 3 years. In 2009–10, more than 2.24 million women in this age group in England were invited to take part in the programme, of whom 73% attended a screening clinic. Of these, 64,104 women were recalled for assessment. Of those recalled, 81% did not have breast cancer; these women are described as having a false-positive mammogram.
- Objective The aim of this systematic review was to identify the psychological impact on women of false-positive screening mammograms and any evidence for the effectiveness of interventions designed to reduce this impact. We also looked for evidence of effects in subgroups of women.
- Data sources MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Health Management Information Consortium, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Centre for Reviews and Dissemination (CRD) Database of Abstracts of Reviews of Effects, CRD Health Technology Assessment (HTA), Cochrane Methodology, Web of Science, Science Citation Index, Social Sciences Citation Index, Conference Proceedings Citation Index-Science, Conference Proceedings Citation Index-Social Science and Humanities, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, Sociological Abstracts, the International Bibliography of the Social Sciences, the British Library's Electronic Table of Contents and others. Initial searches were carried out between 8 October 2010 and 25 January 2011. Update searches were carried out on 26 October 2011 and 23 March 2012.
- Review methods Based on the inclusion criteria, titles and abstracts were screened independently by two reviewers. Retrieved papers were reviewed and selected using the same independent process. Data were extracted by one reviewer and checked by another. Each included study was assessed for risk of bias.
- Results Eleven studies were found from 4423 titles and abstracts. Studies that used disease-specific measures found a negative psychological impact lasting up to 3 years. Distress increased with the level of invasiveness of the assessment procedure. Studies using instruments designed to detect clinical levels of morbidity did not find this effect. Women with false-positive mammograms were less likely to return for the next round of screening [relative risk (RR) 0.97; 95% confidence interval (CI) 0.96 to 0.98] than those with normal mammograms, were more likely to have interval cancer [odds ratio (OR) 3.19 (95% CI 2.34 to 4.35)] and were more likely to have cancer detected at the next screening round [OR 2.15 (95% CI 1.55 to 2.98)].
- Limitations This study was limited to UK research and by the robustness of the included studies, which frequently failed to report quality indicators, for example failing to consider the risk of bias or confounding, or failing to report participants' demographic characteristics.
- Conclusions We conclude that the experience of having a false-positive screening mammogram can cause breast cancer-specific psychological distress that may endure for up to 3 years, and can reduce the likelihood that women will return for their next round of mammography screening. These results should be treated cautiously owing to the inherent weaknesses of observational designs and weaknesses in reporting. Future research should include a qualitative interview study and observational studies that compare generic and disease-specific measures, collect demographic data and include women from different social and ethnic groups.

Relevance: 10.00%

Abstract:

BACKGROUND Sedentary behavior is continuing to emerge as an important target for health promotion. The purpose of this study was to determine the validity of a self-report use-of-time recall tool, the Multimedia Activity Recall for Children and Adults (MARCA), in estimating time spent sitting/lying, compared with a device-based measure. METHODS Fifty-eight participants (48% female; [mean ± standard deviation] 28 ± 7.4 years of age; 23.9 ± 3.05 kg/m²) wore an activPAL device for 24 h and completed the MARCA the following day. Pearson correlation coefficients (r) were used to analyse the convergent validity of the adult MARCA compared with activPAL estimates of total sitting/lying time. Agreement was examined using Bland-Altman plots. RESULTS According to activPAL estimates, participants spent 10.4 hr/day [standard deviation (SD) = 2.06] sitting or lying down while awake. The correlation between MARCA and activPAL estimates of total sit/lie time was r = 0.77 (95% confidence interval = 0.64-0.86; p < 0.001). Bland-Altman analyses revealed a mean bias of +0.59 hr/day with moderately wide limits of agreement (-2.35 to +3.53 hr/day). CONCLUSIONS This study found moderate to strong agreement between the adult MARCA and the activPAL, suggesting that the MARCA is an appropriate tool for the measurement of time spent sitting or lying down in an adult population.
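The validity analysis described here, Pearson correlation plus Bland-Altman limits of agreement, can be reproduced from paired per-participant estimates. A minimal sketch with invented data (not the study's measurements); the variable names are illustrative only:

    import numpy as np
    from scipy import stats

    # Illustrative paired estimates of daily sit/lie time (hr/day); not the study data.
    marca    = np.array([9.5, 11.0, 12.5, 8.0, 10.0, 13.0, 9.0, 11.5])
    activpal = np.array([9.0, 10.2, 11.8, 8.5, 9.6, 12.1, 9.3, 10.7])

    r, p = stats.pearsonr(marca, activpal)               # convergent validity

    diff = marca - activpal                              # Bland-Altman agreement
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd    # 95% limits of agreement
    print(f"r = {r:.2f} (p = {p:.4f}); bias = {bias:.2f} hr/day; LoA = {lower:.2f} to {upper:.2f} hr/day")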

Relevance: 10.00%

Abstract:

Background Sedentary behaviour is associated with several deleterious health consequences. Although device-based measures of sedentary time are available, they are costly and do not provide a measure of domain-specific sedentary time. High-quality self-report measures are necessary to accurately capture domain-specific sedentary time, and to provide an alternative to devices when cost is an issue. In this study, the Past-day Adults’ Sedentary Time (PAST) questionnaire, previously shown to have acceptable validity and reliability in a sample of breast cancer survivors, was modified for a university sample, and the validity of the modified questionnaire was examined against the activPAL. Methods Participants (n = 58, age = 18–55 years, 48% female, 66% students) were recruited from the University of Queensland (students and staff). They answered the PAST questionnaire, which asked about time spent sitting or lying down for work, study, travel, television viewing, leisure-time computer use, reading, eating, socialising and other purposes during the previous day. Time reported for these questions was summed to provide a measure of total sedentary time. Participants also wore an activPAL device for the full day prior to completing the questionnaire and recorded their wake and sleep times in an activity log. Total waking sedentary time derived from the activPAL was used as the criterion measure. Correlation (Pearson's r) and agreement (Bland–Altman plots) between PAST and activPAL sedentary time were examined. Results Participants were sedentary (activPAL-determined) for approximately 66% of waking hours. The correlation between PAST and activPAL sedentary time for the whole sample was r = 0.50 [95% confidence interval (CI) = 0.28–0.67], and was higher for non-students (r = 0.63, 95% CI = 0.26–0.84) than for students (r = 0.46, 95% CI = 0.16–0.68). Bland–Altman plots revealed that the mean difference between the two measures was 19 min, although the limits of agreement were wide (95% limits of agreement −4.1 to 4.7 h). Discussion The PAST questionnaire provides an acceptable measure of sedentary time in this population, which included students and adults with high workplace sitting. These findings support earlier research suggesting that questionnaires employing past-day recall of sedentary time provide a viable alternative to existing sedentary behaviour questionnaires.

Relevance: 10.00%

Abstract:

Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can cause damage to the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in all patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion and performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate, and otherwise synthesised data descriptively when heterogeneous. Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval between changes (2 to 5 days; control). In each study participants were followed up until the CVAD was removed or until discharge from ICU or hospital.
- Confirmed catheter-related bloodstream infection (CRBSI) One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low quality evidence).
- Suspected catheter-related bloodstream infection Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between people having long or short intervals between dressing changes (RR 0.70, 95% CI 0.23 to 2.10) (low quality evidence).
- All-cause mortality Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all-cause mortality. It is unclear whether there is a difference in the risk of death from any cause between people having long or short intervals between dressing changes (RR 1.06, 95% CI 0.90 to 1.25) (low quality evidence).
- Catheter-site infection Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection. It is unclear whether there is a difference in risk of catheter-site infection between people having long or short intervals between dressing changes (RR 1.07, 95% CI 0.71 to 1.63) (low quality evidence).
- Skin damage One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very low quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR of scoring ≥ 2 on the skin damage scale 0.33, 95% CI 0.16 to 0.68; data for adults not pooled).
- Pain Two studies involving 193 participants measured pain. It is unclear if there is a difference between long- and short-interval dressing changes in pain during dressing removal (RR 0.80, 95% CI 0.46 to 1.38) (low quality evidence).
Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
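The pooled relative risks quoted above come from standard Cochrane meta-analytic methods. As a generic illustration only (not the review's actual computation), a fixed-effect inverse-variance pool of trial-level risk ratios works on the log scale, recovering each trial's standard error from its confidence interval:

    import numpy as np

    # Illustrative trial-level risk ratios with 95% CIs (invented; not the review's data).
    rrs   = np.array([1.20, 0.95, 1.05])
    ci_lo = np.array([0.80, 0.60, 0.85])
    ci_hi = np.array([1.80, 1.50, 1.30])

    log_rr = np.log(rrs)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1 / se**2                                        # inverse-variance weights

    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    print(f"Pooled RR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f} to {np.exp(pooled + 1.96 * pooled_se):.2f})")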

Relevance: 10.00%

Abstract:

Background Studies investigating the relationship between malnutrition and post-discharge mortality following acute hip fracture yield conflicting results. This study aimed to determine whether malnutrition independently predicted 12-month post-fracture mortality after adjusting for clinically relevant covariates. Methods An ethics approved, prospective, consecutive audit was undertaken for all surgically treated hip fracture inpatients admitted to a dedicated orthogeriatric unit (November 2010–October 2011). The 12-month mortality data were obtained by a dual search of the mortality registry and Queensland Health database. Malnutrition was evaluated using the Subjective Global Assessment. Demographic (age, gender, admission residence) and clinical covariates included fracture type, time to surgery, anaesthesia type, type of surgery, post-surgery time to mobilize and post-operative complications (delirium, pulmonary and deep vein thrombosis, cardiac complications, infections). The Charlson Comorbidity Index was retrospectively applied. All diagnoses were confirmed by the treating orthogeriatrician. Results A total of 322 of 346 patients were available for audit. Increased age (P = 0.004), admission from residential care (P < 0.001), Charlson Comorbidity Index (P = 0.007), malnutrition (P < 0.001), time to mobilize >48 h (P < 0.001), delirium (P = 0.003), pulmonary embolism (P = 0.029) and cardiovascular complication (P = 0.04) were associated with 12-month mortality. Logistic regression analysis demonstrated that malnutrition (odds ratio (OR) 2.4 (95% confidence interval (CI) 1.3–4.7, P = 0.007)), in addition to admission from residential care (OR 2.6 (95% CI 1.3–5.3, P = 0.005)) and pulmonary embolism (OR 11.0 (95% CI 1.5–78.7, P = 0.017)), independently predicted 12-month mortality. Conclusions Findings substantiate malnutrition as an independent predictor of 12-month mortality in a representative sample of hip fracture inpatients. Effective strategies to identify and treat malnutrition in hip fracture should be prioritized.

Relevance: 10.00%

Abstract:

Objective The objective of this study was to investigate the risk of chronic kidney disease (CKD) stage 4-5 and dialysis treatment on incidence of foot ulceration and major lower extremity amputation in comparison to CKD stage 3. Methods In this retrospective study, all individuals who visited our hospital between 2006 and 2012 because of CKD stages 3 to 5 or dialysis treatment were included. Medical records were reviewed for incidence of foot ulceration and major amputation. The time from CKD 3, CKD 4-5, and dialysis treatment until first foot ulceration and first major lower extremity amputation was calculated and analyzed by Kaplan-Meier curves and multivariate Cox proportional hazards model. Diabetes mellitus, peripheral arterial disease, peripheral neuropathy, and foot deformities were included for potential confounding. Results A total of 669 individuals were included: 539 in CKD 3, 540 in CKD 4-5, and 259 in dialysis treatment (individuals could progress from one group to the next). Unadjusted foot ulcer incidence rates per 1000 patients per year were 12 for CKD 3, 47 for CKD 4-5, and 104 for dialysis (P < .001). In multivariate analyses, the hazard ratio for incidence of foot ulceration was 4.0 (95% confidence interval [CI], 2.6-6.3) in CKD 4-5 and 7.6 (95% CI, 4.8-12.1) in dialysis treatment compared with CKD 3. Hazard ratios for incidence of major amputation were 9.5 (95% CI, 2.1-43.0) and 15 (95% CI, 3.3-71.0), respectively. Conclusions CKD 4-5 and dialysis treatment are independent risk factors for foot ulceration and major amputation compared with CKD 3. Maximum effort is needed in daily clinical practice to prevent foot ulcers and their devastating consequences in all individuals with CKD 4-5 or dialysis treatment.
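The unadjusted incidence rates quoted above follow the usual person-time calculation: events divided by accumulated follow-up, scaled to 1000 patient-years. A minimal sketch with invented counts (chosen so the result matches the CKD 3 figure, but not taken from the study):

    # Illustrative incidence-rate arithmetic; the counts are invented, not the study's data.
    events = 27             # first foot ulcers observed in a group
    patient_years = 2250.0  # total follow-up accumulated by that group

    rate_per_1000 = events / patient_years * 1000
    print(f"{rate_per_1000:.0f} foot ulcers per 1000 patient-years")  # 12 per 1000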

Relevance: 10.00%

Abstract:

The concession agreement is the core feature of BOT projects, with the concession period being the most essential feature in determining the time span of the various rights, obligations and responsibilities of the government and concessionaire. Concession period design is therefore crucial for financial viability and for determining the benefit/cost allocation between the host government and the concessionaire. However, while the concession period and project life span are essentially interdependent, most methods to date consider their determination as contiguous events that are determined exogenously. Moreover, these methods seldom consider the often uncertain social benefits and costs involved, which are critical in defining, pricing and distributing benefits and costs between the various parties and in evaluating potentially distributable cash flows. In this paper, we present the results of the first stage of a research project aimed at determining the optimal build-operate-transfer (BOT) project life span and concession period endogenously and interdependently by maximizing the combined benefits of stakeholders. Based on the estimation of the economic and social development involved, a negotiation space of the concession period interval is obtained, with its lower boundary creating the desired financial return for the private investors and its upper boundary ensuring the economic feasibility of the host government as well as the maximized welfare within the project life. The outcome of the new quantitative model is considered a suitable basis for future field trials prior to implementation. The structure and details of the model are provided in the paper, with a Hong Kong tunnel project as a case study to demonstrate its detailed application. The basic contributions of the paper to the theory of construction procurement are that the project life span and concession period are determined jointly and that the social benefits are taken into account in the examination of project financial benefits. In practical terms, the model goes beyond the current practice of linear-process thinking and should enable engineering consultants to provide project information more rationally and accurately to BOT project bidders and increase the government's prospects of successfully entering into a contract with a concessionaire. This is expected to generate more negotiation space for the government and concessionaire in determining the major socioeconomic features of individual BOT contracts when negotiating the concession period. As a result, the use of the model should increase the total benefit to both parties.
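The negotiation space described above (a concession-period interval whose lower boundary secures the concessionaire's required financial return and whose upper boundary preserves the host government's economic feasibility) can be illustrated numerically. The cash flows, thresholds and function names below are hypothetical assumptions, not the paper's model:

    # Hypothetical sketch of a BOT concession-period negotiation space.
    # npv_private(T): concessionaire NPV if the concession ends after year T.
    # npv_social(T):  government/social NPV over the remaining project life.
    # All figures and functional forms are illustrative assumptions.

    def npv(cashflows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    project_life = 30             # assumed total project life (years)
    build_cost = 500.0            # concessionaire's construction cost (monetary units)
    annual_toll = 60.0            # net operating revenue per year during the concession
    annual_social_benefit = 25.0  # extra social benefit per year after transfer to the government
    r = 0.08                      # discount rate

    def npv_private(T):
        return -build_cost + npv([annual_toll] * T, r)

    def npv_social(T):
        # benefits accruing to the government after transfer, years T+1 .. project_life
        flows = [0.0] * T + [annual_toll + annual_social_benefit] * (project_life - T)
        return npv(flows, r)

    required_return = 50.0    # minimum acceptable NPV for the investor
    feasibility_floor = 100.0 # minimum acceptable NPV for the government

    candidates = [T for T in range(1, project_life + 1)
                  if npv_private(T) >= required_return and npv_social(T) >= feasibility_floor]
    if candidates:
        print(f"Negotiation space: {min(candidates)} to {max(candidates)} years")
    else:
        print("No feasible concession period under these assumptions")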

Relevance: 10.00%

Abstract:

Background The Global Burden of Diseases, Injuries, and Risk Factors (GBD) study used the disability-adjusted life year (DALY) to quantify the burden of diseases, injuries, and risk factors. This paper provides an overview of injury estimates from the 2013 update of GBD, with detailed information on incidence, mortality, DALYs and rates of change from 1990 to 2013 for 26 causes of injury, globally, by region and by country. Methods Injury mortality was estimated using the extensive GBD mortality database, corrections for ill-defined cause of death and the cause of death ensemble modelling tool. Morbidity estimation was based on inpatient and outpatient data sets, 26 cause-of-injury and 47 nature-of-injury categories, and seven follow-up studies with patient-reported long-term outcome measures. Results In 2013, 973 million (uncertainty interval (UI) 942 to 993) people sustained injuries that warranted some type of healthcare and 4.8 million (UI 4.5 to 5.1) people died from injuries. Between 1990 and 2013 the global age-standardised injury DALY rate decreased by 31% (UI 26% to 35%). The rate of decline in DALY rates was significant for 22 cause-of-injury categories, including all the major injuries. Conclusions Injuries continue to be an important cause of morbidity and mortality in the developed and developing world. The decline in rates for almost all injuries is so prominent that it warrants a general statement that the world is becoming a safer place to live in. However, the patterns vary widely by cause, age, sex, region and time, and there are still large improvements that need to be made.
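The DALY combines fatal and non-fatal burden as years of life lost (YLL) plus years lived with disability (YLD). A heavily simplified sketch of that accounting with invented figures (GBD itself uses far more elaborate cause-of-death and sequela modelling):

    # DALY = YLL + YLD (simplified; no discounting or age weighting, as in GBD 2010 onwards).
    # All numbers below are invented for illustration.
    deaths = 1200                          # deaths from a given injury cause
    remaining_life_expectancy = 35.0       # standard life expectancy at age of death (years)

    prevalent_cases = 50000                # people living with sequelae of that injury
    disability_weight = 0.12               # average disability weight of those sequelae

    yll = deaths * remaining_life_expectancy   # years of life lost
    yld = prevalent_cases * disability_weight  # years lived with disability
    dalys = yll + yld
    print(f"YLL = {yll:.0f}, YLD = {yld:.0f}, DALYs = {dalys:.0f}")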

Relevance: 10.00%

Abstract:

This paper introduces an index of tax optimality that measures the distance of some current tax structure from the optimal tax structure in the presence of public goods. This index is defined on the [0, 1] interval and measures the proportion of the optimal tax rates that will achieve the same welfare outcome as some arbitrarily given initial tax structure. We call this number the Tax Optimality Index. We also show how the basic methodology can be altered to derive a revenue equivalent uniform tax, which measures the tax burden implied by the public sector. A numerical example is used to illustrate the method developed, and extensions of the analysis to handle models with multiple households and nonlinear taxation structures are undertaken.
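The abstract does not state the formula, so the following is only one hedged reading of the verbal definition. Let $t^{0}$ be the current tax vector, $t^{*}$ the optimal vector, and $W(\cdot)$ the social welfare attained when the public goods are financed by the given taxes; the index would then be the scalar on $[0,1]$ that rescales the optimal rates until they replicate the welfare of the current system:

$$
\theta^{*} \in [0,1] \quad \text{such that} \quad W\!\left(\theta^{*}\, t^{*}\right) = W\!\left(t^{0}\right),
$$

with $\theta^{*} = 1$ indicating that the initial structure already attains the optimal welfare level and smaller values indicating a larger welfare gap. On this reading, the revenue-equivalent uniform tax mentioned above would presumably be constructed analogously, as the single rate reproducing the same benchmark.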

Relevance: 10.00%

Abstract:

CONTEXT: Polyalanine tract variations in transcription factors have been identified in a wide spectrum of developmental disorders. The thyroid transcription factor forkhead factor E1 (FOXE1) contains a polymorphic polyalanine tract with 12-22 alanines. Single-nucleotide polymorphisms (SNP) close to this locus are associated with papillary thyroid cancer (PTC), and a strong linkage disequilibrium block extends across this region. OBJECTIVE: The objective of the study was to assess whether the FOXE1 polyalanine repeat region was associated with PTC and to assess the effect of polyalanine repeat region variants on protein expression, DNA binding, and transcriptional function on FOXE1-responsive promoters. DESIGN: This was a case-control study. SETTING: The study was conducted at a tertiary referral hospital. PATIENTS AND METHODS: The FOXE1 polyalanine repeat region and tag SNP were genotyped in 70 PTC, with a replication in a further 92 PTC, and compared with genotypes in 5767 healthy controls (including 5667 samples from the Wellcome Trust Case Control Consortium). In vitro studies were performed to examine the protein expression, DNA binding, and transcriptional function for FOXE1 variants of different polyalanine tract lengths. RESULTS: All the genotyped SNP were in tight linkage disequilibrium, including the FOXE1 polyalanine repeat region. We confirmed the strong association of rs1867277 with PTC (overall P = 1 × 10⁻⁷, odds ratio 1.84, confidence interval 1.31-2.57). rs1867277 was in tight linkage disequilibrium with the FOXE1 polyalanine repeat region (r² = 0.95). FOXE1(16Ala) was associated with PTC with an odds ratio of 2.23 (confidence interval 1.42-3.50; P = 0.0005). Functional studies in vitro showed that FOXE1(16Ala) was transcriptionally impaired compared with FOXE1(14Ala), which was not due to differences in protein expression or DNA binding. CONCLUSIONS: We have confirmed the previous association of FOXE1 with PTC. Our data suggest that the coding polyalanine expansion in FOXE1 may be responsible for the observed association between FOXE1 and PTC.
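The case-control association statistics reported here (odds ratios with 95% confidence intervals) follow from carrier counts in cases and controls. A generic sketch with invented counts (not the study's genotype data), using the standard Woolf/Wald interval on the log-odds scale:

    import math

    # Invented 2x2 counts for a case-control comparison (carriers vs non-carriers of an allele).
    a, b = 60, 40   # cases: carriers, non-carriers
    c, d = 45, 75   # controls: carriers, non-carriers

    odds_ratio = (a * d) / (b * c)                      # cross-product odds ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf/Wald standard error of ln(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")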

Relevance: 10.00%

Abstract:

- Background Expressed emotion (EE) captures the affective quality of the relationship between family caregivers and their care recipients and is known to increase the risk of poor health outcomes for caregiving dyads. Little is known about expressed emotion in the context of caregiving for persons with dementia, especially in non-Western cultures. The Family Attitude Scale (FAS) is a psychometrically sound self-report measure of EE. Its use in the examination of caregiving for patients with dementia has not yet been explored.
- Objectives This study was performed to examine the psychometric properties of the Chinese version of the FAS (FAS-C) in Chinese caregivers of relatives with dementia, and its validity in predicting severe depressive symptoms among the caregivers.
- Methods The FAS was translated into Chinese using Brislin's model. Two expert panels evaluated the semantic equivalence and content validity of this Chinese version (FAS-C), respectively. A total of 123 Chinese primary caregivers of relatives with dementia were recruited from three elderly community care centers in Hong Kong. The FAS-C was administered with the Chinese versions of the 5-item Mental Health Inventory (MHI-5), the Zarit Burden Interview (ZBI) and the Revised Memory and Behavioral Problem Checklist (RMBPC).
- Results The FAS-C had excellent semantic equivalence with the original version and a content validity index of 0.92. Exploratory factor analysis identified a three-factor structure for the FAS-C (hostile acts, criticism and distancing). Cronbach's alpha of the FAS-C was 0.92. Pearson's correlation indicated significant associations between a higher score on the FAS-C and greater caregiver burden (r = 0.66, p < 0.001), poorer mental health of the caregivers (r = −0.65, p < 0.001) and a higher level of dementia-related symptoms (frequency of symptoms: r = 0.45, p < 0.001; symptom disturbance: r = 0.51, p < 0.001), supporting its construct validity. For detecting severe depressive symptoms in the family caregivers, the receiver operating characteristic (ROC) curve had an area under the curve of 0.78 (95% confidence interval (CI) = 0.69–0.87, p < 0.0001). The optimal cut-off score was >47, with a sensitivity of 0.720 (95% CI = 0.506–0.879) and a specificity of 0.742 (95% CI = 0.643–0.826).
- Conclusions The FAS-C is a reliable and valid measure for assessing the affective quality of the relationship between Chinese caregivers and their relatives with dementia. It also has acceptable predictive ability in identifying family caregivers with severe depressive symptoms.
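The ROC analysis reported above (area under the curve, then an optimal cut-off with its sensitivity and specificity) corresponds to a standard workflow. A minimal sketch with invented scores; the Youden index is used to pick the cut-off, which is an assumption, since the exact criterion behind the >47 threshold is not stated in the abstract:

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Invented data: questionnaire scores and a binary indicator of severe depressive symptoms.
    scores = np.array([30, 35, 41, 44, 46, 48, 50, 52, 55, 60, 62, 70])
    severe = np.array([ 0,  0,  0,  0,  1,  0,  1,  1,  0,  1,  1,  1])

    auc = roc_auc_score(severe, scores)
    fpr, tpr, thresholds = roc_curve(severe, scores)

    youden = tpr - fpr              # Youden's J at each candidate threshold
    best = int(np.argmax(youden))
    print(f"AUC = {auc:.2f}; cut-off >= {thresholds[best]:.0f}: "
          f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")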