846 results for Model risk
Abstract:
OBJECTIVES: To assess health care utilisation for patients co-infected with TB and HIV (TB-HIV), and to develop a weighted health care index (HCI) score based on commonly used interventions and compare it with patient outcome. METHODS: A total of 1061 HIV patients diagnosed with TB in four regions (Central/Northern, Southern and Eastern Europe, and Argentina) between January 2004 and December 2006 were enrolled in the TB-HIV study. A weighted HCI score (range 0–5) was developed based on independent prognostic factors identified in multivariable Cox models; the final score included performance of TB drug susceptibility testing (DST), an initial TB regimen containing a rifamycin, isoniazid and pyrazinamide, and start of combination antiretroviral treatment (cART). RESULTS: The mean HCI score was highest in Central/Northern Europe (3.2, 95%CI 3.1–3.3) and lowest in Eastern Europe (1.6, 95%CI 1.5–1.7). The cumulative probability of death 1 year after TB diagnosis decreased from 39% (95%CI 31–48) among patients with an HCI score of 0, to 9% (95%CI 6–13) among those with a score of ≥4. In an adjusted Cox model, a 1-unit increase in the HCI score was associated with 27% reduced mortality (relative hazard 0.73, 95%CI 0.64–0.84). CONCLUSIONS: Our results suggest that DST, standard anti-tuberculosis treatment and early cART may improve outcome for TB-HIV patients. The proposed HCI score provides a tool for future research and monitoring of the management of TB-HIV patients. The highest HCI score may serve as a benchmark to assess TB-HIV management, encouraging continuous health care improvement.
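The weighted-index idea in this abstract can be illustrated with a minimal sketch. The component names and integer weights below are hypothetical (the study derived its weights from multivariable Cox models and the abstract does not report them); only the 0–5 range and the three score components are taken from the text.

```python
# Hypothetical weights for the three interventions named in the abstract;
# the real HCI weights came from multivariable Cox models and are not
# reproduced here.
HCI_WEIGHTS = {
    "dst_performed": 1,      # TB drug susceptibility testing done
    "rifamycin_inh_pza": 2,  # initial regimen with rifamycin, isoniazid, pyrazinamide
    "cart_started": 2,       # combination antiretroviral treatment begun
}

def hci_score(interventions):
    """Sum the weights of the interventions a patient received (range 0-5)."""
    return sum(w for key, w in HCI_WEIGHTS.items() if interventions.get(key))

# A patient who received DST and cART but not the full first-line regimen:
print(hci_score({"dst_performed": True, "cart_started": True}))  # 3
```

Under this kind of scheme, each extra point corresponds to one more recommended intervention delivered, which is what makes the score usable as a monitoring benchmark.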
Abstract:
IMPORTANCE Because effective interventions to reduce hospital readmissions are often expensive to implement, a score to predict potentially avoidable readmissions may help target the patients most likely to benefit. OBJECTIVE To derive and internally validate a prediction model for potentially avoidable 30-day hospital readmissions in medical patients using administrative and clinical data readily available prior to discharge. DESIGN Retrospective cohort study. SETTING Academic medical center in Boston, Massachusetts. PARTICIPANTS All patient discharges from any medical services between July 1, 2009, and June 30, 2010. MAIN OUTCOME MEASURES Potentially avoidable 30-day readmissions to 3 hospitals of the Partners HealthCare network were identified using a validated computerized algorithm based on administrative data (SQLape). A simple score was developed using multivariable logistic regression, with two-thirds of the sample randomly selected as the derivation cohort and one-third as the validation cohort. RESULTS Among 10 731 eligible discharges, 2398 discharges (22.3%) were followed by a 30-day readmission, of which 879 (8.5% of all discharges) were identified as potentially avoidable. The prediction score identified 7 independent factors, referred to as the HOSPITAL score: Hemoglobin at discharge, discharge from an Oncology service, Sodium level at discharge, Procedure during the index admission, Index Type of admission, number of Admissions during the last 12 months, and Length of stay. In the validation set, 26.7% of the patients were classified as high risk, with an estimated potentially avoidable readmission risk of 18.0% (observed, 18.2%). The HOSPITAL score had fair discriminatory power (C statistic, 0.71) and had good calibration. CONCLUSIONS AND RELEVANCE This simple prediction model identifies before discharge the risk of potentially avoidable 30-day readmission in medical patients. This score has potential to easily identify patients who may need more intensive transitional care interventions.
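For orientation, the seven-factor point assignment can be sketched as a small function. The cut-offs and point values below follow the commonly cited published version of the HOSPITAL score, but they are reproduced from memory here; verify them against the original article. This is an illustration, not a clinical tool.

```python
def hospital_score(hemoglobin_g_dl, oncology_discharge, sodium_mmol_l,
                   any_procedure, nonelective_admission,
                   admissions_last_year, length_of_stay_days):
    """HOSPITAL score (0-13); higher means greater readmission risk.
    Point values as commonly published; shown for illustration only."""
    score = 0
    if hemoglobin_g_dl < 12:        # low Hemoglobin at discharge
        score += 1
    if oncology_discharge:          # discharge from an Oncology service
        score += 2
    if sodium_mmol_l < 135:         # low Sodium level at discharge
        score += 1
    if any_procedure:               # Procedure during the index admission
        score += 1
    if nonelective_admission:       # Index admission Type: urgent/emergent
        score += 1
    if admissions_last_year > 5:    # Admissions during the last 12 months
        score += 5
    elif admissions_last_year >= 2:
        score += 2
    if length_of_stay_days >= 5:    # Length of stay
        score += 2
    return score
```

In the published scheme, totals are then binned into low, intermediate, and high readmission-risk categories.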
Abstract:
BACKGROUND In recent years, the occurrence and relevance of Mycoplasma hyopneumoniae infections in suckling pigs have been examined in several studies. Whereas most of these studies focused solely on prevalence estimation within different age groups, follow-up of infected piglets or assessment of pathological findings, none of the studies included a detailed analysis of individual and environmental risk factors. Therefore, the aim of the present study was to investigate the frequency of M. hyopneumoniae infections in suckling pigs of endemically infected herds and to identify individual risk factors potentially influencing the infection status of suckling pigs at the age of weaning. RESULTS The animal-level prevalence of M. hyopneumoniae infections in suckling pigs examined in three conventional pig breeding herds was 3.6% (41/1127) at the time of weaning. A prevalence of 1.2% was found in the same pigs at the end of their nursery period. A multivariable Poisson regression model showed that the incidence rate ratio (IRR) for suckling pigs was significantly lower than 1 when teeth grinding was conducted (IRR: 0.10). Moreover, high temperatures in the piglet nest during the first two weeks of life (occasionally >40°C) were associated with a decreased probability of infection (IRR: 0.23-0.40). In contrast, the application of PCV2 vaccines to piglets was associated with an increased infection risk (IRR: 9.72). CONCLUSIONS Since single infected piglets are thought to act as initiators for the transmission of this pathogen in nursery and fattening pigs, the elimination of the risk factors described in this study should help to reduce the incidence rate of M. hyopneumoniae infections and thereby might contribute to a reduced probability of high prevalence in older pigs.
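The IRRs quoted above come from a Poisson regression, whose coefficients are reported on the log scale; exponentiating a coefficient gives the incidence rate ratio. A one-line illustration (the coefficient value is back-calculated from the reported IRR, not taken from the study):

```python
import math

def irr(poisson_coefficient):
    """Incidence rate ratio implied by a Poisson regression coefficient."""
    return math.exp(poisson_coefficient)

# A coefficient of about -2.30 corresponds to the IRR of 0.10 reported
# for teeth grinding.
print(round(irr(-2.30), 2))  # 0.1
```

An IRR below 1 (negative coefficient) indicates a protective factor; an IRR above 1, such as the 9.72 reported for PCV2 vaccination, indicates an increased infection rate.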
Abstract:
Background Persons infected with human immunodeficiency virus (HIV) have increased rates of coronary artery disease (CAD). The relative contribution of genetic background, HIV-related factors, antiretroviral medications, and traditional risk factors to CAD has not been fully evaluated in the setting of HIV infection. Methods In the general population, 23 common single-nucleotide polymorphisms (SNPs) were shown to be associated with CAD through genome-wide association analysis. Using the Metabochip, we genotyped 1875 HIV-positive, white individuals enrolled in 24 HIV observational studies, including 571 participants with a first CAD event during the 9-year study period and 1304 controls matched on sex and cohort. Results A genetic risk score built from 23 CAD-associated SNPs contributed significantly to CAD (P = 2.9×10⁻⁴). In the final multivariable model, participants with an unfavorable genetic background (top genetic score quartile) had a CAD odds ratio (OR) of 1.47 (95% confidence interval [CI], 1.05–2.04). This effect was similar to hypertension (OR = 1.36; 95% CI, 1.06–1.73), hypercholesterolemia (OR = 1.51; 95% CI, 1.16–1.96), diabetes (OR = 1.66; 95% CI, 1.10–2.49), ≥1 year lopinavir exposure (OR = 1.36; 95% CI, 1.06–1.73), and current abacavir treatment (OR = 1.56; 95% CI, 1.17–2.07). The effect of the genetic risk score was additive to the effect of nongenetic CAD risk factors, and did not change after adjustment for family history of CAD. Conclusions In the setting of HIV infection, the effect of an unfavorable genetic background was similar to that of traditional CAD risk factors and certain adverse antiretroviral exposures. Genetic testing may provide prognostic information complementary to family history of CAD.
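A weighted genetic risk score of the kind described is typically the sum, over SNPs, of the individual's risk-allele count multiplied by that SNP's published log odds ratio. The rs identifiers and effect sizes below are placeholders, not the study's 23 SNPs:

```python
import math

# Placeholder SNP weights (log odds ratios); the study used 23 published
# CAD-associated SNPs, not these made-up identifiers and effect sizes.
SNP_LOG_OR = {
    "rs_hypothetical_1": math.log(1.10),
    "rs_hypothetical_2": math.log(1.25),
    "rs_hypothetical_3": math.log(1.07),
}

def genetic_risk_score(risk_allele_counts):
    """Weighted GRS: sum over SNPs of (0, 1, or 2 risk alleles) x log(OR)."""
    return sum(log_or * risk_allele_counts.get(snp, 0)
               for snp, log_or in SNP_LOG_OR.items())
```

Scores computed this way are then usually split into quartiles, as in the study, where the top quartile defined the "unfavorable genetic background" group.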
Abstract:
Alveolar echinococcosis (AE) in humans is a parasitic disease characterized by severe damage to the liver and occasionally other organs. AE is caused by infection with the metacestode (larval) stage of the fox tapeworm Echinococcus multilocularis, which usually infects small rodents as natural intermediate hosts. Conventionally, human AE is treated chemotherapeutically with mebendazole or albendazole. There is, however, still a need for improved chemotherapeutic options. Primary in vivo studies on drugs of interest are commonly performed in small laboratory animals such as mice and Mongolian jirds, and in most cases a secondary infection model is used, whereby E. multilocularis metacestodes are directly injected into the peritoneal cavity or into the liver. Disadvantages of this methodological approach include the risk of injury to organs during the inoculation and, most notably, a limited ability to assess treatment efficacy macroscopically (visibly). Thus, in order to monitor the efficacy of chemotherapeutic treatment, animals have to be euthanized and the parasite tissue dissected. In the present study, mice were infected with E. multilocularis metacestodes through the subcutaneous route and were then subjected to chemotherapy employing albendazole. Serological responses to infection were comparatively assessed in mice infected by the conventional intraperitoneal route. We demonstrate that the subcutaneous infection model for secondary AE facilitates the assessment of the progress of infection and drug treatment in the live animal.
Evaluation of control and surveillance strategies for classical swine fever using a simulation model
Abstract:
Classical swine fever (CSF) outbreaks can cause enormous losses in naïve pig populations. How best to minimize the economic damage and the number of culled animals caused by CSF is therefore an important research area. The baseline CSF control strategy in the European Union and Switzerland consists of culling all animals in infected herds, movement restrictions for animals, material and people within a given distance of the infected herd, and epidemiological tracing of transmission contacts. Additional disease control measures such as pre-emptive culling or vaccination have been recommended based on the results from several simulation models; however, these models were parameterized for areas with high animal densities. The objective of this study was to explore whether pre-emptive culling and emergency vaccination should also be recommended in low- to moderate-density areas such as Switzerland. Additionally, we studied the influence of initial outbreak conditions on outbreak severity to improve the efficiency of disease prevention and surveillance. A spatial, stochastic, individual-animal-based simulation model using all registered Swiss pig premises in 2009 (n=9770) was implemented to quantify these relationships. The model simulates within-herd and between-herd transmission (direct and indirect contacts and local area spread). By varying four parameters, (a) control measures, (b) index herd type (breeding, fattening, weaning or mixed herd), (c) detection delay for secondary cases during an outbreak and (d) contact tracing probability, 112 distinct scenarios were simulated. To assess the impact of scenarios on outbreak severity, daily transmission rates were compared between scenarios. Compared with the baseline strategy (stamping out and movement restrictions), vaccination and pre-emptive culling reduced neither outbreak size nor outbreak duration. Outbreaks starting in a herd with weaning piglets or fattening pigs caused higher losses in terms of the number of culled premises and lasted longer than those starting in the two other index herd types. Similarly, larger transmission rates were estimated for outbreaks starting in these index herd types. A longer detection delay resulted in more culled premises and longer outbreak duration, and better transmission tracing increased the number of short outbreaks. Based on the simulation results, baseline control strategies seem sufficient to control CSF in areas of low to medium animal density. Early detection of outbreaks is crucial, and risk-based surveillance should focus on weaning piglet and fattening pig premises.
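The spatial, stochastic, individual-based model described above is far too rich to reproduce here, but its core loop (infection, a detection delay, then stamping out) can be caricatured in a few lines. Everything below (rates, the one-contact-per-day structure, parameter values) is an illustrative toy, not the study's parameterization:

```python
import random

def simulate_outbreak(n_herds=9770, beta=0.3, detection_delay=7,
                      max_days=365, seed=1):
    """Toy between-herd spread: each infected herd may infect one new herd
    per day with probability beta, and is culled (stamping out) once its
    infection is detection_delay days old. Returns (herds culled, last day)."""
    random.seed(seed)
    infected = {0: 0}            # herd id -> day it was infected
    culled = set()
    next_id = 1
    day = 0
    for day in range(max_days):
        for herd, t0 in list(infected.items()):
            if day - t0 >= detection_delay:
                culled.add(herd)         # detected: cull and remove
                del infected[herd]
            elif random.random() < beta and next_id < n_herds:
                infected[next_id] = day  # new secondary infection
                next_id += 1
        if not infected:                 # outbreak over
            break
    return len(culled), day
```

Even this caricature reproduces the qualitative finding that a longer `detection_delay` lets each herd seed more secondary infections before removal, giving more culled premises and longer outbreaks.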
Abstract:
Purpose Femoral fracture is a common medical problem in osteoporotic individuals. Bone mineral density (BMD) is the gold-standard measure to evaluate fracture risk in vivo. Quantitative computed tomography (QCT)-based homogenized voxel finite element (hvFE) models have proved to be more accurate predictors of femoral strength than BMD by adding geometrical and material properties. The aim of this study was to evaluate the ability of hvFE models to predict femoral stiffness, strength and failure location for a large number of pairs of human femora tested in two different loading scenarios. Methods Thirty-six pairs of femora were scanned with QCT, and total proximal BMD and BMC were evaluated. For each pair, one femur was positioned in a one-legged stance configuration (STANCE) and the other in a sideways configuration (SIDE). Nonlinear hvFE models were generated from QCT images by reproducing the same loading configurations imposed in the experiments. For experiments and models, the structural properties (stiffness and ultimate load), the failure location and the motion of the femoral head were computed and compared. Results In both configurations, hvFE models predicted both stiffness (R2=0.82 for STANCE and R2=0.74 for SIDE) and femoral ultimate load (R2=0.80 for STANCE and R2=0.85 for SIDE) better than BMD and BMC. Moreover, the models predicted the failure location qualitatively well (66% of cases), as well as the motion of the femoral head. Conclusions The subject-specific QCT-based nonlinear hvFE model can not only predict femoral apparent mechanical properties better than densitometric measures, but can additionally provide useful qualitative information about failure location.
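The model-versus-BMD comparisons above are reported as coefficients of determination (R2). A minimal R² helper, for readers who want to run the same comparison between measured and predicted strengths on their own data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```

A higher R² for hvFE-predicted ultimate load than for a BMD-based regression is exactly the comparison the abstract summarizes.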
Abstract:
Traditional methods do not measure people's risk attitudes naturally and precisely. Therefore, a fuzzy risk attitude classification method is developed. Since prospect theory is usually considered an effective model of decision making, the personalized parameters in prospect theory are first fuzzified to distinguish people with different risk attitudes, and then a fuzzy classification database schema is applied to calculate the exact value of risk value attitude and risk behavior attitude. Finally, by applying a two-hierarchical classification model, the precise value of the overall risk attitude can be acquired.
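The prospect-theory parameters being fuzzified here are those of the Tversky-Kahneman value function. A plain (non-fuzzy) version, with the classic 1992 parameter estimates as defaults, is sketched below; this is background illustration only, not the paper's fuzzy classification schema:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: x**alpha for gains,
    -lam * (-x)**beta for losses. Defaults are the classic
    Tversky-Kahneman (1992) median estimates."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

print(prospect_value(-1.0))  # -2.25 (losses loom larger than gains)
```

Differences in fitted `alpha`, `beta`, and `lam` across individuals are what a classifier of risk attitudes, fuzzy or otherwise, has to work with: `alpha < 1` bends the gain curve concave (risk aversion for gains), and `lam > 1` encodes loss aversion.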
Abstract:
Risk behaviors such as substance use or deviance are often limited to the early stages of the life course. Whereas the onset of risk behavior is well studied, less is currently known about the decline and timing of cessation of risk behaviors in different domains during young adulthood. Prevalence and longitudinal developmental patterning of alcohol use, drinking to the point of drunkenness, smoking, cannabis use, deviance, and HIV-related sexual risk behavior were compared in a Swiss community sample (N = 2,843). Using a longitudinal cohort-sequential approach to link multiple assessments with 3 waves of data for each individual, the studied period spanned the ages of 16 to 29 years. Although smoking had a higher prevalence, both smoking and drinking to the point of drunkenness followed an inverted U-shaped curve. Alcohol consumption was also best described by a quadratic model, though it was largely stable at a high level through the late 20s. Sexual risk behavior increased slowly from age 16 to age 22 and then remained largely stable. In contrast, cannabis use and deviance declined linearly from age 16 to age 29. Young men were at higher risk for all behaviors than were young women, but apart from deviance, patterning over time was similar for both sexes. Results about the timing of increase and decline, as well as differences between risk behaviors, may inform tailored prevention programs during the transition from late adolescence to adulthood.
Abstract:
Objective: Impaired cognition is an important dimension in psychosis and its at-risk states. Research on the value of impaired cognition for psychosis prediction in at-risk samples, however, mainly relies on study-specific sample means of neurocognitive tests, which unlike widely available general test norms are difficult to translate into clinical practice. The aim of this study was to explore the combined predictive value of at-risk criteria and neurocognitive deficits according to test norms with a risk stratification approach. Method: Potential predictors of psychosis (neurocognitive deficits and at-risk criteria) over 24 months were investigated in 97 at-risk patients. Results: The final prediction model included (1) at-risk criteria (attenuated psychotic symptoms plus subjective cognitive disturbances) and (2) a processing speed deficit (digit symbol test). The model was stratified into 4 risk classes with hazard rates between 0.0 (both predictors absent) and 1.29 (both predictors present). Conclusions: The combination of a processing speed deficit and at-risk criteria provides an optimized stratified risk assessment. Based on neurocognitive test norms, the validity of our proposed 3 risk classes could easily be examined in independent at-risk samples and, pending positive validation results, our approach could easily be applied in clinical practice in the future.
Abstract:
The risk of second malignant neoplasms (SMNs) following prostate radiotherapy is a concern due to the large population of survivors and the decreasing age at diagnosis. It is known that parallel-opposed beam proton therapy carries a lower risk than photon IMRT. However, a comparison of SMN risk following proton and photon arc therapies has not previously been reported. The purpose of this study was to predict the ratio of excess relative risks (RRR) of SMN incidence following proton arc therapy versus volumetric modulated arc therapy (VMAT). Additionally, we investigated the impact of margin size and the effect of risk-minimized proton beam weighting on predicted RRR. Physician-approved treatment plans were created for both modalities for three patients. Therapeutic dose was obtained with differential dose-volume histograms from the treatment planning system, and stray dose was estimated from the literature or calculated with Monte Carlo simulations. Various risk models were then applied to the total dose. Additional treatment plans were also investigated with varying margin size and risk-minimized proton beam weighting. The mean RRR ranged from 0.74 to 0.99, depending on the risk model. The additional treatment plans revealed that the RRR remained approximately constant with varying margin size, and that the predicted RRR was reduced by 12% using a risk-minimized proton arc therapy planning technique. In conclusion, proton arc therapy was found to provide an advantage over VMAT with regard to the predicted risk of SMN following prostate radiotherapy. This advantage was independent of margin size and was amplified with risk-minimized proton beam weighting.
Abstract:
Over the last forty years, applying dendrogeomorphology to palaeoflood analysis has improved estimates of the frequency and magnitude of past floods worldwide. This paper reviews the main results obtained by applying dendrogeomorphology to flood research in several case studies in Central Spain. These dendrogeomorphological studies focused on the following topics: (1) anatomical analysis to understand the physiological response of trees to flood damage and improve sampling efficiency; (2) compiling robust flood chronologies in ungauged mountain streams; (3) determining flow depth and estimating flood discharge using two-dimensional hydraulic modelling, and comparing them with other palaeostage indicators; (4) calibrating hydraulic model parameters (i.e. Manning roughness); and (5) implementing stochastic-based cost–benefit analysis to select optimal mitigation measures. The progress made in these areas is presented with suggestions for further research to improve the applicability of dendrogeomorphology to palaeoflood studies. Further developments will include new methods for better identification of the causes of specific types of flood damage to trees (e.g. tilted trees) or stable isotope analysis of tree rings to identify the climatic conditions associated with periods of increasing flood magnitude or frequency.
Abstract:
Dendrogeomorphology uses information sources recorded in the roots, trunks and branches of trees and bushes located in the fluvial system to complement (or sometimes even replace) systematic and palaeohydrological records of past floods. The application of dendrogeomorphic data sources and methods to palaeoflood analysis over nearly 40 years has allowed improvements to be made in frequency and magnitude estimations of past floods. Nevertheless, research carried out so far has shown that the dendrogeomorphic indicators traditionally used (mainly scar evidence), and their use to infer frequency and magnitude, have been restricted to a small, limited set of applications. New possibilities with enormous potential remain unexplored. New insights in future research of palaeoflood frequency and magnitude using dendrogeomorphic data sources should: (1) test the application of isotopic indicators (¹⁶O/¹⁸O ratio) to discover the meteorological origin of past floods; (2) use different dendrogeomorphic indicators to estimate peak flows with 2D (and 3D) hydraulic models and study how they relate to other palaeostage indicators; (3) investigate improved calibration of 2D hydraulic model parameters (roughness); and (4) apply statistics-based cost–benefit analysis to select optimal mitigation measures. This paper presents an overview of these innovative methodologies, with a focus on their capabilities and limitations in the reconstruction of recent floods and palaeofloods.
Abstract:
BACKGROUND: Women at increased risk of breast cancer (BC) are not widely accepting of chemopreventive interventions, and ethnic minorities are underrepresented in related trials. Furthermore, there is no validated instrument to assess the health-seeking behavior of these women with respect to these interventions. METHODS: Using constructs from the Health Belief Model, the authors developed and refined, based on pilot data, the Breast Cancer Risk Reduction Health Belief (BCRRHB) scale using a population of 265 women at increased risk of BC who were largely medically underserved, of low socioeconomic status (SES), and ethnic minorities. Construct validity was assessed using principal components analysis with oblique rotation to extract factors, and generate and interpret summary scales. Internal consistency was determined using Cronbach alpha coefficients. RESULTS: Test-retest reliability for the pilot and final data was calculated to be r = 0.85. Principal components analysis yielded 16 components that explained 64% of the total variance, with communalities ranging from 0.50-0.75. Cronbach alpha coefficients for the extracted factors ranged from 0.45-0.77. CONCLUSIONS: Evidence suggests that the BCRRHB yields reliable and valid data that allow for the identification of barriers and enhancing factors associated with the use of breast cancer chemoprevention in the study population. These findings allow for tailoring treatment plans and intervention strategies to the individual. Future research is needed to validate the scale for use in other female populations. Cancer 2009. (c) 2009 American Cancer Society.
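The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute: alpha = k/(k-1) × (1 − sum of item variances / variance of total scores). A self-contained sketch using sample variances (items in rows, respondents in columns):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items[i][j] is respondent j's score on item i."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items)
                          / sample_var(totals))

# Two perfectly correlated items give alpha = 1.0:
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

Values like the 0.45-0.77 range reported for the extracted factors would come from applying this to each factor's items separately.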
Abstract:
Background: Despite effective solutions to reduce teen birth rates, Texas teen birth rates are among the highest in the nation. School districts can impact youth sexual behavior through implementation of evidence-based programs (EBPs); however, teen pregnancy prevention is a complex and controversial issue for school districts. Consequently, very few districts in Texas implement EBPs for pregnancy prevention. Additionally, school districts receive little guidance on the process of finding, adopting, and implementing EBPs. Purpose: The purpose of this report is to present the CHoosing And Maintaining Programs for Sex education in Schools (CHAMPSS) Model, a practical and realistic framework to help districts find, adopt, and implement EBPs. Methods: Model development occurred in four phases using the core processes of Intervention Mapping: 1) knowledge acquisition, 2) knowledge engineering, 3) model representation, and 4) knowledge development. Results: The CHAMPSS Model provides seven steps, tailored for school-based settings, which encompass phases of assessment, preparation, implementation, and maintenance: Prioritize, Assess, Select, Approve, Prepare, Implement, and Maintain. Advocacy and eliciting support for adolescent sexual health are also core elements of the model. Conclusion: This systematic framework may help schools increase adoption, implementation, and maintenance of EBPs.