886 results for "Critical factors of success"
Abstract:
A retrospective study of 2,146 feedlot cattle in 17 feedlot tests from 1988 to 1997 was conducted to determine the impact of bovine respiratory disease (BRD) on veterinary treatment costs, average daily gain, carcass traits, mortality, and net profit. Morbidity caused by BRD was 20.6%. The average cost to treat each case of BRD was $12.39. The mortality rate of calves diagnosed and treated for BRD was 5.9% vs. 0.35% for those not diagnosed with BRD. Average daily gain differed between treated and non-treated steers during the first 28 days on feed but did not differ from 28 days to harvest. Net profit was $57.48 lower for treated steers. Eighty-two percent of this difference was due to a combination of mortality and treatment costs; eighteen percent was due to the improved performance and carcass value of the non-treated steers. Data from 496 steers and heifers in nine feedlot tests were used to determine the effects of age, weaning, and use of modified live virus or killed vaccines prior to the test to predict BRD. Younger calves, non-weaned calves, and calves vaccinated with killed vaccines prior to the test had higher BRD morbidity than those that were older, weaned, or vaccinated with modified live virus vaccines, respectively. Treatment regimens that precluded relapse resulting in re-treatment prevented reduced performance and loss of carcass value. Using modified live virus vaccines and weaning calves 30 days prior to shipment reduced the incidence of BRD.
Abstract:
BACKGROUND Homicide-suicides are rare but catastrophic events. This study examined the epidemiology of homicide-suicide in Switzerland. METHODS The study identified homicide-suicide events 1991-2008 in persons from the same household in the Swiss National Cohort, which links census and mortality records. The analysis examined the association of the risk of dying in a homicide-suicide event with socio-demographic variables measured at the individual level, household composition variables and area-level variables. Proportional hazards regression models were calculated for male perpetrators and female victims. Results are presented as age-adjusted hazard ratios (HR) with 95% confidence intervals (95%CI). RESULTS The study identified 158 deaths from homicide-suicide events, including 85 murder victims (62 women, 4 men, 19 children and adolescents) and 68 male and 5 female perpetrators. The incidence was 3 events per million households per year. Firearms were the most prominent method for both homicides and suicides. The risk of perpetrating homicide-suicide was higher in divorced than in married men (HR 3.64; 95%CI 1.56-8.49), in foreigners without permanent residency than in Swiss citizens (HR 3.95; 1.52-10.2), in men without religious affiliations than in Catholics (HR 2.23; 1.14-4.36) and in crowded households (HR 4.85; 1.72-13.6 comparing ≥2 with <1 persons/room). There was no association with education, occupation or nationality, the number of children, the language region or degree of urbanicity. Associations were similar for female victims. CONCLUSIONS This national longitudinal study shows that living conditions associated with psychological stress and lower levels of social support are associated with homicide-suicide events in Switzerland.
Abstract:
Distrust should automatically activate "thinking the opposite". Thus, according to Schul, Mayo and Burnstein (2004), subjects detect antonyms of adjectives faster when confronted with untrustworthy rather than trustworthy faces. We conducted four experiments within their paradigm to test whether the response latency of detecting antonyms remains stable. We introduced the following changes: the paradigm was applied with and without an induction phase, the faces were culturally adapted, the stimuli were presented more strictly according to priming rules, and the canonicity of the antonyms was controlled. Results show that the response latency of detecting antonyms is difficult to predict. Even when faces are culturally adapted and priming rules are applied more strictly, response latency depends on whether the induction phase is applied and on the canonicity of the antonyms rather than on the trustworthiness of the faces. In general, this paradigm does not seem appropriate for testing thinking the opposite under distrust.
Abstract:
The two major subtypes of diffuse large B-cell lymphoma (DLBCL) (germinal centre B-cell-like (GCB-DLBCL) and activated B-cell-like (ABC-DLBCL)) are defined by means of gene expression profiling (GEP). Patients with GCB-DLBCL survive longer with the current standard regimen R-CHOP than patients with ABC-DLBCL. As GEP is not part of the current routine diagnostic work-up, efforts have been made to find a substitute that involves immunohistochemistry (IHC). Various algorithms achieved this with 80-90% accuracy. However, conflicting results on the appropriateness of IHC have been reported. Because it is likely that the molecular subtypes will play a role in future clinical practice, we assessed the determination of the molecular DLBCL subtypes by means of IHC at our University Hospital, and some aspects of this determination elsewhere in Switzerland. The most frequently used Hans algorithm includes three antibodies (against CD10, bcl-6 and MUM1). From records of the routine diagnostic work-up, we identified 51 of 172 (29.7%) newly diagnosed and treated DLBCL cases from 2005 to 2010 with an assigned DLBCL subtype. DLBCL subtype information was expanded by means of tissue microarray analysis. The outcome for patients with the GCB subtype was significantly better compared with those with the non-GC subtype, independent of the age-adjusted International Prognostic Index. We found a lack of standardisation in subtype determination by means of IHC in Switzerland and significant problems of reproducibility. We conclude that the Hans algorithm performs well in our hands and that awareness of this important matter is increasing. However, outside clinical trials, vigorous efforts to standardise IHC determination are needed as DLBCL subtype-specific therapies emerge.
Abstract:
The purpose of this study was to investigate the role of the fronto–striatal system for implicit task sequence learning. We tested performance of patients with compromised functioning of the fronto–striatal loops, that is, patients with Parkinson's disease and patients with lesions in the ventromedial or dorsolateral prefrontal cortex. We also tested amnesic patients with lesions either to the basal forebrain/orbitofrontal cortex or to thalamic/medio-temporal regions. We used a task sequence learning paradigm involving the presentation of a sequence of categorical binary-choice decision tasks. After several blocks of training, the sequence, hidden in the order of tasks, was replaced by a pseudo-random sequence. Learning (i.e., sensitivity to the ordering) was assessed by measuring whether this change disrupted performance. Although all the patients were able to perform the decision tasks quite easily, those with lesions to the fronto–striatal loops (i.e., patients with Parkinson's disease, with lesions in the ventromedial or dorsolateral prefrontal cortex and those amnesic patients with lesions to the basal forebrain/orbitofrontal cortex) did not show any evidence of implicit task sequence learning. In contrast, those amnesic patients with lesions to thalamic/medio-temporal regions showed intact sequence learning. Together, these results indicate that the integrity of the fronto–striatal system is a prerequisite for implicit task sequence learning.
Abstract:
CONTEXT The aims of bladder preservation in muscle-invasive bladder cancer (MIBC) are to offer a quality-of-life advantage and to avoid the potential morbidity or mortality of radical cystectomy (RC) without compromising oncologic outcomes. Because no randomised controlled trial has been completed, the oncologic equivalence of bladder-preservation modalities compared with RC remains unknown. OBJECTIVE This systematic review sought to assess modern bladder-preservation treatment modalities, focusing on trimodal therapy (TMT) in MIBC. EVIDENCE ACQUISITION A systematic literature search of the PubMed and Cochrane databases was performed from 1980 to July 2013. EVIDENCE SYNTHESIS Optimal bladder-preservation treatment includes a safe transurethral resection of the bladder tumour, as complete as possible, followed by radiation therapy (RT) with concurrent radiosensitising chemotherapy. A standard radiation schedule includes external-beam RT to the bladder and limited pelvic lymph nodes to an initial dose of 40 Gy, with a boost to the whole bladder to 54 Gy and a further tumour boost to a total dose of 64-65 Gy. Phase 3 trial evidence supports radiosensitising chemotherapy with cisplatin and with mitomycin C plus 5-fluorouracil. A cystoscopic assessment with systematic rebiopsy should be performed at TMT completion or early after TMT induction, so that nonresponders are identified early and promptly offered salvage RC. The 5-yr cancer-specific survival and overall survival rates range from 50% to 82% and from 36% to 74%, respectively, with salvage cystectomy rates of 25-30%. There are no definitive data to support the benefit of using neoadjuvant or adjuvant chemotherapy. Proper patient selection is critical to good outcomes. The tumours best suited to bladder preservation are low-volume T2 disease without hydronephrosis or extensive carcinoma in situ.
CONCLUSIONS A growing body of accumulated data suggests that bladder preservation with TMT leads to acceptable outcomes and therefore may be considered a reasonable treatment option in well-selected patients. PATIENT SUMMARY Treatment based on a combination of resection, chemotherapy, and radiotherapy as a bladder-sparing strategy may be considered a reasonable treatment option in properly selected patients.
Abstract:
We have previously shown that vasculogenesis, the process by which bone marrow-derived cells are recruited to the tumor and organized to form a blood vessel network de novo, is essential for the growth of Ewing’s sarcoma. We further demonstrated that these bone marrow cells differentiate into pericytes/vascular smooth muscle cells (vSMC) and contribute to the formation of the functional vascular network. The molecular mechanisms that control bone marrow cell differentiation into pericytes/vSMC in Ewing’s sarcoma are poorly understood. Here, we demonstrate that the Notch ligand Delta-like ligand 4 (DLL4) plays a critical role in this process. DLL4 is essential for the formation of mature blood vessels during development and in several tumor models. Inhibition of DLL4 causes increased vascular sprouting, decreased pericyte coverage, and decreased vessel functionality. We demonstrate for the first time that DLL4 is expressed by bone marrow-derived pericytes/vascular smooth muscle cells in two Ewing’s sarcoma xenograft models and by perivascular cells in 12 out of 14 patient samples. Using a dominant-negative mastermind to inhibit Notch, we demonstrate that Notch signaling is essential for bone marrow cell participation in vasculogenesis. Further, inhibition of DLL4 using either shRNA or the monoclonal DLL4-neutralizing antibody YW152F led to dramatic changes in blood vessel morphology and function. Vessels in tumors where DLL4 was inhibited were smaller, lacked lumens, had significantly reduced numbers of bone marrow-derived pericytes/vascular smooth muscle cells, and were less functional. Importantly, growth of TC71 and A4573 tumors was significantly inhibited by treatment with YW152F. Additionally, we provide in vitro evidence that DLL4-Notch signaling is involved in bone marrow-derived pericyte/vascular smooth muscle cell formation outside of the Ewing’s sarcoma environment.
Pericyte/vascular smooth muscle cell marker expression by whole bone marrow cells cultured with mouse embryonic stromal cells was reduced when DLL4 was inhibited by YW152F. For the first time, our findings demonstrate a role for DLL4 in bone marrow-derived pericyte/vascular smooth muscle cell differentiation as well as a critical role for DLL4 in Ewing’s sarcoma tumor growth.
Abstract:
This study was conducted to determine the incidence and etiology of neonatal seizures, and to evaluate risk factors for this condition, in Harris County, Texas, between 1992 and 1994. Potential cases were ascertained from four sources: discharge diagnoses at local hospitals, birth certificates, death certificates, and a clinical study of neonatal seizures conducted concurrently with this study at a large tertiary care center in Houston, Texas. The neonatal period was defined as the first 28 days of life for term infants, and up to 44 weeks gestation for preterm infants.

There were 207 cases of neonatal seizures ascertained among 116,048 live births, yielding an incidence of 1.8 per 1000. Half of the seizures occurred by the third day of life, 70% within the first week, and 93% within the first 28 days of life. Among 48 preterm infants with seizures, 15 had their initial seizure after the 28th day of life. About 25% of all seizures occurred after discharge from the hospital of birth.

Idiopathic seizures occurred most frequently (0.5/1000 births), followed by seizures attributed to perinatal hypoxia/ischemia (0.4/1000 births), intracranial hemorrhage (0.2/1000 births), infection of the central nervous system (0.2/1000 births), and metabolic abnormalities (0.1/1000 births).

Risk factors were evaluated based on birth certificate information, using univariate and multivariate (logistic regression) analysis. Factors considered included birth weight, gender, ethnicity, place of birth, mother's age, method of delivery, parity, multiple birth and, among term infants, small birth weight for gestational age (SGA). Among preterm infants, very low birth weight (VLBW, <1500 grams) was the strongest risk factor, followed by birth in private/university hospitals with a Level III nursery compared with hospitals with a Level II nursery (RR = 2.9), and male sex (RR = 1.8). The effect of very low birth weight varied according to ethnicity. Compared with preterm infants weighing 2000-2999 grams, non-white VLBW infants were 12.0 times as likely to have seizures, whereas white VLBW infants were 2.5 times as likely. Among term infants, significant risk factors included SGA (RR = 1.8), birth in private/university hospitals with a Level III nursery versus hospitals with a Level II nursery (RR = 2.0), and birth by cesarean section (RR = 2.2).
Abstract:
QUESTIONS UNDER STUDY We sought to identify reasons for late human immunodeficiency virus (HIV) testing or late presentation for care. METHODS A structured chart review was performed to obtain data on the test- and health-seeking behaviour of patients presenting late, with CD4 cell counts below 350 cells/µl or with acquired immunodeficiency syndrome (AIDS), at the Zurich centre of the Swiss HIV Cohort Study between January 2009 and December 2011. Logistic regression analyses were used to compare the demographic characteristics of late presenters with those of non-late presenters. RESULTS Of 281 patients, 45% presented late, 48% were chronically HIV-infected non-late presenters, and an additional 7% fulfilled the <350 CD4 cells/µl criterion for late presentation, but chart review revealed that their lymphopenia was caused by acute HIV infection. Among the late presenters, 60% first tested HIV positive in a private practice. More than half of the tests (60%) were suggested by a physician; only 7% followed a specific risk situation. The majority (88%) of patients entered medical care within 1 month of testing HIV positive. Risk factors for late presentation were older age (odds ratio [OR] for ≥50 vs <30 years: 3.16, p = 0.017) and Asian versus Caucasian ethnicity (OR 3.5, p = 0.021). Compared with men who have sex with men (MSM) without a stable partnership, MSM in a stable partnership appeared less likely to present late (OR 0.50, p = 0.034), whereas heterosexual men in a stable partnership had 2.72-fold increased odds of presenting late (p = 0.049). CONCLUSIONS The frequency of late testing could be reduced by promoting awareness, particularly among older individuals and heterosexual men in stable partnerships.
Abstract:
Research has shown repeatedly that the “feeling better” effect of exercise is far more moderate than generally claimed. Examinations of subgroups in secondary analyses also indicate that numerous further variables influence this relationship. One reason for inconsistencies in this research field is the lack of adequate theoretical analyses. Well-being output variables frequently possess no construct definition, and little attention is paid to moderating and mediating variables. This article integrates the main models in an overview and analyzes how secondary analyses define well-being and which areas of the construct they focus on. It then applies a moderator and/or mediator framework to examine which person and environmental variables can be found in the existing explanatory approaches in sport science and how they specify the influence of these moderating and mediating variables. Results show that the broad understanding of well-being in many secondary analyses makes findings difficult to interpret. Moreover, physiological explanatory approaches focus more on affective changes in well-being, whereas psychological approaches also include cognitive changes. The approaches focus mostly on either physical or psychological person variables and rarely combine the two, as in, for example, the dual-mode model. Whereas environmental variables specifying the treatment more closely (e.g., its intensity) are comparatively frequent, only the social support model formulates variables such as the framework in which exercise is presented. The majority of explanatory approaches use simple moderator and/or mediator models such as the basic mediated (e.g., distraction hypothesis) or multiple mediated (e.g., monoamine hypotheses) model. The discussion draws conclusions for future research.