162 results for DRINFELD CATEGORY
Abstract:
BACKGROUND: Antiretroviral compounds have been predominantly studied in human immunodeficiency virus type 1 (HIV-1) subtype B, but only ~10% of infections worldwide are caused by this subtype. The analysis of the impact of different HIV subtypes on treatment outcome is important. METHODS: The effect of HIV-1 subtype B and non-B on the time to virological failure while taking combination antiretroviral therapy (cART) was analyzed. Other studies that have addressed this question were limited by the strong correlation between subtype and ethnicity. Our analysis was restricted to white patients from the Swiss HIV Cohort Study who started cART between 1996 and 2009. Cox regression models were performed; adjusted for age, sex, transmission category, first cART, baseline CD4 cell counts, and HIV RNA levels; and stratified for previous mono/dual nucleoside reverse-transcriptase inhibitor treatment. RESULTS: Included in our study were 4729 patients infected with subtype B and 539 with non-B subtypes. The most prevalent non-B subtypes were CRF02_AG (23.8%), A (23.4%), C (12.8%), and CRF01_AE (12.6%). The incidence of virological failure was higher in patients with subtype B (4.3 failures/100 person-years; 95% confidence interval [CI], 4.0-4.5) compared with non-B (1.8 failures/100 person-years; 95% CI, 1.4-2.4). Cox regression models confirmed that patients infected with non-B subtypes had a lower risk of virological failure than those infected with subtype B (univariable hazard ratio [HR], 0.39 [95% CI, .30-.52; P < .001]; multivariable HR, 0.68 [95% CI, .51-.91; P = .009]). In particular, subtypes A and CRF02_AG revealed improved outcomes (multivariable HR, 0.54 [95% CI, .29-.98] and 0.39 [95% CI, .19-.79], respectively). CONCLUSIONS: Improved virological outcomes among patients infected with non-B subtypes invalidate concerns that these individuals are at a disadvantage because drugs have been designed primarily for subtype B infections.
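As a quick illustration of how the failure rates quoted above are formed, a per-100-person-years incidence rate with a standard Poisson log-normal confidence interval can be computed with nothing beyond the standard library; the event and person-year counts below are hypothetical (the abstract reports only the rates, not the raw counts):

```python
import math

def incidence_rate(events, person_years, z=1.96):
    """Rate per 100 person-years with a log-normal (Poisson) 95% CI."""
    rate = events / person_years * 100
    # CI on the log scale: rate * exp(+/- z / sqrt(events))
    half = z / math.sqrt(events)
    return rate, rate * math.exp(-half), rate * math.exp(half)

# Hypothetical counts chosen only to reproduce a rate near 4.3/100 PY
rate, lo, hi = incidence_rate(events=430, person_years=10_000)
print(f"{rate:.1f}/100 PY (95% CI {lo:.1f}-{hi:.1f})")
```

With more events for the same rate, the interval narrows, which is why the larger subtype-B group has the tighter CI in the abstract.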
Abstract:
OBJECTIVE: To assess the prevalence of problem gambling in a population of youths in Switzerland and to determine its association with other potentially addictive behaviours. METHODS: Cross-sectional survey including 1,102 participants in the first and second year of post-compulsory education, reporting gambling, socio-demographics, internet use and substance use. For the three categories of gambling (nongambler, nonproblem gambler, and at-risk/problem gambler), socio-demographic and addiction data were compared using bivariate analysis. All significant variables were included in a multinomial logistic regression using nongamblers as the reference category. RESULTS: The prevalence of gamblers was 37.48% (n = 413), with nonproblem gamblers representing 31.94% (n = 352) and at-risk/problem gamblers 5.54% (n = 61). At the bivariate level, severity of gambling increased among adults (over 18 years), males, vocational students, participants not living with both parents, and youths with a low socio-economic status. Gambling was also associated with the four addictive behaviours studied. At the multivariate level, the risk of nonproblem gambling was increased in males, older youths, vocational students, participants of Swiss origin, and alcohol misusers. The risk of at-risk/problem gambling was higher for males, older youths, alcohol misusers, participants not living with both parents, and problem internet users. CONCLUSIONS: One-third of the youths in our sample had gambled in the previous year, and gambling is associated with other addictive behaviours. Clinicians should screen their adolescent patients for gambling habits, especially if other addictive behaviours are present. Additionally, gambling should be included in prevention campaigns together with other addictive behaviours.
Abstract:
The zebrafish is a good model for studying regeneration because of the rapidity with which it occurs. A better understanding of this process may eventually help improve the regenerative capacity of humans. Signaling factors are the second largest category of genes regulated during regeneration, after the regulators of wound healing. Major developmental signaling pathways, such as Bmp, Fgf, Notch, retinoic acid, Shh, and Wnt, play a role in this multistep process. In the present study, we focus on the TGF-β-induced genes bigh3 and bambia. Bigh3 encodes keratoepithelin, first identified as an extracellular matrix protein and reported to play a role in cell adhesion, as well as in cornea formation and osteogenesis. The expression of bigh3 in zebrafish fins has previously been reported. Here we demonstrate that tgf-b1 and tgf-b3 mRNA reacted with a delay, showing no regulation at 3 days post-amputation (dpa), followed by upregulation at 4 and 5 dpa. Tgf-b1, tgf-b2, and tgf-brII mRNA were back to normal levels at 10 dpa; only tgf-b3 mRNA was still upregulated at that time. Bigh3 mRNA followed the upregulation of tgf-b1, while bambia mRNA behaved similarly to tgf-b2 mRNA. We show that upregulation of bigh3 and bambia mRNA correlated with the process of fin regeneration and with the regulation of TGF-b signaling, suggesting a new role for these proteins.
Abstract:
OBJECTIVE: To compare the distribution of congenital anomalies within the VACTERL association (vertebral defects, anal atresia, cardiac, tracheoesophageal, renal, and limb abnormalities) between patients exposed to tumor necrosis factor-α (TNF-α) antagonists and the general population. METHODS: Proportions in anomaly subgroups were compared with those of a previous publication, according to the subgroup definitions of the European Surveillance of Congenital Anomalies (EUROCAT), a population-based database. RESULTS: Most EUROCAT subgroups belonging to the VACTERL association contained only one or two records of TNF-α antagonist exposure, so comparison of proportions was imprecise. Only the category "limb abnormalities" showed a significantly higher proportion in the general population. CONCLUSION: The high number of congenital anomalies belonging to the VACTERL association in a report of pregnancies exposed to TNF-α antagonists could not be confirmed using a population-based congenital anomaly database.
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economical, social, and health policy issues. The type of data needed to carry out such evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically, two types of data may be distinguished: (a) general demographic, administrative, or financial data that were not collected specifically for technology assessment; (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) data bases have been identified: registries, clinical data bases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvement of the registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing data bases and available reports on medical technology assessments.
Abstract:
OBJECTIVE: Reported survival after cardiopulmonary resuscitation (CPR) in children varies considerably. We aimed to identify predictors of 1-year survival and to assess long-term neurological status after in- or outpatient CPR. DESIGN: Retrospective review of the medical records and prospective follow-up of CPR survivors. SETTING: Tertiary care pediatric university hospital. PATIENTS AND METHODS: During a 30-month period, 89 in- and outpatients received advanced CPR. Survivors of CPR were prospectively followed up for 1 year. Neurological outcome was assessed by the Pediatric Cerebral Performance Category (PCPC) scale. Variables predicting 1-year survival were identified by multivariable logistic regression analysis. INTERVENTIONS: None. RESULTS: Seventy-one of the 89 patients were successfully resuscitated. During subsequent hospitalization, do-not-resuscitate orders were issued for 25 patients. At 1 year, 48 (54%) were alive, including two of the 25 patients with out-of-hospital CPR. All patients who required CPR after trauma or near drowning died, as did all those in whom CPR began >10 min after arrest or lasted >60 min. Prolonged CPR (21-60 min) was compatible with survival (five of 19). At 1 year, 77% of the survivors had the same PCPC score as prior to CPR. Predictors of survival were location of resuscitation, CPR during peri- or postoperative care, and duration of resuscitation. A clinical score (0-15 points) based on these three items yielded an area under the ROC curve of 0.93. CONCLUSIONS: Independent determinants of long-term survival after pediatric resuscitation are location of arrest, underlying cause, and duration of CPR. Long-term survivors have little or no change in neurological status.
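The discriminative performance quoted for the clinical score (area under the ROC curve of 0.93) is a rank statistic that needs no statistics library; a minimal sketch on made-up score/outcome pairs, not the study's actual data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 0-15 point scores; 1 = survived at 1 year, 0 = died
example = auc([2, 5, 4, 11, 13], [0, 0, 1, 1, 1])
```

An AUC of 0.5 means the score is uninformative; 0.93, as reported above, indicates strong separation of survivors from non-survivors.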
Abstract:
The project of articulating a theological ethics on the basis of liturgical anthropology is bound to fail if the necessary consequence is that one has to quit the forum of critical modern rationality. The risk of Engelhardt's approach is to limit rationality to a narrow vision of reason. Sin is not to be understood as the negation of human holiness, but as the negation of divine holiness. The only way to renew theological ethics is to understand sin as the anthropological and ethical expression of the biblical message of justification by faith alone. Sin is therefore a secondary category, which can only be interpreted in light of the positive manifestation of liberation, justification, and grace. The central issue of Christian ethics is not ritual purity or morality, but the experience, confession, and recognition of our own injustice in our dealings with God and men.
Abstract:
The authors examined the associations of social support with socioeconomic status (SES) and with mortality, as well as how SES differences in social support might account for SES differences in mortality. Analyses were based on 9,333 participants from the British Whitehall II Study, a longitudinal cohort established in 1985 among London-based civil servants who were 35-55 years of age at baseline. SES was assessed using participants' employment grades at baseline. Social support was assessed 3 times over the 24.4-year period during which participants were monitored for death. In men, marital status, and to a lesser extent network score (but not low perceived support or high negative aspects of close relationships), predicted both all-cause and cardiovascular mortality. Measures of social support were not associated with cancer mortality. Men in the lowest SES category had an increased risk of death compared with those in the highest category (for all-cause mortality, hazard ratio = 1.59, 95% confidence interval: 1.21, 2.08; for cardiovascular mortality, hazard ratio = 2.48, 95% confidence interval: 1.55, 3.92). Network score and marital status combined explained 27% (95% confidence interval: 14, 43) and 29% (95% confidence interval: 17, 52) of the associations of SES with all-cause and cardiovascular mortality, respectively. In women, there was no consistent association between social support indicators and mortality. The present study suggests that in men, social isolation is not only an important risk factor for mortality but is also likely to contribute to differences in mortality by SES.
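The "percent of association explained" figures above are conventionally derived from the attenuation of the log hazard ratio after adjustment for the mediator. A sketch of that arithmetic, using the reported unadjusted all-cause HR (1.59) and a hypothetical adjusted HR, since the abstract does not report the adjusted value:

```python
import math

def percent_explained(hr_unadjusted, hr_adjusted):
    """Percent of an association explained by adjustment, computed as
    the relative attenuation of the log hazard ratio:
    100 * (log HR_unadj - log HR_adj) / log HR_unadj."""
    return (100 * (math.log(hr_unadjusted) - math.log(hr_adjusted))
            / math.log(hr_unadjusted))

# 1.59 is the unadjusted HR from the abstract; 1.40 is hypothetical,
# chosen only to land near the reported 27%
pct = percent_explained(1.59, 1.40)
```

Note that this works on the log scale because hazard ratios are multiplicative; averaging raw HRs would misstate the attenuation.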
Abstract:
BACKGROUND: Thinness in children and adolescents is largely understudied, in contrast with the abundant literature on undernutrition in infants and on overweight in children and adolescents. The aim of this study was to compare the prevalence of thinness, using two recently developed growth references, among children and adolescents living in the Seychelles, an economically rapidly developing country in the African region. METHODS: Weight and height were measured every year in all children of 4 grades (age range: 5 to 16 years) of all schools in the Seychelles as part of a routine school-based surveillance program. In this study we used data collected from 16,672 boys and 16,668 girls examined from 1998 to 2004. Thinness was estimated according to two growth references: i) an international survey (IS), defining three grades of thinness corresponding to a BMI of 18.5, 17.0 and 16.0 kg/m2 at age 18, and ii) the WHO reference, defined here as three categories of thinness (-1, -2 and -3 SD of BMI for age), with the second and third named "thinness" and "severe thinness", respectively. RESULTS: The prevalence of thinness was 21.4%, 6.4% and 2.0% based on the three IS cut-offs, and 27.7%, 6.7% and 1.2% based on the WHO cut-offs. The prevalence of the thinness categories tended to decrease with age for both sexes for the IS reference, and among girls for the WHO reference. CONCLUSION: The prevalence of the first category of thinness was larger with the WHO cut-offs than with the IS cut-offs, while the prevalences of thinness of "grade 2" and "grade 3" (IS cut-offs) were similar to the prevalences of "thinness" and "severe thinness" (WHO cut-offs), respectively.
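At age 18 the IS cut-offs coincide with the fixed adult BMI thresholds quoted above (18.5, 17.0, 16.0 kg/m2); at every other age both references use age- and sex-specific curves, so the classifier below is only a sketch of the grading logic, valid at age 18:

```python
def thinness_grade_at_18(bmi):
    """IS thinness grade for an 18-year-old:
    0 = not thin; 1, 2, 3 = thinness grades
    (BMI < 18.5, < 17.0, < 16.0 kg/m2, respectively)."""
    if bmi < 16.0:
        return 3
    if bmi < 17.0:
        return 2
    if bmi < 18.5:
        return 1
    return 0

# One example per grade, from severe thinness to not thin
grades = [thinness_grade_at_18(b) for b in (15.2, 16.5, 18.0, 22.3)]
```

Grade 1 is inclusive of grades 2 and 3, which is why the abstract's first-category prevalences (21.4% and 27.7%) are so much larger than the severe categories.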
Abstract:
QUESTION UNDER STUDY: Hospitals transferring patients retain responsibility until admission to the new health care facility. We define safe transfer conditions, based on appropriate risk assessment, and evaluate the impact of this strategy as implemented at our institution. METHODS: An algorithm defining transfer categories according to destination, equipment monitoring, and medication was developed and tested prospectively over 6 months. Conformity with algorithm criteria was assessed for every transfer and transfer category. After introduction of a transfer coordination centre with transfer nurses, the algorithm was implemented and the same survey was carried out over 1 year. RESULTS: Over the whole study period, the number of transfers increased by 40%, chiefly by ambulance from the emergency department to other hospitals and private clinics. Transfers to rehabilitation centres and nursing homes were reassigned to conventional vehicles. The percentage of patients requiring equipment during transfer, such as an intravenous line, decreased from 34% to 15%, while oxygen or i.v. drug requirement remained stable. The percentage of transfers considered below theoretical safety decreased from 6% to 4%, while 20% of transfers were considered safer than necessary. A substantial number of planned transfers could be "downgraded" by mutual agreement to a lower degree of supervision, and the system was stable on a short-term basis. CONCLUSION: A coordinated transfer system based on an algorithm determining transfer categories, developed on the basis of simple but valid medical and nursing criteria, reduced unnecessary ambulance transfers and treatment during transfer, and increased adequate supervision.
Abstract:
Treatment costs for some patients are extremely high and may raise the suspicion of inadequate medical care.
As hospital financing systems move towards reimbursement by diagnostic groups, it is essential to assess whether inadequate care is provided, to try to identify such patients upon admission, and to make sure that their outcome is good. For the years 1995 and 1997, treatment costs exceeding the average cost of the corresponding APDRG category by 6 standard deviations were identified, and the charts of the 50 patients with the highest variable costs were analyzed. The total number of patients with such extreme costs diminished from 391 in 1995 to 328 in 1997 (-16%). For the 50 most expensive patients, long stays in several services were frequent, but 90% of these patients left the hospital alive, about half of them directly to their home. They presented considerable variability in diagnoses and operations, but no evidence of inadequate care. Thus, patients qualified as extreme from an economic perspective cannot be qualified as such from a medical perspective, and their outcome is good. To face the pressure linked with the change in financing system, hospitals must develop an internal review system for assessing the adequacy of care, based on clinical characteristics, if they want to guarantee good quality of care and identify potentially inadequate practice.
Abstract:
INTRODUCTION: Walk-in centres may improve access to healthcare for some patients, owing to their convenient location and extended opening hours, with no need for an appointment. Herein we describe and assess a new model of walk-in centre, in which care is provided by residents under the supervision of experienced family doctors. The main aim of the study was to assess patients' satisfaction with the care they received from residents and with the supervision by family doctors. A secondary aim was to describe walk-in patients' demographic characteristics and to identify potential associations with satisfaction. METHODS: The study was conducted in the walk-in centre of Lausanne. Patients who consulted in April 2011 were automatically included and received a questionnaire in French. We used a five-point Likert scale, from "not at all satisfied" to "very satisfied", scored from 1 to 5. We focused on satisfaction with residents' care and with supervision by a family doctor. The former was divided into three categories: "Skills", "Treatment" and "Behaviour". Mean satisfaction was calculated for each category, and a multivariable logistic model was applied to identify associations with patients' demographics. RESULTS: The response rate was 47% (184/395). Walk-in patients were more likely to be women, young, and highly educated. Patients were very satisfied with residents' care, with median satisfaction between 4.5 and 5 for each category. More than 90% of patients were "satisfied" or "very satisfied" that a family doctor was involved in the consultation. DISCUSSION: Patients were highly satisfied with the care provided by residents and with the involvement of a family doctor in the consultation. Older age showed the strongest association with satisfaction, with a positive impact. The high satisfaction reported by walk-in patients supports this new model of walk-in centre.
Abstract:
Ant queens that attempt to disperse and found new colonies independently face high mortality risks. The exposure of queens to soil entomopathogens during claustral colony founding may be particularly harmful, as founding queens lack the protection conferred by mature colonies. Here, we tested the hypotheses that founding queens (I) detect and avoid nest sites that are contaminated by fungal pathogens, and (II) tend to associate with other queens to benefit from social immunity when nest sites are contaminated. Surprisingly, in nest choice assays, young Formica selysi BONDROIT, 1918 queens showed an initial preference for nest sites contaminated by two common soil entomopathogenic fungi, Beauveria bassiana and Metarhizium brunneum. Founding queens showed a similar preference for the related but non-entomopathogenic fungus Fusarium graminearum. In contrast, founding queens had no significant preference for the more distantly related non-entomopathogenic fungus Petromyces alliaceus, nor for heat-killed spores of B. bassiana. Finally, founding queens did not increase the rate of queen association in the presence of B. bassiana. The surprising preference of founding queens for nest sites contaminated by live entomopathogenic fungi suggests that parasites manipulate their hosts or that the presence of specific fungi is a cue associated with suitable nesting sites.
Abstract:
OBJECTIVE: To assess the change in non-compliant items in prescription orders following the implementation of a computerized physician order entry (CPOE) system named PreDiMed. SETTING: The departments of internal medicine (39 and 38 beds) of two regional hospitals in Canton Vaud, Switzerland. METHOD: The prescription lines in the files of 100 pre- and 100 post-implementation patients were classified according to three modes of administration (medicines for oral or other non-parenteral use; medicines administered parenterally or via nasogastric tube; pro re nata (PRN), i.e. as needed) and analyzed for a number of relevant variables constitutive of medical prescriptions. MAIN OUTCOME MEASURE: The monitored variables depended on the pharmaceutical category and included mainly the name of the medicine, pharmaceutical form, posology and route of administration, diluting solution, flow rate, and identification of the prescriber. RESULTS: In 2,099 prescription lines, the total number of non-compliant items was 2,265 before CPOE implementation, or 1.079 non-compliant items per line. Two-thirds of these were due to missing information, and the remaining third to incomplete information. In 2,074 prescription lines after CPOE implementation, the number of non-compliant items had decreased to 221, or 0.107 non-compliant items per line, a dramatic 10-fold decrease (chi2 = 4615; P < 10^-6). Limitations of the computerized system were the risk of erroneous items in some non-prefilled fields and ambiguity due to a field showing doses of commercial products. CONCLUSION: The deployment of PreDiMed in two departments of internal medicine led to a major improvement in formal aspects of physicians' prescriptions. Some limitations of the first version of PreDiMed were unveiled and are being corrected.
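The per-line non-compliance figures follow directly from the counts reported in the abstract; checking the arithmetic:

```python
# Counts reported in the abstract
pre_items, pre_lines = 2265, 2099    # before CPOE implementation
post_items, post_lines = 221, 2074   # after CPOE implementation

pre_rate = pre_items / pre_lines     # non-compliant items per line, pre
post_rate = post_items / post_lines  # non-compliant items per line, post
fold_decrease = pre_rate / post_rate # the "10-fold decrease" in the text
```

Both per-line rates round to the values given in the abstract (1.079 and 0.107), and their ratio is just over 10.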
Abstract:
BACKGROUND: The recent availability of genetic analyses has demonstrated the shortcomings of the current phenotypic method of corneal dystrophy classification. Abnormalities in different genes can cause a single phenotype, whereas different defects in a single gene can cause different phenotypes. Some disorders termed corneal dystrophies do not appear to have a genetic basis. PURPOSE: The purpose of this study was to develop a new classification system for corneal dystrophies, integrating up-to-date information on phenotypic description, pathologic examination, and genetic analysis. METHODS: The International Committee for Classification of Corneal Dystrophies (IC3D) was created to devise a current and accurate nomenclature. RESULTS: This anatomic classification continues to organize dystrophies according to the level chiefly affected. Each dystrophy has a template summarizing genetic, clinical, and pathologic information. A category number from 1 through 4 is assigned, reflecting the level of evidence supporting the existence of a given dystrophy. The most defined dystrophies belong to category 1 (a well-defined corneal dystrophy in which a gene has been mapped and identified and specific mutations are known) and the least defined belong to category 4 (a suspected dystrophy where the clinical and genetic evidence is not yet convincing). The nomenclature may be updated over time as new information regarding the dystrophies becomes available. CONCLUSIONS: The IC3D Classification of Corneal Dystrophies is a new classification system that incorporates many aspects of the traditional definitions of corneal dystrophies with new genetic, clinical, and pathologic information. Standardized templates provide key information that includes a level of evidence for there being a corneal dystrophy. The system is user-friendly and upgradeable and can be retrieved on the website www.corneasociety.org/ic3d.