15 results for TURF analysis, Binary programming, product design
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND AND PURPOSE: Stroke registries are valuable tools for obtaining information about stroke epidemiology and management. The Acute STroke Registry and Analysis of Lausanne (ASTRAL) prospectively collects epidemiological, clinical, laboratory and multimodal brain imaging data of acute ischemic stroke patients in the Centre Hospitalier Universitaire Vaudois (CHUV). Here, we describe the design and methods used to create ASTRAL and present baseline data of our patients (2003 to 2008). METHODS: All consecutive patients admitted to CHUV between January 1, 2003 and December 31, 2008 with acute ischemic stroke within 24 hours of symptom onset were included in ASTRAL. Patients arriving beyond 24 hours or presenting with transient ischemic attack, intracerebral hemorrhage, subarachnoid hemorrhage, or cerebral venous sinus thrombosis were excluded. Recurrent ischemic strokes were registered as new events. RESULTS: Between 2003 and 2008, 1633 patients and 1742 events were registered in ASTRAL. There was a preponderance of males, even in the elderly. Cardioembolic stroke was the most frequent type of stroke. Most strokes were of minor severity (National Institutes of Health Stroke Scale [NIHSS] score ≤ 4 in 40.8% of patients). Cardioembolic strokes and dissections presented with the most severe clinical picture. A significant number of patients had unknown-onset stroke, including wake-up stroke (n=568, 33.1%). Median time from last-well time to hospital arrival was 142 minutes for known-onset and 759 minutes for unknown-onset stroke. The rate of intravenous or intraarterial thrombolysis between 2003 and 2008 increased from 10.8% to 20.8% in patients admitted within 24 hours of last-well time. Acute brain imaging was performed in 1695 patients (97.3%) within 24 hours. Of the 1358 patients (78%) who underwent acute computed tomography angiography, 717 (52.8%) had significant abnormalities. Of the 1068 supratentorial stroke patients (61.3%) who underwent acute perfusion computed tomography, focal hypoperfusion was demonstrated in 786 (73.6%). CONCLUSIONS: This hospital-based prospective registry of consecutive acute ischemic strokes incorporates demographic, clinical and metabolic data together with acute perfusion and arterial imaging. It is characterized by a high proportion of minor and unknown-onset strokes, short onset-to-admission times for known-onset patients, rapidly increasing thrombolysis rates, and significant vascular and perfusion imaging abnormalities in the majority of patients.
Abstract:
Background and objective: Cefepime was one of the most used broad-spectrum antibiotics in Swiss public acute care hospitals. The drug was withdrawn from the market in January 2007 and then replaced by a generic product from October 2007. The goal of the study was to evaluate changes in the use of broad-spectrum antibiotics after the withdrawal of the original cefepime product. Design: A generalized regression-based interrupted time series model incorporating autocorrelated errors assessed how much the withdrawal changed the monthly use of other broad-spectrum antibiotics (ceftazidime, imipenem/cilastatin, meropenem, piperacillin/tazobactam) in defined daily doses (DDD)/100 bed-days from January 2004 to December 2008 [1, 2]. Setting: 10 Swiss public acute care hospitals (7 with <200 beds, 3 with 200-500 beds). Nine hospitals (group A) had a shortage of cefepime and 1 hospital had no shortage thanks to importation of cefepime from abroad. Main outcome measures: Underlying trend of use before the withdrawal, and changes in the level and in the trend of use after the withdrawal. Results: Before the withdrawal, the average estimated underlying trend (coefficient b1) for cefepime was -0.047 DDD/100 bed-days per month (95% CI -0.086 to -0.009), i.e. decreasing use, and was significant in three hospitals (group A, P < 0.01). Cefepime withdrawal was associated with a significant increase in the level of use (b2) of piperacillin/tazobactam and imipenem/cilastatin in, respectively, one and five hospitals from group A. After the withdrawal, the average estimated trend (b3) was greatest for piperacillin/tazobactam (+0.043 DDD/100 bed-days per month; 95% CI -0.001 to 0.089) and was significant in four hospitals from group A (P < 0.05). The hospital without drug shortage showed no significant change in the trend or the level of use. The hypothesis of seasonality was rejected in all hospitals. Conclusions: The decreased use of cefepime already observed before its withdrawal from the market could be explained by pre-existing difficulties in drug supply. The withdrawal of cefepime resulted in a change in the level of use of piperacillin/tazobactam and imipenem/cilastatin, and an increase in the trend of use of piperacillin/tazobactam thereafter. As such changes generally occur at the price of lower bacterial susceptibility, a manufacturers' commitment to avoid shortages in the supply of their products would be important. As a next step, we will measure the impact of these changes on the costs and susceptibility rates of these antibiotics.
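The interrupted time series design described above (pre-withdrawal trend b1, change in level b2 and change in trend b3 after the withdrawal) can be illustrated with a minimal segmented-regression sketch. The sketch below uses plain OLS in Python on simulated data rather than the autocorrelation-corrected model of the study; all variable names, the cutoff month and the numbers are hypothetical.

```python
# Minimal interrupted-time-series sketch (hypothetical data); the study used a
# generalized regression model with autocorrelated errors, not plain OLS.
import numpy as np
import statsmodels.api as sm

months = np.arange(60)                                   # Jan 2004 .. Dec 2008
cutoff = 37                                              # withdrawal month (hypothetical index)
use = np.random.default_rng(0).normal(3.0, 0.3, 60)      # DDD/100 bed-days (simulated)

after = (months >= cutoff).astype(int)                   # level-change indicator (b2)
time_after = np.where(after == 1, months - cutoff, 0)    # post-withdrawal trend term (b3)

X = sm.add_constant(np.column_stack([months, after, time_after]))
fit = sm.OLS(use, X).fit()
b0, b1, b2, b3 = fit.params                              # baseline, pre-trend, level change, trend change
print(fit.summary())
```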
Abstract:
Issue addressed: Cutaneous melanoma is a significant health problem in New Zealand. Excessive sun exposure in early life increases subsequent risk. This study investigated parental opinions, understanding and practices concerning the sun protection of young children. The study aimed to identify areas where improvements in sun protection may be most needed. Methods: Parents were recruited through licensed childcare centres and kindergartens in Dunedin to take part in semi-structured focus groups. Feedback was obtained from participants in response to summary reports based on audiotapes. Results: Parents noted increased social acceptability of sun protective behaviours and child sunburn was now unacceptable. Past media campaigns were well recalled. The 'time to burn' used in media weather reports was easier to understand than the Ultra Violet Index (UVI), about which more information was wanted. Protective messages were expected to be straightforward, consistent and readily and regularly available. Local radio may provide the most timely, relevant information. There was a perceived lack of authoritative information about sunscreens and sunglasses and a shortage of acceptable protective clothing. Fuller information on sunscreen containers and greater use of UV Protection Factor (UPF) ratings for clothing and Eye Protection Factor (EPF) for sunglasses would assist. The use of shade and rescheduling of activities were scarcely mentioned. Conclusions: Parents were aware of the need for child sun protection but lacked confidence about how best to achieve this. Future health promotion programs should emphasise how optimal protection can be achieved more than why sun protection is needed. Programs should include a repertoire of strategies targeted towards individuals through the education of children and caregivers. They should also aim at achieving modifications in physical and social environments, including appropriate product design and promotion. So what?: The development of a balanced, comprehensive program with environmental components that reinforce protective behaviours has the potential to sustain sun protection among the largest number of children in the longer term.
Abstract:
The flourishing number of publications on the use of isotope ratio mass spectrometry (IRMS) in forensic science denotes the enthusiasm and the attraction generated by this technology. IRMS has demonstrated its potential to distinguish chemically identical compounds coming from different sources. Despite the numerous applications of IRMS to a wide range of forensic materials, its implementation in a forensic framework is less straightforward than it appears. In addition, each laboratory has developed its own strategy of analysis on calibration, sequence design, standards utilisation and data treatment without a clear consensus. Through the experience acquired from research undertaken in different forensic fields, we propose a methodological framework of the whole process using IRMS methods. We emphasize the importance of considering isotopic results as part of a whole approach, when applying this technology to a particular forensic issue. The process is divided into six different steps, which should be considered for a thoughtful and relevant application. The dissection of this process into fundamental steps, further detailed, enables a better understanding of the essential, though not exhaustive, factors that have to be considered in order to obtain results of quality and sufficiently robust to proceed to retrospective analyses or interlaboratory comparisons.
Abstract:
Background: The variety of DNA microarray formats and datasets presently available offers an unprecedented opportunity to perform insightful comparisons of heterogeneous data. Cross-species studies, in particular, have the power of identifying conserved, functionally important molecular processes. Validation of discoveries can now often be performed in readily available public data, which frequently requires cross-platform studies. Cross-platform and cross-species analyses require matching probes on different microarray formats. This can be achieved using the information in microarray annotations and additional molecular biology databases, such as orthology databases. Although annotations and other biological information are stored using modern database models (e.g. relational), they are very often distributed and shared as tables in text files, i.e. flat file databases. This common flat database format thus provides a simple and robust solution to flexibly integrate various sources of information and a basis for the combined analysis of heterogeneous gene expression profiles. Results: We provide annotationTools, a Bioconductor-compliant R package to annotate microarray experiments and integrate heterogeneous gene expression profiles using annotation and other molecular biology information available as flat file databases. First, annotationTools contains a specialized set of functions for mining this widely used database format in a systematic manner. It thus offers a straightforward solution for annotating microarray experiments. Second, building on these basic functions and relying on the combination of information from several databases, it provides tools to easily perform cross-species analyses of gene expression data. Here, we present two example applications of annotationTools that are of direct relevance for the analysis of heterogeneous gene expression profiles, namely a cross-platform mapping of probes and a cross-species mapping of orthologous probes using different orthology databases. We also show how to perform an explorative comparison of disease-related transcriptional changes in human patients and in a genetic mouse model. Conclusion: The R package annotationTools provides a simple solution to handle microarray annotation and orthology tables, as well as other flat molecular biology databases. Thereby, it allows easy integration and analysis of heterogeneous microarray experiments across different technological platforms or species.
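annotationTools itself is an R/Bioconductor package; purely as an illustration of the flat-file matching idea it implements, a minimal pandas sketch of cross-platform and cross-species probe mapping might look as follows. The file names and column names are hypothetical, and this is not the annotationTools API.

```python
# Hypothetical flat-file probe matching: platform A probes -> gene ID -> platform B probes.
import pandas as pd

annot_a = pd.read_csv("platformA_annotation.txt", sep="\t")   # columns: probe_id, gene_id
annot_b = pd.read_csv("platformB_annotation.txt", sep="\t")   # columns: probe_id, gene_id
ortho   = pd.read_csv("orthologs.txt", sep="\t")              # columns: gene_id, ortholog_gene_id

# Cross-platform mapping within one species: join the two annotation tables on gene_id.
cross_platform = annot_a.merge(annot_b, on="gene_id", suffixes=("_A", "_B"))

# Cross-species mapping: go through the orthology table first.
cross_species = (annot_a.merge(ortho, on="gene_id")
                        .merge(annot_b, left_on="ortholog_gene_id",
                               right_on="gene_id", suffixes=("_A", "_B")))

print(cross_platform.head())
print(cross_species.head())
```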
Abstract:
BACKGROUND: Persistence is a key factor for long-term blood pressure control, which is of high prognostic importance for patients at increased cardiovascular risk. Here we present the results of a post-marketing survey including 4769 hypertensive patients treated with irbesartan in 886 general practices in Switzerland. The goal of this survey was to evaluate the tolerability and the blood pressure lowering effect of irbesartan as well as the factors affecting persistence in a large unselected population. METHODS: Prospective observational survey conducted in general practices in all regions of Switzerland. Previously untreated and uncontrolled pre-treated patients were started on a daily dose of 150 mg irbesartan and followed up to 6 months. RESULTS: After an observation time slightly exceeding 4 months, the average reduction in systolic and diastolic blood pressure was 20 mmHg (95% confidence interval (CI) -19.6 to -20.7 mmHg) and 12 mmHg (95% CI -11.4 to -12.1 mmHg), respectively. At this time, 26% of patients had a blood pressure < 140/90 mmHg and 60% had a diastolic blood pressure < 90 mmHg. The drug was well tolerated, with an incidence of adverse events (dizziness, headaches, ...) of 8.0%. In this survey more than 80% of patients were still on irbesartan at 4 months. The most important factors predictive of persistence were the tolerability profile and the ability to achieve a blood pressure target ≤ 140/90 mmHg before visit 2. Patients who switched from a fixed combination treatment tended to discontinue irbesartan more often, whereas those who abandoned the previous treatment because of cough (a class side effect of ACE inhibitors) were more persistent with irbesartan. CONCLUSION: The results of this survey confirm that irbesartan is effective, well tolerated and well accepted by patients, as indicated by the good persistence. This post-marketing survey also emphasizes the importance of the tolerability profile and of achieving early blood pressure control as positive predictors of persistence.
Abstract:
We have devised a program that allows computation of the power of the F-test, and hence determination of appropriate sample and subsample sizes, in the context of the one-way hierarchical analysis of variance with fixed effects. The power at a fixed alternative is an increasing function of the sample size and of the subsample size. The program makes it easy to obtain the power of the F-test for a range of values of sample and subsample sizes, and therefore to choose appropriate sizes based on a desired power. The program can be used for the 'ordinary' case of the one-way analysis of variance, as well as for the hierarchical analysis of variance with two stages of sampling. Examples of the practical use of the program are given.
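The computation behind such a program rests on the noncentral F distribution. A minimal SciPy-based sketch for the 'ordinary' one-way fixed-effects case is shown below; it is not the authors' program, the group means and variances are hypothetical, and the hierarchical two-stage case would modify the degrees of freedom and the noncentrality parameter.

```python
# Power of the one-way fixed-effects ANOVA F-test via the noncentral F distribution.
import numpy as np
from scipy.stats import f, ncf

def anova_power(group_means, sigma, n, alpha=0.05):
    a = len(group_means)                                   # number of groups
    mu = np.mean(group_means)
    lam = n * np.sum((np.asarray(group_means) - mu) ** 2) / sigma ** 2  # noncentrality
    df1, df2 = a - 1, a * (n - 1)
    f_crit = f.ppf(1 - alpha, df1, df2)                    # critical value under H0
    return 1 - ncf.cdf(f_crit, df1, df2, lam)              # P(reject H0 | H1)

# Example: scan per-group sample sizes for a hypothetical set of group means.
for n in (5, 10, 20):
    print(n, round(anova_power([10, 12, 14], sigma=3.0, n=n), 3))
```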
Abstract:
The flourishing number of publications on the use of isotope ratio mass spectrometry (IRMS) in forensic science denotes the enthusiasm and the attraction generated by this technology. IRMS has demonstrated its potential to distinguish chemically identical compounds coming from different sources. Despite the numerous applications of IRMS to a wide range of forensic materials, its implementation in a forensic framework is less straightforward than it appears. In addition, each laboratory has developed its own strategy of analysis on calibration, sequence design, standards utilisation and data treatment without a clear consensus.Through the experience acquired from research undertaken in different forensic fields, we propose a methodological framework of the whole process using IRMS methods. We emphasize the importance of considering isotopic results as part of a whole approach, when applying this technology to a particular forensic issue. The process is divided into six different steps, which should be considered for a thoughtful and relevant application. The dissection of this process into fundamental steps, further detailed, enables a better understanding of the essential, though not exhaustive, factors that have to be considered in order to obtain results of quality and sufficiently robust to proceed to retrospective analyses or interlaboratory comparisons.
Abstract:
The flourishing number of publications on the use of isotope ratio mass spectrometry (IRMS) in forensic science denotes the enthusiasm and the attraction generated by this technology. IRMS has demonstrated its potential to distinguish chemically identical compounds coming from different sources. Despite the numerous applications of IRMS to a wide range of forensic materials, its implementation in a forensic framework is less straightforward than it appears. In addition, each laboratory has developed its own strategy of analysis on calibration, sequence design, standards utilisation and data treatment without a clear consensus. Through the experience acquired from research undertaken in different forensic fields, we propose a methodological framework of the whole process using IRMS methods. We emphasize the importance of considering isotopic results as part of a whole approach, when applying this technology to a particular forensic issue. The process is divided into six different steps, which should be considered for a thoughtful and relevant application. The dissection of this process into fundamental steps, further detailed, enables a better understanding of the essential, though not exhaustive, factors that have to be considered in order to obtain results of quality and sufficiently robust to proceed to retrospective analyses or interlaboratory comparisons.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
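As a rough illustration of the two analytical ideas above (filtering artifactual tag pile-ups and unbiased random sampling of tags), a minimal sketch might look as follows. The bin size, the cutoff and the simulated data are hypothetical and do not reproduce the thesis' actual algorithms.

```python
# Hypothetical sketch: discard tags in 'hot-spot' bins, then draw an unbiased subsample.
import numpy as np

rng = np.random.default_rng(1)
tag_positions = rng.integers(0, 1_000_000, size=200_000)   # simulated tag coordinates

bin_size = 1000
bins = tag_positions // bin_size
counts = np.bincount(bins)

# Flag bins whose tag count is extreme relative to the genome-wide distribution.
threshold = np.mean(counts) + 10 * np.std(counts)           # hypothetical cutoff
hot = np.flatnonzero(counts > threshold)
filtered = tag_positions[~np.isin(bins, hot)]

# Unbiased random sampling (without replacement) of the remaining tags.
sample = rng.choice(filtered, size=min(50_000, filtered.size), replace=False)
print(len(filtered), len(sample))
```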
Abstract:
OBJECTIVES: To provide a global, up-to-date picture of the prevalence, treatment, and outcomes of Candida bloodstream infections in intensive care unit patients and compare Candida with bacterial bloodstream infection. DESIGN: A retrospective analysis of the Extended Prevalence of Infection in the ICU Study (EPIC II). Demographic, physiological, infection-related and therapeutic data were collected. Patients were grouped as having Candida, Gram-positive, Gram-negative, and combined Candida/bacterial bloodstream infection. Outcome data were assessed at intensive care unit and hospital discharge. SETTING: EPIC II included 1265 intensive care units in 76 countries. PATIENTS: Patients in participating intensive care units on the study day. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Of the 14,414 patients in EPIC II, 99 patients had Candida bloodstream infections, for a prevalence of 6.9 per 1000 patients. Sixty-one patients had candidemia alone and 38 patients had combined bloodstream infections. Candida albicans (n = 70) was the predominant species. Primary therapy included monotherapy with fluconazole (n = 39), caspofungin (n = 16), and a polyene-based product (n = 12). Combination therapy was infrequently used (n = 10). Compared with patients with Gram-positive (n = 420) and Gram-negative (n = 264) bloodstream infections, patients with candidemia were more likely to have solid tumors (p < .05) and appeared to have been in an intensive care unit longer (14 days [range, 5-25 days], 8 days [range, 3-20 days], and 10 days [range, 2-23 days], respectively), but this difference was not statistically significant. Severity of illness and organ dysfunction scores were similar between groups. Patients with Candida bloodstream infections, compared with patients with Gram-positive and Gram-negative bloodstream infections, had the greatest crude intensive care unit mortality rates (42.6%, 25.3%, and 29.1%, respectively) and longer intensive care unit lengths of stay (median [interquartile range]) (33 days [18-44], 20 days [9-43], and 21 days [8-46], respectively); however, these differences were not statistically significant. CONCLUSION: Candidemia remains a significant problem in intensive care unit patients. In the EPIC II population, Candida albicans was the most common organism and fluconazole remained the predominant antifungal agent used. Candida bloodstream infections are associated with high intensive care unit and hospital mortality rates and resource use.
Abstract:
OBJECTIVES: To investigate whether associations of smoking with depression and anxiety are likely to be causal, using a Mendelian randomisation approach. DESIGN: Mendelian randomisation meta-analyses using a genetic variant (rs16969968/rs1051730) as a proxy for smoking heaviness, and observational meta-analyses of the associations of smoking status and smoking heaviness with depression, anxiety and psychological distress. PARTICIPANTS: Current, former and never smokers of European ancestry aged ≥16 years from 25 studies in the Consortium for Causal Analysis Research in Tobacco and Alcohol (CARTA). PRIMARY OUTCOME MEASURES: Binary definitions of depression, anxiety and psychological distress assessed by clinical interview, symptom scales or self-reported recall of clinician diagnosis. RESULTS: The analytic sample included up to 58 176 never smokers, 37 428 former smokers and 32 028 current smokers (total N=127 632). In observational analyses, current smokers had 1.85 times greater odds of depression (95% CI 1.65 to 2.07), 1.71 times greater odds of anxiety (95% CI 1.54 to 1.90) and 1.69 times greater odds of psychological distress (95% CI 1.56 to 1.83) than never smokers. Former smokers also had greater odds of depression, anxiety and psychological distress than never smokers. There was evidence for positive associations of smoking heaviness with depression, anxiety and psychological distress (ORs per cigarette per day: 1.03 (95% CI 1.02 to 1.04), 1.03 (95% CI 1.02 to 1.04) and 1.02 (95% CI 1.02 to 1.03) respectively). In Mendelian randomisation analyses, there was no strong evidence that the minor allele of rs16969968/rs1051730 was associated with depression (OR=1.00, 95% CI 0.95 to 1.05), anxiety (OR=1.02, 95% CI 0.97 to 1.07) or psychological distress (OR=1.02, 95% CI 0.98 to 1.06) in current smokers. Results were similar for former smokers. CONCLUSIONS: Findings from Mendelian randomisation analyses do not support a causal role of smoking heaviness in the development of depression and anxiety.
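For readers unfamiliar with how per-study estimates are combined in such meta-analyses, a minimal inverse-variance fixed-effect sketch of pooling per-allele odds ratios is given below. The numbers are illustrative only; this is not the CARTA analysis code.

```python
# Fixed-effect (inverse-variance) meta-analysis of per-allele log odds ratios.
import numpy as np

log_or = np.log([1.02, 0.98, 1.05, 1.00])          # hypothetical per-study ORs
se     = np.array([0.04, 0.05, 0.06, 0.03])        # their standard errors

w = 1 / se**2                                       # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
print(f"pooled OR = {np.exp(pooled):.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```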
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, resulting in the lack of synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
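One widely used way for family-based methods to test and control for population stratification is to decompose each offspring genotype into a between-family and a within-family component and compare their effects. The sketch below illustrates that general idea for a binary trait with a logistic model on simulated data; it is a generic illustration, not the authors' specific method.

```python
# Generic between-/within-family decomposition for a binary trait (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_fam, sibs = 500, 2
geno = rng.binomial(2, 0.3, (n_fam, sibs)).astype(float)    # offspring allele counts (0/1/2)

between = np.repeat(geno.mean(axis=1), sibs)                 # family-mean (between) component
within = geno.ravel() - between                              # within-family deviation
y = rng.binomial(1, 0.2, n_fam * sibs)                       # simulated binary trait (null model)

X = sm.add_constant(np.column_stack([between, within]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # a marked difference between the two slopes would suggest stratification
```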