152 results for methods: data analysis


Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters.

The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools.

In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed, leading to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions.

The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening data classification. Within the classification methods, probabilistic neural networks (PNN) proved to be better adapted for modeling high-threshold categorization and for automation, whereas support vector machines (SVM) performed well under balanced category conditions.

In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
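
As a rough illustration of the kind of exploratory spatial tool discussed above, the following sketch implements a minimal K Nearest Neighbors (KNN) interpolator with inverse-distance weighting; the coordinates, values and choice of k are invented for the example and are not the thesis data or code.

```python
# Minimal KNN spatial interpolator in the spirit of the exploratory tools above.
import numpy as np

def knn_interpolate(coords, values, targets, k=5):
    """Predict each target as the inverse-distance-weighted mean of its k nearest samples."""
    preds = np.empty(len(targets))
    for i, t in enumerate(targets):
        d = np.linalg.norm(coords - t, axis=1)   # distances to all sample points
        idx = np.argsort(d)[:k]                  # k nearest neighbours
        w = 1.0 / np.maximum(d[idx], 1e-12)      # inverse-distance weights
        preds[i] = np.sum(w * values[idx]) / np.sum(w)
    return preds

# Toy usage: 200 random "measurement" points with a smooth trend plus noise.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
values = coords[:, 0] * 0.5 + rng.normal(0, 5, 200)
grid = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]])
print(knn_interpolate(coords, values, grid, k=5))
```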

Abstract:

SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that NFI transcription factors act mainly as activators of transcription and are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors interact only with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
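
To illustrate the random-sampling idea, here is a minimal sketch that subsamples a list of aligned tag positions and tracks a toy coverage statistic across sampling fractions; the tag list, bin size and statistic are stand-ins, not the algorithm developed in the thesis.

```python
# Uniform random subsampling of aligned sequence tags, with a crude
# saturation statistic (distinct 1 kb bins covered) per sampling fraction.
import random

rng = random.Random(42)

def subsample_tags(tags, fraction):
    """Return a uniform random subsample containing `fraction` of the tags."""
    return rng.sample(tags, int(len(tags) * fraction))

# Toy data: 100,000 tag start positions on a 1 Mb chromosome.
tags = [rng.randint(1, 1_000_000) for _ in range(100_000)]
for frac in (0.1, 0.25, 0.5, 1.0):
    sample = subsample_tags(tags, frac)
    bins_hit = len({pos // 1000 for pos in sample})  # distinct 1 kb bins covered
    print(f"{frac:>4.0%} of tags -> {bins_hit} bins covered")
```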

Abstract:

BACKGROUND: Adherence to combination antiretroviral therapy (cART) is a dynamic process; however, changes in adherence behavior over time are insufficiently understood. METHODS: Data on self-reported missed doses of cART were collected every 6 months in Swiss HIV Cohort Study participants. We identified behavioral groups associated with specific cART adherence patterns using trajectory analyses. Repeated-measures logistic regression identified predictors of changes in adherence between consecutive visits. RESULTS: Six thousand seven hundred nine individuals completed 49,071 adherence questionnaires [median 8 per person (interquartile range: 5-10)] during a median follow-up time of 4.5 years (interquartile range: 2.4-5.1). Individuals were clustered into 4 adherence groups: good (51.8%), worsening (17.4%), improving (17.6%), and poor adherence (13.2%). Independent predictors of worsening adherence were younger age, basic education, loss of a roommate, starting intravenous drug use, increasing alcohol intake, depression, longer time with HIV, onset of lipodystrophy, and changing care provider. Independent predictors of improvements in adherence were regimen simplification, changing class of cART, less time on cART, and starting comedications. CONCLUSIONS: Treatment, behavioral changes, and life events influence patterns of drug intake in HIV patients. Clinical care providers should routinely monitor factors related to worsening adherence and intervene early to reduce the risk of treatment failure and drug resistance.
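
As a simplified stand-in for the trajectory analysis mentioned above, the sketch below clusters simulated per-visit adherence series into four groups with k-means; the real study used trajectory models on questionnaire data, and everything here (archetypes, sample size, visit count) is invented.

```python
# Cluster simulated binary adherence series (1 = no missed dose at a visit)
# into four behavioural groups: good / worsening / improving / poor.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n, visits = 400, 8
archetypes = np.array([
    np.full(visits, 0.95),            # good adherence throughout
    np.linspace(0.9, 0.5, visits),    # worsening over time
    np.linspace(0.5, 0.9, visits),    # improving over time
    np.full(visits, 0.45),            # poor adherence throughout
])
labels_true = rng.integers(0, 4, n)
adherence = rng.binomial(1, archetypes[labels_true])  # per-visit outcomes

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(adherence)
for g in range(4):
    print(f"cluster {g}: mean adherence per visit =",
          np.round(adherence[km.labels_ == g].mean(axis=0), 2))
```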

Abstract:

Purpose: To evaluate whether the correlation between in vitro bond strength data and estimated clinical retention rates of cervical restorations after two years depends on whether pooled data from multicenter studies or single-test data are used. Materials and Methods: Pooled mean data for six dentin adhesive systems (Adper Prompt L-Pop, Clearfil SE, OptiBond FL, Prime & Bond NT, Single Bond, and Scotchbond Multipurpose) and four laboratory methods (macroshear, microshear, macrotensile and microtensile bond strength tests) (Scherrer et al., 2010) were correlated with estimated pooled two-year retention rates of Class V restorations using the same adhesive systems. For bond strength data from a single test institute, a literature search in SCOPUS revealed one study that tested all six adhesive systems (microtensile) and two that tested five of the six systems (microtensile, macroshear). The correlation was determined with a database designed to perform a meta-analysis on the clinical performance of cervical restorations (Heintze et al., 2010). The clinical data were pooled and adjusted in a linear mixed model, taking the study effect, dentin preparation, type of isolation and bevelling of enamel into account. A regression analysis was carried out to evaluate the correlation between the clinical and laboratory findings. Results: The regression analysis for the pooled data revealed that only the macrotensile (adjusted R2 = 0.86) and microtensile tests (adjusted R2 = 0.64), but not the shear and microshear tests, correlated well with the clinical findings. For the data from a single test institute, the correlation was not statistically significant. Conclusion: Macrotensile and microtensile bond strength tests showed an adequate correlation with the retention rate of cervical restorations after two years. Bond strength tests should be carried out by different operators and/or research institutes to determine the reliability and technique sensitivity of the material under investigation.
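
For readers who want the mechanics of the reported statistic, the following sketch fits a simple linear regression of retention rate on bond strength and computes the adjusted R2; the six data points are fabricated and do not come from the cited pooled analyses.

```python
# Adjusted R^2 for a lab-vs-clinical regression on six (invented) adhesive systems.
import numpy as np

x = np.array([18.0, 25.0, 40.0, 28.0, 31.0, 45.0])   # mean bond strength (MPa)
y = np.array([70.0, 78.0, 92.0, 80.0, 84.0, 95.0])   # 2-year retention rate (%)

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
n, p = len(x), 1                                     # observations, predictors
r2 = 1 - resid.var() / y.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R2 = {r2:.2f}, adjusted R2 = {adj_r2:.2f}")
```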

Abstract:

BACKGROUND: Solexa/Illumina short-read ultra-high-throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data pose new challenges; currently, a fair proportion of the tags are routinely discarded because they cannot be matched to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm that uses model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and the number of usable tags by an average of 15% compared with Solexa's data processing pipeline. An R package is provided which allows fast and accurate base calling from Solexa's fluorescence intensity files and the production of informative diagnostic plots.
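
A minimal sketch of the ambiguity-coding idea (not the published algorithm or its R package): add bases in decreasing probability order until a confidence threshold is reached, then emit the IUPAC symbol for that set. The probabilities and the 0.9 threshold are illustrative.

```python
# Code ambiguous base calls with IUPAC symbols from per-base probabilities.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def call_base(probs, threshold=0.9):
    """probs: dict base -> probability. Return the smallest IUPAC set reaching threshold."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    chosen, total = set(), 0.0
    for base in ranked:
        chosen.add(base)
        total += probs[base]
        if total >= threshold:
            break
    return IUPAC[frozenset(chosen)]

print(call_base({"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01}))  # -> 'A'
print(call_base({"A": 0.55, "G": 0.40, "C": 0.03, "T": 0.02}))  # -> 'R' (A or G)
```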

Abstract:

Background: Recent reviews of randomized controlled trials have shown that pharmacist interventions improve cardiovascular disease (CVD) risk factors in outpatients. Various interventions were evaluated in different settings, and substantial heterogeneity was observed in the effect estimates. To better express uncertainty in the effect estimates, prediction intervals (PI) have been proposed but are rarely reported. Objective: Pooling data from two systematic reviews, we estimated the effect of pharmacist interventions on systolic blood pressure (BP), computed the PI, and evaluated potential causes of heterogeneity. Methods: Data were pooled from systematic reviews assessing the effect of pharmacist interventions on CVD risk factors in patients with and without diabetes, respectively. Effects were estimated using random-effects models. Results: Systolic BP was an outcome in 31 trials including 12,373 patients. Pharmacist interventions included patient education, patient-reminder systems, measurement of BP, medication management and feedback to the physician, and educational interventions for health care professionals. Pharmacist interventions were associated with a large reduction in systolic BP (-7.5 mmHg; 95% CI: -9.0 to -5.9). There was substantial heterogeneity (I2: 66%). The 95% PI ranged from -13.9 to -1.0 mmHg. The effect tended to be larger if the intervention was conducted in a community pharmacy and if the pharmacist intervened at least monthly. Conclusion: On average, the effect of pharmacist interventions on BP was substantial. However, the wide PI suggests that the effect differed between interventions, with some having modest and others very large effects on BP. Part of the heterogeneity could be due to differences in the setting and in the frequency of the interventions.
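
The pooled estimate and prediction interval reported above can be computed as follows; this sketch uses the DerSimonian-Laird estimator and the standard PI formula mu ± t(k-2) · sqrt(tau² + se²), with five invented trial results in place of the 31 real ones.

```python
# Random-effects pooling (DerSimonian-Laird) with a 95% CI and 95% prediction interval.
import numpy as np
from scipy import stats

effects = np.array([-9.1, -4.3, -12.0, -6.5, -7.8])  # systolic BP change (mmHg), invented
se = np.array([1.8, 1.5, 2.5, 1.2, 2.0])             # standard errors, invented

w = 1 / se**2                                        # fixed-effect weights
mu_fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - mu_fixed) ** 2)            # Cochran's Q
k = len(effects)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (se**2 + tau2)                          # random-effects weights
mu = np.sum(w_star * effects) / np.sum(w_star)
se_mu = np.sqrt(1 / np.sum(w_star))

z = stats.norm.ppf(0.975)
t = stats.t.ppf(0.975, df=k - 2)
ci = (mu - z * se_mu, mu + z * se_mu)
pi = (mu - t * np.sqrt(tau2 + se_mu**2), mu + t * np.sqrt(tau2 + se_mu**2))
print(f"pooled effect {mu:.1f} mmHg, 95% CI {ci[0]:.1f} to {ci[1]:.1f}, "
      f"95% PI {pi[0]:.1f} to {pi[1]:.1f}")
```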

Abstract:

Background: In a previous study, the European Organisation for Research and Treatment of Cancer (EORTC) reported a scoring system to predict the survival of patients with low-grade gliomas (LGGs). A major issue in the diagnosis of brain tumors is the lack of agreement among pathologists. New models based on patients with LGGs diagnosed by central pathology review are needed. Methods: Data from 339 EORTC patients with LGGs diagnosed by central pathology review were used to develop new prognostic models for progression-free survival (PFS) and overall survival (OS). Data from 450 patients with centrally diagnosed LGGs recruited into 2 large studies conducted by North American cooperative groups were used to validate the models. Results: Both PFS and OS were negatively influenced by the presence of baseline neurological deficits, a shorter time since first symptoms (<30 wk), an astrocytic tumor type, and tumors larger than 5 cm in diameter. Early irradiation improved PFS but not OS. Three risk groups were identified (low, intermediate, and high) and validated. Conclusions: We have developed new prognostic models in a more homogeneous LGG population diagnosed by central pathology review. This population better fits modern practice, where patients are enrolled in clinical trials based on central or panel pathology review. We were able to validate the models in a large, external, independent dataset. The models can divide LGG patients into 3 risk groups and provide reliable individual survival predictions. Inclusion of other clinical and molecular factors might still improve the models' predictions.
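
A Cox proportional-hazards model is the natural vehicle for such prognostic factors; the sketch below fits one on simulated data with the four baseline factors named above, using the lifelines library (an implementation choice for this sketch). It illustrates the model class only, not the EORTC model or its coefficients.

```python
# Cox model on simulated data with four binary prognostic factors.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 339
df = pd.DataFrame({
    "neuro_deficit": rng.integers(0, 2, n),
    "symptoms_lt_30wk": rng.integers(0, 2, n),
    "astrocytic": rng.integers(0, 2, n),
    "size_gt_5cm": rng.integers(0, 2, n),
})
# Simulate survival times whose hazard increases with each risk factor.
risk = 0.4 * df.sum(axis=1)
df["time"] = rng.exponential(scale=60 / np.exp(risk))   # months to event/censoring
df["event"] = rng.integers(0, 2, n)                     # 1 = event observed

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```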

Abstract:

General Introduction. This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are widely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large degree.

Part I. In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, in the case of PANEURO, the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.

The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Changing all instruments into an MFC would therefore improve transparency much like the "tariffication" of NTBs.

The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.

The third chapter of the thesis considers whether the EU's Europe Agreements, with the current sets of RoOs, could be the model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "single list" RoOs (used since 1997). Second, using a Constant Elasticity of Transformation (CET) function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II. The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset-review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count data analysis and survival analysis.

First, using Poisson and negative binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
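
The count-data step can be sketched as follows: a Poisson GLM of yearly revocations on initiations lagged five years, here on simulated series; a positive, precisely estimated coefficient on the lagged term would be the signature of a five-year cycle. This illustrates the method class only, not the chapter's specification or data.

```python
# Poisson regression of AD revocations on initiations lagged five years.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
years = np.arange(1985, 2005)
df = pd.DataFrame({"year": years, "initiations": rng.poisson(20, len(years))})
df["init_lag5"] = df["initiations"].shift(5)
# Simulate revocations that loosely track initiations five years earlier.
lam = np.maximum(0.6 * df["init_lag5"].fillna(0).to_numpy(), 1)
df["revocations"] = rng.poisson(lam)
df = df.dropna()                     # drop years without a 5-year lag

X = sm.add_constant(df[["init_lag5"]])
model = sm.GLM(df["revocations"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])     # coefficient on init_lag5 (log scale)
```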

Abstract:

Practical guidelines for monitoring and measuring compounds such as jasmonates, ketols, ketodi(tri)enes and hydroxy-fatty acids, as well as for detecting the presence of novel oxylipins, are presented. Additionally, a protocol for the penetrant analysis of non-enzymatic lipid oxidation is described. Each of the methods, which employ gas chromatography/mass spectrometry, can be applied without specialist knowledge or recourse to the latest analytical instrumentation. Additional information on oxylipin quantification and novel protocols for preparing oxygen isotope-labelled internal standards are provided. Four developing areas of research are identified: (i) profiling of the unbound cellular pools of oxylipins; (ii) profiling of esterified oxylipins and/or monitoring of their release from parent lipids; (iii) monitoring of non-enzymatic lipid oxidation; (iv) analysis of unstable and reactive oxylipins. The methods and protocols presented herein are designed to give technical insights into the first three areas and to provide a platform from which to enter the fourth area.

Abstract:

Background and aims: Limited data from large cohorts are available on switching between tumor necrosis factor (TNF) antagonists (infliximab, adalimumab, certolizumab pegol) over time. We aimed to evaluate the prevalence of switching from one TNF antagonist to another and to identify associated risk factors. Methods: Data from the Swiss Inflammatory Bowel Diseases Cohort Study (SIBDCS) were analyzed. Results: Of 1731 patients included in the SIBDCS (956 with Crohn's disease [CD] and 775 with ulcerative colitis [UC]), 347 CD patients (36.3%) and 129 UC patients (16.6%) were treated with at least one TNF antagonist. A total of 53/347 (15.3%) CD patients (median disease duration 9 years) and 20/129 (15.5%) UC patients (median disease duration 7 years) needed to switch to a second and/or a third TNF antagonist. Median treatment duration was longest for the first TNF antagonist used (CD 25 months; UC 14 months), followed by the second (CD 13 months; UC 4 months) and the third (CD 11 months; UC 15 months). Primary nonresponse, loss of response and side effects were the major reasons to stop and/or switch TNF antagonist therapy. A low body mass index, a short diagnostic delay and extraintestinal manifestations at inclusion were identified as risk factors for a switch from the first TNF antagonist within 24 months of its use in CD patients. Conclusion: Switching of TNF antagonists over time is a common issue. The median treatment duration with a specific TNF antagonist diminishes as the number of TNF antagonists used increases.

Abstract:

OBJECTIVES: The purpose of this study was to determine whether thoracic endovascular aortic repair (TEVAR) reduces death and morbidity compared with open surgical repair for descending thoracic aortic disease. BACKGROUND: The role of TEVAR versus open surgery remains unclear. Metaregression can be used to maximally inform adoption of new technologies by utilizing evidence from existing trials. METHODS: Data from comparative studies of TEVAR versus open repair of the descending aorta were combined through meta-analysis. Metaregression was performed to account for baseline risk factor imbalances, study design, and thoracic pathology. Due to significant heterogeneity, registry data were analyzed separately from comparative studies. RESULTS: Forty-two nonrandomized studies involving 5,888 patients were included (38 comparative studies, 4 registries). Patient characteristics were balanced except for age, as TEVAR patients were usually older than open surgery patients (p = 0.001). Registry data suggested overall perioperative complications were reduced. In comparative studies, all-cause mortality at 30 days (odds ratio [OR]: 0.44, 95% confidence interval [CI]: 0.33 to 0.59) and paraplegia (OR: 0.42, 95% CI: 0.28 to 0.63) were reduced for TEVAR versus open surgery. In addition, cardiac complications, transfusions, reoperation for bleeding, renal dysfunction, pneumonia, and length of stay were reduced. There was no significant difference in stroke, myocardial infarction, aortic reintervention, and mortality beyond 1 year. Metaregression to adjust for age imbalance, study design, and pathology did not materially change the results. CONCLUSIONS: Current data from nonrandomized studies suggest that TEVAR may reduce early death, paraplegia, renal insufficiency, transfusions, reoperation for bleeding, cardiac complications, pneumonia, and length of stay compared with open surgery. Sustained benefits on survival have not been proven.

Abstract:

Whether for investigative or intelligence purposes, crime analysts often need to analyse the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks such as the detection of crime series or the analysis of traces left by digital devices like mobile phones or GPS devices. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practices. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.
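
A toy example of the kind of spatiotemporal view such a methodology supports: events plotted in space and coloured by time, so that serial activity appears as a coherent colour gradient. Coordinates and timestamps are fabricated, and this is not the dedicated tool described in the article.

```python
# Plot event locations coloured by time to expose spatiotemporal patterns.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 30, 60))          # event day within a month
x = np.cumsum(rng.normal(0.3, 0.5, 60))      # locations drifting east over time
y = np.cumsum(rng.normal(0.1, 0.5, 60))

sc = plt.scatter(x, y, c=t, cmap="viridis")
plt.colorbar(sc, label="day of month")
plt.xlabel("easting (km)")
plt.ylabel("northing (km)")
plt.title("Event locations coloured by time")
plt.show()
```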

Abstract:

Quantitative information from magnetic resonance imaging (MRI) may substantiate clinical findings and provide additional insight into the mechanism of clinical interventions in therapeutic stroke trials. The PERFORM study is exploring the efficacy of terutroban versus aspirin for secondary prevention in patients with a history of ischemic stroke. We report on the design of an exploratory longitudinal MRI follow-up study performed in a subgroup of the PERFORM trial. An international multi-centre longitudinal MRI follow-up study was designed for different MR systems, employing safety and efficacy readouts: new T2 lesions, new DWI lesions, whole brain volume change, hippocampal volume change, changes in tissue microstructure as depicted by mean diffusivity and fractional anisotropy, vessel patency on MR angiography, and the presence and development of new microbleeds. A total of 1,056 patients (men and women ≥55 years) were included. The data analysis included 3D reformation, image registration of different contrasts, tissue segmentation, and automated lesion detection. This large international multi-centre study demonstrates how new MRI readouts can be used to provide key information on the evolution of lesions in cerebral tissue and the macrovasculature after atherothrombotic stroke in a large sample of patients.

Abstract:

Background: Numerous studies have shown a negative association between birth weight (BW) and blood pressure (BP) later in life. To estimate the direct effect of BW on BP, it is conventional to condition on current weight (CW). However, such conditioning can induce collider stratification bias in the estimate of the direct effect. Objective: To bound the potential bias due to U, an unmeasured common cause of CW and BP, on the estimate of the (controlled) direct effect of BW on BP. Methods: Data from a school-based study in Switzerland were used (N = 4,005; 2,010 boys/1,995 girls; mean age: 12.3 yr [range: 10.1-14.9]). Measured common causes of BW-BP (SES, smoking, body weight, and hypertension status of the mother) and of CW-BP (breastfeeding and the child's physical activity and diet) were identified with DAGs. Linear regression models were fitted to estimate the association between BW and BP. Sensitivity analyses were conducted to assess the potential effect of U on the association between BW and BP. U was assumed 1) to be a binary variable that affected BP by the same magnitude in low-BW and in normal-BW children and 2) to have a different prevalence in low-BW and normal-BW children for a given CW. Results: A small negative association was observed between BW and BP [beta: -0.3 mmHg/kg (95% CI: -0.9 to 0.3)]. The association was strengthened upon conditioning on CW [beta: -1.5 mmHg/kg (95% CI: -2.1 to -0.9)]. Upon further conditioning on common causes of BW-BP and CW-BP, the association did not change substantially [beta: -1.4 mmHg/kg (95% CI: -2.0 to -0.8)]. The negative association could be explained by U only if U was strongly associated with BP and if there was a large difference in the prevalence of U between low-BW and normal-BW children. Conclusion: The observed negative association between BW and BP upon adjustment for CW was not easily explained by an unmeasured common cause of CW and BP.
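
The sensitivity analysis can be made concrete with the standard bias formula for a binary unmeasured confounder: at a given CW, a U that raises BP by gamma mmHg and has prevalence p_low in low-BW and p_norm in normal-BW children contributes gamma * (p_low - p_norm) to the low-vs-normal BW contrast. The grid below is purely illustrative; the starting contrast and all (gamma, prevalence) values are invented, not the study's inputs.

```python
# Grid-scan how large U's effect and prevalence difference must be to explain
# an observed BP contrast between low- and normal-BW children at fixed CW.
observed_diff = 3.0   # mmHg: BP(low BW) - BP(normal BW) at fixed CW (illustrative)

print("gamma  p_low  p_norm   bias   corrected")
for gamma in (2.0, 5.0, 10.0):                     # effect of U on BP (mmHg)
    for p_low, p_norm in ((0.3, 0.2), (0.5, 0.2), (0.8, 0.1)):
        bias = gamma * (p_low - p_norm)            # spurious part of the contrast
        corrected = observed_diff - bias           # contrast net of U
        print(f"{gamma:5.1f}  {p_low:5.2f}  {p_norm:6.2f}  {bias:5.1f}  {corrected:9.1f}")
```

Only the cells with both a large gamma and a large prevalence difference drive the corrected contrast toward zero, which mirrors the abstract's conclusion that a modest U cannot explain the observed association.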

Abstract:

Background: Synovial sarcoma (SS) is a malignant soft tissue sarcoma with a poor prognosis because of late local recurrence and distant metastases. To our knowledge, no studies with a minimum follow-up of 10 years have evaluated long-term outcomes for survivors. Patients and methods: Data on 62 patients treated for SS from 1968 to 1999 were studied retrospectively in a multicenter study. Mean follow-up was 17.2 years for living patients and 7.7 years for deceased patients. Results: Mean age at diagnosis was 35.4 years (range 6-82 years). Overall survival was 38.7%. The 5-year survival was 74.2%, 10-year survival 61.2%, and 15-year survival 46.5%. Fifteen patients (24%) died of disease after 10 years of follow-up. Local recurrence occurred after a mean of 3.6 years (range 0.5-14.9 years) and metastases after a mean of 5.7 years (range 0.5-16.3 years). Only four patients received technically correct treatment, with a planned biopsy followed by wide resection or amputation. Factors associated with significantly worse prognosis included larger tumor size, metastases at the time of diagnosis, high-grade histology, trunk-related disease, and lack of wide resection as the primary surgical treatment. Conclusions: In SS, metastases develop late and carry high mortality. Patients with SS should be followed for more than 10 years.
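
The reported 5-, 10- and 15-year figures are the kind of estimates a Kaplan-Meier analysis produces when evaluated at those time points; the sketch below shows the computation with the lifelines library (a library choice for this sketch) on simulated durations, not the 62-patient series.

```python
# Kaplan-Meier survival estimates at fixed horizons on simulated data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(11)
n = 62
durations = rng.exponential(scale=12.0, size=n)  # years to death or censoring
events = rng.integers(0, 2, size=n)              # 1 = death observed, 0 = censored

kmf = KaplanMeierFitter().fit(durations, event_observed=events)
print(kmf.survival_function_at_times([5, 10, 15]))  # survival at 5/10/15 years
```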