43 results for Multivariate data analysis
Adenocarcinoma of the pancreas: Comparative single centre analysis between ductal and mucinous type.
Abstract:
1. Background
Adenocarcinomas of the pancreas are exocrine tumors originating from the ductal system and include two morphologically distinct entities: ductal adenocarcinoma and mucinous adenocarcinoma. Ductal adenocarcinoma is by far the most frequent malignant tumor of the pancreas, representing about 90% of all pancreatic cancers. It carries a very poor prognosis, because no biological markers or diagnostic tools currently allow identification of the disease at an early stage. At the time of diagnosis the disease is usually extensive, with vascular and nerve involvement or metastatic spread (1). Five-year survival is less than 5%, making it the fifth leading cause of cancer death in the world (2). The mucinous form of pancreatic adenocarcinoma is less frequent and seems to have a better prognosis, with about 57% survival at 5 years (1)(3)(4).
Each morphologic type of pancreatic adenocarcinoma is associated with particular preneoplastic lesions. Two types of preneoplastic lesions are described: pancreatic intra-epithelial neoplasia (PanIN), which affects the small and peripheral pancreatic ducts, and intraductal papillary-mucinous neoplasm (IPMN), which involves the main pancreatic duct and its principal branches. Both preneoplastic lesions lead, by different mechanisms, to pancreatic adenocarcinoma (1)(2)(3)(4)(5)(6)(7)(8)(9)(10).
The purpose of our study was to retrospectively analyse various clinical and histo-morphological parameters in order to assess differences in survival between these two morphological types of pancreatic adenocarcinoma.
1.2 Material and methods
We conducted a retrospective analysis of 35 patients (20 men and 15 women) who underwent surgical treatment for pancreatic adenocarcinoma at the Surgical Department of the University Hospital in Lausanne.
The patients in our study were treated between 2003 and 2008, allowing at least 5 years of follow-up. For each patient the following parameters were analysed: age, gender, type of operation, type of preneoplastic lesion, TNM stage, histological grade of the tumor, vascular invasion, lymphatic and perineural invasion, resection margins, and adjuvant treatment.
The results of these observations were included in univariate and multivariate statistical analyses and compared with overall survival, as well as with specific survival for each morphologic subtype of adenocarcinoma.
As the low number of mucinous adenocarcinomas (n=5) was insufficient for a pertinent statistical analysis, we compared adenocarcinomas developed on PanIN with adenocarcinomas developed on IPMN, including both ductal and mucinous types.
1.3 Results
Our results show that adenocarcinomas developed on pre-existing IPMN, including both morphologic types (ductal and mucinous), are associated with better survival and prognosis than adenocarcinomas developed on PanIN.
1.4 Conclusion
This study suggests that the most relevant parameter for survival in pancreatic adenocarcinoma is the type of preneoplastic lesion. A significant difference in survival was noted between adenocarcinomas developing on PanIN and those developing on IPMN precursor lesions. Ductal adenocarcinomas developed on IPMN present significantly longer survival than those developed on PanIN lesions (P value = 0.01). We therefore suggest that the histological type of preneoplastic lesion, rather than the histological type of adenocarcinoma, is the determinant prognostic factor for survival in pancreatic adenocarcinoma.
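The survival comparison described above rests on standard tools: Kaplan-Meier estimation and the log-rank test. A minimal sketch of both, on toy event times rather than the study's patient data, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.stats import chi2

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate: (time, S(t)) at each event time."""
    times, events = np.asarray(times), np.asarray(events)
    out, s = [], 1.0
    for t in np.unique(times[events == 1]):
        d = ((times == t) & (events == 1)).sum()   # events at t
        n = (times >= t).sum()                     # number at risk just before t
        s *= 1.0 - d / n
        out.append((t, s))
    return out

def logrank(times_a, events_a, times_b, events_b):
    """Two-group log-rank test; returns the chi-square statistic and p-value."""
    times = np.concatenate([times_a, times_b])
    events = np.concatenate([events_a, events_b])
    in_a = np.concatenate([np.ones(len(times_a), bool), np.zeros(len(times_b), bool)])
    obs_a = exp_a = var = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n, n_a = at_risk.sum(), (at_risk & in_a).sum()
        d = ((times == t) & (events == 1)).sum()
        obs_a += ((times == t) & (events == 1) & in_a).sum()
        exp_a += d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    stat = (obs_a - exp_a) ** 2 / var
    return stat, chi2.sf(stat, df=1)

# toy data in the spirit of the comparison: events occur earlier in one group
t_panin, e_panin = np.array([1, 2, 3, 4, 5]), np.ones(5, int)
t_ipmn, e_ipmn = np.array([10, 11, 12, 13, 14]), np.ones(5, int)
stat, p = logrank(t_panin, e_panin, t_ipmn, e_ipmn)
km = kaplan_meier(t_panin, e_panin)
```

On this toy data the log-rank p-value falls well below 0.05, mirroring the kind of group difference reported above.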
Abstract:
Whether for investigative or intelligence aims, crime analysts often face the need to analyse the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks such as the detection of crime series or the analysis of traces left by digital devices such as mobile phones or GPS devices. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practices. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.
Abstract:
BACKGROUND AND OBJECTIVES: Combination antiretroviral therapy (cART) is changing, and this may affect the type and occurrence of side effects. We examined the frequency of lipodystrophy (LD) and weight changes in relation to the use of specific drugs in the Swiss HIV Cohort Study (SHCS). METHODS: In the SHCS, patients are followed twice a year and scored by the treating physician as having 'fat accumulation', 'fat loss', or neither. Treatments, and reasons for change thereof, are recorded. Our study sample included all patients treated with cART between 2003 and 2006 and, in addition, all patients who started cART between 2000 and 2003. RESULTS: From 2003 to 2006, the percentage of patients taking stavudine, didanosine and nelfinavir decreased, the percentage taking lopinavir, nevirapine and efavirenz remained stable, and the percentage taking atazanavir and tenofovir increased by 18.7 and 22.2%, respectively. In life-table Kaplan-Meier analysis, patients starting cART in 2003-2006 were less likely to develop LD than those starting cART from 2000 to 2002 (P<0.02). LD was quoted as the reason for treatment change or discontinuation for 4% of patients on cART in 2003, and for 1% of patients treated in 2006 (P for trend <0.001). In univariate and multivariate regression analysis, patients with a weight gain of ≥5 kg were more likely to take lopinavir or atazanavir than patients without such a weight gain [odds ratio (OR) 2, 95% confidence interval (CI) 1.3-2.9, and OR 1.7, 95% CI 1.3-2.1, respectively]. CONCLUSIONS: LD has become less frequent in the SHCS from 2000 to 2006. A weight gain of more than 5 kg was associated with the use of atazanavir and lopinavir.
Abstract:
INTRODUCTION/OBJECTIVES: Detection rates for adenoma and early colorectal cancer (CRC) are insufficient due to low compliance with invasive screening procedures such as colonoscopy. Available non-invasive screening tests unfortunately have low sensitivity and specificity. There is therefore a large unmet need for a cost-effective, reliable and non-invasive test to screen for early neoplastic and pre-neoplastic lesions. AIMS & METHODS: The objective is to develop a screening test able to detect early CRCs and adenomas. This test is based on a nucleic acid multi-gene assay performed on peripheral blood mononuclear cells (PBMCs). A colonoscopy-controlled feasibility study was conducted on 179 subjects. The first 92 subjects were used as a training set to generate a statistically significant signature. Colonoscopy revealed 21 subjects with CRC, 30 with adenomas bigger than 1 cm and 41 with no neoplastic or inflammatory lesions. The second group of 48 subjects (controls, CRC and polyps) was used as a test set and will be kept blinded for the entire data analysis. To determine organ and disease specificity, 38 subjects were used: 24 with inflammatory bowel disease (IBD) and 14 with cancers other than CRC (OC). Blood samples were taken from each patient on the day of colonoscopy and PBMCs were purified.
Total RNA was extracted following standard procedures. Multiplex RT-qPCR was applied to 92 different candidate biomarkers. Different univariate and multivariate statistical methods were applied to these candidates, and among them 60 biomarkers with significant p-values (<0.01) were selected. These biomarkers are involved in several different biological functions, such as cellular movement, cell signaling and interaction, tissue and cellular development, cancer, and cell growth and proliferation. Two distinct biomarker signatures, named COLOX CRC and COLOX POL respectively, are used to separate patients without lesions from those with cancer or with adenoma. COLOX performance was validated using a random resampling method (bootstrap). RESULTS: The COLOX CRC and POL tests successfully separated patients without lesions from those with CRC (Se 67%, Sp 93%, AUC 0.87) and from those with adenomas bigger than 1 cm (Se 63%, Sp 83%, AUC 0.77), respectively. 6/24 patients in the IBD group and 1/14 patients in the OC group had a positive COLOX CRC test. CONCLUSION: The two COLOX tests demonstrated high sensitivity and specificity for detecting CRCs and adenomas bigger than 1 cm. A prospective, multicenter, pivotal study is underway to confirm these promising results in a larger cohort.
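The bootstrap validation of sensitivity and specificity mentioned above can be sketched as a percentile bootstrap; the labels below are synthetic toy data, not COLOX results:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se_sp(y_true, y_pred, n_boot=500):
    """Percentile bootstrap 95% CIs for sensitivity and specificity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    se_s, sp_s = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample subjects with replacement
        t, q = y_true[idx], y_pred[idx]
        if (t == 1).any() and (t == 0).any():    # skip degenerate resamples
            se_s.append((q[t == 1] == 1).mean())
            sp_s.append((q[t == 0] == 0).mean())
    pct = lambda v: tuple(np.percentile(v, [2.5, 97.5]))
    return pct(se_s), pct(sp_s)

# toy labels: 30 lesions, 30 controls; a perfect and an imperfect classifier
y_true = np.array([1] * 30 + [0] * 30)
perfect = y_true.copy()
imperfect = y_true.copy()
imperfect[:5] = 0                                # classifier misses 5 lesions
se_ci, sp_ci = bootstrap_se_sp(y_true, perfect)
se_ci2, _ = bootstrap_se_sp(y_true, imperfect)
```

The imperfect classifier's interval drops below 1, illustrating how resampling quantifies the uncertainty of the reported Se/Sp figures.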
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
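The hot-spot filtering and unbiased random sampling described in this summary can be illustrated on synthetic tag positions; the bin size and cutoff below are arbitrary choices for the sketch, not the thesis's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def flag_hotspots(tag_positions, genome_length, bin_size=1000, z=6.0):
    """Flag bins whose tag count far exceeds the genome-wide mean,
    a crude stand-in for filtering artifactual tag pile-ups."""
    n_bins = genome_length // bin_size
    counts = np.bincount(np.asarray(tag_positions) // bin_size, minlength=n_bins)
    mu = counts.mean()
    return np.where(counts > mu + z * np.sqrt(mu))[0]   # Poisson-style cutoff

def subsample_tags(tag_positions, fraction):
    """Unbiased random subsample of sequence tags, without replacement."""
    tags = np.asarray(tag_positions)
    return rng.choice(tags, size=int(len(tags) * fraction), replace=False)

# toy genome: uniform background plus an artifactual pile-up in bin 5
background = rng.integers(0, 1_000_000, size=10_000)
tags = np.concatenate([background, np.full(500, 5_500)])
hot = flag_hotspots(tags, genome_length=1_000_000)
sub = subsample_tags(tags, 0.5)
```

The spiked bin is flagged while the uniform background passes, and the subsample preserves the tag distribution without bias, which is the point of random sampling as an analytical tool.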
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
Abstract:
Dysregulation of intestinal epithelial cell performance is associated with an array of pathologies whose onset mechanisms are incompletely understood. While whole-genome approaches have been valuable for studying the molecular basis of several intestinal diseases, a thorough analysis of gene expression along the healthy gastrointestinal tract is still lacking. The aim of this study was to map gene expression in gastrointestinal regions of healthy human adults and to implement a procedure for microarray data analysis that would allow its use as a reference when screening for pathological deviations. We analyzed the gene expression signature of antrum, duodenum, jejunum, ileum, and transverse colon biopsies using a biostatistical method based on a multivariate and univariate approach to identify region-selective genes. One hundred sixty-six genes were found responsible for distinguishing the five regions considered. Nineteen had never been described in the GI tract, including a semaphorin probably implicated in pathogen invasion and six novel genes. Moreover, by crossing these genes with those retrieved from an existing data set of gene expression in the intestine of ulcerative colitis and Crohn's disease patients, we identified genes that might be biomarkers of Crohn's and/or ulcerative colitis in ileum and/or colon. These include CLCA4 and SLC26A2, both implicated in ion transport. This study furnishes the first map of gene expression along the healthy human gastrointestinal tract. Furthermore, the approach implemented here, and validated by retrieving known gene profiles, allowed the identification of promising new leads in both healthy and disease states.
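A minimal univariate version of the region-selective gene screen is a per-gene one-way ANOVA across regions. Everything below (sample sizes, the significance cutoff, the simulated expression matrix) is illustrative and is not the study's actual biostatistical pipeline:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

def region_selective_genes(expr, regions, alpha=0.01):
    """One-way ANOVA per gene across tissue regions; returns indices of
    genes whose mean expression differs between regions (no multiple-testing
    correction in this sketch)."""
    hits = []
    for g in range(expr.shape[1]):
        groups = [expr[regions == r, g] for r in np.unique(regions)]
        _, p = f_oneway(*groups)
        if p < alpha:
            hits.append(g)
    return hits

# toy data: 25 biopsies x 50 genes over 5 regions; gene 0 is region-selective
regions = np.repeat(np.arange(5), 5)
expr = rng.normal(size=(25, 50))
expr[regions == 4, 0] += 5.0       # strong shift in one region
hits = region_selective_genes(expr, regions)
```

In a real screen a multivariate step and multiple-testing control would be layered on top, as the abstract describes.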
Abstract:
Analyzing functional data often leads to finding common factors, for which functional principal component analysis proves to be a useful tool to summarize and characterize the random variation in a function space. The representation in terms of eigenfunctions is optimal in the sense of L^2 approximation. However, the eigenfunctions are not always directed towards an interesting and interpretable direction in the context of functional data and thus could obscure the underlying structure. To overcome such difficulty, an alternative to functional principal component analysis is proposed that produces directed components which may be more informative and easier to interpret. These structural components are similar to principal components, but are adapted to situations in which the domain of the function may be decomposed into disjoint intervals such that there is effectively independence between intervals and positive correlation within intervals. The approach is demonstrated with synthetic examples as well as real data. Properties for special cases are also studied.
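On a discretized grid, functional PCA reduces to an eigendecomposition of the sample covariance of the curves. A toy sketch on synthetic curves (this illustrates ordinary FPCA, not the paper's proposed structural components):

```python
import numpy as np

rng = np.random.default_rng(2)

def functional_pca(curves, n_components=2):
    """Discretized functional PCA: eigendecomposition of the sample
    covariance of curves evaluated on a common grid."""
    centered = curves - curves.mean(axis=0)
    cov = centered.T @ centered / (len(curves) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = centered @ eigvecs[:, :n_components]
    return eigvals[:n_components], eigvecs[:, :n_components], scores

# toy functional data: random multiples of one sine shape plus small noise,
# so a single eigenfunction should dominate
t = np.linspace(0, 1, 100)
curves = rng.normal(size=(200, 1)) * np.sin(2 * np.pi * t) \
         + 0.01 * rng.normal(size=(200, 100))
eigvals, eigfuns, scores = functional_pca(curves)
```

The dominant eigenvalue captures nearly all variation here; the paper's point is that when variation is segregated into disjoint intervals, components directed at those intervals can be more interpretable than these global eigenfunctions.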
Abstract:
Port-a-Cath® (PAC) devices are totally implantable devices that offer easy, long-term access to the venous circulation. They have been extensively used for intravenous therapy administration and are particularly well suited for chemotherapy in oncologic patients. Previous comparative studies have shown that these devices have the lowest catheter-related bloodstream infection rates among all intravascular access systems. However, bloodstream infection (BSI) remains a major issue of port use, and epidemiological data on PAC-associated BSI (PABSI) rates differ widely between studies. Current literature on PABSI risk factors is also scarce and sometimes controversial. Such heterogeneity may depend on the type of population studied and on local factors. The aim of this study was therefore to describe the local epidemiology of and risk factors for PABSI in adult patients in our tertiary-care university hospital. We conducted a retrospective cohort study to describe local epidemiology, and a nested case-control study to identify local risk factors for PABSI. We analyzed the medical files of adult patients who had a PAC implanted between January 1st, 2008 and December 31st, 2009 and looked for PABSI occurrence before May 1st, 2011 to define cases. Thirty-nine PABSIs occurred in this population, with an attack rate of 5.8%. We estimated an incidence rate of 0.08/1000 PAC-days using the case-control study. PABSI causative agents were mainly Gram-positive cocci (62%). We identified three predictive factors of PABSI by multivariate statistical analysis: neutropenia on outcome date (odds ratio [OR]: 4.05; 95% confidence interval [CI]: 1.05-15.66; p=0.042), diabetes (OR: 11.53; 95% CI: 1.07-124.70; p=0.044) and having another infection than PABSI on outcome date (OR: 6.35; 95% CI: 1.50-26.86; p=0.012).
Patients suffering from acute or chronic renal failure (OR: 4.26; 95% CI: 0.94-19.21; p=0.059) or wearing another invasive device (OR: 2.99; 95% CI: 0.96-9.31; p=0.059) did not show a statistically significant increase in risk at the classical threshold (p<0.05) but remained close to significance. Our study demonstrated that the local epidemiology and microbiology of PABSI in our institution are similar to previous reports. A larger prospective study is required to confirm our results and to test preventive measures.
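Odds ratios with Wald confidence intervals of the kind reported above come from logistic regression. A univariate toy sketch on a simulated binary risk factor (the true odds ratio of 4 is chosen to echo, not reproduce, the neutropenia estimate; the multivariate model adds more columns to X):

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic_odds_ratios(x, y, n_iter=25):
    """Univariate logistic regression via Newton-Raphson; returns the
    odds ratio for the predictor and its Wald 95% CI."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])     # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.linalg.inv(H)[1, 1])
    return np.exp(beta[1]), (np.exp(beta[1] - 1.96 * se),
                             np.exp(beta[1] + 1.96 * se))

# toy cohort: a hypothetical binary risk factor with a true odds ratio of 4
x = rng.integers(0, 2, size=2000).astype(float)
p_event = 1.0 / (1.0 + np.exp(-(-1.0 + np.log(4.0) * x)))
y = (rng.random(2000) < p_event).astype(float)
or_hat, or_ci = logistic_odds_ratios(x, y)
```

With 2000 simulated subjects the estimate lands near the true value of 4; the study's much wider intervals (e.g. 1.07-124.70 for diabetes) reflect its far smaller case counts.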
Abstract:
General Introduction This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Celine Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with their current sets of RoOs, could be a model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "Single List" RoOs used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU RoOs.
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset-review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years after their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting de jure compliance. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than of other measures.
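The count-data side of the analysis (revocations regressed on lagged initiations) can be sketched with a plain Poisson regression on simulated counts; the data are hypothetical, and the true slope of 1 on the log scale merely stands in for the "one-for-one" benchmark discussed above:

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_regression(x, y, n_iter=50):
    """Poisson regression with log link, fitted by Newton-Raphson."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        H = X.T @ (X * mu[:, None])            # Fisher information
        beta += np.linalg.solve(H, X.T @ (y - mu))
    return beta

# toy panel: revocation counts driven by initiations lagged five years
init_lag5 = rng.uniform(0, 1, size=2000)
revocations = rng.poisson(np.exp(0.2 + 1.0 * init_lag5)).astype(float)
beta = poisson_regression(init_lag5, revocations)
```

The fitted slope recovers the true value; in the chapter, the interesting question is whether this coefficient moved toward the benchmark after the 1995 agreement.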
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. This paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
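A compact member of the kernel-methods family described here is RBF kernel ridge regression for mapping a continuous field from scattered monitoring stations. The "pollution" surface below is synthetic, and kernel ridge is a simple stand-in for the paper's SVM-based mapping, not its actual method:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between two sets of coordinates."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(coords, values, gamma=10.0, lam=1e-3):
    """Kernel ridge regression on monitoring-station coordinates;
    returns a predictor for unsampled locations."""
    K = rbf_kernel(coords, coords, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(coords)), values)
    return lambda X: rbf_kernel(X, coords, gamma) @ alpha

# toy "pollution" surface sampled at 300 random monitoring locations
coords = rng.random((300, 2))
values = np.sin(3 * coords[:, 0]) * np.cos(3 * coords[:, 1])
predict = kernel_ridge_fit(coords, values)
grid = rng.random((100, 2))
mse = np.mean((predict(grid)
               - np.sin(3 * grid[:, 0]) * np.cos(3 * grid[:, 1])) ** 2)
```

The kernel width (gamma) and regularization (lam) play the role that kernel and margin parameters play in the SVM setting; in practice they would be tuned, e.g. by cross-validation.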
Abstract:
BACKGROUND: New HIV infections in men who have sex with men (MSM) have increased in Switzerland since 2000 despite combination antiretroviral therapy (cART). The objectives of this mathematical modelling study were: to describe the dynamics of the HIV epidemic in MSM in Switzerland using national data; to explore the effects of hypothetical prevention scenarios; and to conduct a multivariate sensitivity analysis. METHODOLOGY/PRINCIPAL FINDINGS: The model describes HIV transmission, progression and the effects of cART using differential equations. The model was fitted to Swiss HIV and AIDS surveillance data and twelve unknown parameters were estimated. Predicted numbers of diagnosed HIV infections and AIDS cases fitted the observed data well. By the end of 2010, an estimated 13.5% (95% CI 12.5, 14.6%) of all HIV-infected MSM were undiagnosed and accounted for 81.8% (95% CI 81.1, 82.4%) of new HIV infections. The transmission rate was at its lowest from 1995-1999, with a nadir of 46 incident HIV infections in 1999, but increased from 2000. The estimated number of new infections continued to increase to more than 250 in 2010, although the reproduction number was still below the epidemic threshold. Prevention scenarios included temporary reductions in risk behaviour, annual test and treat, and reduction in risk behaviour to levels observed earlier in the epidemic. These led to predicted reductions in new infections from 2 to 26% by 2020. Parameters related to disease progression and relative infectiousness at different HIV stages had the greatest influence on estimates of the net transmission rate. CONCLUSIONS/SIGNIFICANCE: The model outputs suggest that the increase in HIV transmission amongst MSM in Switzerland is the result of continuing risky sexual behaviour, particularly by those unaware of their infection status. Long term reductions in the incidence of HIV infection in MSM in Switzerland will require increased and sustained uptake of effective interventions.
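A differential-equation transmission model of the kind described can be sketched with three compartments and forward-Euler integration. All rates and population sizes below are invented for illustration; they are not the fitted Swiss parameters, and the real model has more compartments and stage-dependent infectiousness:

```python
def simulate_msm_epidemic(beta_u, beta_d, tau, years=20, dt=0.01,
                          s0=9000.0, u0=100.0, d0=0.0):
    """Forward-Euler integration of a minimal three-compartment model:
    S susceptible, U infected-undiagnosed, D infected-diagnosed.
    Undiagnosed individuals transmit at a higher rate (beta_u > beta_d);
    tau is the testing/diagnosis rate. All rates are per year."""
    s, u, d = s0, u0, d0
    for _ in range(int(years / dt)):
        n = s + u + d
        new_inf = (beta_u * u + beta_d * d) * s / n
        # compute all derivatives before updating any state
        ds, du, dd = -new_inf, new_inf - tau * u, tau * u
        s, u, d = s + dt * ds, u + dt * du, d + dt * dd
    return s, u, d

# scenario comparison: baseline testing vs a "test and treat" uptake
baseline = simulate_msm_epidemic(beta_u=0.5, beta_d=0.05, tau=0.2)
test_and_treat = simulate_msm_epidemic(beta_u=0.5, beta_d=0.05, tau=1.0)
```

Raising the diagnosis rate moves people out of the high-transmission undiagnosed compartment faster and reduces cumulative infections, which is the mechanism behind the test-and-treat scenario above.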
Resumo:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular-resolved ultra-small-angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal-space information is accessible by grating interferometry in addition to real-space information. Naturally, the quality of the retrieved data strongly depends on the performance of the analysis procedure employed, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small-angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (Wiener, iterative Wiener and Lucy-Richardson) on realistically modelled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient for the signal characteristics of the given context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
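For readers unfamiliar with the third of the compared procedures, the following is a minimal sketch of one-dimensional Lucy-Richardson deconvolution with circular boundary conditions, matching the periodic nature of the data described above. The kernel and signal here are toy examples, not the article's simulated interferometry signals.

```python
# Minimal 1-D Lucy-Richardson deconvolution with periodic (circular) boundaries.
# Toy illustration only; the PSF is assumed known and normalized to sum to 1.

def conv_circ(x, k):
    """Circular convolution of signal x with kernel k (equal lengths)."""
    n = len(x)
    return [sum(x[(i - j) % n] * k[j] for j in range(n)) for i in range(n)]

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively estimate the latent signal from a blurred observation."""
    n = len(observed)
    psf_mirror = [psf[-j % n] for j in range(n)]   # time-reversed PSF
    estimate = [1.0 / n] * n                       # flat initial guess
    for _ in range(iterations):
        blurred = conv_circ(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = conv_circ(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

A useful property visible in this form is that each multiplicative update conserves the total intensity of the observation when the PSF is normalized, and non-negativity of the estimate is preserved automatically, which is one reason the algorithm is popular for noisy count-like data.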
Resumo:
The integration of specific institutions for teacher education into the higher education system represents a milestone in Swiss educational policy and has broad implications. This thesis explores the organizational and institutional change resulting from this policy reform, and attempts to assess structural change in terms of differentiation and convergence within the system of higher education. Key issues dealt with are, on the one hand, the adoption of a research function by the newly conceptualized institutions of teacher education and, on the other, the positioning of the new institutions within the higher education system. Drawing on actor-centred approaches to differentiation, this dissertation discusses system-level specificities of tertiarized teacher education and asks how these affect institutional configurations and actor constellations. On the basis of qualitative and quantitative empirical data, a comparative analysis was carried out, including case studies of four universities of teacher education as well as multivariate regression analysis of micro-level data on students' educational choices. The study finds that the process of system integration and the adaptation to the research function by the various institutions have unfolded differently depending on the institutional setting and the specific actor constellations. The new institutions have clearly made a strong push to position themselves as a new institutional type and to find their identity beyond the traditional binary divide which assigns the universities of teacher education to the college sector. Potential conflicts have been identified in the divergent cognitive and normative orientations and perceptions of researchers, teacher educators, policy-makers, teachers, and students as to the mission and role of the new type of higher education institution.
Resumo:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of such data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes.

The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization in order to assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes.

I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context.

The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability in cluster shapes, variances, densities and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a-priori setting of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so that when dealing with large datasets the processes can be distributed, reducing the computational cost; (3) only three parameters are necessary to set up the algorithm.

In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows visual exploratory analysis of large, high-dimensional datasets. The algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values, and outliers).

Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets.

In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although these algorithms have been used in several areas, to my knowledge no previous work has applied and compared the performance of these techniques on spatio-temporal geospatial data, as is done in this thesis.

I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by means of the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time, and in the second, to find similar environmental patterns shifted in time.

Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production.

Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
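As a point of reference for the soft competitive learning algorithms compared above, the following is a minimal sketch of classic (non-growing, non-fuzzy) Self-Organizing Map training on one-dimensional data. It is illustrative only and is not the FGHSON; the hierarchical and fuzzy extensions are precisely the thesis's contribution, and all parameter choices here are arbitrary.

```python
# Toy 1-D Self-Organizing Map: a chain of scalar prototypes fitted to
# scalar data via competitive learning with a decaying Gaussian neighbourhood.
import math
import random

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Fit a 1-D chain of prototype scalars to 1-D data points."""
    rng = random.Random(seed)
    units = [rng.random() for _ in range(n_units)]     # random prototypes
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in data:
            # best-matching unit: the closest prototype
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            for i in range(n_units):
                # neighbours of the BMU (in chain index) are also updated
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                units[i] += lr * h * (x - units[i])
    return units
```

The "soft" aspect is the Gaussian neighbourhood `h`: unlike hard k-means, units near the winner in the map topology are also pulled toward each sample, which is what gives SOM-family algorithms their topology-preserving projections.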