103 results for Model-based Categorical Sequence Clustering
Abstract:
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society was focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high spatial resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
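A simple unsupervised change detection scheme of the kind mentioned above can be sketched as differencing two co-registered pre- and post-event images and thresholding the difference map. The thresholding rule (mean plus k standard deviations) and the toy images below are illustrative assumptions, not the contest algorithm.

```python
import numpy as np

def change_map(pre, post, k=2.0):
    """Flag changed pixels by thresholding the absolute difference of two
    co-registered images at mean + k * std (assumed rule, for illustration)."""
    diff = np.abs(post.astype(float) - pre.astype(float))
    thresh = diff.mean() + k * diff.std()
    return diff > thresh

# Toy example: a flooded patch darkens part of the post-event image.
pre = np.full((8, 8), 100.0)
post = pre.copy()
post[2:5, 2:5] = 20.0  # hypothetical flooded area
mask = change_map(pre, post, k=1.0)
```

In practice the threshold would be chosen per scene (or by an automatic method such as Otsu's), and radar speckle would call for filtering before differencing.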
Abstract:
This paper presents a new type of very fine grid hydrological model based on the spatiotemporal repartition of a PMP (Probable Maximum Precipitation) and on the topography. The goal is to estimate the influence of this rain on the PMF (Probable Maximum Flood) of a catchment area in Switzerland. The spatiotemporal distribution of the PMP was realized using six clouds modeled by the advection-diffusion equation. The equation describes the movement of the clouds over the terrain and also gives the evolution of the rain intensity in time. This hydrological modeling is followed by a hydraulic modeling of the surface and subterranean flow, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, resurgence and snowmelt. These added factors make the developed model closer to reality and also offer flexibility in the initial conditions, alongside the factors concerning the PMP such as the duration of the rain and the speed and direction of the wind. All these initial conditions taken together offer a complete image of the PMF.
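The advection-diffusion transport that moves a rain cell over the terrain can be sketched in one dimension with an explicit finite-difference scheme. The grid, wind speed, diffusivity and time step below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def advect_diffuse(c, u, D, dx, dt, steps):
    """Explicit upwind advection plus central-difference diffusion of a
    concentration profile c (e.g. rain-cell intensity) driven by wind
    speed u and diffusivity D, with periodic boundaries. 1-D sketch only."""
    c = c.astype(float).copy()
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                     # upwind, u > 0
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + dif)
    return c

x = np.linspace(0.0, 10.0, 101)          # dx = 0.1
c0 = np.exp(-((x - 2.0) ** 2))           # initial cloud centred at x = 2
# Advect for t = 3 time units: the peak should drift to about x = 5.
c1 = advect_diffuse(c0, u=1.0, D=0.01, dx=0.1, dt=0.01, steps=300)
```

The time step respects the CFL condition (u dt / dx = 0.1); the upwind scheme adds some numerical diffusion, which spreads the cell but does not change its drift speed.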
Abstract:
Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approx. 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies.
We then evaluated statistically the degree of presence of these template maps across trials, and whether and when this differed across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiologic lines. As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions forcibly reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data.
Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level providing a novel tool to compare normal electrophysiological responses versus single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
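The template-map idea above (reduce trials to a few representative topographies, then score each trial by how strongly a template is present) can be sketched on synthetic data. The sketch substitutes condition-mean templates and a correlation score for the full Mixture of Gaussians model, and computes the ROC area with the rank-sum identity; all data below are simulated, not EEG recordings.

```python
import numpy as np

def template_classify(train_X, train_y, test_X):
    """Score single trials by correlating each trial's topography with the
    two condition-mean template maps (simplified stand-in for the MofGs)."""
    t0 = train_X[train_y == 0].mean(axis=0)
    t1 = train_X[train_y == 1].mean(axis=0)
    def corr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return (a * b).mean()
    return np.array([corr(x, t1) - corr(x, t0) for x in test_X])  # > 0: cond 1

def roc_auc(scores, labels):
    """ROC curve area via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Simulated "topographies": 100 trials per condition, 32 electrodes.
rng = np.random.default_rng(0)
map_a, map_b = rng.normal(0, 1, 32), rng.normal(0, 1, 32)
X = np.vstack([map_a + 0.5 * rng.normal(0, 1, (100, 32)),
               map_b + 0.5 * rng.normal(0, 1, (100, 32))])
y = np.array([0] * 100 + [1] * 100)
train, test = np.r_[0:90, 100:190], np.r_[90:100, 190:200]  # 90/10 split
auc = roc_auc(template_classify(X[train], y[train], X[test]), y[test])
```

With well-separated templates the held-out AUC approaches 1; the abstract's 0.80-0.85 values reflect the far noisier single-trial EEG setting.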
Abstract:
In Switzerland, the annual cost of damage caused by natural hazards has been increasing for several years despite the introduction of protective measures. Mainly induced by material destruction, the majority of this cost must be borne by building insurance companies. In many European countries, governments and insurance companies consider prevention strategies to reduce vulnerability. In Switzerland, since 2004, the cost of damage due to natural hazards has surpassed the cost of damage due to fire, a traditional activity of the cantonal insurance companies (ECA). Therefore, the strategy for efficient fire prevention incorporates a reduction of the vulnerability of buildings. The thesis seeks to illustrate the relevance of such an approach when applied to the damage caused by natural hazards. It examines the role and place of insurance and its involvement in targeted prevention of natural disasters. Integrated risk management requires a sound understanding of all risk parameters. The first part of the thesis is devoted to the theoretical development of the key concepts that influence risk management, such as hazard, vulnerability, exposure and damage. The literature on this subject, very prolific in recent years, was taken into account and put in perspective in the context of this study. Among the risk parameters, the thesis shows that vulnerability is a factor that can be influenced efficiently in order to limit the cost of damage to buildings. This is confirmed through the development of an analysis method, which has led to a tool to assess flood damage to buildings. The tool, designed for the property insurer or owner, proposes several steps, namely: - vulnerability and damage potential assessment; - proposals for remedial and risk-reduction measures derived from an analysis of the costs of a potential flood; - adaptation of a global strategy in high-risk areas based on the elements at risk.
The final part of the thesis is devoted to the study of a hail event in order to provide a better understanding of damage to buildings. For this, two samples from the available claims data were selected and analysed in the study. The results, both at the level of the insured portfolio and of the individual analysis, allow the identification of new trends. A second objective of the study was to develop a hail model based on the available data. The model simulates a random distribution of intensities and, coupled with a risk model, proposes a simulation of damage costs for the determined study area. The perspectives of this work allow a better focus on the role of insurance and its needs in terms of prevention.
Abstract:
Executive Summary: Electricity is crucial for modern societies, so it is important to understand the behaviour of electricity markets in order to be prepared for the consequences of policy changes. The Swiss electricity market is now in transition from a public monopoly to a liberalised market, and it is undergoing an "emergent" liberalisation, i.e. liberalisation taking place without proper regulation. The withdrawal of nuclear capacity is also being debated. These two possible changes directly affect the mechanisms for capacity expansion. Thus, in this thesis we concentrate on understanding the dynamics of capacity expansion in the Swiss electricity market. A conceptual model to help understand the dynamics of capacity expansion in the Swiss electricity market is developed and explained in the first essay. We identify a potential risk of import dependence. In the second essay, a System Dynamics model, based on the conceptual model, is developed to evaluate the consequences of three scenarios: a nuclear phase-out, the implementation of a policy for avoiding import dependence, and the combination of both. We conclude that the Swiss market is not well prepared to face unexpected changes in supply and demand, and we identify a risk of import dependence, mainly in the case of a nuclear phase-out. The third essay focuses on the opportunity cost of hydro-storage power generation, one of the main generation sources in Switzerland. We use an extended version of our model to test different policies for assigning an opportunity cost to hydro-storage power generation. We conclude that the preferred policies differ among market participants and depend on market structure.
Abstract:
Secreted proteases constitute potential virulence factors of dermatophytes. A total of seven genes encoding putative serine proteases of the subtilisin family (SUB) were isolated in Trichophyton rubrum. Based on sequence data and intron-exon structure, a phylogenetic analysis of subtilisins from T. rubrum and other fungi revealed a presumed ancestral lineage comprising T. rubrum SUB2 and Aspergillus SUBs. All other SUBs (SUB1, SUB3-7) are dermatophyte-specific and have apparently emerged more recently, through successive gene duplication events. We showed that two subtilisins, Sub3 and Sub4, were detected in culture supernatants of T. rubrum grown in a medium containing soy protein as a sole nitrogen source. Both recombinant enzymes produced in Pichia pastoris are highly active on keratin azure suggesting that these proteases play an important role in invasion of keratinised tissues by the fungus. The set of deduced amino acid sequences of T. rubrum SUB ORFs allowed the identification of orthologous Subs secreted by other dermatophyte species using proteolysis and mass spectrometry.
Abstract:
The SLC2 family of glucose and polyol transporters comprises 13 members, the glucose transporters (GLUT) 1-12 and the H(+)/myo-inositol cotransporter (HMIT). These proteins all contain 12 transmembrane domains, with both the amino- and carboxy-terminal ends located on the cytoplasmic side of the plasma membrane and an N-linked oligosaccharide side-chain located either on the first or fifth extracellular loop. Based on sequence comparison, the GLUT isoforms can be grouped into three classes: class I comprises GLUT1-4; class II, GLUT6, 8, 10, and 12; and class III, GLUT5, 7, 9, 11 and HMIT. Despite their sequence similarity and the presence of class-specific signature sequences, these transporters carry various hexoses, and HMIT is a H(+)/myo-inositol co-transporter. Furthermore, the substrate transported by some isoforms has not yet been identified. Tissue- and cell-specific expression of the well-characterized GLUT isoforms underlies their specific role in the control of whole-body glucose homeostasis. Numerous studies with transgenic or knockout mice indeed support an important role for these transporters in the control of glucose utilization, glucose storage and glucose sensing. Much remains to be learned about the transport functions of the recently discovered isoforms (GLUT6-13 and HMIT) and their physiological role in the metabolism of glucose, myo-inositol and perhaps other substrates.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based on two GB/SA parameters solely, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability lay the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
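The core of such an approach, a multiple linear model mapping a small number of solvation descriptors to log Po/w and validated on a held-out split, can be sketched as follows. The two descriptors, coefficients and molecules below are synthetic stand-ins, not the actual GB/SA parameters or data of iLOGP.

```python
import numpy as np

def fit_logp_model(descriptors, logp):
    """Ordinary least-squares fit of log Po/w against solvation-derived
    descriptors (hypothetical GB/SA-like terms), with an intercept."""
    A = np.column_stack([descriptors, np.ones(len(logp))])
    coef, *_ = np.linalg.lstsq(A, logp, rcond=None)
    return coef

def predict(coef, descriptors):
    A = np.column_stack([descriptors, np.ones(len(descriptors))])
    return A @ coef

def mae(y, yhat):
    return np.abs(y - yhat).mean()

# Synthetic illustration: 200 "molecules", two descriptors linearly
# related to logP plus measurement noise; 150/50 train/test split.
rng = np.random.default_rng(1)
D = rng.normal(0, 1, (200, 2))
logp = 1.5 * D[:, 0] - 0.8 * D[:, 1] + 0.3 + 0.1 * rng.normal(0, 1, 200)
coef = fit_logp_model(D[:150], logp[:150])
err = mae(logp[150:], predict(coef, D[150:]))
```

The abstract's internal validation (five-fold cross-validation and data randomization) would wrap this fit-and-score step in resampling loops over the training set.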
Abstract:
The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be partly seen as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images, is presented, illustrating that the proposed LR model is reliably guiding towards the right proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
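Independently of the SVM machinery, the likelihood-ratio construction itself, LR = p(score | match) / p(score | close non-match), can be sketched by fitting a class-conditional density to comparison scores from each population. The Gaussian densities and toy similarity scores below are illustrative assumptions, not the paper's model or data.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, match_scores, nonmatch_scores):
    """LR = p(score | match) / p(score | close non-match), with each
    class-conditional density modelled as a Gaussian fitted to example
    scores. A stand-in for the paper's SVM-based probabilistic framework."""
    def fit(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)
        return mu, math.sqrt(var)
    mu_m, sd_m = fit(match_scores)
    mu_n, sd_n = fit(nonmatch_scores)
    return gaussian_pdf(score, mu_m, sd_m) / gaussian_pdf(score, mu_n, sd_n)

# Toy minutiae-similarity scores: matches high, close non-matches lower.
match = [0.80, 0.85, 0.90, 0.78, 0.88, 0.83]
nonmatch = [0.40, 0.45, 0.35, 0.50, 0.42, 0.38]
lr_high = likelihood_ratio(0.86, match, nonmatch)  # supports same source
lr_low = likelihood_ratio(0.41, match, nonmatch)   # supports different source
```

An LR above 1 supports the same-source proposition and below 1 the different-source proposition; the paper's contribution is learning the score and its class-conditional behaviour from morphometric and spatial minutiae features.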
Abstract:
Pd1-xInx thin films (0.4 < x < 0.56) were prepared by radio frequency sputtering from a multi-zone target. The properties of these Hume-Rothery alloys were studied by X-ray diffractometry, electron probe microanalysis and scanning tunneling microscopy. The diffraction spectra were analyzed to obtain the intensity ratio of the (100) superlattice line to the (200) normal line, together with the variations of the lattice constant. The results are explained quantitatively by a model based on point defects, i.e. Pd vacancies in In-rich films and Pd antisite atoms in Pd-rich films. In-rich films grow preferentially in the [100] direction, while Pd-rich films grow preferentially in the [110] direction. The grains in indium-rich sputtered films appear to be enclosed in an atomically thick, indium-rich layer. The role of texture and the influence of point defects on electrical resistivity are also reported.
Abstract:
CCAAT/enhancer-binding protein (C/EBP) family members are transcription factors involved in important physiological processes, such as cellular proliferation and differentiation, regulation of energy homeostasis, inflammation, and hematopoiesis. Transcriptional activation by C/EBPalpha and C/EBPbeta involves the coactivators CREB-binding protein (CBP) and p300, which promote transcription by acetylating histones and recruiting basal transcription factors. In this study, we show that C/EBPdelta is also using CBP as a coactivator. Based on sequence homology with C/EBPalpha and -beta, we identify in C/EBPdelta two conserved amino acid segments that are necessary for the physical interaction with CBP. Using reporter gene assays, we demonstrate that mutation of these residues prevents CBP recruitment and diminishes the transactivating potential of C/EBPdelta. In addition, our results indicate that C/EBP family members not only recruit CBP but specifically induce its phosphorylation. We provide evidence that CBP phosphorylation depends on its interaction with C/EBPdelta and define point mutations within one of the two conserved amino acid segments of C/EBPdelta that abolish CBP phosphorylation as well as transcriptional activation, suggesting that this new mechanism could be important for C/EBP-mediated transcription.
Abstract:
BACKGROUND: There is an emerging knowledge base on the effectiveness of strategies to close the knowledge-practice gap. However, less is known about how attributes of an innovation and other contextual and situational factors facilitate and impede an innovation's adoption. The Healthy Heart Kit (HHK) is a risk management and patient education resource for the prevention of cardiovascular disease (CVD) and promotion of cardiovascular health. Although previous studies have demonstrated the HHK's content validity and practical utility, no published study has examined physicians' uptake of the HHK and factors that shape its adoption. OBJECTIVES: Conceptually informed by Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour, this study had two objectives: (1) to determine if specific attributes of the HHK as well as contextual and situational factors are associated with physicians' intention and actual usage of the HHK; and (2) to determine if any contextual and situational factors are associated with individual or environmental barriers that prevent the uptake of the HHK among those physicians who do not plan to use the kit. METHODS: A sample of 153 physicians who responded to an invitation letter sent to all family physicians in the province of Alberta, Canada was recruited for the study. Participating physicians were sent a HHK, and two months later a study questionnaire assessed primary factors on the physicians' clinical practice, attributes of the HHK (relative advantage, compatibility, complexity, trialability, observability), confidence and control using the HHK, barriers to use, and individual attributes. All measures were used in path analysis, employing a causal model based on Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour. RESULTS: 115 physicians (follow-up rate of 75%) completed the questionnaire. Use of the HHK was associated with intention to use the HHK, relative advantage, and years of experience.
Relative advantage and the observability of the HHK benefits were also significantly associated with physicians' intention to use the HHK. Physicians working in solo medical practices reported experiencing more individual and environmental barriers to using the HHK. CONCLUSION: The results of this study suggest that future information innovations must demonstrate an advantage over current resources and the research evidence supporting the innovation must be clearly visible. Findings also suggest that the innovation adoption process has a social element, and collegial interactions and discussions may facilitate that process. These results could be valuable for knowledge translation researchers and health promotion developers in future innovation adoption planning.
Abstract:
Background. Human immunodeficiency virus type 1 (HIV-1) transmitted drug resistance (TDR) can compromise antiretroviral therapy (ART) and thus represents an important public health concern. Typically, sources of TDR remain unknown, but they can be characterized with molecular epidemiologic approaches. We used the highly representative Swiss HIV Cohort Study (SHCS) and linked drug resistance database (SHCS-DRDB) to analyze sources of TDR. Methods. ART-naive men who have sex with men with infection date estimates between 1996 and 2009 were chosen for surveillance of TDR in HIV-1 subtype B (N = 1674), as the SHCS-DRDB contains pre-ART genotypic resistance tests for >69% of this surveillance population. A phylogeny was inferred using pol sequences from surveillance patients and all subtype B sequences from the SHCS-DRDB (6934 additional patients). Potential sources of TDR were identified based on phylogenetic clustering, shared resistance mutations, genetic distance, and estimated infection dates. Results. One hundred forty of 1674 (8.4%) surveillance patients carried virus with TDR; 86 of 140 (61.4%) were assigned to clusters. Potential sources of TDR were found for 50 of 86 (58.1%) of these patients. ART-naive patients constitute 56 of 66 (84.8%) potential sources and were significantly overrepresented among sources (odds ratio, 6.43 [95% confidence interval, 3.22-12.82]; P < .001). Particularly large transmission clusters were observed for the L90M mutation, and the spread of L90M continued even after the near cessation of antiretroviral use selecting for that mutation. Three clusters showed evidence of reversion of K103N or T215Y/F. Conclusions. Many individuals harboring viral TDR belonged to transmission clusters with other Swiss patients, indicating substantial domestic transmission of TDR in Switzerland. Most TDR in clusters could be linked to sources, indicating good surveillance of TDR in the SHCS-DRDB. Most TDR sources were ART naive. 
This, and the presence of long TDR transmission chains, suggests that resistance mutations are frequently transmitted among untreated individuals, highlighting the importance of early diagnosis and treatment.
Abstract:
GLUT proteins are encoded by the SLC2 genes and are members of the major facilitator superfamily of membrane transporters. Fourteen GLUT proteins are expressed in the human and they are categorized into three classes based on sequence similarity. All GLUTs appear to transport hexoses or polyols when expressed ectopically, but the primary physiological substrates for several of the GLUTs remain uncertain. GLUTs 1-5 are the most thoroughly studied and all have well established roles as glucose and/or fructose transporters in various tissues and cell types. The GLUT proteins are comprised of ∼500 amino acid residues, possess a single N-linked oligosaccharide, and have 12 membrane-spanning domains. In this review we briefly describe the major characteristics of the 14 GLUT family members.