96 results for Model-based optimization
Abstract:
This paper presents a new type of very fine grid hydrological model based on the spatiotemporal repartition of a Probable Maximum Precipitation (PMP) and on the topography. The goal is to estimate the influence of this rain on the Probable Maximum Flood (PMF) of a catchment area in Switzerland. The spatiotemporal distribution of the PMP was realized using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and gives the evolution of the rain intensity in time. This hydrological modeling is followed by a hydraulic modeling of the surface and subterranean flow, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, resurgence and snowmelt. These added factors bring the developed model closer to reality and also offer flexibility in the initial conditions added to the factors concerning the PMP, such as the duration of the rain and the speed and direction of the wind. Taken together, these initial conditions offer a complete picture of the PMF.
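The cloud transport described above can be sketched with a minimal explicit finite-difference scheme for the 1-D advection-diffusion equation. This is an illustration only: the paper's actual cloud model, grid resolution and coefficients are not given here, so the wind speed `u`, diffusivity `D` and the Gaussian initial cloud are assumptions.

```python
import numpy as np

# dI/dt = -u * dI/dx + D * d2I/dx2, explicit upwind/central scheme
# (periodic boundaries via np.roll; parameters chosen to satisfy CFL).
def step(intensity, u, D, dx, dt):
    """Advance the rain-intensity field by one explicit time step."""
    adv = -u * (intensity - np.roll(intensity, 1)) / dx      # upwind advection
    diff = D * (np.roll(intensity, -1) - 2 * intensity + np.roll(intensity, 1)) / dx**2
    return intensity + dt * (adv + diff)

x = np.linspace(0.0, 10.0, 200)
cloud = np.exp(-(x - 2.0) ** 2)          # assumed initial Gaussian "cloud"
for _ in range(100):                     # integrate to t = 1
    cloud = step(cloud, u=1.0, D=0.05, dx=x[1] - x[0], dt=0.01)

# The cloud drifts downwind (its peak moves toward larger x) while diffusion
# spreads it out and lowers the peak rain intensity.
peak_after = x[np.argmax(cloud)]
```

The same update, applied per grid cell of a fine 2-D topographic grid, yields a moving, spreading rain-intensity field of the kind the abstract describes.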
Abstract:
Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al., 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al., submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al., 2004). Four subjects per experiment were analyzed, using approximately 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies.
We then evaluated statistically the degree of presence of these template maps across trials and whether and when this was different across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for the upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for the upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiological lines. As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions forcibly reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data.
Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool to compare normal electrophysiological responses with single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
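The ROC-area criterion used above can be computed directly from classification scores as a Mann-Whitney rank statistic. The sketch below uses synthetic Gaussian scores as stand-ins; the real inputs would be the per-trial posterior probabilities from the Mixture-of-Gaussians model.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a positive-class score exceeds a negative one."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # positive score is larger
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count one half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic single-trial scores for two conditions (assumed effect size).
rng = np.random.default_rng(0)
auc = roc_auc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200))
```

An AUC of 0.5 is chance level; the 0.80-0.85 values reported for the Checkerboard Experiment indicate scores that separate the two conditions well above chance.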
Abstract:
In Switzerland, the annual cost of damage caused by natural hazards has been increasing for several years despite the introduction of protective measures. Since this cost is mainly induced by material destruction, building insurance companies have to pay the majority of it. In many European countries, governments and insurance companies consider prevention strategies to reduce vulnerability. In Switzerland, since 2004, the cost of damage due to natural hazards has surpassed the cost of damage due to fire, a traditional activity of the cantonal insurance companies (ECA). Therefore, the strategy for efficient fire prevention incorporates a reduction of the vulnerability of buildings. The thesis seeks to illustrate the relevance of such an approach when applied to the damage caused by natural hazards. It examines the place of insurance and its involvement in the targeted prevention of natural disasters. Integrated risk management requires a sound comprehension of all risk parameters. The first part of the thesis is devoted to the theoretical development of the key concepts that influence risk management, such as hazard, vulnerability, exposure and damage. The literature on this subject, very prolific in recent years, was taken into account and put in perspective in the context of this study. Among the risk parameters, it is shown in the thesis that vulnerability is a factor that we can influence efficiently in order to limit the cost of damage to buildings. This is confirmed through the development of an analysis method, which has led to a tool to assess damage to buildings caused by flooding. The tool, designed for the property insurer or owner, proposes several steps, namely: - vulnerability and damage potential assessment; - proposals for remedial measures and risk reduction based on an analysis of the costs of a potential flood; - adaptation of a global strategy in high-risk areas based on the elements at risk.
The final part of the thesis is devoted to the study of a hail event in order to provide a better understanding of damage to buildings. For this, two samples from the available claims data were selected and analysed in the study. The results, at the level of both the insured portfolio and the individual analysis, allow the identification of new trends. A second objective of the study was to develop a hail model based on the available data. The model simulates a random distribution of intensities and, coupled with a risk model, provides a simulation of damage costs for the determined study area. The perspectives of this work allow a sharper focus on the role of insurance and its needs in terms of prevention.
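A hail model of the kind described, a random intensity field coupled to a damage model, can be sketched as a small Monte Carlo simulation. The intensity distribution, vulnerability curve and building value below are placeholder assumptions, not the thesis's fitted values.

```python
import random

random.seed(1)

N_BUILDINGS = 1000     # assumed size of the study area's portfolio
SIMULATIONS = 500      # number of simulated hail events

def simulate_event():
    """Draw a random hail intensity per building and convert it to a cost."""
    total = 0.0
    for _ in range(N_BUILDINGS):
        intensity = random.lognormvariate(0.0, 0.6)   # assumed intensity distribution
        damage_ratio = min(1.0, 0.05 * intensity)     # assumed vulnerability curve
        total += damage_ratio * 500_000               # assumed building value, CHF
    return total

costs = sorted(simulate_event() for _ in range(SIMULATIONS))
mean_cost = sum(costs) / SIMULATIONS
p95_cost = costs[int(0.95 * SIMULATIONS)]   # a high quantile of event cost
```

Repeating the event simulation yields a distribution of total damage costs, from which an insurer can read off expected losses and tail quantiles for the study area.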
Abstract:
Executive Summary Electricity is crucial for modern societies, so it is important to understand the behaviour of electricity markets in order to be prepared to face the consequences of policy changes. The Swiss electricity market is now in a transition stage from a public monopoly to a liberalised market and is undergoing an "emergent" liberalisation, i.e. liberalisation taking place without proper regulation. The withdrawal of nuclear capacity is also being debated. These two possible changes directly affect the mechanisms for capacity expansion. Thus, in this thesis we concentrate on understanding the dynamics of capacity expansion in the Swiss electricity market. A conceptual model to help understand these dynamics is developed and explained in the first essay, where we identify a potential risk of import dependence. In the second essay, a System Dynamics model, based on the conceptual model, is developed to evaluate the consequences of three scenarios: a nuclear phase-out, the implementation of a policy for avoiding import dependence, and the combination of both. We conclude that the Swiss market is not well prepared to face unexpected changes in supply and demand, and we identify a risk of import dependence, mainly in the case of a nuclear phase-out. The third essay focuses on the opportunity cost of hydro-storage power generation, one of the main generation sources in Switzerland. We use an extended version of our model to test different policies for assigning an opportunity cost to hydro-storage power generation. We conclude that the preferred policies differ among market participants and depend on market structure.
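The core of a System Dynamics capacity-expansion model is a stock-flow structure with a construction delay, integrated numerically. The sketch below is generic: the decision rule, delay, lifetime and growth rate are illustrative parameters, not those of the thesis's Swiss market model.

```python
# Stock-flow sketch: capacity expansion with a first-order construction
# delay, integrated by the Euler method (the standard SD approach).
DT = 0.25            # years per integration step
DELAY = 4.0          # assumed years from investment decision to commissioning
LIFETIME = 40.0      # assumed plant lifetime, years
DEMAND_GROWTH = 0.02 # assumed annual demand growth

capacity, pipeline, demand = 100.0, 0.0, 100.0   # stocks (arbitrary units)
history = []
for _ in range(int(40 / DT)):                    # simulate 40 years
    gap = max(0.0, demand - capacity)            # perceived capacity shortfall
    starts = gap / 2.0                           # assumed investment decision rule
    commissioning = pipeline / DELAY             # first-order construction delay
    retirement = capacity / LIFETIME
    pipeline += DT * (starts - commissioning)
    capacity += DT * (commissioning - retirement)
    demand *= (1.0 + DEMAND_GROWTH) ** DT
    history.append(capacity)
```

Because investment reacts only after a shortfall appears and construction takes years, capacity lags growing demand, which is exactly the mechanism behind the import-dependence risk the essays identify.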
Abstract:
We present the first approach to the genetic diversity and structure of the Balearic toad (Bufo balearicus Boettger, 1880) for the island of Menorca. Forty-one individuals from 21 localities were analyzed for ten microsatellite loci. We used geo-referenced individual multilocus genotypes and a model-based clustering method for the inference of the number of populations and of the spatial location of genetic discontinuities between those populations. Only six of the microsatellites analyzed were polymorphic. We revealed a northwestern area inhabited by a single population with several well-connected localities and another set of populations in the southeast that includes a few unconnected small units with genetically significant differences among them as well as with the individuals from the northwest of the island. The observed fragmentation may be explained by shifts from agricultural to tourism practices that have been taking place on the island of Menorca since the 1960s. The abandonment of rural activities in favor of urbanization and concomitant service areas has mostly affected the southeast of the island and is currently threatening the overall geographic connectivity between the different farming areas of the island that are inhabited by the Balearic toad.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability lay the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
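The model class described, a two-descriptor multiple linear regression with the usual r/MAE/RMSE validation metrics, can be sketched as follows. The data are synthetic stand-ins for the GB/SA descriptors and measured log P values; the real model is fitted to the >17,500-molecule cleansed set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                        # stand-ins for two GB/SA terms
true_coef = np.array([0.9, -0.4])                  # assumed "true" relationship
y = X @ true_coef + 0.7 + rng.normal(0.0, 0.3, n)  # synthetic experimental log P

# Ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# The three validation metrics reported in the abstract.
r = np.corrcoef(pred, y)[0, 1]
mae = np.abs(pred - y).mean()
rmse = np.sqrt(((pred - y) ** 2).mean())
```

In practice the fit would be assessed by five-fold cross-validation and y-randomization, as the abstract describes, rather than on the training data alone.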
Abstract:
The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be partly seen as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images is presented, illustrating that the proposed LR model reliably guides towards the correct proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
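The likelihood-ratio idea itself is simple: score the comparison, then divide the score's density under the match population by its density under the close non-match population. The sketch below models the two score distributions as Gaussians fitted to synthetic training scores; the paper instead derives the densities from an SVM-based probabilistic framework, so everything here is an illustrative assumption.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Synthetic training scores for the two populations (assumed distributions).
rng = np.random.default_rng(2)
match_scores = rng.normal(2.0, 0.8, 5000)
nonmatch_scores = rng.normal(-1.0, 1.0, 5000)

def likelihood_ratio(score):
    """LR = p(score | match) / p(score | close non-match)."""
    num = gaussian_pdf(score, match_scores.mean(), match_scores.std())
    den = gaussian_pdf(score, nonmatch_scores.mean(), nonmatch_scores.std())
    return num / den

lr_high = likelihood_ratio(2.5)    # score typical of a true match -> LR >> 1
lr_low = likelihood_ratio(-1.5)    # score typical of a close non-match -> LR << 1
```

LR values above 1 support the same-source proposition and values below 1 the different-source proposition, which is the sense in which the model "guides towards the correct proposition".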
Abstract:
Pd1-xInx thin films (0.4 < x < 0.56) were prepared by radio-frequency sputtering from a multi-zone target. The properties of these Hume-Rothery alloys were studied by X-ray diffractometry, electron probe microanalysis and scanning tunneling microscopy. The diffraction spectra were analyzed to obtain the intensity ratio of the (100) superlattice line to the (200) normal line, together with the variations of the lattice constant. The results are explained quantitatively by a model based on point defects, i.e. Pd vacancies in In-rich films and Pd antisite atoms in Pd-rich films. In-rich films grow preferentially in the [100] direction while Pd-rich films grow preferentially in the [110] direction. The grains in indium-rich sputtered films appear to be enclosed in an atomically thick, indium-rich layer. The role of texture and the influence of point defects on electrical resistivity are also reported. (C) 1996 Elsevier Science Limited.
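The link between point defects and the (100)/(200) intensity ratio can be sketched through the structure factors of a B2 (CsCl-type) lattice: the superlattice reflection depends on the difference of the two sublattice scattering factors, so vacancies and antisites shift the ratio. The scattering factors (taken crudely as atomic numbers) and defect fractions below are illustrative assumptions, not values fitted in the paper.

```python
F_PD, F_IN = 46.0, 49.0   # atomic numbers as crude scattering factors

def intensity_ratio(occ_a_pd, occ_a_in, occ_b_pd, occ_b_in):
    """I(100)/I(200) from sublattice occupancies; unfilled sites are vacancies.

    Sublattice A is the nominal Pd site, sublattice B the nominal In site.
    """
    f_a = occ_a_pd * F_PD + occ_a_in * F_IN   # average scattering, Pd sublattice
    f_b = occ_b_pd * F_PD + occ_b_in * F_IN   # average scattering, In sublattice
    f100 = f_a - f_b                          # superlattice structure factor
    f200 = f_a + f_b                          # fundamental structure factor
    return (f100 / f200) ** 2

perfect = intensity_ratio(1.0, 0.0, 0.0, 1.0)
pd_vacancies = intensity_ratio(0.9, 0.0, 0.0, 1.0)   # In-rich film: Pd vacancies
pd_antisites = intensity_ratio(1.0, 0.0, 0.1, 0.9)   # Pd-rich film: Pd on In sites
```

Because Pd and In scatter almost equally, the perfect-order ratio is tiny, and the two defect types move it in opposite directions, which is what makes the measured ratio diagnostic of the defect mechanism on each side of stoichiometry.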
Abstract:
BACKGROUND: There is an emerging knowledge base on the effectiveness of strategies to close the knowledge-practice gap. However, less is known about how attributes of an innovation and other contextual and situational factors facilitate and impede an innovation's adoption. The Healthy Heart Kit (HHK) is a risk management and patient education resource for the prevention of cardiovascular disease (CVD) and the promotion of cardiovascular health. Although previous studies have demonstrated the HHK's content validity and practical utility, no published study has examined physicians' uptake of the HHK and the factors that shape its adoption. OBJECTIVES: Conceptually informed by Rogers' Diffusion of Innovations Theory and the Theory of Planned Behaviour, this study had two objectives: (1) to determine if specific attributes of the HHK as well as contextual and situational factors are associated with physicians' intention to use and actual usage of the HHK; and (2) to determine if any contextual and situational factors are associated with individual or environmental barriers that prevent the uptake of the HHK among those physicians who do not plan to use the kit. METHODS: A sample of 153 physicians who responded to an invitation letter sent to all family physicians in the province of Alberta, Canada, was recruited for the study. Participating physicians were sent a HHK, and two months later a study questionnaire assessed primary factors concerning the physicians' clinical practice, attributes of the HHK (relative advantage, compatibility, complexity, trialability, observability), confidence and control in using the HHK, barriers to use, and individual attributes. All measures were used in a path analysis employing a causal model based on Rogers' Diffusion of Innovations Theory and the Theory of Planned Behaviour. RESULTS: 115 physicians (a follow-up rate of 75%) completed the questionnaire. Use of the HHK was associated with intention to use the HHK, relative advantage, and years of experience.
Relative advantage and the observability of the HHK benefits were also significantly associated with physicians' intention to use the HHK. Physicians working in solo medical practices reported experiencing more individual and environmental barriers to using the HHK. CONCLUSION: The results of this study suggest that future information innovations must demonstrate an advantage over current resources and the research evidence supporting the innovation must be clearly visible. Findings also suggest that the innovation adoption process has a social element, and collegial interactions and discussions may facilitate that process. These results could be valuable for knowledge translation researchers and health promotion developers in future innovation adoption planning.
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As yet, however, the nature of this relationship has remained elusive, largely because most previous work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure.
The ratio of all of these possible combinations of vertical and lateral correlation lengths, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
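The second-order statistic at the heart of this analysis, the 2-D autocorrelation and the aspect ratio of its central peak, can be sketched on a synthetic anisotropic random field. The field below is generic Gaussian-filtered noise, not a crustal velocity model; correlation lengths and the 1/e width criterion are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
noise = rng.normal(size=(n, n))

# Impose anisotropic correlation by low-pass filtering in the Fourier domain:
# lateral correlation length ax, vertical correlation length az.
kx = np.fft.fftfreq(n)[None, :]
kz = np.fft.fftfreq(n)[:, None]
ax, az = 20.0, 5.0
filt = np.exp(-((ax * kx) ** 2) - ((az * kz) ** 2))
field = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))

# Normalized 2-D autocorrelation via the Wiener-Khinchin theorem.
power = np.abs(np.fft.fft2(field)) ** 2
acorr = np.real(np.fft.ifft2(power))
acorr /= acorr[0, 0]

# Width of the central peak along each axis (first lag below 1/e).
lateral = np.argmax(acorr[0, : n // 2] < 1 / np.e)
vertical = np.argmax(acorr[: n // 2, 0] < 1 / np.e)
aspect_ratio = lateral / vertical
```

The recovered `aspect_ratio` tracks the imposed anisotropy (here ax/az = 4) even though the individual correlation lengths would trade off against each other in an inversion, which is the non-uniqueness the abstract describes.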
Abstract:
Chronic pain is a complex, disabling experience that negatively affects cognitive, affective and physical functions as well as behavior. Although the interaction between chronic pain and physical functioning is a well-accepted paradigm in clinical research, understanding how pain affects individuals' daily life behavior remains a challenging task. Here we develop a methodological framework that allows us to objectively document disruptive pain-related interferences on real-life physical activity. The results reveal that meaningful information is contained in the temporal dynamics of activity patterns and that an analytical model based on the theory of bivariate point processes can be used to describe physical activity behavior. The model parameters capture the dynamic interdependence between periods and events and determine a 'signature' of the activity pattern. The study is likely to contribute to the clinical understanding of complex pain/disease-related behaviors and to establish a unified mathematical framework for quantifying the complex dynamics of various human activities.
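A point-process view of activity data can be sketched by alternating active periods and rest events with random durations, then summarising the pattern. The exponential durations and the summary statistics below are assumptions for illustration, not the paper's bivariate point-process model.

```python
import random

random.seed(4)

def simulate_day(mean_active=30.0, mean_rest=10.0, minutes=16 * 60):
    """Alternate exponentially distributed active/rest periods over a waking day."""
    t, active, periods = 0.0, True, []
    while t < minutes:
        mean = mean_active if active else mean_rest
        dur = random.expovariate(1.0 / mean)
        periods.append(("active" if active else "rest", dur))
        t += dur
        active = not active
    return periods

day = simulate_day()
active_durs = [d for kind, d in day if kind == "active"]
rest_durs = [d for kind, d in day if kind == "rest"]

# A simple activity 'signature': mean active-period length and time budget.
mean_active_dur = sum(active_durs) / len(active_durs)
fraction_active = sum(active_durs) / (sum(active_durs) + sum(rest_durs))
```

Pain-related interference would show up in such a signature as shorter active periods and a lower active fraction, which is the kind of temporal-dynamics information the framework extracts.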
Abstract:
Exploratory and confirmatory factor analyses reported in the French technical manual of the WISC-IV provide evidence supporting a structure with four indices: Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI). Although the WISC-IV is more attuned to contemporary theory, it is still not in total accordance with the dominant theory of cognitive ability, the Cattell-Horn-Carroll (CHC) theory. This study was designed to determine whether the French WISC-IV is better described by the four-factor solution or whether an alternative model based on CHC theory is more appropriate. The intercorrelation matrix reported in the French technical manual was submitted to confirmatory factor analysis. A comparison of competing models suggests that a model based on CHC theory fits the data better than the current WISC-IV structure. It appears that the French WISC-IV in fact measures six factors: crystallized intelligence (Gc), fluid intelligence (Gf), short-term memory (Gsm), processing speed (Gs), quantitative knowledge (Gq), and visual processing (Gv). We recommend that clinicians interpret the subtests of the French WISC-IV in relation to this CHC model in addition to the four indices.
Abstract:
The ovarian hyperstimulation syndrome (OHSS) can be defined as an iatrogenic pathology induced by active substances administered to control follicular maturation and ovulation. The etiology, the physiopathology, and the diagnostic and therapeutic methods available are discussed. A theoretical model, based on clinical data, allows the identification of a set of criteria which should help to determine prospectively the likelihood of developing such a pathology.
Abstract:
Helical tomotherapy is a relatively new intensity-modulated radiation therapy (IMRT) treatment for which room shielding has to be reassessed for the following reasons. The beam-on time needed to deliver a given target dose is increased and leads to a weekly workload typically one order of magnitude higher than that of conventional radiation therapy. The special configuration of tomotherapy units does not allow the use of standard shielding calculation methods. A conventional linear accelerator must be shielded against primary, leakage and scattered photon radiation. For tomotherapy, primary radiation is no longer the main shielding issue, since a beam stop is mounted on the gantry directly opposite the source. On the other hand, due to the longer irradiation time, accelerator head leakage becomes a major concern. An analytical model based on geometric considerations has been developed to determine leakage radiation levels throughout the room for continuous gantry rotation. Compared to leakage radiation, scatter radiation is a minor contribution. Since tomotherapy units operate at a nominal energy of 6 MV, neutron production is negligible. This work proposes a synthetic and conservative model for calculating the shielding requirements of the Hi-Art II TomoTherapy unit. Finally, the required concrete shielding thickness is given for different positions of interest.
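The geometric idea behind such a model, averaging inverse-square leakage from a source that rotates continuously on a ring gantry, can be sketched numerically. The ring radius and leakage output below are placeholder values, not Hi-Art II data.

```python
import math

GANTRY_RADIUS = 0.85      # m, assumed radius of the source ring
LEAKAGE_OUTPUT = 0.1      # assumed relative leakage output of the source head

def rotation_averaged_dose(px, py, steps=360):
    """Average inverse-square leakage at point (px, py), measured from the
    isocenter, over one full continuous gantry rotation."""
    total = 0.0
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        sx = GANTRY_RADIUS * math.cos(angle)   # instantaneous source position
        sy = GANTRY_RADIUS * math.sin(angle)
        r2 = (px - sx) ** 2 + (py - sy) ** 2
        total += LEAKAGE_OUTPUT / r2
    return total / steps

near = rotation_averaged_dose(2.0, 0.0)   # e.g. inside the vault
far = rotation_averaged_dose(4.0, 0.0)    # e.g. behind a wall position
```

For a point at distance d from the isocenter with d much larger than the ring radius, the rotation average approaches the simple 1/d² law, so distant wall positions can still be treated almost conventionally while nearby points need the full geometric average.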
Abstract:
Aim To evaluate the effects of using distinct alternative sets of climatic predictor variables on the performance, spatial predictions and future projections of species distribution models (SDMs) for rare plants in an arid environment. Location Atacama and Peruvian Deserts, South America (18°30'S-31°30'S, 0-3,000 m). Methods We modelled the present and future potential distributions of 13 species of Heliotropium sect. Cochranea, a plant group with a centre of diversity in the Atacama Desert. We developed and applied a sequential procedure, starting from monthly climate variables, to derive six alternative sets of climatic predictor variables. We used them to fit models with eight modelling techniques within an ensemble forecasting framework, and derived climate change projections for each of them. We evaluated the effects of using these alternative sets of predictor variables on the performance, spatial predictions and projections of SDMs using Generalised Linear Mixed Models (GLMM). Results The use of distinct sets of climatic predictor variables did not have a significant effect on overall metrics of model performance, but had significant effects on present and future spatial predictions. Main conclusion Using different sets of climatic predictors can yield the same model fits but different spatial predictions of current and future species distributions. This represents a new form of uncertainty in model-based estimates of extinction risk that may need to be better acknowledged and quantified in future SDM studies.
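The ensemble forecasting step mentioned above amounts to combining per-technique habitat-suitability predictions, often weighting each technique by its evaluation score. All numbers below are invented for illustration; the technique names are common SDM algorithms, not necessarily the eight used in the study.

```python
import statistics

# Suitability of one site predicted by each modelling technique (hypothetical).
predictions = {
    "GLM": 0.62, "GAM": 0.58, "RF": 0.71, "BRT": 0.66,
    "MARS": 0.55, "CTA": 0.49, "ANN": 0.68, "MaxEnt": 0.64,
}
# Hypothetical evaluation score (e.g. AUC on held-out data) per technique.
skill = {
    "GLM": 0.80, "GAM": 0.78, "RF": 0.85, "BRT": 0.83,
    "MARS": 0.70, "CTA": 0.65, "ANN": 0.75, "MaxEnt": 0.82,
}

unweighted = statistics.mean(predictions.values())
weighted = (sum(predictions[m] * skill[m] for m in predictions)
            / sum(skill.values()))
```

Applying the same combination per grid cell, once for each of the six predictor sets, yields the alternative ensemble maps whose divergence is the source of uncertainty the study quantifies.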