920 results for Model-based optimization
Abstract:
BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
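A minimal sketch of the two ideas described, coding ambiguous positions with IUPAC symbols and trimming reads with an information-content score, assuming per-cycle posterior probabilities over A/C/G/T are available. It is an illustration only, not the published R package's actual algorithm or interface.

```python
import numpy as np

# IUPAC codes for every non-empty subset of {A, C, G, T}.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AC"): "M", frozenset("AG"): "R", frozenset("AT"): "W",
    frozenset("CG"): "S", frozenset("CT"): "Y", frozenset("GT"): "K",
    frozenset("ACG"): "V", frozenset("ACT"): "H", frozenset("AGT"): "D",
    frozenset("CGT"): "B", frozenset("ACGT"): "N",
}
BASES = np.array(list("ACGT"))

def call_bases(probs, keep=0.9):
    """probs: (read_length, 4) posterior base probabilities per cycle.
    Each position is coded with the smallest IUPAC set covering `keep` mass."""
    calls = []
    for p in probs:
        order = np.argsort(p)[::-1]
        cum = np.cumsum(p[order])
        k = int(np.searchsorted(cum, keep)) + 1
        calls.append(IUPAC[frozenset(BASES[order[:k]])])
    return "".join(calls)

def best_subtag(probs, min_len=12):
    """Pick the prefix length maximising total information content,
    2 + sum(p * log2(p)) per cycle, so uncertain cycles at the read end are trimmed."""
    info = 2.0 + np.sum(np.where(probs > 0, probs * np.log2(probs), 0.0), axis=1)
    cum = np.cumsum(info)
    return int(np.argmax(cum[min_len - 1:])) + min_len  # keep read[:returned_length]

# Toy example: a confident 36-cycle read whose last cycles are noisy.
rng = np.random.default_rng(0)
probs = np.full((36, 4), 0.02)
probs[np.arange(36), rng.integers(0, 4, 36)] = 0.94
probs[30:] = 0.25  # ambiguous tail
print(call_bases(probs))
print("keep first", best_subtag(probs), "cycles")
```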
Abstract:
Understanding brain reserve in preclinical stages of neurodegenerative disorders allows determination of which brain regions contribute to normal functioning despite accelerated neuronal loss. Besides the recruitment of additional regions, a reorganisation and shift of relevance between normally engaged regions are a suggested key mechanism. Thus, network analysis methods seem critical for investigation of changes in directed causal interactions between such candidate brain regions. To identify core compensatory regions, fifteen preclinical patients carrying the genetic mutation leading to Huntington's disease and twelve controls underwent fMRI scanning. They accomplished an auditory paced finger sequence tapping task, which challenged cognitive as well as executive aspects of motor functioning by varying speed and complexity of movements. To investigate causal interactions among brain regions a single Dynamic Causal Model (DCM) was constructed and fitted to the data from each subject. The DCM parameters were analysed using statistical methods to assess group differences in connectivity, and the relationship between connectivity patterns and predicted years to clinical onset was assessed in gene carriers. In preclinical patients, we found indications for neural reserve mechanisms predominantly driven by bilateral dorsal premotor cortex, which increasingly activated superior parietal cortices the closer individuals were to estimated clinical onset. This compensatory mechanism was restricted to complex movements characterised by high cognitive demand. Additionally, we identified task-induced connectivity changes in both groups of subjects towards pre- and caudal supplementary motor areas, which were linked to either faster or more complex task conditions. Interestingly, coupling of dorsal premotor cortex and supplementary motor area was more negative in controls compared to gene mutation carriers. Furthermore, changes in the connectivity pattern of gene carriers allowed prediction of the years to estimated disease onset in individuals. Our study characterises the connectivity pattern of core cortical regions maintaining motor function in relation to varying task demand. We identified connections of bilateral dorsal premotor cortex as critical for compensation as well as task-dependent recruitment of pre- and caudal supplementary motor area. The latter finding nicely mirrors a previously published general linear model-based analysis of the same data. Such knowledge about disease specific inter-regional effective connectivity may help identify foci for interventions based on transcranial magnetic stimulation designed to stimulate functioning and also to predict their impact on other regions in motor-associated networks.
Abstract:
This paper presents a new type of very fine grid hydrological model based on the spatiotemporal distribution of a PMP (Probable Maximum Precipitation) and on the topography. The goal is to estimate the influence of this rain on the PMF (Probable Maximum Flood) of a catchment area in Switzerland. The spatiotemporal distribution of the PMP was produced using six clouds modeled by the advection-diffusion equation. The equation describes the movement of the clouds over the terrain and also gives the evolution of the rainfall intensity over time. This hydrological modeling is followed by a hydraulic modeling of the surface and subsurface flow, carried out by considering the factors that contribute to the hydrological cycle, such as infiltration, resurgence and snowmelt. These additional factors bring the developed model closer to reality and also offer flexibility in the initial conditions, which complement the PMP-related parameters such as the duration of the rain and the speed and direction of the wind. Taken together, these initial conditions offer a complete picture of the PMF.
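The abstract does not give the exact parameterisation used for the six simulated clouds, but the advection-diffusion equation it refers to generically takes the following form, where $c$ is the rain-cell intensity field, $\mathbf{u}$ the wind (advection) velocity and $D$ the diffusion coefficient:

\[
\frac{\partial c}{\partial t} + \mathbf{u} \cdot \nabla c = D\,\nabla^{2} c .
\]

Advecting the cells with the wind field while letting them spread diffusively is what yields the evolution of rainfall intensity over the catchment in both space and time.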
Abstract:
Study carried out during a research stay at the School of Comparative American Studies, University of Warwick, United Kingdom, between 2011 and 2012. This project first analyses the popular mobilisation of early liberalism and the formation of the first liberal political organisations, which grew out of the secret societies and spread through the main centres of liberal sociability: the patriotic societies. Secondly, through the study of the mobility of liberals between metropolitan Spain and the viceroyalty of Nueva España, it shows how a new political model based on federalism took shape. The third aspect of analysis is how the Catalan exiles in England received the support of the Foreign Bible Society, which had maintained contacts with the Spanish high clergy since the early 1820s. The last aspect of the research covers the study of urban space in relation to citizens' political practices, based on an analysis of the creation and enlargement of the squares of the city of Barcelona during the first half of the nineteenth century.
Abstract:
Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approx. 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies. We then evaluated statistically the degree of presence of these template maps across trials and whether and when this was different across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiologic lines. 
As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data. Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool to compare normal electrophysiological responses versus single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
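A minimal sketch of this style of analysis, using scikit-learn's GaussianMixture on synthetic single-trial topographies. The authors' actual model, preprocessing and statistics are richer; all names and parameters below are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: trials x time x electrodes, for two conditions.
n_trials, n_time, n_elec = 100, 50, 32
cond_a = rng.normal(0.0, 1.0, (n_trials, n_time, n_elec))
cond_b = rng.normal(0.2, 1.0, (n_trials, n_time, n_elec))

def normalize(x):
    # Normalise each topography by its global field strength, as the abstract
    # describes, so only the spatial pattern (not its amplitude) matters.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

# Fit a Mixture of Gaussians to the training topographies to obtain a small
# set of representative template maps.
train = normalize(np.concatenate([cond_a[:90], cond_b[:90]])).reshape(-1, n_elec)
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(train)

def presence(trials):
    # Posterior probability of each template map, per trial and time point.
    flat = normalize(trials).reshape(-1, n_elec)
    return gmm.predict_proba(flat).reshape(len(trials), n_time, -1)

# Toy decision score: summed posterior of one template map per test trial.
score_a = presence(cond_a[90:])[..., 0].sum(axis=1)
score_b = presence(cond_b[90:])[..., 0].sum(axis=1)
labels = np.r_[np.zeros(len(score_a)), np.ones(len(score_b))]
print("ROC AUC:", roc_auc_score(labels, np.r_[score_a, score_b]))
```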
Abstract:
In Switzerland, the annual cost of damage caused by natural hazards has been increasing for several years despite the introduction of protective measures. Since this cost is mainly driven by material destruction, building insurance companies have to bear the majority of it. In many European countries, governments and insurance companies frame their prevention strategies in terms of reducing vulnerability. In Switzerland, since 2004, the cost of damage due to natural hazards has surpassed the cost of damage due to fire, the traditional activity of the cantonal insurance establishments (ECA). This shift results in particular from an efficient fire-prevention policy pursued for several years, notably through the reduction of building vulnerability. The thesis seeks to illustrate the relevance of such an approach when applied to the damage caused by natural hazards, and examines the place of insurance and its involvement in the targeted prevention of natural disasters. Integrated risk management requires a sound understanding of all risk parameters. The first part of the thesis is therefore devoted to the theoretical development of the key concepts that influence risk management, such as hazard, vulnerability, exposure and damage. The literature on this subject, very prolific in recent years, was reviewed and put in perspective in the context of this study, namely building insurance. Among the risk parameters, the thesis shows that vulnerability is a factor that can be influenced efficiently in order to limit the cost of damage to buildings. This is confirmed through the development of an analysis method, which led to a tool for assessing flood damage to buildings. The tool, designed for property insurers and, where appropriate, owners, comprises several steps, namely: assessment of vulnerability and damage potential; proposals for remedial and risk-reduction measures derived from an analysis of the costs of a potential flood; and adaptation of a global strategy in high-risk areas according to the elements at risk. The final part of the thesis is devoted to the study of a hail event in order to provide a better understanding of damage to buildings and their structure. For this, two samples were selected and analysed from the claims data available to the study. The results, at the level of both the insured portfolio and the individual analysis, allow the identification of new trends. A second objective of the study was to develop a hail model based on the available data. The model simulates a random distribution of intensities and, coupled with a risk model, provides a simulation of damage costs for a given study area. The perspectives of this work allow a sharper focus on the role of insurance and on its needs in terms of prevention.
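As an illustration of the kind of simulation described, and not the thesis's calibrated model, the sketch below draws random hail intensities for a hypothetical insured portfolio, passes them through an assumed vulnerability curve, and aggregates them into event damage costs; every distribution and parameter here is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: insured values (CHF) for buildings in the study area.
insured_values = rng.lognormal(mean=13.0, sigma=0.6, size=5_000)

def vulnerability(intensity_mm):
    # Illustrative vulnerability curve: the damage ratio grows with hailstone
    # size and saturates; the thesis's actual curve is not given in the abstract.
    return 1.0 - np.exp(-0.08 * np.maximum(intensity_mm - 20.0, 0.0))

n_events = 1_000
event_costs = np.empty(n_events)
for i in range(n_events):
    # Random hail intensity (maximum hailstone diameter, mm) per building.
    intensity = rng.gamma(shape=4.0, scale=8.0, size=insured_values.size)
    event_costs[i] = np.sum(vulnerability(intensity) * insured_values)

print(f"Mean event cost:  {event_costs.mean():,.0f} CHF")
print(f"99th percentile:  {np.percentile(event_costs, 99):,.0f} CHF")
```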
Abstract:
Executive Summary: Electricity is crucial for modern societies, so it is important to understand the behaviour of electricity markets in order to be prepared to face the consequences of policy changes. The Swiss electricity market is now in a transition stage from a public monopoly to a liberalised market and it is undergoing an "emergent" liberalisation, i.e. liberalisation taking place without proper regulation. The withdrawal of nuclear capacity is also being debated. These two possible changes directly affect the mechanisms for capacity expansion. Thus, in this thesis we concentrate on understanding the dynamics of capacity expansion in the Swiss electricity market. A conceptual model to help understand the dynamics of capacity expansion in the Swiss electricity market is developed and explained in the first essay. We identify a potential risk of import dependence. In the second essay, a System Dynamics model, based on the conceptual model, is developed to evaluate the consequences of three scenarios: a nuclear phase-out, the implementation of a policy for avoiding import dependence, and the combination of both. We conclude that the Swiss market is not well prepared to face unexpected changes in supply and demand, and we identify a risk of import dependence, mainly in the case of a nuclear phase-out. The third essay focuses on the opportunity cost of hydro-storage power generation, one of the main generation sources in Switzerland. We use an extended version of our model to test different policies for assigning an opportunity cost to hydro-storage power generation. We conclude that the preferred policies are different for different market participants and depend on market structure.
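As an illustration only (the essays' actual System Dynamics model is far richer), the core capacity-expansion feedback, investment responding with a construction delay to a shrinking reserve margin, can be sketched as a small stock-and-flow simulation; all parameter values below are hypothetical.

```python
import numpy as np

# Time grid (years) and illustrative parameters.
dt, horizon = 0.25, 40.0
steps = int(horizon / dt)

capacity = 10.0           # GW of installed capacity (stock)
pipeline = 0.0            # GW under construction (stock)
demand0, growth = 9.0, 0.02
construction_delay = 5.0  # years from investment decision to commissioning
lifetime = 40.0           # years, gives a retirement outflow

history = []
for t in range(steps):
    demand = demand0 * np.exp(growth * t * dt)
    reserve_margin = (capacity - demand) / demand
    # Investment responds (with a floor at zero) to a shrinking reserve margin.
    investment = max(0.0, 0.5 * (0.10 - reserve_margin) * demand)
    commissioning = pipeline / construction_delay
    retirement = capacity / lifetime
    pipeline += (investment - commissioning) * dt
    capacity += (commissioning - retirement) * dt
    history.append((t * dt, demand, capacity))

print("year  demand  capacity")
for year, d, c in history[:: int(5 / dt)]:
    print(f"{year:4.0f}  {d:6.2f}  {c:8.2f}")
```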
Abstract:
Websites are becoming firms' first contact interface with their clients. Hence, understanding customers' online attitudes and behaviors has been capturing increasing research attention. The extant research has pointed to customers' satisfaction with websites as the main driver of their online behaviors, and has mostly used variables related to website characteristics as predictors of customers' website satisfaction. However, recent research shows that groups of individuals displaying distinctive characteristics react differently to the same context. Therefore, behavior may be considerably different among groups of customers. In this study, we develop a conceptual model of the influence of individual characteristics on the traditional website quality – website satisfaction relationship. We propose a model based on the construct of consumer technology attractiveness (CTA) to represent the genuine positive propensity of individuals toward technology. We further test the moderating effect of this construct on the commonly used predictors of customers' website satisfaction using hierarchical multiple regression. The empirical study was based on websites of banks operating in the Portuguese market. The commercial banking industry is one of the Portuguese industries that best uses the Internet to establish relationships with clients. Data were collected through an online website satisfaction survey answered by lecturers and postgraduate students from four Portuguese universities and polytechnic institutes. Our final sample comprised 276 valid questionnaires. Our study allows us to conclude that the most commonly used antecedents of overall website satisfaction are still relevant for analyzing consumers' satisfaction with bank websites. We also conclude that CTA has a significant moderating effect on almost all of the customers' website satisfaction variables used in the study. This study helps to highlight the theoretical importance and significant influence of consumers' personal characteristics on their online behavior. Moreover, for practitioners, a better understanding of these individual characteristics will assist them in developing customized websites that will meet customers' expectations.
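A minimal sketch of a moderated (hierarchical) regression of this kind with statsmodels, on synthetic data; variable names such as `quality` and `cta` are placeholders for the study's measured constructs, not its actual items or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 276  # same order of magnitude as the study's sample

# Hypothetical, mean-centred predictors: perceived website quality and
# consumer technology attractiveness (CTA).
df = pd.DataFrame({
    "quality": rng.normal(size=n),
    "cta": rng.normal(size=n),
})
df["satisfaction"] = (0.5 * df.quality + 0.3 * df.cta
                      + 0.2 * df.quality * df.cta
                      + rng.normal(scale=0.5, size=n))

# Step 1: main effects only.
step1 = smf.ols("satisfaction ~ quality + cta", data=df).fit()
# Step 2: add the interaction term; a significant coefficient and an increase
# in R^2 over step 1 would indicate a moderating effect of CTA.
step2 = smf.ols("satisfaction ~ quality * cta", data=df).fit()

print("R2 step 1:", round(step1.rsquared, 3))
print("R2 step 2:", round(step2.rsquared, 3))
print(step2.params)
```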
Abstract:
Landscape amenities can be scarce in places with large areas of open space. Intensely farmed areas with high levels of monocropping and livestock production are akin to developed open space areas and do not provide many services in terms of landscape amenities. Open space in the form of farmland is plentiful, but parks and their services are in short supply. This issue is of particular importance for public policy because it is closely linked to the impact of externalities caused by agricultural activities and to the indirect effects of land use dynamics. This study looks at the impact of landscape amenities on rural residential property values in five counties in North Central Iowa using a hedonic pricing model based on geographic information systems. The effect of cropland, pasture, forest, and developed land as land uses surrounding the property is considered, as well as the impact of proximity to recreational areas. The study also includes the effect of other disamenities, such as livestock facilities and quarries, which can be considered part of the developed open space and are a common feature of the Iowa landscape.
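A minimal sketch of a GIS-style hedonic price regression with statsmodels, on synthetic data. The variables stand in for the buffer-based land-use shares and distances typically used in such studies; they are not the paper's actual specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500  # hypothetical sample of rural residential sales

# Illustrative attributes: shares of land uses within a buffer around each
# property (GIS-derived in studies of this kind) and distance to a park.
df = pd.DataFrame({
    "acres": rng.uniform(0.5, 10, n),
    "share_crop": rng.uniform(0, 1, n),
    "share_forest": rng.uniform(0, 0.3, n),
    "dist_park_km": rng.uniform(0.1, 30, n),
    "near_livestock": rng.integers(0, 2, n),
})
df["log_price"] = (11.5 + 0.03 * df.acres + 0.4 * df.share_forest
                   - 0.2 * df.share_crop - 0.01 * df.dist_park_km
                   - 0.05 * df.near_livestock
                   + rng.normal(scale=0.2, size=n))

# Semi-log hedonic specification: coefficients read (approximately) as the
# proportional change in price for a one-unit change in each attribute.
model = smf.ols(
    "log_price ~ acres + share_crop + share_forest + dist_park_km + near_livestock",
    data=df,
).fit()
print(model.summary().tables[1])
```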
Abstract:
A method to estimate DSGE models using the raw data is proposed. The approach links the observables to the model counterparts via a flexible specification which does not require the model-based component to be solely located at business cycle frequencies, allows the non-model-based component to take various time series patterns, and permits model misspecification. Applying standard data transformations induces biases in structural estimates and distortions in the policy conclusions. The proposed approach recovers important model-based features in selected experimental designs. Two widely discussed issues are used to illustrate its practical use.
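As a purely schematic illustration, not the paper's actual specification, the kind of flexible link described can be written as a decomposition of each observable into a model-based and a non-model-based component:

\[
y_{t}^{\text{obs}} \;=\; y_{t}^{\text{nm}} \;+\; y_{t}^{\text{m}}(\theta) \;+\; u_{t},
\]

where $y_{t}^{\text{m}}(\theta)$ is the component implied by the DSGE model's structural parameters $\theta$, $y_{t}^{\text{nm}}$ is a non-model-based component with a flexible time-series form, and $u_{t}$ is measurement error. Filtering or detrending the data before estimation amounts to imposing a fixed, possibly misspecified, form on $y_{t}^{\text{nm}}$.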
Abstract:
It is sometimes argued that central banks influence the private economy in the short run by controlling a specific component of high-powered money, not its total amount. Using a structural VAR approach, this paper evaluates this claim empirically in the context of the Japanese economy. I estimate a model based on the standard view that the central bank controls the total amount of high-powered money, and another model based on the alternative view that it controls only a specific component. It is shown that the former yields much more sensible estimates than the latter.
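A minimal sketch of a reduced-form VAR with statsmodels on synthetic series. The recursive (Cholesky) orthogonalisation shown is only one common identification device and may well differ from the paper's structural scheme; the series names are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
n = 200  # hypothetical quarterly sample

# Illustrative macro series (in logs): output, prices, high-powered money.
data = pd.DataFrame(
    np.cumsum(rng.normal(size=(n, 3)), axis=0),
    columns=["output", "prices", "money"],
)

# Fit the reduced-form VAR, choosing the lag length by AIC.
results = VAR(data).fit(maxlags=4, ic="aic")

# Impulse responses to Cholesky-orthogonalised shocks (ordering as above).
irf = results.irf(periods=12)
print(irf.orth_irfs.shape)  # (horizon+1, n_vars, n_vars)
```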
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area specific estimates of bias and variance; and, b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
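In symbols, the composite estimator for area $a$ described above takes the standard form

\[
\hat{\theta}_{a}^{\mathrm{comp}} \;=\; w_{a}\,\hat{\theta}_{a}^{\mathrm{dir}} + (1 - w_{a})\,\hat{\theta}_{a}^{\mathrm{ind}}, \qquad 0 \le w_{a} \le 1,
\]

and, when the two components are approximately uncorrelated, the weight minimising its mean squared error is $w_{a}^{*} = \mathrm{MSE}(\hat{\theta}_{a}^{\mathrm{ind}}) / \{\mathrm{MSE}(\hat{\theta}_{a}^{\mathrm{dir}}) + \mathrm{MSE}(\hat{\theta}_{a}^{\mathrm{ind}})\}$. In practice these MSEs are unknown, which is precisely why the paper compares weights built from area-specific estimates of bias and variance with weights built from estimates pooled across all areas.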
Abstract:
In this paper we analyze the sensitivity of the labour market decisions of workers close to retirement with respect to the incentives created by public regulations. We improve upon the extensive prior literature on the effect of pension incentives on retirement in two ways. First, by modeling the transitions between employment, unemployment and retirement in a simultaneous manner, paying special attention to the transition from unemployment to retirement (which is particularly important in Spain). Second, by considering the influence of unobserved heterogeneity in the estimation of the effect of our (carefully constructed) incentive variables. Using administrative data, we find that, when properly defined, economic incentives have a strong impact on labour market decisions in Spain. Unemployment regulations are shown to be particularly influential for retirement behaviour, along with the more traditional determinants linked to the pension system. Pension variables also have a major bearing on both workers' re-employment decisions and on the strategic actions of employers. The quantitative impact of the incentives, however, is greatly affected by the existence of unobserved heterogeneity among workers. Its omission leads to sizable biases in the assessment of the sensitivity to economic incentives, a finding that has clear consequences for the credibility of any model-based policy analysis. We confirm the importance of this potential problem in one especially interesting instance: the reform of early retirement provisions undertaken in Spain in 2002. We use a difference-in-differences approach to measure the behavioural reaction to this change, finding a large overestimation when unobserved heterogeneity is not taken into account.
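For the 2002 early-retirement reform, the difference-in-differences logic can be summarised by the canonical two-group, two-period regression (a schematic version, not the paper's exact specification):

\[
y_{it} = \alpha + \beta\,\mathrm{Treated}_{i} + \gamma\,\mathrm{Post}_{t} + \delta\,(\mathrm{Treated}_{i} \times \mathrm{Post}_{t}) + \varepsilon_{it},
\]

where $\delta$ captures the behavioural reaction attributable to the reform. The paper's point is that estimating such responses without allowing for unobserved heterogeneity across workers leads to a large overestimation of the sensitivity to incentives.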
Abstract:
The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN Ant System and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN Ant System is adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
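Schematically (the paper's exact notation is not given in the abstract), the stochastic threshold constraint requires each open facility to capture at least a minimum demand level with a prescribed probability:

\[
\Pr\!\Big(\sum_{i} D_{i}\,x_{ij} \;\ge\; d_{\min}\Big) \;\ge\; \alpha \qquad \text{for every facility } j \text{ with } y_{j} = 1,
\]

where $D_{i}$ is the (random) demand at node $i$, $x_{ij}$ indicates that node $i$ is captured by facility $j$, $y_{j}$ indicates that facility $j$ is open, $d_{\min}$ is the minimum viable demand level and $\alpha$ the required confidence.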
Abstract:
A class of composite estimators of small area quantities that exploit spatial (distance-related) similarity is derived. It is based on a distribution-free model for the areas, but the estimators are aimed to have optimal design-based properties. Composition is applied also to estimate some of the global parameters on which the small area estimators depend. It is shown that the commonly adopted assumption of random effects is not necessary for exploiting the similarity of the districts (borrowing strength across the districts). The methods are applied in the estimation of the mean household sizes and the proportions of single-member households in the counties (comarcas) of Catalonia. The simplest version of the estimators is more efficient than the established alternatives, even though the extent of spatial similarity is quite modest.