920 results for Model-based optimization


Relevance: 90.00%

Abstract:

The detection of Parkinson's disease (PD) in its preclinical stages prior to outright neurodegeneration is essential to the development of neuroprotective therapies and could reduce the number of misdiagnosed patients. However, early diagnosis is currently hampered by a lack of reliable biomarkers. ¹H magnetic resonance spectroscopy (MRS) offers a noninvasive measure of brain metabolite levels that allows the identification of such potential biomarkers. This study used MRS on an ultrahigh-field 14.1 T magnet to explore the striatal metabolic changes occurring in two different rat models of the disease. Rats lesioned by the injection of 6-hydroxydopamine (6-OHDA) in the medial forebrain bundle were used to model a complete nigrostriatal lesion, while a genetic model based on the nigral injection of an adeno-associated viral (AAV) vector coding for human α-synuclein was used to model progressive neurodegeneration and dopaminergic neuron dysfunction, thereby replicating conditions closer to the early pathological stages of PD. MRS measurements in the striatum of the 6-OHDA rats revealed significant decreases in glutamate and N-acetyl-aspartate levels and a significant increase in the GABA level in the ipsilateral hemisphere compared with the contralateral one, while the αSyn-overexpressing rats showed a significant increase in the striatal GABA level only. Therefore, we conclude that MRS measurements of striatal GABA levels could allow for the detection of early nigrostriatal defects prior to outright neurodegeneration and, as such, offer great potential as a sensitive biomarker of presymptomatic PD.

Relevance: 90.00%

Abstract:

AIMS: While successful termination of organized atrial tachycardias by pacing has been observed in patients, single-site rapid pacing has not yet led to conclusive results for the termination of atrial fibrillation (AF). The purpose of this study was to evaluate a novel atrial septal pacing algorithm for the termination of AF in a biophysical model of the human atria. METHODS AND RESULTS: Sustained AF was generated in a model based on human magnetic resonance images and membrane kinetics. Rapid pacing was applied from the septal area following a dual-stage scheme: (i) rapid pacing for 10-30 s at pacing intervals of 62-70% of the AF cycle length (AFCL), (ii) slow pacing for 1.5 s at 180% AFCL, initiated by a single stimulus at 130% AFCL. AF termination success rates were computed. A mean success rate for AF termination of 10.2% was obtained for rapid septal pacing alone. The addition of the slow pacing phase increased this rate to 20.2%. At the optimal pacing cycle length (64% AFCL), AF termination rates of up to 29% were observed. CONCLUSION: The proposed septal pacing algorithm could suppress AF reentries more robustly than classical single-site rapid pacing. Experimental studies are now needed to determine whether similar termination mechanisms and rates can be observed in animals or humans, and in which types of AF this pacing strategy might be most effective.
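
As a hedged illustration of the dual-stage scheme described above, the sketch below builds a stimulus-time schedule from an assumed AF cycle length. The pacing fractions (62-70% AFCL with 64% as the reported optimum, 130% AFCL bridging stimulus, 180% AFCL slow phase) and phase durations come from the abstract; everything else (units, function layout, default rapid-pacing duration) is an assumption, not the authors' simulation code.

```python
# Sketch of the dual-stage septal pacing schedule described in the abstract.
# Assumptions: times in milliseconds, AFCL measured beforehand; this is an
# illustrative schedule generator, not the authors' implementation.

def dual_stage_schedule(afcl_ms, rapid_fraction=0.64, rapid_duration_ms=20_000,
                        bridge_fraction=1.30, slow_fraction=1.80,
                        slow_duration_ms=1_500):
    """Return stimulus times (ms): rapid pacing, one bridging stimulus, slow pacing."""
    times = []
    t = 0.0
    rapid_interval = rapid_fraction * afcl_ms      # 62-70% AFCL; 64% reported as optimal
    while t < rapid_duration_ms:                   # stage (i): 10-30 s of rapid pacing
        times.append(t)
        t += rapid_interval
    t = times[-1] + bridge_fraction * afcl_ms      # single stimulus at 130% AFCL
    times.append(t)
    slow_interval = slow_fraction * afcl_ms        # stage (ii): 1.5 s at 180% AFCL
    end_slow = t + slow_duration_ms
    while t + slow_interval <= end_slow:
        t += slow_interval
        times.append(t)
    return times

# Example: AFCL of 170 ms, as might be measured in simulated AF.
print(dual_stage_schedule(170.0)[:5])
```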

Relevance: 90.00%

Abstract:

Executive Summary. The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the proposed algorithm leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
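
To make the two dominance checks concrete, here is a minimal sketch (not the thesis code) of the comparisons mentioned above: a Kolmogorov-Smirnov test for whether the return distributions differ, and a pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls over a quantile grid) as a second-order stochastic dominance check. The return samples and quantile grid are assumptions for illustration.

```python
# Illustrative check of the two comparisons described above; assumes two
# vectors of realized returns and uses scipy for the Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz(returns, quantiles):
    """Absolute Lorenz curve: for each p, p times the mean of the worst
    p-fraction of returns (a cumulative expected-shortfall sequence)."""
    x = np.sort(returns)
    n = len(x)
    return np.array([p * x[:max(1, int(np.ceil(p * n)))].mean() for p in quantiles])

def second_order_dominates(a, b, quantiles=np.linspace(0.01, 1.0, 100)):
    """In this sketch, a dominates b if its absolute Lorenz curve lies
    pointwise above that of b."""
    return np.all(absolute_lorenz(a, quantiles) >= absolute_lorenz(b, quantiles))

rng = np.random.default_rng(0)
agg = rng.normal(0.006, 0.04, 1000)      # returns of the aggregated strategy (assumed)
single = rng.normal(0.004, 0.05, 1000)   # returns of a single-measure strategy (assumed)
print(ks_2samp(agg, single).pvalue)      # are the two distributions different at all?
print(second_order_dominates(agg, single))
```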

Relevance: 90.00%

Abstract:

Background: Network reconstructions at the cell level are a major development in Systems Biology. However, we are far from fully exploiting their potential. Often, the incremental complexity of the pursued systems exceeds experimental capabilities, or increasingly sophisticated protocols are underutilized, serving merely to refine confidence levels of already established interactions. For metabolic networks, the currently employed confidence scoring system rates reactions discretely according to nested categories of experimental evidence or model-based likelihood. Results: Here, we propose a complementary network-based scoring system that exploits the statistical regularities of a metabolic network as a bipartite graph. As an illustration, we apply it to the metabolism of Escherichia coli. The model is adjusted to the observations to derive connection probabilities between individual metabolite-reaction pairs and, after validation, to assess the reliability of each reaction in probabilistic terms. This network-based scoring system uncovers very specific reactions that could be functionally or evolutionarily important, identifies prominent experimental targets, and enables further confirmation of modeling results. Conclusions: We foresee a wide range of potential applications at different sub-cellular or supra-cellular levels of biological interactions, given the natural bipartivity of many biological networks.
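
The abstract does not specify the statistical model, so the sketch below uses a simple degree-based (configuration-model-like) connection probability as a stand-in to show what scoring metabolite-reaction pairs of a bipartite graph in probabilistic terms could look like. The toy edge list and the probability formula are assumptions, not the paper's method.

```python
# Minimal sketch of scoring metabolite-reaction pairs in a bipartite graph.
# Stand-in model: p_ij = min(1, k_i * d_j / E), where k_i and d_j are the
# degrees of metabolite i and reaction j and E is the number of edges.
from collections import defaultdict

edges = [("glc", "HEX1"), ("atp", "HEX1"), ("atp", "PFK"), ("f6p", "PFK"),
         ("g6p", "HEX1"), ("g6p", "PGI"), ("f6p", "PGI")]   # toy edge list (assumed)

met_deg, rxn_deg = defaultdict(int), defaultdict(int)
for m, r in edges:
    met_deg[m] += 1
    rxn_deg[r] += 1
E = len(edges)

def connection_probability(metabolite, reaction):
    """Expected-edge score for a metabolite-reaction pair under the toy model."""
    return min(1.0, met_deg[metabolite] * rxn_deg[reaction] / E)

# Rank observed links: low-probability links would flag the 'very specific'
# reactions the abstract highlights as functionally or evolutionarily notable.
for m, r in sorted(edges, key=lambda e: connection_probability(*e)):
    print(f"{m:>4} -- {r:<5} p = {connection_probability(m, r):.2f}")
```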

Relevance: 90.00%

Abstract:

Our docking program, Fitted, implemented in our computational platform, Forecaster, has been modified to carry out automated virtual screening of covalent inhibitors. With this modified version of the program, virtual screening and further docking-based optimization of a selected hit led to the identification of potential covalent reversible inhibitors of prolyl oligopeptidase (POP) activity. After visual inspection, a virtual hit molecule together with four analogues was selected for synthesis and made in one to five chemical steps. Biological evaluations on recombinant POP and FAPα enzymes, cell extracts, and living cells demonstrated high potency and selectivity for POP over FAPα and DPPIV. Three compounds even exhibited high-nanomolar inhibitory activities in intact living human cells and acceptable metabolic stability. This small set of molecules also demonstrated that covalent binding and/or geometrical constraints on the ligand/protein complex may lead to an increase in bioactivity.

Relevance: 90.00%

Abstract:

Context: To date, the testosterone/epitestosterone (T/E) ratio has been the main marker for the detection of testosterone (T) misuse in athletes. As this marker can be influenced by a number of confounding factors, additional steroid profile parameters indicating T misuse can provide substantiating evidence of doping with endogenous steroids. The evaluation of a steroid profile is currently based upon population statistics. Since large inter-individual variations exist, a paradigm shift towards subject-based references is ongoing in doping analysis. Objective: To propose new biomarkers for the detection of testosterone misuse in sports using extensive steroid profiling and an adaptive model based upon Bayesian inference. Subjects: Six healthy male volunteers were administered testosterone undecanoate. Population statistics were computed from the steroid profiles of 2014 male Caucasian athletes participating in official sport competition. Design: An extended search for new biomarkers in a comprehensive steroid profile, combined with Bayesian inference techniques as used in the Athlete Biological Passport, resulted in a selection of additional biomarkers that may improve the detection of testosterone misuse in sports. Results: Apart from T/E, four other steroid ratios (6α-OH-androstenedione/16α-OH-dehydroepiandrostenedione, 4-OH-androstenedione/16α-OH-androstenedione, 7α-OH-testosterone/7β-OH-dehydroepiandrostenedione and dihydrotestosterone/5β-androstane-3α,17β-diol) were identified as sensitive urinary biomarkers for T misuse. These new biomarkers were rated according to relative response, parameter stability, detection time and discriminative power. Conclusion: The newly selected biomarkers were found suitable for individual referencing within the concept of the Athlete Biological Passport. The parameters showed improved detection time and discriminative power compared with the T/E ratio. Such biomarkers can support the evidence of doping with small oral doses of testosterone.
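
A minimal sketch of the kind of adaptive, subject-based referencing the abstract refers to, assuming log-normally distributed steroid ratios and a normal-normal (Kalman-style) conjugate update of the athlete's individual level. The prior, variances, thresholds, and baseline values are invented for illustration; this is not the Athlete Biological Passport implementation.

```python
# Sketch of adaptive, subject-based reference limits for a steroid ratio
# (e.g. T/E), assuming log-normal ratios and a normal-normal conjugate update.
# All numerical values are illustrative placeholders.
import math

def update_reference(prior_mean, prior_var, obs_var, observations):
    """Posterior mean/variance of the athlete's individual log-ratio level
    after a sequence of baseline observations."""
    mean, var = prior_mean, prior_var
    for y in observations:
        k = var / (var + obs_var)          # Kalman-style gain
        mean = mean + k * (y - mean)
        var = (1.0 - k) * var
    return mean, var

population_log_mean, population_log_var = math.log(1.0), 0.30   # assumed population prior
within_subject_var = 0.05                                        # assumed analytical + biological variance

baseline = [math.log(r) for r in (0.9, 1.1, 1.0, 0.95)]          # athlete's previous samples (assumed)
post_mean, post_var = update_reference(population_log_mean, population_log_var,
                                        within_subject_var, baseline)

upper_limit = math.exp(post_mean + 2.58 * math.sqrt(post_var + within_subject_var))  # ~99% limit
new_sample = 2.4
print(f"individual upper limit = {upper_limit:.2f}; atypical: {new_sample > upper_limit}")
```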

Relevance: 90.00%

Abstract:

Knowledge of the relationship linking radiation dose and image quality is a prerequisite to any optimization of medical diagnostic radiology. Image quality depends, on the one hand, on physical parameters such as contrast, resolution, and noise and, on the other hand, on characteristics of the observer who assesses the image. While the roles of contrast and resolution are precisely defined and recognized, the influence of image noise is not yet fully understood. Its measurement is often based on imaging uniform test objects, even though real images contain anatomical backgrounds whose statistical nature differs greatly from that of the test objects used to assess system noise. The goal of this study was to demonstrate the importance of variations in background anatomy by quantifying their effect on a series of detection tasks. Several types of mammographic backgrounds and signals were examined by psychophysical experiments in a two-alternative forced-choice detection task. According to hypotheses concerning the strategy used by the human observers, their signal-to-noise ratio was determined. This variable was also computed for a mathematical model based on statistical decision theory. By comparing the theoretical model and the experimental results, the way anatomical structure is perceived was analyzed. The experiments showed that the observer's behavior was highly dependent upon both the system noise and the anatomical background. The anatomy acts partly as a signal, recognizable as such, and partly as pure noise that disturbs the detection process. This dual nature of the anatomy is quantified. It is shown that its effect varies according to its amplitude and the profile of the object being detected. The contribution of the noisy part of the anatomy is, in some situations, much greater than the system noise. Hence, reducing the system noise by increasing the dose will not improve task performance. This observation indicates that the trade-off between dose and image quality might be optimized by accepting a higher system noise, which could allow better resolution, more contrast, or less dose.
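
The link between an observer's signal-to-noise ratio and performance in a two-alternative forced-choice task can be made explicit with the standard relation P(correct) = Φ(d'/√2). The sketch below is a generic statistical-decision-theory illustration with invented d' values, not the study's observer model or data.

```python
# Standard 2AFC relation between detectability d' (the observer's SNR in this
# framing) and proportion correct: P(correct) = Phi(d' / sqrt(2)).
# Purely illustrative numbers; not the psychophysical data of the study.
import math

def pc_2afc(d_prime):
    """Proportion correct in a two-alternative forced-choice detection task."""
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), evaluated at x = d' / sqrt(2)
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))

for d in (0.5, 1.0, 2.0):
    print(f"d' = {d:.1f}  ->  P(correct) = {pc_2afc(d):.2f}")
```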

Relevance: 90.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
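
The fixed-lid planar water surface approximation mentioned above can be illustrated in a few lines: the water surface is treated as a plane sloping gently downstream, so local flow depth is simply that plane minus the bed elevation. The slope, datum, and bed elevations below are invented placeholders, not Rio Parana survey data or the paper's RC code.

```python
# Sketch of the fixed-lid planar water-surface approximation used in RC models:
# depth = (planar water-surface elevation) - (bed elevation), clipped at zero.
# Slope, datum, and bed elevations are illustrative, not survey data.
import numpy as np

def planar_depth(bed_elevation, x_downstream, ws_elevation_at_inlet, water_surface_slope):
    """Depth field from a planar water surface sloping downstream along x."""
    water_surface = ws_elevation_at_inlet - water_surface_slope * x_downstream
    return np.clip(water_surface - bed_elevation, 0.0, None)

# Toy 1D long profile: 30 km reach, very low slope typical of a large sand-bed river.
x = np.linspace(0.0, 30_000.0, 7)                              # m downstream
bed = np.array([58.0, 57.5, 56.0, 57.0, 55.5, 56.5, 55.0])     # m, assumed bed elevations
depths = planar_depth(bed, x, ws_elevation_at_inlet=65.0, water_surface_slope=4e-5)
print(depths)                                                   # depths in metres
```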

Relevance: 90.00%

Abstract:

The objective of this work was to evaluate the water flow computer model, WATABLE, using experimental field observations on water table management plots from a site located near Hastings, FL, USA. The experimental field had scale drainage systems with provisions for subirrigation using buried microirrigation and conventional seepage irrigation systems. Potato (Solanum tuberosum L.) growing seasons from 1996 and 1997 were used to simulate the hydrology of the area. Water table levels, precipitation, irrigation and runoff volumes were continuously monitored. The model simulated the water movement from a buried microirrigation line source and the response of the water table to irrigation, precipitation, evapotranspiration, and deep percolation. The model was calibrated and verified by comparing simulated results with experimental field observations. The model performed very well in simulating seasonal runoff, irrigation volumes, and water table levels during crop growth. The two-dimensional model can be used to investigate different irrigation strategies involving water table management control. Applications of the model include optimization of the water table depth for each growth stage, and of the duration, frequency, and rate of irrigation.

Relevance: 90.00%

Abstract:

This paper presents a control strategy for blood glucose (BG) level regulation in type 1 diabetic patients. To design the controller, a model-based predictive control scheme has been applied to a newly developed diabetic patient model. The controller is provided with a feedforward loop to improve meal compensation, a gain-scheduling scheme to account for different BG levels, and an asymmetric cost function to reduce hypoglycemic risk. A simulation environment approved for testing artificial pancreas control algorithms has been used to test the controller. The simulation results show good controller performance in fasting conditions and meal disturbance rejection, and robustness against model-patient mismatch and errors in meal estimation.
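
A minimal sketch of the asymmetric-cost idea mentioned above: deviations below the glucose target (toward hypoglycemia) are penalized more heavily than deviations above it. The target value, weights, and quadratic form are assumptions for illustration; the paper's exact cost function and tuning are not given in the abstract.

```python
# Sketch of an asymmetric stage cost for BG regulation: excursions below the
# target (hypoglycemia risk) are weighted more than excursions above it.
# Target and weights are illustrative, not the controller's tuned values.

def asymmetric_cost(bg_mg_dl, target=110.0, w_low=10.0, w_high=1.0):
    """Quadratic penalty with a larger weight on the hypoglycemic side."""
    error = bg_mg_dl - target
    weight = w_low if error < 0 else w_high
    return weight * error**2

# A predictive controller would sum this cost over its prediction horizon and
# choose the insulin sequence minimizing the total (plus an input penalty).
for bg in (70, 100, 110, 150, 200):
    print(bg, asymmetric_cost(bg))
```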

Relevance: 90.00%

Abstract:

Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider to be particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared with more conventional approaches of this kind, our approach provides significant advancements in the way the larger-scale deterministic information resolved by the hydrogeophysical data is accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity logs and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
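
The abstract describes the integration as a Monte-Carlo-type optimization; the sketch below shows a generic simulated-annealing-style acceptance step of the kind such methods use, conditioning a toy porosity field to hard log values and a smooth georadar-derived trend. The objective terms, weights, data arrays, and cooling schedule are placeholders; the study's actual algorithm is not detailed in the abstract.

```python
# Generic sketch of a Monte-Carlo (simulated-annealing-style) update for a
# stochastic porosity field conditioned to (i) porosity-log values and
# (ii) a smooth larger-scale trend from crosshole georadar. All data and
# weights are placeholders; this is not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(1)

def objective(field, log_idx, log_vals, radar_trend, w_log=10.0, w_radar=1.0):
    """Mismatch to hard log data plus mismatch to the radar-derived trend."""
    return (w_log * np.sum((field[log_idx] - log_vals) ** 2)
            + w_radar * np.sum((field - radar_trend) ** 2))

def metropolis_step(field, temperature, **kw):
    """Perturb one cell and accept/reject with the Metropolis rule."""
    trial = field.copy()
    i = rng.integers(field.size)
    trial[i] += rng.normal(0.0, 0.01)
    d = objective(trial, **kw) - objective(field, **kw)
    if d < 0 or rng.random() < np.exp(-d / temperature):
        return trial
    return field

radar_trend = np.linspace(0.20, 0.30, 50)        # assumed larger-scale porosity trend
log_idx, log_vals = np.array([5, 25, 45]), np.array([0.22, 0.27, 0.29])
field = radar_trend + rng.normal(0.0, 0.02, 50)  # initial realization

for t in np.geomspace(1e-2, 1e-5, 2000):         # simple cooling schedule
    field = metropolis_step(field, t, log_idx=log_idx, log_vals=log_vals,
                            radar_trend=radar_trend)
print(field[log_idx])                            # should now honour the log data closely
```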

Relevance: 90.00%

Abstract:

Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps discover new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
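
A minimal sketch of model-based (suitability-stratified) sampling versus simple random sampling, of the kind compared above. The suitability map, the true occupancy, the site budget, and the rule "visit the most suitable sites first" are all simulated placeholders, not the study's design or its distribution model.

```python
# Sketch comparing suitability-stratified sampling with simple random sampling
# for a rare species. Suitability map and true occupancy are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_sites, budget = 10_000, 200

suitability = rng.beta(0.5, 5.0, n_sites)                 # predicted habitat suitability (assumed)
truly_present = rng.random(n_sites) < 0.02 * suitability / suitability.mean()  # rare occupancy

def survey(site_idx):
    """Number of detections among the visited sites."""
    return int(truly_present[site_idx].sum())

random_sites = rng.choice(n_sites, budget, replace=False)
stratified_sites = np.argsort(suitability)[-budget:]       # visit the most suitable sites first

print("random sampling detections:     ", survey(random_sites))
print("model-based sampling detections:", survey(stratified_sites))
```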

Relevance: 90.00%

Abstract:

We propose FAM-MDR, a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power in most of the simulated scenarios considered. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is, by construction, rather conservative in comparison. Furthermore, the simulations show that correcting for lower-order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it with PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple-testing issue more correctly, has increased power, and efficiently uses all available information.

Relevance: 90.00%

Abstract:

The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions to spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study from the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
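
A compressed sketch of the MLRSS idea described above: an ML regressor (here support vector regression, one of the two algorithm families named) models the long-range spatial trend, and the residuals are then simulated and added back. The residual step below is a crude unconditional stand-in for sequential simulation, the data are synthetic, and the hyperparameters are arbitrary; this is not the paper's implementation.

```python
# Compressed sketch of the ML-residuals-sequential-simulation (MLRSS) idea:
# 1) fit an ML model (SVR) to the long-range spatial trend,
# 2) simulate the spatially variable residuals and add them back.
# The residual simulation here is a crude stand-in (resampling residuals),
# not a full sequential simulation; data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, (300, 2))                        # sampling locations
trend_true = np.sin(coords[:, 0] / 15.0) + 0.05 * coords[:, 1]  # long-range trend (synthetic)
obs = trend_true + rng.normal(0.0, 0.2, 300)                   # noisy observations

svr = SVR(kernel="rbf", C=10.0, gamma=0.05).fit(coords, obs)   # step 1: trend model
residuals = obs - svr.predict(coords)

def simulate(coords_new, n_realizations=5):
    """Trend prediction plus resampled residuals as a cheap uncertainty proxy."""
    trend = svr.predict(coords_new)
    draws = rng.choice(residuals, size=(n_realizations, len(coords_new)))
    return trend + draws                                        # step 2: residual simulation

grid = np.array([[10.0, 20.0], [50.0, 50.0], [90.0, 80.0]])
print(simulate(grid).std(axis=0))                               # spread across realizations
```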

Relevance: 90.00%

Abstract:

The goal of this work was to develop an automatic optimization system for a small combined heat and power (CHP) plant owned by an energy company. The need for optimization arises from the energy company's electricity procurement from the power exchange, the purchase price of gas, the local electricity and heat loads at the site, and other factors affecting the plant's economics. In the future, the optimization system is intended to manage several distributed energy production units centrally. An algorithm was developed that optimizes the plant's economics using operating modes that adjust the electric power output and a direct electric power setpoint. The benefits produced by the developed algorithm were assessed using historical measurement data from the CHP plant of Harjun oppimiskeskus. For optimizing the operation of CHP plants, a system based on centralized computation and distributed control was created. It controls the CHP plants in real time and forecasts the plant's future operation with a time-series model based on historical data. The functionality of the optimization system and the benefit obtained were evaluated at the Harjun oppimiskeskus CHP plant by comparing the realized benefit calculated from measurements with the predicted benefit calculated by the optimization system.
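
To make the economic driver concrete, a toy hourly dispatch rule of the kind such an optimizer embodies is sketched below: run the CHP unit at full electric output when the spot price exceeds the marginal cost implied by the gas price, otherwise follow the local heat load. All prices, efficiencies, and loads are invented for illustration; the thesis algorithm itself is not reproduced in the abstract.

```python
# Toy hourly dispatch rule illustrating the economic trade-off described above:
# generate electricity when the spot price beats the gas-implied marginal cost,
# otherwise run only as hard as the local heat load requires.
# All numbers (prices, efficiencies, loads) are invented for illustration.

def dispatch(spot_eur_mwh, gas_eur_mwh_fuel, heat_load_mw,
             el_eff=0.35, th_eff=0.50, p_el_max=1.0):
    """Return the electric power setpoint (MW) for one hour."""
    marginal_cost = gas_eur_mwh_fuel / el_eff            # EUR per MWh of electricity
    if spot_eur_mwh >= marginal_cost:
        return p_el_max                                   # sell power at full output
    # otherwise run heat-led: electric output tied to the heat demand
    fuel_for_heat = heat_load_mw / th_eff
    return min(p_el_max, el_eff * fuel_for_heat)

hours = [(120.0, 30.0, 0.6), (25.0, 30.0, 0.6), (25.0, 30.0, 0.2)]
for spot, gas, heat in hours:
    print(f"spot {spot:>5.1f} EUR/MWh  ->  P_el = {dispatch(spot, gas, heat):.2f} MW")
```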