947 results for stochastic search variable selection
Abstract:
This paper addresses the self-scheduling problem of a thermal power producer taking part in a pool-based electricity market as a price-taker, with bilateral contracts and emission constraints. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies. (C) 2014 Elsevier Ltd. All rights reserved.
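The scenario handling described above can be illustrated with a minimal sketch of greedy fast-forward scenario reduction, a standard technique for trimming a simulated scenario set before solving the stochastic program (this is an illustration under stated assumptions, not the paper's own code; the price values are hypothetical):

```python
import math

def fast_forward_reduction(scenarios, probs, keep):
    """Greedy fast-forward scenario reduction: repeatedly keep the scenario
    that minimizes the probability-weighted distance from each discarded
    scenario to its nearest kept scenario."""
    n = len(scenarios)
    dist = [[abs(scenarios[i] - scenarios[j]) for j in range(n)] for i in range(n)]
    kept = []
    remaining = set(range(n))
    while len(kept) < keep:
        best, best_cost = None, math.inf
        for c in remaining:
            cost = sum(probs[j] * min(dist[j][k] for k in kept + [c])
                       for j in remaining if j != c)
            if cost < best_cost:
                best, best_cost = c, cost
        kept.append(best)
        remaining.remove(best)
    # redistribute the probability of each discarded scenario to its nearest kept one
    new_probs = {k: probs[k] for k in kept}
    for j in remaining:
        nearest = min(kept, key=lambda k: dist[j][k])
        new_probs[nearest] += probs[j]
    return kept, new_probs

# hypothetical simulated electricity-price scenarios (EUR/MWh), equally probable
prices = [42.0, 43.5, 41.8, 60.2, 59.7, 44.1]
p = [1 / 6] * 6
kept, new_p = fast_forward_reduction(prices, p, keep=2)
```

The reduced set keeps one representative per price cluster, with probabilities reassigned so they still sum to one.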
Abstract:
Materials selection is a matter of great importance to engineering design, and software tools are valuable to inform decisions in the early stages of product development. However, when a set of alternative materials is available for the different parts a product is made of, the question of what optimal material mix to choose for a group of parts is not trivial. The engineer/designer therefore goes about this in a part-by-part procedure. Optimizing each part per se can lead to a globally sub-optimal solution from the product point of view. An optimization procedure to deal with products with multiple parts, each with discrete design variables, and able to determine the optimal solution assuming different objectives is therefore needed. To solve this multiobjective optimization problem, a new routine based on the Direct MultiSearch (DMS) algorithm is created. Results from the Pareto front can help the designer to align his/her materials selection for a complete set of materials with product attribute objectives, depending on the relative importance of each objective.
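The non-dominated filtering behind any Pareto front can be sketched as follows; the two-objective material-mix data are hypothetical, and DMS itself is a far richer search procedure than this plain dominance filter:

```python
def dominates(a, b):
    """a dominates b (minimization) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (cost, mass) objective pairs for candidate material mixes
designs = [(3.0, 2.0), (2.0, 3.0), (2.5, 2.5), (4.0, 4.0)]
front = pareto_front(designs)  # (4.0, 4.0) is dominated and drops out
```

The designer then picks a point on the front according to the relative importance of each objective.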
Abstract:
Optimization is concerned with finding the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem. Local search techniques, by contrast, are more widely used since they seek a locally minimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering sets of solutions for the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains) which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we seek a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics such as penalty calculation and step length. We implement two main local search algorithms. We use “Procure”, which is a constraint reasoning and global optimization framework, to implement our techniques, and then present our results over a set of benchmarks.
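A minimal sketch of the local step: fixed-step steepest descent projected onto an interval box, standing in for the local search run inside each pruned CCSP box (the actual framework uses richer step-length and penalty rules; the objective below is hypothetical):

```python
def project(x, box):
    """Clip a point onto an interval box given as [(lo, hi), ...]."""
    return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, box)]

def steepest_descent(grad, x0, box, step=0.1, iters=200):
    """Fixed-step steepest descent restricted to a box: move against the
    gradient, then project back onto the box after every step."""
    x = project(x0, box)
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], box)
    return x

# minimize f(x, y) = (x - 1)^2 + (y + 2)^2 inside the box [0, 3] x [0, 3]
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
xmin = steepest_descent(grad, [2.0, 2.0], [(0.0, 3.0), (0.0, 3.0)])
# the unconstrained minimum (1, -2) projects onto the box at (1, 0)
```

Running this inside every box that survives pruning is the combination the abstract describes: the interval method guarantees coverage, the descent refines each candidate region.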
Abstract:
A search is performed for Higgs bosons produced in association with top quarks using the diphoton decay mode of the Higgs boson. Selection requirements are optimized separately for leptonic and fully hadronic final states from the top quark decays. The dataset used corresponds to an integrated luminosity of 4.5 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 7 TeV and 20.3 fb⁻¹ at 8 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess over the background prediction is observed and upper limits are set on the tt̄H production cross section. The observed exclusion upper limit at 95% confidence level is 6.7 times the predicted Standard Model cross section value. In addition, limits are set on the strength of the Yukawa coupling between the top quark and the Higgs boson, taking into account the dependence of the tt̄H and tH cross sections as well as the H→γγ branching fraction on the Yukawa coupling. Lower and upper limits at 95% confidence level are set at −1.3 and +8.0 times the Yukawa coupling strength in the Standard Model.
Abstract:
A low-background inclusive search for new physics in events with same-sign dileptons is presented. The search uses proton-proton collisions corresponding to 20.3 fb⁻¹ of integrated luminosity taken in 2012 at a centre-of-mass energy of 8 TeV with the ATLAS detector at the LHC. Pairs of isolated leptons with the same electric charge and large transverse momenta of the type e±e±, e±μ±, and μ±μ± are selected and their invariant mass distribution is examined. No excess of events above the expected level of Standard Model background is found. The results are used to set upper limits on the cross sections for processes beyond the Standard Model. Limits are placed as a function of the dilepton invariant mass within a fiducial region corresponding to the signal event selection criteria. Exclusion limits are also derived for a specific model of doubly charged Higgs boson production.
Abstract:
This study addresses the issue of the presence of a unit root on the growth rate estimation by the least-squares approach. We argue that when the log of a variable contains a unit root, i.e., it is not stationary, then the growth rate estimate from the log-linear trend model is not a valid representation of the actual growth of the series. In fact, under such a situation, we show that the growth of the series is the cumulative impact of a stochastic process. As such, the growth estimate from such a model is just a spurious representation of the actual growth of the series, which we refer to as a “pseudo growth rate”. Hence such an estimate should be interpreted with caution. On the other hand, we highlight that the statistical representation of a series as containing a unit root is not easy to separate from an alternative description which represents the series as fundamentally deterministic (no unit root) but containing a structural break. In search of a way around this, our study presents a survey of both the theoretical and empirical literature on unit root tests that take into account possible structural breaks. We show that when a series is trend-stationary with breaks, it is possible to use the log-linear trend model to obtain well-defined estimates of growth rates for sub-periods which are valid representations of the actual growth of the series. Finally, to highlight the above issues, we carry out an empirical application whereby we estimate meaningful growth rates of real wages per worker for 51 industries from the organised manufacturing sector in India for the period 1973-2003, which are not only unbiased but also asymptotically efficient. We use these growth rate estimates to highlight the evolving inter-industry wage structure in India.
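The growth-rate estimate from the log-linear trend model is simply the OLS slope of log(y_t) on t. A minimal sketch with a synthetic, exactly trend-stationary series (the data are illustrative, not from the paper):

```python
import math

def loglinear_growth_rate(series):
    """OLS slope of log(y_t) on t = 0, 1, ...; under trend stationarity
    this approximates the per-period (continuously compounded) growth rate."""
    logs = [math.log(y) for y in series]
    n = len(logs)
    t_mean = (n - 1) / 2
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(logs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# series growing at exactly 5% per period: the slope recovers log(1.05)
series = [100 * 1.05 ** t for t in range(10)]
g = loglinear_growth_rate(series)
```

For a unit-root series the same slope would still be computable but, as the abstract argues, it would be a "pseudo growth rate" with no structural meaning.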
Abstract:
This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
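The kernel-smoothed conditional moment at the core of the estimator can be illustrated with a minimal Nadaraya-Watson sketch over a long simulated sample; the Gaussian kernel, the data-generating process and the bandwidth below are all illustrative assumptions:

```python
import math
import random

def kernel_conditional_mean(xs, ys, x0, bandwidth):
    """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel:
    a weighted average of the simulated ys, weighted by how close each
    simulated x is to the conditioning point x0."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# hypothetical long simulation where the true conditional mean is E[Y | X = x] = 2x
random.seed(0)
xs = [random.uniform(0, 1) for _ in range(5000)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
m = kernel_conditional_mean(xs, ys, 0.5, bandwidth=0.05)  # estimates E[Y | X = 0.5] = 1.0
```

Because the estimate only needs (x, y) pairs from an unconditional simulation, the model never has to be simulated subject to the conditioning information, which is exactly the point the abstract makes.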
Abstract:
The availability of rich firm-level data sets has recently led researchers to uncover new evidence on the effects of trade liberalization. First, trade openness forces the least productive firms to exit the market. Second, it induces surviving firms to increase their innovation efforts and, third, it increases the degree of product market competition. In this paper we propose a model aimed at providing a coherent interpretation of these findings. We introduce firm heterogeneity into an innovation-driven growth model, where incumbent firms operating in oligopolistic industries perform cost-reducing innovations. In this framework, trade liberalization leads to higher product market competition, lower markups and higher quantity produced. These changes in markups and quantities, in turn, promote innovation and productivity growth through a direct competition effect, based on the increase in the size of the market, and a selection effect, produced by the reallocation of resources towards more productive firms. Calibrated to match US aggregate and firm-level statistics, the model predicts that a 10 percent reduction in variable trade costs reduces markups by 1.15 percent, firm survival probabilities by 1 percent, and induces an increase in productivity growth of about 13 percent. More than 90 percent of the trade-induced growth increase can be attributed to the selection effect.
Abstract:
It is generally accepted that most plant populations are locally adapted. Yet, understanding how environmental forces give rise to adaptive genetic variation is a challenge in conservation genetics and crucial to the preservation of species under rapidly changing climatic conditions. Environmental variation, phylogeographic history, and population demographic processes all contribute to spatially structured genetic variation, however few current models attempt to separate these confounding effects. To illustrate the benefits of using a spatially-explicit model for identifying potentially adaptive loci, we compared outlier locus detection methods with a recently-developed landscape genetic approach. We analyzed 157 loci from samples of the alpine herb Gentiana nivalis collected across the European Alps. Principal coordinates of neighbour matrices (PCNM), eigenvectors that quantify multi-scale spatial variation present in a data set, were incorporated into a landscape genetic approach relating AFLP frequencies with 23 environmental variables. Four major findings emerged. 1) Fifteen loci were significantly correlated with at least one predictor variable (adjusted R² > 0.5). 2) Models including PCNM variables identified eight more potentially adaptive loci than models run without spatial variables. 3) When compared to outlier detection methods, the landscape genetic approach detected four of the same loci plus 11 additional loci. 4) Temperature, precipitation, and solar radiation were the three major environmental factors driving potentially adaptive genetic variation in G. nivalis. Techniques presented in this paper offer an efficient method for identifying potentially adaptive genetic variation and associated environmental forces of selection, providing an important step forward for the conservation of non-model species under global change.
Abstract:
BACKGROUND: Vascular-endothelial-growth-factor (VEGF) is a key mediator of angiogenesis. VEGF-targeting therapies have shown significant benefits and been successfully integrated in routine clinical practice for other types of cancer, such as metastatic colorectal cancer. By contrast, individual trial results in metastatic breast cancer (MBC) are highly variable and their value is controversial. OBJECTIVES: To evaluate the benefits (in progression-free survival (PFS) and overall survival (OS)) and harms (toxicity) of VEGF-targeting therapies in patients with hormone-refractory or hormone-receptor negative metastatic breast cancer. SEARCH METHODS: Searches of CENTRAL, MEDLINE, EMBASE, the Cochrane Breast Cancer Group's Specialised Register, registers of ongoing trials and proceedings of conferences were conducted in January and September 2011, starting in 2000. Reference lists were scanned and members of the Cochrane Breast Cancer Group, experts and manufacturers of relevant drugs were contacted to obtain further information. No language restrictions were applied. SELECTION CRITERIA: Randomised controlled trials (RCTs) to evaluate treatment benefit and non-randomised studies in the routine oncology practice setting to evaluate treatment harms. DATA COLLECTION AND ANALYSIS: We performed data collection and analysis according to the published protocol. Individual patient data were sought but not provided. Therefore, the meta-analysis had to be based on published data. Summary statistics for the primary endpoint (PFS) were hazard ratios (HRs). MAIN RESULTS: We identified seven RCTs, one register, and five ongoing trials from a total of 347 references. The published trials for VEGF-targeting drugs in MBC were limited to bevacizumab. Four trials, including a total of 2886 patients, were available for the comparison of first-line chemotherapy, with versus without bevacizumab.
PFS (HR 0.67; 95% confidence interval (CI) 0.61 to 0.73) and response rate were significantly better for patients treated with bevacizumab, with moderate heterogeneity regarding the magnitude of the effect on PFS. For second-line chemotherapy, a smaller, but still significant benefit in terms of PFS could be demonstrated for patients treated with bevacizumab (HR 0.85; 95% CI 0.73 to 0.98), as well as a benefit in tumour response. However, OS did not differ significantly in either first-line (HR 0.93; 95% CI 0.84 to 1.04) or second-line therapy (HR 0.98; 95% CI 0.83 to 1.16). Quality of life (QoL) was evaluated in four trials but results were published for only two of these, with no relevant impact. Subgroup analysis indicated a significantly greater benefit for patients with previous (taxane) chemotherapy and patients with hormone-receptor-negative status. Regarding toxicity, data from RCTs and registry data were consistent and in line with the known toxicity profile of bevacizumab. While significantly higher rates of adverse events (AEs) grade III/IV (odds ratio (OR) 1.77; 95% CI 1.44 to 2.18) and serious adverse events (SAEs) (OR 1.41; 95% CI 1.13 to 1.75) were observed in patients treated with bevacizumab, rates of treatment-related deaths were lower in patients treated with bevacizumab (OR 0.60; 95% CI 0.36 to 0.99). AUTHORS' CONCLUSIONS: The overall patient benefit from adding bevacizumab to first- and second-line chemotherapy in metastatic breast cancer can at best be considered as modest. It is dependent on the type of chemotherapy used and limited to a prolongation of PFS and response rates in both first- and second-line therapy, both surrogate parameters. In contrast, bevacizumab has no significant impact on the patient-related secondary outcomes of OS or QoL, which indicate a direct patient benefit. For this reason, the clinical value of bevacizumab for metastatic breast cancer remains controversial.
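The pooling of per-trial hazard ratios behind summary figures like those above can be sketched with a standard inverse-variance fixed-effect meta-analysis on the log scale (this is not the review's actual computation, and the per-trial figures below are hypothetical):

```python
import math

def pooled_hazard_ratio(hrs, cis):
    """Fixed-effect inverse-variance pooling of log hazard ratios.
    Each 95% CI yields a standard error as (log(upper) - log(lower)) / (2 * 1.96);
    trials are weighted by the inverse of their variance."""
    log_hrs = [math.log(h) for h in hrs]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in cis]
    ws = [1 / se ** 2 for se in ses]
    pooled = sum(w * lh for w, lh in zip(ws, log_hrs)) / sum(ws)
    se_pooled = math.sqrt(1 / sum(ws))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# hypothetical per-trial PFS hazard ratios and 95% CIs
hrs = [0.60, 0.70, 0.75]
cis = [(0.45, 0.80), (0.55, 0.89), (0.60, 0.94)]
hr, lo, hi = pooled_hazard_ratio(hrs, cis)
```

Pooling on the log scale keeps the HR multiplicative structure; the heterogeneity the review notes would, in practice, motivate checking a random-effects model as well.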
Abstract:
Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, not only continuous scientific advances in underwater robotics but also the increasing number and complexity of subsea missions call for the automation of submarine processes. This paper proposes a high-level control system for solving the action selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search methods (RLDPS) for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility with simulated experiments using the model of our underwater robot URIS in a target following task.
Abstract:
This paper proposes a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot. Although the dominant approach, when using RL, has been to apply value function based algorithms, the system detailed here is characterized by the use of direct policy search methods. Rather than approximating a value function, these methodologies approximate a policy using an independent function approximator with its own parameters, trying to maximize the future expected reward. The policy based algorithm presented in this paper is used for learning the internal state/action mapping of a behavior. In this preliminary work, we demonstrate its feasibility with simulated experiments using the underwater robot GARBI in a target reaching task.
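The idea of adjusting policy parameters directly along the gradient of expected reward can be shown with a toy REINFORCE-style sketch on a two-armed bandit (this stands in for the general technique, not for the GARBI controller; rewards and hyperparameters are hypothetical):

```python
import math
import random

def reinforce_bandit(rewards, iters=2000, lr=0.1, seed=1):
    """Tiny direct policy search in the REINFORCE style: the policy is a
    Bernoulli over two actions parameterized by theta through a sigmoid,
    and theta follows the score-function gradient of the sampled reward."""
    random.seed(seed)
    theta = 0.0
    for _ in range(iters):
        p = 1 / (1 + math.exp(-theta))   # probability of choosing action 1
        a = 1 if random.random() < p else 0
        r = rewards[a]
        grad = (a - p) * r               # d log pi(a) / d theta, scaled by reward
        theta += lr * grad
    return 1 / (1 + math.exp(-theta))    # learned probability of action 1

p1 = reinforce_bandit(rewards=[0.2, 1.0])  # drifts toward the better arm
```

No value function is ever estimated: the parameter update uses only the sampled action, its probability, and the reward, which is the defining feature of direct policy search.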
Abstract:
Dilatation of the ascending aorta (AAD) is a prevalent aortopathy that frequently occurs in association with bicuspid aortic valve (BAV), the most common human congenital cardiac malformation. The molecular mechanisms leading to AAD associated with BAV are still poorly understood. The search for differentially expressed genes in diseased tissue by quantitative real-time PCR (qPCR) is an invaluable tool to fill this gap. However, studies dedicated to identifying reference genes necessary for normalization of mRNA expression in aortic tissue are scarce. In this report, we evaluate the qPCR expression of six candidate reference genes in tissue from the ascending aorta of 52 patients with a variety of clinical and demographic characteristics, normal and dilated aortas, and different morphologies of the aortic valve (normal aorta and normal valve n = 30; dilated aorta and normal valve n = 10; normal aorta and BAV n = 4; dilated aorta and BAV n = 8). The expression stability of the candidate reference genes was determined with three statistical algorithms, GeNorm, NormFinder and BestKeeper. The expression analyses showed that the most stable genes for the three algorithms employed were CDKN1β, POLR2A and CASC3, independently of the structure of the aorta and the valve morphology. In conclusion, we propose the use of these three genes as reference genes for mRNA expression analysis in human ascending aorta. However, we suggest searching for specific reference genes when conducting qPCR experiments with a new cohort of samples.
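The stability-ranking idea can be sketched with BestKeeper-style descriptive statistics: rank candidate genes by the variability of their quantification-cycle (Cq) values across samples. The gene names and Cq values below are hypothetical illustrations, not the study's data:

```python
def stability_rank(expression):
    """Rank candidate reference genes by the coefficient of variation of
    their Cq values across samples: lower CV means more stable expression."""
    ranked = []
    for gene, values in expression.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        cv = (var ** 0.5) / mean
        ranked.append((cv, gene))
    return [gene for cv, gene in sorted(ranked)]

# hypothetical Cq values per candidate gene across four aorta samples
cq = {
    "CDKN1B": [21.0, 21.1, 20.9, 21.0],
    "POLR2A": [19.5, 19.6, 19.4, 19.5],
    "GAPDH":  [18.0, 19.5, 17.2, 20.1],
}
order = stability_rank(cq)  # most stable first; the variable gene ranks last
```

GeNorm and NormFinder use different stability measures (pairwise variation and a variance-components model, respectively), which is why the study cross-checks all three before proposing reference genes.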
Abstract:
Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, opposing empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that includes the possibility for searching individuals to learn about the distribution and in particular to update mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both time needed to make a decision and average value of mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help in delineating the ecological-behavior context in which learning strategies may spread.
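The classical benchmark against which the learning strategy is compared, a searcher who knows the quality distribution a priori, can be sketched by backward induction: accept a candidate whenever its value exceeds the expected value of continuing the search. The discrete distribution below is a hypothetical illustration; the paper's model additionally lets the searcher update the distribution's mean and variance:

```python
def reservation_values(values, probs, periods):
    """Backward induction for sequential choice with a known discrete quality
    distribution: the acceptance threshold in each period is the expected
    value of continuing to search with the remaining periods."""
    continuation = 0.0  # value of searching with no periods left
    thresholds = []
    for _ in range(periods):
        thresholds.append(continuation)
        continuation = sum(p * max(v, continuation)
                           for v, p in zip(values, probs))
    return list(reversed(thresholds))  # thresholds[t] = threshold in period t

# hypothetical mate-quality distribution: values 1, 2, 3 equally likely
th = reservation_values([1.0, 2.0, 3.0], [1 / 3, 1 / 3, 1 / 3], periods=3)
# thresholds fall as fewer periods remain; in the final period anything is accepted
```

The declining thresholds capture the time-pressure effect the abstract describes: when a decision must be made quickly, the searcher becomes less choosy, which also narrows the payoff gap between fixed and learned strategies.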