871 results for Optimal Redundancy
Abstract:
Background The optimal defence hypothesis (ODH) predicts that tissues that contribute most to a plant's fitness and have the highest probability of being attacked will be the parts best defended against biotic threats, including herbivores. In general, young sink tissues and reproductive structures show stronger induced defence responses after attack from pathogens and herbivores and contain higher basal levels of specialized defensive metabolites than other plant parts. However, the underlying physiological mechanisms responsible for these developmentally regulated defence patterns remain unknown. Scope This review summarizes current knowledge about optimal defence patterns in above- and below-ground plant tissues, including information on basal and induced defence metabolite accumulation, defensive structures and their regulation by jasmonic acid (JA). Physiological regulations underlying developmental differences of tissues with contrasting defence patterns are highlighted, with a special focus on the role of classical plant growth hormones, including auxins, cytokinins, gibberellins and brassinosteroids, and their interactions with the JA pathway. By synthesizing recent findings about the dual roles of these growth hormones in plant development and defence responses, this review aims to provide a framework for new discoveries on the molecular basis of patterns predicted by the ODH. Conclusions Almost four decades after its formulation, we are just beginning to understand the underlying molecular mechanisms responsible for the patterns of defence allocation predicted by the ODH. A requirement for future advances will be to understand how developmental and defence processes are integrated.
Abstract:
Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter (`sample size'), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
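As a concrete reading of this model, the sketch below (Python) discretizes dY(t) = n^{1/2} f(t) dt + dW(t) into per-bin observations of f and computes an isotonic least-squares estimate via the pool-adjacent-violators algorithm. The grid size, test function and the PAVA fit are illustrative choices of mine, not the authors' multiscale construction; the paper's contribution is the confidence band with guaranteed coverage built from multiscale tests, not this point estimate.

```python
import numpy as np

def simulate_increments(f, n, m=500, seed=0):
    """Discretize dY(t) = n^{1/2} f(t) dt + dW(t) on m bins of [0, 1].
    Returns grid points and crude per-bin estimates of f."""
    rng = np.random.default_rng(seed)
    t = (np.arange(m) + 0.5) / m
    dt = 1.0 / m
    dY = np.sqrt(n) * f(t) * dt + rng.normal(scale=np.sqrt(dt), size=m)
    return t, dY / (np.sqrt(n) * dt)

def pava(y):
    """Pool-adjacent-violators: nondecreasing least-squares fit of y."""
    vals, wts = [], []          # block means and block sizes
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # merge violating blocks
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            del vals[-1], wts[-1]
    return np.repeat(vals, wts)

t, y = simulate_increments(lambda s: s ** 2, n=400)   # true f(t) = t^2 is isotonic
f_hat = pava(y)                                       # monotone point estimate of f
```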
Abstract:
OBJECTIVES The aim of this study was to optimise dexmedetomidine and alfaxalone dosing, for intramuscular administration with butorphanol, to perform minor surgeries in cats. METHODS Initially, cats were assigned to one of five groups, each composed of six animals and receiving, in addition to 0.3 mg/kg butorphanol intramuscularly, one of the following: (A) 0.005 mg/kg dexmedetomidine, 2 mg/kg alfaxalone; (B) 0.008 mg/kg dexmedetomidine, 1.5 mg/kg alfaxalone; (C) 0.012 mg/kg dexmedetomidine, 1 mg/kg alfaxalone; (D) 0.005 mg/kg dexmedetomidine, 1 mg/kg alfaxalone; and (E) 0.012 mg/kg dexmedetomidine, 2 mg/kg alfaxalone. Thereafter, a modified 'direct search' method, conducted in a stepwise manner, was used to optimise drug dosing. The quality of anaesthesia was evaluated on the basis of composite scores (one for anaesthesia and one for recovery), visual analogue scales and the propofol requirement to suppress spontaneous movements. The medians or means of these variables were used to rank the treatments; 'unsatisfactory' and 'promising' combinations were identified to calculate, through the equation first described by Berenbaum in 1990, new dexmedetomidine and alfaxalone doses to be tested in the next step. At each step, five combinations (one new plus the best previous four) were tested. RESULTS None of the tested combinations resulted in adverse effects. Four steps and 120 animals were necessary to identify the optimal drug combination (0.014 mg/kg dexmedetomidine, 2.5 mg/kg alfaxalone and 0.3 mg/kg butorphanol). CONCLUSIONS AND RELEVANCE The investigated drug mixture, at the doses found with the optimisation method, is suitable for cats undergoing minor clinical procedures.
Abstract:
INTRODUCTION In iliosacral screw fixation, the dimensions of solely intraosseous (secure) pathways perpendicular to the iliosacral articulation (optimal), with corresponding entry points (EP) and aiming points (AP) on lateral fluoroscopic projections, and the factors (demographic, anatomic) influencing these have not yet been described. METHODS In 100 CTs of normal pelvises, the height and width of the secure and optimal pathways were measured on axial and coronal views bilaterally (total measurements: n=200). Corresponding EP and AP were defined as the location of the screw head or tip at the crossing of the lateral innominate bone's cortex (EP) and the sacral midline (AP), respectively, with the screw centred within the pathway. EP and AP were transferred to the sagittal pelvic view using a coordinate system with its zero point at the centre of the posterior cortex of the S1 vertebral body (x-axis parallel to the upper S1 endplate). Distances are expressed as a percentage of the anteroposterior distance of the S1 upper endplate. The influence of demographic (age, gender, side) and anatomic parameters (PIA = pelvic incidence angle; TCA = transversal curvature angle; PID-Index = pelvic incidence distance index; USW-Index = unilateral sacral width index) on pathway dimensions and on the positions of EP and AP was assessed (multivariate analysis). RESULTS The width, the height or both dimensions of the pathways were at least 7 mm in 32%, 53% and 20% of cases, respectively. The EP was on average 14±24% behind the centre of the posterior S1 cortex and 41±14% below it. The AP was on average 53±7% in front of the centre of the posterior S1 cortex and 11±7% above it. PIA influenced the width of the pathways; TCA and the PID-Index influenced their height. PIA, PID-Index and USW-Index significantly influenced EP and AP. Age, gender and TCA significantly influenced EP. CONCLUSION Secure and optimal placement of screws of at least 7 mm in diameter will be unfeasible in the majority of patients. Thoughtful preoperative planning of screw placement on CT scans is advisable to identify secure pathways with an optimal direction. For this purpose, the presented methodology of determining and transferring EPs and APs of corresponding pathways to the sagittal pelvic view using a coordinate system may be useful.
Abstract:
This work deals with parallel optimization of expensive objective functions which are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear to be the most prominent approaches to batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. Since the computational burden of this selection rule remains an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which aims at facilitating its maximization using gradient-based ascent algorithms. Substantial computational savings are shown in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears to be a sound option for batch design in distributed Gaussian process optimization.
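For readers unfamiliar with the criterion, the sketch below (Python) contrasts the closed-form one-point EI with a plain Monte Carlo estimate of the multipoint (q-point) EI of a batch. The toy posterior mean and covariance are invented for illustration, and this generic estimator stands in for, rather than reproduces, the analytic Tallis-formula expansion and its gradient derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form one-point EI for minimization, given posterior mean/sd."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def multipoint_ei_mc(mean, cov, f_min, n_samples=20_000, seed=0):
    """Monte Carlo estimate of the multipoint (q-)EI of a batch:
    E[max(f_min - min_j Y_j, 0)] with Y ~ N(mean, cov), the joint GP
    posterior at the q batch points."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    improvement = np.maximum(f_min - samples.min(axis=1), 0.0)
    return improvement.mean()

# Toy joint posterior over a batch of q = 3 candidate points.
mean = np.array([0.2, 0.0, -0.1])
cov = np.array([[0.30, 0.10, 0.05],
                [0.10, 0.25, 0.08],
                [0.05, 0.08, 0.40]])
print("one-point EI per candidate:", expected_improvement(mean, np.sqrt(np.diag(cov)), f_min=0.0))
print("Monte Carlo q-EI of the batch:", multipoint_ei_mc(mean, cov, f_min=0.0))
```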
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation and calculating the number of transits required for an exomoon detection for various planet-moon configurations observable by CHEOPS. We explore the most efficient way for such an observation to minimize the cost in observing time. Our study is based on PTV (photocentric transit timing variation) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method in later applications. Using current state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We have found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, for systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance in the case of smaller moons, but the detection statistics deteriorate rapidly, while the number of necessary transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD similar to Kepler's, it will observe brighter stars and operate with a larger sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
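As a stylized stand-in for the kind of detection-statistics experiment described here (not the authors' CHEOPS-level simulation or their PTV pipeline), the Python sketch below estimates how often a sinusoidal timing signal of a given amplitude is flagged as significant against a Monte Carlo null, as a function of the number of observed transits; the moon period, amplitude and noise level are arbitrary placeholders.

```python
import numpy as np

def detection_probability(n_transits, ptv_amplitude, timing_noise,
                          n_trials=2000, n_null=500, alpha=0.05, seed=1):
    """Fraction of simulated observation sets in which a sinusoidal timing
    signal is flagged as significant against a Monte Carlo null.

    Test statistic: variance of the measured transit-time deviations.
    Null distribution: variance of pure timing noise of the same length."""
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_trials):
        phase = rng.uniform(0, 2 * np.pi)
        signal = ptv_amplitude * np.sin(
            2 * np.pi * np.arange(n_transits) / 7.3 + phase)  # arbitrary moon period
        obs = signal + rng.normal(scale=timing_noise, size=n_transits)
        null = rng.normal(scale=timing_noise, size=(n_null, n_transits)).var(axis=1)
        if obs.var() > np.quantile(null, 1 - alpha):
            detections += 1
    return detections / n_trials

# e.g. 6 observed transits, 40 s signal amplitude, 30 s timing scatter
print(detection_probability(6, ptv_amplitude=40.0, timing_noise=30.0))
```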
Abstract:
The capital structure and regulation of financial intermediaries is an important topic for practitioners, regulators and academic researchers. In general, theory predicts that firms choose their capital structures by balancing the benefits of debt (e.g., tax and agency benefits) against its costs (e.g., bankruptcy costs). However, when traditional corporate finance models have been applied to insured financial institutions, the results have generally predicted corner solutions (all equity or all debt) to the capital structure problem. This paper studies the impact and interaction of deposit insurance, capital requirements and tax benefits on a bank's choice of optimal capital structure. Using a contingent claims model to value the firm and its associated claims, we find that there exists an interior optimal capital ratio in the presence of deposit insurance, taxes and a minimum fixed capital standard. Banks voluntarily choose to maintain capital in excess of the minimum required in order to balance the risks of insolvency (especially the loss of future tax benefits) against the benefits of additional debt. Because we derive a closed-form solution, our model provides useful insights on several current policy debates including revisions to the regulatory framework for GSEs, tax policy in general and the tax exemption for credit unions.
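To make the interior-optimum intuition concrete, here is a deliberately toy numerical sketch in Python (not the paper's contingent-claims valuation; the functional forms and parameter values are invented): debt carries a tax benefit, thin capital raises insolvency risk and the loss of future benefits, and the value-maximizing capital ratio ends up strictly above the regulatory minimum.

```python
import numpy as np

K_MIN = 0.04                # fixed minimum capital standard (illustrative)

def bank_value(k, tax_benefit=0.04, insolvency_cost=0.60, vol=0.05):
    """Toy trade-off: debt (1 - k) earns a tax benefit, but thin capital
    raises insolvency risk, which forfeits future benefits. Illustrative
    only -- not the paper's contingent-claims model."""
    if k < K_MIN:
        return -np.inf                       # violates the capital requirement
    p_insolvency = np.exp(-k / vol)          # stylized insolvency probability
    return (tax_benefit * (1.0 - k) * (1.0 - p_insolvency)
            - insolvency_cost * tax_benefit * p_insolvency)

ks = np.linspace(0.0, 0.30, 301)
values = np.array([bank_value(k) for k in ks])
k_star = ks[np.argmax(values)]
print(f"value-maximizing capital ratio ~ {k_star:.1%}, above the {K_MIN:.0%} minimum")
```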
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretical framework.
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the social loss function cannot serve as a direct loss function for the central bank. Accordingly, we employ implementation theory to design a central bank loss function (mechanism design) with consistent targets, while the social loss function serves as a social welfare criterion. That is, with the correct mechanism design for the central bank loss function, optimal policy and consistent policy become identical. In other words, optimal policy proves implementable (consistent).
Abstract:
Kydland and Prescott (1977) develop a simple model of monetary policy making, where the central bank needs some commitment technique to achieve optimal monetary policy over time. Although not their main focus, they illustrate the difference between consistent and optimal policy in a sequential-decision one-period world. We employ the analytical method developed in Yuan and Miller (2005), whereby the government appoints a central bank with consistent targets or delegates consistent targets to the central bank. Thus, the central bank's welfare function differs from the social welfare function, which causes consistent policy to prove optimal.
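One textbook way to see the mechanism shared by these papers is the Barro-Gordon version of the Kydland-Prescott model (the notation below is mine, not the authors'): with a quadratic social loss and an expectations-augmented Phillips curve, discretionary (consistent) policy produces an inflation bias with no output gain, and delegating a loss function whose output target is the consistent one removes the bias.

```latex
% Barro-Gordon-style illustration; notation mine, not the authors'.
% Social loss, with an over-ambitious output target y^* > \bar{y}:
\[
  L_s = \tfrac{1}{2}(\pi - \pi^*)^2 + \tfrac{\lambda}{2}(y - y^*)^2,
  \qquad y = \bar{y} + \theta(\pi - \pi^e).
\]
% Discretion (consistent policy): minimize L_s taking \pi^e as given;
% imposing rational expectations \pi^e = \pi yields an inflation bias:
\[
  \pi^{c} = \pi^* + \lambda\theta\,(y^* - \bar{y}) > \pi^*, \qquad y = \bar{y}.
\]
% Commitment (optimal policy): \pi^{o} = \pi^*, with the same output \bar{y}.
% Delegating a central-bank loss function whose output target is the
% consistent one (\bar{y} rather than y^*) removes the bias, so consistent
% policy reproduces the optimal outcome.
```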
Abstract:
This paper reinforces the argument of Harding and Sirmans (2002) that the observed preference of lenders for extended maturity rather than renegotiation of the principal in the case of loan default is due to the superior incentive properties of the former. Specifically, borrowers have a greater incentive to avoid default under extended maturity because it reduces the likelihood that they will be able to escape paying off the full loan balance. Thus, although extended maturity leaves open the possibility of foreclosure, it will be preferred to renegotiation as long as the deadweight loss from foreclosure is not too large.