978 results for Robust Stochastic Optimization
Abstract:
Tumor Endothelial Marker-1 (TEM1/CD248) is a tumor vascular marker with high therapeutic and diagnostic potential. Immuno-imaging with TEM1-specific antibodies can help detect cancerous lesions, monitor tumor responses, and select the patients most likely to benefit from TEM1-targeted therapies. In particular, near-infrared (NIR) optical imaging with biomarker-specific antibodies can provide real-time, tomographic information without exposing the subjects to radioactivity. To maximize the theranostic potential of TEM1, we developed a panel of all-human, multivalent Fc-fusion proteins based on a previously identified single-chain antibody (scFv78) that recognizes both human and mouse TEM1. By characterizing avidity, stability, and pharmacokinetics, we identified one fusion protein, 78Fc, with desirable characteristics for immuno-imaging applications. The biodistribution of radiolabeled 78Fc showed that this antibody had minimal binding to normal organs, which have low expression of TEM1. Next, we developed a 78Fc-based tracer and tested its performance in different TEM1-expressing mouse models. The NIR imaging and tomography results suggest that the 78Fc-NIR tracer performs well in distinguishing mouse- or human-TEM1-expressing tumor grafts from normal organs and control grafts in vivo. From these results we conclude that further development and optimization of 78Fc as a TEM1-targeted imaging agent for use in clinical settings is warranted.
Abstract:
The development of CT applications might become a public health problem if no effort is made on the justification and optimisation of the examinations. This paper presents some hints to ensure that the risk-benefit compromise remains in favour of the patient, especially when dealing with examinations of young patients. In this context, particular attention has to be paid to the justification of the examination. When performing the acquisition, one needs to optimise the extent of the volume investigated together with the number of acquisition sequences used. Finally, the use of automatic exposure systems, now available on all units, and the use of Diagnostic Reference Levels (DRL) should help radiologists to control the exposure of their patients.
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural or mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by the recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place on Boolean networks and proposed a more biologically plausible cascading scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to biological reality.
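As a point of reference for the class of models discussed above, the sketch below implements a classical Kauffman random Boolean network with synchronous updates; the node count, in-degree, and synchronous update scheme are illustrative assumptions, not the thesis's actual parameters or its proposed cascading scheme.

import random

def make_rbn(n=20, k=2, seed=0):
    # Each node reads k randomly chosen inputs and maps their joint state
    # through a random Boolean truth table (the classical Kauffman setup).
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    # Synchronous update: every node applies its Boolean function at once.
    new_state = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for src in ins:
            idx = (idx << 1) | state[src]
        new_state.append(table[idx])
    return new_state

n = 20
inputs, tables = make_rbn(n)
state = [random.randint(0, 1) for _ in range(n)]
for t in range(10):
    state = step(state, inputs, tables)
    print(t, "".join(map(str, state)))

Replacing the random wiring in make_rbn with a small-world or biologically derived topology, and the synchronous step with a staggered one, corresponds to the modifications studied in the thesis.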
Abstract:
Severe environmental conditions, coupled with the routine use of deicing chemicals and increasing traffic volume, tend to place extreme demands on portland cement concrete (PCC) pavements. In most instances, engineers have been able to specify and build PCC pavements that met these challenges. However, there have also been reports of premature deterioration that could not be specifically attributed to a single cause. Modern concrete mixtures have evolved to become very complex chemical systems. The complexity can be attributed to both the number of ingredients used in any given mixture and the various types and sources of the ingredients supplied to any given project. Local environmental conditions can also influence the outcome of paving projects. This research project investigated important variables that impact the homogeneity and rheology of concrete mixtures. The project consisted of a field study and a laboratory study. The field study collected information from six different projects in Iowa. The information that was collected during the field study documented cementitious material properties, plastic concrete properties, and hardened concrete properties. The laboratory study was used to develop baseline mixture variability information for the field study. It also investigated plastic concrete properties using various new devices to evaluate rheology and mixing efficiency. In addition, the lab study evaluated a strategy for the optimization of mortar and concrete mixtures containing supplementary cementitious materials. The results of the field studies indicated that the quality management concrete (QMC) mixtures being placed in the state generally exhibited good uniformity and good to excellent workability. Hardened concrete properties (compressive strength and hardened air content) were also satisfactory. The uniformity of the raw cementitious materials that were used on the projects could not be monitored as closely as was desired by the investigators; however, the information that was gathered indicated that the bulk chemical composition of most material streams was reasonably uniform. Specific mineral phases in the cementitious materials were less uniform than the bulk chemical composition. The results of the laboratory study indicated that ternary mixtures show significant promise for improving the performance of concrete mixtures. The lab study also verified the results from prior projects that have indicated that bassanite is typically the major sulfate phase present in Iowa cements. This causes the cements to exhibit premature stiffening problems (false set) in laboratory testing. Fly ash helps to reduce the impact of premature stiffening because it behaves like a low-range water reducer in most instances. The premature stiffening problem can also be alleviated by increasing the water–cement ratio of the mixture and providing a remix cycle for the mixture.
Abstract:
In this paper we propose the infimum of the Arrow-Pratt index of absolute risk aversion as a measure of the global risk aversion of a utility function. We then show that, for any given arbitrary pair of distributions, there exists a threshold level of global risk aversion such that all increasing concave utility functions with at least as much global risk aversion would rank the two distributions in the same way. Furthermore, this threshold level is sharp in the sense that, for any lower level of global risk aversion, we can find two utility functions in this class yielding opposite preference relations for the two distributions.
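For reference, the Arrow-Pratt index of absolute risk aversion of a utility function u, and the global measure proposed above, can be written as follows (standard definitions; the notation is ours, not necessarily the paper's):

\[
A_u(x) = -\frac{u''(x)}{u'(x)}, \qquad \underline{A}(u) = \inf_{x} A_u(x).
\]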
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
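To illustrate the idea with a textbook case rather than the paper's own system: for a two-class work-conserving M/G/1 queue, the classical conservation law confines the mean waiting times (W_1, W_2) to a line segment, which is the achievable region, and the control problem becomes a linear programme over it (our sketch, standard notation):

\[
\rho_1 W_1 + \rho_2 W_2 = \frac{\rho\, W_0}{1-\rho}, \qquad
W_0 = \tfrac{1}{2}\bigl(\lambda_1 \mathbb{E}[S_1^2] + \lambda_2 \mathbb{E}[S_2^2]\bigr), \qquad
\rho = \rho_1 + \rho_2,
\]
\[
\min_{(W_1, W_2)\ \text{achievable}} \; c_1 \lambda_1 W_1 + c_2 \lambda_2 W_2 .
\]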
Abstract:
Many definitions and debates exist about the core characteristics of the social and solidarity economy (SSE) and its actors. Among others, legal forms, profit, geographical scope, and size as criteria for identifying SSE actors often reveal dissent among SSE scholars. Instead of using a dichotomous, either-in-or-out definition of SSE actors, this paper presents an assessment tool that takes into account multiple dimensions to offer a more comprehensive and nuanced view of the field. We first define the core dimensions of the assessment tool by synthesizing the multiple indicators found in the literature. We then empirically test these dimensions and their interrelatedness and seek to identify potential clusters of actors. Finally, we discuss the practical implications of our model.
Abstract:
Iterated Local Search has many of the desirable features of a metaheuristic: it is simple, easy to implement, robust, and highly effective. The essential idea of Iterated Local Search lies in focusing the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for a given optimization engine. The success of Iterated Local Search lies in the biased sampling of this set of local optima. How effective this approach turns out to be depends mainly on the choice of the local search, the perturbations, and the acceptance criterion. So far, in spite of its conceptual simplicity, it has led to a number of state-of-the-art results without the use of too much problem-specific knowledge. But with further work, so that the different modules are well adapted to the problem at hand, Iterated Local Search can often become a competitive or even state-of-the-art algorithm. The purpose of this review is both to give a detailed description of this metaheuristic and to show where it stands in terms of performance.
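A minimal sketch of the Iterated Local Search skeleton described above; the toy local search, perturbation, and acceptance criterion are generic placeholders of our own, not the review's specific choices.

import random

def iterated_local_search(initial, local_search, perturb, cost, accept, iters=100):
    # Generic ILS loop: repeatedly perturb the incumbent local optimum,
    # re-optimise with the embedded local search, and decide whether to accept.
    current = local_search(initial)
    best = current
    for _ in range(iters):
        candidate = local_search(perturb(current))
        if accept(cost(candidate), cost(current)):
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best

# Toy 1-D usage with illustrative components.
cost = lambda x: (x - 3.0) ** 2                   # objective with minimum at x = 3
local_search = lambda x: x - 0.2 * (x - 3.0)      # one crude descent step as a stand-in
perturb = lambda x: x + random.uniform(-1.0, 1.0) # random kick out of the local optimum
accept = lambda new, old: new <= old              # accept only non-worsening moves

print(iterated_local_search(10.0, local_search, perturb, cost, accept))

In a real application, local_search would run the embedded optimizer to a local optimum, and the perturbation strength and acceptance criterion would be tuned to the problem at hand, as the review discusses.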
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
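For context (our own reminder, not the paper's extended index), the classical Smith rule for linear holding costs gives priority to the class maximizing the ratio of holding cost rate to expected processing time, the stochastic "c-mu" index:

\[
\nu_i = \frac{c_i}{\mathbb{E}[S_i]} = c_i \mu_i ,
\]

where $c_i$ is the linear holding cost rate and $\mu_i$ the service rate of class $i$. The extended-class indices described above attach a value, say $\nu_{i,k}$, to class $i$ when $k$ of its jobs are present, and the optimal policy serves a job whose current extended class has the largest index.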
Abstract:
Using a suitable Hull and White type formula we develop a methodology to obtain a second order approximation to the implied volatility for very short maturities. Using this approximation we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and it requires a minimal computational cost.
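For reference, the Heston model whose parameters are being calibrated is the stochastic-volatility diffusion (standard notation, ours):

\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\, S_t\, dW_t, \qquad
dv_t = \kappa(\theta - v_t)\,dt + \sigma \sqrt{v_t}\, dB_t, \qquad
d\langle W, B\rangle_t = \rho\, dt,
\]

so that calibration recovers the parameter set $(\kappa, \theta, \sigma, \rho, v_0)$.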
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
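For reference, under the multinomial-logit (MNL) choice model referred to above, the probability that a customer of segment $\ell$ purchases product $j$ from an offered set $S$ is (standard notation, not necessarily the paper's):

\[
P_{\ell j}(S) = \frac{v_{\ell j}}{v_{\ell 0} + \sum_{k \in S \cap C_\ell} v_{\ell k}},
\]

where the $v_{\ell j}$ are preference weights, $v_{\ell 0}$ captures the no-purchase option, and $C_\ell$ is the segment's consideration set.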
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi--square goodness--of--fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution--free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second--order moments, will be used to compute robust standard errors and a robust chi--square goodness--of--fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi--stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non--robust) inferential statistics. Finally, a multivariate regression model with errors--in--variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
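A standard form of the robust ("sandwich") covariance alluded to above, for an estimator $\hat\theta$ that minimizes a discrepancy with weight matrix $V$ and moment-structure Jacobian $\Delta = \partial\sigma(\theta)/\partial\theta'$, is (our reminder of the usual formula, not necessarily the paper's exact expression):

\[
\operatorname{acov}(\hat\theta) = \frac{1}{n}\,(\Delta' V \Delta)^{-1} \Delta' V\, \Gamma\, V \Delta\, (\Delta' V \Delta)^{-1},
\]

which reduces to $(\Delta' V \Delta)^{-1}/n$ when $V = \Gamma^{-1}$.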
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, this algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or the exogenous shocks in the economy increase. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of volatility of stock returns and the term structure of interest rates. We argue that this model can generate hump--shaped term structures.
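In outline (our paraphrase of the standard PEA iteration, with illustrative notation), the algorithm replaces the conditional expectation in the Euler equation by a parametric function and iterates to a fixed point:

\[
E_t\bigl[\phi(x_{t+1})\bigr] \approx \psi(\beta; x_t).
\]

Given $\beta$, one simulates a long path $\{x_t(\beta)\}$, regresses the realized $\phi(x_{t+1}(\beta))$ on $\psi(\cdot\,; x_t(\beta))$ to obtain updated parameters $G(\beta)$, and iterates $\beta \leftarrow (1-\lambda)\beta + \lambda\, G(\beta)$ until $\beta = G(\beta)$.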
Abstract:
Among the underlying assumptions of the Black-Scholes option pricing model, those of a fixed volatility of the underlying asset and of a constant short-term riskless interest rate cause the largest empirical biases. Only recently has attention been paid to the simultaneous effects of the stochastic nature of both variables on the pricing of options. This paper has tried to estimate the effects of a stochastic volatility and a stochastic interest rate in the Spanish option market. A discrete approach was used. Symmetric and asymmetric GARCH models were tried. The presence of in-the-mean and seasonality effects was allowed. The stochastic processes of the MIBOR90, a Spanish short-term interest rate, from March 19, 1990 to May 31, 1994, and of the volatility of the returns of the most important Spanish stock index (IBEX-35), from October 1, 1987 to January 20, 1994, were estimated. These estimators were used in pricing call options on the stock index from November 30, 1993 to May 30, 1994. Hull-White and Amin-Ng pricing formulas were used. These prices were compared with actual prices and with those derived from the Black-Scholes formula, trying to detect the biases reported previously in the literature. Whereas the conditional variance of the MIBOR90 interest rate seemed to be free of ARCH effects, an asymmetric GARCH with in-the-mean and seasonality effects and some evidence of persistence in variance (IEGARCH(1,2)-M-S) was found to be the model that best represents the behavior of the stochastic volatility of the IBEX-35 stock returns. All the biases reported previously in the literature were found. All the formulas overpriced the options in the near-the-money case and underpriced the options otherwise. Furthermore, in most option trading, Black-Scholes overpriced the options and, because of the time-to-maturity effect, implied volatility computed from the Black-Scholes formula underestimated the actual volatility.
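For reference, a generic asymmetric GARCH-in-mean specification of the kind fitted above can be written as (illustrative form with standard notation; the paper's IEGARCH(1,2)-M-S variant adds a second variance lag and seasonal terms):

\[
r_t = \mu + \lambda \sigma_t^2 + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \quad z_t \sim \text{i.i.d.}(0,1),
\]
\[
\ln \sigma_t^2 = \omega + \beta \ln \sigma_{t-1}^2 + \alpha\bigl(|z_{t-1}| - E|z_{t-1}|\bigr) + \gamma z_{t-1},
\]

where $\gamma \neq 0$ captures the asymmetric (leverage) response of volatility to negative returns.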