952 results for Mixed integer non-linear programming (MINLP)
Abstract:
Magdeburg, Univ., Fak. für Elektrotechnik und Informationstechnik, Diss., 2012
Abstract:
Magdeburg, Univ., Diss., 2007
Abstract:
Magdeburg, Univ., Fak. für Elektrotechnik und Informationstechnik, Diss., 2012
Abstract:
Studies evaluating the mechanical behavior of the trabecular microstructure play an important role in understanding pathologies such as osteoporosis and in deepening our understanding of bone fracture and bone adaptation. Understanding this behavior is important for predicting fractures and providing early treatment. The objective of this study is to present a numerical model for studying the initiation and accumulation of trabecular bone microdamage in both the pre- and post-yield regions. A sub-region of human vertebral trabecular bone was analyzed using a uniformly loaded, anatomically accurate, microstructural three-dimensional finite element model. The evolution of trabecular bone microdamage was governed by a non-linear, modulus-reduction, perfect-damage approach derived from a generalized plasticity stress-strain law. The model introduced in this paper establishes a history of microdamage evolution in both the pre- and post-yield regions.
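As a rough illustration of the modulus-reduction, perfect-damage idea mentioned above, the following minimal sketch implements a 1D constitutive update in which stress is capped at the yield value once the yield strain is exceeded and damage is measured as the loss of secant modulus. The modulus and yield strain are invented values, not the paper's calibrated trabecular bone properties, and the paper's generalized-plasticity formulation is far richer than this.

```python
# Minimal 1D sketch of a modulus-reduction "perfect damage" law of the kind
# described above (illustrative only; the modulus and yield strain are assumed
# values, not the paper's trabecular bone properties).

E0 = 10.0e3        # undamaged tissue modulus (MPa), assumed
eps_yield = 0.008  # yield strain, assumed

def stress_and_damage(eps_max):
    """Return (stress at eps_max, damage) for monotonic loading.

    Pre-yield: linear elastic.  Post-yield: stress is capped at the yield
    stress ("perfect damage"), and damage is the fractional reduction of
    the secant modulus E_sec = sigma / eps relative to E0.
    """
    sigma_yield = E0 * eps_yield
    if eps_max <= eps_yield:
        return E0 * eps_max, 0.0
    sigma = sigma_yield                     # perfectly-damaged stress plateau
    E_sec = sigma / eps_max                 # reduced secant modulus
    return sigma, 1.0 - E_sec / E0          # damage in [0, 1)

for e in (0.004, 0.008, 0.012, 0.020):
    s, d = stress_and_damage(e)
    print(f"strain={e:.3f}  stress={s:6.1f} MPa  damage={d:.2f}")
```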
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
We prove existence theorems for the Dirichlet problem for hypersurfaces of constant special Lagrangian curvature in Hadamard manifolds. The first results are obtained using the continuity method and approximation and then refined using two iterations of the Perron method. The a-priori estimates used in the continuity method are valid in any ambient manifold.
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Abstract:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
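Schematically, the two-sided estimate described in the last sentence has the following shape (a sketch only, with placeholder constants, not the paper's precise theorem), where p_{t,x}(y) denotes the density of the mild solution at (t, x) and sigma_t^2 is the variance of the mild solution of the corresponding linear equation with additive noise:

```latex
% Schematic form of the two-sided Gaussian estimate described above
% (constants c_1,...,c_4 and the centering term F_0(t,x) are placeholders).
\[
  c_1\,\sigma_t^{-1}\exp\!\left(-\frac{|y - F_0(t,x)|^{2}}{c_2\,\sigma_t^{2}}\right)
  \;\le\; p_{t,x}(y) \;\le\;
  c_3\,\sigma_t^{-1}\exp\!\left(-\frac{|y - F_0(t,x)|^{2}}{c_4\,\sigma_t^{2}}\right).
\]
```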
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data is then applied in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
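As a rough sketch of the kernel-density link at the core of this procedure, the snippet below builds a joint non-parametric density for collocated log-electrical and log-hydraulic conductivities and draws hydraulic conductivity values conditional on an observed electrical conductivity. The data are synthetic stand-ins and scipy.stats.gaussian_kde is just one convenient kernel-density estimator; the paper's full sequential simulation, which also conditions on previously simulated values and on the ERT data, is not reproduced here.

```python
# Sketch of a non-parametric joint density between collocated log-electrical
# conductivity (sigma) and log-hydraulic conductivity (K), with conditional
# draws of K given sigma.  Synthetic placeholder data, not the field logs.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# synthetic collocated borehole logs (log10 units)
log_sigma = rng.normal(-2.0, 0.3, size=500)
log_K = 1.5 * log_sigma + rng.normal(0.0, 0.2, size=500)

joint_kde = gaussian_kde(np.vstack([log_sigma, log_K]))  # joint density f(sigma, K)

def sample_K_given_sigma(sigma_obs, n=1, grid=np.linspace(-7.0, -2.0, 400)):
    """Draw log K from the conditional density f(K | sigma = sigma_obs)."""
    dens = joint_kde(np.vstack([np.full_like(grid, sigma_obs), grid]))
    p = dens / dens.sum()
    return rng.choice(grid, size=n, p=p)

print(sample_K_given_sigma(-2.1, n=3))
```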
Abstract:
In this work we develop a viscoelastic bar element that can handle multiple rheological laws with non-linear elastic and non-linear viscous material models. The bar element is built by joining an elastic and a viscous bar in series, constraining the middle node position to the bar axis with a reduction method, and statically condensing the internal degrees of freedom. We apply the methodology to the modelling of reversible softening with stiffness recovery in both 2D and 3D, a phenomenology also observed experimentally during stretching cycles on epithelial lung cell monolayers.
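The simplest linear analogue of an elastic and a viscous bar joined in series is a Maxwell element; the sketch below integrates such a model under a prescribed strain cycle. It omits the paper's non-linear material laws, the reduction method for the middle node, and the static condensation step, and the modulus, viscosity, and loading are assumed values.

```python
# 1D Maxwell-type sketch (elastic and viscous elements in series), the simplest
# linear analogue of the bar element described above.  E, eta, and the strain
# history are assumed values for illustration only.

import numpy as np

E, eta = 1.0, 10.0          # elastic modulus and viscosity (assumed units)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
eps = 0.05 * np.sin(2.0 * np.pi * 0.5 * t)   # prescribed total strain cycle

sigma = np.zeros_like(t)
for i in range(1, len(t)):
    deps = eps[i] - eps[i - 1]
    # backward-Euler update of the series (Maxwell) model:
    # d(eps)/dt = d(sigma)/dt / E + sigma / eta
    sigma[i] = (sigma[i - 1] + E * deps) / (1.0 + E * dt / eta)

print(f"peak stress over the cycle: {sigma.max():.4f}")
```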
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, this algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or the exogenous shocks in the economy increases. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
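A minimal sketch of the PEA iteration follows, applied to a textbook stochastic growth model rather than the asset-pricing model of the paper: the conditional expectation in the Euler equation is parameterized as an exponential of a linear function of the log state, a long path is simulated, the realized values inside the expectation are regressed on the states, and the coefficients are updated with damping until a fixed point is reached. All parameter values and the functional form are assumptions made for the illustration.

```python
# Illustrative PEA fixed-point iteration for a standard stochastic growth model
# (assumed parameters and parameterization; not the paper's asset-pricing model).

import numpy as np

alpha, beta, delta, gamma, rho, sig = 0.33, 0.95, 0.10, 2.0, 0.90, 0.02
T, n_iter, damp = 5000, 200, 0.3

rng = np.random.default_rng(1)
ln_theta = np.zeros(T)
for t in range(1, T):                       # AR(1) log-productivity path
    ln_theta[t] = rho * ln_theta[t - 1] + sig * rng.standard_normal()
theta = np.exp(ln_theta)

# deterministic steady state, used for the initial coefficient guess
k_ss = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
c_ss = k_ss ** alpha - delta * k_ss
b = np.array([np.log(c_ss ** (-gamma) / beta), 0.0, 0.0])  # exp(b0 + b1 ln k + b2 ln theta)

for it in range(n_iter):
    k = np.empty(T + 1)
    c = np.empty(T)
    k[0] = k_ss
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * ln_theta[t])
        c[t] = (beta * psi) ** (-1.0 / gamma)          # Euler equation given the expectation
        resources = theta[t] * k[t] ** alpha + (1.0 - delta) * k[t]
        c[t] = min(c[t], 0.99 * resources)             # keep capital strictly positive
        k[t + 1] = resources - c[t]
    # realized value inside the conditional expectation of the Euler equation
    y = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:T] ** (alpha - 1.0) + 1.0 - delta)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), ln_theta[:T - 1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)  # non-linear regression in logs
    b = (1.0 - damp) * b + damp * b_new                     # damped fixed-point update

print("fitted expectation coefficients:", np.round(b, 3))
```

Note how the Euler equation is used directly, with no planner's problem and no grid on the state variables, which is the feature the abstract emphasizes.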
Abstract:
We present a new unifying framework for investigating throughput-WIP (work-in-process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. In this framework we analyze the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
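To make the threshold idea concrete, the small computation below evaluates the steady-state (throughput, WIP) pair of an M/M/1 queue whose arrivals are admitted only while the queue length is below a threshold K, for a range of K. This is a toy stand-in rather than the Chen and Yao intensity-control model, but it exhibits the diminishing-returns pattern referred to in (d).

```python
# Steady-state (throughput, WIP) pairs of an M/M/1 queue with admission
# threshold K (i.e. an M/M/1/K queue).  lam and mu are assumed rates; this is
# an illustrative stand-in, not the model analyzed in the paper.

lam, mu = 0.8, 1.0
rho = lam / mu

def throughput_wip(K):
    """Return (throughput, mean WIP) for an M/M/1/K queue."""
    probs = [rho ** n for n in range(K + 1)]
    Z = sum(probs)
    p = [x / Z for x in probs]
    throughput = lam * (1.0 - p[K])              # arrivals accepted while not full
    wip = sum(n * p[n] for n in range(K + 1))    # mean number in system
    return throughput, wip

for K in range(1, 9):
    th, w = throughput_wip(K)
    print(f"K={K}: throughput={th:.3f}, WIP={w:.3f}")
```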
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power, and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and these states were then embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
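As a concrete illustration, the snippet below values a short health profile under a linear and a power QALY model, using the reported power coefficient of 0.65 and applying the power weighting period by period; the per-year utilities are invented, and this period-wise convention is one common way to extend the power model to profiles, not necessarily the paper's exact specification.

```python
# Linear vs. power QALY valuation of a toy health profile, assuming the value
# of t years in state q is u(q) * w(t), with w(t) = t (linear) or w(t) = t**r
# (power, r = 0.65 as reported above).  Utilities and profile are made up.

R = 0.65

def profile_value(utilities_per_year, r):
    """Value of a profile given as a list of per-year utilities u(q_t)."""
    total, w_prev = 0.0, 0.0
    for t, u in enumerate(utilities_per_year, start=1):
        w = t ** r
        total += u * (w - w_prev)   # weight attached to year t
        w_prev = w
    return total

profile = [0.9, 0.9, 0.6, 0.6, 0.6]       # five years, two health states (assumed)
print("linear QALYs:", profile_value(profile, 1.0))
print("power  QALYs:", round(profile_value(profile, R), 3))
```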
Abstract:
This paper introduces the approach of using Total Unduplicated Reach and Frequency (TURF) analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for a solution compared with the algorithms used to date. The results obtained with our exact algorithm are presented, and the method proves extremely efficient both in obtaining optimal solutions and in computing time for very large instances of the problem at hand. Furthermore, the proposed technique enables the model to be improved in order to overcome the main drawbacks of TURF analysis in practice.
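The underlying optimization can be stated very compactly: choose k products so that the number of respondents who would buy at least one of them (the reach) is maximized. The sketch below solves a toy instance by enumerating all k-subsets instead of calling an integer-programming solver; the respondent data are invented.

```python
# Toy TURF product-line problem: pick k products maximizing reach (number of
# respondents who would buy at least one selected product).  Brute-force
# enumeration stands in for the binary linear program described above.

from itertools import combinations

# buy[i] = set of products respondent i would buy (invented data)
buy = [
    {0, 2}, {1}, {2, 3}, {0, 1, 3}, {3},
    {1, 2}, {0}, {2}, {1, 3}, {0, 2, 3},
]
n_products, k = 4, 2

def reach(selected):
    return sum(1 for prefs in buy if prefs & set(selected))

best = max(combinations(range(n_products), k), key=reach)
print("best product line:", best, "reach:", reach(best), "of", len(buy))
```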
Abstract:
We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential-size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
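The scheduling step of the priority-index heuristic is simple to state: at each decision epoch, activate the m bandits whose current states carry the largest indices. The sketch below shows only that selection step, with a deterministic placeholder in place of the indices, which in the paper are computed from the optimal dual variables of the first-order relaxation.

```python
# Priority-index selection step: activate the m bandits with the largest
# current indices.  The index function here is a deterministic placeholder,
# not the dual-variable-based indices of the paper.

import heapq

def index_of(bandit_id, state):
    """Placeholder priority index; an arbitrary deterministic stand-in."""
    return ((7 * bandit_id + 13 * state) % 10) / 10.0

def choose_active(states, m):
    """states: dict bandit_id -> current state; return the m bandits to activate."""
    scored = [(index_of(b, s), b) for b, s in states.items()]
    return [b for _, b in heapq.nlargest(m, scored)]

print(choose_active({0: 2, 1: 0, 2: 1, 3: 2}, m=2))
```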