994 results for quasi-linear utility


Relevância:

30.00%

Resumo:

Computer simulation was used to suggest potential selection strategies for beef cattle breeders with different mixes of clients between two potential markets. The traditional market paid on the basis of carcass weight (CWT), while a new market considered marbling grade in addition to CWT as a basis for payment. Both markets instituted discounts for carcasses with CWT in excess of 340 kg and for light carcasses below 300 kg. Herds were simulated for each price category on the carcass weight grid for the new market. This enabled the establishment of phenotypic relationships among the traits examined [CWT, percent intramuscular fat (IMF), carcass value in the traditional market, carcass value in the new market, and the expected proportion of progeny in elite price cells in the new market pricing grid]. The appropriateness of breeding goals was assessed on the basis of client satisfaction, determined by the equitable distribution of available stock between markets combined with an assessment of the utility of each animal within the market to which it was assigned. The best goal for breeders with predominantly traditional clients was a CWT in excess of 330 kg, while that for breeders with predominantly new-market clients was a CWT between 310 and 329 kg with a marbling grade of AAA in the Ontario carcass pricing system. For breeders who wished to satisfy both new and traditional clients, the optimal CWT was 310-329 kg and the optimal marbling grade was AA-AAA. This combination resulted in satisfaction levels of greater than 75% among clients, regardless of the distribution of the clients between the traditional and new marketplaces.
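To make the grid concrete, here is a minimal Python sketch of carcass valuation under such a pricing grid. The base price, discount sizes and marbling premiums are illustrative assumptions; the abstract fixes only the 300 kg and 340 kg discount thresholds and the use of marbling grade in the new market.

```python
# Hypothetical carcass pricing grid. All prices, discounts and marbling
# premiums below are illustrative assumptions; only the 300/340 kg
# thresholds come from the abstract.

def carcass_value(cwt_kg, marbling_grade=None):
    """Value ($) of a carcass under a simplified weight/marbling grid."""
    price = 3.00                      # assumed base price, $/kg
    if cwt_kg > 340:                  # heavy-carcass discount (both markets)
        price -= 0.25
    elif cwt_kg < 300:                # light-carcass discount (both markets)
        price -= 0.25
    if marbling_grade is not None:    # new market only: marbling premium
        premiums = {"A": 0.00, "AA": 0.10, "AAA": 0.25}
        price += premiums.get(marbling_grade, 0.0)
    return price * cwt_kg

print(carcass_value(320))           # traditional market: weight only
print(carcass_value(320, "AAA"))    # new market: weight plus marbling
```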

Relevância:

30.00%

Resumo:

Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is inflexible because customers lose hard-to-measure amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for each class of user and across different ranges of resource allocation: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility clients assign to a given level of degradation when VMs are allocated in overcommitted environments (Public, Private, Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, yielding more revenue per resource allocated and scaling well with the size of the datacenter when compared with a utility-oblivious redistribution of resources. Clients also benefit: their workloads' execution time improves through an SLA-based redistribution of their VMs' computational power.
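As an illustration of the partial-utility idea, the following Python sketch maps an allocation fraction to a client-declared utility factor per class and scales provider revenue by it. The class names and breakpoints are assumptions for the sketch; the paper defines partial utility per client class and degradation level, but its concrete values are not given here.

```python
# Illustrative partial-utility model: each client class maps the fraction of
# requested capacity actually allocated to the utility it still derives.
# Classes and breakpoints are assumptions for this sketch.

PARTIAL_UTILITY = {
    # (allocation-fraction lower bound, utility factor), highest bound first
    "gold":   [(1.0, 1.0), (0.8, 0.6), (0.5, 0.2), (0.0, 0.0)],
    "silver": [(1.0, 1.0), (0.8, 0.8), (0.5, 0.5), (0.0, 0.1)],
}

def partial_utility(klass, alloc_fraction):
    """Utility a client of `klass` assigns to a degraded allocation."""
    for lower_bound, utility in PARTIAL_UTILITY[klass]:
        if alloc_fraction >= lower_bound:
            return utility
    return 0.0

def revenue(klass, full_price, alloc_fraction):
    """Provider revenue scaled by the client-declared partial utility."""
    return full_price * partial_utility(klass, alloc_fraction)

print(revenue("gold", 10.0, 0.7))    # gold client at 70% allocation
print(revenue("silver", 10.0, 0.7))  # silver tolerates degradation better
```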

Relevância:

30.00%

Resumo:

European Transactions on Telecommunications, vol. 18

Relevância:

30.00%

Resumo:

The integration of wind power in electricity generation brings new challenges to unit commitment due to the random nature of wind speed. For this particular optimisation problem, wind uncertainty has been handled in practice by means of conservative stochastic scenario-based optimisation models, or through additional operating reserve settings. However, generation companies may have different attitudes towards operating costs, load curtailment, or waste of wind energy when considering the risk caused by wind power variability. Therefore, alternative and possibly more adequate approaches should be explored. This work is divided into two main parts. First, we survey the main formulations presented in the literature for the integration of wind power in the unit commitment problem (UCP) and present an alternative model for wind-thermal unit commitment. We make use of utility theory concepts to develop a multi-criteria stochastic model. The objectives considered are the minimisation of costs, load curtailment and waste of wind energy. These are represented by individual utility functions and aggregated into a single additive utility function. This last function is suitably linearised, leading to a mixed-integer linear programming (MILP) model that can be tackled by general-purpose solvers in order to find the most preferred solution. In the second part we discuss the integration of pumped-storage hydro (PSH) units in the UCP with large wind penetration. These units can provide extra flexibility by using wind energy to pump and store water in the form of potential energy, which can later be used for generation during peak-load periods. PSH units are added to the first model, yielding a MILP model with wind-hydro-thermal coordination. Results showed that the proposed methodology is able to reflect the risk profiles of decision makers for both models. Including PSH units significantly improves the results.
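A minimal Python sketch of the additive multi-criteria utility described above, assuming exponential (risk-averse) utility shapes and weights chosen purely for illustration; the paper specifies only that individual utilities for cost, curtailment and wind waste are aggregated additively and then linearised for the MILP.

```python
import math

# Sketch of a multi-criteria additive utility for comparing UC solutions.
# Exponential shapes, weights and worst/best bounds are assumptions.

def exp_utility(value, worst, best, risk_aversion=2.0):
    """Map a criterion value onto [0, 1]; concave => risk-averse preference."""
    z = (worst - value) / (worst - best)          # 1 at best, 0 at worst
    return (1 - math.exp(-risk_aversion * z)) / (1 - math.exp(-risk_aversion))

def aggregate_utility(cost, curtailment, wind_waste, weights=(0.5, 0.3, 0.2)):
    u_cost = exp_utility(cost, worst=1.2e6, best=0.8e6)       # $
    u_curt = exp_utility(curtailment, worst=50.0, best=0.0)   # MWh curtailed
    u_wind = exp_utility(wind_waste, worst=200.0, best=0.0)   # MWh spilled
    return sum(w * u for w, u in zip(weights, (u_cost, u_curt, u_wind)))

# In the MILP, each individual utility would be replaced by a piecewise-linear
# approximation over breakpoints so a general-purpose solver can maximise it.
print(aggregate_utility(cost=0.9e6, curtailment=5.0, wind_waste=20.0))
```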

Relevância:

30.00%

Resumo:

Let V be an infinite-dimensional vector space and for every infinite cardinal n such that n≤dimV, let AE(V,n) denote the semigroup of all linear transformations of V whose defect is less than n. In 2009, Mendes-Gonçalves and Sullivan studied the ideal structure of AE(V,n). Here, we consider a similarly-defined semigroup AE(X,q) of transformations defined on an infinite set X. Quite surprisingly, the results obtained for sets differ substantially from the results obtained in the linear setting.
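For readers outside transformation semigroup theory, the notion of defect used above has a standard definition (the notation AE(V,n) is the paper's; the definition below is the usual one, stated here for convenience):

```latex
% Defect of a linear transformation \alpha of V: the codimension of its range.
% (For a transformation \alpha of a set X, the defect is |X \setminus X\alpha|.)
\[
  \operatorname{def}(\alpha) \;=\; \dim\bigl(V / \operatorname{im}\alpha\bigr),
  \qquad
  AE(V,n) \;=\; \{\alpha \in \operatorname{End}(V) : \operatorname{def}(\alpha) < n\}.
\]
```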

Relevância:

30.00%

Resumo:

"Vegeu el resum a l'inici del document del fitxer adjunt."

Relevância:

30.00%

Resumo:

The least squares optimization problem is presented as an important class of unconstrained minimization problems. The importance of this class of problems derives from its well-known applications to parameter estimation in the context of regression analysis and to the solution of systems of nonlinear equations. We review linear least squares optimization methods and some known linearization techniques. We then study the main gradient-based methods for general nonlinear problems: Newton's method and its modifications, including the most widely used quasi-Newton methods (DFP and BFGS). Gradient methods specific to least squares problems are then introduced: Gauss-Newton and Levenberg-Marquardt. A variety of examples selected from the literature is presented to test the different methods using MATLAB routines. A comparative analysis of the algorithms, based on these computational experiments, highlights the advantages and disadvantages of the different methods.
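As a minimal illustration of the Gauss-Newton method reviewed above (a Python sketch with a finite-difference Jacobian and synthetic data, not the dissertation's MATLAB routines):

```python
import numpy as np

# Minimal Gauss-Newton sketch for nonlinear least squares:
#   min_x 0.5 * ||r(x)||^2, with a forward-difference Jacobian.
# Levenberg-Marquardt would add damping: solve (J^T J + lam*I) p = -J^T r.

def gauss_newton(residual, x0, tol=1e-8, max_iter=50, eps=1e-7):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))        # J[i, j] = d r_i / d x_j
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        # Gauss-Newton step: least-squares solution of J p = -r
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + p
        if np.linalg.norm(p) < tol:
            break
    return x

# Fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
print(gauss_newton(res, x0=[1.0, 0.0]))   # approximately [2.0, -1.5]
```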

Relevância:

30.00%

Resumo:

This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and these states were then embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
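A small Python sketch contrasting linear and power QALY valuations of a health profile. The profile form V = Σᵢ uᵢ (Tᵢʳ − Tᵢ₋₁ʳ) over cumulative durations is one standard specification of the power QALY model; the paper's exact formulation may differ in detail, and r = 0.65 is the optimal coefficient it reports.

```python
# QALY valuation under linear vs. power models. The power-profile formula
# used here is one standard form, assumed for illustration.

def qaly_linear(profile):
    """profile: list of (utility, duration-in-years) episodes."""
    return sum(u * d for u, d in profile)

def qaly_power(profile, r=0.65):
    """Power QALY model with coefficient r (0.65 reported as optimal)."""
    value, elapsed = 0.0, 0.0
    for u, d in profile:
        value += u * ((elapsed + d) ** r - elapsed ** r)
        elapsed += d
    return value

profile = [(0.8, 5), (0.5, 5)]        # 5 years at 0.8, then 5 years at 0.5
print(qaly_linear(profile))           # 6.5
print(qaly_power(profile))            # weights early years more heavily
```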

Relevância:

30.00%

Resumo:

In this work the annealing and growth of CuInS2 thin films is investigated with quasi-real-time in situ Raman spectroscopy. During annealing, a shift of the Raman A1 mode towards lower wave numbers with increasing temperature is observed. A linear temperature dependence of the phonon branch of −2 cm⁻¹ per 100 K is evaluated. The investigation of the growth process (sulfurization of metallic precursors) with high surface sensitivity reveals the occurrence of phases which are not detected with bulk-sensitive methods. This allows a detailed insight into the formation of the CuInS2 phases. Independent of the stoichiometry and doping of the starting precursors, the CuAu ordering of CuInS2 initially forms as the dominant ordering. The transformation of the CuAu ordering into the chalcopyrite one is, in contrast, strongly dependent on the precursor composition and requires high temperatures.

Relevância:

30.00%

Resumo:

Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction of one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor-MRI of the same subject and anisotropy of the skull was approximated from the structural information. A method for incorporation of anisotropy in the forward model and its use in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between the anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation for conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
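As a sketch of the linear, sensitivity-matrix reconstruction step mentioned above, assuming zeroth-order Tikhonov regularisation (a common choice; the abstract does not specify the regularisation) and a random stand-in sensitivity matrix in place of the FEM-derived one:

```python
import numpy as np

# Linear EIT reconstruction with a sensitivity (Jacobian) matrix S and
# zeroth-order Tikhonov regularisation. S here is random stand-in data;
# in practice it would come from the (an)isotropic FEM forward model.

rng = np.random.default_rng(0)
n_meas, n_elem = 258, 5000                   # measurements x mesh elements
S = rng.standard_normal((n_meas, n_elem))    # stand-in sensitivity matrix
d_sigma = np.zeros(n_elem)
d_sigma[2500] = -0.1                         # simulated 10% conductivity drop
b = S @ d_sigma                              # boundary-voltage change (forward data)

lam = 1e-2 * np.linalg.norm(S, 2) ** 2       # regularisation parameter (heuristic)
# Underdetermined system: solve in measurement space,
#   x = S^T (S S^T + lam I)^{-1} b
x = S.T @ np.linalg.solve(S @ S.T + lam * np.eye(n_meas), b)
print(x[2500], x.max(), x.min())
```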

Relevância:

30.00%

Resumo:

Monte Carlo simulations were carried out using a nearest-neighbour ferromagnetic XY model on both 2-D and 3-D quasi-periodic lattices. In 2-D, both the unfrustrated and the frustrated XY model were studied. For the unfrustrated 2-D XY model, we have examined the magnetization, specific heat, linear susceptibility, helicity modulus and the derivative of the helicity modulus with respect to inverse temperature. The behaviour of all these quantities points to a Kosterlitz-Thouless transition occurring in the temperature range Tc ≈ (1.0-1.05) J/kB, with critical exponents that are consistent with previous results (obtained for crystalline lattices). However, in the frustrated case, analysis of the spin glass susceptibility and the Edwards-Anderson order parameter, in addition to the magnetization, specific heat and linear susceptibility, supports a spin glass transition. In the case where the 'thin' rhombus is fully frustrated, a freezing transition occurs at Tf ≈ 0.137 J/kB, which contradicts previous work suggesting the critical dimension of spin glasses to be dc > 2. In the 3-D systems, examination of the magnetization, specific heat and linear susceptibility reveals a conventional second-order phase transition. Through a cumulant analysis and finite-size scaling, a critical temperature of Tc = (2.292 ± 0.003) J/kB and critical exponents of α = 0.03 ± 0.03, β = 0.30 ± 0.01 and γ = 1.31 ± 0.02 have been obtained.
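A minimal Metropolis sketch for the ferromagnetic XY model, written for a square lattice for brevity; the thesis uses quasi-periodic lattices, where only the neighbour structure differs. Units J = kB = 1.

```python
import numpy as np

# Metropolis Monte Carlo for the 2-D ferromagnetic XY model, H = -J sum cos(th_i - th_j)
# over nearest neighbours, here on a periodic square lattice for simplicity.

L, J = 16, 1.0
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, size=(L, L))   # spin angles

def local_energy(th, i, j):
    """Energy of site (i, j) with its four nearest neighbours (periodic)."""
    s = th[i, j]
    nn = [th[(i+1) % L, j], th[(i-1) % L, j], th[i, (j+1) % L], th[i, (j-1) % L]]
    return -J * sum(np.cos(s - t) for t in nn)

def metropolis_sweep(th, T):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = th[i, j]
        e_old = local_energy(th, i, j)
        th[i, j] = rng.uniform(0, 2 * np.pi)     # propose a new angle
        dE = local_energy(th, i, j) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            th[i, j] = old                       # reject the move

for sweep in range(200):
    metropolis_sweep(theta, T=0.9)               # below the KT range quoted above
m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"|m| = {m:.3f}")
```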

Relevância:

30.00%

Resumo:

It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and in general yields no inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. Until now, however, such projections could be implemented only through costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of “quadrics” and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required to build the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
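For reference, the Anderson-Rubin statistic in the standard linear IV setting (generic notation, stated here for convenience rather than taken from the paper):

```latex
% Anderson-Rubin statistic for H_0 : \beta = \beta_0 in y = Y\beta + u,
% with instrument matrix Z (k columns), P_Z = Z(Z'Z)^{-1}Z', M_Z = I - P_Z:
\[
  AR(\beta_0)
  = \frac{(y - Y\beta_0)'\, P_Z\, (y - Y\beta_0)/k}
         {(y - Y\beta_0)'\, M_Z\, (y - Y\beta_0)/(T - k)}
  \;\sim\; F(k,\, T - k)
\]
% under normal errors. The confidence set \{\beta_0 : AR(\beta_0) \le F_\alpha\}
% is a quadric in \beta_0 -- the object whose projections the paper solves for
% analytically.
```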

Relevância:

30.00%

Resumo:

Geometric modelling is important in both computer graphics and engineering. Our ability to represent geometric information sets the limits on, and the ease with which, we can manipulate 3D objects. One such geometric representation is the volumetric mesh, formed of polyhedra assembled so as to approximate a desired shape. Some applications, such as texture mapping and remeshing, benefit from deforming the mesh towards a more regular domain to simplify processing. A deformation is said to be quasi-conformal if it bounds the distortion. This thesis concerns the study and development of quasi-conformal deformation algorithms for volumetric meshes. We study these types of deformations because they preserve the local appearance of a solid well and because, unlike their 2D counterparts, they have received little attention in computer graphics. This research attempts to generalise to volumes concepts that are well understood for surface deformation. First, we present a linear approach to quasi-conformality. We develop a method that deforms the object towards its parametric domain by linear least squares. This method is simple to implement and fast to run, but is only an approximation of quasi-conformality because it does not bound the distortion. Second, we remedy this problem with a nonlinear approach based on vertex positions. We develop a technique that deforms the parametric domain towards the solid by nonlinear least squares. The nonlinearity allows the inclusion of constraints guaranteeing the injectivity of the deformation. Moreover, deforming the parametric domain instead of the object itself allows the use of more general domains. Third, we present a nonlinear approach based on dihedral angles. This method defines the deformation of the solid by the dihedral angles rather than by the vertex positions of the mesh. This change of variables allows a natural expression of the deformation's distortion bounds. We present several applications of this new approach, including parameterisation, interpolation, optimisation and compression of tetrahedral meshes.
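A common way to state the distortion bound mentioned above (a standard definition of conformal distortion in 3D, given here for convenience; the thesis may use a variant):

```latex
% With \sigma_1(x) \ge \sigma_2(x) \ge \sigma_3(x) the singular values of the
% Jacobian J_f(x) of a deformation f, the conformal distortion and the
% quasi-conformality condition read:
\[
  K(x) \;=\; \frac{\sigma_1(x)}{\sigma_3(x)},
  \qquad
  f \text{ is } K\text{-quasi-conformal} \iff \sup_x K(x) \le K .
\]
```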

Relevância:

30.00%

Resumo:

The question of the stability of black holes was first studied by Regge and Wheeler, who investigated linear perturbations of the exterior Schwarzschild spacetime. Further work on this problem led to the study of quasi-normal modes, which are regarded as the characteristic sound of black holes. Quasi-normal modes (QNMs) describe the damped oscillations under perturbations in the surrounding geometry of a black hole, with the frequencies and damping times of the oscillations entirely fixed by the black hole parameters. In the present work we study the influence of a cosmic string on the QNMs of various black hole background spacetimes perturbed by a massless Dirac field.
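The statement that frequencies and damping times are fixed by the black hole parameters is conventionally expressed through a complex QNM frequency (standard convention, added here for clarity):

```latex
% Quasi-normal ringing: a damped oscillation with complex frequency
% \omega = \omega_R + i\,\omega_I, so that
\[
  \psi(t) \;\propto\; e^{-i\omega t}
        \;=\; e^{\omega_I t}\bigl(\cos\omega_R t - i\sin\omega_R t\bigr),
\]
% with oscillation frequency \omega_R and damping time \tau = 1/|\omega_I|
% (stable modes have \omega_I < 0), both determined by the black hole parameters.
```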

Relevância:

30.00%

Resumo:

The starting point of the dissertation is a method developed by V. Maz'ya for approximating a given function f : R^n → R by a linear combination fh of radial, smooth, exponentially decaying basis functions which, in contrast to splines, form only an approximate partition of unity and thus define a method that does not converge as h → 0. This method became known under the name approximate approximations. It turns out, however, that this lack of convergence is irrelevant in practice, since the error between f and the approximation fh can be tuned, via certain parameters, to fall below the machine precision of today's computers. Moreover, the method has great advantages in the numerical solution of Cauchy problems of the form Lu = f with a suitable linear partial differential operator L in R^n. If the right-hand side f is approximated by fh, explicit formulas for the corresponding approximate volume potentials uh can be given in many cases, involving only a one-dimensional integral (e.g. the error function). Until now, the method developed by Maz'ya has not been used for the numerical solution of boundary value problems, apart from heuristic and experimental considerations of the so-called boundary point method. This is where the dissertation comes in. On the basis of radial basis functions, a new approximation method is developed which carries over the advantages of Maz'ya's method for Cauchy problems to the numerical solution of boundary value problems. As representative cases, the interior Dirichlet problem for the Laplace equation and for the Stokes equations in R^2 is treated, with convergence studies carried out and error estimates given for each of the individual approximation steps.
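A minimal Python sketch of the Gaussian quasi-interpolant behind approximate approximations, in one dimension; grid spacing h and shape parameter D are the tunable parameters the abstract alludes to.

```python
import numpy as np

# Maz'ya-style "approximate approximation": quasi-interpolation of f on a
# grid of spacing h by Gaussians. The classical 1-D formula is
#   f_h(x) = (pi*D)^{-1/2} * sum_m f(m*h) * exp(-(x - m*h)^2 / (D*h^2));
# the saturation error decreases rapidly in D but does not vanish as h -> 0.

def approximate_approximation(f, x, h=0.1, D=2.0, support=10):
    """Evaluate f_h(x) using grid nodes within `support` spacings of x."""
    x = np.atleast_1d(x).astype(float)
    m0 = np.floor(x / h).astype(int)
    fh = np.zeros_like(x)
    for k in range(-support, support + 1):
        nodes = (m0 + k) * h
        fh += f(nodes) * np.exp(-((x - nodes) ** 2) / (D * h * h))
    return fh / np.sqrt(np.pi * D)

xs = np.linspace(0.0, 1.0, 5)
err = np.abs(approximate_approximation(np.sin, xs) - np.sin(xs)).max()
print(err)  # small, but shrinking h alone will not drive it to zero:
            # that is the non-convergence the abstract refers to.
```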