931 results for Epstein and Zin's recursive utility function
Abstract:
With each cellular generation, oxygenic photoautotrophs must accumulate abundant protein complexes that mediate light capture, photosynthetic electron transport and carbon fixation. In addition to this net synthesis, oxygenic photoautotrophs must counter the light-dependent photoinactivation of Photosystem II (PSII), using metabolically expensive proteolysis, disassembly, resynthesis and re-assembly of protein subunits. We used growth rates, elemental analyses and protein quantitations to estimate the nitrogen (N) metabolism costs both to accumulate the photosynthetic system and to maintain PSII function in the diatom Thalassiosira pseudonana, growing at two pCO2 levels across a range of light levels. The photosynthetic system contains c. 15-25% of total cellular N. Under low growth light, N (re)cycling through PSII repair is only c. 1% of the cellular N assimilation rate. As growth light increases to inhibitory levels, N metabolite cycling through PSII repair increases to c. 14% of the cellular N assimilation rate. Cells growing under the assumed future 750 ppmv pCO2 show higher growth rates under optimal light, coinciding with a lowered N metabolic cost to maintain photosynthesis, but then suffer greater photoinhibition of growth under excess light, coincident with rising costs to maintain photosynthesis. We predict this quantitative trait response to light will vary across taxa.
Abstract:
Despite the key importance of altered oceanic mantle as a repository and carrier of light elements (B, Li, and Be) to depth, its inventory of these elements has hardly been explored and quantified. In order to constrain the systematics and budget of these elements, we studied samples of highly serpentinized (>50%) spinel harzburgite drilled at the Mid-Atlantic Ridge (Fifteen-Twenty Fracture Zone, ODP Leg 209, Sites 1272A and 1274A). In-situ analysis by secondary ion mass spectrometry reveals that the B, Li and Be contents of mantle minerals (olivine, orthopyroxene, and clinopyroxene) remain unchanged during serpentinization. B and Li abundances largely correspond to those of unaltered mantle minerals, whereas Be is close to the detection limit. The Li contents of clinopyroxene are slightly higher (0.44-2.8 µg/g) than those of unaltered mantle clinopyroxene, and olivine and clinopyroxene show an inverse Li partitioning compared to literature data. These findings, along with textural observations and major-element compositions obtained by microprobe analysis, suggest reaction of the peridotites with a mafic silicate melt before serpentinization. Serpentine minerals are enriched in B (most values between 10 and 100 µg/g) and depleted in Li (most values below 1 µg/g) compared to the primary phases, with considerable variation within and between samples; Be is at the detection limit. Analysis of whole-rock samples by prompt gamma activation shows that serpentinization tends to increase the B (10.4-65.0 µg/g), H2O and Cl contents of peridotites and to lower their Li contents (0.07-3.37 µg/g), implying that, contrary to alteration of oceanic crust, B is fractionated from Li and that the B and Li inventory should depend essentially on rock-water ratios. Based on our results and on literature data, we calculate the inventory of B and Li contained in the oceanic lithosphere, and its partitioning between crust and mantle, as a function of plate characteristics.
We model four cases: an ODP Leg 209-type lithosphere with almost no igneous crust and a Semail-type lithosphere with a thick igneous crust, each at 1 and at 75 Ma. The results show that the Li contents of the oceanic lithosphere are highly variable (17-307 kg in a column of 1 m × 1 m × lithosphere thickness, hereafter kg/col). They are controlled by the primary mantle phases and by altered crust, whereas the B contents (25-904 kg/col) depend entirely on serpentinization. In all cases, large quantities of B reside in the uppermost part of the plate and could hence be easily liberated during slab dehydration. The most prominent input of Li into subduction zones is to be expected from Semail-type lithosphere, because most of the Li is stored at shallow levels in the plate. Subducting an ODP Leg 209-type lithosphere would contribute only very little Li from the slab. Serpentinized mantle thus plays an important role in B recycling in subduction zones, but it is of lesser importance for Li.
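The per-column bookkeeping behind these inventory estimates can be sketched as follows; the layer thicknesses, densities and concentrations below are illustrative assumptions, not the paper's actual inputs.

```python
# Hedged sketch of the "kg per 1 m x 1 m column" bookkeeping described above.
# Layer thicknesses, densities and concentrations are illustrative assumptions.

def column_inventory(layers):
    """Sum element mass (kg) in a 1 m x 1 m column.

    layers: list of (thickness_m, density_kg_m3, concentration_ug_g) tuples.
    """
    total = 0.0
    for thickness_m, density_kg_m3, conc_ug_g in layers:
        mass_rock = thickness_m * 1.0 * 1.0 * density_kg_m3  # kg of rock in the slice
        total += mass_rock * conc_ug_g * 1e-6                # 1 µg/g = 1e-6 kg/kg
    return total

# Example: a thin serpentinized section over fresh mantle, with assumed B contents
b_layers = [
    (500.0, 2600.0, 30.0),   # 500 m serpentinite, 30 µg/g B
    (5000.0, 3300.0, 0.1),   # 5 km fresh mantle, 0.1 µg/g B
]
print(column_inventory(b_layers), "kg B per column")
```

With these assumed numbers the serpentinite layer dominates the column total, mirroring the paper's conclusion that the B inventory depends entirely on serpentinization.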
Abstract:
The conservation of birds and their habitats is essential to maintain well-functioning ecosystems including human-dominated habitats. In simplified or homogenized landscapes, patches of natural and semi-natural habitat are essential for the survival of plant and animal populations. We compared species composition and diversity of trees and birds between gallery forests, tree islands and hedges in a Colombian savanna landscape to assess how fragmented woody plant communities affect forest bird communities and how differences in habitat characteristics influence bird species traits and their potential ecosystem function. Bird and tree diversity was higher in forests than in tree islands and hedges. Soil depth influenced woody species distribution, and canopy cover and tree height determined bird species distribution, resulting in plant and bird communities that mainly differed between forest and non-forest habitat. Bird and tree species and traits widely co-varied. Bird species in tree islands and hedges were on average smaller, less specialized to habitat and more tolerant to disturbance than in forest, but dietary differences did not emerge. Despite being less complex and diverse than forests, hedges and tree islands significantly contribute to the conservation of forest biodiversity in the savanna matrix. Forest fragments remain essential for the conservation of forest specialists, but hedges and tree islands facilitate spillover of more tolerant forest birds and their ecological functions such as seed dispersal from forest to the savanna matrix.
Abstract:
In the present global era, in which firms choose the locations of their plants beyond national borders, location characteristics are important for attracting multinational enterprises (MNEs). Better access to countries with large markets is clearly attractive for MNEs. For example, preferential tariff treatment such as the Generalized System of Preferences (GSP) is beneficial for MNEs whose home country does not have such treatment. Not only country characteristics but also region characteristics (i.e. province-level or city-level ones) matter, particularly when location characteristics differ widely across a nation's regions. The existence of industrial concentration, that is, agglomeration, is a typical regional characteristic. It is with consideration of these country-level and region-level characteristics that MNEs decide their locations abroad. A large number of academic studies have investigated in what kinds of countries MNEs locate, i.e. location choice analysis. Employing the usual new economic geography model (i.e. constant elasticity of substitution (CES) utility function, Dixit-Stiglitz monopolistic competition, and iceberg trade costs), the literature derives the profit function, whose coefficients are estimated using maximum likelihood procedures. Recent studies are as follows: Head, Ries, and Swenson (1999) for Japanese MNEs in the US; Belderbos and Carree (2002) for Japanese MNEs in China; Head and Mayer (2004) for Japanese MNEs in Europe; Disdier and Mayer (2004) for French MNEs in Europe; Castellani and Zanfei (2004) for large MNEs worldwide; Mayer, Mejean, and Nefussi (2007) for French MNEs worldwide; Crozet, Mayer, and Mucchielli (2004) for MNEs in France; and Basile, Castellani, and Zanfei (2008) for MNEs in Europe. At present, three main topics can be found in this literature. The first introduces various location elements as independent variables.
The above-mentioned new economic geography model usually yields a profit function that depends on market size, factor prices, the price of intermediate goods, and trade costs. As a proxy for the price of intermediate goods, a measure of agglomeration is often used, particularly the number of manufacturing firms. Some studies employ more disaggregated counts of manufacturing firms, such as the number of manufacturing firms with the same nationality as the firms choosing the location (e.g., Head et al., 1999; Crozet et al., 2004) or the number of firms belonging to the same firm group (e.g., Belderbos and Carree, 2002). As part of trade costs, some investment climate measures have been examined: free trade zones in the US (Head et al., 1999), special economic zones and open coastal cities in China (Belderbos and Carree, 2002), and Objective 1 structural funds and cohesion funds in Europe (Basile et al., 2008). Second, the validity of proxy variables for location elements is further examined. Head and Mayer (2004) examine the validity of market potential in location choice. They propose the use of two measures: the Harris market potential index (Harris, 1954) and the Krugman-type index used in Redding and Venables (2004). The Harris-type index is simply the sum of distance-weighted real GDP. The Krugman-type market potential index, directly derived from the new economic geography model, takes into account the extent of competition (i.e. the price index) and is constructed using the estimated importing-country dummy variables from the well-known gravity equation, as in Redding and Venables (2004). They find that "theory does not pay", in the sense that the Harris market potential outperforms Krugman's market potential in both the magnitude of its coefficient and the fit of the estimated model. The third topic explores substitution patterns across locations by examining inclusive values in the nested-logit model.
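The Harris-type index described above is straightforward to compute; the sketch below uses made-up GDPs and distances purely for illustration.

```python
# Minimal sketch of the Harris (1954) market potential index mentioned above:
# the sum of real GDP of surrounding markets weighted by inverse distance,
# plus an own-market term. All numbers are illustrative assumptions.

def harris_market_potential(own_gdp, own_internal_distance, others):
    """others: list of (gdp, distance) pairs; distances share one unit."""
    potential = own_gdp / own_internal_distance  # own-market term
    for gdp, distance in others:
        potential += gdp / distance
    return potential

mp = harris_market_potential(
    own_gdp=100.0, own_internal_distance=50.0,
    others=[(400.0, 200.0), (250.0, 500.0)],
)
print(mp)  # 100/50 + 400/200 + 250/500 = 4.5
```

The Krugman-type index replaces the inverse-distance weights with gravity-equation estimates that account for the price index, which is why it is harder to construct.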
For example, using firm-level data on French investments both in France and abroad over the 1992-2002 period, Mayer et al. (2007) investigate the determinants of location choice and assess empirically whether the domestic economy has been losing attractiveness in recent years. The estimated coefficient for the inclusive value is strongly significant and near unity, indicating that the national economy is not different from the rest of the world in terms of substitution patterns. Similarly, Disdier and Mayer (2004) investigate whether French MNEs consider Western and Eastern Europe as two distinct groups of potential host countries by examining the coefficient of the inclusive value in nested-logit estimation. They confirm the relevance of an East-West structure in the country location decision and furthermore show that this relevance decreases over time. The purpose of this paper is to investigate the location choice of Japanese MNEs in Thailand, Cambodia, Laos, Myanmar, and Vietnam; it is closely related to the third topic mentioned above. By examining region-level location choice with the nested-logit model, I investigate the relative importance of not only country characteristics but also region characteristics. Such an investigation is particularly valuable for location choice in these five countries: industrialization remains immature there, and because these countries have not yet succeeded in attracting many MNEs, crucial regional variation is not yet expected within each nation, meaning that country characteristics are still relatively important for attracting MNEs. To illustrate, in the case of Cambodia and Laos, one of the crucial elements for Japanese MNEs would be that LDC preferential tariff schemes are available for exports from Cambodia and Laos.
On the other hand, in the case of Thailand and Vietnam, which have accepted relatively large numbers of MNEs and thus show greater regional inequality, regional characteristics such as the existence of agglomeration would become important elements in location choice. Our sample countries therefore seem to offer rich variation for analyzing the relative importance of country characteristics versus region characteristics. Our empirical strategy has a further advantage. As in the third topic in the location choice literature, the use of the nested-logit model enables us to examine substitution patterns between country-based and region-based location decisions by MNEs in the countries concerned. For example, it is possible to investigate empirically whether Japanese multinational firms consider Thailand/Vietnam and the other three countries as two distinct groups of potential host countries, by examining the inclusive value parameters in nested-logit estimation. In particular, our sample countries all experienced dramatic changes in, for example, economic growth and trade-cost reduction during the sample period; thus, we can trace the dynamics of such substitution patterns. A rigorous analysis of the relative importance of country and region characteristics is invaluable from the viewpoint of policy implications. First, while country characteristics should be improved mainly by each country's central government, region characteristics can sometimes be improved even by local governments or smaller institutions such as private agencies. Consequently, it becomes important for these smaller institutions to know just how crucial the improvement of region characteristics is for attracting foreign companies. Second, as economies grow, country characteristics become similar across countries. For example, LDC preferential tariff schemes are available only while a country is less developed.
Therefore, it is important, particularly for the least developed countries, to know what kinds of regional characteristics become important with economic growth; in other words, after their country characteristics become similar to those of the more developed countries. I also incorporate one important characteristic of MNEs, namely productivity. The well-known Helpman-Melitz-Yeaple model indicates that only firms with higher productivity can afford overseas entry (Helpman et al., 2004). Beyond this argument, there may be differences in MNEs' productivity among our sample countries and regions. Such differences matter for the "spillover effects" from MNEs, which are among the most important gains for host countries in accepting their entry. Spillover effects arise when the presence of inward foreign direct investment (FDI) raises domestic firms' productivity through various channels such as imitation. Such positive effects might be larger in areas with more productive MNEs. Therefore, it is important for host countries to know how productive the firms likely to invest in them are. The rest of this paper is organized as follows. Section 2 takes a brief look at the worldwide distribution of Japanese overseas affiliates. Section 3 provides an empirical model to examine their location choice, and lastly, we discuss future work to estimate our model.
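The inclusive value at the heart of the nested-logit approach discussed above can be sketched as follows; the profit values, nest composition and dissimilarity parameter lambda are illustrative assumptions, not estimates.

```python
import math

# Hedged sketch of the nested-logit objects discussed above: the inclusive
# value of a nest is the (lambda-scaled) log-sum of exponentiated region-level
# profits, and a nest is then chosen with logit probabilities over inclusive
# values. Profit values and lambda are illustrative assumptions.

def inclusive_value(region_profits, lam):
    return lam * math.log(sum(math.exp(v / lam) for v in region_profits))

def nest_probabilities(nests, lam):
    """nests: dict name -> list of region-level deterministic profits."""
    ivs = {name: inclusive_value(v, lam) for name, v in nests.items()}
    denom = sum(math.exp(iv) for iv in ivs.values())
    return {name: math.exp(iv) / denom for name, iv in ivs.items()}

probs = nest_probabilities(
    {"Thailand": [1.0, 0.5, 0.2], "Vietnam": [0.8, 0.6]},
    lam=0.7,
)
print(probs)
```

An estimated lambda near unity, as in Mayer et al. (2007), means substitution within a nest looks no different from substitution across nests, collapsing the model toward a plain conditional logit.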
Abstract:
Computational Fluid Dynamics tools have already become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the available computer power. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not reached a mature stage, with the accuracy of the turbulence models and the free-surface representation remaining the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analyses, and URANS equation solvers are used. The present work shows the applicability of, and the benefits derived from, new approaches to turbulence modeling (Detached Eddy Simulation, DES) and free-surface representation (Level Set) in the URANS solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict much broader frequency content and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested in a set of fairly complex flows generated by a Wigley hull in pure drift motion, with drift angle ranging from 10 to 60 degrees and at several Froude numbers to study the impact of their variation. Quantitative verification and validation are performed on the obtained results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of complex flows of extreme unsteady ship maneuvers. The Level Set method is able to capture very complex free-surface geometries, and the use of DES in unsteady simulations greatly improves the results obtained. Vortical structures and instabilities as a function of the drift angle and Fr are qualitatively identified.
Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities analyzed. St based on the hull length decreases with increasing distance along the vortex core (x), which is similar to results from other simulations. However, St scaled using the distance along the vortex core shows strong oscillations, compared with the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, grid resolution, and interaction between the tip vortex and other vortical structures, and needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data are lacking for large α, especially for the local flow. More recently, the high-performance computing capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20º was conducted using three grids with 10M, 48M and 250M points. DES of the flow around KVLCC2 at α = 30º was analyzed using a 13M grid and compared with earlier DES results on a 1.6M grid. Both studies are consistent with what was concluded on grid resolution herein, since the dominant frequencies for shear-layer, Karman-like, horse-shoe and helical instabilities show only marginal variation under grid refinement. The penalties of using coarse grids are lower frequency amplitudes and less resolved TKE. Therefore, finer grids should be used to improve V&V by resolving most of the active turbulent scales for all Fr and α, which can hopefully be compared with additional EFD data for large α when they become available.
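The two Strouhal scalings compared above differ only in the chosen length scale. A minimal sketch, with an assumed 1/x decay of the dominant frequency along the vortex core, illustrates why St based on hull length falls with x while St based on x can stay almost constant; all numbers are illustrative assumptions.

```python
# St = f * L / U; only the length scale L differs between the two scalings
# discussed above. The frequency decay law, speeds and lengths are assumed.

def strouhal(frequency_hz, length_m, speed_m_s):
    return frequency_hz * length_m / speed_m_s

U, hull_length = 1.5, 3.0
# Assume the dominant helical-instability frequency decays roughly as 1/x
# along the vortex core, as the "almost constant" St_x of earlier work suggests:
for x in (0.5, 1.0, 2.0):
    f = 1.2 / x                              # assumed frequency decay law
    st_hull = strouhal(f, hull_length, U)    # decreases with x
    st_core = strouhal(f, x, U)              # stays constant under this law
    print(x, st_hull, st_core)
```

Under this assumed law St_core is constant by construction; the oscillations reported above are deviations from exactly this kind of behaviour.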
Abstract:
The quasisteady structure of the corona of a laser-irradiated pellet is completely determined for arbitrary Z_i (ion charge number) and r_c/r_a (ratio of critical and ablation radii), and for heat-flux saturation factor f above approximately 0.04. The ion-to-electron temperature ratio at r_c grows sensibly with Z_i; all other quantities depend weakly and nonmonotonically on Z_i. For r_c/r_a close to unity, and all Z_i of interest (Z_i < 47), the flow is subsonic at r_c. For a given laser power W, flux saturation may decrease (low f) or increase (high f) the ablation pressure P_a relative to the value obtained when saturation is not considered; in some cases a decrease in f with W fixed increases P_a. For intermediate f (~0.1), P_a ∝ (W/r_c^2)^(2/3) ρ_c^(1/3) (ρ_c = critical density), independently of r_c/r_a; for f ~ 0.6, P_a is larger by a factor of about (r_c/r_a)^(4/3). For r_c/r_a > 1.2, roughly, the mass ablation rate is C(Z_i) [(m_i/(k Z_i))^(1/2) K r_a^2 P_a^3]^(1/2), independent of ρ_c and f, and barely dependent on Z_i (m_i is the ion mass; k, Boltzmann's constant; K, the conductivity coefficient; and C, a tabulated function).
Abstract:
The Pridneprovsky Chemical Plant was one of the largest uranium processing enterprises in the former USSR, producing a huge amount of uranium residues. The Zapadnoe tailings site contains most of these residues. We propose a theoretical framework based on multicriteria decision analysis and fuzzy logic to analyze different remediation alternatives for the Zapadnoe tailings, one which simultaneously accounts for potentially conflicting economic, social and environmental objectives. We build an objective hierarchy that includes all the relevant aspects. We propose using fuzzy rather than precise values to evaluate remediation alternatives against the different criteria and to quantify preferences, such as the weights representing the relative importance of the criteria identified in the objective hierarchy. Finally, we suggest that remediation alternatives be evaluated by means of a fuzzy additive multi-attribute utility function and ranked on the basis of the trapezoidal fuzzy number representing their overall utility.
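A minimal sketch of the evaluation scheme just described, assuming trapezoidal fuzzy numbers encoded as 4-tuples (a, b, c, d), crisp weights for simplicity (the paper treats weights as fuzzy too), and a crude average-of-parameters defuzzification for ranking; all numbers are illustrative, not the case-study data.

```python
# Hedged sketch: criterion scores are trapezoidal fuzzy numbers (a, b, c, d),
# the overall utility of an alternative is the weighted sum, and alternatives
# are ranked by a simple defuzzified value. All inputs are illustrative.

def tfn_add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def tfn_scale(w, x):  # multiply a trapezoidal number by a crisp weight w >= 0
    return tuple(w * xi for xi in x)

def overall_utility(weights, scores):
    total = (0.0, 0.0, 0.0, 0.0)
    for w, s in zip(weights, scores):
        total = tfn_add(total, tfn_scale(w, s))
    return total

def defuzzify(x):  # crude centroid proxy used only for ranking
    return sum(x) / 4.0

weights = [0.5, 0.3, 0.2]  # economic, social, environmental (assumed)
alternatives = {
    "A": [(0.6, 0.7, 0.8, 0.9), (0.2, 0.3, 0.4, 0.5), (0.5, 0.6, 0.7, 0.8)],
    "B": [(0.3, 0.4, 0.5, 0.6), (0.6, 0.7, 0.8, 0.9), (0.4, 0.5, 0.6, 0.7)],
}
ranked = sorted(alternatives,
                key=lambda n: -defuzzify(overall_utility(weights, alternatives[n])))
print(ranked)
```

A production version would use fuzzy weights and a proper trapezoidal centroid, but the additive aggregation step is the same.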
Abstract:
The mechanical behavior of three tungsten (W) alloys with vanadium (V) and lanthana (La2O3) additions (W–4%V, W–1%La2O3, W–4%V–1%La2O3), processed by hot isostatic pressing (HIP), has been compared with that of pure W to analyze the influence of the dopants. Mechanical characterization was performed by three-point bending (TPB) tests in an oxidizing air atmosphere over a temperature range between 77 K (immersion tests in liquid nitrogen) and 1273 K, from which the fracture toughness, flexural strength, and yield strength as functions of temperature were obtained. The results show that the V and La2O3 additions improve the mechanical properties and the oxidation behavior, respectively. Furthermore, a synergistic effect of both dopants results in an extraordinary increase in flexural strength, fracture toughness and oxidation resistance compared with pure W, especially at higher temperatures. In addition, a new experimental method was developed to machine notches with a very small tip radius (around 5–7 μm), much more similar to a real crack. The fracture toughness results were lower than those obtained with traditional notch machining, which can be explained by electron-microscopy observations of deformation behind the notch tip. Finally, scanning electron microscopy (SEM) examination of the microstructure and fracture surfaces was used to determine and analyze the relationship between the macroscopic mechanical properties and the micromechanisms of failure involved, depending on the temperature and the dispersion of the alloy.
Abstract:
The water time constant and the mechanical time constant greatly influence the power and speed oscillations of a hydro-turbine-generator unit. This paper discusses turbine power transients in response to gate-position changes of different natures and magnitudes. The work presented here analyses the characteristics of the hydraulic system with an emphasis on changes in the above time constants. The simulation study is based on mathematical first-, second-, third- and fourth-order transfer function models. The study is further extended to identify discrete time-domain models and their characteristic representation without noise and with noise at 10 and 20 dB signal-to-noise ratio (SNR). The use of a self-tuned control approach to minimise the speed deviation under plant parameter changes and disturbances is also discussed.
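A minimal sketch of the kind of first-order model referred to above: the classic ideal hydro-turbine transfer function G(s) = (1 - Tw s)/(1 + 0.5 Tw s), whose non-minimum-phase zero produces the well-known initial power dip after a gate-opening step. Tw, the step size and the integration settings are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: unit gate step into G(s) = (1 - Tw*s)/(1 + 0.5*Tw*s),
# the classic ideal (lossless) hydro-turbine power model. In the time domain,
# 0.5*Tw*dy/dt + y = u - Tw*du/dt, so after the step the response obeys
# dy/dt = (u - y)/(0.5*Tw) with initial value -2 (the high-frequency gain).

def turbine_step_response(tw, t_end, dt=1e-4):
    """Unit gate step at t = 0; returns (times, power) lists (forward Euler)."""
    y = -2.0                          # initial dip: -Tw / (0.5*Tw) = -2
    times, power = [0.0], [y]
    t = 0.0
    while t < t_end:
        y += dt * (1.0 - y) / (0.5 * tw)  # u = 1 after the step
        t += dt
        times.append(t)
        power.append(y)
    return times, power

times, power = turbine_step_response(tw=1.0, t_end=5.0)
print(power[0], power[-1])  # dips to -2, then settles near +1
```

The water time constant Tw sets both the depth duration of the dip and the settling time, which is why the paper's transients are so sensitive to it.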
Abstract:
This work develops a novel Cross-Entropy (CE) optimization-based fuzzy controller for an Unmanned Aerial Monocular Vision-IMU System (UAMVIS) to solve the see-and-avoid problem using its accurate autonomous localization information. The function of this fuzzy controller is to regulate the heading of the system to avoid obstacles, e.g. a wall. In the Matlab Simulink-based training stages, the Scaling Factor (SF) is first adjusted according to the specified task, and then the Membership Functions (MFs) are tuned based on the optimized Scaling Factor to further improve collision avoidance performance. After obtaining the optimal SF and MFs, the rule base was reduced by 64% (from 125 rules to 45), and a large number of real flight tests with a quadcopter were carried out. The experimental results show that this approach precisely navigates the system to avoid obstacles. To the best of our knowledge, this is the first work to present an optimized fuzzy controller for a UAMVIS using the Cross-Entropy method for Scaling Factor and Membership Function optimization.
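The Cross-Entropy optimization loop used for this kind of tuning can be sketched generically; the quadratic cost below stands in for the real flight-simulation cost, and the assumed optimum at 2.0 is purely illustrative.

```python
import random

# Hedged sketch of a Cross-Entropy loop for tuning one controller gain:
# sample candidate Scaling Factors from a Gaussian, score them with a cost
# function, refit the Gaussian to the elite samples, repeat. The cost function
# and its optimum (2.0) are stand-in assumptions, not the paper's simulator.

def cross_entropy_minimize(cost, mu=0.0, sigma=5.0, n=50, elite_frac=0.2, iters=30):
    rng = random.Random(0)
    for _ in range(iters):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        samples.sort(key=cost)                       # best (lowest cost) first
        elite = samples[: max(2, int(elite_frac * n))]
        mu = sum(elite) / len(elite)                 # refit mean to the elite
        sigma = (sum((x - mu) ** 2 for x in elite) / len(elite)) ** 0.5 + 1e-9
    return mu

best_sf = cross_entropy_minimize(lambda sf: (sf - 2.0) ** 2)
print(best_sf)  # converges near the assumed optimum 2.0
```

The same loop extends to Membership Function parameters by sampling a vector instead of a scalar; the paper's two-stage scheme simply runs it first on the SF, then on the MFs.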
Abstract:
Olivier Danvy and others have shown the syntactic correspondence between reduction semantics (a small-step semantics) and abstract machines, as well as the functional correspondence between reduction-free normalisers (a big-step semantics) and abstract machines. The correspondences are established by program transformation (so-called interderivation) techniques. A reduction semantics and a reduction-free normaliser are interderivable when the abstract machine obtained from each is the same. However, the correspondences fail when the underlying reduction strategy is hybrid, i.e., relies on another sub-strategy. Hybridisation is an essential structural property of full-reducing and complete strategies. Hybridisation is unproblematic in the functional correspondence, but in the syntactic correspondence the refocusing and inlining-of-iterate-function steps become context-sensitive, preventing the refunctionalisation of the abstract machine. We show how to solve the problem and showcase the interderivation of normalisers for normal order, the standard, full-reducing and complete strategy of the pure lambda calculus. Our solution makes it possible to interderive, rather than contrive, full-reducing abstract machines. As expected, the machine we obtain is a variant of Pierre Crégut's full Krivine machine KN.
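To make the normal-order strategy and its hybrid character concrete, here is a naive recursive normaliser (not one of the derived abstract machines): full normalisation relies on a weak-head sub-strategy, mirroring the hybridisation discussed above. Capture-avoiding substitution is omitted for brevity, so the sketch is only safe for terms without variable shadowing.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg).
# normalize is the hybrid, full-reducing strategy; whnf is its sub-strategy.

def subst(term, name, value):
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        if term[1] == name:
            return term  # name rebound: stop (no capture-avoidance here)
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def normalize(term):
    """Normal order: leftmost-outermost, reducing under lambdas."""
    tag = term[0]
    if tag == "var":
        return term
    if tag == "lam":
        return ("lam", term[1], normalize(term[2]))   # go under the binder
    fun = whnf(term[1])                               # rely on the sub-strategy
    if fun[0] == "lam":
        return normalize(subst(fun[2], fun[1], term[2]))  # beta step
    return ("app", normalize(fun), normalize(term[2]))

def whnf(term):
    """Weak-head sub-strategy: normal order is hybrid because it relies on this."""
    while term[0] == "app":
        fun = whnf(term[1])
        if fun[0] == "lam":
            term = subst(fun[2], fun[1], term[2])
        else:
            return ("app", fun, term[2])
    return term

identity = ("lam", "x", ("var", "x"))
example = ("app", identity, ("app", identity, ("var", "y")))
print(normalize(example))  # -> ("var", "y")
```

The KN machine the paper derives is, in essence, this hybrid strategy made iterative with explicit environments and continuations.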
Abstract:
Decision makers increasingly face complex decision-making problems where they have to simultaneously consider many, often conflicting, criteria.
In most decision-making problems it is necessary to consider economic, social and environmental criteria. Decision-making theory provides an adequate framework for helping decision makers to make complex decisions where they can jointly consider the uncertainty about the performance of each alternative for each attribute, and the imprecision of the decision maker's preferences. In this PhD thesis we focus on the imprecision of the decision maker's preferences represented by an additive multiattribute utility function. Therefore, we consider the imprecision of weights, as well as of component utility functions for each attribute. We consider the case in which the imprecision is represented by ranges of values or by ordinal information rather than precise values. In this respect, we propose methods for ranking alternatives based on notions of dominance intensity, also known as preference intensity, which attempt to measure how much more preferred each alternative is to the others. The performance of the proposed methods has been analyzed and compared against the leading existing methods that are applicable to this type of problem. For this purpose, we conducted a simulation study using two efficiency measures (hit ratio and Kendall's correlation coefficient) to compare the different methods.
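The two efficiency measures named above are easy to state precisely; the reference and method rankings below are illustrative assumptions.

```python
# Hedged sketch of the two efficiency measures: the hit ratio is the fraction
# of trials in which a method selects the same best alternative as the
# reference, and Kendall's tau compares whole rankings pair by pair.

def hit_ratio(reference_bests, method_bests):
    hits = sum(1 for r, m in zip(reference_bests, method_bests) if r == m)
    return hits / len(reference_bests)

def kendall_tau(rank_a, rank_b):
    """rank_a, rank_b: lists giving each alternative's position in two rankings."""
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(hit_ratio(["A", "B", "A", "C"], ["A", "B", "B", "C"]))  # 0.75
print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))                # 2/3
```

The hit ratio only rewards getting the top alternative right, while Kendall's tau penalises every inverted pair, which is why simulation studies usually report both.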
Abstract:
The foundations of Bayesian Decision Theory provide a coherent framework in which decision-making problems may be solved. With the advent of powerful computers and given the many challenging problems we face, we are gradually attempting to solve more and more complex decision-making problems with multidimensional sources of uncertainty; several conflicting objectives; preferences, goals and beliefs that change over time; and many groups affected by the decisions. These complexity factors demand better tools for representing decision-making problems, place strong cognitive demands on the decision maker's judgements, and lead to involved computational problems. This thesis deals with these three topics. In recent years, many representation tools have been developed for decision-making problems. In Chapter 1, we provide a critical review of the main graphical methods for representing and solving such problems, concluding with some fundamental recommendations and generalisations. Our second concern leads us to study these methods when only partial information about the decision maker's preferences and beliefs is available. In Chapter 2, we study this problem when it is structured as an influence diagram (ID). We give an algorithm to compute the nondominated solutions in an ID and analyse several ad hoc solution concepts. The last issue is studied in Chapters 3 and 4. Motivated by a reservoir-management application, we introduce a heuristic method for solving sequential decision-making problems. Since it shows very good performance, we extend the idea to general sequential problems and quantify its goodness. We then explore in several directions the application of simulation methods to Decision Analysis. We first introduce Monte Carlo methods to approximate the nondominated set in continuous problems.
We explore then in several directions the application of simulation based methods to Decisión Analysis. We first introduce Monte Cario methods to approximate the nondominated set in continuous problems. Then, we provide a Monte Cario Markov Chain method for problems under total information with general structure: decisions and random variables may be continuous, and the utility function may be arbitrary. Our scheme is applicable to many problems modeled as IDs. We conclude with discussions and several open problems.
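The simulation-based idea above can be illustrated in miniature. The sketch below is not the thesis's algorithm; it is a minimal, self-contained Monte Carlo estimate of expected utility for each candidate decision in a hypothetical newsvendor-style problem (the utility and demand model are invented for illustration), reusing the same random seed across decisions so the comparisons share common random numbers.

```python
import random

def expected_utility(decision, utility, sample_state, n=10_000, seed=0):
    """Crude Monte Carlo estimate of E[u(d, X)] for one decision d.

    Reusing the same seed for every decision gives common random numbers,
    which reduces the variance of the *differences* between decisions.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += utility(decision, sample_state(rng))
    return total / n

# Hypothetical toy problem: choose an order quantity d facing demand X ~ U(0, 100).
def utility(d, x):
    sold = min(d, x)
    return 5.0 * sold - 2.0 * d      # revenue minus purchase cost

def sample_demand(rng):
    return rng.uniform(0.0, 100.0)

decisions = range(0, 101, 10)
best = max(decisions, key=lambda d: expected_utility(d, utility, sample_demand))
# Analytically E[u] = 3d - d^2/40, maximised at d = 60.
```

For continuous decision spaces or arbitrary utilities, as in the thesis, this grid search would be replaced by the Markov chain Monte Carlo scheme the abstract describes.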
Resumo:
The efficient set of a Multicriteria Decision-Making problem plays a fundamental role in the solution process, since the Decision Maker's preferred choice should lie in this set. However, computing that set may be difficult, especially in continuous and/or nonlinear problems. Chapter 1 introduces Multicriteria Decision-Making and reviews the basic concepts and tools used in later developments.
Chapter 2 studies decision-making problems under certainty. The basic tool, and point of departure, is the vector value function, which represents imprecision in the DM's preferences. We propose a characterization of the value-efficient set and different approximations with nesting and convergence properties; several interactive solution algorithms complement the theoretical results. Chapter 3 is devoted to problems under uncertainty. Its development partially parallels the previous chapter and uses vector utility functions to model the DM's preferences. Starting from simple distributions, we introduce utility efficiency, its characterization and approximations, which we then extend to discrete and continuous classes of distributions. Chapter 4 studies the problem under fuzziness, at an introductory level. We conclude by suggesting several open problems.
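The notion of an efficient (nondominated) set that both abstracts revolve around can be made concrete for the simplest case, a finite set of outcomes. The sketch below is a generic Pareto filter under the assumption that every criterion is to be maximised; it is an illustration of the concept, not of the thesis's characterizations or approximation schemes.

```python
def nondominated(points):
    """Return the points of `points` that no other point Pareto-dominates.

    A point q dominates p when q is at least as good in every criterion
    and strictly better in at least one (all criteria maximised here).
    """
    def dominates(q, p):
        return (all(x >= y for x, y in zip(q, p))
                and any(x > y for x, y in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical bi-criteria outcomes, e.g. (quality, durability).
outcomes = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (0, 6)]
front = nondominated(outcomes)
# (2, 2) is dominated by (3, 3); every other outcome is efficient.
```

This brute-force filter is quadratic in the number of points; the thesis addresses the far harder continuous/nonlinear case, where the efficient set must be characterized and approximated rather than enumerated.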
Resumo:
Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LED lighting offers advantages in efficiency, energy consumption, design, size and light quality. Researchers have been working on LED improvements for more than 50 years, and their relevance for illumination is rapidly increasing. This thesis focuses on one important field of application: spotlights, which are used to focus light onto defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiency, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors, and secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source with optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be solved in the optical design. However, perfect uniformity of the spot is not required, owing to thresholds in the visual perception of the human eye. A mathematical description of the level of color uniformity with regard to visual perception is therefore needed.
This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided the research, Chapter 2 introduces the scientific basics of color uniformity in spotlights: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in evaluating color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires a different handling of the data, together with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function.
In Chapter 4, the merit function Usl for estimating the perceived color uniformity of spotlights is derived. It is based on the results of two sets of human-factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first experiment yielded a perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic and weighted functions of the spatial color distribution, leading to the Usl function. The second experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. Chapter 5 presents the validation of the Usl function concerning its application range and conditions, as well as its limitations and restrictions; measured and simulated data of several optical systems are compared, and fields of application, validations and restrictions of the function are discussed. Chapter 6 presents spotlight system design and its optimization. An evaluation analyzes reflector-based and TIR-lens systems; the simulated optical systems are compared in color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. No single system performs best in all categories, and excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of this thesis and gives an outlook on further research topics.
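Two of the Chapter 3 descriptors, the maximum color difference and the average color deviation, are simple enough to sketch directly. The code below is an illustrative simplification, not the thesis's Usl function: it uses the basic Euclidean ΔE*ab in CIELAB over a handful of hypothetical measured (L*, a*, b*) samples, with none of the luminance weighting or contrast-sensitivity processing the thesis applies.

```python
import math

def delta_e_ab(c1, c2):
    """Euclidean CIELAB color difference ΔE*ab between (L*, a*, b*) triples."""
    return math.dist(c1, c2)

def uniformity_metrics(samples):
    """Two basic spot descriptors: maximum pairwise ΔE*ab over the area,
    and the mean ΔE*ab deviation from the area's average color."""
    n = len(samples)
    mean = tuple(sum(c[i] for c in samples) / n for i in range(3))
    max_diff = max(delta_e_ab(a, b) for a in samples for b in samples)
    avg_dev = sum(delta_e_ab(c, mean) for c in samples) / n
    return max_diff, avg_dev

# Hypothetical (L*, a*, b*) samples across a spot cross-section.
spot = [(70.0, 0.5, 1.0), (70.0, 0.7, 1.2), (69.5, 0.4, 0.8), (70.5, 0.9, 1.5)]
max_diff, avg_dev = uniformity_metrics(spot)
```

In the thesis these raw descriptors are only the starting point; they are weighted, filtered and calibrated against the human-factor experiments before entering the Usl merit function.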