920 results for Estimation Of Distribution Algorithms
Abstract:
ODP Site 1089 is optimally located to monitor the occurrence of maxima in Agulhas heat and salt spillage from the Indian to the Atlantic Ocean. Radiolarian-based paleotemperature transfer functions allowed us to reconstruct the climatic history of the last 450 kyr at this location. A warm sea surface temperature anomaly during Marine Isotope Stage (MIS) 10 was recognized and traced to other oceanic records along the surface branch of the global thermohaline circulation (THC) system; it is particularly marked at locations with strong interaction between oceanic and atmospheric overturning cells and fronts. This anomaly is absent from the Vostok ice core deuterium record and from oceanic records of the Antarctic Zone. However, it is present in the deuterium excess record from the Vostok ice core, which is interpreted as reflecting the temperature at the moisture source for the snow precipitated at Vostok Station. As atmospheric models predict a subtropical Indian Ocean source for this moisture, this provides the necessary teleconnection between East Antarctica and ODP Site 1089, since the subtropical Indian Ocean is also the source area of the Agulhas Current, the main climate agent at our study location. The presence of the MIS 10 anomaly in the delta13C foraminiferal records from the same core supports its connection to oceanic mechanisms, linking stronger Agulhas spillover to increased productivity in the study area. By analogy with modern oceanographic observations, we suggest this is a consequence of a shallow nutricline, induced by eddy mixing and baroclinic tide generation, which are in turn connected to the flow geometry, and intensity, of the Agulhas Current as it flows past the Agulhas Bank. We interpret the intensified inflow of the Agulhas Current into the South Atlantic as a response to the switch between lower- and higher-amplitude insolation forcing in the Agulhas Current source area.
This would result in higher SSTs in the Cape Basin during glacial MIS 10, due to the release into the South Atlantic of heat previously accumulated in the subtropical and equatorial Indian and Pacific Oceans. If our explanation of the MIS 10 anomaly in terms of an insolation variability switch is correct, we might expect a future Agulhas SST anomaly event to further delay the onset of the next glacial age. In fact, the insolation forcing conditions of the Holocene (the current interglacial) are very similar to those of MIS 11 (the interglacial preceding MIS 10), as both periods are characterized by low insolation variability in the Agulhas Current source area. Natural climatic variability will thus force the Earth system in the same direction as the anthropogenic global warming trend, leading to even warmer than expected global temperatures in the near future.
Abstract:
Ulrich and Vorberg (2009) presented a method that fits distinct functions for each order of presentation of standard and test stimuli in a two-alternative forced-choice (2AFC) discrimination task, which removes the contaminating influence of order effects from estimates of the difference limen. The two functions are fitted simultaneously under the constraint that their average evaluates to 0.5 when test and standard have the same magnitude, which was regarded as a general property of 2AFC tasks. This constraint implies that physical identity produces indistinguishability, which is valid when test and standard are identical except for magnitude along the dimension of comparison. However, indistinguishability does not occur at physical identity when test and standard differ on dimensions other than that along which they are compared (e.g., vertical and horizontal lines of the same length are not perceived to have the same length). In these cases, the method of Ulrich and Vorberg cannot be used. We propose a generalization of their method for use in such cases and illustrate it with data from a 2AFC experiment involving length discrimination of horizontal and vertical lines. The resultant data could be fitted with our generalization but not with the method of Ulrich and Vorberg. Further extensions of this method are discussed.
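As a concrete illustration of the constraint being generalized, the sketch below (Python, with an assumed logistic form and illustrative parameter values) ties the location of the second order-conditional psychometric function to the first so that their average at the standard equals an arbitrary value c, rather than the fixed 0.5 of the original method:

```python
import math

def logistic(x, a, b):
    """Logistic psychometric function with location a and spread b."""
    return 1.0 / (1.0 + math.exp(-(x - a) / b))

def constrained_pair(x, s, a1, b, c=0.5):
    """Evaluate two order-conditional functions F1, F2 at x, with F2's
    location a2 chosen so that (F1(s) + F2(s)) / 2 == c at the standard s.
    c = 0.5 recovers the original constraint; c != 0.5 is the generalization."""
    p1_s = logistic(s, a1, b)
    p2_s = 2.0 * c - p1_s                        # required value of F2 at s
    a2 = s - b * math.log(p2_s / (1.0 - p2_s))   # invert the logistic for a2
    return logistic(x, a1, b), logistic(x, a2, b)
```

With c left free, the fit can accommodate stimuli that differ on dimensions other than the one compared, where indistinguishability no longer occurs at physical identity.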
Abstract:
Bayesian adaptive methods have been extensively used in psychophysics to estimate the point at which performance on a task attains arbitrary percentage levels, although the statistical properties of these estimators have never been assessed. We used simulation techniques to determine the small-sample properties of Bayesian estimators of arbitrary performance points, specifically addressing the issues of bias and precision as a function of the target percentage level. The study covered three major types of psychophysical task (yes-no detection, 2AFC discrimination, and 2AFC detection) and explored the entire range of target performance levels allowed for by each task. Other factors included in the study were the form and parameters of the actual psychometric function Psi, the form and parameters of the model function M assumed in the Bayesian method, and the location of Psi within the parameter space. Our results indicate that Bayesian adaptive methods yield unbiased estimators of any arbitrary point on Psi only when M = Psi; otherwise they yield a bias whose magnitude can be considerable as the target level moves away from the midpoint of the range of Psi. The standard error of the estimator also increases as the target level approaches extreme values, whether or not M = Psi. Contrary to widespread belief, neither the performance level at which bias is null nor that at which standard error is minimal can be predicted by the sweat factor. A closed-form expression nevertheless gives a reasonable fit to data describing the dependence of standard error on number of trials and target level, which allows determination of the number of trials that must be administered to obtain estimates with prescribed precision.
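A minimal sketch of the kind of Bayesian adaptive procedure described, for the idealized unbiased case M = Psi (yes-no task, logistic psychometric function with known slope, grid posterior over the threshold); all parameter values are illustrative, not taken from the study:

```python
import math
import random

def logistic(x, alpha, beta=1.0):
    return 1.0 / (1.0 + math.exp(-(x - alpha) / beta))

def posterior_mean(grid, log_post):
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]
    return sum(a * wi for a, wi in zip(grid, w)) / sum(w)

def estimate_target_point(true_alpha=1.0, target=0.75, n_trials=300, seed=0):
    """Adaptively estimate the stimulus level yielding `target` proportion
    'yes', assuming the model function M equals the true function Psi."""
    rng = random.Random(seed)
    grid = [i / 50.0 for i in range(-250, 251)]   # candidate thresholds in [-5, 5]
    log_post = [0.0] * len(grid)                  # flat prior
    offset = math.log(target / (1.0 - target))    # logit of the target level
    for _ in range(n_trials):
        x = posterior_mean(grid, log_post) + offset   # adaptive placement
        r = rng.random() < logistic(x, true_alpha)    # simulated response
        for i, a in enumerate(grid):                  # Bayes update
            p = logistic(x, a)
            log_post[i] += math.log(p if r else 1.0 - p)
    return posterior_mean(grid, log_post) + offset    # estimated target point
```

With M = Psi as here, the estimate converges on the true target point; the bias the abstract describes appears when the assumed slope or form of M is wrong.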
Abstract:
This paper presents a novel method for determining the temperature of a radiating body. The experimental method requires only very common instrumentation. It is based on the measurement of the stationary temperature of an object placed at different distances from the body and on the application of the energy balance equation in a stationary state. The method allows one to obtain the temperature of an inaccessible radiating body when radiation measurements are not available. The method has been applied to the determination of the filament temperature of incandescent lamps of different powers.
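The inversion can be illustrated with a deliberately idealized geometry that the abstract does not specify (spherical blackbody source of radius R, small blackbody sensor, distance d much larger than R), for which the stationary energy balance gives T_s^4 = T_env^4 + (R/(2d))^2 T_src^4; the sketch below fits that relation to stationary temperatures measured at several distances:

```python
def steady_temp(d, T_src, R=2e-4, T_env=293.0):
    """Idealized steady-state sensor temperature (K) at distance d (m) from a
    spherical blackbody source: T_s^4 = T_env^4 + (R/(2d))^2 * T_src^4."""
    return (T_env**4 + (R / (2.0 * d))**2 * T_src**4) ** 0.25

def infer_source_temp(ds, Ts, R=2e-4, T_env=293.0):
    """Least-squares slope of (T_s^4 - T_env^4) against (R/(2d))^2 is T_src^4,
    so no radiation measurement is needed, only thermometry at known distances."""
    xs = [(R / (2.0 * d))**2 for d in ds]
    ys = [t**4 - T_env**4 for t in Ts]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return slope ** 0.25
```

The geometric factor would differ for a real filament and enclosure, but the principle is the same: only steady temperatures and the energy balance enter the estimate.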
Abstract:
Open Access funded by the Medical Research Council. Acknowledgment: The work reported here was funded by a grant from the Medical Research Council, UK (grant number MR/J013838/1).
Abstract:
Acknowledgments The authors wish to thank the crews, fishermen, and scientists who conducted the various surveys from which data were obtained, and Mark Belchier and Simeon Hill for their contributions. This work was supported by the Government of South Georgia and the South Sandwich Islands. Additional logistical support was provided by the South Atlantic Environmental Research Institute, with thanks to Paul Brickle. Thanks to Stephen Smith of Fisheries and Oceans Canada (DFO) for help in constructing bootstrap confidence limits. Paul Fernandes receives funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland), and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. We also wish to thank two anonymous referees for their helpful suggestions on earlier versions of this manuscript.
Abstract:
Prospective estimation of patient CT organ dose prior to examination can help technologists adjust CT scan settings to reduce the radiation dose to the patient while maintaining a given image quality. One possible way to achieve this is to match the patient precisely to digital models. In previous work, patient matching was performed manually by matching the trunk height, defined as the distance from the top of the clavicle to the bottom of the pelvis. However, this matching method is time consuming and impractical for scout images that do not include the entire trunk. The purpose of this work was to develop an automatic patient matching strategy and verify its accuracy.
Abstract:
This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.
The first essay, based on a data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions, and that these service quality beliefs have a significant impact on future purchasing decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
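The spillover mechanism can be sketched with a toy two-level conjugate normal model (not the essay's actual specification; all variances and values below are invented): an experience on one route updates a shared company-level quality component, which in turn shifts quality beliefs about routes the customer has never used:

```python
def spillover_update(mu_c, var_c, var_r, obs, obs_var):
    """One observed experience on route A updates the belief about the shared
    company-level quality component c, where route quality = c + route effect.
    Prior: c ~ N(mu_c, var_c); route effect ~ N(0, var_r); obs noise var obs_var.
    Returns the posterior (mean, variance) of c, which feeds beliefs about
    every other route served by the same company."""
    w = var_c / (var_c + var_r + obs_var)   # shrinkage weight on the surprise
    mu_c_new = mu_c + w * (obs - mu_c)
    var_c_new = (1.0 - w) * var_c           # belief uncertainty shrinks too
    return mu_c_new, var_c_new
```

A better-than-expected shipment on route A therefore raises the belief mean for route B as well, by an amount governed by how much of the quality variation is shared across routes.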
The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of un-packaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste and thus improving social welfare.
Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.
Abstract:
A recently developed technique for determining past sea surface temperatures (SST), based on an analysis of the unsaturation ratio of long-chain C37 methyl alkenones produced by Prymnesiophyceae phytoplankton (U37K'), has been applied to an upper Quaternary sediment core from the equatorial Atlantic. U37K' temperature estimates were compared to those obtained from delta18O of the planktonic foraminifer Globigerinoides sacculifer and from planktonic foraminiferal assemblages for the last glacial cycle. The alkenone method showed a 1.8°C cooling at the last glacial maximum, about one third of the decrease shown by the isotopic method (6.3°C) and about one half of the foraminiferal modern analogue estimate for the warm season (3.8°C). Warm-season foraminiferal assemblage estimates based on transfer functions are out of phase with the other estimates, showing a 1.4°C drop at the last glacial maximum with an additional 0.9°C drop in the deglaciation. Increased alkenone abundances, total organic carbon percentages, and foraminiferal accumulation rates in the last glaciation indicate an increase in productivity of as much as 4 times over the present day. These changes are thought to be due to increased upwelling caused by enhanced winds during the glaciation. If the U37K' estimates are correct, as much as 50-70% (up to 4.5°C) of the estimated delta18O and modern analogue temperature changes in the last glaciation may have been due to changes in thermocline depth, whereas transfer functions seem more strongly influenced by seasonality changes. This indicates these estimates may be influenced as strongly by other factors as they are by SST, which in the equatorial Atlantic was only slightly reduced in the last glaciation.
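For orientation, one widely used global core-top calibration (Müller et al., 1998; not necessarily the one used in this study) relates the alkenone index linearly to SST. The index values below are hypothetical, chosen only to reproduce a 1.8 °C glacial cooling of the size reported:

```python
def uk37_to_sst(uk37, slope=0.033, intercept=0.044):
    """Invert the linear core-top calibration U37K' = slope * SST + intercept
    (Mueller et al., 1998 global calibration; SST in deg C)."""
    return (uk37 - intercept) / slope

modern  = uk37_to_sst(0.95)   # hypothetical modern core-top index value
glacial = uk37_to_sst(0.89)   # hypothetical last glacial maximum value
cooling = modern - glacial    # a 0.06 drop in the index maps to ~1.8 deg C
```

The small slope is why modest analytical uncertainty in U37K' translates into sizeable temperature uncertainty, one reason the alkenone and foraminiferal estimates can disagree.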
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, so physically distinct flat surfaces are represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment from sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to creating accurate stereonets safely, rapidly, and inexpensively compared to established methods.
The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
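The LiDAR-compass idea can be sketched as a circular-mean computation. Assuming a rectilinear world in which axes repeat every 90 degrees (a simplification of the thesis's axis parameterization), heading is recovered by averaging map-minus-measurement axis differences at four times the angle, which handles the 90-degree ambiguity; the measurement values are made up:

```python
import math

def heading_from_axes(measured_deg, map_deg=0.0):
    """Estimate vehicle heading by comparing surface axes measured in the
    vehicle frame (defined modulo 90 deg) with a reference axis map value.
    Circular mean with period 90: average unit vectors at 4x the angle,
    then divide the mean angle by 4."""
    s = sum(math.sin(math.radians(4.0 * (map_deg - a))) for a in measured_deg)
    c = sum(math.cos(math.radians(4.0 * (map_deg - a))) for a in measured_deg)
    return (math.degrees(math.atan2(s, c)) / 4.0) % 90.0
```

Because every wall segment votes for the same axis, the heading estimate does not drift with distance travelled, which is the property the thesis exploits for long-range localization.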
Abstract:
A modified UNIFAC–VISCO group contribution method was developed for the correlation and prediction of the viscosity of ionic liquids as a function of temperature at 0.1 MPa. In this original approach, cations and anions were regarded as distinct molecular groups. The significance of this approach comes from the ability to calculate the viscosity of mixtures of ionic liquids as well as of pure ionic liquids. Binary interaction parameters for selected cations and anions were determined by fitting experimental viscosity data available in the literature for selected ionic liquids. The temperature dependence of the viscosity contributions of the cations and anions was fitted to a Vogel–Fulcher–Tammann (VFT) form. The binary interaction parameters and VFT-type fitting parameters were then used to determine the viscosity of pure ionic liquids and of mixtures with different combinations of cations and anions, to validate the prediction method. In this work, the viscosity data of pure ionic liquids and of binary mixtures of ionic liquids are successfully calculated from 293.15 K to 363.15 K at 0.1 MPa. All calculated viscosity data showed excellent agreement with experimental data, with a relative absolute average deviation lower than 1.7%.
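The VFT temperature dependence mentioned above has a standard closed form. The mixing rule below is only a simple log-linear baseline, not the actual UNIFAC–VISCO group-interaction scheme, and all parameter values are invented for illustration:

```python
import math

def vft_viscosity(T, eta0, B, T0):
    """Vogel-Fulcher-Tammann form: eta(T) = eta0 * exp(B / (T - T0)),
    with T in K and T > T0; units of eta follow eta0."""
    return eta0 * math.exp(B / (T - T0))

def mixture_viscosity(T, params, x):
    """Illustrative ideal (mole-fraction log-linear) mixing baseline; the
    UNIFAC-VISCO method adds cation/anion group-interaction corrections
    on top of a baseline like this."""
    return math.exp(sum(xi * math.log(vft_viscosity(T, *p))
                        for p, xi in zip(params, x)))
```

The strong non-Arrhenius curvature of ionic-liquid viscosity is captured by the divergence temperature T0, which is why a VFT form rather than a simple Arrhenius law is fitted per ion.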
Abstract:
There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can neither consider several performance measures jointly nor compare multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that can account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of cases studied in such comparisons is usually small. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
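The Bayesian side of the proposal can be sketched with the conjugate pair named in the abstract: win counts over (data set, measure) cases get a multinomial likelihood with a symmetric Dirichlet prior, and posterior draws give the probability that each algorithm is the best overall. This is only a generic multinomial-Dirichlet sketch, not the paper's full procedure, and the counts are made up:

```python
import random

def dirichlet_sample(alphas, rng):
    """Sample from Dirichlet(alphas) via normalized Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def prob_best(counts, prior=1.0, n_draws=5000, seed=0):
    """Posterior P(algorithm i has the highest win probability), where
    counts[i] is how many comparison cases algorithm i won jointly across
    the considered performance measures."""
    rng = random.Random(seed)
    alphas = [c + prior for c in counts]   # conjugate Dirichlet update
    wins = [0] * len(counts)
    for _ in range(n_draws):
        theta = dirichlet_sample(alphas, rng)
        wins[theta.index(max(theta))] += 1
    return [w / n_draws for w in wins]
```

Unlike pairwise frequentist tests, a single posterior like this ranks all competitors at once, which is the point the paper makes against the standard test batteries.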
Abstract:
As one of the most successfully commercialized distributed energy resources, microturbines (MTs) have long-term effects on the distribution network that have not been fully investigated, owing to their complex thermo-fluid-mechanical energy conversion processes. This is further complicated by the fact that the parameters and internal data of MTs are not always available to the electric utility, due to differing ownership and confidentiality concerns. To address this issue, a general modeling approach for MTs is proposed in this paper, which allows for long-term simulation of a distribution network with multiple MTs. First, the feasibility of deriving a simplified MT model for long-term dynamic analysis of the distribution network is discussed, based on a physical understanding of the dynamic processes occurring within MTs. Then a three-stage identification method is developed to obtain a piecewise MT model and predict electro-mechanical system behaviors with saturation. Next, assisted by an electric power flow calculation tool, a fast simulation methodology is proposed to evaluate the long-term impact of multiple MTs on the distribution network. Finally, the model is verified using Capstone C30 microturbine experiments, and further applied to the dynamic simulation of a modified IEEE 37-node test feeder with promising results.
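The identification idea (recover a saturation limit and a dominant time constant from measured responses, without access to the MT's internal data) can be sketched on a toy first-order model with output saturation; this is not the paper's actual three-stage piecewise procedure, and all values are illustrative:

```python
def simulate_mt(u, tau=5.0, y_max=30.0, dt=1.0, y0=0.0):
    """Toy first-order MT power response with output saturation:
    y[k+1] = y[k] + dt/tau * (min(u[k], y_max) - y[k])."""
    y = [y0]
    for uk in u:
        y.append(y[-1] + dt / tau * (min(uk, y_max) - y[-1]))
    return y

def identify(u, y, dt=1.0):
    """Two illustrative identification stages: the settled plateau gives the
    saturation limit, then least squares on the unsaturated first-order
    dynamics gives the time constant tau = sum(e^2) / sum(e * dy/dt)."""
    y_max = max(y)                      # stage 1: saturation level
    num = den = 0.0
    for k in range(len(u)):
        e = min(u[k], y_max) - y[k]     # effective tracking error
        num += e * e
        den += e * (y[k + 1] - y[k]) / dt
    return y_max, num / den             # stage 2: time constant estimate
```

A model this simple, fitted only from terminal measurements, is the kind of black-box surrogate that lets a utility simulate many MTs over long horizons without their proprietary internal parameters.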