933 results for Optimal fusion performance
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
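The abstract does not reproduce the estimators themselves, but the minimum-variance hedge ratio underlying all of the compared models is Cov(Δs, Δf)/Var(Δf), where Δs and Δf are spot and futures returns. A minimal Python sketch of the exponentially weighted moving-average version, with an illustrative function name and the common 0.94 decay factor assumed:

    def ewma_hedge_ratio(spot_returns, futures_returns, lam=0.94):
        # Minimum-variance hedge ratio h* = Cov(spot, futures) / Var(futures),
        # with both moments estimated by exponentially weighted moving averages.
        cov, var = 0.0, 0.0
        for s, f in zip(spot_returns, futures_returns):
            cov = lam * cov + (1 - lam) * s * f
            var = lam * var + (1 - lam) * f * f
        return cov / var

The variance of the hedged portfolio is then Var(Δs - h*Δf), which is the quantity the article compares across models.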
Abstract:
Although it is well known that water is essential for human homeostasis and survival, only recently have we begun to understand its role in the maintenance of brain function. Herein, we integrate emerging evidence regarding the effects of both dehydration and additional acute water consumption on cognition and mood. Current findings in the field suggest that particular cognitive abilities and mood states are positively influenced by water consumption. The impact of dehydration on cognition and mood is particularly relevant for those with poor fluid regulation, such as the elderly and children. We critically review the most recent advances in both behavioural and neuroimaging studies of dehydration and link the findings to the known effects of water on hormonal, neurochemical and vascular functions in an attempt to suggest plausible mechanisms of action. We identify some methodological weaknesses, including inconsistent measurements in cognitive assessment and the lack of objective hydration state measurements as well as gaps in knowledge concerning mediating factors that may influence water intervention effects. Finally, we discuss how future research can best elucidate the role of water in the optimal maintenance of brain health and function.
Abstract:
In recent years, ZigBee has proven to be an excellent solution for creating scalable and flexible home automation networks. In a home automation network, consumer devices typically collect data from a home monitoring environment and then transmit the data to an end user through multi-hop communication without the need for any human intervention. However, due to the presence of typical obstacles in a home environment, error-free reception may not be possible, particularly for power-constrained devices. A mobile sink based data transmission scheme can be one solution, but obstacles create significant complexities for determining the sink's movement path. Therefore, an obstacle avoidance data routing scheme is of vital importance to the design of an efficient home automation system. This paper presents a mobile sink based obstacle avoidance routing scheme for a home monitoring system. The mobile sink collects data by traversing the obstacle avoidance path. Through ZigBee based hardware implementation and verification, the proposed scheme successfully transmits data through the obstacle avoidance path to improve network performance in terms of life span, energy consumption and reliability. This work can be applied to a wide range of intelligent pervasive consumer products and services, including robotic vacuum cleaners and personal security robots.
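The abstract does not detail the path-determination algorithm, so the following is only a generic sketch of obstacle-avoidance path planning for a mobile sink on a grid map of the home (1 = obstacle, 0 = free cell); all names are illustrative, and a breadth-first search stands in for whatever scheme the paper actually uses:

    from collections import deque

    def obstacle_avoidance_path(grid, start, goal):
        # Shortest obstacle-free path from start to goal on a 4-connected grid.
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:   # walk back through predecessors
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and nxt not in prev:
                    prev[nxt] = cell
                    queue.append(nxt)
        return None  # goal unreachable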
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
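The LOOMSE itself is not spelled out in the abstract, but for any regularized linear-in-parameters model (which an RBF network is once its kernels are fixed) the leave-one-out residuals have the standard closed form e_i / (1 - h_ii), with h_ii the diagonal of the hat matrix, so no explicit refitting loop is needed. A minimal sketch with illustrative names:

    import numpy as np

    def loo_mse(Phi, y, reg):
        # LOO mean square error of a regularized least-squares model with
        # design matrix Phi: e_loo_i = e_i / (1 - h_ii), where
        # H = Phi (Phi' Phi + reg I)^{-1} Phi' is the hat matrix.
        m = Phi.shape[1]
        H = Phi @ np.linalg.solve(Phi.T @ Phi + reg * np.eye(m), Phi.T)
        residuals = y - H @ y
        loo_residuals = residuals / (1.0 - np.diag(H))
        return float(np.mean(loo_residuals ** 2))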
Abstract:
Multispectral iris recognition uses information from multiple bands of the electromagnetic spectrum to better represent certain physiological characteristics of the iris texture and to enhance recognition accuracy. This paper addresses the question of single versus cross-spectral performance and compares score-level fusion accuracy for different feature types, combining different wavelengths to overcome limitations in less constrained recording environments. Further, it is investigated whether Doddington's "goats" (users who are particularly difficult to recognize) in one spectrum also extend to other spectra. Focusing on the question of feature stability at different wavelengths, this work uses manual ground-truth segmentation, avoiding bias from segmentation impact. Experiments on the public UTIRIS multispectral iris dataset using 4 feature extraction techniques reveal a significant enhancement when combining NIR + Red for 2-channel and NIR + Red + Blue for 3-channel fusion, across different feature types. Selective feature-level fusion is investigated and shown to improve overall and especially cross-spectral performance without increasing the overall length of the iris code.
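As a rough illustration of the score-level fusion compared in the paper (the actual normalization and weighting are not given in the abstract, so min-max normalization and equal weights are assumed here):

    import numpy as np

    def fuse_scores(score_lists, weights=None):
        # Min-max normalize each spectral channel's comparison scores,
        # then combine them by a weighted sum (equal weights by default).
        normalized = []
        for scores in score_lists:
            s = np.asarray(scores, dtype=float)
            normalized.append((s - s.min()) / (s.max() - s.min()))
        if weights is None:
            weights = [1.0 / len(normalized)] * len(normalized)
        return sum(w * s for w, s in zip(weights, normalized))

    # e.g. 2-channel fusion: fused = fuse_scores([nir_scores, red_scores])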
Abstract:
Thermochromic windows are able to modulate their transmittance in both the visible and the near-infrared range as a function of their temperature. As a consequence, they make it possible to control solar gains in summer, thus reducing the energy needs for space cooling. However, they may also reduce daylight availability, which increases the energy consumption for indoor artificial lighting. This paper investigates, by means of dynamic simulations, the application of thermochromic windows to an existing office building in terms of energy savings on an annual basis, while also focusing on the effects in terms of daylighting and thermal comfort. In particular, due attention is paid to daylight availability, described through illuminance maps and by the calculation of the daylight factor, which in several countries is subject to prescribed thresholds. The study considers both a commercially available thermochromic pane and a series of theoretical thermochromic glazings. The expected performance is compared to static clear and reflective insulating glass units. The simulations are repeated in different climatic conditions, showing that the overall energy savings compared to clear glazing can range from around 5% for cold climates to around 20% in warm climates, while not compromising daylight availability. Moreover, the role played by the transition temperature of the pane is examined, pointing out an optimal transition temperature that is irrespective of the climatic conditions.
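The daylight factor mentioned above has a standard definition: the indoor horizontal illuminance as a percentage of the simultaneous unobstructed outdoor illuminance under an overcast sky. A one-line sketch for reference:

    def daylight_factor(indoor_lux, outdoor_lux):
        # DF (%) = 100 * E_indoor / E_outdoor, both under a CIE overcast sky.
        return 100.0 * indoor_lux / outdoor_lux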
Abstract:
In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often unfeasible in many real image processing applications. Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
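One of the sub-optimal combinatorial algorithms typically combined in this setting is Iterated Conditional Modes (ICM). The sketch below uses a plain Gaussian likelihood instead of the paper's full GMRF model, so it is only an illustration of the MAP-with-Potts-prior idea:

    import numpy as np

    def icm_sweep(labels, image, means, var, beta):
        # One ICM sweep: at each pixel pick the class minimizing the
        # Gaussian data cost plus a Potts penalty beta for each
        # 4-neighbour carrying a different label.
        rows, cols = labels.shape
        for r in range(rows):
            for c in range(cols):
                best_k, best_cost = labels[r, c], np.inf
                for k in range(len(means)):
                    cost = (image[r, c] - means[k]) ** 2 / (2.0 * var)
                    for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if 0 <= nr < rows and 0 <= nc < cols:
                            cost += beta * (labels[nr, nc] != k)
                    if cost < best_cost:
                        best_k, best_cost = k, cost
                labels[r, c] = best_k
        return labels

Running several such sweeps from multiple random initializations and keeping the lowest-energy result mirrors the multiple-initialization strategy described above.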
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model, developed under the assumption that the sample is obtained from a conceptually infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar.
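The abstract does not give the moment estimators; for a balanced one-way layout the classical ANOVA estimators of the variance components would look as follows (a sketch under that balanced-design assumption, with illustrative names):

    import numpy as np

    def anova_variance_components(clusters):
        # Method-of-moments estimators for balanced clustered data:
        # clusters is a list of k equal-sized arrays of responses.
        k = len(clusters)                      # number of clusters
        m = len(clusters[0])                   # units per cluster
        cluster_means = np.array([np.mean(c) for c in clusters])
        grand_mean = cluster_means.mean()
        # within-cluster mean square: E[MSW] = sigma2_within
        msw = sum(np.sum((np.asarray(c) - cm) ** 2)
                  for c, cm in zip(clusters, cluster_means)) / (k * (m - 1))
        # between-cluster mean square: E[MSB] = sigma2_within + m * sigma2_between
        msb = m * np.sum((cluster_means - grand_mean) ** 2) / (k - 1)
        return msw, max((msb - msw) / m, 0.0)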
Abstract:
This thesis aims to elaborate on the optimal trigger speed for Vehicle Activated Signs (VAS) and to study the effectiveness of the VAS trigger speed on driver behaviour. Vehicle activated signs (VAS) are speed warning signs that are activated by an individual vehicle when the driver exceeds a speed threshold. The threshold that triggers the VAS is commonly based on driver speed and is accordingly called a trigger speed. At present, the trigger speed activating the VAS is usually set to a constant value and does not consider the fact that an optimal trigger speed might exist, even though the trigger speed significantly impacts driver behaviour. To fulfil the aims of this thesis, systematic vehicle speed data were collected from field experiments that utilized Doppler radar. Furthermore, calibration methods for the radar used in the experiment were developed and evaluated to provide accurate data. The calibration method was twofold, consisting of data cleaning and data reconstruction; the calibration based on data cleaning performed better than the calibration based on the reconstructed data. To study the effectiveness of the trigger speed on driver behaviour, the collected data were analysed by both descriptive and inferential statistics. Both showed that a change in trigger speed had an effect on the mean vehicle speed and on the standard deviation of vehicle speed. When the trigger speed was set near the speed limit, the standard deviation was high. Therefore, the choice of trigger speed cannot be based solely on the speed limit at the proposed VAS location. Optimal trigger speeds for VAS were not considered in previous studies, nor was the relationship between the trigger value and its consequences under different conditions clearly stated. The finding of this thesis is that the optimal trigger speed should primarily be based on lowering the standard deviation rather than lowering the mean speed of vehicles. Furthermore, the optimal trigger speed should be set near the 85th percentile speed, with the goal of lowering the standard deviation.
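Since the recommendation ties the trigger speed to the 85th percentile and to the speed standard deviation, the site statistics involved are straightforward to compute; a small sketch with illustrative names:

    import numpy as np

    def trigger_speed_stats(speeds):
        # Site statistics relevant to choosing a VAS trigger speed.
        speeds = np.asarray(speeds, dtype=float)
        return {
            "mean": speeds.mean(),
            "std": speeds.std(ddof=1),          # sample standard deviation
            "p85": np.percentile(speeds, 85),   # candidate trigger speed
        }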
Abstract:
The p-median problem is often used to locate P service facilities in a geographically distributed population. Important for the performance of such a model is the distance measure, which can vary with the accuracy of the road network. The first aim of this study is to analyze how the optimal location solutions vary, using the p-median model, when the road network is altered. It is hard to find an exact optimal solution for p-median problems; therefore, two heuristic solutions are applied in this study, simulated annealing and a classic heuristic. The secondary aim is to compare the optimal location solutions obtained using different algorithms for large p-median problems. The investigation is conducted by means of a case study in a rural region with an asymmetrically distributed population, Dalecarlia. The study shows that the use of more accurate road networks gives better solutions for optimal location, regardless of which algorithm is used and regardless of how many service facilities are optimized for. It is also shown that the simulated annealing algorithm is not only much faster than the classic heuristic used here, but also in most cases gives better location solutions.
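The abstract does not describe the annealing scheme; a minimal sketch of simulated annealing for the p-median problem, using a standard facility-swap move (all parameter values are illustrative):

    import math
    import random

    def p_median_sa(dist, p, n_iter=20000, t0=1.0, cooling=0.9995):
        # dist[i][j]: road-network distance from demand point i to site j.
        # A solution is a list of p open sites; its cost is the total
        # distance from every demand point to its nearest open site.
        n_sites = len(dist[0])
        cost = lambda sol: sum(min(row[j] for j in sol) for row in dist)
        current = random.sample(range(n_sites), p)
        cur_cost = cost(current)
        best, best_cost, t = list(current), cur_cost, t0
        for _ in range(n_iter):
            # swap move: close one open facility, open a closed one
            candidate = list(current)
            closed = [j for j in range(n_sites) if j not in current]
            candidate[random.randrange(p)] = random.choice(closed)
            c = cost(candidate)
            if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
                current, cur_cost = candidate, c
                if c < best_cost:
                    best, best_cost = list(candidate), c
            t *= cooling
        return best, best_cost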
Abstract:
Introduction: Performance in cross-country skiing is influenced by the skier's ability to continuously produce propelling forces and force magnitude in relation to the net external forces. A surrogate indicator of the "power supply" in cross-country skiing would be a physiological variable that reflects an important performance-related capability, whereas the body mass itself is an indicator of the "power demand" experienced by the skier. To adequately evaluate an elite skier's performance capability, it is essential to establish the optimal ratio between the physiological variable and body mass. The overall aim of this doctoral thesis was to investigate the importance of body-mass exponent optimization for the evaluation of performance capability in cross-country skiing.
Methods: In total, 83 elite cross-country skiers (56 men and 27 women) volunteered to participate in the four studies. The physiological variables of maximal oxygen uptake (V̇O2max) and oxygen uptake corresponding to a blood-lactate concentration of 4 mmol∙l-1 (V̇O2obla) were determined during treadmill roller skiing using the diagonal-stride technique; mean oxygen uptake (V̇O2dp) and upper-body power output (Ẇ) were determined during double-poling tests using a ski-ergometer. Competitive performance data for elite male skiers were collected from two 15-km classical-technique skiing competitions and a 1.25-km sprint prologue; additionally, a 2-km roller-skiing time trial using the double-poling technique was used as an indicator of upper-body performance capability among elite male and female junior skiers. Power-function modelling was used to explain the race and time-trial speeds based on the physiological variables and body mass.
Results: The optimal V̇O2max-to-mass ratios to explain 15-km race speed were V̇O2max divided by body mass raised to the 0.48 and 0.53 power, and these models explained 68% and 69% of the variance in mean skiing speed, respectively; moreover, the 95% confidence intervals (CI) for the body-mass exponents did not include either 0 or 1. For the modelling of race speed in the sprint prologue, body mass failed to contribute to the models based on V̇O2max, V̇O2obla, and V̇O2dp. The upper-body power output-to-body mass ratio that optimally explained time-trial speed was Ẇ ∙ m-0.57, and the model explained 63% of the variance in speed.
Conclusions: The results in this thesis suggest that V̇O2max divided by the square root of body mass should be used as an indicator of performance in 15-km classical-technique races among elite male skiers, rather than the absolute or simple ratio-standard scaled expression. To optimally explain an elite male skier's performance capability in sprint prologues, power-function models based on oxygen-uptake variables expressed absolutely are recommended. Moreover, to evaluate elite junior skiers' performance capabilities in 2-km double-poling roller-skiing time trials, it is recommended that Ẇ divided by the square root of body mass be used rather than the absolute or simple ratio-standard scaled expression of power output.
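The power-function models referred to above are typically fitted by ordinary least squares after a log transformation. A sketch under that assumption (names are illustrative; the thesis's exact specification may differ):

    import numpy as np

    def fit_power_function(speed, vo2max, body_mass):
        # Fit speed = a * VO2max^b1 * mass^b2 via
        # ln(speed) = ln(a) + b1*ln(VO2max) + b2*ln(mass).
        # If speed = a * (VO2max / mass^k)^b1, then k = -b2 / b1,
        # i.e. the body-mass exponent of the optimal ratio.
        X = np.column_stack([np.ones(len(speed)),
                             np.log(vo2max), np.log(body_mass)])
        coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
        ln_a, b1, b2 = coef
        return np.exp(ln_a), b1, b2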
Abstract:
Most water distribution systems (WDS) need rehabilitation due to aging infrastructure, which leads to decreasing capacity, increasing leakage and consequently low performance of the WDS. An appropriate strategy specifying the location and time of pipeline rehabilitation in a WDS with respect to a limited budget is the main challenge, and it has been addressed frequently by researchers and practitioners. On the other hand, the selection of appropriate rehabilitation techniques and material types is another main issue which has yet to be addressed properly. The latter can affect the environmental impacts of a rehabilitation strategy, meeting the challenges of global warming mitigation and consequent climate change. This paper presents a multi-objective optimization model for rehabilitation strategy in WDS addressing the abovementioned criteria, mainly focused on greenhouse gas (GHG) emissions, either directly from fossil fuel and electricity or indirectly from the embodied energy of materials. Thus, the objective functions are to minimise: (1) the total cost of rehabilitation, including capital and operational costs; (2) the leakage amount; (3) GHG emissions. The Pareto optimal front containing optimal solutions is determined using the Non-dominated Sorting Genetic Algorithm NSGA-II. Decision variables in this optimisation problem are classified into two groups: (1) the percentage proportion of each rehabilitation technique each year; (2) the material types of new pipeline for rehabilitation each year. The rehabilitation techniques used here include replacement, rehabilitation and lining, cleaning, and pipe duplication. The developed model is demonstrated through its application to the Mahalat WDS, located in the central part of Iran. The rehabilitation strategy is analysed for a 40-year planning horizon. A number of conventional techniques for selecting pipes for rehabilitation are analysed in this study. The results show that the optimal rehabilitation strategy considering GHG emissions is able to reduce total expenses and efficiently decrease leakage from the WDS whilst meeting environmental criteria.
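At the heart of NSGA-II is non-dominated sorting of the objective vectors; a minimal sketch of extracting the first (Pareto-optimal) front for the three objectives named above, with illustrative names:

    def pareto_front(solutions):
        # solutions: list of objective tuples (cost, leakage, GHG emissions),
        # all to be minimized. Returns the non-dominated subset.
        def dominates(a, b):
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]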
Abstract:
In this dissertation, a Monte Carlo experiment was carried out to reveal some characteristics of the finite-sample distributions of the Backfitting (B) and Marginal Integration (MI) estimators for a bivariate additive regression. We are particularly interested in providing some evidence of how different methods for selecting the bandwidth hn, such as plug-in methods, impact the small-sample properties of the estimators. We are also interested in providing evidence on the behaviour of different estimators of hn relative to the optimal sequence of hn that minimizes a chosen loss function. The impact of ignoring the dependence between the regressors when estimating the bandwidth is also investigated; this is a common practice and should affect the performance of the estimators. Moreover, there is currently no routine available in statistical/econometric packages for estimating additive regressions via the Backfitting and Marginal Integration methods, so one of our goals is to create Gauss routines for the practical implementation of these estimators. Finally, unlike current practice, in which the B and MI estimators are used in a completely ad hoc fashion, we aim to provide users with information that allows a more objective choice of which estimator to use when working with a finite sample.
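The dissertation implements these estimators in Gauss; purely as an illustration of the classical backfitting estimator it studies, here is a Python sketch using a Nadaraya-Watson smoother (the smoother choice, bandwidths and names are assumptions, not the dissertation's code):

    import numpy as np

    def kernel_smooth(x, y, h):
        # Nadaraya-Watson estimate of E[y | x] at the sample points,
        # with a Gaussian kernel and bandwidth h (the "hn" above).
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return (w @ y) / w.sum(axis=1)

    def backfit(y, x1, x2, h1, h2, n_sweeps=50):
        # Classical backfitting for y = alpha + f1(x1) + f2(x2) + error:
        # cycle through the components, smoothing the partial residuals
        # of each one against its own regressor.
        alpha = y.mean()
        f1 = np.zeros_like(y, dtype=float)
        f2 = np.zeros_like(y, dtype=float)
        for _ in range(n_sweeps):
            f1 = kernel_smooth(x1, y - alpha - f2, h1)
            f1 -= f1.mean()        # centre for identifiability
            f2 = kernel_smooth(x2, y - alpha - f1, h2)
            f2 -= f2.mean()
        return alpha, f1, f2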
Abstract:
This work shows that the optimal solution for an employee's compensation contract is not a fixed wage when his reservation utility is a function of a factor that can vary. The optimal compensation will include a bonus that is also a function of the same factor that modifies the reservation utility, even if this factor does not depend on the employee's effort and the agent is risk averse. This result contrasts with the classical theory, according to which risk should be allocated to the employee only when such a contract is necessary to provide incentives for greater effort by the agent. Another conclusion of this work is that there is a limit to the amount of risk the employee bears in the optimal contract: the value of the bonus is an increasing function of the difference between the reservation-utility values in the different possible scenarios only up to a certain point, and beyond a certain value of this difference the magnitude of the bonus remains stable.
Abstract:
This paper investigates the importance of the flow of funds as an implicit incentive provided by investors to portfolio managers in a two-period relationship. We show that the flow of funds is a powerful incentive in an asset management contract. We build a binomial moral hazard model to explain the main trade-offs in the relationship between flow, fees and performance. The main assumption is that effort depends on the combination of implicit and explicit incentives, while the probability distribution function of returns depends on effort. In the case of full commitment, the investor's relevant trade-off is to give up expected return in the second period in order to induce effort in the first period. The more concerned the investor is with today's payoff, the more willing he will be to give up expected return in the following periods. That is, in the second period, the investor penalizes observed low returns by withdrawing resources from non-performing portfolio managers. Besides, he pays a performance fee when the observed excess return is positive. When commitment is not a plausible hypothesis, we consider that the investor also learns some symmetric and imperfect information about the ability of the manager to generate positive excess return. In this case, observed returns reveal ability as well as the effort choices exerted by the portfolio manager. We show that implicit incentives can explain the flow-performance relationship and, conversely, that endogenous expected return determines incentive provision and defines its optimal levels. We provide a numerical solution in Matlab that characterizes these results.