969 results for attori, concorrenza, COOP, Akka, benchmark


Relevance:

10.00%

Publisher:

Abstract:

A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximate Pareto sets for the larger instances tested. It is shown that the fronts calculated by KES are superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast due to its low computational complexity.
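To make the bi-objective comparison concrete, the sketch below shows how candidate spanning trees can be scored on two edge-weight objectives and filtered to a non-dominated (Pareto) set. It is a generic illustration in Python, not the KES algorithm itself; the edge and weight representation is an assumption.

```python
import numpy as np

def tree_cost(tree_edges, weights):
    """Sum the two objective values over the edges of one spanning tree.

    tree_edges : iterable of (u, v) edge pairs
    weights    : dict mapping (u, v) -> (w1, w2)  (assumed representation)
    """
    total = np.zeros(2)
    for edge in tree_edges:
        total += weights[edge]
    return total

def dominates(a, b):
    """True if cost vector a Pareto-dominates b: no worse in both objectives, strictly better in one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(cost_vectors):
    """Return the non-dominated subset of a list of 2-D cost vectors."""
    return [c for i, c in enumerate(cost_vectors)
            if not any(dominates(o, c) for j, o in enumerate(cost_vectors) if j != i)]
```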

Relevance:

10.00%

Publisher:

Abstract:

Several pixel-based people counting methods have been developed over the years. Among these, the product of scale-weighted pixel sums and a linear correlation coefficient is a popular approach. However, most approaches pay little attention to resolving the true background and instead take all foreground pixels into account. With large crowds moving at varying speeds, and in the presence of other moving objects such as vehicles, this approach is prone to problems. In this paper we present a method which concentrates on determining the true foreground, i.e. human-image pixels only. To do this we have proposed, implemented and comparatively evaluated a human detection layer that makes people counting more robust in the presence of noise and in the absence of empty background sequences. We show the effect of combining human detection with a pixel-map based algorithm to (i) count only human-classified pixels and (ii) prevent foreground pixels belonging to humans from being absorbed into the background model. We evaluate the performance of this approach on the PETS 2009 dataset using various configurations of the proposed methods. Our evaluation demonstrates that the basic benchmark method we implemented can achieve an accuracy of up to 87% on sequence "S1.L1 13-57 View 001", while our proposed approach achieves up to 82% on sequence "S1.L3 14-33 View 001", where the crowd stops and the benchmark accuracy falls to 64%.
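A minimal sketch of the counting step described above, assuming a binary foreground mask from background subtraction, a binary human-classification mask, a perspective-compensating scale map and linear coefficients calibrated offline; all names are illustrative, not the paper's implementation.

```python
import numpy as np

def estimate_people_count(foreground, human_mask, scale_map, alpha, beta=0.0):
    """People-count estimate from a scale-weighted sum of human-classified foreground pixels.

    foreground : HxW binary array from background subtraction
    human_mask : HxW binary array from the human detection layer (1 = human pixel)
    scale_map  : HxW array of perspective weights (distant pixels weigh more)
    alpha,beta : coefficients of the linear pixel-sum-to-people model
    """
    # Keep only pixels that are both foreground and classified as human, so that
    # vehicles and other moving objects do not inflate the estimate.
    true_foreground = foreground.astype(bool) & human_mask.astype(bool)
    weighted_sum = float(np.sum(scale_map[true_foreground]))
    return alpha * weighted_sum + beta
```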

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we evaluate the Probabilistic Occupancy Map (POM) pedestrian detection algorithm on the PETS 2009 benchmark dataset. POM is a multi-camera generative detection method, which estimates ground plane occupancy from multiple background subtraction views. Occupancy probabilities are iteratively estimated by fitting a synthetic model of the background subtraction to the binary foreground motion. Furthermore, we test the integration of this algorithm into a larger framework designed for understanding human activities in real environments. We demonstrate accurate detection and localization on the PETS dataset, despite suboptimal calibration and foreground motion segmentation input.
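For intuition only, the sketch below scores ground-plane cells by the overlap between a synthetic person silhouette and the binary foreground in each view. This single-pass scoring is a drastic simplification and is not POM's iterative generative estimation; `project_silhouette` is a hypothetical helper that renders the silhouette a person standing in a given cell would produce in a given camera.

```python
import numpy as np

def occupancy_scores(cells, cameras, foreground_masks, project_silhouette):
    """Naive multi-view occupancy scoring (illustrative baseline, not POM).

    cells              : list of ground-plane cell identifiers
    cameras            : list of camera identifiers
    foreground_masks   : dict camera -> HxW binary foreground array
    project_silhouette : hypothetical helper (cell, camera) -> HxW binary silhouette
    """
    scores = {}
    for cell in cells:
        overlaps = []
        for cam in cameras:
            silhouette = project_silhouette(cell, cam).astype(bool)
            area = silhouette.sum()
            if area == 0:                       # cell not visible in this view
                continue
            fg = foreground_masks[cam].astype(bool)
            overlaps.append((silhouette & fg).sum() / area)
        scores[cell] = float(np.mean(overlaps)) if overlaps else 0.0
    return scores
```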

Relevance:

10.00%

Publisher:

Abstract:

In this paper we are mainly concerned with the development of efficient computer models capable of accurately predicting the propagation of low-to-middle frequency sound in the sea, in axially symmetric (2D) and in fully 3D environments. The major physical features of the problem, i.e. a variable bottom topography, elastic properties of the subbottom structure, volume attenuation and other range inhomogeneities, are efficiently treated. The computer models presented are based on normal mode solutions of the Helmholtz equation on the one hand, and on various types of numerical schemes for parabolic approximations of the Helmholtz equation on the other. A new coupled mode code is introduced to model sound propagation in range-dependent ocean environments with variable bottom topography, where the effects of an elastic bottom, volume attenuation, and surface and bottom roughness are taken into account. New computer models based on finite difference and finite element techniques for the numerical solution of parabolic approximations are also presented. They include efficient modelling of the bottom influence via impedance boundary conditions and cover wide-angle propagation, elastic bottom effects, variable bottom topography and reverberation effects. All the models are validated on several benchmark problems and against experimental data. The results thus obtained were compared with analogous results from standard codes in the literature.
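For reference, the standard narrow-angle parabolic approximation referred to above can be written for an outgoing field ψ(r, z) with reference wavenumber k0 and index of refraction n(r, z) (the generic textbook form, not the specific wide-angle schemes developed in the paper):

$$2 i k_0 \frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} + k_0^2\left(n^2(r,z) - 1\right)\psi = 0, \qquad p(r,z) \propto \frac{\psi(r,z)}{\sqrt{r}}\, e^{i k_0 r},$$

where p is the acoustic pressure and the equation is marched outward in range from an initial starting field.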

Relevance:

10.00%

Publisher:

Abstract:

The self-consistent field theory (SCFT) prediction for the compression force between two semi-dilute polymer brushes is compared to the benchmark experiments of Taunton et al. [Nature, 1988, 332, 712]. The comparison is done with previously established parameters and without any fitting parameters whatsoever. The SCFT provides a significant quantitative improvement over the classical strong-stretching theory (SST), yielding excellent quantitative agreement with experiment. Contrary to earlier suggestions, chain fluctuations cannot be ignored under normal experimental conditions. Although the analytical expressions of SST provide invaluable aids to understanding the qualitative behavior of polymeric brushes, the numerical SCFT is necessary in order to provide quantitatively accurate predictions.
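For context, a frequently quoted closed form for brush compression is the Alexander-de Gennes scaling estimate for the pressure between two brushes of unperturbed thickness L_0 and grafting spacing s, compressed to separation D < 2L_0 (quoted for orientation only; it is neither the SST nor the SCFT expression compared in the paper):

$$P(D) \;\approx\; \frac{k_B T}{s^{3}}\left[\left(\frac{2L_0}{D}\right)^{9/4} - \left(\frac{D}{2L_0}\right)^{3/4}\right], \qquad D < 2L_0 .$$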

Relevance:

10.00%

Publisher:

Abstract:

A new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
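The core of such a retrieval is Bayes' theorem applied to the surface rain rate R given the vector of observed brightness temperatures T_b (generic notation, not the paper's):

$$p(R \mid \mathbf{T}_b) \;=\; \frac{p(\mathbf{T}_b \mid R)\, p(R)}{\int p(\mathbf{T}_b \mid R')\, p(R')\, \mathrm{d}R'},$$

from which point estimates such as the posterior mean or posterior mode can be extracted alongside the full posterior distribution.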

Relevance:

10.00%

Publisher:

Abstract:

In this paper the stability of one-step-ahead predictive controllers based on non-linear models is established. It is shown that, under conditions which can be fulfilled by most industrial plants, the closed-loop system is robustly stable in the presence of plant uncertainties and input-output constraints. There is no requirement that the plant should be open-loop stable, and the analysis is valid for general forms of non-linear system representation, including the case when the problem is constraint-free. The effectiveness of controllers designed according to the algorithm analyzed in this paper is demonstrated on a recognized benchmark problem and on a simulation of a continuous stirred-tank reactor (CSTR). In both examples a radial basis function neural network is employed as the non-linear system model.
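A minimal sketch of one-step-ahead predictive control with an RBF model, assuming Gaussian basis functions, a quadratic one-step cost and simple input bounds; the regressor layout, cost weight and bounds are illustrative assumptions, not the controller analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rbf_predict(centres, widths, weights, x):
    """One-step-ahead prediction y(k+1) from regressor x (Gaussian RBF network)."""
    phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(weights @ phi)

def one_step_control(centres, widths, weights, y_hist, u_hist,
                     setpoint, u_min, u_max, lam=0.1):
    """Choose u(k) minimising a one-step-ahead cost subject to input bounds."""
    def cost(u):
        x = np.concatenate([y_hist, [u], u_hist])     # assumed regressor layout
        y_next = rbf_predict(centres, widths, weights, x)
        return (setpoint - y_next) ** 2 + lam * u ** 2
    result = minimize_scalar(cost, bounds=(u_min, u_max), method="bounded")
    return result.x
```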

Relevance:

10.00%

Publisher:

Abstract:

Although many public interventions for the prevention and/or management of obesity and other nutrition-related health conditions have been running in several EU Member States, few have yet been formally evaluated. The multidisciplinary team of the EATWELL project will gather benchmark data on healthy eating interventions in EU Member States and review existing information on the effectiveness of interventions using a three-stage procedure: (i) assessment of the intervention's impact on consumer attitudes, consumer behaviour and diets; (ii) the impact of the change in diets on obesity and health; and (iii) the value attached by society to these changes, measured in life years gained, cost savings and quality-adjusted life years. Where evaluations have been inadequate, EATWELL will gather secondary data and analyse them with a multidisciplinary approach incorporating models from the psychology and economics disciplines. Particular attention will be paid to lessons learned from the private sector that are transferable to healthy eating campaigns in the public sector. Through consumer surveys and workshops with other stakeholders, EATWELL will assess the acceptability of the range of potential interventions. Armed with scientific quantitative evaluations of policy interventions and their acceptability to stakeholders, EATWELL expects to recommend more appropriate interventions for Member States and the EU, providing a one-stop guide to methods and measures in intervention evaluation, and to outline data collection priorities for the future.

Relevance:

10.00%

Publisher:

Abstract:

Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, which overcomes the COD and generates locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark non-linear time series is used to demonstrate the new approach.
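A sketch of the partition-then-fit idea using SciPy's Delaunay triangulation: the input space is split into simplices and an affine model is fitted to the data falling in each one. Here the triangulation vertices are simply supplied by the caller, whereas the paper optimises their placement with VFSR; the function names are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def fit_piecewise_linear(vertices, X, y):
    """Fit one affine model per Delaunay simplex of `vertices` (illustrative sketch)."""
    tri = Delaunay(vertices)
    simplex_of = tri.find_simplex(X)                     # -1 if a point lies outside the hull
    models = {}
    for s in range(len(tri.simplices)):
        idx = np.where(simplex_of == s)[0]
        if len(idx) < X.shape[1] + 1:                    # too few points for a stable fit
            continue
        A = np.hstack([X[idx], np.ones((len(idx), 1))])  # [x, 1] design matrix
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        models[s] = coef
    return tri, models

def predict(tri, models, x):
    """Evaluate the local affine model of the simplex containing x."""
    s = int(tri.find_simplex(x[None, :])[0])
    if s in models:
        return float(models[s] @ np.append(x, 1.0))
    return np.nan                                        # outside the hull or unfitted simplex
```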

Relevance:

10.00%

Publisher:

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting these data to transform them into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modeling, empirical data modeling, knowledge discovery, data mining, and data fusion.

Relevance:

10.00%

Publisher:

Abstract:

Based on integrated system optimisation and parameter estimation, a method is described for on-line steady-state optimisation which compensates for model-plant mismatch and solves a non-linear optimisation problem by iterating on a linear-quadratic representation. The method requires real process derivatives, which are estimated using a dynamic identification technique. The utility of the method is demonstrated using a simulation of the Tennessee Eastman benchmark chemical process.
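A simplified sketch in the spirit of the method: the model-based optimum is re-solved repeatedly with a first-order modifier built from the mismatch between estimated plant derivatives and model derivatives. The objective functions, gradient estimator and filter factor are illustrative assumptions, not the algorithm or the Tennessee Eastman setup used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def iterative_setpoint_optimisation(model_cost, model_grad, plant_grad_estimate,
                                    u0, bounds, n_iter=20, filt=0.5):
    """Iterate a model-based optimisation with a gradient-mismatch modifier.

    model_cost          : callable u -> model objective value
    model_grad          : callable u -> gradient of the model objective
    plant_grad_estimate : callable u -> estimated gradient of the plant objective
                          (in practice obtained by dynamic identification)
    """
    u = np.asarray(u0, dtype=float)
    lam = np.zeros_like(u)
    for _ in range(n_iter):
        # Filtered modifier: difference between estimated plant and model gradients.
        lam = (1 - filt) * lam + filt * (plant_grad_estimate(u) - model_grad(u))
        result = minimize(lambda v: model_cost(v) + lam @ v, u, bounds=bounds)
        u = result.x
    return u
```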

Relevance:

10.00%

Publisher:

Abstract:

We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
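A minimal sketch of the constructed-analogue step: the current SST anomaly pattern is expressed as a least-squares combination of past states from a library, and the same weights are applied to the library states shifted forward by the lead time. Array shapes and names are assumptions; in practice the regression is usually regularised (e.g. via EOF truncation).

```python
import numpy as np

def constructed_analogue_forecast(library, current_state, lead):
    """Forecast `lead` steps ahead by constructing the current anomaly from past states.

    library       : array (n_times, n_gridpoints) of past anomaly states
    current_state : array (n_gridpoints,) anomaly pattern to start from
    lead          : forecast lead time, in library time steps
    """
    predictors = library[:-lead]      # past states whose `lead`-step future is known
    successors = library[lead:]       # the same states, `lead` steps later
    # Least-squares weights that reconstruct the current pattern from the past states.
    w, *_ = np.linalg.lstsq(predictors.T, current_state, rcond=None)
    # The forecast applies the same weighted combination to the successor states.
    return successors.T @ w
```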

Relevance:

10.00%

Publisher:

Abstract:

Van der Heijden’s ENDGAME STUDY DATABASE IV, HhdbIV, is the definitive collection of 76,132 chess studies. In each one, White is to achieve the stipulated goal, win or draw: study solutions should be essentially unique with minor alternatives at most. In this second note on the mining of the database, we use the definitive Nalimov endgame tables to benchmark White’s moves in sub-7-man chess against this standard of uniqueness. Amongst goal-compatible mainline positions and goal-achieving moves, we identify the occurrence of absolutely unique moves and analyse the frequency and lengths of absolutely-unique-move sequences, AUMSs. We identify the occurrence of equi-optimal moves and suboptimal moves and refer to a defined method for classifying their significance.
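A sketch of the uniqueness test, assuming a hypothetical `value(board)` oracle that returns the tablebase evaluation of the position after a move from the mover's point of view (higher is better); it uses the python-chess move generator and is not the HhdbIV mining code.

```python
import chess  # python-chess

def optimal_moves(board, value):
    """Return all legal moves attaining the best oracle value.

    value : hypothetical tablebase oracle, position-after-move -> score
            from the perspective of the side that just moved (higher = better).
    """
    scored = []
    for move in board.legal_moves:
        board.push(move)
        scored.append((value(board), move))
        board.pop()
    if not scored:                      # checkmate or stalemate: nothing to classify
        return []
    best = max(score for score, _ in scored)
    return [move for score, move in scored if score == best]

def has_absolutely_unique_move(board, value):
    """True when exactly one legal move attains the optimal tablebase value."""
    return len(optimal_moves(board, value)) == 1
```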

Relevance:

10.00%

Publisher:

Abstract:

Depreciation is a key element in understanding the returns from and price of commercial real estate. Understanding its impact is important for asset allocation models and asset management decisions. It is a key input into well-constructed pricing models, and its impact on indices of commercial real estate prices needs to be recognised. There have been a number of previous studies of the impact of depreciation on real estate, particularly in the UK. Law (2004) analysed all of these studies and found that the seemingly consistent results were an illusion, as they all used a variety of measurement methods and data. In addition, none of these studies examined the impact on total returns; they examined either rental value depreciation alone or rental and capital value depreciation. This study seeks to rectify this omission, adopting the best-practice measurement framework set out by Law (2004). Using individual property data from the UK Investment Property Databank for the 10-year period between 1994 and 2003, rental and capital depreciation, capital expenditure rates, and total return series for the data sample and for a benchmark are calculated for 10 market segments. The results are complicated by the period of analysis, which started in the aftermath of the major UK real estate recession of the early 1990s, but they give important insights into the impact of depreciation in different segments of the UK real estate investment market.
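A common working definition of rental depreciation (quoted for orientation; not necessarily the exact Law (2004) specification) expresses the annual rate d as the shortfall of the asset's rental growth g_a relative to the growth g_b of a benchmark of otherwise comparable new or recently refurbished property:

$$d \;=\; \frac{1 + g_b}{1 + g_a} - 1 \;\approx\; g_b - g_a .$$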

Relevance:

10.00%

Publisher:

Abstract:

The question as to whether active management adds any value above that of the fund's investment policy is one of continual interest to investors. In order to investigate this issue in the UK real estate market we examine a number of related questions. First, how much return variability is explained by investment policy? Second, how similar are the policies across funds? Third, how much of a fund's return is determined by investment policy? Finally, how was any added value achieved? Using data for 19 real estate funds, we find that investment policy explains less than half of the variability in returns over time, explains nothing of the variation across funds, and that more than 100% of the level of return is attributed to investment policy. The results also show that UK real estate funds focus exclusively on trying to pick winners to add value, and that in pursuit of active return fund managers incur high tracking-error risk; consequently, successful active management is very difficult to achieve. In addition, the results are dependent on the benchmark used to represent the investment policy of the fund. Nonetheless, active management can indeed add value to a real estate fund's performance. This is the good news. The bad news is that adding value is much more difficult to achieve than is generally accepted.
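A sketch of the two headline attribution statistics mentioned above (variability explained over time, and the share of the return level attributable to policy), computed in the Ibbotson-Kaplan style from periodic fund and policy-benchmark returns; the use of a simple time-series regression and geometric means is an assumption, not the paper's exact methodology.

```python
import numpy as np

def policy_attribution(fund_returns, policy_returns):
    """Return (r_squared, level_ratio) for one fund.

    fund_returns, policy_returns : 1-D arrays of periodic returns (decimals)
    r_squared   : share of the fund's return variability over time explained by
                  its investment policy (time-series regression R^2)
    level_ratio : compound policy return / compound fund return; values above 1
                  mean policy accounts for "more than 100%" of the return level
    """
    # Time-series regression of fund returns on the policy (benchmark) returns.
    X = np.column_stack([np.ones_like(policy_returns), policy_returns])
    beta, *_ = np.linalg.lstsq(X, fund_returns, rcond=None)
    residuals = fund_returns - X @ beta
    r_squared = 1.0 - residuals.var() / fund_returns.var()

    # Level attribution via annualised (geometric mean) returns.
    n = len(fund_returns)
    fund_level = np.prod(1 + fund_returns) ** (1 / n) - 1
    policy_level = np.prod(1 + policy_returns) ** (1 / n) - 1
    return r_squared, policy_level / fund_level
```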