87 results for Multi-objective algorithm


Relevance: 30.00%

Publisher:

Abstract:

Almost all material selection problems require a compromise between some metric of performance and cost. Trade-off methods using utility functions allow optimal solutions to be found for two objectives, but the problem becomes harder for three. This paper develops and demonstrates a method for dealing with three objectives.
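The abstract does not detail the method; as a minimal illustration of trading off three objectives with a linear utility function (the materials, numbers, and weights below are hypothetical), one can first filter for Pareto-optimal candidates and then rank them:

```python
# Hypothetical material records: (name, cost, mass, 1/performance) -- all minimised.
materials = [
    ("steel",     1.0, 7.8, 0.50),
    ("aluminium", 2.5, 2.7, 0.60),
    ("CFRP",      9.0, 1.6, 0.25),
    ("titanium",  8.0, 4.5, 0.35),
]

def utility(cost, mass, inv_perf, w=(0.4, 0.3, 0.3)):
    """Linear utility over three objectives (weights are illustrative)."""
    return w[0] * cost + w[1] * mass + w[2] * inv_perf

def is_dominated(a, b):
    """True if b is at least as good as a in every objective and better in one."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

# Keep only Pareto-optimal materials, then pick the best under the utility.
pareto = [m for m in materials
          if not any(is_dominated(m[1:], n[1:]) for n in materials if n is not m)]
best = min(pareto, key=lambda m: utility(*m[1:]))
```

With these weights the trade-off favours the mid-cost, low-mass option; changing the weights moves the optimum along the Pareto set.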

Relevance: 30.00%

Publisher:

Abstract:

We propose a system that can reliably track multiple cars in congested traffic environments. The key basis of our system is a sequential Monte Carlo algorithm, which introduces robustness against problems arising from the proximity between vehicles. By directly modelling occlusions and collisions between cars we obtain promising results on an urban traffic dataset. Extensions to this initial framework are also suggested. © 2010 IEEE.
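As context for the sequential Monte Carlo approach the abstract relies on, here is a minimal bootstrap particle filter for a single one-dimensional target (a deliberate simplification: the paper tracks multiple interacting vehicles with occlusion modelling, which this sketch does not attempt; all noise levels are illustrative):

```python
import math
import random

def particle_filter(observations, n_particles=500, obs_noise=1.0, proc_noise=0.5):
    """Bootstrap SMC: propagate particles, weight by likelihood, resample."""
    random.seed(0)
    particles = [random.gauss(observations[0], obs_noise) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Propagate with random-walk dynamics (process noise).
        particles = [p + random.gauss(0.0, proc_noise) for p in particles]
        # Weight by a Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

track = particle_filter([0.0, 1.1, 1.9, 3.2, 4.0])
```

Multi-target versions replace the single particle state with a joint state over all vehicles, which is what makes direct modelling of occlusions possible.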

Relevance: 30.00%

Publisher:

Abstract:

This work addresses the problem of deriving F0 from distant-talking speech signals acquired by a microphone network. The method proposed here exploits the redundancy across the channels by jointly processing the different signals. To this purpose, a multi-microphone periodicity function is derived from the magnitude spectra of all the channels. This function allows F0 to be estimated reliably, even under reverberant conditions, without the need for any post-processing or smoothing technique. Experiments conducted on real data showed that the proposed frequency-domain algorithm is more suitable than other time-domain-based ones.
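The exact periodicity function is not given in the abstract; a simplified sketch of the idea, pooling magnitude spectra over channels and scoring F0 candidates by harmonic summation (all parameters and the synthetic signal are illustrative, not the authors' formulation), is:

```python
import numpy as np

def multi_mic_f0(channels, fs, f0_range=(80, 400), n_harm=5):
    """Pool magnitude spectra over channels, then score candidate F0 values
    by harmonic summation (a simplified multi-microphone periodicity score)."""
    n = len(channels[0])
    spectrum = sum(np.abs(np.fft.rfft(ch)) for ch in channels)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    best_f0, best_score = 0.0, -np.inf
    for f0 in np.arange(*f0_range, 1.0):
        # Sum pooled magnitude at the bins nearest each harmonic of f0.
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        score = spectrum[bins].sum()
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# Three microphones observing the same 150 Hz voiced source with phase offsets.
fs = 8000
t = np.arange(2048) / fs
mics = [np.sin(2 * np.pi * 150 * t + ph) + 0.5 * np.sin(2 * np.pi * 300 * t)
        for ph in (0.0, 0.3, 0.7)]
f0 = multi_mic_f0(mics, fs)
```

Pooling magnitudes rather than waveforms means channel-to-channel phase differences (e.g. from different propagation paths) do not cancel the periodicity evidence.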

Relevance: 30.00%

Publisher:

Abstract:

Several equations of state (EOS) have been incorporated into a novel algorithm to solve a system of multi-phase equations in which all phases are assumed to be compressible to varying degrees. The EOSs are used both to supply functional relationships coupling the conservative variables to the primitive variables and to calculate thermodynamic quantities of interest, such as the speed of sound, accurately. Each EOS has a defined balance of accuracy, robustness and computational speed; selection of an appropriate EOS is generally problem-dependent. This work employs an AUSM+-up method for accurate discretisation of the convective flux terms, with modified low-Mach-number dissipation for added robustness of the solver. In this paper we present a newly developed time-marching formulation for temporal discretisation of the governing equations with incorporated time-dependent source terms, and we consider the system of eigenvalues that renders the governing equations hyperbolic.
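As one concrete example of the functional relationships an EOS supplies, the stiffened-gas EOS, commonly used for liquid phases in compressible multi-phase solvers, gives pressure and sound speed from the conserved quantities (the abstract does not name the specific EOSs used; the constants below are illustrative water-like values):

```python
import math

def stiffened_gas(rho, e, gamma=4.4, p_inf=6.0e8):
    """Stiffened-gas EOS: pressure and speed of sound from density and
    specific internal energy. gamma and p_inf are illustrative constants
    often quoted for water-like liquids."""
    p = (gamma - 1.0) * rho * e - gamma * p_inf       # pressure [Pa]
    c = math.sqrt(gamma * (p + p_inf) / rho)          # speed of sound [m/s]
    return p, c

p, c = stiffened_gas(rho=1000.0, e=1.0e6)
```

The returned sound speed is exactly the quantity the flux scheme needs to build its numerical dissipation, which is why an inaccurate EOS degrades the whole solver.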

Relevance: 30.00%

Publisher:

Abstract:

Many visual datasets are traditionally used to analyze the performance of different learning techniques. The evaluation is usually done within each dataset, so it is questionable whether such results are a reliable indicator of true generalization ability. We propose here an algorithm to exploit the existing data resources when learning on a new multiclass problem. Our main idea is to identify an image representation that decomposes orthogonally into two subspaces: a part specific to each dataset, and a part generic to, and therefore shared between, all the considered source sets. This allows us to use the generic representation as unbiased reference knowledge for a novel classification task. By casting the method in the multi-view setting, we also make it possible to use different features for different databases. We call the algorithm MUST, Multitask Unaligned Shared knowledge Transfer. Through extensive experiments on five public datasets, we show that MUST consistently improves cross-dataset generalization performance. © 2013 Springer-Verlag.
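MUST itself is not specified in the abstract; as a toy illustration of splitting representations into a shared part and a dataset-specific part (synthetic data and pooled PCA, not the authors' method), one can write:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical datasets sharing one latent direction, plus a dataset bias.
shared_dir = rng.normal(size=10)
ds_a = rng.normal(size=(100, 10)) + 3.0 * rng.normal(size=(100, 1)) * shared_dir
ds_b = rng.normal(size=(80, 10)) + 5.0 + 3.0 * rng.normal(size=(80, 1)) * shared_dir

def shared_specific_split(datasets, k=1):
    """Top-k principal directions of the pooled, per-dataset-centred data act
    as the 'shared' subspace; the residual each dataset keeps outside it is
    its 'specific' part. A toy stand-in for the orthogonal decomposition."""
    centred = [d - d.mean(axis=0) for d in datasets]
    pooled = np.vstack(centred)
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)
    shared = vt[:k]                                   # (k, dim) shared basis
    specific = [c - c @ shared.T @ shared for c in centred]
    return shared, specific

shared, specific = shared_specific_split([ds_a, ds_b])
cos = abs(shared[0] @ shared_dir) / np.linalg.norm(shared_dir)
```

Because the specific parts are projected out, the shared basis recovers the common latent direction almost exactly, which is the kind of unbiased reference knowledge the abstract describes.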

Relevance: 30.00%

Publisher:

Abstract:

The objective of this study was to examine the operating characteristics of a light-duty multi-cylinder compression ignition engine with regular gasoline fuel at low engine speed and load. The effects of fuel stratification by means of multiple injections, as well as the sensitivity of auto-ignition and burn rate to intake pressure and temperature, are presented. The measurements used in this study included gaseous emissions, filter smoke opacity and in-cylinder indicated data. It was found that stable, low-emission operation was possible with raised intake manifold pressure and temperature, and that fuel stratification can increase stability and reduce the reliance on increased temperature and pressure. It was also found that the sensitivity of the auto-ignition delay of gasoline to intake temperature and pressure was low within the operating window considered in this study. Nevertheless, with pump gasoline, the requirement for increased pressure, temperature and stratification in order to achieve auto-ignition time scales short enough for combustion in the engine was clear. Copyright © 2009 SAE International.

Relevance: 30.00%

Publisher:

Abstract:

Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
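For reference, the slow feature analysis objective the abstract builds on can be sketched in its plain linear, batch form: whiten the input, then take the direction whose time derivative has minimal variance. This is the rate-based starting point, not the spike-based STDP implementation the paper derives; the data and dimensions below are illustrative.

```python
import numpy as np

def linear_sfa(x):
    """Linear slow feature analysis: whiten the signal, then return the
    unit-variance projection whose time derivative varies least."""
    x = x - x.mean(axis=0)
    cov = x.T @ x / len(x)
    evals, evecs = np.linalg.eigh(cov)
    white = evecs / np.sqrt(evals)              # columns whiten the data
    z = x @ white                               # whitened signal, cov = I
    dz = np.diff(z, axis=0)
    dcov = dz.T @ dz / len(dz)
    _, w_evecs = np.linalg.eigh(dcov)           # ascending eigenvalues
    slow_w = white @ w_evecs[:, 0]              # slowest direction, input space
    return x @ slow_w

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)                                # slowly varying latent
fast = rng.normal(size=500)                     # fast noise
x = np.column_stack([slow + 0.1 * fast, fast + 0.1 * slow])
y = linear_sfa(x)
corr = abs(np.corrcoef(y, slow)[0, 1])
```

The recovered feature correlates strongly with the slow latent; the paper's contribution is showing when an STDP learning window performs an equivalent computation in spiking neurons.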

Relevance: 30.00%

Publisher:

Abstract:

Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to directly compare the thickness estimates with the gold standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°-171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°-51°, where no other bony structures obstruct the projection of the femur, measurement errors were -0.07 ± 0.79 mm. © 2013 SPIE.
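A drastically simplified stand-in for a cortical thickness point estimate (the paper's model-based estimator is far more sophisticated; the profile, sampling, and threshold below are hypothetical) is the full width at half maximum of a density profile taken across the cortex:

```python
import numpy as np

def profile_thickness(profile, spacing_mm):
    """Estimate cortical thickness from a 1-D density profile across the bone
    surface as the full width at half maximum. A toy stand-in for the paper's
    model-based estimator, which also accounts for imaging blur."""
    profile = np.asarray(profile, dtype=float)
    half = 0.5 * (profile.max() + profile.min())
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0] + 1) * spacing_mm

# Synthetic profile: soft tissue, a 2 mm dense cortex, then trabecular bone.
x = np.arange(0.0, 10.0, 0.1)                         # 0.1 mm sampling
profile = np.where((x >= 4.0) & (x < 6.0), 1.0, 0.1)  # idealised step cortex
t = profile_thickness(profile, spacing_mm=0.1)
```

On real radiographs the cortex is blurred by the imaging system, which is why naive width measures break down for thin cortices and a model-based fit is needed.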

Relevance: 30.00%

Publisher:

Abstract:

The aerodynamic design of turbomachinery presents the design optimisation community with a number of exquisite challenges. Chief among these are the size of the design space and the extent of discontinuity therein. This discontinuity can serve to limit the full exploitation of high-fidelity computational fluid dynamics (CFD): such codes require detailed geometric information often available only sometime after the basic configuration of the machine has been set by other means. The premise of this paper is that it should be possible to produce higher performing designs in less time by exploiting multi-fidelity techniques to effectively harness CFD earlier in the design process, specifically by facilitating its participation in configuration selection. The adopted strategy of local multi-fidelity correction, generated on demand, combined with a global search algorithm via an adaptive trust region is first tested on a modest, smooth external aerodynamic problem. Speed-up of an order of magnitude is demonstrated, comparable to established techniques applied to smooth problems. A number of enhancements aimed principally at effectively evaluating a wide range of configurations quickly is then applied to the basic strategy, and the emerging technique is tested on a generic aeroengine core compression system. A similar order of magnitude speed-up is achieved on this relatively large and highly discontinuous problem. A five-fold increase in the number of configurations assessed with CFD is observed. As the technique places constraints neither on the underlying physical modelling of the constituent analysis codes nor on first-order agreement between those codes, it has potential applicability to a range of multidisciplinary design challenges. © 2012 by Jerome Jarrett and Tiziano Ghisu.
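The local multi-fidelity correction idea can be sketched on a one-dimensional toy problem (both models and all constants below are invented for illustration): build a first-order additive correction to the cheap model at the current point, minimise the corrected surrogate inside a trust region, and grow or shrink the region depending on whether the expensive model confirms the improvement:

```python
def high_fidelity(x):            # expensive "truth" model (illustrative)
    return (x - 2.0) ** 2 + 0.5

def low_fidelity(x):             # cheap approximation with systematic error
    return (x - 2.3) ** 2

def corrected_search(x=0.0, radius=1.0, iters=40, h=1e-4):
    """Trust-region search on the low-fidelity model plus an on-demand local
    correction: a minimal sketch of the strategy described in the abstract."""
    for _ in range(iters):
        # First-order additive correction matching value and slope at x.
        d_hi = (high_fidelity(x + h) - high_fidelity(x - h)) / (2 * h)
        d_lo = (low_fidelity(x + h) - low_fidelity(x - h)) / (2 * h)
        val0, slope = high_fidelity(x) - low_fidelity(x), d_hi - d_lo

        def surrogate(c):
            return low_fidelity(c) + val0 + slope * (c - x)

        # Minimise the corrected surrogate on a coarse grid in the trust region.
        candidates = [x + radius * (i / 10.0 - 1.0) for i in range(21)]
        x_new = min(candidates, key=surrogate)
        # Accept if the expensive model improved; otherwise shrink the region.
        if high_fidelity(x_new) < high_fidelity(x):
            x, radius = x_new, radius * 1.2
        else:
            radius *= 0.5
    return x

x_opt = corrected_search()
```

The expensive model is only evaluated a handful of times per iteration (for the correction and the acceptance test), which is the source of the speed-up the paper reports at much larger scale.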

Relevance: 30.00%

Publisher:

Abstract:

Confronted with high-variety, low-volume market demands, many companies, especially Japanese electronics manufacturers, have reconfigured their conveyor assembly lines and adopted seru production systems. A seru production system is a new type of work-cell-based manufacturing system. Extensive successful practice shows that seru production systems can combine the considerable flexibility of a job shop with the high efficiency of a conveyor assembly line. Multi-skilled workers are the most important precondition for implementing seru production, and several issues concerning them are central. In this paper, we investigate the training and assignment problem of workers when a conveyor assembly line is entirely reconfigured into several serus. We formulate a mathematical model with two objectives: minimizing the total training cost and balancing the total processing times among the multi-skilled workers in each seru. To obtain a satisfactory task-to-worker training plan and worker-to-seru assignment plan, a three-stage heuristic algorithm with nine steps is developed to solve this model. Several computational cases are then solved in MATLAB. The computational results and analysis validate the performance of the proposed mathematical model and heuristic algorithm. © 2013 Springer-Verlag London.
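The paper's three-stage heuristic is not reproduced in the abstract; a toy greedy sketch of the underlying idea, scalarising training cost against workload balance when assigning tasks to multi-skilled workers (all data, names, and the weighting are hypothetical), is:

```python
# Hypothetical training cost for each (worker, task) pair, and task times.
train_cost = {("w1", "t1"): 2, ("w1", "t2"): 5, ("w2", "t1"): 4,
              ("w2", "t2"): 1, ("w3", "t1"): 3, ("w3", "t2"): 3}
task_time = {"t1": 6, "t2": 4}
workers, tasks = ["w1", "w2", "w3"], ["t1", "t2"]

def greedy_assign(alpha=0.5):
    """Greedy heuristic scalarising the two objectives from the abstract:
    training cost plus workload imbalance, weighted by alpha. A toy stand-in
    for the paper's three-stage, nine-step heuristic."""
    load = {w: 0 for w in workers}
    plan = []
    for t in sorted(tasks, key=lambda t: -task_time[t]):   # longest task first
        w = min(workers,
                key=lambda w: alpha * train_cost[(w, t)]
                              + (1 - alpha) * (load[w] + task_time[t]))
        plan.append((w, t))
        load[w] += task_time[t]
    return plan, load

plan, load = greedy_assign()
```

Sweeping alpha from 0 to 1 trades pure workload balancing against pure training-cost minimisation, tracing out the bi-objective trade-off the model captures.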

Relevance: 30.00%

Publisher:

Abstract:

We present novel batch and online (sequential) versions of the expectation-maximisation (EM) algorithm for inferring the static parameters of a multiple target tracking (MTT) model. Online EM is of particular interest as it is a more practical method for long data sets, since batch EM, or a full Bayesian approach, requires a complete pass over the data between successive parameter updates. Online EM is also suited to MTT applications that demand real-time processing of the data. Performance is assessed in numerical examples using simulated data for various scenarios. For batch estimation, our method significantly outperforms an existing gradient-based maximum likelihood technique, which we show to be significantly biased. © 2014 Springer Science+Business Media New York.
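The MTT model itself is too involved for a short example, but the online EM mechanism the abstract describes, updating running sufficient statistics one observation at a time instead of sweeping the whole data set, can be sketched for a two-component Gaussian mixture (a stand-alone illustration, not the paper's tracker; step sizes and initial values are chosen for the example):

```python
import math
import random

def online_em_mixture(data, mu=(-1.0, 1.0)):
    """Online (stochastic-approximation) EM for the means of a two-component,
    unit-variance, equal-weight Gaussian mixture: running sufficient
    statistics are refreshed per observation, so no full pass over the data
    is needed between successive parameter updates."""
    mu = list(mu)
    s_w = [0.5, 0.5]                        # running responsibility mass
    s_x = [0.5 * m for m in mu]             # running weighted observation sums
    for n, x in enumerate(data, start=1):
        step = 0.5 / n ** 0.6               # decaying Robbins-Monro step size
        # E-step: responsibilities under the current means.
        lik = [math.exp(-0.5 * (x - m) ** 2) for m in mu]
        r = [l / sum(lik) for l in lik]
        # Stochastic update of the statistics, then the M-step.
        for k in range(2):
            s_w[k] += step * (r[k] - s_w[k])
            s_x[k] += step * (r[k] * x - s_x[k])
            mu[k] = s_x[k] / s_w[k]
    return mu

random.seed(3)
data = [random.gauss(-2.0, 1.0) if random.random() < 0.5 else random.gauss(2.0, 1.0)
        for _ in range(5000)]
mu_hat = online_em_mixture(data)
```

The estimates drift toward the true means as data streams in, which is exactly the property that makes online EM attractive for real-time tracking applications.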

Relevance: 30.00%

Publisher:

Abstract:

With concerns over climate change and the growth of the world population, sustainable development has attracted increasing attention from academia, policy makers, and businesses. Sustainable manufacturing is an indispensable step toward sustainable development, since manufacturing is one of the main energy consumers and greenhouse gas contributors. Previous research on production planning of manufacturing systems rarely considered environmental factors. This paper investigates the production planning problem under both economic and environmental performance measures for seru production systems, a new type of manufacturing system praised as "Double E" (ecology and economy) in Japanese manufacturing industries. We propose a mathematical model with two objectives: minimizing carbon dioxide emissions and minimizing the makespan for processing all product types in a seru production system. To solve this model, we develop an algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II). The computational results and analysis of three numerical examples confirm the effectiveness of the proposed algorithm. © 2014 Elsevier Ltd. All rights reserved.
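The core ranking step of NSGA-II, on which the proposed algorithm builds, is fast non-dominated sorting; a minimal version over hypothetical (CO2 emission, makespan) pairs looks like:

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting, the ranking step of NSGA-II: partitions
    solutions (all objectives minimised) into successive Pareto fronts."""
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    n = len(points)
    dominated_by = [set() for _ in range(n)]   # who each solution dominates
    count = [0] * n                            # how many dominate each solution
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].add(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Hypothetical (CO2 emission, makespan) pairs for candidate schedules.
schedules = [(5, 9), (6, 7), (8, 4), (7, 8), (9, 9)]
fronts = non_dominated_sort(schedules)
```

The first front is the current Pareto-optimal set of schedules; NSGA-II combines this ranking with crowding-distance selection to evolve the population toward the emissions-makespan trade-off curve.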