863 results for Adaptive Information Dispersal Algorithm
Abstract:
FUELCON is an expert system for optimized refueling design in nuclear engineering. This task is crucial for keeping down operating costs at a plant without compromising safety. FUELCON proposes sets of alternative configurations for allocating fuel assemblies, each positioned in the planar grid of a horizontal section of a reactor core. Results are simulated, and an expert user can also use FUELCON to revise rulesets and improve on his or her heuristics. The successful completion of FUELCON led this research team to undertake a panoply of sequel projects, of which we provide a meta-architectural comparative formal discussion. In this paper, we demonstrate a novel adaptive technique that learns the optimal allocation heuristic for the various cores. The algorithm is a hybrid of fine-grained neural network and symbolic computation components. This hybrid architecture is sensitive enough to learn the particular characteristics of the 'in-core fuel management problem' at hand, and powerful enough to use this information fully to automatically revise heuristics, thus improving upon those provided by a human expert.
Abstract:
In fluid mechanics, it is well accepted that the Euler equation is a reduced form of the Navier-Stokes equation, obtained by truncating the viscous terms. Other truncation techniques are currently used to reduce the Navier-Stokes equation to simpler forms. This paper describes one such technique, suitable for adaptive domain decomposition methods for the solution of viscous flow problems. The physical domain of a viscous flow problem is partitioned into viscous and inviscid subdomains without overlapping regions, and the technique is embedded in a finite volume method. Some numerical results are provided for a flat plate and the NACA0012 aerofoil. Issues related to distributed computing are discussed.
Abstract:
Given the importance of occupant behaviour on evacuation efficiency, a new behavioural feature has been implemented into buildingEXODUS. This feature concerns the response of occupants to exit selection and re-direction. This behaviour is not simply pre-determined by the user as part of the initialisation process, but involves the occupant taking decisions based on their previous experiences and the information available to them. This information concerns the occupants' prior knowledge of the enclosure and line-of-sight information concerning queues at neighbouring exits. This new feature is demonstrated and reviewed through several examples.
Abstract:
Given the importance of occupant behavior on evacuation efficiency, a new behavioral feature has been developed and implemented into buildingEXODUS. This feature concerns the response of occupants to exit selection and re-direction. This behavior is not simply pre-determined by the user as part of the initialization process, but involves the occupant taking decisions based on their previous experiences and the information available to them. This information concerns the occupants' prior knowledge of the enclosure and line-of-sight information concerning queues at neighboring exits. This new feature is demonstrated and reviewed through several examples.
Abstract:
Given the importance of occupant behavior on evacuation efficiency, a new behavioral feature has been implemented into buildingEXODUS. This feature concerns the response of occupants to exit selection and re-direction, given that the occupant is queuing at an external exit. This behavior is not simply pre-determined by the user as part of the initialization process, but involves the occupant taking decisions based on their previous experiences with the enclosure and the information available to them. This information concerns the occupant's prior knowledge of the enclosure and line-of-sight information concerning queues at neighboring exits. This new feature is demonstrated and reviewed through several examples.
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain decomposition. The load balancing is designed to maintain good partition aspect ratio, and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question of why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method for calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of this construction.
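The balancing flow mentioned above is computed with a diffusion algorithm: each processor repeatedly exchanges a fixed fraction of the load difference with its neighbors until the load is even. The paper's scheme additionally weighs migration by subdomain shape; the sketch below shows only the basic first-order diffusion iteration, with the graph, loads, and the parameter alpha chosen for illustration.

```python
def diffusion_step(load, neighbors, alpha=0.25):
    """One synchronous diffusion sweep: each node moves a fixed
    fraction alpha of every pairwise load difference to/from its
    neighbors.  Total load is conserved because the graph is undirected."""
    new_load = dict(load)
    for node, nbrs in neighbors.items():
        for nbr in nbrs:
            new_load[node] += alpha * (load[nbr] - load[node])
    return new_load

def balance(load, neighbors, tol=1e-6, max_sweeps=1000):
    """Iterate diffusion sweeps until the load spread falls below tol."""
    for _ in range(max_sweeps):
        load = diffusion_step(load, neighbors)
        if max(load.values()) - min(load.values()) < tol:
            break
    return load

# A 4-processor ring with all the load concentrated on node 0.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
result = balance({0: 100.0, 1: 0.0, 2: 0.0, 3: 0.0}, ring)
```

On a connected graph with suitably small alpha this iteration converges to the uniform load; its convergence rate depends on the graph's spectral gap, which is why the choice of diffusion variant matters in practice.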
Abstract:
The emergent behaviour of autonomic systems, together with the scale of their deployment, impedes prediction of the full range of configuration and failure scenarios; thus it is not possible to devise management and recovery strategies to cover all possible outcomes. One solution to this problem is to embed self-managing and self-healing abilities into such applications. Traditional design approaches favour determinism, even when unnecessary. This can lead to conflicts between the non-functional requirements. Natural systems such as ant colonies have evolved cooperative, finely tuned emergent behaviours which allow the colonies to function at very large scale and to be very robust, although non-deterministic. Simple pheromone-exchange communication systems are highly efficient and are a major contribution to their success. This paper proposes that we look to natural systems for inspiration when designing architecture and communications strategies, and presents an election algorithm which encapsulates non-deterministic behaviour to achieve high scalability, robustness and stability.
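The abstract does not give the election algorithm itself, so as a minimal illustration of the idea of encapsulating non-determinism, here is a hypothetical randomized leader election: the random draw is the only non-deterministic step, and it is wrapped so that the outcome seen by the rest of the system is always a single, well-defined leader.

```python
import random

def elect(nodes, rng=random):
    """One election: every node draws a random ticket and the highest
    ticket wins; ties trigger a re-draw among only the tied nodes.
    The non-determinism is confined to the draw, so callers always
    observe a deterministic contract: exactly one leader is returned."""
    candidates = list(nodes)
    while len(candidates) > 1:
        tickets = {n: rng.random() for n in candidates}
        best = max(tickets.values())
        candidates = [n for n, t in tickets.items() if t == best]
    return candidates[0]

leader = elect(["a", "b", "c", "d"])
```

Because no node identity is privileged, the scheme has no single point of failure and scales with the number of nodes, which mirrors the robustness argument the paper draws from natural systems.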
Abstract:
Image inpainting refers to restoring a damaged image with missing information. The total variation (TV) inpainting model is one such method that simultaneously fills in the regions with available information from their surroundings and removes noise. The method works well with small, narrow inpainting domains. However, there remains an urgent need to develop fast iterative solvers, as the underlying problem sizes are large. In addition, one needs to tackle the imbalance of results between inpainting and denoising. When the inpainting regions are thick and large, inpainting proceeds quite slowly, usually requires a significant number of iterations, and inevitably leads to oversmoothing outside the inpainting domain. To overcome these difficulties, we propose a solution to the TV inpainting method based on the nonlinear multigrid algorithm.
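To make the fill-from-the-boundary iteration concrete, the sketch below uses the simpler harmonic inpainting model (Jacobi sweeps of the Laplace equation on the damaged pixels). The TV model of the paper replaces this linear averaging with the edge-preserving curvature term div(grad u / |grad u|), but the fixed-point structure, and its slow convergence on thick domains that motivates multigrid, is the same. All names and the toy image are illustrative.

```python
def inpaint_harmonic(image, mask, sweeps=500):
    """Fill pixels where mask is True by repeated local averaging
    (Jacobi iterations of the Laplace equation).  Known pixels are
    left untouched and act as boundary data for the damaged region."""
    h, w = len(image), len(image[0])
    u = [row[:] for row in image]
    for _ in range(sweeps):
        nxt = [row[:] for row in u]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if mask[i][j]:
                    nxt[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                        + u[i][j-1] + u[i][j+1])
        u = nxt
    return u

# A 5x5 image of constant value 1.0 with the centre pixel damaged.
img = [[1.0] * 5 for _ in range(5)]
img[2][2] = 0.0
msk = [[False] * 5 for _ in range(5)]
msk[2][2] = True
restored = inpaint_harmonic(img, msk)
```

For a single damaged pixel the iteration converges in one sweep; for a thick damaged region, information must diffuse inward one pixel layer per sweep, which is exactly the slowness a multigrid hierarchy is designed to remove.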
Abstract:
A communication system model for mutual information performance analysis of multiple-symbol differential M-phase shift keying over time-correlated, time-varying flat-fading communication channels is developed. This model is a finite-state Markov (FSM) equivalent channel representing the cascade of the differential encoder, FSM channel model and differential decoder. A state-space approach is used to model channel phase time correlations. The equivalent model falls in a class that facilitates the use of the forward-backward algorithm, enabling the important information-theoretic results to be evaluated. Using such a model, one is able to calculate mutual information for differential detection over time-varying fading channels with an essentially finite time set of correlations, including the Clarke fading channel. Using the equivalent channel, it is proved and corroborated by simulations that multiple-symbol differential detection preserves the channel information capacity when the observation interval approaches infinity.
Abstract:
This paper provides mutual information performance analysis of multiple-symbol differential MPSK (M-phase shift keying) over time-correlated, time-varying flat-fading communication channels. A state-space approach is used to model the time correlation of the time-varying channel phase. This approach captures the dynamics of time-correlated, time-varying channels and enables exploitation of the forward-backward algorithm for mutual information performance analysis. It is shown that differential decoding implicitly uses a sequence of innovations of the channel process time correlation, and that this sequence is essentially uncorrelated. This enables the use of multiple-symbol differential detection, as a form of block-by-block maximum likelihood sequence detection, for capacity-achieving mutual information performance. It is shown that multiple-symbol differential ML detection of BPSK and QPSK practically achieves the channel information capacity with observation times only on the order of a few symbol intervals.
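The two abstracts above rest on the basic property of differential PSK: information is carried in phase differences between consecutive symbols, so an unknown (slowly varying) channel phase cancels in detection. A minimal noiseless sketch, with symbol values and the rotation angle chosen for illustration:

```python
import cmath, math

def dpsk_encode(symbols, M=4):
    """Differentially encode M-PSK: each transmitted phase is the
    previous phase plus the message phase, so the information sits in
    phase *differences* rather than absolute phase."""
    phase = 0.0
    out = []
    for s in symbols:
        phase += 2 * math.pi * s / M
        out.append(cmath.exp(1j * phase))
    return out

def dpsk_decode(received, M=4):
    """Symbol-by-symbol differential detection: compare each sample
    with the previous one, so a constant unknown channel phase cancels."""
    prev = 1.0 + 0.0j
    out = []
    for r in received:
        diff = cmath.phase(r * prev.conjugate())
        out.append(round(diff * M / (2 * math.pi)) % M)
        prev = r
    return out

# A constant channel phase rotation leaves the decoded symbols intact.
msg = [0, 1, 3, 2, 1]
rotated = [x * cmath.exp(1j * 0.7) for x in dpsk_encode(msg)]
decoded = dpsk_decode(rotated)
```

Multiple-symbol differential detection generalizes the two-sample comparison above to a block of observations, which is what recovers the capacity loss of symbol-by-symbol detection as the block length grows.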
Abstract:
Coccolithophores are the largest source of calcium carbonate in the oceans and are considered to play an important role in oceanic carbon cycles. Current methods to detect the presence of coccolithophore blooms from Earth observation data often produce high numbers of false positives in shelf seas and coastal zones due to the spectral similarity between coccolithophores and other suspended particulates. Current methods are therefore unable to characterise the bloom events in shelf seas and coastal zones, despite the importance of these phytoplankton in the global carbon cycle. A novel approach to detect the presence of coccolithophore blooms from Earth observation data is presented. The method builds upon previous optical work and uses a statistical framework to combine spectral, spatial and temporal information to produce maps of coccolithophore bloom extent. Validation and verification results for an area of the north east Atlantic are presented using an in situ database (N = 432) and all available SeaWiFS data for 2003 and 2004. Verification results show that the approach produces a temporal seasonal signal consistent with biological studies of these phytoplankton. Validation using the in situ coccolithophore cell count database shows a high correct recognition rate of 80% and a low false-positive rate of 0.14 (in comparison to 63% and 0.34 respectively for the established, purely spectral approach). To guide its broader use, a full sensitivity analysis for the algorithm parameters is presented.
Abstract:
Satellite altimetry has revolutionized our understanding of ocean dynamics thanks to frequent sampling and global coverage. Nevertheless, coastal data have been flagged as unreliable due to land and calm water interference in the altimeter and radiometer footprint and uncertainty in the modelling of high-frequency tidal and atmospheric forcing. Our study addresses the first issue, i.e. altimeter footprint contamination, via retracking, presenting ALES, the Adaptive Leading Edge Subwaveform retracker. ALES is potentially applicable to all the pulse-limited altimetry missions and its aim is to retrack both open ocean and coastal data with the same accuracy using just one algorithm. ALES selects part of each returned echo and models it with a classic "open ocean" Brown functional form, by means of least-squares estimation whose convergence is found through the Nelder-Mead nonlinear optimization technique. By avoiding echoes from bright targets along the trailing edge, it is capable of retrieving more coastal waveforms than the standard processing. By adapting the width of the estimation window according to the significant wave height, it aims at maintaining the accuracy of the standard processing in both the open ocean and the coastal strip. This innovative retracker is validated against tide gauges in the Adriatic Sea and in the Greater Agulhas System for three different missions: Envisat, Jason-1 and Jason-2. Considerations of noise and biases provide a further verification of the strategy. The results show that ALES is able to provide more reliable 20-Hz data for all three missions in areas where even 1-Hz averages are flagged as unreliable in standard products.
Application of the ALES retracker led to roughly half of the analysed tracks showing a marked improvement in correlation with the tide gauge records, with the rms difference being reduced by a factor of 1.5 for Jason-1 and Jason-2 and over 4 for Envisat in the Adriatic Sea (at the closest point to the tide gauge).
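The core of subwaveform retracking is fitting a leading-edge model to only part of the echo, so that bright targets on the trailing edge cannot bias the estimate. The sketch below uses an error-function ramp (the leading edge of the Brown model, with the trailing edge omitted) and a brute-force least-squares scan as a stand-in for the Nelder-Mead search used by ALES; the synthetic waveform, gate spacing, and parameter names are all illustrative.

```python
import math

def leading_edge(t, epoch, amp, width):
    """Brown-style leading edge: an error-function ramp.  The full
    Brown model adds a decaying trailing edge, omitted here."""
    return 0.5 * amp * (1.0 + math.erf((t - epoch) / width))

def retrack_epoch(waveform, gates, amp, width):
    """Estimate the epoch (leading-edge position) by least squares over
    the sub-waveform, scanning candidate epochs on a 0.1-gate grid as
    a simple stand-in for Nelder-Mead minimization."""
    best_epoch, best_err = None, float("inf")
    for cand in [g / 10.0 for g in range(10 * len(gates))]:
        err = sum((leading_edge(t, cand, amp, width) - w) ** 2
                  for t, w in zip(gates, waveform))
        if err < best_err:
            best_epoch, best_err = cand, err
    return best_epoch

# Synthetic 25-gate echo with the leading edge centred at gate 12.5.
gates = list(range(25))
echo = [leading_edge(t, 12.5, 1.0, 2.0) for t in gates]
epoch = retrack_epoch(echo, gates, amp=1.0, width=2.0)
```

The epoch maps to the range between satellite and sea surface; restricting the fit window, and widening it with significant wave height, is what lets one retracker serve both open-ocean and coastal waveforms.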
Abstract:
A new algorithm is presented for training nonlinear optimal neuro-controllers (in the form of the model-free, action-dependent, adaptive critic paradigm). It overcomes problems with existing stochastic backpropagation training, namely the need for data storage, parameter shadowing and poor convergence, offering significant benefits for online applications.
Abstract:
A variation of the least mean squares (LMS) algorithm, called the delayed LMS (DLMS) algorithm, is ideally suited to achieving highly pipelined, adaptive digital filter implementations. The paper presents an efficient method of determining the delays in the DLMS filter and then transferring these delays using retiming in order to achieve fully pipelined circuit architectures for FPGA implementation. The method has been used to derive a series of retimed delayed LMS (RDLMS) architectures, which considerably reduce the number of delays and the convergence time, and give superior performance in terms of throughput rate when compared to previous work. Three circuit architectures and three hardware-shared versions are presented, which have been implemented using Virtex-II FPGA technology, resulting in a throughput rate of 182 Msample/s.
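The DLMS idea in both of the FPGA abstracts here is that the coefficient update may use an error from several samples in the past, which breaks the tight feedback loop of standard LMS and allows deep pipelining, at the cost of slightly slower convergence. A behavioural sketch in software, with the channel, step size, and delay chosen for illustration:

```python
import random

def dlms_filter(x, d, taps=4, mu=0.05, delay=2):
    """Delayed LMS: the weight update applied at time n uses the input
    vector and error from `delay` samples earlier, exactly the slack a
    pipelined hardware adaptation loop needs."""
    w = [0.0] * taps
    pending = []          # (input vector, error) pairs awaiting update
    out = []
    for n in range(len(x)):
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        out.append(y)
        pending.append((u, e))
        if len(pending) > delay:              # delayed weight update
            u_old, e_old = pending.pop(0)
            w = [wi + mu * e_old * ui for wi, ui in zip(w, u_old)]
    return w, out

# System identification: recover a known 4-tap FIR channel.
rng = random.Random(1)
h = [0.5, -0.3, 0.2, 0.1]
x = [rng.uniform(-1, 1) for _ in range(5000)]
d = [sum(h[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(4))
     for n in range(len(x))]
w, _ = dlms_filter(x, d)
```

After convergence the weights approximate the channel taps; making `delay` larger buys more pipeline stages in hardware but shrinks the stable range of the step size mu, which is the trade-off the retiming method optimizes.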
Abstract:
High-speed field-programmable gate array (FPGA) implementations of an adaptive least mean square (LMS) filter, with application in an electronic support measures (ESM) digital receiver, are presented. They employ "fine-grained" pipelining, i.e., pipelining within the processor, which results in an increased output latency when used in the LMS recursive system. The major challenge is therefore to maintain a low-latency output whilst increasing the pipeline stages in the filter for higher speeds. Using the delayed LMS (DLMS) algorithm, fine-grained pipelined FPGA implementations using both the direct form (DF) and the transposed form (TF) are considered and compared. It is shown that the direct-form LMS filter utilizes the FPGA resources more efficiently, thereby allowing a 120 MHz sampling rate.