946 results for Practical algorithm
Abstract:
A numerical scheme is presented for the solution of the Euler equations of compressible flow of a gas in a single spatial co-ordinate. This includes flow in a duct of variable cross-section as well as flow with slab, cylindrical or spherical symmetry, and can prove useful when testing codes for the two-dimensional equations governing compressible flow of a gas. The resulting scheme requires an average of the flow variables across the interface between cells, and for computational efficiency this average is chosen to be the arithmetic mean, in contrast to the usual ‘square root’ averages found in this type of scheme. The scheme is applied with success to five problems with either slab or cylindrical symmetry, and a comparison is made in the cylindrical case with results from a two-dimensional problem with no sources.
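As a rough illustration of the averaging choice described above, the sketch below contrasts an arithmetic-mean interface average with the usual Roe ‘square root’ average for made-up left/right states. It shows only the averaging step, not the paper's scheme, and the variable names are invented for the example.

```python
import numpy as np

def interface_average(rho_L, u_L, rho_R, u_R):
    """Arithmetic-mean interface state (as in the abstract) alongside the
    conventional Roe 'square root' average, shown purely for comparison."""
    # Arithmetic mean of the flow variables across the cell interface
    rho_avg = 0.5 * (rho_L + rho_R)
    u_avg = 0.5 * (u_L + u_R)

    # Roe average: velocity weighted by sqrt(density)
    w_L, w_R = np.sqrt(rho_L), np.sqrt(rho_R)
    u_roe = (w_L * u_L + w_R * u_R) / (w_L + w_R)
    rho_roe = np.sqrt(rho_L * rho_R)

    return (rho_avg, u_avg), (rho_roe, u_roe)

# Example: Sod-like states either side of an interface (illustrative values)
print(interface_average(1.0, 0.0, 0.125, 0.0))
```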
Abstract:
An efficient algorithm based on flux difference splitting is presented for the solution of the two-dimensional shallow water equations in a generalised coordinate system. The scheme is based on solving linearised Riemann problems approximately and in more than one dimension incorporates operator splitting. The scheme has good jump capturing properties and the advantage of using body-fitted meshes. Numerical results are shown for flow past a circular obstruction.
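For readers unfamiliar with flux difference splitting, the sketch below builds a generic first-order interface flux for the one-dimensional shallow water equations from a linearised Riemann problem. The linearisation state and wave decomposition are generic textbook choices, not necessarily those of the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * G * h**2])

def approximate_riemann_flux(hL, huL, hR, huR):
    """First-order flux-difference-splitting interface flux:
    F = 0.5 (F_L + F_R) - 0.5 |A| (U_R - U_L), with |A| taken from a
    linearised Jacobian. A sketch only, not the paper's exact scheme."""
    uL, uR = huL / hL, huR / hR
    # Simple arithmetic-mean linearisation state
    h_bar = 0.5 * (hL + hR)
    u_bar = 0.5 * (uL + uR)
    c_bar = np.sqrt(G * h_bar)

    # Eigenstructure of the linearised Jacobian
    lam = np.array([u_bar - c_bar, u_bar + c_bar])
    R = np.array([[1.0, 1.0], [u_bar - c_bar, u_bar + c_bar]])

    dU = np.array([hR - hL, huR - huL])
    alpha = np.linalg.solve(R, dU)          # wave strengths
    diss = R @ (np.abs(lam) * alpha)        # |A| (U_R - U_L)

    return 0.5 * (swe_flux(hL, huL) + swe_flux(hR, huR)) - 0.5 * diss

# Example with made-up left/right states (dam-break-like)
print(approximate_riemann_flux(1.0, 0.0, 0.5, 0.0))
```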
Abstract:
An efficient algorithm based on flux difference splitting is presented for the solution of the three-dimensional equations of isentropic flow in a generalised coordinate system, and with a general convex gas law. The scheme is based on solving linearised Riemann problems approximately and in more than one dimension incorporates operator splitting. The algorithm requires only one function evaluation of the gas law in each computational cell. The scheme has good shock capturing properties and the advantage of using body-fitted meshes. Numerical results are shown for Mach 3 flow of air past a circular cylinder. Furthermore, the algorithm also applies to shallow water flows by employing the familiar gas dynamics analogy.
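The operator splitting mentioned in both abstracts can be illustrated with a scalar stand-in: a multi-dimensional step built from one-dimensional sweeps along each coordinate direction in turn. The first-order upwind advection update and the Strang ordering below are assumptions made purely for illustration.

```python
import numpy as np

def sweep(U, velocity, dt, dx, axis):
    """First-order upwind update of a scalar field along one coordinate
    direction; a scalar-advection stand-in for a 1-D flux-difference step."""
    if velocity >= 0:
        dU = U - np.roll(U, 1, axis=axis)
    else:
        dU = np.roll(U, -1, axis=axis) - U
    return U - velocity * dt / dx * dU

def split_step(U, vx, vy, vz, dt, dx):
    """Operator (dimensional) splitting: one multi-dimensional step as a
    sequence of 1-D sweeps. Strang ordering is assumed here; the abstract
    does not state which ordering the scheme adopts."""
    U = sweep(U, vx, 0.5 * dt, dx, axis=0)
    U = sweep(U, vy, 0.5 * dt, dx, axis=1)
    U = sweep(U, vz, dt, dx, axis=2)
    U = sweep(U, vy, 0.5 * dt, dx, axis=1)
    U = sweep(U, vx, 0.5 * dt, dx, axis=0)
    return U

# Example: advect a blob through a periodic 3-D box
U = np.zeros((32, 32, 32))
U[8:12, 8:12, 8:12] = 1.0
for _ in range(10):
    U = split_step(U, vx=1.0, vy=0.5, vz=0.25, dt=0.5, dx=1.0)
```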
The TAMORA algorithm: satellite rainfall estimates over West Africa using multi-spectral SEVIRI data
Abstract:
A multi-spectral rainfall estimation algorithm has been developed for the Sahel region of West Africa with the purpose of producing accumulated rainfall estimates for drought monitoring and food security. Radar data were used to calibrate multi-channel SEVIRI data from MSG, and a probability of rainfall at several different rain rates was established for each combination of SEVIRI radiances. Radar calibrations from both Europe (the SatPrecip algorithm) and Niger (the TAMORA algorithm) were used. Ten-day estimates were accumulated from SatPrecip and TAMORA and compared with kriged gauge data and TAMSAT satellite rainfall estimates over West Africa. SatPrecip was found to produce large overestimates for the region, probably because of its non-local calibration. TAMORA was negatively biased for areas of West Africa with relatively high rainfall, but its skill was comparable to TAMSAT for the low-rainfall region climatologically similar to its calibration area around Niamey. These results confirm the importance of local calibration for satellite-derived rainfall estimates. As TAMORA shows no improvement in skill over TAMSAT for dekadal estimates, the extra cloud-microphysical information provided by multi-spectral data may not be useful in determining rainfall accumulations at a ten-day timescale. Work is ongoing to determine whether it shows improved accuracy at shorter timescales.
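A highly simplified sketch of this kind of calibration follows: each combination of discretised SEVIRI channel classes maps to probabilities of rain at several rain rates, and scene-by-scene expected rates are accumulated over a dekad. The channel names, classes, probabilities and scene interval are invented for illustration and are not TAMORA's calibration.

```python
import numpy as np

# Hypothetical calibration table: for each combination of discretised SEVIRI
# channel classes, the probability of rain at each calibrated rain rate.
RAIN_RATES = np.array([0.0, 0.5, 2.0, 5.0])   # mm/h (illustrative)
CALIBRATION = {
    # (IR10.8 class, WV6.2 class, IR10.8-IR12.0 class): P(each rain rate)
    (0, 0, 0): np.array([0.97, 0.02, 0.01, 0.00]),
    (1, 1, 0): np.array([0.80, 0.12, 0.06, 0.02]),
    (2, 2, 1): np.array([0.45, 0.25, 0.20, 0.10]),
}

def expected_rate(pixel_classes):
    """Expected rain rate (mm/h) for one pixel: calibrated probabilities
    times the corresponding rain rates."""
    p = CALIBRATION.get(tuple(pixel_classes))
    if p is None:                      # radiance combination not calibrated
        return 0.0
    return float(p @ RAIN_RATES)

def dekadal_total(scene_classes, hours_per_scene=0.25):
    """Accumulate expected rainfall (mm) over a ten-day sequence of scenes,
    assuming (illustratively) one SEVIRI scene every 15 minutes."""
    return sum(expected_rate(c) for c in scene_classes) * hours_per_scene

# Example: three scenes for one pixel
print(dekadal_total([(0, 0, 0), (1, 1, 0), (2, 2, 1)]))
```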
Abstract:
Based on insufficient evidence and inadequate research, Floridi and his students report inaccuracies and draw false conclusions in their Minds and Machines evaluation, which this paper aims to clarify. Acting as invited judges, Floridi et al. participated in nine of the ninety-six Turing tests staged in the finals of the 18th Loebner Prize for Artificial Intelligence in October 2008. From the transcripts it appears that they used power over solidarity as an interrogation technique. As a result, they were fooled on several occasions into believing that a machine was a human and that a human was a machine. Worse still, they did not realise their mistake. This resulted in a combined correct identification rate of less than 56%. In their paper they assumed that they had made correct identifications when in fact they had been incorrect.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts. However, this research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
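The parallel reduction mentioned at the end of the abstract can be sketched with mpi4py as below. Only the underlying reduction pattern is shown; the agent and fault-tolerance layers of the paper are not reproduced, and the data are invented.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each process (in the paper, each agent's sub-task) holds a slice of the data.
local = np.arange(rank * 4, rank * 4 + 4, dtype=np.float64)
local_sum = local.sum()

# Reduce the partial results across processors.
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"global sum over {size} processes:", total)

# Run with, e.g.: mpirun -n 4 python reduction.py
```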
Abstract:
Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that in order to maximise throughput, each access point (AP) should simply choose the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, the practical application of such a scheme depends critically on the ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has previously been proposed. In view of this, an extended Kalman filter (EKF) estimator and a per-AP estimate of the number of nodes are proposed. These not only provide fast and accurate estimates but can also exploit channel switching information from neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
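A minimal sketch of the channel-selection rule follows, with a scalar Kalman filter standing in for the paper's EKF estimator of active neighbour nodes. The observation model, noise values and per-channel observations are assumptions, not those of the MINEK scheme.

```python
import numpy as np

def choose_channel(estimated_neighbours):
    """Pick the channel with the fewest estimated active neighbour nodes,
    the rule described in the abstract."""
    return int(np.argmin(estimated_neighbours))

class ScalarKalmanEstimator:
    """Bare-bones scalar Kalman estimate of active neighbour nodes on one
    channel; the paper's EKF uses a non-linear observation model, which is
    not reproduced here."""

    def __init__(self, x0=0.0, p0=10.0, q=0.5, r=2.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, observed_activity):
        # Predict: node count assumed roughly constant between observations
        self.p += self.q
        # Correct with the latest (noisy) activity observation
        k = self.p / (self.p + self.r)
        self.x += k * (observed_activity - self.x)
        self.p *= (1.0 - k)
        return self.x

# One estimator per channel at an access point (illustrative observations)
estimators = [ScalarKalmanEstimator() for _ in range(3)]
observations = [6.0, 2.0, 4.0]   # e.g. busy-slot counts per channel
estimates = [est.update(obs) for est, obs in zip(estimators, observations)]
print("selected channel:", choose_channel(estimates))
```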
Abstract:
A Bayesian Model Averaging approach to the estimation of lag structures is introduced, and applied to assess the impact of R&D on agricultural productivity in the US from 1889 to 1990. Lag and structural break coefficients are estimated using a reversible jump algorithm that traverses the model space. In addition to producing estimates and standard deviations for the coefficients, the probability that a given lag (or break) enters the model is estimated. The approach is extended to select models populated with Gamma distributed lags of different frequencies. Results are consistent with the hypothesis that R&D positively drives productivity. Gamma lags are found to retain their usefulness in imposing a plausible structure on lag coefficients, and their role is enhanced through the use of model averaging.
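The Gamma-distributed lag idea can be sketched as below: lag coefficients constrained to follow a normalised Gamma density, so that a long R&D lag profile is governed by two parameters. The lag length, shape and scale values are illustrative, not estimates from the paper.

```python
import numpy as np
from scipy.stats import gamma

def gamma_lag_weights(n_lags, shape, scale):
    """Normalised Gamma-density weights over lags 1..n_lags."""
    lags = np.arange(1, n_lags + 1)
    w = gamma.pdf(lags, a=shape, scale=scale)
    return w / w.sum()

# e.g. a 30-year lag profile peaking around lag (shape - 1) * scale
weights = gamma_lag_weights(n_lags=30, shape=4.0, scale=3.0)

# Distributed-lag contribution of past R&D to current productivity would then
# take the form: effect_t = beta * sum_k weights[k] * rd[t - k]
print(weights.round(3))
```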
Abstract:
Estimating snow mass at continental scales is difficult but important for understanding land-atmosphere interactions, biogeochemical cycles and Northern latitudes’ hydrology. Remote sensing provides the only consistent global observations, but the uncertainty in measurements is poorly understood. Existing techniques for the remote sensing of snow mass are based on the Chang algorithm, which relates the absorption of Earth-emitted microwave radiation by a snow layer to the snow mass within the layer. The absorption also depends on other factors such as the snow grain size and density, which are assumed and fixed within the algorithm. We examine these assumptions, compare them to field measurements made at the NASA Cold Land Processes Experiment (CLPX) Colorado field site in 2002–3, and evaluate the consequences of deviation and variability for snow mass retrieval. The accuracy of the emission model used to devise the algorithm also affects the accuracy of the retrieval, so we test this with the CLPX measurements of snow properties against SSM/I and AMSR-E satellite measurements.
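For context, a commonly cited form of the Chang algorithm is sketched below: snow water equivalent proportional to the brightness-temperature difference between the 19 GHz and 37 GHz horizontally polarised channels, with a coefficient that bundles the fixed grain-size and density assumptions the abstract examines. The coefficient value shown is the widely quoted one, not necessarily the form used in this study.

```python
def chang_swe_mm(tb_19h, tb_37h, coefficient=4.8):
    """Snow water equivalent (mm) from the 19 GHz minus 37 GHz horizontally
    polarised brightness-temperature difference (K). The coefficient (mm/K)
    encodes the assumed fixed grain size and density; treat it as
    illustrative rather than definitive."""
    return max(coefficient * (tb_19h - tb_37h), 0.0)

# Example with made-up SSM/I brightness temperatures (K)
print(chang_swe_mm(tb_19h=245.0, tb_37h=230.0))   # -> 72.0 mm
```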
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller is structured with plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel to approximate plant dynamics around operating points, plus an error agent to accommodate the errors induced by linear submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the error agent, in which the weights are updated on the basis of the error between the plant output and the output from the linear submodel. The procedure for controller design is based on the equivalent model, and therefore the error agent naturally functions within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum phase behaviours. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
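The linear-submodel identification step can be sketched with a plain recursive least-squares update, as below. The paper's generalized RLS variant and the neural error agent are not reproduced, and the regressor structure and example data are invented.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain recursive least-squares identification of a linear (ARX-style)
    submodel; a sketch of the identification step only."""

    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = np.eye(n_params) * 1e3          # covariance
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta               # prediction error
        self.theta += gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Identify y[k] = a*y[k-1] + b*u[k-1] from one input/output sample (illustrative)
rls = RecursiveLeastSquares(n_params=2)
y_prev, u_prev = 0.0, 1.0
y_now = 0.7 * y_prev + 0.3 * u_prev
print(rls.update([y_prev, u_prev], y_now))
```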
Abstract:
A self-tuning proportional, integral and derivative control scheme based on genetic algorithms (GAs) is proposed and applied to the control of a real industrial plant. This paper explores the improvement of the parameter estimator, an essential part of an adaptive controller, through the hybridization of recursive least-squares algorithms with GAs, and the possibility of applying GAs to the control of industrial processes. Both the simulation results and the experiments on a real plant show that the proposed scheme can be applied effectively.
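A minimal evolutionary-search sketch of the GA component follows (truncation selection, arithmetic crossover, Gaussian mutation), estimating two plant parameters from an invented fitness function. The paper's hybrid RLS/GA estimator and the industrial plant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    """Illustrative fitness: negative squared error of candidate parameters
    against a made-up 'true' first-order plant (a = 0.7, b = 0.3), standing
    in for the prediction error an estimator would minimise."""
    a_true, b_true = 0.7, 0.3
    return -((theta[0] - a_true) ** 2 + (theta[1] - b_true) ** 2)

def ga_estimate(pop_size=30, n_gen=60, sigma=0.05):
    """Minimal GA: keep the best half, recombine with a random mate, mutate."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, 2))
    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        mates = parents[rng.permutation(len(parents))]
        children = 0.5 * (parents + mates)                   # crossover
        children += rng.normal(0.0, sigma, children.shape)   # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

print(ga_estimate())   # should approach [0.7, 0.3]
```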
Abstract:
Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering, and it is shown that it achieves significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
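As a point of reference, the sketch below selects RBF centres with the k-means baseline the abstract compares against and then fits output weights by least squares. The mean-tracking algorithm itself is not reproduced, and the data, number of centres and basis width are invented.

```python
import numpy as np

def kmeans_centres(X, k, n_iter=50, seed=0):
    """k-means centre selection (the comparison baseline in the abstract)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centre, then move centres to means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_design_matrix(X, centres, width=1.0):
    """Gaussian radial basis functions evaluated at the selected centres."""
    d2 = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
    return np.exp(-d2 / (2.0 * width**2))

# Fit output weights by least squares once the centres are fixed (illustrative)
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
Phi = rbf_design_matrix(X, kmeans_centres(X, k=10))
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(weights.round(3))
```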
Abstract:
Predictive controllers are often only applicable to open-loop stable systems. In this paper two such controllers are designed to operate on open-loop critically stable systems, and each is used to find the control inputs for the roll control autopilot of a jet fighter aircraft. It is shown that good predictive control can be achieved on open-loop critically stable systems.