995 results for Resolution algorithm
Abstract:
A spectral performance model, designed to simulate the system spectral throughput for each of the 21 channels in the HIRDLS radiometer, is described. This model uses the measured spectral characteristics of each of the components in the optical train, appropriately corrected for their optical environment, to determine the end-to-end spectral throughput profile for each channel. This profile is then combined with the predicted thermal emission from the atmosphere, arising from the height of interest, to establish an in-band (wanted) to out-of-band (unwanted) radiance ratio. The results from the use of the model demonstrate that the instrument-level radiometric requirements will be achieved. The optical arrangement and spectral design requirements for filtering in the HIRDLS instrument are described, together with a presentation of the performance achieved for the complete set of manufactured filters. Compliance of the predicted passband throughput with the spectral positioning requirements of the instrument is also demonstrated.
Abstract:
A Bayesian Model Averaging approach to the estimation of lag structures is introduced, and applied to assess the impact of R&D on agricultural productivity in the US from 1889 to 1990. Lag and structural break coefficients are estimated using a reversible jump algorithm that traverses the model space. In addition to producing estimates and standard deviations for the coefficients, the probability that a given lag (or break) enters the model is estimated. The approach is extended to select models populated with Gamma distributed lags of different frequencies. Results are consistent with the hypothesis that R&D positively drives productivity. Gamma lags are found to retain their usefulness in imposing a plausible structure on lag coefficients, and their role is enhanced through the use of model averaging.
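The model-averaging idea in this abstract can be illustrated with a minimal sketch. Here synthetic data and simple BIC-based weights over a fully enumerated lag-subset space stand in for the paper's reversible-jump sampler; the data-generating process and all variable names are hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic series: y depends on lags 1 and 3 of x (illustrative only)
T, max_lag = 200, 4
x = rng.normal(size=T + max_lag)
y = 0.8 * x[max_lag - 1:-1] + 0.5 * x[max_lag - 3:-3] + rng.normal(scale=0.3, size=T)

# Lag matrix: column j-1 holds x lagged by j periods
X = np.column_stack([x[max_lag - j: T + max_lag - j] for j in range(1, max_lag + 1)])

def bic(subset):
    """BIC of an OLS fit (with intercept) using only the lags in `subset`."""
    Z = np.column_stack([np.ones(T), X[:, subset]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return T * np.log(rss / T) + Z.shape[1] * np.log(T)

# Enumerate all 2^max_lag lag subsets and weight each model by exp(-BIC/2)
subsets = [list(c) for k in range(max_lag + 1) for c in combinations(range(max_lag), k)]
bics = np.array([bic(s) for s in subsets])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()

# Posterior probability that each lag enters the model (model-averaged)
incl = np.array([sum(w[i] for i, s in enumerate(subsets) if j in s)
                 for j in range(max_lag)])
print(incl.round(2))
```

With a strong signal on lags 1 and 3, their inclusion probabilities dominate those of the spurious lags, which is the kind of lag-selection evidence the abstract describes.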
Abstract:
The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and a time dependence that results in a cascade to small tracer scales. Here, a quantitative study of the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. Knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of air-mass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3, while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity in the TSF for the two flows is striking, and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spin-up time (2–3 days), after which wind-field truncation errors take over from initialization errors. It is also shown that false detail from the proliferation of fine-scale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds.
The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated and the implications for the modeling of atmospheric chemistry are briefly discussed.
Abstract:
This article presents a case study comparing an Eulerian chemical transport model (CTM) and a Lagrangian chemical model with measurements taken by aircraft. High-resolution Eulerian integrations produce better point-by-point agreement between model results and measurements than low-resolution integrations do. The Lagrangian model requires mixing to be introduced in order to reproduce the measurements.
Abstract:
Estimating snow mass at continental scales is difficult but important for understanding land–atmosphere interactions, biogeochemical cycles and Northern latitudes’ hydrology. Remote sensing provides the only consistent global observations, but the uncertainty in the measurements is poorly understood. Existing techniques for the remote sensing of snow mass are based on the Chang algorithm, which relates the absorption of Earth-emitted microwave radiation by a snow layer to the snow mass within the layer. The absorption also depends on other factors, such as the snow grain size and density, which are assumed and fixed within the algorithm. We examine these assumptions, compare them to field measurements made at the NASA Cold Land Processes Experiment (CLPX) Colorado field site in 2002–03, and evaluate the consequences of deviation and variability for snow mass retrieval. The accuracy of the emission model used to devise the algorithm also affects the accuracy of the retrieval, so we test the model with the CLPX measurements of snow properties against SSM/I and AMSR-E satellite measurements.
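The Chang-type retrieval referred to above is, at its core, a linear function of the difference between two microwave brightness temperatures. A minimal sketch follows; the 1.59 cm K⁻¹ coefficient is the commonly quoted value derived under the fixed grain-radius (~0.3 mm) and density (~300 kg m⁻³) assumptions the abstract questions, and operational products retune it.

```python
def chang_snow_depth_cm(tb_19h, tb_37h, coeff=1.59):
    """Chang-type snow depth (cm) from the 19H-37H brightness-temperature
    difference (K). The fixed coefficient embodies the assumed grain size
    and density; negative differences are clipped to zero (no snow)."""
    return max(0.0, coeff * (tb_19h - tb_37h))

# A 20 K spectral difference maps to roughly 32 cm of snow depth
print(chang_snow_depth_cm(240.0, 220.0))
```

Because the coefficient is fixed, any real-world deviation in grain size or density translates directly into retrieval error, which is the sensitivity the abstract sets out to quantify.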
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller is structured around plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel, which approximates plant dynamics around operating points, plus an error agent that accommodates the errors induced by linear submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to estimate the error agent, with the weights updated on the basis of the error between the plant output and the output from the linear submodel. The procedure for controller design is based on the equivalent model, so the error agent is naturally incorporated within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum-phase behaviours. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
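The linear-submodel identification step described above is ordinarily done with recursive least squares. A minimal sketch of an RLS update with a forgetting factor, run on a hypothetical first-order plant (the neural error agent and the PID tuning stage are omitted):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor `lam`,
    identifying the linear submodel y ~ phi'theta around an operating point."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.T @ theta).item()                 # one-step prediction error
    theta = theta + k * err                          # parameter update
    P = (P - k @ phi.T @ P) / lam                    # covariance update
    return theta, P, err

# Hypothetical plant: y[t] = 0.7*y[t-1] + 0.5*u[t-1] + small noise
rng = np.random.default_rng(1)
theta, P = np.zeros((2, 1)), np.eye(2) * 100.0
y_prev, u = 0.0, rng.normal(size=500)
for t in range(1, 500):
    y_t = 0.7 * y_prev + 0.5 * u[t - 1] + 0.01 * rng.normal()
    theta, P, _ = rls_update(theta, P, np.array([y_prev, u[t - 1]]), y_t)
    y_prev = y_t
print(theta.ravel().round(2))  # close to the true [0.7, 0.5]
```

In the full scheme of the abstract, the residual `err` left after this linear fit is what the neural error agent learns to model.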
Abstract:
A self-tuning proportional, integral and derivative control scheme based on genetic algorithms (GAs) is proposed and applied to the control of a real industrial plant. This paper explores improving the parameter estimator, an essential part of an adaptive controller, by hybridizing recursive least-squares algorithms with GAs, and examines the applicability of GAs to the control of industrial processes. Both the simulation results and the experiments on a real plant show that the proposed scheme can be applied effectively.
Abstract:
Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering and it is shown that it achieves significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
Abstract:
Predictive controllers are often only applicable to open-loop stable systems. In this paper two such controllers are designed to operate on open-loop critically stable systems, each of which is used to find the control inputs for the roll control autopilot of a jet fighter aircraft. It is shown that good predictive control can indeed be achieved on open-loop critically stable systems.
Abstract:
High-resolution satellite radar observations of erupting volcanoes can yield valuable information on rapidly changing deposits and geomorphology. Using the TerraSAR-X (TSX) radar, with a spatial resolution of about 2 m and a repeat interval of 11 days, we show how a variety of techniques were used to record some of the eruptive history of the Soufriere Hills Volcano, Montserrat between July 2008 and February 2010. After a 15-month pause in lava dome growth, a vulcanian explosion occurred on 28 July 2008 whose vent was hidden by dense cloud. Using TSX change difference images, we were able to show the civil authorities that this explosion had not disrupted the dome sufficiently to warrant continued evacuation. Change difference images also proved valuable in mapping new pyroclastic flow deposits: the valley-occupying block-and-ash component tended to increase backscatter and the marginal surge deposits to reduce it, with the pattern reversing after the event. By comparing east- and west-looking images acquired 12 hours apart, the deposition of some individual pyroclastic flows can be inferred from change differences. Some of the narrow upper sections of valleys draining the volcano received many tens of metres of rockfall and pyroclastic flow deposits over periods of a few weeks. By measuring the changing shadows cast by these valleys in TSX images, the changing depth of infill by deposits could be estimated. In addition to using the amplitude data from the radar images, we also used their phase information within the InSAR technique to calculate the topography during a period of no surface activity. This enabled areas of transient topography, crucial for directing future flows, to be captured.
Abstract:
The new HadKPP atmosphere–ocean coupled model is described and then used to determine the effects of sub-daily air–sea coupling and fine near-surface ocean vertical resolution on the representation of the Northern Hemisphere summer intra-seasonal oscillation. HadKPP comprises the Hadley Centre atmospheric model coupled to the K Profile Parameterization ocean-boundary-layer model. Four 30-member ensembles were performed that varied in oceanic vertical resolution between 1 m and 10 m and in coupling frequency between 3 h and 24 h. The 10 m, 24 h ensemble exhibited roughly 60% of the observed 30–50 day variability in sea-surface temperatures and rainfall and very weak northward propagation. Enhancing either only the vertical resolution or only the coupling frequency produced modest improvements in variability and only a standing intra-seasonal oscillation. Only the 1 m, 3 h configuration generated organized, northward-propagating convection similar to observations. Sub-daily surface forcing produced stronger upper-ocean temperature anomalies in quadrature with anomalous convection, which likely affected lower-atmospheric stability ahead of the convection, causing propagation. Well-resolved air–sea coupling did not improve the eastward propagation of the boreal summer intra-seasonal oscillation in this model. Upper-ocean vertical mixing and diurnal variability in coupled models must be improved to accurately resolve and simulate tropical sub-seasonal variability. In HadKPP, the mere presence of air–sea coupling was not sufficient to generate an intra-seasonal oscillation resembling observations.
Abstract:
This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system, utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of the experts as potential model bases, the structure construction algorithm selects the most significant model base one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are adjusted accordingly so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm for the resulting mixture of experts network system is also derived, yielding an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications, ranging from multiple-model controller synthesis to multi-sensor data fusion.
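The forward selection step described above — adding the most significant expert one at a time so as to minimise the overall approximation error — can be sketched as follows. The convex gate constraints of the actual forward constrained regression procedure are omitted for brevity, and the candidate "expert outputs" are synthetic.

```python
import numpy as np

def forward_select(outputs, y, n_pick):
    """Greedy forward selection: at each step add the candidate expert
    output that most reduces the least-squares error of the combination
    (unconstrained here; the paper additionally enforces convex gates)."""
    chosen, remaining = [], list(range(outputs.shape[1]))
    for _ in range(n_pick):
        best, best_err = None, np.inf
        for j in remaining:
            cols = chosen + [j]
            w, *_ = np.linalg.lstsq(outputs[:, cols], y, rcond=None)
            err = np.sum((y - outputs[:, cols] @ w) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Three candidate "experts"; y is built from experts 0 and 2
rng = np.random.default_rng(2)
F = rng.normal(size=(200, 3))
y = 1.0 * F[:, 0] + 0.5 * F[:, 2] + 0.01 * rng.normal(size=200)
print(forward_select(F, y, 2))  # experts 0 and 2 should be selected, 0 first
```

The greedy loop mirrors the one-by-one base selection in the abstract; the companion pruning algorithm then removes any experts made redundant by later additions.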
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating-point-dependent models. The algorithm is an extension of a forward modified Gram-Schmidt orthogonal least squares procedure for a linear model structure, modified to accommodate nonlinear system modelling by incorporating piecewise locally linear model fitting. The proposed input node selection procedure effectively tackles the curse of dimensionality associated with lattice-based modelling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating-point-dependent model to be widely applied in control and estimation. Some numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
Abstract:
A fast backward elimination algorithm is introduced based on a QR decomposition and Givens transformations to prune radial-basis-function networks. Nodes are sequentially removed using an increment of error variance criterion. The procedure is terminated by using a prediction risk criterion so as to obtain a model structure with good generalisation properties. The algorithm can be used to postprocess radial basis centres selected using a k-means routine and, in this mode, it provides a hybrid supervised centre selection approach.
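A minimal sketch of the backward elimination idea — repeatedly deleting the node whose removal least increases the residual error — follows. The paper's efficient QR-decomposition and Givens-rotation machinery is replaced here by naive refitting for clarity, and the candidate node outputs are synthetic.

```python
import numpy as np

def backward_prune(Phi, y, n_keep):
    """Backward elimination for an RBF output layer: repeatedly delete the
    column (node) whose removal gives the smallest increase in residual
    error. The abstract's algorithm computes these increments cheaply via
    QR and Givens transformations; this sketch refits from scratch."""
    keep = list(range(Phi.shape[1]))
    def sse(cols):
        w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        return np.sum((y - Phi[:, cols] @ w) ** 2)
    while len(keep) > n_keep:
        errs = [sse([c for c in keep if c != j]) for j in keep]
        keep.pop(int(np.argmin(errs)))   # drop the least useful node
    return keep

# Five candidate nodes, only three carry signal: the noise-only nodes go first
rng = np.random.default_rng(3)
Phi = rng.normal(size=(300, 5))
y = Phi[:, 0] + 0.8 * Phi[:, 1] + 0.6 * Phi[:, 2] + 0.01 * rng.normal(size=300)
print(backward_prune(Phi, y, 3))
```

In the full algorithm, the fixed `n_keep` stopping rule used here would be replaced by the prediction risk criterion, so that elimination stops where generalisation is best.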