53 results for Non-Gaussian dynamic models
Abstract:
Dispersion in the near-field region of localised releases in urban areas is difficult to predict because of the strong influence of individual buildings. Effects include upstream dispersion, trapping of material into building wakes and enhanced concentration fluctuations. As a result, concentration patterns are highly variable in time and mean profiles in the near field are strongly non-Gaussian. These aspects of near-field dispersion are documented by analysing data from direct numerical simulations in arrays of building-like obstacles and are related to the underlying flow structure. The mean flow structure around the buildings is found to exert a strong influence over the dispersion of material in the near field. Diverging streamlines around buildings enhance lateral dispersion. Entrainment of material into building wakes in the very near field gives rise to secondary sources, which then affect the subsequent dispersion pattern. High levels of concentration fluctuations are also found in this very near field; the fluctuation intensity is of order 2 to 5.
Abstract:
We develop a process-based model for the dispersion of a passive scalar in the turbulent flow around the buildings of a city centre. The street network model is based on dividing the airspace of the streets and intersections into boxes, within which the turbulence renders the air well mixed. Mean flow advection through the network of street and intersection boxes then mediates further lateral dispersion. At the same time, turbulent mixing in the vertical detrains scalar from the streets and intersections into the turbulent boundary layer above the buildings. When the geometry is regular, the street network model has an analytical solution that describes the variation in concentration in the near field downwind of a single source, where the majority of scalar lies below roof level. The power of the analytical solution is that it demonstrates how the concentration is determined by only three parameters. The plume direction parameter describes the branching of scalar at the street intersections and hence determines the direction of the plume centreline, which may be very different from the above-roof wind direction. The transmission parameter determines the distance travelled before the majority of scalar is detrained into the atmospheric boundary layer above roof level and conventional atmospheric turbulence takes over as the dominant mixing process. Finally, a normalised source strength multiplies this pattern of concentration. This analytical solution converges to a Gaussian plume after a large number of intersections have been traversed, providing theoretical justification for previous studies that have developed empirical fits to Gaussian plume models. The analytical solution is shown to compare well with very high-resolution simulations and with wind tunnel experiments, although re-entrainment of scalar previously detrained into the boundary layer above roofs, which is not accounted for in the analytical solution, is shown to become an important process further downwind from the source.
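As a rough illustration of the structure this abstract describes (scalar branching at intersections, partial transmission along each street segment, and a normalised source strength scaling the whole pattern), the following Python sketch computes a relative below-roof concentration on a regular street lattice. The binomial branching form and the names `alpha`, `tau` and `q0` are assumptions made for the demo, not the paper's notation.

```python
# Hedged sketch (not the paper's code): an illustrative take on the analytical
# structure described above for a regular street network. Scalar released at
# intersection (0, 0) branches at each intersection with a plume-direction
# parameter `alpha` (fraction sent along x rather than y), and only a fraction
# `tau` (the transmission parameter) survives below roof level per street
# segment; the rest is detrained into the boundary layer above.
from math import comb

def in_street_concentration(i, j, alpha=0.7, tau=0.8, q0=1.0):
    """Relative below-roof concentration at intersection (i, j) downwind.

    alpha : assumed plume-direction (branching) parameter, 0..1
    tau   : assumed per-segment transmission parameter, 0..1
    q0    : normalised source strength
    The binomial-branching form and all names are illustrative assumptions.
    """
    n = i + j                       # number of street segments traversed
    paths = comb(n, i)              # number of lattice paths from the source
    branch = (alpha ** i) * ((1.0 - alpha) ** j)
    return q0 * paths * branch * (tau ** n)

if __name__ == "__main__":
    for j in range(4):
        print([round(in_street_concentration(i, j), 4) for i in range(4)])
```

After many traversed intersections the binomial branching in this toy version tends toward a Gaussian cross-street profile, which mirrors the convergence to a Gaussian plume noted in the abstract.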
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be handled by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform as well as or better than non-linear models with or without the frequently used wind-to-power transformation.
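A minimal sketch of the two ingredients highlighted here, the inverse power-curve (power-to-wind) transformation and censoring, might look as follows in Python. The power-curve table, the synthetic data and the Powell-style censored pinball loss are all assumptions made for illustration; they are not the study's data or exact estimator.

```python
# Hedged sketch (illustrative, not the study's code): power-to-wind transform via
# an inverted manufacturer power curve, followed by a censored linear quantile
# regression. The power curve, data and parameter names are made-up assumptions.
import numpy as np
from scipy.optimize import minimize

# Assumed power curve: wind speed (m/s) -> normalised power, monotone between
# cut-in and rated speed so it can be inverted by interpolation.
pc_speed = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])
pc_power = np.array([0.0, 0.08, 0.30, 0.62, 0.90, 1.00])

def power_to_wind(p):
    """Invert the power curve on its monotone part (clipped at the ends)."""
    return np.interp(np.clip(p, 0.0, 1.0), pc_power, pc_speed)

def censored_pinball(beta, X, y, q, lo, hi):
    """Pinball loss with predictions censored to the feasible wind-speed range."""
    pred = np.clip(X @ beta, lo, hi)
    r = y - pred
    return np.mean(np.maximum(q * r, (q - 1.0) * r))

rng = np.random.default_rng(0)
nwp_speed = rng.uniform(2.0, 14.0, size=500)               # forecast wind speed
obs_power = np.clip(np.interp(nwp_speed + rng.normal(0, 1.0, 500),
                              pc_speed, pc_power), 0.0, 1.0)
y = power_to_wind(obs_power)                                # transformed target
X = np.column_stack([np.ones_like(nwp_speed), nwp_speed])   # simple linear model

fit = minimize(censored_pinball, np.zeros(2),
               args=(X, y, 0.9, pc_speed[0], pc_speed[-1]), method="Nelder-Mead")
print("0.9-quantile coefficients:", fit.x)
```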
Abstract:
It is formally proved that the general smoother for nonlinear dynamics can be formulated as a sequential method, that is, observations can be assimilated sequentially during a forward integration. The general filter can be derived from the smoother and it is shown that the general smoother and filter solutions at the final time become identical, as is expected from linear theory. Then, a new smoother algorithm based on ensemble statistics is presented and examined in an example with the Lorenz equations. The new smoother can be computed as a sequential algorithm using only forward-in-time model integrations. It bears a strong resemblance to the ensemble Kalman filter. The difference is that every time a new dataset is available during the forward integration, an analysis is computed for all previous times up to this time. Thus, the first guess for the smoother is the ensemble Kalman filter solution, and the smoother estimate provides an improvement of this, as one would expect a smoother to do. The method is demonstrated in this paper in an intercomparison with the ensemble Kalman filter and the ensemble smoother introduced by van Leeuwen and Evensen, and it is shown to be superior in an application with the Lorenz equations. Finally, a discussion is given regarding the properties of the analysis schemes when strongly non-Gaussian distributions are used. It is shown that in these cases more sophisticated analysis schemes based on Bayesian statistics must be used.
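A compact illustration of the sequential idea, that each new observation updates not only the current ensemble but also all stored earlier ensembles, is sketched below for the Lorenz-63 equations. The observation network, noise levels and ensemble size are assumptions for the demo, and the perturbed-observation update is only one common variant of such ensemble schemes, not necessarily the paper's exact formulation.

```python
# Hedged sketch (illustrative, not the paper's implementation): an ensemble
# Kalman filter on Lorenz-63 with a sequential smoother-style update, where
# each new observation also corrects the stored ensembles at earlier times
# through their cross-covariance with the currently observed quantity.
import numpy as np

rng = np.random.default_rng(1)

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step(x, dt=0.01):
    # fourth-order Runge-Kutta step
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

n_ens, n_steps, obs_every, r = 50, 400, 25, 2.0       # assumed settings
truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0, 2.0, size=(n_ens, 3))
history = []                                          # stored ensembles

for t in range(n_steps):
    truth = step(truth)
    ens = np.array([step(m) for m in ens])
    history.append(ens.copy())
    if (t + 1) % obs_every == 0:
        y = truth[0] + rng.normal(0, np.sqrt(r))          # observe x only
        y_pert = y + rng.normal(0, np.sqrt(r), n_ens)     # perturbed observations
        hx = ens[:, 0]
        innov = y_pert - hx
        var_hx = np.var(hx, ddof=1) + r
        # filter update at the current time; the same observation then updates
        # all previously stored ensembles -- the sequential smoother idea
        for k, past in enumerate(history):
            gain = np.cov(past.T, hx, ddof=1)[:3, 3] / var_hx
            history[k] = past + np.outer(innov, gain)
        ens = history[-1]

print("final smoother mean:", history[-1].mean(axis=0))
print("truth:              ", truth)
```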
Gabor wavelets and Gaussian models to separate ground and non-ground for airborne scanned LIDAR data
Abstract:
We compare a number of models of post-war US output growth in terms of the degree and pattern of non-linearity they impart to the conditional mean, where we condition on either the previous period's growth rate or the previous two periods' growth rates. The conditional means are estimated non-parametrically using a nearest-neighbour technique on data simulated from the models. In this way, we condense the complex, dynamic responses that may be present into graphical displays of the implied conditional mean.
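A minimal sketch of the nearest-neighbour conditional-mean idea follows: simulate a long series from a model, then estimate E[y_t | y_{t-1} = x] by averaging over the k simulated points whose lagged value is closest to x. The threshold-autoregressive generating model and the settings below are placeholders, not any of the paper's output-growth models.

```python
# Hedged sketch (illustrative only): nearest-neighbour estimation of the
# one-lag conditional mean from data simulated by an assumed nonlinear model.
import numpy as np

rng = np.random.default_rng(2)

def simulate_tar(n=5000):
    """Simulate a simple two-regime threshold autoregression (assumed model)."""
    y = np.zeros(n)
    for t in range(1, n):
        phi = 0.8 if y[t - 1] > 0 else 0.2
        y[t] = phi * y[t - 1] + rng.normal(0, 1.0)
    return y

def knn_conditional_mean(x, y_lag, y_now, k=100):
    """Average y_t over the k simulated points whose y_{t-1} is closest to x."""
    idx = np.argsort(np.abs(y_lag - x))[:k]
    return y_now[idx].mean()

y = simulate_tar()
y_lag, y_now = y[:-1], y[1:]
grid = np.linspace(-3, 3, 13)
print([round(knn_conditional_mean(x, y_lag, y_now), 2) for x in grid])
```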
Combining altimetric/gravimetric and ocean model mean dynamic topography models in the GOCINA region
Abstract:
We study global atmosphere models that are at least as accurate as the hydrostatic primitive equations (HPEs), reviewing known results and reporting some new ones. The HPEs make spherical geopotential and shallow atmosphere approximations in addition to the hydrostatic approximation. As is well known, a consistent application of the shallow atmosphere approximation requires omission of those Coriolis terms that vary as the cosine of latitude and of certain other terms in the components of the momentum equation. An approximate model is here regarded as consistent if it formally preserves conservation principles for axial angular momentum, energy and potential vorticity, and (following R. Müller) if its momentum component equations have Lagrange's form. Within these criteria, four consistent approximate global models, including the HPEs themselves, are identified in a height-coordinate framework. The four models, each of which includes the spherical geopotential approximation, correspond to whether the shallow atmosphere and hydrostatic (or quasi-hydrostatic) approximations are individually made or not made. Restrictions on representing the spatial variation of apparent gravity occur. Solution methods and the situation in a pressure-coordinate framework are discussed. © Crown copyright 2005.
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
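The sketch below illustrates the basic feedback-linearization step for a first-order affine-in-control plant, with a small tanh network standing in for the empirical dynamic model. The plant, the hand-picked model weights and the outer-loop gain are assumptions chosen so the example stays exact and short; it is not the paper's predictive-control scheme or its mixing-process case study.

```python
# Hedged sketch (illustrative, not the paper's controller): feedback
# linearization for a first-order affine-in-control plant dx/dt = f(x) + g*u,
# where f is represented by a one-hidden-layer tanh model. For the sketch the
# network weights are chosen so the model matches the assumed plant exactly;
# in practice they would come from training on plant data.
import numpy as np

def f_true(x):                       # assumed plant nonlinearity
    return -2.0 * np.tanh(x)

g = 1.0                              # assumed constant input gain

W1, b1 = np.array([1.0]), np.array([0.0])     # hand-picked model weights
W2 = np.array([-2.0])

def f_model(x):
    """Neural-network model of the plant nonlinearity."""
    return float(W2 @ np.tanh(W1 * x + b1))

def control(x, v):
    """Feedback-linearizing law: cancel the modelled dynamics, impose dx/dt = v."""
    return (v - f_model(x)) / g

x, dt, x_ref = 2.0, 0.01, 1.0
for _ in range(1000):
    v = -2.0 * (x - x_ref)           # desired linear error dynamics
    x = x + dt * (f_true(x) + g * control(x, v))
print("final state:", round(x, 3), "target:", x_ref)
```

Because the model here matches the plant exactly, the closed loop behaves like the imposed linear error dynamics; with an imperfect trained model the cancellation is only approximate.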
Abstract:
Dense deployments of wireless local area networks (WLANs) are fast becoming a permanent feature of all developed cities around the world. While this increases capacity and coverage, the problem of increased interference, which is exacerbated by the limited number of channels available, can severely degrade the performance of WLANs if an effective channel assignment scheme is not employed. In an earlier work, an asynchronous, distributed and dynamic channel assignment scheme has been proposed that (1) is simple to implement, (2) does not require any knowledge of the throughput function, and (3) allows asynchronous channel switching by each access point (AP). In this paper, we present an extensive performance evaluation of this scheme when it is deployed in the more practical non-uniform and dynamic topology scenarios. Specifically, we investigate its effectiveness (1) when APs are deployed in a non-uniform fashion, resulting in some APs suffering from higher levels of interference than others, and (2) when APs are effectively switched 'on/off' due to the availability or lack of traffic at different times, which creates a dynamically changing network topology. Simulation results based on actual WLAN topologies show that robust performance gains over other channel assignment schemes can still be achieved even in these realistic scenarios.
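For orientation only, the sketch below shows a generic asynchronous, greedy least-interference channel-selection loop with APs switching on and off. It is a stand-in to illustrate the setting, not necessarily the scheme the paper evaluates, and the AP layout, path-loss model and channel count are assumptions.

```python
# Hedged sketch: a generic asynchronous, greedy least-interference channel
# selection loop, given only to illustrate distributed channel assignment under
# non-uniform layouts and on/off APs; it is not necessarily the paper's scheme.
import numpy as np

rng = np.random.default_rng(3)
n_aps, n_channels = 20, 3
pos = rng.uniform(0, 100, size=(n_aps, 2))        # non-uniform AP layout (assumed)
chan = rng.integers(0, n_channels, size=n_aps)
active = np.ones(n_aps, dtype=bool)               # APs may switch on/off

def interference(i, c):
    """Co-channel power seen at AP i if it used channel c (assumed 1/d^3 path loss)."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    mask = active & (chan == c) & (np.arange(n_aps) != i)
    return np.sum(1.0 / np.maximum(d[mask], 1.0) ** 3)

for step in range(200):                           # asynchronous updates
    i = rng.integers(n_aps)                       # one AP re-decides at a time
    if rng.random() < 0.05:
        active[i] = not active[i]                 # traffic appears/disappears
    if active[i]:
        chan[i] = int(np.argmin([interference(i, c) for c in range(n_channels)]))

print("channel usage among active APs:", np.bincount(chan[active], minlength=n_channels))
```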
Abstract:
1. Closed Ecological Systems (CES) are small man-made ecosystems which do not have any material exchange with the surrounding environment. Recent ecological and technological advances enable successful establishment and maintenance of CES, making them a suitable tool for detecting and measuring subtle feedbacks and mechanisms. 2. As a part of an analogue (physical) C cycle modelling experiment, we developed a non-intrusive methodology to control the internal environment and to monitor atmospheric CO2 concentration inside 16 replicated CES. Whilst maintaining an air-tight seal of all CES, this approach allowed for access to the CO2 measuring equipment for periodic re-calibration and repairs. 3. To ensure reliable cross-comparison of CO2 observations between individual CES units and to minimise the cost of the system, only one CO2 sampling unit was used. An ADC BioScientific OP-2 (open-path) analyser mounted on a swinging arm passed over a set of 16 measuring cells. Each cell was connected to an individual CES with air continuously circulating between them. 4. Using this setup, we were able to continuously measure several environmental variables and CO2 concentration within each closed system, allowing us to study minute effects of changing temperature on C fluxes within each CES. The CES and the measuring cells showed minimal air leakage during an experimental run lasting, on average, 3 months. The CO2 analyser assembly performed reliably for over 2 years; however, an early iteration of the present design proved to be sensitive to positioning errors. 5. We indicate how the methodology can be further improved and suggest possible avenues where future CES-based research could be applied.
Abstract:
A nonlinear regression structure comprising a wavelet network and a linear term is proposed for system identification. The theoretical foundation of the approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such models is described and the approach is tested with experimental data.
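A minimal sketch of such a structure, a linear term plus radial wavelet units fitted by linear least squares for fixed centres and scales, follows. The Mexican-hat wavelet, the toy data and the way centres and scales are chosen here are assumptions for illustration rather than the paper's constructive procedure.

```python
# Hedged sketch (illustrative only): regression combining a linear term with
# radial wavelet units, fitted by least squares for fixed centres and scales.
import numpy as np

rng = np.random.default_rng(4)

def mexican_hat(r):
    """Radial 'Mexican hat' wavelet as a function of radius r (assumed choice)."""
    return (1.0 - r**2) * np.exp(-0.5 * r**2)

# toy data: scalar input, nonlinear target (assumptions for the demo)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * x[:, 0]) + 0.3 * x[:, 0] + rng.normal(0, 0.05, 400)

centres = np.linspace(-3, 3, 9).reshape(-1, 1)     # assumed fixed centres
scale = 0.7                                        # assumed common scale

# design matrix: [constant, linear term, radial wavelet units]
r = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) / scale
Phi = np.column_stack([np.ones(len(x)), x, mexican_hat(r)])

coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
resid = y - Phi @ coef
print("RMS residual:", round(float(np.sqrt(np.mean(resid**2))), 4))
```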