979 results for variational mean-field method


Relevance: 30.00%

Abstract:

We propose an elevation-dependent calibration method to correct for the water vapour-induced delays over Mt. Etna that affect the interferometric synthetic aperture radar (InSAR) results. Water vapour delay fields are modelled from individual zenith delay estimates on a network of continuous GPS receivers. These are interpolated using simple kriging with varying local means over two domains, above and below 2 km in altitude. Test results with data from a meteorological station and 14 continuous GPS stations over Mt. Etna show that a reduction of the mean phase delay field of about 27% is achieved after the model is applied to a 35-day interferogram. (C) 2006 Elsevier Ltd. All rights reserved.
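
The interpolation step described above can be illustrated with a small sketch: zenith delays at GPS stations are split into two altitude domains, a local mean is removed in each, and the residuals are interpolated by simple kriging under an assumed exponential covariance. Station coordinates, covariance parameters, and the synthetic delays below are placeholders, not values from the study.

```python
import numpy as np

def simple_kriging_local_means(xy, z, alt, xy_new, alt_new,
                               sill=1.0, corr_len=10e3, split_alt=2000.0):
    """Interpolate zenith delays z (m) observed at station locations xy (m) by
    simple kriging with varying local means: a separate mean is used above and
    below split_alt, and the residuals are kriged with an assumed exponential
    covariance (sill and corr_len are illustrative, not calibrated values)."""
    cov = lambda h: sill * np.exp(-h / corr_len)
    means = {dom: z[(alt >= split_alt) == dom].mean() for dom in (False, True)}
    resid = z - np.where(alt >= split_alt, means[True], means[False])
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = cov(d) + 1e-10 * np.eye(len(z))          # kriging matrix (with jitter)
    z_new = np.empty(len(xy_new))
    for i, (p, a) in enumerate(zip(xy_new, alt_new)):
        w = np.linalg.solve(C, cov(np.linalg.norm(xy - p, axis=1)))
        z_new[i] = means[bool(a >= split_alt)] + w @ resid
    return z_new

# toy usage: 14 stations, three interpolation points
rng = np.random.default_rng(0)
xy = rng.uniform(0, 30e3, size=(14, 2))
alt = rng.uniform(500, 3000, size=14)
z = 2.3 - 5e-4 * alt + rng.normal(0, 0.02, 14)   # synthetic zenith delays (m)
print(simple_kriging_local_means(xy, z, alt, xy[:3] + 100.0, alt[:3]))
```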

Relevance: 30.00%

Abstract:

Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and Lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations of the spaceborne Lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. Therefore, a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and Lidar missions such as NASA's Calipso and CloudSat.
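
The paper's analytical treatment accounts for the along-transect correlation implied by the cloud and clear-interval length statistics; the sketch below shows only the simplest version of the underlying idea, a Beta posterior for the cloud fraction in which the effective number of independent samples is approximated by the number of contiguous cloud/clear segments. The uniform prior and the segment-counting shortcut are assumptions, not the published method.

```python
import numpy as np

def cloud_fraction_posterior(transect, a0=1.0, b0=1.0):
    """Crude Bayesian cloud-fraction estimate from a 0/1 transect. Neighbouring
    pixels are correlated, so the effective sample size is approximated by the
    number of contiguous cloud/clear segments (a simplifying assumption)."""
    transect = np.asarray(transect, dtype=int)
    f_obs = transect.mean()                            # observed cloudy fraction
    n_eff = 1 + np.count_nonzero(np.diff(transect))    # contiguous segments
    a = a0 + f_obs * n_eff                             # Beta(a, b) posterior
    b = b0 + (1.0 - f_obs) * n_eff
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))       # analytical sampling variance
    return mean, var

# synthetic transect built from alternating cloud/clear runs
rng = np.random.default_rng(1)
runs = rng.geometric(0.1, size=40)                     # run lengths (pixels)
transect = np.repeat(np.arange(40) % 2, runs)          # 0 = clear, 1 = cloud
print(cloud_fraction_posterior(transect))
```

Consistent with the findings above, the Beta posterior variance peaks near 50% cloud cover and shrinks as the number of segments grows, i.e. for shorter cloud and clear lengths or more transects.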

Relevance: 30.00%

Abstract:

In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is $O(N^{-(\nu+1)}\log^{1/2}N)$, where the number of degrees of freedom is proportional to $N\log N$. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
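
The grading of the mesh towards impedance discontinuities can be sketched in a few lines; the polynomial grading exponent and the interval below are illustrative choices, not those analysed in the paper.

```python
import numpy as np

def graded_mesh(a, b, n, q=3.0):
    """n-element mesh on [a, b] graded towards the endpoint a: element
    endpoints x_j = a + (b - a) * (j / n)**q, so the elements adjacent to the
    impedance discontinuity at a are much smaller than those far from it."""
    j = np.arange(n + 1)
    return a + (b - a) * (j / n) ** q

x = graded_mesh(0.0, 1.0, 8)          # discontinuity assumed at x = 0
print(np.round(np.diff(x), 5))        # element sizes grow away from x = 0
```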

Relevance: 30.00%

Abstract:

In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is particularly effective for large intervals or high wavenumbers. In a final section we sketch how the same methodology extends to more general scattering problems.
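
On a single element the enriched approximation space described above can be written down explicitly: polynomials of degree up to $\nu$ in a local coordinate, multiplied by the boundary traces of plane waves travelling in both directions. The sketch below only evaluates such a basis; the Galerkin assembly with oscillatory quadrature is not shown, and all numerical values are placeholders.

```python
import numpy as np

def element_basis(x, x_left, x_right, k, nu):
    """Evaluate the plane-wave-enriched basis on one element [x_left, x_right]:
    local polynomials of degree 0..nu multiplied by the traces exp(+ikx) and
    exp(-ikx) of plane waves propagating along the boundary."""
    t = (x - x_left) / (x_right - x_left)                        # local coordinate in [0, 1]
    polys = np.array([t ** p for p in range(nu + 1)])            # (nu+1, len(x))
    waves = np.array([np.exp(1j * k * x), np.exp(-1j * k * x)])  # (2, len(x))
    return (polys[:, None, :] * waves[None, :, :]).reshape(-1, len(x))

x = np.linspace(0.0, 0.1, 5)
B = element_basis(x, 0.0, 0.1, k=50.0, nu=2)   # 2*(nu+1) = 6 basis functions at 5 points
print(B.shape)
```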

Relevance: 30.00%

Abstract:

We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier proofs based on the reciprocity relation. This allows us to handle the case of limited-aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.

Relevance: 30.00%

Abstract:

A vertical conduction current flows in the atmosphere as a result of the global atmospheric electric circuit. The current at the surface consists of the conduction current and a locally generated displacement current, which are often approximately equal in magnitude. A method of separating the two currents using two collectors of different geometry is investigated. The picoammeters connected to the collectors have an RC time constant of approximately 3 s, permitting the investigation of higher-frequency air-earth current changes than previously achieved. The displacement current component of the air-earth current derived from the instrument agrees with calculations using simultaneous data from a co-located fast-response electric field mill. The mean value of the nondisplacement current measured over 9 h was 1.76 ± 0.002 pA m⁻². (c) 2006 American Institute of Physics.
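
The separation relies on the two collectors responding with different effective sensitivities to the conduction current and to the displacement current driven by the changing electric field, so the two components follow from a 2×2 linear solve on the simultaneous measurements. The response matrix below is a hypothetical placeholder, not the instrument's calibration.

```python
import numpy as np

# Assumed effective responses (placeholders): measured_i = a_i * J_cond + b_i * J_disp
A = np.array([[1.0, 1.0],    # collector 1: responds equally to both components
              [1.0, 0.2]])   # collector 2: geometry suppresses the displacement term

def separate_currents(i1, i2):
    """Recover conduction and displacement current densities (pA m^-2) from
    simultaneous measurements by the two collectors."""
    j_cond, j_disp = np.linalg.solve(A, np.array([i1, i2]))
    return j_cond, j_disp

# toy example: 1.76 pA m^-2 conduction current plus a 0.9 pA m^-2 displacement term
print(separate_currents(1.76 + 0.9, 1.76 + 0.2 * 0.9))   # -> (1.76, 0.9)
```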

Relevance: 30.00%

Abstract:

A method for in situ detection of atmospheric turbulence has been developed using an inexpensive sensor carried within a conventional meteorological radiosonde. The sensor, a Hall effect magnetometer, was used to monitor the terrestrial magnetic field. Rapid (10 s or less) fluctuations in the magnetic field measurement were related to the motion of the radiosonde, which was strongly influenced by atmospheric turbulence. Comparison with cloud radar measurements showed turbulence in regions where these rapid magnetic fluctuations occurred. Reliable measurements were obtained between the surface and the stratosphere.
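
One plausible way to turn the magnetometer record into a turbulence indicator, in the spirit of the description above, is to remove a running mean with a window comparable to 10 s and flag samples where the residual fluctuation level is anomalously high. Sample rate, window length, and threshold below are assumptions, not the instrument's actual processing.

```python
import numpy as np

def turbulence_flag(b, fs=2.0, window_s=10.0, n_sigma=3.0):
    """Flag turbulent sections of a magnetometer series b (nT, sampled at fs Hz):
    high-pass by subtracting a running mean over window_s seconds, then flag
    samples whose local RMS exceeds n_sigma times the series-wide residual RMS."""
    w = max(1, int(window_s * fs))
    kernel = np.ones(w) / w
    resid = b - np.convolve(b, kernel, mode="same")            # rapid fluctuations
    local_rms = np.sqrt(np.convolve(resid ** 2, kernel, mode="same"))
    return local_rms > n_sigma * np.sqrt(np.mean(resid ** 2))

# synthetic ascent: quiet background field with one turbulent patch
rng = np.random.default_rng(2)
b = 48000.0 + rng.normal(0, 2.0, 4000)
b[1500:1700] += rng.normal(0, 40.0, 200)                       # strong fluctuations
flags = turbulence_flag(b)
print(flags[1550:1560].all(), flags[:100].any())               # expect True False
```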

Relevance: 30.00%

Abstract:

We investigate the “flux excess” effect, whereby open solar flux estimates from spacecraft increase with increasing heliocentric distance. We analyze the kinematic effect on these open solar flux estimates of large-scale longitudinal structure in the solar wind flow, with particular emphasis on correcting estimates made using data from near-Earth satellites. We show that scatter, but no net bias, is introduced by the kinematic “bunching effect” on sampling and that this is true for both compression and rarefaction regions. The observed flux excesses, as a function of heliocentric distance, are shown to be consistent with open solar flux estimates from solar magnetograms made using the potential field source surface method and are well explained by the kinematic effect of solar wind speed variations on the frozen-in heliospheric field. Applying this kinematic correction to the Omni-2 interplanetary data set shows that the open solar flux at solar minimum fell from an annual mean of 3.82 × 10¹⁴ Wb in 1987 to close to half that value (1.98 × 10¹⁴ Wb) in 2007, making the fall in the minimum value over the last two solar cycles considerably faster than the rise inferred from geomagnetic activity observations over four solar cycles in the first half of the 20th century.
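
The estimator whose distance dependence is at issue here is simple: the open solar flux (of one polarity) follows from the mean unsigned radial field as F = 2πR²⟨|Br|⟩, and the result depends on how much the radial field is averaged before the modulus is taken, which is where the flux excess enters. The synthetic sector field and averaging windows below are placeholders; the paper's kinematic correction itself is not reproduced.

```python
import numpy as np

AU = 1.496e11   # heliocentric distance of the observations, m

def open_flux_estimate(br_nT, avg_window):
    """Open solar flux (one polarity, Wb) from a radial-field series br_nT (nT):
    average Br over avg_window samples BEFORE taking the modulus, then apply
    F = 2*pi*R^2*<|Br|>. Longer pre-averaging suppresses small-scale structure
    that otherwise inflates the estimate (the 'flux excess')."""
    n = (len(br_nT) // avg_window) * avg_window
    br_avg = br_nT[:n].reshape(-1, avg_window).mean(axis=1)
    return 2.0 * np.pi * AU ** 2 * np.mean(np.abs(br_avg)) * 1e-9

# synthetic hourly Br: +/- 2.5 nT sector structure plus small-scale noise
rng = np.random.default_rng(8)
hours = np.arange(24 * 365)
br = 2.5 * np.sign(np.sin(2 * np.pi * hours / (27.0 * 24))) + rng.normal(0.0, 2.0, hours.size)
print(f"hourly: {open_flux_estimate(br, 1):.2e} Wb,  daily-averaged: {open_flux_estimate(br, 24):.2e} Wb")
```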

Relevance: 30.00%

Abstract:

Prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures, is essential for space weather forecasting. To achieve forecast lead times of a day or more, such predictions must be made on the basis of remote solar observations. A number of empirical prediction schemes have been proposed to forecast the transit time and speed of coronal mass ejections (CMEs) at 1 AU. However, the current lack of magnetic field measurements in the corona severely limits our ability to forecast the 1 AU magnetic field strengths resulting from interplanetary CMEs (ICMEs). In this study we investigate the relation between the characteristic magnetic field strengths and speeds of both magnetic cloud and noncloud ICMEs at 1 AU. Correlation between field and speed is found to be significant only in the sheath region ahead of magnetic clouds, not within the clouds themselves. The lack of such a relation in the sheaths ahead of noncloud ICMEs is consistent with such ICMEs being skimming encounters of magnetic clouds, though other explanations are also put forward. Linear fits to the radial speed profiles of ejecta reveal that faster-traveling ICMEs are also expanding more at 1 AU. We combine these empirical relations to form a prediction scheme for the magnetic field strength in the sheaths ahead of magnetic clouds and also suggest a method for predicting the radial speed profile through an ICME on the basis of upstream measurements.
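
An empirical prediction scheme of the kind described can be as simple as a linear fit of observed sheath field strength against ICME speed, applied to a new speed estimate. The event list and the resulting coefficients below are invented placeholders used only to show the mechanics, not the relations derived in the study.

```python
import numpy as np

# hypothetical historical events: (ICME speed at 1 AU in km/s, sheath |B| in nT)
speeds = np.array([380., 450., 520., 600., 700., 820., 950.])
sheath_b = np.array([8., 10., 12., 15., 18., 22., 27.])

slope, intercept = np.polyfit(speeds, sheath_b, 1)     # empirical linear relation

def predict_sheath_field(v_icme):
    """Predict the magnetic field strength (nT) in the sheath ahead of a magnetic
    cloud from its speed (km/s) using the fitted linear relation."""
    return slope * v_icme + intercept

print(round(predict_sheath_field(750.0), 1))
```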

Relevance: 30.00%

Abstract:

The effect of fluctuating daily surface fluxes on the time-mean oceanic circulation is studied using an empirical flux model. The model produces fluctuating fluxes resulting from atmospheric variability and includes oceanic feedbacks on the fluxes. Numerical experiments were carried out by driving an ocean general circulation model with three different versions of the empirical model. It is found that fluctuating daily fluxes lead to an increase in the meridional overturning circulation (MOC) of the Atlantic of about 1 Sv and a decrease in the Antarctic Circumpolar Current (ACC) of about 32 Sv. The changes are approximately 7% of the MOC and 16% of the ACC obtained without fluctuating daily fluxes. The fluctuating fluxes change the intensity and the depth of vertical mixing. This, in turn, changes the density field and thus the circulation. Fluctuating buoyancy fluxes change the vertical mixing in a non-linear way: they tend to increase the convective mixing in mostly stable regions and to decrease the convective mixing in mostly unstable regions. The ACC changes are related to the enhanced mixing in the subtropical and the mid-latitude Southern Ocean and reduced mixing in the high-latitude Southern Ocean. The enhanced mixing is related to an increase in the frequency and the depth of convective events. As these events bring more dense water downward, the mixing changes lead to a reduction in the meridional gradient of the depth-integrated density in the Southern Ocean and hence in the strength of the ACC. The MOC changes are related to more subtle density changes. It is found that the vertical mixing in a latitudinal strip in the northern North Atlantic is more strongly enhanced due to fluctuating fluxes than the mixing in a latitudinal strip in the South Atlantic. This leads to an increase in the density difference between the two strips, which can be responsible for the increase in the Atlantic MOC.
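
The non-linear response of convective mixing to zero-mean fluctuations can be illustrated with a toy threshold argument: if convection occurs whenever the daily buoyancy flux is destabilizing, then adding fluctuations raises the frequency of convective events where the mean flux is stabilizing and lowers it where the mean flux is already destabilizing. This is only a conceptual illustration, not the mixing scheme of the ocean model used above; all numbers are placeholders.

```python
import numpy as np

def convective_fraction(mean_flux, fluct_std, n=100_000, seed=7):
    """Fraction of days with a destabilizing (negative) surface buoyancy flux,
    given a mean flux plus zero-mean daily fluctuations of std fluct_std."""
    rng = np.random.default_rng(seed)
    daily = mean_flux + fluct_std * rng.standard_normal(n)
    return float(np.mean(daily < 0.0))

# mostly stable region: fluctuations create convective events that were absent
print(convective_fraction(+1.0, 0.0), convective_fraction(+1.0, 2.0))
# mostly unstable region: fluctuations suppress some convective events
print(convective_fraction(-1.0, 0.0), convective_fraction(-1.0, 2.0))
```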

Relevance: 30.00%

Abstract:

Recent observations from the Argo dataset of temperature and salinity profiles are used to evaluate a series of 3-year data assimilation experiments in a global ice–ocean general circulation model. The experiments are designed to evaluate a new data assimilation system whereby salinity is assimilated along isotherms, S(T). In addition, the role of a balancing salinity increment to maintain water mass properties is investigated. This balancing increment is found to effectively prevent spurious mixing in tropical regions induced by univariate temperature assimilation, allowing the correction of isotherm geometries without adversely influencing temperature–salinity relationships. In addition, the balancing increment is able to correct a fresh bias associated with a weak subtropical gyre in the North Atlantic using only temperature observations. The S(T) assimilation method is found to provide an important improvement over conventional depth level assimilation, with lower root-mean-squared forecast errors over the upper 500 m in the tropical Atlantic and Pacific Oceans. An additional set of experiments is performed whereby Argo data are withheld and used for independent evaluation. The most significant improvements from Argo assimilation are found in less well-observed regions (Indian, South Atlantic and South Pacific Oceans). When Argo salinity data are assimilated in addition to temperature, improvements to modelled temperature fields are obtained due to corrections to model density gradients and the resulting circulation. It is found that observations from the Argo array provide an invaluable tool for both correcting modelled water mass properties through data assimilation and for evaluating the assimilation methods themselves.
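
The idea of assimilating salinity along isotherms can be sketched as follows: express the observed profile as salinity-versus-temperature, evaluate it at the model's temperatures, and form the increment against the model salinity at those same temperatures, so that errors in isotherm depth do not corrupt the T–S relationship. The toy profiles and the linear interpolation are illustrative only; the balancing salinity increment and the full assimilation machinery are not shown.

```python
import numpy as np

def s_of_t_increment(t_model, s_model, t_obs, s_obs):
    """Salinity increment along isotherms, S(T): interpolate the observed
    salinity onto the model temperature at each model level and difference it
    against the model salinity there."""
    order = np.argsort(t_obs)                          # np.interp needs increasing x
    s_obs_at_model_t = np.interp(t_model, t_obs[order], s_obs[order])
    return s_obs_at_model_t - s_model

# toy profiles (surface to depth); values are placeholders
t_model = np.array([28.0, 24.0, 18.0, 10.0, 5.0])
s_model = np.array([34.2, 34.6, 35.0, 34.8, 34.6])
t_obs   = np.array([27.0, 22.0, 15.0, 8.0, 4.0])
s_obs   = np.array([34.3, 34.7, 35.1, 34.9, 34.7])
print(np.round(s_of_t_increment(t_model, s_model, t_obs, s_obs), 3))
```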

Relevance: 30.00%

Abstract:

Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
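
The first (thermohaline threshold) method lends itself to a short sketch: the velocity normal to the section is integrated only over grid cells warmer and more saline than the chosen thresholds, and the resulting time series is calibrated against a reference leakage estimate by linear regression. Threshold values, cell areas, and the synthetic sections below are placeholders, not the model fields or the optimized thresholds.

```python
import numpy as np

def threshold_transport(v, temp, salt, dA, t_min=14.5, s_min=35.0):
    """Eulerian transport (Sv) across a section, counting only cells warmer
    than t_min (degC) and more saline than s_min (psu)."""
    mask = (temp > t_min) & (salt > s_min)
    return np.sum(v * dA * mask) / 1e6                    # m^3 s^-1 -> Sv

# synthetic monthly sections and a float-derived leakage series (all placeholders)
rng = np.random.default_rng(3)
n_t, n_cells = 120, 500
dA = np.full(n_cells, 1.0e6)                              # cell areas, m^2
leak_floats = 15.0 + 3.0 * rng.standard_normal(n_t)       # reference leakage, Sv
eulerian = np.array([threshold_transport(
        rng.normal(0.05, 0.1, n_cells) * leak_floats[i] / 15.0,
        rng.normal(16.0, 3.0, n_cells),
        rng.normal(35.1, 0.3, n_cells), dA) for i in range(n_t)])
slope, intercept = np.polyfit(eulerian, leak_floats, 1)   # regression calibration
print(round(slope, 2), round(intercept, 2))
```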

Relevance: 30.00%

Abstract:

Variational calculations of the vibration–rotation energy levels of many isotopomers of HCN are reported, for J=0, 1, and 2, extending up to approximately 8 quanta of each of the stretching vibrations and 14 quanta of the bending mode. The force field, which is represented as a polynomial expansion in Morse coordinates for the bond stretches and even powers of the angle bend, has been refined by least squares to fit simultaneously all observed data on the Σ and Π state vibrational energies, and the Σ state rotational constants, for both HCN and DCN. The observed vibrational energies are fitted to roughly ±0.5 cm⁻¹, and the rotational constants to roughly ±0.0001 cm⁻¹. The force field has been used to predict the vibration–rotation spectra of many isotopomers of HCN up to 25 000 cm⁻¹. The results are consistent with the axis-switching assignments of some weak overtone bands reported recently by Jonas, Yang, and Wodtke, and they also fit and provide the assignment for recent observations by Romanini and Lehmann of very weak absorption bands above 20 000 cm⁻¹.
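
The form of the force field described above, a polynomial expansion in Morse coordinates for the stretches and even powers of the bend, can be written down directly. The expansion coefficients and Morse parameters below are arbitrary placeholders, not the refined HCN/DCN force field; only the equilibrium bond lengths are taken as standard values.

```python
import numpy as np

def morse_coordinate(r, r_e, a):
    """Morse coordinate y = 1 - exp(-a (r - r_e)) for a bond stretch."""
    return 1.0 - np.exp(-a * (r - r_e))

def potential(r_ch, r_cn, theta, coeffs, r_e=(1.065, 1.153), a=(2.0, 2.2)):
    """V = sum over (i, j, k) of c[i,j,k] * y1^i * y2^j * dtheta^(2k), with
    y1, y2 the Morse coordinates of the C-H and C-N stretches (Angstrom) and
    dtheta the bend displacement from linearity (even powers by symmetry)."""
    y1 = morse_coordinate(r_ch, r_e[0], a[0])
    y2 = morse_coordinate(r_cn, r_e[1], a[1])
    dtheta = np.pi - theta
    return sum(c * y1 ** i * y2 ** j * dtheta ** (2 * k)
               for (i, j, k), c in coeffs.items())

# placeholder quadratic-level coefficients (aJ), purely illustrative
coeffs = {(2, 0, 0): 3.1, (0, 2, 0): 9.3, (1, 1, 0): 0.1, (0, 0, 1): 0.26}
print(round(potential(1.10, 1.16, np.deg2rad(175.0), coeffs), 4))
```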

Relevance: 30.00%

Abstract:

Targeted observations are generally taken in regions of high baroclinicity, but often show little impact. One plausible explanation is that important dynamical information, such as upshear tilt, is not extracted from the targeted observations by the data assimilation scheme and used to correct initial condition error. This is investigated by generating pseudo targeted observations which contain a singular vector (SV) structure that is not present in the background field or routine observations, i.e. assuming that the background has an initial condition error with tilted growing structure. Experiments were performed for a single case-study with varying numbers of pseudo targeted observations. These were assimilated by the Met Office four-dimensional variational (4D-Var) data assimilation scheme, which uses a 6 h window for observations and background-error covariances calculated using the National Meteorological Centre (NMC) method. The forecasts were run using the operational Met Office Unified Model on a 24 km grid. The results presented clearly demonstrate that a 6 h window 4D-Var system is capable of extracting baroclinic information from a limited set of observations and using it to correct initial condition error. To capture the SV structure well (projection of 0.72 in total energy), 50 sondes over an area of 1×10⁶ km² were required. When the SV was represented by only eight sondes along an example targeting flight track covering a smaller area, the projection onto the SV structure was lower; the resulting forecast perturbations showed an SV structure with increased tilt and reduced initial energy. The total energy contained in the perturbations decreased as the SV structure was less well described by the set of observations (i.e. as fewer pseudo observations were assimilated). The assimilated perturbation had lower energy than the SV unless the pseudo observations were assimilated with the dropsonde observation errors halved from operational values. Copyright © 2010 Royal Meteorological Society
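
The projection diagnostic quoted above (0.72 in total energy) can be sketched as a normalized inner product between the analysis perturbation and the singular vector under a total-energy norm. A diagonal energy weighting and random toy vectors are used below as placeholders for the full energy metric and model state.

```python
import numpy as np

def energy_projection(pert, sv, energy_weights):
    """Projection of a forecast perturbation onto a singular vector under a
    total-energy inner product <x, y>_E = sum_i E_i x_i y_i (diagonal E here);
    a value of 1 means the SV structure is fully captured."""
    dot = lambda x, y: np.sum(energy_weights * x * y)
    return dot(pert, sv) / np.sqrt(dot(pert, pert) * dot(sv, sv))

# toy state vectors (placeholders): a perturbation that partially captures the SV
rng = np.random.default_rng(4)
n = 1000
E = rng.uniform(0.5, 2.0, n)                     # diagonal total-energy weights
sv = rng.standard_normal(n)
pert = 0.8 * sv + 0.3 * rng.standard_normal(n)   # captured SV plus unrelated noise
print(round(energy_projection(pert, sv, E), 2))
```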

Relevance: 30.00%

Abstract:

Remote sensing from space-borne platforms is often seen as an appealing method of monitoring components of the hydrological cycle, including river discharge, due to its spatial coverage. However, data from these platforms are often less than ideal because the geophysical properties of interest are rarely measured directly and the measurements that are taken can be subject to significant errors. This study assimilated water levels derived from a TerraSAR-X synthetic aperture radar image and digital aerial photography with simulations from a two-dimensional hydraulic model to estimate discharge, inundation extent, depths and velocities at the confluence of the rivers Severn and Avon, UK. An ensemble Kalman filter was used to assimilate spot-height water levels derived by intersecting shorelines from the imagery with a digital elevation model. Discharge was estimated from the ensemble of simulations using state augmentation and then compared with gauge data. Assimilating the real data reduced the error between analyzed mean water levels and levels from three gauging stations to less than 0.3 m, which is less than typically found in post-event water mark data from the field at these scales. Measurement bias was evident, but the method still provided a means of improving estimates of discharge for high flows where gauge data are unavailable or of poor quality. Posterior estimates of discharge had standard deviations between 52.7 m³ s⁻¹ and 63.3 m³ s⁻¹, which were below 15% of the gauged flows along the reach. Therefore, assuming a roughness uncertainty of 0.03–0.05 and no model structural errors, discharge could be estimated by the EnKF with accuracy similar to that arguably expected from gauging stations during flood events. Quality control prior to assimilation, where measurements were rejected for being in areas of high topographic slope or close to tall vegetation and trees, was found to be essential. The study demonstrates the potential, but also the significant limitations, of currently available imagery to reduce discharge uncertainty in un-gauged or poorly gauged basins when combined with model simulations in a data assimilation framework.
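
A minimal sketch of the assimilation step described above: a stochastic ensemble Kalman filter analysis in which the state vector of water levels is augmented with the inflow discharge, so assimilating water-level observations also updates the discharge estimate. The ensemble size, the crude level-discharge relation used to build the prior, the identity observation operator, and all error values are assumptions, not the study's configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_idx, obs_err, seed=5):
    """Stochastic EnKF analysis. ensemble is (n_members, n_state) with the last
    element of the state the augmented discharge; obs_idx selects the observed
    water-level elements; obs_err is the observation error std (m)."""
    n_mem = ensemble.shape[0]
    Hx = ensemble[:, obs_idx]                                 # observed part of each member
    X = ensemble - ensemble.mean(axis=0)                      # state anomalies
    Y = Hx - Hx.mean(axis=0)                                  # observation-space anomalies
    P_xy = X.T @ Y / (n_mem - 1)
    P_yy = Y.T @ Y / (n_mem - 1) + obs_err ** 2 * np.eye(len(obs_idx))
    K = P_xy @ np.linalg.inv(P_yy)                            # Kalman gain
    rng = np.random.default_rng(seed)
    perturbed_obs = obs + obs_err * rng.standard_normal((n_mem, len(obs_idx)))
    return ensemble + (perturbed_obs - Hx) @ K.T

# toy prior: 5 water-level nodes plus augmented discharge, 50 members (placeholders)
rng = np.random.default_rng(6)
discharge = rng.normal(400.0, 60.0, (50, 1))                       # m^3 s^-1
levels = 10.0 + 0.005 * discharge + rng.normal(0, 0.1, (50, 5))    # crude level response
ens = np.hstack([levels, discharge])
obs = np.array([12.1, 12.0, 12.2])                                 # image-derived levels (m)
analysis = enkf_update(ens, obs, obs_idx=[0, 1, 2], obs_err=0.3)
print(round(analysis[:, -1].mean(), 1))                            # posterior mean discharge
```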