970 results for "Error in essence"


Relevance:

80.00%

Publisher:

Abstract:

Identifying the signature of global warming in the world's oceans is challenging because low frequency circulation changes can dominate local temperature changes. The IPCC fourth assessment reported an average ocean heating rate of 0.21 ± 0.04 Wm−2 over the period 1961–2003, with considerable spatial, interannual and inter-decadal variability. We present a new analysis of millions of ocean temperature profiles designed to filter out local dynamical changes to give a more consistent view of the underlying warming. Time series of temperature anomaly for all waters warmer than 14°C show large reductions in interannual to inter-decadal variability and a more spatially uniform upper ocean warming trend (0.12 Wm−2 on average) than previous results. This new measure of ocean warming is also more robust to some sources of error in the ocean observing system. Our new analysis provides a useful addition to the traditional fixed-depth analyses for the evaluation of coupled climate models.
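As a rough scale check on heating rates like those quoted above, a flux in Wm−2 can be integrated over area and time to give total heat uptake in joules. A minimal sketch, assuming a round global ocean surface area of 3.6 × 10^14 m² (an assumed figure, not a value from the abstract, and glossing over whether the rate is normalised by ocean or total Earth surface area):

```python
# Convert an areal ocean heating rate (W m^-2) into total heat
# uptake (J) over a period. The ocean area is an assumed round
# figure, not a value from the abstract.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
OCEAN_AREA_M2 = 3.6e14  # assumed global ocean surface area

def total_heat_uptake(rate_w_m2, years, area_m2=OCEAN_AREA_M2):
    """Integrate a constant heating rate over area and time."""
    return rate_w_m2 * area_m2 * years * SECONDS_PER_YEAR

joules = total_heat_uptake(0.21, 2003 - 1961)  # AR4 rate, 1961-2003
print(f"{joules:.2e} J")  # roughly 1e23 J
```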

The performance of a 2D numerical model of flood hydraulics is tested for a major event in Carlisle, UK, in 2005. This event is associated with a unique data set, with GPS-surveyed wrack lines and flood extent surveyed 3 weeks after the flood. The Simple Finite Volume (SFV) model is used to solve the 2D Saint-Venant equations over an unstructured mesh of 30,000 elements covering channel and floodplain, allowing the detailed hydraulics of flow around bridge piers and other influential features to be resolved. The SFV model is also used to corroborate flows recorded for the event at two gauging stations. Calibration of Manning's n is performed with a two-stage strategy, with channel values determined by calibration of the gauging station models, and floodplain values determined by optimising the fit between model results and observed water levels and flood extent for the 2005 event. RMS error for the calibrated model compared with surveyed water levels is approximately ±0.4 m, the same order of magnitude as the estimated error in the survey data. The study demonstrates the ability of unstructured mesh hydraulic models to represent important hydraulic processes across a range of scales, with potential applications to flood risk management.
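The RMS error used above as the calibration measure can be sketched in a few lines; the paired water levels below are hypothetical values, not the Carlisle survey data:

```python
import math

def rms_error(modelled, surveyed):
    """Root-mean-square difference between paired water levels."""
    residuals = [m - s for m, s in zip(modelled, surveyed)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

modelled = [12.1, 12.5, 13.0, 12.8]  # hypothetical levels (m)
surveyed = [12.4, 12.2, 13.3, 12.6]
print(f"RMSE = {rms_error(modelled, surveyed):.2f} m")
```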

Measurements of the top-of-the-atmosphere outgoing longwave radiation (OLR) for July 2003 from Meteosat-7 are used to assess the performance of the numerical weather prediction version of the Met Office Unified Model. A significant difference is found over desert regions of northern Africa, where the model emits too much OLR by up to 35 Wm−2 in the monthly mean. By cloud-screening the data we find an error of up to 50 Wm−2 associated with cloud-free areas, which suggests an error in the model surface temperature, surface emissivity, or atmospheric transmission. By building up a physical model of the radiative properties of mineral dust based on in situ, surface-based, and satellite remote sensing observations, we show that the most plausible explanation for the discrepancy in OLR is the neglect of mineral dust in the model. The calculations suggest that mineral dust can exert a longwave radiative forcing of as much as 50 Wm−2 in the monthly mean for 1200 UTC in cloud-free regions, which accounts for the discrepancy between the model and the Meteosat-7 observations. This suggests that inclusion of the radiative effects of mineral dust will lead to a significant improvement in the radiation balance of numerical weather prediction models, with subsequent improvements in performance.

In the Eady model, where the meridional potential vorticity (PV) gradient is zero, perturbation energy growth can be partitioned cleanly into three mechanisms: (i) shear instability, (ii) resonance, and (iii) the Orr mechanism. Shear instability involves two-way interaction between Rossby edge waves on the ground and lid, resonance occurs as interior PV anomalies excite the edge waves, and the Orr mechanism involves only interior PV anomalies. These mechanisms have distinct implications for the structural and temporal linear evolution of perturbations. Here, a new framework is developed in which the same mechanisms can be distinguished for growth on basic states with nonzero interior PV gradients. It is further shown that the evolution from quite general initial conditions can be accurately described (peak error in perturbation total energy typically less than 10%) by a reduced system that involves only three Rossby wave components. Two of these are counterpropagating Rossby waves—that is, generalizations of the Rossby edge waves when the interior PV gradient is nonzero—whereas the other component depends on the structure of the initial condition and its PV is advected passively with the shear flow. In the cases considered, the three-component model outperforms approximate solutions based on truncating a modal or singular vector basis.

This paper deconstructs the relationship between the Environmental Sustainability Index (ESI) and national income. The ESI attempts to provide a single figure which encapsulates 'environmental sustainability' for each country included in the analysis, and this, allied with a 'league table' format so as to name and shame bad performers, has resulted in widespread reporting within the popular presses of a number of countries. In essence, the higher the value of the ESI, the more 'environmentally sustainable' a country is deemed to be. A logical progression beyond the use of the ESI to publicise environmental sustainability is its use within a more analytical context. Thus an index designed to simplify in order to have an impact on policy is used to try to understand the causes of good and bad performance in environmental sustainability. For example, the creators of the ESI claim that the ESI is related to GDP/capita (adjusted for Purchasing Power Parity) such that the ESI increases linearly with wealth. While this may in a sense be a comforting picture, do the variables within the ESI allow for alternatives to this story, and if they do, what are the repercussions for those producing such indices for broad consumption amongst policy makers, managers, the press, etc.? The latter point is especially important given the appetite for such indices amongst non-specialists; for all their weaknesses, the ESI and other such aggregated indices will not go away. (C) 2007 Elsevier Ltd. All rights reserved.

In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is ${\cal O}(N^{-(\nu+1)}\log^{1/2}N)$, where the number of degrees of freedom is proportional to $N\log N$. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
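To illustrate the character of this wavenumber-independent estimate, the sketch below evaluates the bound $N^{-(\nu+1)}\log^{1/2}N$ (constants ignored) for $\nu = 1$: doubling $N$ reduces the bound by roughly a factor of four.

```python
import math

def error_bound(N, nu=1):
    """Shape of the error estimate N^-(nu+1) * log^(1/2) N."""
    return N ** -(nu + 1) * math.sqrt(math.log(N))

for N in (16, 32, 64, 128):
    print(N, error_bound(N))
# successive ratios are close to 1/4 for nu = 1
```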

In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.

Although the potential importance of scattering of long-wave radiation by clouds has been recognised, most studies have concentrated on the impact of high clouds and few estimates of the global impact of scattering have been presented. This study shows that scattering in low clouds has a significant impact on outgoing long-wave radiation (OLR) in regions of marine stratocumulus (−3.5 W m−2 for overcast conditions) where the column water vapour is relatively low. This corresponds to an enhancement of the greenhouse effect of such clouds by 10%. The near-global impact of scattering on OLR is estimated to be −3.0 W m−2, with low clouds contributing −0.9 W m−2, mid-level cloud −0.7 W m−2 and high clouds −1.4 W m−2. Although this effect appears small compared to the global mean OLR of 240 W m−2, it indicates that neglect of scattering will lead to an error in cloud long-wave forcing of about 10% and an error in net cloud forcing of about 20%.
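The ~10% figure above is just the ratio of the scattering impact to a typical cloud long-wave forcing magnitude; a quick arithmetic check, where the 30 W m−2 forcing is an assumed round figure, not a value from the abstract:

```python
# Ratio of the near-global scattering impact on OLR to an assumed
# typical cloud long-wave forcing magnitude (30 W m^-2).
scattering_impact = -3.0   # W m^-2, from the abstract
cloud_lw_forcing = 30.0    # assumed round figure
print(abs(scattering_impact) / cloud_lw_forcing)  # 0.1
```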

The ECMWF full-physics and dry singular vector (SV) packages, using a dry energy norm and a 1-day optimization time, are applied to four high impact European cyclones of recent years that were almost universally badly forecast in the short range. It is shown that these full-physics SVs are much more relevant to severe cyclonic development than those based on dry dynamics plus boundary layer alone. The crucial extra ingredient is the representation of large-scale latent heat release. The severe winter storms all have a long, nearly straight region of high baroclinicity stretching across the Atlantic towards Europe, with a tongue of very high moisture content on its equatorward flank. In each case some of the final-time top SV structures pick out the region of the actual storm. The initial structures were generally located in the mid- to low troposphere. Forecasts based on initial conditions perturbed by moist SVs with opposite signs and various amplitudes show the range of possible 1-day outcomes for reasonable magnitudes of forecast error. In each case one of the perturbation structures gave a forecast very much closer to the actual storm than the control forecast. Deductions are made about the predictability of high-impact extratropical cyclone events. Implications are drawn for the short-range forecast problem and suggestions made for one practicable way to approach short-range ensemble forecasting. Copyright © 2005 Royal Meteorological Society.

A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution. The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.
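The colour ratio at the heart of the method is the two-wavelength backscatter ratio expressed in decibels. A minimal sketch with hypothetical backscatter values (the abstract predicts a ratio of ≈6 dB for D0 = 200 μm):

```python
import math

def colour_ratio_db(beta_905, beta_1500):
    """Backscatter ratio between 905 nm and 1.5 um, in decibels."""
    return 10.0 * math.log10(beta_905 / beta_1500)

# A 4:1 backscatter ratio corresponds to ~6 dB.
print(f"{colour_ratio_db(4.0, 1.0):.2f} dB")
```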

The one-dimensional variational assimilation of vertical temperature information in the presence of a boundary-layer capping inversion is studied. For an optimal analysis of the vertical temperature profile, an accurate representation of the background error covariances is essential. The background error covariances are highly flow-dependent due to the variability in the presence, structure and height of the boundary-layer capping inversion. Flow-dependent estimates of the background error covariances are obtained by studying the spread in an ensemble of forecasts. A forecast of the temperature profile (used as a background state) may have a significant error in the position of the capping inversion with respect to observations. It is shown that the assimilation of observations may weaken the inversion structure in the analysis if only magnitude errors are accounted for, as is the case for the traditional data assimilation methods used in operational weather prediction. A new data assimilation scheme is developed here that treats positional error explicitly, in addition to the traditional framework for reducing magnitude error. The distribution of the positional error of the background inversion is estimated for use with the new scheme.

We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
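The core of a mixture model can be sketched as a per-site likelihood formed as a weighted sum over candidate substitution models, with no prior assignment of sites to models. The per-site likelihood values here are hypothetical placeholders for numbers a phylogenetics package would compute:

```python
import math

def mixture_log_likelihood(site_likes, weights):
    """site_likes[i][m]: likelihood of site i under model m;
    each site's likelihood is a weighted sum over models."""
    total = 0.0
    for likes in site_likes:
        total += math.log(sum(w * l for w, l in zip(weights, likes)))
    return total

sites = [(0.02, 0.08), (0.05, 0.01), (0.03, 0.03)]  # two models
print(mixture_log_likelihood(sites, (0.5, 0.5)))
```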

Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected by a series of kinetic experiments and investigations. The correct design of the experiment is essential to collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method and sets of rules for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis and minimises the error in the parameters estimated. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be directly applied to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell which are not specific and which do not involve their binding to AR. These include peroxy-radical scavenging and effects of metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_m and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the parameters estimated, and is suitable for simple or complex steady-state models.

In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the parameters estimated. (C) 2003 Elsevier Science B.V. All rights reserved.
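As an illustration of how prior knowledge of K_M enters such a design, the sketch below picks, from a candidate set of substrate concentrations, the two-point design maximising the determinant of the Fisher information for the Michaelis-Menten parameters. This is a locally D-optimal design under an assumed prior (Vmax = 1, K_M = 2), a simplified stand-in for the Bayesian utility functions described above:

```python
import itertools

def sensitivities(S, Vmax=1.0, Km=2.0):
    """Partial derivatives of v = Vmax*S/(Km+S) w.r.t. Vmax and Km."""
    return S / (Km + S), -Vmax * S / (Km + S) ** 2

def design_det(points):
    """Determinant of the 2x2 Fisher information of a design."""
    m00 = m01 = m11 = 0.0
    for S in points:
        a, b = sensitivities(S)
        m00 += a * a
        m01 += a * b
        m11 += b * b
    return m00 * m11 - m01 * m01

candidates = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
best = max(itertools.combinations(candidates, 2), key=design_det)
print(best)  # one moderate and one high substrate concentration
```

Consistent with the classical result for Michaelis-Menten designs, the chosen pair combines the largest available concentration with one near K_M.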