874 results for GDP Interpolation
Abstract:
The question is addressed whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme, where temperature observations are used for updating only the density field, is compared to a scheme where updates of the density field and zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can deteriorate if velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
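For readers unfamiliar with the balance constraint, the sketch below (a simplified illustration, not the scheme from the abstract) shows how a zonal-velocity increment can be derived from a height increment in a linear shallow-water model: away from the equator via f u = -g dh/dy, and at the equator via its limiting form beta u = -g d2h/dy2. The Gaussian blending between the two regimes and all parameter values are assumptions made for illustration.

```python
# A minimal sketch (not the authors' code) of adding a geostrophically
# balanced zonal-velocity increment to a height increment in a linear
# shallow-water model. The blend between the off-equatorial relation
# f*u = -g*dh/dy and its equatorial limit beta*u = -g*d2h/dy2 is an
# illustrative assumption; the paper's scheme may differ in detail.
import numpy as np

g = 9.81            # gravity (m/s^2); for a reduced-gravity layer use g' instead
beta = 2.3e-11      # df/dy at the equator (1/(m s))

def balanced_u_increment(dh, y, y_eq=2.0e5):
    """Zonal-velocity increment balanced with a height increment dh(y).

    dh   : 1D array, height increment along the meridional coordinate y
    y    : 1D array, meridional coordinate in metres (0 at the equator)
    y_eq : half-width (m) of the band where the equatorial limit dominates
    """
    f = beta * y
    dhdy = np.gradient(dh, y)
    # Off-equatorial geostrophic increment (guard against division by zero)
    du_geo = np.where(np.abs(f) > 1e-12, -g * dhdy / np.where(f == 0, 1.0, f), 0.0)
    # Equatorial limit: beta * u = -g * d2h/dy2
    d2hdy2 = np.gradient(dhdy, y)
    du_eq = -g * d2hdy2 / beta
    # Simple Gaussian blend between the two regimes (illustrative choice)
    w = np.exp(-(y / y_eq) ** 2)
    return w * du_eq + (1.0 - w) * du_geo

# Example: a Gaussian height increment centred on the equator
y = np.linspace(-2.0e6, 2.0e6, 401)
dh = 0.1 * np.exp(-(y / 5.0e5) ** 2)
du = balanced_u_increment(dh, y)
```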
Abstract:
Endothelin A (ET(A)) transmembrane receptors predominate in rat cardiac myocytes. These are G protein-coupled receptors whose actions are mediated by the G(q) heterotrimeric G proteins. Through these, ET-1 binding to ET(A)-receptors stimulates the hydrolysis of membrane phosphatidylinositol 4,5-bisphosphate to diacylglycerol and inositol 1,4,5-trisphosphate. Diacylglycerol remains in the membrane whereas inositol 1,4,5-trisphosphate is soluble (though its importance in the cardiac myocyte is still debated). Isoforms of the phospholipid-dependent protein kinase, protein kinase C (PKC), are intracellular receptors for diacylglycerol. Cytoplasmic nPKCdelta and nPKCepsilon detect increases in membrane diacylglycerols and translocate to the membrane. This brings about PKC activation, though modifications additional to binding to phospholipids and diacylglycerol are involved. The next event (probably associated with PKC activation) is the activation of the membrane-bound small G protein Ras by exchange of GTP for GDP. Ras.GTP loading translocates Raf family mitogen-activated protein kinase (MAPK) kinase kinases to the membrane, initiates the activation of Raf, and thus activates the extracellular signal-regulated kinase 1/2 (ERK1/2) cascade. Over longer times, two analogous protein kinase cascades, the c-Jun N-terminal kinase and p38-mitogen-activated protein kinase cascades, become activated. As the signals originating from the ET(A) receptor are transmitted through these protein kinase pathways, other signalling molecules become phosphorylated, thus changing their biological activities. For example, ET-1 increases the expression of the c-jun transcription factor gene, and increases abundance and phosphorylation of c-Jun protein. These changes in c-Jun expression and phosphorylation are likely to be important in the regulation of gene transcription.
Abstract:
Eddy covariance has been used in urban areas to evaluate the net exchange of CO2 between the surface and the atmosphere. Typically, only the vertical flux is measured at a height 2–3 times that of the local roughness elements; however, under conditions of relatively low instability, CO2 may accumulate in the airspace below the measurement height. This can result in inaccurate emissions estimates if the accumulated CO2 drains away or is flushed upwards during thermal expansion of the boundary layer. Some studies apply a single-height storage correction; however, this requires the assumption that the response of the CO2 concentration profile to forcing is constant with height. Here a full seasonal cycle (7th June 2012 to 3rd June 2013) of single-height CO2 storage data calculated from concentrations measured at 10 Hz by an open-path gas analyser are compared to a data set calculated from a concurrent switched vertical profile measured (2 Hz, closed-path gas analyser) at 10 heights within and above a street canyon in central London. The assumption required for the former storage determination is shown to be invalid. For approximately regular street canyons, at least one other measurement is required. Continuous measurements at fewer locations are shown to be preferable to a spatially dense, switched profile, as temporal interpolation is ineffective. The majority of the spectral energy of the CO2 storage time series was found to be between 0.001 and 0.2 Hz (500 and 5 s respectively); however, sampling frequencies of 2 Hz and below still result in significantly lower CO2 storage values. An empirical method of correcting CO2 storage values from under-sampled time series is proposed.
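As a rough illustration of the two storage estimates being compared (a sketch under assumed units and processing, not the authors' code), the single-height term extrapolates the concentration tendency at the flux height over the whole layer, whereas the profile term integrates level-by-level tendencies:

```python
# A minimal sketch of the CO2 storage term below an eddy-covariance
# measurement height, computed (a) from a single concentration measurement
# at the flux height and (b) from a vertical profile. Concentrations are
# assumed to be molar densities (mmol m^-3); dt is the averaging period (s).
import numpy as np

def storage_single_height(c_now, c_prev, z_m, dt):
    """Single-height storage: assumes dc/dt is constant from 0 to z_m."""
    return z_m * (c_now - c_prev) / dt          # mmol m^-2 s^-1

def storage_profile(c_now, c_prev, z_levels, dt):
    """Profile storage: integrate dc/dt over the layer below z_levels[-1]."""
    dcdt = (np.asarray(c_now) - np.asarray(c_prev)) / dt
    z = np.asarray(z_levels, dtype=float)       # measurement heights, ascending
    # Layer thickness represented by each level (simple midpoint split)
    edges = np.concatenate(([0.0], 0.5 * (z[:-1] + z[1:]), [z[-1]]))
    dz = np.diff(edges)
    return np.sum(dz * dcdt)                    # mmol m^-2 s^-1
```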
Abstract:
1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
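The linear-interpolation index discussed above can be illustrated with a short sketch (an assumed, simplified version of the workflow, not the schemes' official software): missing weekly counts are filled by linear interpolation and the seasonal flight curve is integrated to a site-year index.

```python
# A minimal sketch of the linear-interpolation approach to an annual
# abundance index from weekly butterfly counts with missing visits.
import numpy as np

def annual_index_linear(weekly_counts):
    """weekly_counts: 1D array of counts for the monitoring season,
    with np.nan where a visit was missed."""
    y = np.asarray(weekly_counts, dtype=float)
    weeks = np.arange(len(y))
    ok = ~np.isnan(y)
    if ok.sum() < 2:
        return np.nan                      # too few visits to interpolate
    y_filled = np.interp(weeks, weeks[ok], y[ok])
    return np.trapz(y_filled, weeks)       # "butterfly-weeks" for the season

# Example: 26 weekly visits with some counts missing
counts = np.array([0, np.nan, 2, 5, np.nan, 12, 20, np.nan, 15, 8,
                   np.nan, 3, 1, 0, np.nan, 0, 2, np.nan, 6, 9,
                   np.nan, 4, 1, np.nan, 0, 0])
index = annual_index_linear(counts)
```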
Abstract:
This study combines a narrative and modelling framework to analyse the development of Kazakhstan’s oil sector since its takeoff following separation from the USSR. As in the case of other emerging or transitional countries with large natural resource endowments, a key question is whether the exploitation of the natural resource is a benefit to longer term economic development: is it a curse, a blessing – or neither? Narrative evidence suggests that the establishment of good governance, in terms of institutions and policies, provides a background to sound long-term development, especially if combined with the development of sectors outside the natural resource sector, for example diversification into manufacturing and services, often through attracting FDI. The narrative is supported by econometric modelling of the relationship between domestic output, overseas output and exports of oil, which finds in favour of a sustained positive effect of oil exports on GDP. The model then provides a basis for projection of the growth in GDP given a consensus view of likely developments in the oil price.
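As a purely illustrative sketch of the kind of relationship being estimated (the column names, data frame and functional form are assumptions, not the paper's specification), a long-run elasticity of GDP with respect to oil exports could be obtained from a logged OLS regression:

```python
# A minimal, purely illustrative sketch (not the paper's econometric model)
# of estimating a relationship between domestic output, overseas output and
# oil exports with OLS on logged annual data. The data frame `df` with
# columns 'gdp', 'oil_exports' and 'foreign_gdp' is a hypothetical input.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_gdp_oil_model(df: pd.DataFrame):
    y = np.log(df["gdp"])
    X = sm.add_constant(np.log(df[["oil_exports", "foreign_gdp"]]))
    model = sm.OLS(y, X, missing="drop").fit()
    return model   # model.params gives elasticities; model.summary() diagnostics
```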
Abstract:
European air quality legislation has reduced emissions of air pollutants across Europe since the 1970s, affecting air quality, human health and regional climate. We used a coupled composition-climate model to simulate the impacts of European air quality legislation and technology measures implemented between 1970 and 2010. We contrast simulations using two emission scenarios: one with actual emissions in 2010 and the other with emissions that would have occurred in 2010 in the absence of technological improvements and end-of-pipe treatment measures in the energy, industrial and road transport sectors. European emissions of sulphur dioxide, black carbon (BC) and organic carbon in 2010 are 53%, 59% and 32% lower, respectively, than emissions that would have occurred in 2010 in the absence of legislative and technology measures. These emission reductions decreased simulated European annual mean concentrations of fine particulate matter (PM2.5) by 35%, sulphate by 44%, BC by 56% and particulate organic matter by 23%. The reduction in PM2.5 concentrations is calculated to have prevented 80 000 (37 000–116 000, 95% confidence interval) premature deaths annually across the European Union, resulting in a perceived financial benefit to society of US$232 billion annually (1.4% of 2010 EU GDP). The reduction in aerosol concentrations due to legislative and technology measures caused a positive change in the aerosol radiative effect at the top of the atmosphere, reduced atmospheric absorption and also increased the amount of solar radiation incident at the surface over Europe. We used an energy budget approximation to estimate that these changes in the radiative balance have increased European annual mean surface temperatures and precipitation by 0.45 ± 0.11 °C and by 13 ± 0.8 mm yr−1 respectively. Our results show that the implementation of European legislation and technological improvements to reduce the emission of air pollutants has improved air quality and human health over Europe, as well as having an unintended impact on the regional radiative balance and climate.
Abstract:
The food industry is critical to any nation's health and well-being; it is also critical to a nation's economic health, since it can typically constitute over a fifth of the nation's manufacturing GDP. Food Engineering is a discipline that ought to be at the heart of the food industry. Unfortunately, this discipline is not playing its rightful role today: engineering has been relegated to the role of a service provider to the food industry, instead of being a strategic driver for the very growth of the industry. This paper hypothesises that the food engineering discipline today continues much as it did in the last century, and has not risen to the challenges it really faces. The paper therefore categorises the challenges as those posed by: 1. Business dynamics, 2. Market forces, 3. Manufacturing environment and 4. Environmental considerations, and finds the current scope and subject-knowledge competencies of food engineering inadequate to meet these challenges. The paper identifies: a) health, b) environment and c) security as the three key drivers of the discipline, and proposes a new definition of food engineering. This definition requires food engineering to have a broader science base which includes biophysical, biochemical and health sciences, in addition to engineering sciences. This definition, in turn, leads to the discipline acquiring a new set of subject-knowledge competencies that is fit for purpose for this day and age, and hopefully for the foreseeable future. The possibility of this approach leading to the development of a higher education program in food engineering is demonstrated by adopting a theme-based curriculum with five core themes, supplemented by appropriate enabling and knowledge-integrating courses. At the heart of this theme-based approach is an attempt to combine the engineering of process and product in a purposeful way, termed here Food Product Realisation Engineering. Finally, the paper also recommends future development of two possible niche specialisation programs in Nutrition and Functional Food Engineering and Gastronomic Engineering. It is hoped that this reconceptualization of the discipline will not only make it more purposeful for the food industry, but also make the subject more intellectually challenging and attract bright young minds to the discipline.
Abstract:
Understanding how the emergence of the anthropogenic warming signal from the noise of internal variability translates to changes in extreme event occurrence is of crucial societal importance. By utilising simulations of cumulative carbon dioxide (CO2) emissions and temperature changes from eleven earth system models, we demonstrate that the inherently lower internal variability found at tropical latitudes results in large increases in the frequency of extreme daily temperatures (exceedances of the 99.9th percentile derived from pre-industrial climate simulations) occurring much earlier than for mid-to-high latitude regions. In terms of 2010 GDP-PPP per capita, most of the world's poorest people live at low latitudes; conversely, the wealthiest population quintile disproportionately inhabits more variable mid-latitude climates. Consequently, the fraction of the global population in the lowest socio-economic quintile is exposed to substantially more frequent daily temperature extremes after much lower increases in both mean global warming and cumulative CO2 emissions.
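A minimal sketch of the extreme-frequency metric described above (an assumed simplification, not the authors' analysis code): count exceedances of the pre-industrial 99.9th percentile in a warmer series and express them relative to the pre-industrial expectation.

```python
# Exceedances of the pre-industrial 99.9th percentile, counted in a warmer
# climate and expressed as a multiple of the pre-industrial rate.
import numpy as np

def extreme_frequency_ratio(t_preindustrial, t_future, q=99.9):
    """Both inputs: 1D arrays of daily temperatures for one grid cell."""
    threshold = np.percentile(t_preindustrial, q)
    base_rate = 1.0 - q / 100.0                   # expected exceedance rate
    future_rate = np.mean(t_future > threshold)
    return future_rate / base_rate                # >1 means more frequent extremes

# Example with synthetic data: a 1 K mean shift on low-variability "tropical" noise
rng = np.random.default_rng(0)
pre = rng.normal(25.0, 1.0, size=365 * 100)
fut = rng.normal(26.0, 1.0, size=365 * 100)
ratio = extreme_frequency_ratio(pre, fut)
```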
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of Sao Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example, p = 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each one of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. For identification of the best model among the four we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of Sao Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
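For the stationary case (GPD-1), the peaks-over-threshold step can be sketched as below (an assumed, simplified workflow; the time-varying scale models in the paper require a custom likelihood not shown here):

```python
# Choose a high quantile of daily rainfall as the threshold, extract the
# excesses, and fit a stationary GPD by maximum likelihood.
import numpy as np
from scipy import stats

def fit_stationary_gpd(daily_rain, p=0.97):
    rain = np.asarray(daily_rain, dtype=float)
    rain = rain[rain > 0]                       # wet days only (assumption)
    threshold = np.quantile(rain, p)
    excesses = rain[rain > threshold] - threshold
    shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
    return threshold, shape, scale, excesses.size

# Example with synthetic rainfall amounts
rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.4, scale=12.0, size=26000)
thr, xi, sigma, n_exc = fit_stationary_gpd(rain, p=0.97)
```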
Abstract:
We estimate crustal structure and thickness of South America north of roughly 40 degrees S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.
Abstract:
We consider incompressible Stokes flow with an internal interface at which the pressure is discontinuous, as happens for example in problems involving surface tension. We assume that the mesh does not follow the interface, which makes classical interpolation spaces yield suboptimal convergence rates (typically, the interpolation error in the L2(Omega)-norm is of order h^(1/2)). We propose a modification of the P1-conforming space that accommodates discontinuities at the interface without introducing additional degrees of freedom or modifying the sparsity pattern of the linear system. The unknowns are the pressure values at the vertices of the mesh and the basis functions are computed locally at each element, so that the implementation of the proposed space into existing codes is straightforward. With this modification, numerical tests show that the interpolation order improves to O(h^(3/2)). The new pressure space is implemented for the stable P1+/P1 mini-element discretization, and for the stabilized equal-order P1/P1 discretization. Assessment is carried out for Poiseuille flow with a forcing surface and for a static bubble. In all cases the proposed pressure space leads to improved convergence orders and to more accurate results than the standard P1 space. In addition, two Navier-Stokes simulations with moving interfaces (Rayleigh-Taylor instability and merging bubbles) are reported to show that the proposed space is robust enough to carry out realistic simulations. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
Decarbonizing the world's energy matrix is the strategy being implemented by most countries to reduce CO2 emissions and thus contribute to achieving the ultimate objectives of the Climate Convention. The evolution of the carbon intensity (Ic = CO2/GDP) in the period 1990-2007 was encouraging but not sufficient to reduce the growth of carbon emissions. As a result of COP-15 in Copenhagen these countries (and regions) made pledges that could lead to more reduction: for the United States, a 17% reduction in CO2 emissions by 2020 below the 2005 level; for the European Union, a 20% reduction in CO2 emissions by 2020 below the 1990 level; for China, a 40-45% reduction in carbon intensity; and for India, a 20-25% reduction in carbon intensity by 2020. We analyzed the consequences of such pledges and concluded that the expected yearly rate of decrease of the carbon intensity basically follows the "business as usual" trend of the period 1990-2007 and will, in all likelihood, be insufficient to reduce carbon emissions up to 2020. (C) 2010 Elsevier Ltd. All rights reserved.
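The arithmetic behind this conclusion can be sketched briefly (illustrative numbers only, not the paper's data): emissions growth is approximately GDP growth plus the change in carbon intensity, so intensity must fall faster than GDP grows for absolute emissions to decline.

```python
# A minimal arithmetic sketch of why a falling carbon intensity Ic = CO2/GDP
# does not guarantee falling emissions.
def emissions_growth_rate(gdp_growth, intensity_change):
    """Both arguments as fractions per year (e.g. 0.03 = +3 %/yr)."""
    return (1.0 + gdp_growth) * (1.0 + intensity_change) - 1.0

# Example: 4 %/yr GDP growth with a 2 %/yr decline in carbon intensity
# still yields growing CO2 emissions (about +1.9 %/yr).
growth = emissions_growth_rate(0.04, -0.02)
```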
Abstract:
The interest in attractive Bose-Einstein condensates arises due to the chemical instabilities generated when the number of trapped atoms exceeds a critical value. In this case, the recombination process promotes the collapse of the cloud. This behavior is normally geometry dependent. Within the context of the mean-field approximation, the system is described by the Gross-Pitaevskii equation. We have considered an attractive Bose-Einstein condensate confined in a nonspherical trap, investigating the solutions numerically and analytically using controlled perturbation and self-similar approximation methods. This approximation is valid over the whole interval of the negative coupling parameter, allowing interpolation between the weak-coupling and strong-coupling limits. Using the self-similar approximation methods, accurate analytical formulas were derived. The expressions obtained are discussed for several different traps and may contribute to the understanding of experimental observations.
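For reference, the mean-field description referred to is the time-dependent Gross-Pitaevskii equation, written here in its standard form (the paper's dimensionless variables and trap geometry may differ):

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V_{\mathrm{trap}}(\mathbf{r})
    + g\,|\psi(\mathbf{r},t)|^{2}\right]\psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^{2}a_{s}}{m},\quad a_{s}<0\ \text{(attractive interactions)}.
```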
Abstract:
The shuttle radar topography mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north latitude and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface where there is coherence of angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic when dealing with terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in interpolation, can achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the national elevation dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighted (IDW)). The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, due to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid point artifacts generated when predicted points share the same location as original data points; and (iii) using a small value of the nugget effect, to avoid smoothing that can obliterate terrain features. Drainages derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and splines are very similar, we consider the results produced by kriging to be superior, since the spline-interpolated surface still presented some noise and linear artifacts, which were removed by kriging.
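A compact ordinary-kriging sketch in plain NumPy (not the authors' code; variogram parameters are placeholders) illustrating points (i)-(iii) above: a small local neighbourhood, a tiny coordinate jitter, and a small nugget.

```python
import numpy as np

def spherical_gamma(h, nugget, sill, rng):
    """Spherical semivariogram; gamma(0) = 0 by definition."""
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def krige_point(x0, y0, x, y, z, nugget=0.1, sill=100.0, rng=500.0,
                n_neighbours=16, jitter=1e-3, seed=0):
    rs = np.random.default_rng(seed)
    # (ii) tiny random shift of data coordinates to avoid exact coincidence
    xj = x + rs.uniform(-jitter, jitter, x.size)
    yj = y + rs.uniform(-jitter, jitter, y.size)
    # (i) keep only the closest points
    d0 = np.hypot(xj - x0, yj - y0)
    idx = np.argsort(d0)[:n_neighbours]
    xs, ys, zs, d0 = xj[idx], yj[idx], z[idx], d0[idx]
    # Ordinary-kriging system in semivariogram form, (iii) with a small nugget
    h = np.hypot(xs[:, None] - xs[None, :], ys[:, None] - ys[None, :])
    n = xs.size
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(h, nugget, sill, rng)
    A[n, n] = 0.0
    b = np.append(spherical_gamma(d0, nugget, sill, rng), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ zs)

# Example: predict one target location from scattered elevations
xs = np.random.default_rng(2).uniform(0, 900, 100)
ys = np.random.default_rng(3).uniform(0, 900, 100)
zs = 100 + 0.05 * xs + 0.02 * ys
z_hat = krige_point(450.0, 450.0, xs, ys, zs)
```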
Abstract:
Purpose: We present an iterative framework for CT reconstruction from transmission ultrasound data which accurately and efficiently models the strong refraction effects that occur in our target application: imaging the female breast. Methods: Our refractive ray tracing framework has its foundation in the fast marching method (FMM) and allows accurate as well as efficient modeling of curved rays. We also describe a novel regularization scheme that yields further significant reconstruction quality improvements. A final contribution is the development of a realistic anthropomorphic digital breast phantom based on the NIH Visible Female data set. Results: Our system is able to resolve very fine details even in the presence of significant noise, and it reconstructs both sound speed and attenuation data. Excellent correspondence with a traditional, but significantly more computationally expensive, wave equation solver is achieved. Conclusions: Apart from the accurate modeling of curved rays, decisive factors have also been our regularization scheme and the high-quality interpolation filter we have used. An added benefit of our framework is that it accelerates well on GPUs, where we have shown that clinical 3D reconstruction speeds on the order of minutes are possible.
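The fast-marching building block can be sketched with the scikit-fmm package (an illustrative stand-in, not the authors' GPU implementation): compute first-arrival travel times through a heterogeneous sound-speed map, from which bent rays can be traced by descending the travel-time gradient.

```python
# First-arrival travel times of a wavefront through a heterogeneous
# sound-speed map, the quantity a refractive (bent-ray) tracer needs.
# Geometry and speed values are arbitrary illustrative choices.
import numpy as np
import skfmm

# Sound-speed map (m/s): water background with a faster circular inclusion
n = 256
speed = np.full((n, n), 1500.0)
yy, xx = np.mgrid[0:n, 0:n]
speed[(xx - 128) ** 2 + (yy - 140) ** 2 < 40 ** 2] = 1560.0

# phi is negative inside the source region, zero on its boundary
phi = np.ones((n, n))
phi[10, 128] = -1.0                      # point-like transmitter element

dx = 1e-3                                # 1 mm grid spacing
travel_time = skfmm.travel_time(phi, speed, dx=dx)   # seconds to each pixel

# Rays can then be traced by descending the gradient of `travel_time`
# from each receiver back to the source (not shown here).
```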