34 results for Point Data


Relevance: 30.00%

Abstract:

A series of the most common chelators used in magnetic resonance imaging (MRI) and in radiopharmaceuticals for medical diagnosis and tumour therapy, H₄dota, H₄teta, H₈dotp and H₈tetp, is examined from a chemical point of view. Differences between 12- and 14-membered tetraazamacrocyclic derivatives with methylcarboxylate and methylphosphonate pendant arms and their chelates with divalent first-series transition metal and trivalent lanthanide ions are discussed on the basis of their thermodynamic stability constants, X-ray structures and theoretical studies.

Relevance: 30.00%

Abstract:

Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the MultiWell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, California, 2006)[1] and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167)[2] data evaluation panels, which represents the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145)[3] and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398)[4] of the bond dissociation energy, D₂₉₈(BrO–NO₂) = 118 kJ mol⁻¹, corresponding to ΔH°₀ = 114.3 kJ mol⁻¹ at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy between the calculations and the database of rate constants of a factor of ca. 4 at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways: either by increasing ΔH°₀ to an unrealistically high value (149.3 kJ mol⁻¹) or by increasing ⟨ΔE⟩down, the average energy transferred in a downward collision, to an unusually large value (>5000 cm⁻¹). The discrepancy could also be reduced by making all overall rotations fully active. The system was relatively insensitive to changing the moments of inertia in the transition state to increase the centrifugal effect. The possibility of involvement of BrOONO was tested and cannot account for the difficulties of fitting the data.
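
The evaluations cited above represent the pressure dependence k(T, [M]) with a falloff expression. Below is a minimal sketch of that representation (the JPL-style broadening form); the parameter values are illustrative placeholders, not the evaluated BrO + NO₂ constants.

```python
import numpy as np

def k_falloff(T, M, k0_300, n, kinf_300, m):
    """JPL-style falloff expression for a termolecular association reaction.
    k0_300, kinf_300 are the low-/high-pressure limits at 300 K
    (placeholder values below, not the evaluated BrO + NO2 parameters)."""
    k0 = k0_300 * (T / 300.0) ** (-n)      # cm^6 molecule^-2 s^-1
    kinf = kinf_300 * (T / 300.0) ** (-m)  # cm^3 molecule^-1 s^-1
    ratio = k0 * M / kinf
    broadening = 0.6 ** (1.0 / (1.0 + np.log10(ratio) ** 2))
    return (k0 * M / (1.0 + ratio)) * broadening

# Effective rate constant at 298 K and 1 atm (M ~ 2.46e19 molecule cm^-3)
print(k_falloff(298.0, 2.46e19, k0_300=5.2e-31, n=3.2, kinf_300=6.9e-12, m=2.9))
```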

Relevance: 30.00%

Abstract:

The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture such as associative memory networks, a common problem in nonlinear modelling is "the curse of dimensionality". A series of complementary data-based constructive identification schemes, mainly based on, but not limited to, operating-point-dependent fuzzy models, are introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture-of-experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious Delaunay input-space-partition-based piecewise local linear modelling concept; (iii) a neurofuzzy model constructive approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on basis functions that are Bézier–Bernstein polynomial functions and the additive decomposition. Illustrative examples demonstrate their applicability, showing that the final major hurdle in data-based modelling has almost been removed.
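
The forward orthogonal least squares step underlying item (iii) can be sketched as greedy selection of regressors by error reduction ratio. This is a generic illustration of the technique, not the authors' implementation; the matrix P is assumed to hold candidate basis-function outputs.

```python
import numpy as np

def forward_ols(P, y, n_terms):
    """Greedy forward selection of columns of P by error reduction ratio
    (ERR), in the spirit of forward orthogonal least squares."""
    n, m = P.shape
    selected, Q = [], []              # chosen column indices, orthogonal basis
    yy = y @ y
    for _ in range(n_terms):
        best_err, best_j, best_q = -1.0, None, None
        for j in range(m):
            if j in selected:
                continue
            q = P[:, j].astype(float).copy()
            for qk in Q:                          # Gram-Schmidt vs. basis
                q -= (qk @ q) / (qk @ qk) * qk
            if q @ q < 1e-12:                     # degenerate direction
                continue
            err = (q @ y) ** 2 / ((q @ q) * yy)   # error reduction ratio
            if err > best_err:
                best_err, best_j, best_q = err, j, q
        if best_j is None:                        # nothing useful left
            break
        selected.append(best_j)
        Q.append(best_q)
    theta, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
    return selected, theta
```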

Relevance: 30.00%

Abstract:

The dispersion of a point-source release of a passive scalar in a regular array of cubical, urban-like obstacles is investigated by means of direct numerical simulations. The simulations are conducted under conditions of neutral stability and fully rough turbulent flow, at a roughness Reynolds number of Re_τ = 500. The Navier–Stokes and scalar equations are integrated assuming a constant-rate release from a point source close to the ground within the array. We focus on short-range dispersion, when most of the material is still within the building canopy. Mean and fluctuating concentrations are computed for three different pressure-gradient directions (0°, 30°, 45°). The results agree well with available experimental data measured in a water channel for a flow angle of 0°. Profiles of mean concentration and the three-dimensional structure of the dispersion pattern are compared for the different forcing angles. A number of processes affecting the plume structure are identified and discussed, including: (i) advection or channelling of scalar down 'streets'; (ii) lateral dispersion by turbulent fluctuations and topological dispersion induced by dividing streamlines around buildings; (iii) skewing of the plume due to flow turning with height; (iv) detrainment by turbulent dispersion or mean recirculation; (v) entrainment and release of scalar in building wakes, giving rise to 'secondary sources'; and (vi) plume meandering due to unsteady turbulent fluctuations. Finally, results on relative concentration fluctuations are presented and compared with the literature for point-source dispersion over flat terrain and urban arrays.

Keywords: Direct numerical simulation · Dispersion modelling · Urban array
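
Full DNS is far beyond an abstract-sized example, but the scalar transport being simulated can be illustrated with a toy two-dimensional advection-diffusion solver and a constant-rate point source. All parameters here (grid, wind, diffusivity) are invented for illustration.

```python
import numpy as np

# Toy 2D advection-diffusion surrogate for a constant-rate point release;
# an illustration of the transported scalar only, not DNS.
nx, ny, dx, dt = 200, 100, 1.0, 0.2       # grid and time step (CFL-safe)
u, kappa = 1.0, 0.5                       # streamwise wind, eddy diffusivity
c = np.zeros((ny, nx))                    # scalar concentration field
src_j, src_i, rate = ny // 2, 10, 1.0     # source location and emission rate

for step in range(2000):
    c[src_j, src_i] += rate * dt                          # constant release
    dif = kappa * (np.roll(c, 1, 0) + np.roll(c, -1, 0)   # central diffusion,
                   + np.roll(c, 1, 1) + np.roll(c, -1, 1) # periodic via roll
                   - 4 * c) / dx**2
    c[:, 1:-1] -= u * dt * (c[:, 1:-1] - c[:, :-2]) / dx  # upwind advection
    c += dt * dif
    c[:, 0] = c[:, -1] = 0.0                              # crude open in/outflow

print("plume centreline maximum:", c[src_j].max())
```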

Relevance: 30.00%

Abstract:

Land surface albedo is dependent on atmospheric state and hence is difficult to validate. Over the UK, persistent cloud cover and land cover heterogeneity at moderate (km-scale) spatial resolution can also complicate comparison of field-measured albedo with that derived from instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS). A practical method of comparing moderate-resolution satellite-derived albedo with ground-based measurements over an agricultural site in the UK is presented. Point measurements of albedo made on the ground are scaled up to the MODIS resolution (1 km) through reflectance data obtained at a range of spatial scales. The point measurements of albedo agreed in magnitude with MODIS values over the test site to within a few per cent, despite problems such as persistent cloud cover and the difficulties of comparing measurements made during different years. Albedo values derived from airborne and field-measured data were generally lower than the corresponding satellite-derived values. This is thought to be due to assumptions made regarding the ratio of direct to diffuse illumination used when calculating albedo from reflectance. Measurements of albedo calculated for specific times fitted closely to the trajectories of temporal albedo derived from both the Système pour l'Observation de la Terre (SPOT) Vegetation (VGT) and MODIS instruments.
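
A common way to handle the direct/diffuse split mentioned above is to weight the satellite black-sky and white-sky albedos by the diffuse fraction of illumination. Here is a minimal sketch of that standard approximation; it is not necessarily the paper's exact procedure, and the example numbers are invented.

```python
def blue_sky_albedo(bsa, wsa, diffuse_fraction):
    """Actual ('blue-sky') albedo as the diffuse-fraction-weighted mix of
    black-sky (direct-beam) and white-sky (isotropic diffuse) albedo.
    A standard approximation, not necessarily the paper's method."""
    return (1.0 - diffuse_fraction) * bsa + diffuse_fraction * wsa

# Example: a mostly clear sky (20% diffuse) over a crop surface
print(blue_sky_albedo(bsa=0.18, wsa=0.21, diffuse_fraction=0.2))
```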

Relevance: 30.00%

Abstract:

Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered a continuous-time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique allows trajectories to be assimilated in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character; no statistical assumptions on dynamical or observational noise are imposed.
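
In discrete time, the weak-constraint trade-off can be sketched as minimizing a cost that penalizes both model residuals and observation misfits. Below is a toy example with a scalar logistic map; the dynamics, weights and observation pattern are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Weak-constraint 4D-Var toy: fit a trajectory x_0..x_{N-1} to noisy,
# partial observations while penalizing deviations from the model.
f = lambda x: 3.8 * x * (1.0 - x)        # assumed model dynamics
N, q, r = 50, 1e-3, 1e-1                 # length, model/obs error weights
rng = np.random.default_rng(0)
truth = np.empty(N); truth[0] = 0.3
for k in range(N - 1):
    truth[k + 1] = f(truth[k])
obs_idx = np.arange(0, N, 5)             # partial observations (every 5th step)
obs = truth[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

def J(x):
    dyn = x[1:] - f(x[:-1])              # dynamical residuals
    innov = obs - x[obs_idx]             # observational residuals
    return (dyn @ dyn) / q + (innov @ innov) / r

sol = minimize(J, np.full(N, 0.5), method="L-BFGS-B")
print("final cost:", sol.fun)
```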

Relevance: 30.00%

Abstract:

We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields uj, j = 1, ..., n, of sound sources supported in different bounded domains G1, ..., Gn from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u1 + ... + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions with which each uℓ, ℓ = 1, ..., n, is reconstructed from u|Λ as a filtered integral of the measured field over Λ. We will provide the complete mathematical theory for the field splitting via the point source method; in particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real-data measurements carried out at the Institute of Sound and Vibration Research in Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
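
Below is a runnable sketch of the splitting idea, using a regularized equivalent-source fit as a simple stand-in for the point source method itself; the geometry, wavenumber and source positions are all invented for illustration.

```python
import numpy as np

# Field splitting via a regularized equivalent-source fit: a stand-in for
# the paper's point source method, showing only the idea of separating
# u = u1 + u2 from array data.
k = 2 * np.pi                                    # assumed wavenumber
G = lambda x, y: (np.exp(1j * k * np.linalg.norm(x - y))
                  / (4 * np.pi * np.linalg.norm(x - y)))  # 3D Helmholtz kernel

rng = np.random.default_rng(1)
array_pts = np.array([[x, y, 0.0] for x in np.linspace(-1, 1, 12)
                                  for y in np.linspace(-1, 1, 12)])
sources = np.array([[-0.5, 0.0, 1.0],            # equivalent source in G1
                    [0.6, 0.2, 1.2]])            # equivalent source in G2
true_amps = np.array([1.0, 0.7j])

A = np.array([[G(m, s) for s in sources] for m in array_pts])
u = A @ true_amps                                # "measured" total field
u += 0.01 * (rng.standard_normal(u.shape) + 1j * rng.standard_normal(u.shape))

alpha = 1e-4                                     # Tikhonov regularization
amps = np.linalg.solve(A.conj().T @ A + alpha * np.eye(A.shape[1]),
                       A.conj().T @ u)
u1 = A[:, :1] @ amps[:1]                         # field attributed to G1
u2 = A[:, 1:] @ amps[1:]                         # field attributed to G2
print("split fields recombine to data:", np.allclose(u1 + u2, u, atol=0.1))
```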

Relevance: 30.00%

Abstract:

Data augmentation is a powerful technique for estimating models with latent or missing data, but applications in agricultural economics have thus far been few. This paper showcases the technique in an application to data on milk market participation in the Ethiopian highlands. There, a key impediment to economic development is an apparently low rate of market participation. Consequently, economic interest centers on the "locations" of nonparticipants in relation to the market and their "reservation values" across covariates. These quantities are of policy interest because they provide measures of the additional inputs necessary for nonparticipants to enter the market. One quantity of primary interest is the minimum amount of surplus milk (the "minimum efficient scale of operations") that the household must acquire before market participation becomes feasible. We estimate this quantity through routine application of data augmentation and Gibbs sampling applied to a random-censored Tobit regression. Incorporating random censoring markedly affects the household's marketable-surplus requirements, but only slightly the covariate requirement estimates, and generally leads to more plausible policy estimates than those obtained from the zero-censored formulation.
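
The data augmentation step for a Tobit model imputes the latent values of censored observations from a truncated normal, after which the regression draws are conjugate. Here is a generic Gibbs sketch for a left-censored Tobit under flat priors; it is not the paper's random-censoring specification, and the simulated data are invented.

```python
import numpy as np
from scipy.stats import truncnorm

# Gibbs sampling with data augmentation for y = max(X b + e, c).
rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
b_true, sigma_true, c = np.array([0.5, 1.0]), 0.8, 0.0
ystar = X @ b_true + sigma_true * rng.standard_normal(n)
y, cens = np.maximum(ystar, c), ystar < c        # observed data, censor flags

b, s2 = np.zeros(2), 1.0
for it in range(2000):
    # 1. augment: latent y* for censored cases ~ N(Xb, s2), truncated above at c
    mu = X[cens] @ b
    z = truncnorm.rvs(-np.inf, (c - mu) / np.sqrt(s2),
                      loc=mu, scale=np.sqrt(s2), random_state=rng)
    y_aug = y.copy(); y_aug[cens] = z
    # 2. draw b | y*, s2 (flat prior -> normal centred on OLS)
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y_aug
    b = rng.multivariate_normal(b_hat, s2 * XtX_inv)
    # 3. draw s2 | y*, b (scaled inverse chi-square under a flat prior)
    resid = y_aug - X @ b
    s2 = (resid @ resid) / rng.chisquare(n - 2)

print("posterior draw:", b, np.sqrt(s2))
```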

Relevance: 30.00%

Abstract:

Glycogen synthase kinase 3 (GSK3, of which there are two isoforms, GSK3α and GSK3β) was originally characterized in the context of regulation of glycogen metabolism, though it is now known to regulate many other cellular processes. Phosphorylation of GSK3α (Ser21) and GSK3β (Ser9) inhibits their activity. In the heart, emphasis has been placed particularly on GSK3β rather than GSK3α. Importantly, catalytically active GSK3 generally restrains gene expression and, in the heart, has been implicated in anti-hypertrophic signalling. Inhibition of GSK3 results in changes in the activities of transcription and translation factors in the heart and promotes hypertrophic responses, and it is generally assumed that signal transduction from hypertrophic stimuli to GSK3 passes primarily through protein kinase B/Akt (PKB/Akt). However, recent data suggest that the situation is far more complex. We review evidence pertaining to the role of GSK3 in the myocardium and discuss effects of genetic manipulation of GSK3 activity in vivo. We also discuss the signalling pathways potentially regulating GSK3 activity and propose that, depending on the stimulus, phosphorylation of GSK3 is independent of PKB/Akt. Potential GSK3 substrates studied in relation to myocardial hypertrophy include nuclear factors of activated T cells, β-catenin, GATA4, myocardin, CREB and eukaryotic initiation factor 2Bε. These and other transcription factor substrates putatively important in the heart are considered. We discuss whether cardiac pathologies could be treated by therapeutic intervention at the level of GSK3, but conclude that any intervention would be premature without a greater understanding of the precise role of GSK3 in cardiac processes.

Relevance: 30.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis, in meteorology means the process of interpolating observed meteorological quantities from unevenly distributed locations to a network of regularly spaced grid points. Driven by the requirement of numerical weather prediction models to solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been in intensive use for many years: weather services have based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well: we have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. With a 3-hour step in the analysis-forecasting cycle, instead of the 12 hours applied most often, we could without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and even during strongly transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.

Relevance: 30.00%

Abstract:

We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α₀. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α₀). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean-field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
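
In schematic form, under a first-order expansion of J about (α₀, u₀ = S(α₀)), the tensorized moment equation can be written as follows. The notation is ours and the display is a sketch of the linearized case only; the paper treats the general nonlinear setting.

```latex
% First-order fluctuation v(omega) and its k-th moment (notation ours)
\begin{align*}
A\,v(\omega) &= f(\omega), \qquad A := D_u J(\alpha_0, u_0),\\
\bigl(\underbrace{A \otimes \cdots \otimes A}_{k\ \text{factors}}\bigr)\,
\mathcal{M}^k &= \mathbb{E}\!\left[f^{\otimes k}\right],
\qquad \mathcal{M}^k := \mathbb{E}\!\left[v^{\otimes k}\right].
\end{align*}
```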

Relevance: 30.00%

Abstract:

Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so observational data on flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two post-event surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, with positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
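
The abstract does not give the algorithm itself; one plausible form of a localized-consistency check over surveyed water marks is to flag points whose level deviates from the median of nearby marks. The sketch below is generic, with invented coordinates, radius and tolerance; the paper's actual smoothing algorithm may differ.

```python
import numpy as np

def flag_inconsistent(pts, levels, radius=250.0, tol=0.5):
    """Flag surveyed marks whose water level differs from the median of
    marks within `radius` metres by more than `tol` metres."""
    pts, levels = np.asarray(pts, float), np.asarray(levels, float)
    flags = np.zeros(len(levels), dtype=bool)
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        near = (d < radius) & (d > 0)            # neighbours, excluding self
        if near.any():
            flags[i] = abs(levels[i] - np.median(levels[near])) > tol
    return flags

# Example: coordinates in metres, levels in metres above datum
pts = [(0, 0), (100, 50), (150, -40), (90, 10)]
levels = [12.1, 12.3, 14.9, 12.2]                # third mark looks locally high
print(flag_inconsistent(pts, levels))
```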

Relevance: 30.00%

Abstract:

Question: What are the correlations between the degree of drought stress and temperature and the adoption of specific adaptive strategies by plants in the Mediterranean region? Location: 602 sites across the Mediterranean region. Method: We considered 12 plant morphological and phenological traits and measured their abundance at the sites as trait scores obtained from pollen percentages. We conducted stepwise regression analyses of trait scores as a function of plant-available moisture (α) and winter temperature (MTCO). Results: Patterns in the abundance of the plant traits we considered are clearly determined by α, MTCO or a combination of both. In addition, trends in leaf size, texture, thickness, pubescence and aromatic leaves, and in other plant-level traits such as thorniness and aphylly, vary according to life form (tree, shrub, forb), leaf type (broad, needle) and phenology (evergreen, summer-green). Conclusions: Despite basing this study on pollen data, we have identified ecologically plausible trends in the abundance of traits along climatic gradients. Plant traits other than the usual life form, leaf type and leaf phenology carry strong climatic signals. Generally, combinations of plant traits are more climatically diagnostic than individual traits. The qualitative and quantitative relationships between plant traits and climate parameters established here will help to provide an improved basis for modelling the impact of climate change on vegetation and form a starting point for a global analysis of pollen-climate relationships.

Relevance: 30.00%

Abstract:

Palaeodata in synthesis form are needed as benchmarks for the Palaeoclimate Modelling Intercomparison Project (PMIP). Advances since the last synthesis of terrestrial palaeodata from the last glacial maximum (LGM) call for a new evaluation, especially of data from the tropics. Here pollen, plant-macrofossil, lake-level, noble gas (from groundwater) and δ¹⁸O (from speleothems) data are compiled for 18 ± 2 ka (¹⁴C), 32°N–33°S. The reliability of the data was evaluated using explicit criteria, and some types of data were re-analysed using consistent methods in order to derive a set of mutually consistent palaeoclimate estimates of mean temperature of the coldest month (MTCO), mean annual temperature (MAT), plant-available moisture (PAM) and runoff (P–E). Cold-month temperature (MTCO) anomalies from plant data range from −1 to −2 K near sea level in Indonesia and the S Pacific, through −6 to −8 K at many high-elevation sites, to −8 to −15 K in S China and the SE USA. MAT anomalies from groundwater or speleothems seem more uniform (−4 to −6 K), but the data are as yet sparse; a clear divergence between MAT and cold-month estimates from the same region is seen only in the SE USA, where cold-air advection is expected to have enhanced cooling in winter. Regression of all cold-month anomalies against site elevation yielded an estimated average cooling of −2.5 to −3 K at modern sea level, increasing to ≈−6 K by 3000 m. However, Neotropical sites showed larger-than-average sea-level cooling (−5 to −6 K) and a non-significant elevation effect, whereas W and S Pacific sites showed much less sea-level cooling (−1 K) and a stronger elevation effect. These findings support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates, but they limit the plausible average tropical sea-surface cooling, and they support the existence of CLIMAP-like geographic patterns in SST anomalies. Trends of PAM and lake levels indicate wet LGM conditions in the W USA and at the highest elevations, with generally dry conditions elsewhere. These results suggest a colder-than-present ocean surface producing a weaker hydrological cycle, more arid continents, and arguably steeper-than-present terrestrial lapse rates. Such linkages are supported by recent observations on freezing-level height and tropical SSTs; moreover, simulations of "greenhouse" and LGM climates point to several possible feedback processes by which low-level temperature anomalies might be amplified aloft.

Relevance: 30.00%

Abstract:

Using monthly time-series data for 1999–2013, the paper shows that markets for agricultural commodities provide a yardstick for real purchasing power, and thus a reference point for the real value of fiat currencies. The daily need for each adult to consume about 2,800 food calories is universal; data from FAO food balance sheets confirm that the world basket of food consumed daily is non-volatile in comparison with the volatility of currency exchange rates, so the replacement cost of food consumed provides a consistent indicator of economic value. Food commodities are storable for short periods but ultimately perishable, and this exerts continual pressure for markets to clear in the short term; moreover, food calories can be obtained from a very large range of foodstuffs, so most households are able to use arbitrage to select a near-optimal weighting of quantities purchased. The paper proposes an original method to enable a standard of value to be established, definable in physical units on the basis of actual worldwide consumption of food goods, with an illustration of the method.
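
As a toy illustration of the proposed yardstick, one can price a fixed daily food-calorie basket in two currencies and compare the implied rate with the market exchange rate. All basket weights and prices below are invented; the paper builds its basket from FAO food balance sheets.

```python
# Price a fixed ~2800 kcal/day basket in two currencies; the ratio of the
# two costs is an implied "food purchasing power" exchange rate.
basket = {"wheat_kg": 0.45, "rice_kg": 0.20, "oil_kg": 0.05}   # invented weights

prices_usd = {"wheat_kg": 0.35, "rice_kg": 0.55, "oil_kg": 1.40}  # invented
prices_brl = {"wheat_kg": 1.80, "rice_kg": 2.90, "oil_kg": 7.20}  # invented

cost_usd = sum(q * prices_usd[g] for g, q in basket.items())
cost_brl = sum(q * prices_brl[g] for g, q in basket.items())

implied_rate = cost_brl / cost_usd   # BRL per USD in food-purchasing terms
print(f"daily basket: {cost_usd:.2f} USD / {cost_brl:.2f} BRL "
      f"-> implied rate {implied_rate:.2f} BRL/USD")
```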