Abstract:
Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud-work function (the dilute CAPE), the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterization that use the non-entraining parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
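The entraining-detraining plume referred to above is commonly written in a standard steady-state form (the notation below is generic, in the spirit of Yanai et al. (1973), rather than taken from any one paper):

```latex
\frac{\partial M_c}{\partial z} = E - D, \qquad
\frac{\partial (M_c\,\phi_c)}{\partial z} = E\,\bar{\phi} - D\,\phi_c + M_c S_\phi
```

where \(M_c\) is the convective mass flux, \(E\) and \(D\) are the entrainment and detrainment rates, \(\phi_c\) and \(\bar{\phi}\) are the in-plume and environmental values of a transported variable, and \(S_\phi\) collects sources such as condensation. In a bulk scheme one such pair of equations describes the single representative plume; in a spectral scheme the pair is repeated for each member of the cloud ensemble.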
Abstract:
The utility of the nitroaldol reaction for accessing 3-nitro-pyranoside, 3-nitro-septanoside or 4-nitro-septanoside derivatives, by reaction of the anion of nitromethane with glycoside dialdehydes, is demonstrated. Initially, the feasibility of using unprotected glucoside dialdehydes was probed for the synthesis of the septanoside products, but this afforded pyranoside rather than septanoside targets. Subsequent studies utilised protected glycoside dialdehydes within the methodology, which allowed entry into a range of 3-nitro- or 4-nitro-septanosides in good yield. NMR spectroscopic analysis allowed determination of the stereochemistry of each of the products thus afforded.
Abstract:
Perchlorate-reducing bacteria fractionate chlorine stable isotopes, giving a powerful approach to monitor the extent of microbial consumption of perchlorate in contaminated sites undergoing remediation or in natural perchlorate-containing sites. This study reports the full experimental data and methodology used to re-evaluate the chlorine isotope fractionation of perchlorate reduction (Delta Cl-37(Cl-ClO4-)) in duplicate culture experiments of Azospira suillum strain PS at 37 degrees C, previously reported without a supporting data set by Coleman et al. [Coleman, M.L., Ader, M., Chaudhuri, S., Coates, J.D., 2003. Microbial Isotopic Fractionation of Perchlorate Chlorine. Appl. Environ. Microbiol. 69, 4997-5000] in a reconnaissance study, with the goal of increasing the accuracy and precision of the isotopic fractionation determination. The method, fully described here for the first time, allows the determination of a higher-precision Delta Cl-37(Cl-ClO4-) value, either from the accumulated chloride content and isotopic composition or from the residual perchlorate content and isotopic composition. The result sets agree perfectly, within error, giving an average Delta Cl-37(Cl-ClO4-) = -14.94 +/- 0.15 per mil. Complementary use of chloride and perchlorate data allowed the identification and rejection of poor-quality data by applying mass and isotopic balance checks. This precise Delta Cl-37(Cl-ClO4-) value can serve as a reference point for comparison with future in situ or microcosm studies, but we also note its similarity to the theoretical equilibrium isotopic fractionation between a hypothetical chlorine species of redox state +6 and perchlorate at 37 degrees C, and suggest that the first electron transfer during perchlorate reduction may occur at isotopic equilibrium between an enzyme-bound chlorine and perchlorate. (C) 2008 Elsevier B.V. All rights reserved.
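An enrichment factor of the kind reported above is conventionally extracted from residual-substrate measurements with the Rayleigh model. The following Python sketch is purely illustrative: the data are synthetic (generated from the -14.94 per mil value quoted in the abstract), and the function name `rayleigh_epsilon` is invented here, not taken from the study.

```python
import numpy as np

def rayleigh_epsilon(delta0, delta_residual, f_residual):
    """Estimate the enrichment factor epsilon (per mil) from the initial
    delta value, the delta of the residual substrate, and the fraction f
    of substrate remaining, via the Rayleigh equation:
        ln((delta + 1000)/(delta0 + 1000)) = (epsilon/1000) * ln(f)."""
    num = np.log((np.asarray(delta_residual) + 1000.0) / (delta0 + 1000.0))
    den = np.log(np.asarray(f_residual))
    slope, _ = np.polyfit(den, num, 1)  # slope = epsilon / 1000
    return 1000.0 * slope

# Synthetic residual-perchlorate data generated from the reported value.
eps_true = -14.94          # per mil
delta0 = 0.0
f = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
delta_res = (delta0 + 1000.0) * f ** (eps_true / 1000.0) - 1000.0

print(round(rayleigh_epsilon(delta0, delta_res, f), 2))  # -> -14.94
```

Fitting the slope of the log-ratio against ln(f), rather than using a single point, is what allows the mass- and isotope-balance consistency checks mentioned in the abstract.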
Case study of the use of remotely sensed data for modeling flood inundation on the river Severn, UK.
Abstract:
A methodology for using remotely sensed data to both generate and evaluate a hydraulic model of floodplain inundation is presented for a rural case study in the United Kingdom: Upton-upon-Severn. Remotely sensed data have been processed and assembled to provide an excellent test data set for both model construction and validation. In order to assess the usefulness of the data and the issues encountered in their use, two models for floodplain inundation were constructed: one based on an industry standard one-dimensional approach and the other based on a simple two-dimensional approach. The results and their implications for the future use of remotely sensed data for predicting flood inundation are discussed. Key conclusions for the use of remotely sensed data are that care must be taken to integrate different data sources for both model construction and validation and that improvements in ground height data shift the focus in terms of model uncertainties to other sources such as boundary conditions. The differences between the two models are found to be of minor significance.
Abstract:
This paper presents the major characteristics of the Institut Pierre Simon Laplace (IPSL) coupled ocean–atmosphere general circulation model. The model components and the coupling methodology are described, as well as the main characteristics of the climatology and interannual variability. The model results of the standard version used for IPCC climate projections, and for intercomparison projects like the Paleoclimate Modeling Intercomparison Project (PMIP 2) are compared to those with a higher resolution in the atmosphere. A focus on the North Atlantic and on the tropics is used to address the impact of the atmosphere resolution on processes and feedbacks. In the North Atlantic, the resolution change leads to an improved representation of the storm-tracks and the North Atlantic oscillation. The better representation of the wind structure increases the northward salt transports, the deep-water formation and the Atlantic meridional overturning circulation. In the tropics, the ocean–atmosphere dynamical coupling, or Bjerknes feedback, improves with the resolution. The amplitude of ENSO (El Niño-Southern oscillation) consequently increases, as the damping processes are left unchanged.
Abstract:
A systematic modular approach to investigate the respective roles of the ocean and atmosphere in setting El Niño characteristics in coupled general circulation models is presented. Several state-of-the-art coupled models sharing either the same atmosphere or the same ocean are compared. Major results include 1) the dominant role of the atmosphere model in setting El Niño characteristics (periodicity and base amplitude) and errors (regularity) and 2) the considerable improvement of simulated El Niño power spectra—toward lower frequency—when the atmosphere resolution is significantly increased. Likely reasons for such behavior are briefly discussed. It is argued that this new modular strategy represents a generic approach to identifying the source of both coupled mechanisms and model error and will provide a methodology for guiding model improvement.
Abstract:
BACKGROUND: Flavonoid metabolites remain in blood for periods potentially long enough to allow interactions with cellular components of this tissue. It is well established that flavonoids are metabolised within the intestine and liver into methylated, sulphated and glucuronidated counterparts, which inhibit platelet function. METHODOLOGY/PRINCIPAL FINDINGS: We present evidence suggesting that platelets, which contain metabolic enzymes, act as an alternative location for flavonoid metabolism. Quercetin and a plasma metabolite of this compound, 4'-O-methyl quercetin (tamarixetin), were shown by confocal microscopy to gain access to the cytosolic compartment of platelets. High performance liquid chromatography (HPLC) and mass spectrometry (MS) showed that quercetin was transformed into a compound with a mass identical to that of tamarixetin, suggesting that the flavonoid was methylated by catechol-O-methyl transferase (COMT) within platelets. CONCLUSIONS/SIGNIFICANCE: Platelets potentially mediate a third phase of flavonoid metabolism, which may impact on the regulation of the function of these cells by metabolites of these dietary compounds.
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
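The inversion idea described above can be illustrated with a toy two-parameter analogue rather than the full six-element Kepler problem: infer the period and phase of a circular orbit from noisy position measurements by nonlinear least squares. All names and values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params, t):
    """Forward model: position on a unit circular orbit at times t."""
    T, theta0 = params
    ang = theta0 + 2.0 * np.pi * t / T
    return np.column_stack([np.cos(ang), np.sin(ang)])

def residuals(params, t, obs):
    """Misfit between the model prediction and the noisy observations."""
    return (forward(params, t) - obs).ravel()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 40)
true_params = (1.3, 0.7)                 # the "unknown" orbit
obs = forward(true_params, t) + 0.01 * rng.normal(size=(t.size, 2))

# Invert: start from an initial guess and minimise the residuals.
fit = least_squares(residuals, x0=(1.2, 0.5), args=(t, obs))
print(fit.x)  # close to (1.3, 0.7)
```

As in the full orbit-determination problem, the parameters are never measured directly: they are recovered by repeatedly running the forward model (the equation of motion) and adjusting the parameters until the predicted observations match the noisy data.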
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use, instead, an overall estimate of the misclassification error previously suggested and known as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard for stroke prevention.
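The per-study index and a weighted pooling step can be sketched as follows. This is a simplified illustration with invented 2x2 counts and a generic Mantel-Haenszel-style weighting, not the paper's exact estimator.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (TP, FN, FP, TN).
studies = [
    (45, 5, 8, 42),
    (30, 10, 6, 54),
    (60, 15, 12, 63),
]

def youden(tp, fn, fp, tn):
    """Youden's index J = sensitivity + specificity - 1."""
    return tp / (tp + fn) + tn / (tn + fp) - 1.0

# Mantel-Haenszel-style pooling: weight each study by
# n_diseased * n_healthy / n_total, then average the indices.
ws, js = [], []
for tp, fn, fp, tn in studies:
    n1, n0 = tp + fn, fp + tn
    ws.append(n1 * n0 / (n1 + n0))
    js.append(youden(tp, fn, fp, tn))

j_pooled = np.average(js, weights=ws)
print(round(j_pooled, 3))  # -> 0.672
```

Because J combines sensitivity and specificity in a single number, a between-study shift of the cut-off that trades one against the other perturbs J less than it perturbs either quantity separately, which is the motivation given in the abstract.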
Abstract:
This paper presents the first systematic chronostratigraphic study of the river terraces of the Exe catchment in South West England and a new conceptual model for terrace formation in unglaciated basins, with applicability to terrace staircase sequences elsewhere. The Exe catchment lay beyond the maximum extent of Pleistocene ice sheets, and the drainage pattern evolved from the Tertiary to the Middle Pleistocene, by which time the major valley systems were in place and downcutting began to create a staircase of strath terraces. The higher terraces (8-6) typically exhibit altitudinal overlap or appear to be draped over the landscape, whilst the middle terraces show greater altitudinal separation and the lowest terraces are of a cut-and-fill form. The terrace deposits investigated in this study were deposited in cold phases of the glacial-interglacial Milankovitch climatic cycles, with the lowest four being deposited in the Devensian Marine Isotope Stages (MIS) 4-2. A new cascade process-response model of basin terrace evolution in the Exe valley is proposed, which emphasises the role of lateral erosion in the creation of strath terraces and the reworking of inherited resistant lithological components down through the staircase. The resultant emergent valley topography and the reworking of artefacts along with gravel clasts have important implications for the dating of hominin presence and the local landscapes they inhabited. Whilst the terrace chronology suggested here is still not as detailed as that for the Thames or the Solent System, it does indicate a Middle Palaeolithic hominin presence in the region, probably prior to the late Wolstonian Complex or MIS 6. This supports existing data from cave sites in South West England.
Abstract:
A methodology is presented for the development of a combined seasonal weather and crop productivity forecasting system. The first stage of the methodology is the determination of the spatial scale(s) on which the system could operate; this determination has been made for the case of groundnut production in India. Rainfall is a dominant climatic determinant of groundnut yield in India. The relationship between yield and rainfall has been explored using data from 1966 to 1995. On the all-India scale, seasonal rainfall explains 52% of the variance in yield. On the subdivisional scale, correlations vary between r^2 = 0.62 (significance level p < 10^-4) and a negative correlation with r^2 = 0.1 (p = 0.13). The spatial structure of the relationship between rainfall and groundnut yield has been explored using empirical orthogonal function (EOF) analysis. A coherent, large-scale pattern emerges for both rainfall and yield. On the subdivisional scale (~300 km), the first principal component (PC) of rainfall is correlated well with the first PC of yield (r^2 = 0.53, p < 10^-4), demonstrating that the large-scale patterns picked out by the EOFs are related. The physical significance of this result is demonstrated. Use of larger averaging areas for the EOF analysis resulted in lower and (over time) less robust correlations. Because of this loss of detail when using larger spatial scales, the subdivisional scale is suggested as an upper limit on the spatial scale for the proposed forecasting system. Further, district-level EOFs of the yield data demonstrate the validity of upscaling these data to the subdivisional scale. Similar patterns have been produced using data on both of these scales, and the first PCs are very highly correlated (r^2 = 0.96). Hence, a working spatial scale has been identified, typical of that used in seasonal weather forecasting, that can form the basis of crop modeling work for the case of groundnut production in India.
Finally, the change in correlation between yield and seasonal rainfall during the study period has been examined using seasonal totals and monthly EOFs. A further link between yield and subseasonal variability is demonstrated via analysis of dynamical data.
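The EOF/PC analysis described above amounts to a singular value decomposition of the space-time anomaly matrix. The following Python sketch uses synthetic fields standing in for rainfall and yield (all values invented), and shows the leading-PC correlation step:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_sites = 30, 12

# Build two fields that share one large-scale mode plus independent noise,
# mimicking the common rainfall/yield pattern found in the study.
signal = rng.normal(size=n_years)
pattern = rng.normal(size=n_sites)
rain = np.outer(signal, pattern) + 0.3 * rng.normal(size=(n_years, n_sites))
crop = np.outer(signal, pattern) + 0.5 * rng.normal(size=(n_years, n_sites))

def first_pc(field):
    """Leading principal component via SVD of the anomaly matrix."""
    anom = field - field.mean(axis=0)     # remove the time mean per site
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return u[:, 0] * s[0]                 # PC time series of the first EOF

pc_rain, pc_crop = first_pc(rain), first_pc(crop)
r2 = np.corrcoef(pc_rain, pc_crop)[0, 1] ** 2
print(round(r2, 2))  # high, because the two fields share a dominant mode
```

The squared correlation is used so that the arbitrary sign of each EOF (an SVD can flip both the spatial pattern and its PC) does not affect the result.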
Abstract:
This paper assesses the impact of the 'decoupling' reform of the Common Agricultural Policy on the labour allocation decisions of Irish farmers. The agricultural household decision-making model provides the conceptual and theoretical framework to examine the interaction between government subsidies and farmers' time allocation decisions. The relationship postulated is that 'decoupling' of agricultural support from production would probably result in a decline in the return to farm labour but it would also lead to an increase in household wealth. The effect of these factors on how farmers allocate their time is tested empirically using labour participation and labour supply models. The models developed are sufficiently general for application elsewhere. The main findings for the Irish situation are that the decoupling of direct payments is likely to increase the probability of farmers participating in the off-farm employment market and that the amount of time allocated to off-farm work will increase.
Abstract:
If the fundamental precepts of Farming Systems Research were taken literally, it would imply that 'unique' solutions should be sought for each farm. This is an unrealistic expectation, but it has led to the idea of a recommendation domain, implying the creation of a taxonomy of farms in order to increase the general applicability of recommendations. Mathematical programming models are an established means of generating recommended solutions, but for such models to be effective they have to be constructed for 'truly' typical or representative situations. Multivariate statistical techniques provide a means of creating the required typologies, particularly when an exhaustive database is available. This paper illustrates the application of this methodology in two different studies that shared the common purpose of identifying types of farming systems in their respective study areas. The issues related to the use of factor and cluster analyses for farm typification prior to building representative mathematical programming models for Chile and Pakistan are highlighted. (C) 2003 Elsevier Science Ltd. All rights reserved.
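The typification step can be sketched as follows. This is a minimal illustration on synthetic farm data: plain PCA stands in for the factor analysis and a tiny k-means loop for the cluster analysis used in the studies; all variables and values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic survey: two underlying farm types (area in ha, livestock
# units, off-farm income share), 40 farms of each type.
small = rng.normal([5, 2, 0.6], [1.5, 1.0, 0.10], size=(40, 3))
large = rng.normal([50, 10, 0.1], [8.0, 2.0, 0.05], size=(40, 3))
farms = np.vstack([small, large])

# Standardise and extract two factor scores via PCA (SVD of anomalies).
z = (farms - farms.mean(axis=0)) / farms.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = u[:, :2] * s[:2]

# Tiny k-means (k=2) on the factor scores, seeded from opposite extremes.
centers = scores[[0, -1]]
for _ in range(10):
    d = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])

print(np.bincount(labels))  # two recovered types of 40 farms each
```

Each resulting cluster would then supply the 'truly' typical farm used to parameterise one representative mathematical programming model, as described above.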
Abstract:
The research uses a sociological perspective to build an improved, context-specific understanding of innovation diffusion within the UK construction industry. It is argued that there is an iterative interplay between actors and the social system they occupy that directly influences the diffusion process as well as the methodology adopted. The research builds upon previous findings that argued a level of best fit for the three innovation diffusion concepts of cohesion, structural equivalence and thresholds. That level of best fit is analysed here using empirical data from the UK construction industry. This analysis allows an understanding of how the relative importance of these concepts actually varies within the stages of the innovation diffusion process. The conclusion that the level of relevance fluctuates in relation to the stages of the diffusion process is a new development in the field.