35 results for [JEL:C5] Mathematical and Quantitative Methods - Econometric Modeling
in CentAUR: Central Archive University of Reading - UK
Abstract:
Answering many of the critical questions in conservation, development and environmental management requires integrating the social and natural sciences. However, understanding the array of available quantitative methods and their associated terminology presents a major barrier to successful collaboration. We provide an overview of quantitative socio-economic methods that distils their complexity into a simple taxonomy. We outline how each has been used in conjunction with ecological models to address questions relating to the management of socio-ecological systems. We review the application of social and ecological quantitative concepts to agro-ecology and classify the approaches used to integrate the two disciplines. Our review included all published integrated models from 2003 to 2008 in 27 journals that publish agricultural modelling research. Although our focus is on agro-ecology, many of the results are broadly applicable to other fields involving an interaction between human activities and ecology. We found 36 papers that integrated social and ecological concepts in a quantitative model. Four different approaches to integration were used, depending on the scale at which human welfare was quantified. Most models viewed humans as pure profit maximizers, both when calculating welfare and predicting behaviour. Synthesis and applications. We reached two main conclusions based on our taxonomy and review. The first is that quantitative methods that extend predictions of behaviour and measurements of welfare beyond a simple market value basis are underutilized by integrated models. The second is that the accuracy of prediction for integrated models remains largely unquantified. Addressing both problems requires researchers to reach a common understanding of modelling goals and data requirements during the early stages of a project.
Abstract:
We present a new sparse shape modeling framework on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high-frequency noise, only the first few terms are used in the expansion and higher-frequency terms are simply thrown away. However, some lower-frequency terms may not necessarily contribute significantly in reconstructing the surfaces. Motivated by this idea, we propose to retain only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied in investigating the influence of age (38-79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how the emotional response is related to the anatomy of the subcortical structures.
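The coefficient-selection idea in this abstract has a convenient generic form: for any orthonormal basis, the l1-penalised (lasso) solution is exact soft-thresholding of the plain projection coefficients. The sketch below uses the unitary FFT of a 1-D signal as a stand-in for the LB eigenbasis (which would require a surface mesh); the signal, noise level and penalty are all invented for illustration:

```python
import numpy as np

# Sketch: l1-penalised selection of basis coefficients. For an
# orthonormal basis (here the unitary FFT, standing in for the LB
# eigenbasis), the lasso solution is soft-thresholding of the
# ordinary projection coefficients. All parameters are illustrative.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

c = np.fft.fft(noisy, norm="ortho")      # orthonormal analysis
lam = 1.0                                # l1 penalty level (illustrative)
mag = np.abs(c)
c_sparse = c * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
recon = np.fft.ifft(c_sparse, norm="ortho").real

kept = int(np.count_nonzero(np.abs(c_sparse) > 0))  # surviving terms
```

Unlike simple low-pass truncation, the penalty keeps the strong high-frequency component (the 40-cycle term) while discarding insignificant low-frequency noise terms, which is the point the abstract makes.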
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM) and the maximum gas production rate (R_M). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM (P<0.05) and lower (P<0.001) R_M compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed from the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher).
Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
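The GPP parameterisation above (asymptote A, half-time B, and the time and value of the maximum rate) can be illustrated with a generalised Michaelis-Menten curve of the Groot type, G(t) = A·t^c / (B^c + t^c); whether this is the exact modified model of the study is an assumption, and the parameter values below are invented. For c > 1 the maximum rate occurs at t_RM = B·((c−1)/(c+1))^(1/c), which the sketch checks numerically:

```python
import numpy as np

# Sketch of a generalised Michaelis-Menten gas-production curve
# (Groot-type); parameter values are invented for illustration.
A, B, c = 300.0, 20.0, 2.0   # asymptote (ml/g), half-time (h), shape

def gpp(t):
    """Cumulative gas production (ml/g) at time t (h)."""
    return A * t**c / (B**c + t**c)

t = np.linspace(0.01, 144.0, 200_000)
rate = np.gradient(gpp(t), t)        # numerical gas production rate
t_rm_numeric = t[np.argmax(rate)]    # time of maximum rate
r_m = rate.max()                     # maximum rate (ml/g/h)

# Analytic time of maximum rate for c > 1
t_rm_analytic = B * ((c - 1.0) / (c + 1.0)) ** (1.0 / c)
```

By construction G(B) = A/2, matching B's definition as the time to half the asymptote, and the numerical and analytic inflection times agree, which is the kind of internal check that helps when standardising fitted parameters across laboratories.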
Abstract:
Our new molecular understanding of immune priming states that dendritic cell activation is absolutely pivotal for expansion and differentiation of naïve T lymphocytes, and it follows that understanding DC activation is essential to understand and design vaccine adjuvants. This chapter describes how dendritic cells can be used as a core tool to provide detailed quantitative and predictive immunomics information about how adjuvants function. The role of distinct antigen, costimulation, and differentiation signals from activated DC in priming is explained. Four categories of input signals which control DC activation – direct pathogen detection, sensing of injury or cell death, indirect activation via endogenous proinflammatory mediators, and feedback from activated T cells – are compared and contrasted. Practical methods for studying adjuvants using DC are summarised and the importance of DC subset choice, simulating T cell feedback, and use of knockout cells is highlighted. Finally, five case studies are examined that illustrate the benefit of DC activation analysis for understanding vaccine adjuvant function.
Abstract:
Chemical methods to predict the bioavailable fraction of organic contaminants are usually validated in the literature by comparison with established bioassays. A soil spiked with polycyclic aromatic hydrocarbons (PAHs) was aged over six months and subjected to butanol, cyclodextrin and tenax extractions as well as an exhaustive extraction to determine total PAH concentrations at several time points. Earthworm (Eisenia fetida) and rye grass root (Lolium multiflorum) accumulation bioassays were conducted in parallel. Butanol extractions gave the best relationship with earthworm accumulation (r² ≤ 0.54, p ≤ 0.01); cyclodextrin, butanol and acetone–hexane extractions all gave good predictions of accumulation in rye grass roots (r² ≤ 0.86, p ≤ 0.01). However, the profile of the PAHs extracted by the different chemical methods was significantly different (p < 0.01) from that accumulated in the organisms. Biota accumulated a higher proportion of the heavier 4-ringed PAHs. It is concluded that bioaccumulation is a complex process that cannot be predicted by measuring the bioavailable fraction alone. The ability of chemical methods to predict PAH accumulation in Eisenia fetida and Lolium multiflorum was hindered by the varied metabolic fate of the different PAHs within the organisms.
Abstract:
The goal of the review is to provide a state-of-the-art survey on sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We will give an introduction to the basic idea behind each method using a simple model problem and then provide some general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey about applications to different situations.
Abstract:
Techniques for obtaining quantitative values of the temperatures and concentrations of remote hot gaseous effluents from their measured passive emission spectra have been examined in laboratory experiments. The high sensitivity of the spectrometer in the vicinity of the 2397 cm⁻¹ band head region of CO2 has allowed the gas temperature to be calculated from the relative intensity of the observed rotational lines. The spatial distribution of the CO2 in a methane flame has been reconstructed tomographically using a matrix inversion technique. The spectrometer has been calibrated against a black body source at different temperatures, and a self-absorption correction has been applied to the data, avoiding the need to measure the transmission directly. Reconstruction artifacts have been reduced by applying a smoothing routine to the inversion matrix.
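The reconstruction step described above (matrix inversion stabilised by a smoothing routine) can be sketched as a regularised onion-peeling inversion for an axisymmetric emitter: each line of sight integrates the radial emission profile over concentric rings, giving a linear system, and a second-difference (Tikhonov) penalty plays the role of the smoothing. The geometry, emission profile, noise level and regularisation weight below are all invented for illustration:

```python
import numpy as np

# Sketch: axisymmetric emission tomography by regularised matrix
# inversion (onion peeling + Tikhonov smoothing); all numbers are
# illustrative, not taken from the experiment.
n = 30
edges = np.arange(n + 1, dtype=float)     # ring boundaries 0..n

def half_chord(r, y):
    """Half-length of a chord at height y inside radius r."""
    return np.sqrt(np.maximum(r**2 - y**2, 0.0))

y = np.arange(n) + 0.5                    # chord (line-of-sight) heights
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):                    # path length in ring j
        A[i, j] = 2.0 * (half_chord(edges[j + 1], y[i])
                         - half_chord(edges[j], y[i]))

r_mid = edges[:-1] + 0.5
x_true = np.exp(-((r_mid - 8.0) / 5.0) ** 2)   # true radial emission
rng = np.random.default_rng(1)
b = A @ x_true                                  # measured line integrals
b_noisy = b + 0.001 * np.linalg.norm(b) / np.sqrt(n) * rng.standard_normal(n)

# Smoothing: minimise ||Ax - b||^2 + lam * ||Lx||^2 with L the
# second-difference operator.
L = np.diff(np.eye(n), 2, axis=0)               # shape (n-2, n)
lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b_noisy)
```

The penalty damps the oscillatory reconstruction artifacts that plain inversion of a noisy projection would produce, analogous to smoothing the inversion matrix in the abstract.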
Abstract:
Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy over the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each: one in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and one in the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 My. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species.
Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
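Coalescent-based ABC analyses like the one above require population-genetic simulators, but the rejection scheme they rest on fits in a few lines: draw a parameter from its prior, simulate data, and keep the draw when a summary statistic lands within a tolerance of the observed one. The toy model below (inferring a normal mean) and every setting in it are invented purely to illustrate the scheme, not to reproduce the paper's analysis:

```python
import numpy as np

# Toy rejection ABC: accept prior draws whose simulated summary
# statistic is within eps of the observed summary. Model and all
# settings are illustrative only.
rng = np.random.default_rng(42)

true_mu, n_obs = 2.0, 100
observed = rng.normal(true_mu, 1.0, n_obs)
s_obs = observed.mean()                        # observed summary statistic

n_sims, eps = 50_000, 0.05
prior_draws = rng.uniform(0.0, 5.0, n_sims)    # uniform prior on mu
# Simulate one dataset per prior draw and reduce to the same summary.
sims = rng.normal(prior_draws, 1.0, size=(n_obs, n_sims)).mean(axis=0)
accepted = prior_draws[np.abs(sims - s_obs) < eps]
posterior_mean = float(accepted.mean())
```

Model choice enters through which simulator generates `sims`; in the paper's setting, competing speciation hypotheses correspond to different simulators or priors compared on the same observed summaries.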
Abstract:
Physical, cultural and biological methods for weed control have developed largely independently and are often concerned with weed control in different systems: physical and cultural control in annual crops and biocontrol in extensive grasslands. We discuss the strengths and limitations of four physical and cultural methods for weed control: mechanical, thermal, cutting and intercropping, and the advantages and disadvantages of combining biological control with them. These physical and cultural control methods may increase soil nitrogen levels and alter microclimate at soil level; this may be of benefit to biocontrol agents, although physical disturbance to the soil and plant damage may be detrimental. Some weeds escape control by these methods; we suggest that these weeds may be controlled by biocontrol agents. It will be easiest to combine biological control with fire and cutting in grasslands; within arable systems it would be most promising to combine biological control (especially using seed predators and foliar pathogens) with cover-cropping, and mechanical weeding with foliar bacterial and possibly foliar fungal pathogens. We stress the need to consider the timing of application of combined control methods, in order to cause least damage to the biocontrol agent along with maximum damage to the weed, and to consider the wider implications of these different weed control methods.
Abstract:
G3B3 and G2MP2 calculations using Gaussian 03 have been carried out to investigate the protonation preferences for phenylboronic acid. All nine heavy atoms have been protonated in turn. With both methodologies, the two lowest protonation energies are obtained with the proton located either at the ipso carbon atom or at a hydroxyl oxygen atom. Within the G3B3 formalism, the lowest-energy configuration, by 4.3 kcal·mol⁻¹, is found when the proton is located at the ipso carbon, rather than at the electronegative oxygen atom. In the resulting structure, the phenyl ring has lost a significant amount of aromaticity. By contrast, calculations with G2MP2 show that protonation at the hydroxyl oxygen atom is favored by 7.7 kcal·mol⁻¹. Calculations using the polarizable continuum model (PCM) solvent method also give preference to protonation at the oxygen atom when water is used as the solvent. The preference for protonation at the ipso carbon found by the more accurate G3B3 method is unexpected, and its implications in Suzuki coupling are discussed. (C) 2006 Wiley Periodicals, Inc.
Abstract:
Low doses of gamma radiation were given to four different solvents containing C5-BTBP and CyMe4-BTBP, each molecule dissolved both in cyclohexanone and in hexanol. Four corresponding solvents were kept unirradiated and used as references for the extraction experiments. Multiple samples were taken from both the irradiated solutions and the reference solutions at certain time intervals. The samples were used in extraction experiments with the radionuclides Am-241 and Eu-152. The protection against radiolysis of the extracting molecules by the diluent used for dissolution, without adding a scavenger molecule, was checked. The interplay between the diluent and the side group of the extracting molecule for protection against radiolysis was also studied by keeping the same type of core molecule for binding to the metal ions and varying the diluent and side group. The results were unexpected. The presence of a cyclic molecule as either a side group or diluent seems to keep the extraction of europium almost unaffected by radiolysis, while americium behaves differently from solvent to solvent. The diluent alone does not protect the extracting molecule. In some of the studied systems there is a distinct change in the extraction behaviour of Am between the irradiated and reference solutions, an effect that is, however, only present at the beginning of the experimental series. At later times the difference in distribution ratios between the irradiated and reference solutions is constant. This phenomenon is found only when the side group and diluent are structurally dissimilar.
Abstract:
The success of matrix-assisted laser desorption/ionisation (MALDI) in fields such as proteomics has been due partially, but not exclusively, to the development of improved data acquisition and sample preparation techniques. This has been required to overcome some of the shortcomings of the commonly used solid-state MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB). Solid-state matrices form crystalline samples with highly inhomogeneous topography and morphology, which results in large fluctuations in analyte signal intensity from spot to spot and between positions within a spot. This means that efficient tuning of the mass spectrometer can be impeded, and the use of MALDI MS for quantitative measurements is severely limited. Recently, new MALDI liquid matrices have been introduced which promise to be an effective alternative to crystalline matrices. Generally, the liquid matrices comprise either ionic liquid matrices (ILMs) or a usually viscous liquid matrix which is doped with a UV light-absorbing chromophore [1-3]. The advantages are that the droplet surface is smooth and relatively uniform, with the analyte homogeneously distributed within it. They have the ability to replenish a sampling position between shots, negating the need to search for sample hot-spots. Also, the liquid nature of the matrix allows for the use of additional additives to change the environment to which the analyte is added.