59 results for mixed-stock analysis
in CentAUR: Central Archive University of Reading - UK
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p-value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Owing to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. The MRMLM therefore provides an alternative for multi-locus GWAS.
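As a rough, hedged illustration of the contrast the abstract draws, the Python sketch below runs a conventional single-marker scan with a Bonferroni threshold and then a multi-locus sparse regression on simulated genotypes. The LASSO here is only a stand-in for the authors' RMLM/MRMLM machinery, and every variable and parameter is hypothetical.

```python
# Minimal sketch (not the authors' RMLM/MRMLM): contrast a single-marker
# scan with Bonferroni correction against a multi-locus sparse model.
# All data are simulated and the LASSO is a stand-in for multi-locus selection.
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, m = 200, 1000                                  # individuals, SNPs
X = rng.binomial(2, 0.3, size=(n, m)).astype(float)
beta = np.zeros(m)
beta[[10, 500]] = 0.8                             # two true QTNs
y = X @ beta + rng.normal(size=n)

# Single-marker scan: one regression per SNP, Bonferroni threshold 0.05/m.
pvals = np.array([stats.linregress(X[:, j], y).pvalue for j in range(m)])
hits_single = np.where(pvals < 0.05 / m)[0]

# Multi-locus model: all SNPs fitted jointly; the nonzero coefficients are
# the selected loci, so no multiple-test correction enters this step.
lasso = LassoCV(cv=5).fit(X, y)
hits_multi = np.where(np.abs(lasso.coef_) > 1e-6)[0]

print("single-marker hits:", hits_single)
print("multi-locus hits:  ", hits_multi)
```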
Abstract:
The ability to predict the responses of ecological communities and individual species to human-induced environmental change remains a key issue for ecologists and conservation managers alike. Responses are often variable among species within groups, making general predictions difficult. One option is to include ecological trait information that might help to disentangle patterns of response and also provide greater understanding of how particular traits link whole clades to their environment. Although this "trait-guild" approach has been used for single disturbances, the importance of particular traits on general responses to multiple disturbances has not been explored. We used a mixed model analysis of 19 data sets from throughout the world to test the effect of ecological and life-history traits on the responses of bee species to different types of anthropogenic environmental change. These changes included habitat loss, fragmentation, agricultural intensification, pesticides and fire. Individual traits significantly affected bee species responses to different disturbances, and several traits were broadly predictive among multiple disturbances. The location of nests (above vs. below ground) significantly affected response to habitat loss, agricultural intensification, tillage regime (within agriculture) and fire. Species that nested above ground were on average more negatively affected by isolation from natural habitat and intensive agricultural land use than were species nesting below ground. In contrast, below-ground-nesting species were more negatively affected by tilling than were above-ground nesters. The response of different nesting guilds to fire depended on the time since the burn. Social bee species were more strongly affected by isolation from natural habitat and pesticides than were solitary bee species. Surprisingly, body size did not consistently affect species responses, despite its importance in determining many aspects of individuals' interaction with their environment. Although synergistic interactions among traits remain to be explored, individual traits can be useful in predicting and understanding responses of related species to global change.
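A minimal sketch of a trait-guild mixed model of the kind described, with the source data set as a random effect and a nest-location trait interacting with disturbance type. The data frame, column names and simulated values are hypothetical, not the authors' 19 data sets.

```python
# Hedged sketch of a trait-by-disturbance mixed model: species responses are
# modelled with an ecological trait as a fixed effect and the source data set
# as a random effect. Data below are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "response": rng.normal(size=n),                      # e.g. log abundance change
    "nest": rng.choice(["above", "below"], size=n),      # nest-location trait
    "disturbance": rng.choice(["habitat_loss", "tillage", "fire"], size=n),
    "dataset": rng.integers(0, 19, size=n).astype(str),  # 19 source studies
})

# Random intercept per data set; the trait x disturbance interaction tests
# whether a trait predicts response differently across disturbance types.
model = smf.mixedlm("response ~ nest * disturbance", df, groups="dataset")
fit = model.fit()
print(fit.summary())
```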
Abstract:
This paper derives exact discrete-time representations for data generated by a continuous-time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems composed entirely of stocks or of flows are also given. In each case the discrete-time representations are shown to be of ARMA form, with orders depending on those of the continuous-time system. Three examples and applications are provided: two concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate), and one concerns the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers' expenditure). In all three examples the presence of an MA(1) component in the continuous-time system has a dramatic impact in eradicating unaccounted-for serial correlation that is present in the discrete-time version of the ARMA(2, 0) specification, even though the form of the discrete-time model is ARMA(2, 1) for both models.
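To sketch why sampling preserves the ARMA form, consider a continuous-time AR(2) for a stock variable observed at interval h. The root mapping below is the standard one; the discrete MA parameter has no simple closed form and is fixed by matching autocovariances. The notation is illustrative rather than the paper's.

```latex
% Illustrative: continuous-time AR(2), stock variable sampled at interval h.
% If \lambda_1, \lambda_2 are the roots of the continuous-time AR operator,
% the sampled series follows a discrete-time ARMA(2,1):
\[
  x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \varepsilon_t + \theta\,\varepsilon_{t-1},
  \qquad
  \phi_1 = e^{\lambda_1 h} + e^{\lambda_2 h},
  \qquad
  \phi_2 = -e^{(\lambda_1 + \lambda_2) h}.
\]
% \theta is pinned down by matching the autocovariances of the sampled process;
% an MA(1) term in the continuous-time system alters \theta but not \phi_1, \phi_2.
```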
Abstract:
A model in the existing literature for comparing the inventory costs of purchasing under the economic order quantity (EOQ) system and the just-in-time (JIT) purchasing system concluded that JIT purchasing was virtually always the preferable ordering system, especially at high levels of annual demand. By expanding the classical EOQ model, this paper shows that it is possible for the EOQ system to be more cost-effective than the JIT system once inventory demand approaches the EOQ-JIT cost-indifference point. A case study conducted in the ready-mixed concrete industry in Singapore supports this proposition.
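A small numeric sketch of the comparison, using the textbook EOQ cost function and a per-unit JIT price premium; all figures are hypothetical and the cost structure is the classical one, not the paper's expanded model.

```python
# Hedged sketch: classical EOQ annual cost vs a JIT purchase premium.
# Numbers are hypothetical; the paper's expanded EOQ model adds further terms.
import math

S = 120.0      # ordering cost per order ($)
H = 2.5        # holding cost per unit per year ($)
c_eoq = 10.0   # unit price under EOQ purchasing ($)
c_jit = 10.4   # unit price under JIT purchasing ($, includes premium)

def eoq_annual_cost(D):
    """Total annual cost under EOQ: purchase + ordering + holding."""
    Q = math.sqrt(2 * D * S / H)          # economic order quantity
    return c_eoq * D + S * D / Q + H * Q / 2

def jit_annual_cost(D):
    """Total annual cost under JIT: purchase cost only in this sketch."""
    return c_jit * D

# Cost-indifference point: sqrt(2*S*H*D) = (c_jit - c_eoq) * D, so
# D* = 2*S*H / (c_jit - c_eoq)**2. Above D*, EOQ becomes the cheaper system.
D_star = 2 * S * H / (c_jit - c_eoq) ** 2
print(f"indifference demand D* = {D_star:,.0f} units/year")
for D in (1000, int(D_star), 10000):
    print(D, round(eoq_annual_cost(D)), round(jit_annual_cost(D)))
```

At the indifference demand the two annual costs coincide; beyond it the EOQ ordering-plus-holding term grows only with the square root of demand while the JIT premium grows linearly, which is the mechanism behind the paper's proposition.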
Abstract:
The UK has a target of an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion owing to their higher resolution, but existing bottom-up models are weak at capturing individual green-technology buying behaviour. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods for incorporating buying behaviour within a domestic energy forecast model. Of the three methods, agent-based models are found to be the most promising, although a successful agent-based approach requires large amounts of input data. A prototype agent-based model has been developed and tested, demonstrating the feasibility of the approach and its promise as a means of predicting the effectiveness of various policy measures.
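A toy agent-based sketch of green-technology adoption, included only to make the proposed approach concrete; the decision rule, social-influence term and subsidy policy are invented for illustration and are not the paper's prototype model.

```python
# Minimal agent-based sketch of green-technology adoption. Agents weigh an
# up-front cost against social influence from prior adopters; a subsidy policy
# lowers the effective cost. All parameters are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_years = 1000, 15
budget = rng.normal(1.0, 0.3, n_agents)   # heterogeneous willingness to pay
adopted = np.zeros(n_agents, dtype=bool)

def run(subsidy):
    owned = adopted.copy()
    uptake = []
    for year in range(n_years):
        cost = 1.2 - subsidy               # effective technology cost
        social = owned.mean()              # pressure from existing adopters
        utility = budget + 0.8 * social - cost
        owned |= (utility + rng.normal(0, 0.1, n_agents)) > 0
        uptake.append(owned.mean())
    return uptake

print("no subsidy :", [round(u, 2) for u in run(0.0)[::3]])
print("subsidy 0.2:", [round(u, 2) for u in run(0.2)[::3]])
```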
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, in which metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal-detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type I error rates when a random item effect is present. A mixed-effects model is proposed as a way to address the issue effectively, and our simulation studies examining Type I error rates indeed showed superior performance of the mixed-effects model analysis compared with the conventional by-participant analysis. We also present applications to real data to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend the mixed-effects model analysis instead.
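A hedged sketch of the recommended analysis on simulated data with a genuine random item effect: statsmodels' variance-components formulation stands in for a crossed-random-effects model. Column names and simulation settings are hypothetical.

```python
# Hedged sketch: a mixed-effects analysis with crossed random effects for
# participants and items, via statsmodels variance components. Data are
# simulated; a true random item effect is present by construction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_item = 20, 30
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
item_effect = rng.normal(0, 1, n_item)               # random item effect

df = pd.DataFrame({
    "subject": subj.astype(str),
    "item": item.astype(str),
    "judgment": rng.normal(0, 1, n_subj * n_item),   # metacognitive judgment
    "one_group": 1,                                  # single dummy group
})
# Memory performance driven by the item effect only (no true resolution),
# the situation in which by-participant tests inflate Type I error rates.
df["memory"] = item_effect[item] + rng.normal(0, 1, len(df))

# Crossed random effects expressed as variance components within one group.
model = smf.mixedlm(
    "memory ~ judgment", df, groups="one_group",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
print(model.fit().summary())
```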
Abstract:
The influence of surface waves and an applied wind stress is studied in an ensemble of large-eddy simulations to investigate the nature of deeply penetrating jets into an unstratified mixed layer. The influence of a steady monochromatic surface wave propagating parallel to the wind direction is parameterized using the wave-filtered Craik-Leibovich equations. Tracer trajectories and instantaneous downwelling velocities reveal classic counterrotating Langmuir rolls. The associated downwelling jets penetrate to depths in excess of the wave's Stokes depth scale, δs. Qualitative evidence suggests the depth of the jets is controlled by the Ekman depth scale. Analysis of turbulent kinetic energy (TKE) budgets reveals a dynamical distinction between Langmuir turbulence and shear-driven turbulence. In the former, TKE production is dominated by the Stokes shear, and a vertical flux term transports TKE to a depth where it is dissipated. In the latter, TKE production is from the mean shear and is locally balanced by dissipation. We define the turbulent Langmuir number La_t = (v*/Us)^(1/2) (v* is the ocean's friction velocity and Us is the surface Stokes drift velocity) and a turbulent anisotropy coefficient R_t = ⟨w'w'⟩/(⟨u'u'⟩ + ⟨v'v'⟩). The transition between shear-driven and Langmuir turbulence is investigated by varying the external wave parameters δs and La_t and by diagnosing R_t and the Eulerian mean and Stokes shears. When either La_t or δs is sufficiently small, the Stokes shear dominates the mean shear and the flow is preconditioned to Langmuir turbulence and the associated deeply penetrating jets.
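For a rough sense of scale, the snippet below evaluates the turbulent Langmuir number and Stokes depth scale for a single deep-water wave under an applied wind stress. The deep-water Stokes drift formula is standard linear wave theory; all parameter values are hypothetical.

```python
# Hedged sketch: turbulent Langmuir number La_t and Stokes depth scale for a
# monochromatic deep-water wave. Formulas are standard linear wave theory;
# the parameter values are hypothetical.
import math

rho_w = 1025.0          # seawater density (kg m^-3)
tau = 0.1               # wind stress (N m^-2)
a = 1.0                 # wave amplitude (m)
wavelength = 60.0       # wavelength (m)
g = 9.81                # gravitational acceleration (m s^-2)

k = 2 * math.pi / wavelength        # wavenumber
sigma = math.sqrt(g * k)            # deep-water wave frequency
Us = sigma * k * a**2               # surface Stokes drift
v_star = math.sqrt(tau / rho_w)     # ocean friction velocity
La_t = math.sqrt(v_star / Us)       # turbulent Langmuir number
delta_s = 1 / (2 * k)               # Stokes depth scale (drift e-folding)

print(f"Us = {Us:.3f} m/s, v* = {v_star:.4f} m/s")
print(f"La_t = {La_t:.2f}, delta_s = {delta_s:.1f} m")
```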
Abstract:
This study uses large-eddy simulation (LES) to investigate the characteristics of Langmuir turbulence through the turbulent kinetic energy (TKE) budget. Based on an analysis of the TKE budget, a velocity scale for Langmuir turbulence is proposed. The velocity scale depends on both the friction velocity and the surface Stokes drift associated with the wave field. The scaling leads to unique profiles of nondimensional dissipation rate and velocity component variances when the Stokes drift of the wave field is sufficiently large compared to the surface friction velocity. The existence of such a scaling shows that Langmuir turbulence can be considered as a turbulence regime in its own right, rather than a modification of shear-driven turbulence. Comparisons are made between the LES results and observations, but the lack of information concerning the wave field means these are mainly restricted to comparing profile shapes. The shapes of the LES profiles are consistent with observed profiles. The dissipation length scale for Langmuir turbulence is found to be similar to the dissipation length scale in the shear-driven boundary layer. Beyond this, it is not possible to test the proposed scaling directly using available data. Entrainment at the base of the mixed layer is shown to be significantly enhanced over that due to normal shear turbulence.
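The abstract does not reproduce the scale itself; one velocity scale with the stated dependence on both the friction velocity and the surface Stokes drift, offered here as an assumption rather than a quotation from the paper, takes the form:

```latex
% Assumed form of a Langmuir-turbulence velocity scale combining the friction
% velocity u_* and surface Stokes drift U_s (illustrative, not quoted from the paper):
\[
  w_{*L} \sim \left( u_*^{2}\, U_s \right)^{1/3},
\]
% so that the nondimensional dissipation rate and the velocity variances,
% e.g. \langle u_i'^2 \rangle / w_{*L}^{2}, collapse onto single profiles
% when U_s / u_* is sufficiently large.
```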
Abstract:
The development of protocols for the identification of metal phosphates in phosphate-treated, metal-contaminated soils is a necessary yet problematical step in the validation of remediation schemes involving immobilization of metals as phosphate phases. The potential for Raman spectroscopy to be applied to the identification of these phosphates in soils has yet to be fully explored. With this in mind, a range of synthetic mixed-metal hydroxylapatites has been characterized and added to soils at known concentrations for analysis using both bulk X-ray powder diffraction (XRD) and Raman spectroscopy. Mixed-metal hydroxylapatites in the binary series Ca-Cd, Ca-Pb, Ca-Sr and Cd-Pb, synthesized in the presence of acetate and carbonate ions, were characterized using a range of analytical techniques including XRD, analytical scanning electron microscopy (SEM), infrared spectroscopy (IR), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and Raman spectroscopy. Only the Ca-Cd series displays complete solid solution, although under the synthesis conditions of this study the Cd5(PO4)3OH end member could not be synthesized as a pure phase. Within the Ca-Cd series the cell parameters, IR-active modes and Raman-active bands vary linearly as a function of Cd content. X-ray diffraction and extended X-ray absorption fine structure spectroscopy (EXAFS) suggest that the Cd is distributed across both the Ca(1) and Ca(2) sites, even at low Cd concentrations. In order to explore the likely detection limits for mixed-metal phosphates in soils for XRD and Raman spectroscopy, soils doped with mixed-metal hydroxylapatites at concentrations of 5, 1 and 0.5 wt.% were then studied. X-ray diffraction could not unambiguously confirm the presence or identity of mixed-metal phosphates in soils at concentrations below 5 wt.%. Raman spectroscopy proved a far more sensitive method for the identification of mixed-metal hydroxylapatites in soils, positively identifying the presence of such phases at all the dopant concentrations used in this study. Moreover, Raman spectroscopy could also provide an accurate assessment of the degree of chemical substitution in the hydroxylapatites even when present in soils at concentrations as low as 0.1%.
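Because the Raman band positions vary linearly with Cd content across the Ca-Cd series, the degree of substitution can be read off a two-point linear calibration; the sketch below is a generic illustration with hypothetical band positions, not the study's measured values.

```python
# Hedged sketch: estimate Cd substitution from a Raman band position, assuming
# the linear band shift with composition reported for the Ca-Cd series.
# End-member band positions below are hypothetical placeholders.
nu_Ca = 962.0   # phosphate band of the Ca end member (cm^-1, hypothetical)
nu_Cd = 955.0   # band position of the Cd-rich end member (cm^-1, hypothetical)

def cd_fraction(nu_obs):
    """Linear (Vegard-like) interpolation between end-member band positions."""
    return (nu_obs - nu_Ca) / (nu_Cd - nu_Ca)

for nu in (962.0, 958.5, 955.0):
    print(f"band {nu} cm^-1 -> x(Cd) ~ {cd_fraction(nu):.2f}")
```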
Abstract:
Ochre samples excavated from the Neolithic site at Çatalhöyük, Turkey, have been compared with "native" ochres from Clearwell Caves, UK, using infrared spectroscopy backed up by Raman spectroscopy, scanning electron microscopy (with energy-dispersive X-ray (EDX) analysis), powder X-ray diffraction, diffuse reflectance UV-Vis and atomic absorption spectroscopies. For the Clearwell Caves ochres, which range in colour from yellow-orange to red-brown, it is shown that the colour is related to the nature of the chromophore present and not to any differences in particle size. The darker red ochres contain predominantly haematite, while the yellow ochre contains only goethite. The ochres from Çatalhöyük contain only about one-twentieth of the levels of iron found in the Clearwell Caves ochres. The iron oxide pigment (haematite in all cases studied here) has been mixed with a soft lime plaster which also contains calcite and silicate (clay) minerals.
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well-suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation.
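One way to write such a linear mixed model down, as an assumed illustrative form rather than the paper's exact specification:

```latex
% Illustrative LMM for joint observations z_o(s) and predictions z_p(s) at
% location s (an assumed form, not the paper's exact specification):
\[
  \begin{pmatrix} z_o(\mathbf{s}) \\ z_p(\mathbf{s}) \end{pmatrix}
  = \mathbf{X}(\mathbf{s})\,\boldsymbol{\beta}
  + \boldsymbol{\eta}(\mathbf{s})
  + \boldsymbol{\varepsilon}(\mathbf{s}),
\]
% where X\beta is a shared spatial trend, \eta is coarse-scale spatially
% correlated random variation with cross-correlation between observations and
% predictions, and \varepsilon is fine-scale independent variation; the
% variance parameters are estimated by residual maximum likelihood (REML).
```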
Abstract:
Stable isotopic characterization of chlorine in chlorinated aliphatic pollution is potentially very valuable for risk assessment and monitoring remediation or natural attenuation. The approach has been underused because of the complexity of analysis and the time it takes. We have developed a new method that eliminates sample preparation. Gas chromatography produces individually eluted sample peaks for analysis. The He carrier gas is mixed with Ar and introduced directly into the torch of a multicollector ICPMS. The MC-ICPMS is run at a high mass resolution of ≥10 000 to eliminate the interference of ArH with Cl at mass 37. The standardization approach is similar to that for continuous-flow stable isotope analysis, in which sample and reference materials are measured successively. We have measured PCE relative to a laboratory TCE standard mixed with the sample. Solvent samples of 200 nmol to 1.3 μmol (24–165 μg of Cl) were measured. The PCE gave the same value relative to the TCE as measured by the conventional method, with a precision of 0.12% (2 × standard error), but poorer precision for the smaller samples.
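The sample-standard comparison implied here is conventionally expressed in delta notation; the sketch below computes a δ³⁷Cl value from measured ³⁷Cl/³⁵Cl ratios of a sample bracketed by the reference, with made-up numbers.

```python
# Hedged sketch: delta notation for Cl isotope ratios measured sample-vs-
# reference, as in continuous-flow style standardization. Values are made up.
def delta37cl(r_sample, r_reference):
    """delta-37Cl in per mil relative to the reference material."""
    return (r_sample / r_reference - 1.0) * 1000.0

r_ref = 0.31977   # hypothetical 37Cl/35Cl ratio of the in-house TCE reference
r_pce = 0.31990   # hypothetical measured ratio of the PCE sample

print(f"d37Cl = {delta37cl(r_pce, r_ref):+.2f} per mil")
```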
Abstract:
A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is -1.3% and the 2σ confidence level sets the uncertainty range of -0.7 to -1.9%. The result is the same if one quantifies the solar variation using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in the global mean surface air temperatures is predominantly associated with a linear increase that represents the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although, in recent decades, there is also a considerable contribution by a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus, the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the Intergovernmental Panel on Climate Change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations. As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer term drifts and have shorter response times. Hence, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature rise commitment) cannot be made from studies of the response to shorter period forcing changes.
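A minimal sketch of the fitting idea: each forcing series is convolved with a first-order (exponential) response of its own time constant, and the convolved series are regressed on the temperature record. All series and time constants below are synthetic placeholders, not the study's data.

```python
# Hedged sketch: multivariate fit of a temperature series to several forcings,
# each passed through a first-order response with its own time constant.
# All series and time constants here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1953, 2008)
n = len(t)

def first_order_response(forcing, tau, dt=1.0):
    """Discrete convolution with a normalized exp(-t/tau) response kernel."""
    kernel = np.exp(-np.arange(n) * dt / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[:n]

# Synthetic stand-ins for the forcing inputs.
solar = np.sin(2 * np.pi * (t - 1953) / 11.0)              # ~11-yr solar cycle
anthro = 0.02 * (t - 1953)                                 # near-linear GHG+aerosol
volcanic = -(rng.random(n) < 0.06).astype(float) * rng.uniform(1, 3, n)

X = np.column_stack([first_order_response(f, tau)
                     for f, tau in [(solar, 2.0), (anthro, 12.0), (volcanic, 1.0)]])
X = np.column_stack([np.ones(n), X])                       # add intercept

temp = 0.01 * (t - 1953) + 0.05 * solar + rng.normal(0, 0.08, n)
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("fitted amplitudes (intercept, solar, anthropogenic, volcanic):")
print(np.round(coef, 3))
```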
Abstract:
This document provides guidelines for fish stock assessment and fishery management using the software tools and other outputs developed between 1992 and 2004 by the Fisheries Management Science Programme (FMSP) of the United Kingdom's Department for International Development. It explains some key elements of the precautionary approach to fisheries management and outlines a range of alternative stock assessment approaches that can provide the information needed for such precautionary management. Four FMSP software tools, LFDA (Length Frequency Data Analysis), CEDA (Catch Effort Data Analysis), YIELD and ParFish (Participatory Fisheries Stock Assessment), are described with which intermediary parameters, performance indicators and reference points may be estimated. The document also contains examples of the assessment and management of multispecies fisheries, the use of Bayesian methodologies, the use of empirical modelling approaches for estimating yields and analysing fishery systems, and the assessment and management of inland fisheries. It also provides a comparison of length- and age-based stock assessment methods. A CD-ROM with the FMSP software packages CEDA, LFDA, YIELD and ParFish is included.