71 results for hidden semi-Markov model
Abstract:
The GEFSOC Project developed a system for estimating soil carbon (C) stocks and changes at the national and sub-national scale. As part of the development of the system, the Century ecosystem model was evaluated for its ability to simulate soil organic C (SOC) changes under the environmental conditions of the Indo-Gangetic Plains (IGP), India. Two long-term fertilizer trials (LTFT), with all parameters needed to run Century, were used for this purpose: a jute (Corchorus capsularis L.), rice (Oryza sativa L.) and wheat (Triticum aestivum L.) trial at Barrackpore, West Bengal, and a rice-wheat trial at Ludhiana, Punjab. The trials represent two contrasting climates of the IGP: semi-arid and dry, with mean annual rainfall (MAR) of < 800 mm, and humid, with MAR of > 1600 mm. Both trials involved several treatments with different organic and inorganic fertilizer inputs. In general, the model tended to overestimate treatment effects by approximately 15%. At the semi-arid site, modelled values matched measured data reasonably well for all treatments, with the control and chemical N + farmyard manure treatments showing the best agreement (RMSE = 7). At the humid site, Century performed less well, possibly because of a range of factors including site history. During the study, Century was calibrated to simulate crop yields for the two sites using data from across the Indian IGP; however, further adjustments may improve model performance at these and other IGP sites. The availability of more long-term experimental data sets (especially those involving flooded lowland rice and triple cropping systems from the IGP) for testing and validation is critical to applying the model's predictive capabilities to this part of the Indian sub-continent.
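As a rough illustration of the evaluation statistic quoted above, the following sketch computes the RMSE between simulated and observed SOC stocks; all numbers are invented placeholders, not data from the Barrackpore or Ludhiana trials.

```python
import numpy as np

def rmse(simulated, observed):
    """Root mean square error between paired simulated and observed values."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

# Hypothetical SOC stocks (t C ha^-1) for one treatment over five sampling dates.
observed_soc  = [23.1, 23.8, 24.5, 25.0, 25.6]
simulated_soc = [22.5, 24.0, 25.1, 26.0, 26.9]
print(f"RMSE = {rmse(simulated_soc, observed_soc):.2f}")
```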
Abstract:
The sustainability of cereal/legume intercropping was assessed by monitoring trends in grain yield, soil organic C (SOC) and soil extractable P (Olsen method) measured over 13 years at a long-term field trial on a P-deficient soil in semi-arid Kenya. Goat manure was applied annually for 13 years at 0, 5 and 10 t ha(-1); trends in grain yield were not identifiable because of season-to-season variation. SOC and Olsen P increased for the first seven years of manure application and then remained constant. The residual effect of manure applied for only the first four years lasted a further seven to eight years when assessed by yield, SOC and Olsen P. Mineral fertilizers provided the same annual rates of N and P as 5 t ha(-1) of manure and initially gave the same yield as manure, declining after nine years to about 80%. Therefore, manure applications could be made intermittently and nutrient requirements topped up with fertilizers. Grain yields for sorghum with continuous manure were described well by correlations with rainfall and manure input only if data were excluded for seasons with over 500 mm rainfall. A comprehensive simulation model should therefore correctly describe crop losses caused by excess water.
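The correlation analysis described above can be sketched as a simple regression of grain yield on seasonal rainfall and manure input, with wet seasons (> 500 mm) excluded; all values below are invented for illustration.

```python
import numpy as np

rain   = np.array([210., 340., 480., 620., 150., 390.])   # seasonal rainfall (mm)
manure = np.array([5., 5., 10., 10., 0., 5.])              # manure input (t ha^-1)
yield_ = np.array([1.1, 1.8, 2.6, 1.9, 0.4, 2.0])          # grain yield (t ha^-1)

mask = rain <= 500.0                        # exclude excess-water seasons
X = np.column_stack([np.ones(mask.sum()), rain[mask], manure[mask]])
coef, *_ = np.linalg.lstsq(X, yield_[mask], rcond=None)
print("intercept, rainfall and manure coefficients:", coef)
```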
Abstract:
The conceptual and parameter uncertainty of the semi-distributed INCA-N (Integrated Nutrients in Catchments-Nitrogen) model was studied using the GLUE (Generalized Likelihood Uncertainty Estimation) methodology combined with quantitative experimental knowledge, a concept known as 'soft data'. Cumulative inorganic N leaching, annual plant N uptake and annual mineralization proved to be useful soft data for constraining the parameter space. The INCA-N model was able to simulate the seasonal and inter-annual variations in stream-water nitrate concentrations, although the lowest concentrations during the growing season were not reproduced. This suggested that there were retention processes or losses, either in peatland/wetland areas or in the river, that were not included in the INCA-N model. The results of the study suggested that soft data offer a way to reduce parameter equifinality, and that the calibration and testing of distributed hydrological and nutrient leaching models should be based both on runoff and/or nutrient concentration data and on the qualitative knowledge of experimentalists.
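A minimal sketch of the GLUE-with-soft-data idea follows, with a toy function standing in for INCA-N and invented 'soft' ranges for plant N uptake and mineralization; only parameter sets satisfying the soft constraints are retained as behavioural.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params):
    """Stand-in for a catchment model: returns (inorganic N leaching,
    annual plant N uptake, annual mineralization), all in kg N ha^-1."""
    uptake_rate, min_rate = params
    leaching = max(0.0, 30.0 - 0.8 * uptake_rate + 0.5 * min_rate)
    return leaching, uptake_rate, min_rate

behavioural = []
for _ in range(10_000):
    params = rng.uniform([20.0, 10.0], [120.0, 80.0])   # prior parameter ranges
    leaching, uptake, mineralization = toy_model(params)
    # Soft-data constraints (invented ranges): keep only parameter sets whose
    # N uptake and mineralization fall within experimentally plausible bounds.
    if 40.0 <= uptake <= 90.0 and 20.0 <= mineralization <= 60.0:
        behavioural.append((params, leaching))

print(f"{len(behavioural)} behavioural parameter sets retained")
```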
Abstract:
Models are important tools for assessing the scope of management effects on crop productivity under different climatic and soil regimes. Accordingly, this study developed and used a simple model to assess the effects of nitrogen fertiliser and planting density on the water use efficiency (q) of maize in semi-arid Kenya. Field experiments were undertaken at Sonning, Berkshire, UK, in 1996 (one sowing) and 1997 (two sowings). The results from the field experiments, plus soil and weather data for Machakos, Kenya (1°33' S, 37°14' E; 1560 m above sea level), were then used to predict the effects that N application and planting density may have on water use by a maize crop grown in semi-arid Kenya. The increase in q due to N application was greater under irrigated (15%-19%) than rainfed (7%-8%) conditions. Also, high planting density increased q (by 13%) under irrigation but decreased q (by 17%) under rainfed conditions. The study demonstrates the value of crop modelling techniques for assessing the influence of N and planting density on maize production in a region of semi-arid Kenya where rainfall is highly variable.
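A minimal sketch of the water-use-efficiency bookkeeping implied above, with q taken as grain yield per unit of water used and the reported percentage effects applied to an invented baseline:

```python
# All input values are illustrative placeholders, not the study's data.
def water_use_efficiency(yield_kg_ha, water_mm):
    """q in kg grain per ha per mm of water used."""
    return yield_kg_ha / water_mm

base_q = water_use_efficiency(3000.0, 400.0)   # hypothetical rainfed, no-N baseline
irrigated_n = base_q * 1.17                    # ~15-19% gain from N under irrigation
rainfed_n   = base_q * 1.075                   # ~7-8% gain from N under rainfed conditions
print(f"baseline q = {base_q:.2f}; with N: irrigated {irrigated_n:.2f}, rainfed {rainfed_n:.2f}")
```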
Abstract:
The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon, known as heterotachy, can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published data sets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe it, and that it serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance' such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.
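The core mixture computation can be sketched as follows: the likelihood of each site is a weighted sum over branch-length sets, and the site terms multiply across the alignment. The per-set site likelihoods below are random placeholders; in practice they would come from the pruning algorithm on the shared topology.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_sets = 100, 3
# L(site i | branch-length set k): placeholder values standing in for
# per-site likelihoods computed on the same tree topology.
site_lik = rng.uniform(1e-6, 1e-3, size=(n_sites, n_sets))
weights = np.array([0.5, 0.3, 0.2])        # mixture weights, summing to 1

# Mixture log-likelihood: log prod_i sum_k w_k * L(i | k)
log_lik = np.sum(np.log(site_lik @ weights))
print(f"mixture log-likelihood: {log_lik:.2f}")
```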
Abstract:
Varroa destructor is a parasitic mite of the Eastern honeybee Apis cerana. Fifty years ago, two distinct evolutionary lineages (Korean and Japanese) invaded the Western honeybee Apis mellifera. This haplo-diploid parasite species reproduces mainly through brother-sister matings, a system which largely favors the fixation of new mutations. In a worldwide sample of 225 individuals from 21 locations, collected on Western honeybees and analyzed at 19 microsatellite loci, a series of de novo mutations was observed. Using historical data concerning the invasion, this original biological system was exploited to compare three mutation models with allele size constraints for microsatellite markers: the stepwise (SMM) and generalized (GSM) mutation models, and a model with mutation rate increasing exponentially with microsatellite length (ESM). Posterior probabilities of the three models were estimated for each locus individually using reversible-jump Markov chain Monte Carlo. The relative support of each model varies widely among loci, but the GSM is the only model that always receives at least 9% support, whatever the locus. The analysis also provides robust estimates of mutation parameters for each locus and of the divergence time of the two invasive lineages (67,000 generations, with a 90% credibility interval of 35,000-174,000). With an average of 10 generations per year, this divergence time fits with the last post-glacial Korea-Japan land separation.
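A toy forward simulation contrasting the SMM and GSM mutation processes compared above (the mutation rate, geometric step-size parameter and lower allele-size bound are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(7)

def mutate(length, model="SMM", p_geom=0.3):
    """SMM changes the repeat count by +/-1; GSM draws a geometric step size."""
    step = 1 if model == "SMM" else int(rng.geometric(p_geom))
    return max(2, length + step * int(rng.choice([-1, 1])))  # allele size constraint

def evolve(length, generations, mu, model):
    for _ in range(generations):
        if rng.random() < mu:
            length = mutate(length, model)
    return length

start = 20  # initial repeat count
print("SMM:", [evolve(start, 10_000, 5e-4, "SMM") for _ in range(5)])
print("GSM:", [evolve(start, 10_000, 5e-4, "GSM") for _ in range(5)])
```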
Abstract:
We describe a general likelihood-based 'mixture model' for inferring phylogenetic trees from gene-sequence or other character-state data. The model accommodates cases in which different sites in the alignment evolve in qualitatively distinct ways, but does not require prior knowledge of these patterns or partitioning of the data. We call this qualitative variability in the pattern of evolution across sites 'pattern-heterogeneity' to distinguish it both from a homogeneous process of evolution and from one characterized principally by differences in rates of evolution. We present studies showing that the model correctly retrieves the signals of pattern-heterogeneity from simulated gene-sequence data, and we apply the method to protein-coding genes and to a ribosomal 12S data set. The mixture model outperforms conventional partitioning in both of these data sets. We implement the mixture model such that it can simultaneously detect rate- and pattern-heterogeneity. The model simplifies to a homogeneous model or a rate-variability model as special cases, and therefore always performs at least as well as these two approaches, and often considerably improves upon them. We make the model available within a Bayesian Markov-chain Monte Carlo framework for phylogenetic inference, as an easy-to-use computer program.
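The notion of pattern-heterogeneity can be sketched as a weighted sum of site likelihoods under qualitatively different substitution processes; the two toy rate matrices, branch length and weights below are placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

def toy_rate_matrix(rates):
    """Build a reversible 4x4 rate matrix with equal base frequencies from
    the six upper-triangle exchangeability values."""
    Q = np.zeros((4, 4))
    idx = np.triu_indices(4, k=1)
    Q[idx] = rates
    Q += Q.T
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q1 = toy_rate_matrix([1, 1, 1, 1, 1, 1])   # pattern 1: equal exchangeabilities
Q2 = toy_rate_matrix([1, 5, 1, 1, 5, 1])   # pattern 2: transition-biased
t = 0.3                                     # branch length
P1, P2 = expm(Q1 * t), expm(Q2 * t)         # transition-probability matrices

# Probability of observing A -> G on this branch under each pattern,
# mixed with weights (0.6, 0.4):
w = (0.6, 0.4)
site_lik = w[0] * P1[0, 2] + w[1] * P2[0, 2]
print(f"mixture site likelihood: {site_lik:.4f}")
```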
Abstract:
There are still major challenges in the automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
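A very small sketch of the Topic Map idea used in the framework, with topics extracted from multimedia annotations linked by typed associations; the names and simplified structure are illustrative only (real Topic Maps follow the ISO 13250 model).

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    occurrences: list = field(default_factory=list)  # e.g. timecoded media clips

@dataclass
class Association:
    type: str
    members: tuple  # pair of related Topics

# Hypothetical topics extracted by an automatic labelling step.
scene = Topic("warehouse scene", occurrences=["film1.mp4#t=120,180"])
actor = Topic("protagonist")
links = [Association("appears-in", (actor, scene))]
print(links[0].members[0].name, links[0].type, links[0].members[1].name)
```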
Abstract:
In this paper, a fuzzy Markov random field (FMRF) model is used to segment land objects into tree, grass, building, and road regions by fusing remotely sensed LIDAR data and co-registered color bands, i.e. a scanned aerial color (RGB) photo and a near infra-red (NIR) photo. An FMRF model is defined as a Markov random field (MRF) model in a fuzzy domain. Three optimization algorithms for the FMRF model, i.e. Lagrange multiplier (LM), iterated conditional modes (ICM), and simulated annealing (SA), are compared with respect to computational cost and segmentation accuracy. The results show that the FMRF model-based ICM algorithm best balances computational cost and segmentation accuracy in land-cover segmentation from LIDAR data and co-registered bands.
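Of the three optimizers compared, ICM is the simplest to sketch. The toy version below labels a 2-D grid from random unary costs with a Potts smoothness prior; the fuzzy extension and the LIDAR/RGB/NIR data fusion of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 4                 # grid size and number of land-cover classes
unary = rng.random((H, W, K))       # per-pixel class costs (placeholder data terms)
labels = unary.argmin(axis=2)       # initialize from the unary term alone
beta = 0.8                          # Potts smoothness weight

for _ in range(5):                  # a few ICM sweeps until labels stabilize
    for i in range(H):
        for j in range(W):
            costs = unary[i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    # Penalty for disagreeing with each 4-neighbour's label
                    costs += beta * (np.arange(K) != labels[ni, nj])
            labels[i, j] = costs.argmin()

print("label counts:", np.bincount(labels.ravel(), minlength=K))
```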
Abstract:
Urban surveillance footage can be of poor quality, partly due to the low quality of the camera and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks, very simple change detection is adequate, but sometimes a more detailed change detection mask is desirable, e.g. for accurately tracking identity when faced with multiple interacting individuals, and in pose-based behaviour recognition. We present a novel technique for enhancing low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
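For context, a plain frame-differencing baseline for change detection is sketched below; the paper's 'image combing' MRF estimator is a refinement of this kind of mask and is not reproduced here.

```python
import numpy as np

def change_mask(frame, background, threshold=25):
    """Boolean mask of pixels that differ from the background model."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Synthetic example: a bright rectangle standing in for a moving person.
rng = np.random.default_rng(3)
background = rng.integers(0, 255, (120, 160), dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 200
mask = change_mask(frame, background)
print(f"changed pixels: {mask.sum()}")
```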
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated; these include model dynamics, model resolution and model physics. In the UM, a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations, it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced it. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. An error of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations of the same order of magnitude as the absolute tracer concentrations.
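The advection issue described above can be reproduced in one dimension: semi-Lagrangian transport with cubic interpolation undershoots near sharp gradients, and clipping the resulting negatives to zero adds mass. The toy scheme below is illustrative, not the UM's.

```python
import numpy as np

n, u, dt, dx = 100, 1.0, 0.5, 1.0
x = np.arange(n, dtype=float)
tracer = np.where((x > 40) & (x < 60), 1.0, 0.0)   # sharp-edged puff, mass 19.0

for _ in range(20):
    departure = (x - u * dt / dx) % n               # periodic departure points
    i = np.floor(departure).astype(int)
    f = departure - i
    im1, ip1, ip2 = (i - 1) % n, (i + 1) % n, (i + 2) % n
    # Four-point cubic Lagrange interpolation at the departure points.
    tracer = (-f * (f - 1) * (f - 2) / 6 * tracer[im1]
              + (f + 1) * (f - 1) * (f - 2) / 2 * tracer[i]
              - (f + 1) * f * (f - 2) / 2 * tracer[ip1]
              + (f + 1) * f * (f - 1) / 6 * tracer[ip2])
    tracer = np.clip(tracer, 0.0, None)             # naive fix: zero out negatives

print(f"tracer mass after 20 steps: {tracer.sum():.3f} (initially 19.000)")
```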
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that, in fine air particulate matter, oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (< 10(-10) cm(2) s(-1)). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB. We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
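A heavily simplified multi-layer sketch in the spirit of KM-SUB follows: the particle bulk is discretized into layers with Fickian exchange between neighbours and a second-order ozone + oleic acid reaction in each layer. All parameter values are round-number placeholders, not the paper's; with a semi-solid diffusion coefficient of 10(-10) cm(2) s(-1), ozone barely penetrates beyond the surface layers.

```python
import numpy as np

n_layers = 10
dr = 1e-6                            # layer thickness (cm), i.e. 10 nm layers
D = 1e-10                            # bulk diffusion coefficient (cm^2 s^-1), semi-solid
k = 1e-17                            # ozone + oleic acid rate coefficient (cm^3 s^-1)
ozone = np.zeros(n_layers)           # dissolved ozone number density (cm^-3)
oleic = np.full(n_layers, 1e21)      # oleic acid number density (cm^-3)

dt = 1e-6                            # explicit Euler time step (s)
for _ in range(100_000):             # integrate 0.1 s
    ozone[0] = 1e13                  # surface layer held at a quasi-steady value
    flux = D * np.diff(ozone) / dr**2
    ozone[:-1] += flux * dt          # Fickian exchange between neighbouring layers
    ozone[1:]  -= flux * dt          # (no-flux condition at the particle core)
    loss = k * ozone * oleic * dt    # second-order reactive loss in each layer
    ozone -= loss
    oleic -= loss

print("ozone penetration profile (cm^-3):", np.array2string(ozone, precision=2))
```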
Abstract:
In financial decision-making, a number of mathematical models have been developed for financial management in construction. However, the need to optimize both qualitative and quantitative factors, together with the semi-structured nature of construction finance optimization problems, poses key challenges in solving construction finance decisions. Here, the selection of funding schemes is formulated as a modified construction loan acquisition model and solved by an adaptive genetic algorithm (AGA) approach. The basic objectives of the model are to optimize the loan and to minimize the interest payments across all projects. Multiple projects undertaken by a medium-size construction firm in Hong Kong were used as a real case study to demonstrate the application of the model to borrowing decision problems, and a compromise monthly borrowing schedule was achieved. The results indicate that the Small and Medium Enterprise (SME) Loan Guarantee Scheme (SGS) was identified first as the source of external financing. Sources of funding can then be selected so as to avoid financial problems in the firm, by classifying qualitative factors into external, interactive and internal types and by taking additional qualitative factors, including sovereignty, credit ability and networking, into consideration. A more accurate, objective and reliable borrowing decision can thus be provided for the decision-maker when analysing the financial options.
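A bare-bones genetic algorithm for a toy loan-scheme selection problem is sketched below, in the spirit of (but far simpler than) the AGA approach described above; the schemes, rates, limits and funding requirement are invented.

```python
import random

random.seed(1)
rates  = [0.045, 0.052, 0.060, 0.039, 0.070]  # interest rates of five loan schemes
limits = [2.0, 5.0, 3.0, 1.5, 8.0]            # maximum loan per scheme (HK$ M)
need   = 9.0                                  # total funding required (HK$ M)

def cost(x):
    """Interest paid, with a heavy penalty when funding falls short.
    x[i] is the utilisation fraction of scheme i's limit."""
    borrowed = sum(xi * li for xi, li in zip(x, limits))
    interest = sum(xi * li * r for xi, li, r in zip(x, limits, rates))
    return interest + 10.0 * max(0.0, need - borrowed)

def mutate(x):
    """Perturb one scheme's utilisation, keeping it in [0, 1]."""
    i = random.randrange(len(x))
    y = list(x)
    y[i] = min(1.0, max(0.0, y[i] + random.uniform(-0.3, 0.3)))
    return y

pop = [[random.random() for _ in range(5)] for _ in range(40)]
for _ in range(200):                          # keep the fittest half, mutate to refill
    pop.sort(key=cost)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]

best = min(pop, key=cost)
print("utilisation per scheme:", [round(b, 2) for b in best], "| cost:", round(cost(best), 3))
```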
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists with access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter f which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computing environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
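The flavour of the method can be sketched for a single biallelic locus with a grid posterior over the allele frequency p and the inbreeding coefficient f, including the boundary constraints on f; the genotype counts are hypothetical, and the paper's method is considerably more general.

```python
import numpy as np

n_AA, n_Aa, n_aa = 30, 40, 30              # hypothetical genotype counts

p_grid = np.linspace(0.01, 0.99, 99)       # allele frequency (nuisance parameter)
f_grid = np.linspace(-0.5, 0.99, 150)      # inbreeding coefficient
P, F = np.meshgrid(p_grid, f_grid)

# Genotype probabilities under the inbreeding model
pAA = P**2 + F * P * (1 - P)
pAa = 2 * P * (1 - P) * (1 - F)
paa = (1 - P)**2 + F * P * (1 - P)
valid = (pAA > 0) & (pAa > 0) & (paa > 0)  # boundary constraints on f given p

loglik = np.where(valid,
                  n_AA * np.log(np.where(valid, pAA, 1.0)) +
                  n_Aa * np.log(np.where(valid, pAa, 1.0)) +
                  n_aa * np.log(np.where(valid, paa, 1.0)),
                  -np.inf)
post = np.exp(loglik - loglik.max())       # unnormalized posterior, flat priors
post_f = post.sum(axis=1)                  # marginalize over the allele frequency
post_f /= post_f.sum()
print(f"posterior mean f: {(f_grid * post_f).sum():.3f}")
```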