139 results for Maximum pseudo-likelihood


Relevance: 20.00%

Abstract:

The Lincoln–Petersen estimator is one of the most popular estimators used in capture–recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined whether a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and by neither of the two sources, respectively. However, f00 is unobserved, so the 2 × 2 table is incomplete and the Lincoln–Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count of how many times a unit has been identified. Using a truncated Poisson count model, truncating identification counts larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln–Petersen’s, in terms of bias and efficiency. It also makes it possible to test the homogeneity assumption, which is not testable in the Lincoln–Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey.
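
For reference, the classical estimator itself can be written down in a few lines. The sketch below (Python, with made-up counts) estimates the unobserved cell f00 and the population size under the independence assumption; it is the baseline the paper compares against, not the proposed truncated-Poisson estimator.

```python
# Minimal sketch of the classical Lincoln-Petersen estimator for an
# incomplete 2x2 table; the counts used below are illustrative only.

def lincoln_petersen(f11, f10, f01):
    """Estimate the unobserved cell f00 and the population size N,
    assuming the two sources identify units independently."""
    if f11 == 0:
        raise ValueError("f11 must be positive for the estimator to be defined")
    f00_hat = f10 * f01 / f11          # expected count missed by both sources
    n_hat = f11 + f10 + f01 + f00_hat  # estimated total population size
    return f00_hat, n_hat

f00_hat, n_hat = lincoln_petersen(f11=50, f10=30, f01=40)
print(f00_hat, n_hat)   # 24.0 144.0
```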

Relevance: 20.00%

Abstract:

Background: Polygalacturonase-inhibiting proteins (PGIPs) are leucine-rich repeat (LRR) plant cell wall glycoproteins involved in plant immunity. They are typically encoded by gene families with a small number of gene copies whose evolutionary origin has been poorly investigated. Here we report the complete characterization of the full complement of the pgip family in soybean (Glycine max [L.] Merr.) and the characterization of the genomic region surrounding the pgip family in four legume species.

Results: BAC clone and genome sequence analyses showed that the soybean genome contains two pgip loci. Each locus is composed of three clustered genes, which are induced following infection with the fungal pathogen Sclerotinia sclerotiorum (Lib.) de Bary, and remnant sequences of pgip genes. The analyzed homeologous soybean genomic regions (about 126 kb) that include the pgip loci are strongly conserved, and this conservation extends to the genomes of the legume species Phaseolus vulgaris L., Medicago truncatula Gaertn. and Cicer arietinum L., each containing a single pgip locus. Maximum likelihood-based gene trees suggest that the genes within the pgip clusters have independently undergone tandem duplication in each species.

Conclusions: The paleopolyploid soybean genome contains two pgip loci located within large and highly conserved duplicated regions, which are also conserved in bean, M. truncatula and C. arietinum. The genomic features of these legume pgip families suggest that the forces driving the evolution of pgip genes follow the birth-and-death model, similar to that proposed for the evolution of resistance (R) genes of the NBS-LRR type.

Relevance: 20.00%

Abstract:

∆14Catm has been estimated as 420 ± 80‰ (IntCal09) during the Last Glacial Maximum (LGM) compared to preindustrial times (0‰), but the mechanisms explaining this difference are not yet resolved. ∆14Catm is a function both of cosmogenic production in the high atmosphere and of carbon cycling and partitioning in the Earth system. 10Be-based reconstructions show a contribution of the cosmogenic production term of only 200 ± 200‰ in the LGM. The remaining 220‰ must therefore be explained by changes in the carbon cycle. Recently, Bouttes et al. (2010, 2011) proposed to explain most of the difference in pCO2atm and δ13C between glacial and interglacial times as a result of brine-induced ocean stratification in the Southern Ocean. This mechanism involves the formation of very saline water masses that contribute to high carbon storage in the deep ocean. During glacial times, the sinking of brines is enhanced and more carbon is stored in the deep ocean, lowering pCO2atm. Moreover, the sinking of brines induces increased stratification in the Southern Ocean, which keeps the deep ocean well isolated from the surface. Such an isolated ocean reservoir would be characterized by a low ∆14C signature. Evidence of such 14C-depleted deep waters during the LGM has recently been found in the Southern Ocean (Skinner et al. 2010). The degassing of this carbon with low ∆14C would then reduce ∆14Catm throughout the deglaciation. We have further developed the CLIMBER-2 model to include a cosmogenic production of 14C as well as an interactive atmospheric 14C reservoir. We investigate the role of both the sinking of brine and cosmogenic production, alongside iron fertilization mechanisms, to explain changes in ∆14Catm during the last deglaciation. In our simulations, not only is the sinking of brine mechanism consistent with past ∆14C data, but it also explains most of the differences in pCO2atm and ∆14Catm between the LGM and preindustrial times. Finally, to our knowledge this is the first study in which a model experiment explains glacial-interglacial differences in pCO2atm, δ13C, and ∆14C together with a coherent LGM climate.
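
As a rough back-of-the-envelope illustration (not part of the CLIMBER-2 experiments, and ignoring the finer points of the ∆14C normalization), an isolated deep reservoir acquires a low ∆14C signature simply through radioactive decay: a water mass cut off from the atmosphere for t years retains a fraction e^(-t/τ) of its radiocarbon, with τ ≈ 8267 yr the mean life of 14C, so

$$ \Delta^{14}\mathrm{C}(t) \approx \left[\left(1 + \frac{\Delta^{14}\mathrm{C}_0}{1000}\right) e^{-t/8267} - 1\right] \times 1000\ \text{‰}. $$

For a reservoir starting near 0‰ and isolated for about 2,000 years, this gives roughly −215‰, illustrating how isolation on millennial timescales can produce depletions of a few hundred per mil.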

Relevance: 20.00%

Abstract:

Phase studies have been performed for quaternary systems composed of egg lecithin, cosurfactant, water and oil. The lecithin used was the commercially available egg lecithin Ovothin 200 (which comprises ≥ 92% phosphatidylcholine). The cosurfactants employed were propanol and butanol, and these were used at lecithin/cosurfactant mixing ratios (Km) of 1:1 and 1.94:1 (weight basis). Six polar oils were investigated, including the alkanoic acids, octanoic and oleic, their corresponding ethyl esters and the medium and long chain triglycerides, Miglyol 812 and soybean oil. All oils, irrespective of the alcohol and the Km used, gave rise to systems that produced a stable isotropic region along the surfactant/oil axis (designated as a reverse microemulsion system). In addition, the systems incorporating propanol at both Km values and butanol at a Km of 1.94:1 generally gave rise to a liquid crystalline region and, in some cases, a second isotropic non-birefringent area (designated as a normal microemulsion system). The phase behaviour observed was largely dependent upon the alcohol and Km used and the size and the polarity of the oil present.

Relevance: 20.00%

Abstract:

Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows a complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Better descriptions of physical processes at the watershed scale therefore need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations, originated through concurrent hydrological processes within a storm event, was significant on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with highly different behavior.
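
To make the GLUE step concrete, here is a minimal sketch in Python. The toy rainfall-runoff model, parameter ranges, behavioural threshold and likelihood measure are placeholders chosen for illustration; they are not those of the study.

```python
# GLUE-style sketch: sample parameter sets at random, keep the "behavioural"
# runs whose likelihood measure exceeds a threshold, and use the
# likelihood-weighted ensemble of behavioural runs as prediction bounds.
import numpy as np

rng = np.random.default_rng(0)

def run_model(params, rain):
    """Toy rainfall-runoff model: depression storage then linear routing."""
    depression_storage, roughness = params
    effective_rain = np.clip(rain - depression_storage, 0.0, None)
    return effective_rain / roughness

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = rng.gamma(2.0, 2.0, size=200)                      # synthetic storm forcing
observed = run_model((1.0, 3.0), rain) + rng.normal(0, 0.2, 200)

behavioural, weights = [], []
for _ in range(5000):
    params = (rng.uniform(0.0, 5.0), rng.uniform(1.0, 10.0))
    sim = run_model(params, rain)
    if nash_sutcliffe(sim, observed) > 0.6:               # behavioural threshold
        behavioural.append(sim)
        weights.append(nash_sutcliffe(sim, observed))

# Assumes at least one behavioural run was found
behavioural = np.array(behavioural)
weights = np.array(weights) / np.sum(weights)

def weighted_quantile(values, w, q):
    order = np.argsort(values)
    cdf = np.cumsum(w[order])
    return np.interp(q, cdf / cdf[-1], values[order])

lower = [weighted_quantile(behavioural[:, t], weights, 0.05) for t in range(rain.size)]
upper = [weighted_quantile(behavioural[:, t], weights, 0.95) for t in range(rain.size)]
print(len(behavioural), float(np.mean(lower)), float(np.mean(upper)))
```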

Relevance: 20.00%

Abstract:

There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as the filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of the maximum size observed within individual orders globally and on separate continents. While the maximum sizes of individual orders of large land mammals show differences and comprise several families, the times at which orders reach their maximum size show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and a global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low; it is statistically the most robust peak and is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing.

Relevance: 20.00%

Abstract:

The equations of Milsom are evaluated, giving the ground range and group delay of radio waves propagated via the horizontally stratified model ionosphere proposed by Bradley and Dudeney. Expressions for the ground range which allow for the effects of the underlying E- and F1-regions are used to evaluate the basic maximum usable frequency or M-factors for single F-layer hops. An algorithm for the rapid calculation of the M-factor at a given range is developed, and shown to be accurate to within 5%. The results reveal that the M(3000)F2-factor scaled from vertical-incidence ionograms using the standard URSI procedure can be up to 7.5% in error. A simple addition to the algorithm effects a correction to ionogram values to make these accurate to 0.5%.
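
For orientation only, the sketch below computes an M-factor from simple mirror-reflection geometry over a curved Earth (a single hop with a fixed reflection height); it illustrates what the M-factor represents but is not Milsom's formulation, and the range and height used are assumed values.

```python
# Secant-law M-factor for a mirror-like reflection at a fixed height over a
# spherical Earth; illustrative geometry only, not the Bradley-Dudeney/Milsom model.
import math

R_E = 6371.0   # Earth radius, km

def m_factor(ground_range_km, reflection_height_km):
    gamma = ground_range_km / (2.0 * R_E)                 # half the great-circle angle
    beta = math.atan2(math.cos(gamma) - R_E / (R_E + reflection_height_km),
                      math.sin(gamma))                    # elevation angle at the ground
    sin_phi0 = R_E * math.cos(beta) / (R_E + reflection_height_km)
    return 1.0 / math.sqrt(1.0 - sin_phi0 ** 2)           # sec of the incidence angle

print(round(m_factor(3000.0, 300.0), 2))  # about 3.3 for a 300 km reflection height
```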

Relevance: 20.00%

Abstract:

This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. The 30-minute Ship-Borne Wave Recorder measurements of Hmax and Hs are shown to be consistent with theoretical wave distributions. The linear regression between Hmax and Hs has a slope of 1.53. Neither Hs nor Hmax show a significant trend in the period 2000–2009. These data are combined with earlier observations. The long-term trend over the period 1980–2009 in annual Hs is 2.72 ± 0.88 cm/year. Mean Hs and Hmax are both correlated with the North Atlantic Oscillation (NAO) index during winter. The correlation with the NAO index is highest for the more frequently encountered (75th percentile) wave heights. The wave field variability associated with the NAO index is reconstructed using a 500-year NAO index record. Hs and Hmax are found to vary by up to 1.42 m and 3.10 m respectively over the 500-year period. Trends in all 30-year segments of the reconstructed wave field are lower than the trend in the observations during 1980–2009. The NAO index does not change significantly in 21st century projections from CMIP5 climate models under scenario RCP8.5, and thus no NAO-related changes are expected in the mean and extreme wave fields of the Norwegian Sea.
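
For context, the reported Hmax/Hs slope can be compared with the standard narrow-band result (not part of the paper's analysis): if the heights of the N individual waves in a record are Rayleigh distributed, the most probable maximum height is approximately

$$ H_{\max} \approx H_s \sqrt{\tfrac{1}{2}\ln N}. $$

For a 30-minute record with a typical wave period of about 10 s (N ≈ 180, an assumed value), this gives Hmax/Hs ≈ 1.6, of the same order as the observed regression slope of 1.53.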

Relevance: 20.00%

Abstract:

This article offers a fresh view on the poetics of the pseudo-Vergilian poem Aetna, proposing a carefully planned and executed structure that is supported by a deliberate arrangement of key terms in the poem as well as a network of verbal cross-references.

Relevance: 20.00%

Abstract:

Fire activity has varied globally and continuously since the last glacial maximum (LGM) in response to long-term changes in global climate and shorter-term regional changes in climate, vegetation, and human land use. We have synthesized sedimentary charcoal records of biomass burning since the LGM and present global maps showing changes in fire activity for time slices during the past 21,000 years (as differences in charcoal accumulation values compared to pre-industrial). There is strong broad-scale coherence in fire activity after the LGM, but spatial heterogeneity in the signals increases thereafter. In North America, Europe and southern South America, charcoal records indicate less-than-present fire activity during the deglacial period, from 21,000 to ∼11,000 cal yr BP. In contrast, the tropical latitudes of South America and Africa show greater-than-present fire activity from ∼19,000 to ∼17,000 cal yr BP, and most sites from Indochina and Australia show greater-than-present fire activity from 16,000 to ∼13,000 cal yr BP. Many sites indicate greater-than-present or near-present activity during the Holocene, with the exception of eastern North America and eastern Asia from 8,000 to ∼3,000 cal yr BP, Indonesia and Australia from 11,000 to 4,000 cal yr BP, and southern South America from 6,000 to 3,000 cal yr BP, where fire activity was less than present. Regional coherence in the patterns of change in fire activity was evident throughout the post-glacial period. These complex patterns can largely be explained in terms of large-scale climate controls modulated by local changes in vegetation and fuel load.

Relevance: 20.00%

Abstract:

Weeds tend to aggregate in patches within fields and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties. We have validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales, we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
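
As one concrete ingredient of this kind of analysis, the sketch below computes an empirical semivariogram from point observations, which is the quantity used to describe how variance accumulates with separation distance before mapping by kriging. The coordinates and values are synthetic, and the REML variance-component fitting itself is not reproduced here.

```python
# Minimal empirical semivariogram sketch: for pairs of points whose separation
# falls in each distance bin, average half the squared difference in values.
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)           # each pair counted once
    d, sqdiff = d[iu], sqdiff[iu]
    gamma = [sqdiff[(d >= lo) & (d < hi)].mean()
             for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
    return np.array(gamma)

# Synthetic example: values with smooth spatial structure plus noise
rng = np.random.default_rng(1)
coords = rng.uniform(0, 400, size=(300, 2))          # field positions in metres
values = np.sin(coords[:, 0] / 60.0) + rng.normal(0, 0.3, 300)
print(empirical_variogram(coords, values, np.arange(0, 200, 25)))
```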

Relevance: 20.00%

Abstract:

We describe the creation of data sets describing changes related to the presence of ice sheets, including ice-sheet extent and height, ice-shelf extent, and the distribution and elevation of ice-free land at the Last Glacial Maximum (LGM), which were used in LGM experiments conducted as part of the fifth phase of the Coupled Modelling Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, which were all obtained using a model-inversion approach but differ in the assumptions used in the modelling and in the type of data used as constraints. The ice-sheet extent in the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Although largely confined to the ice-covered regions, differences between the climate response to the individual LGM reconstructions extend over the North Atlantic Ocean and Northern Hemisphere continents, partly through atmospheric stationary waves. Differences between the climate response to the CMIP5/PMIP3 composite and any individual ice-sheet reconstruction are smaller than those between the CMIP5/PMIP3 composite and the ice sheet used in the last phase of PMIP (PMIP2).

Relevance: 20.00%

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that, for Gaussian error statistics, the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are then introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
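
To make the ensemble idea concrete, here is a minimal sketch of a stochastic ensemble-smoother analysis step with perturbed observations (a common variant, not necessarily the exact scheme of the paper). The toy state, observation operator and data are made up; the point is that the update uses ensemble covariances instead of adjoint integrations.

```python
# Ensemble-smoother update sketch: shift each ensemble member toward the data
# using covariances estimated from the ensemble itself.
import numpy as np

def ensemble_smoother_update(X, Y, d, R, rng):
    """X: (n_state, n_ens) ensemble of model states
       Y: (n_obs, n_ens) predicted observations for each member
       d: (n_obs,) observed data,  R: (n_obs, n_obs) observation-error covariance."""
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xa @ Ya.T / (n_ens - 1)                 # state-observation covariance
    C_yy = Ya @ Ya.T / (n_ens - 1)                 # observation-observation covariance
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T  # perturbed obs
    increment = C_xy @ np.linalg.solve(C_yy + R, D - Y)                     # Kalman-type update
    return X + increment

# Tiny synthetic example with a 3-variable state and 2 observations (made up)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 100))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Y = H @ X
R = 0.1 * np.eye(2)
d = np.array([0.5, -0.3])
X_analysis = ensemble_smoother_update(X, Y, d, R, rng)
print(X_analysis.mean(axis=1))
```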