132 results for Ancestral range estimation
Abstract:
Studies conducted on volcanic islands have greatly contributed to our current understanding of how organisms diversify. The Canary Islands archipelago, located off the north-western coast of Africa, harbours a large number of endemic taxa. Because of their low vagility, mygalomorph spiders are usually absent from oceanic islands. The spider Titanidiops canariensis, which inhabits the easternmost islands of the archipelago, constitutes an exception to this rule. Here, we use a multi-locus approach that combines three mitochondrial and four nuclear genes to investigate the origins and phylogeography of this remarkable trap-door spider. We provide a timeframe for the colonisation of the Canary Islands using two alternative approaches: concatenation and species tree inference in a Bayesian relaxed clock framework. Additionally, we investigate the existence of cryptic species on the islands by means of a Bayesian multi-locus species delimitation method. Our results indicate that T. canariensis colonised the Canary Islands once, most likely during the Miocene, although discrepancies between the timeframes from the different approaches make the exact timing uncertain. A complex evolutionary history for the species in the archipelago is revealed, which involves two independent colonisations of Fuerteventura from the ancestral range of T. canariensis in northern Lanzarote and a possible back colonisation of southern Lanzarote. The data further corroborate a previously proposed volcanic refugium, highlighting the impact of the dynamic volcanic history of the island on the phylogeographic patterns of the endemic taxa. T. canariensis includes at least two different species, one inhabiting the Jandia peninsula and central Fuerteventura and the other spanning from central Fuerteventura to Lanzarote. Our data suggest that the extant northern African Titanidiops lineages may have expanded into the region after the islands were colonised and, hence, are not the source of colonisation. In addition, T. maroccanus may harbour several cryptic species.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. It is shown that, as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of DLV models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
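As a rough illustration of the idea (a minimal Python sketch with a toy model and hypothetical names, not the paper's implementation), the code below simulates a long path from a dynamic latent variable model at a trial parameter value, estimates the conditional moment E[y_t | y_{t-1}] by Nadaraya-Watson kernel smoothing of the simulation, and forms a GMM-style objective from the discrepancy with the observed data:

```python
import numpy as np

def simulate(theta, n, seed=0):
    """Toy dynamic latent variable model: AR(1) latent state observed with noise."""
    rho, sigma = theta
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = rho * h[t - 1] + eps[t]
    return h + rng.normal(0.0, 1.0, n)          # observed series y_t

def kernel_conditional_mean(x_sim, y_sim, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x] from the simulation, evaluated at x_eval."""
    u = (x_eval[:, None] - x_sim[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                   # Gaussian kernel weights
    return (w @ y_sim) / w.sum(axis=1)

def smm_objective(theta, y_obs, n_sim=10_000, bandwidth=0.3):
    """GMM-style criterion built from the simulated conditional moment E[y_t | y_{t-1}]."""
    y_sim = simulate(theta, n_sim)
    m_hat = kernel_conditional_mean(y_sim[:-1], y_sim[1:], y_obs[:-1], bandwidth)
    g = (y_obs[1:] - m_hat) * y_obs[:-1]        # residual orthogonal to the conditioning variable
    return float(np.mean(g) ** 2)               # one moment condition, identity weighting

y_obs = simulate((0.7, 0.5), 300, seed=1)
print(smm_objective((0.7, 0.5), y_obs))         # at the data-generating parameters: small criterion
print(smm_objective((0.2, 0.5), y_obs))         # farther from the truth: typically a larger criterion
```

A single moment condition and identity weighting are used purely to keep the sketch short; the estimator described above uses a full set of moment conditions with appropriate weighting and simulation-adjusted standard errors.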
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of demand in the model, but researchers and practitioners have been willing to overlook this for the sake of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds a further layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
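The binomial analogy can be made concrete with a small sketch (Python, illustrative only; this is not the estimation heuristic proposed in the paper). Treating per-period sales as draws from Bin(N, p) with both parameters unknown, a naive method-of-moments fit already shows how fragile the market-size estimate can be:

```python
import numpy as np

rng = np.random.default_rng(42)
true_N, true_p = 200, 0.05                       # market size and purchase probability
sales = rng.binomial(true_N, true_p, size=30)    # observed sales per period; no-purchases unseen

# Method of moments for Bin(N, p): mean = N*p, variance = N*p*(1-p)
m, v = sales.mean(), sales.var(ddof=1)
p_hat = 1.0 - v / m            # collapses toward 0 (or goes negative) whenever v is close to m
N_hat = m / p_hat if p_hat > 0 else np.inf
print(f"p_hat = {p_hat:.3f}, N_hat = {N_hat:.1f}  (true p = {true_p}, N = {true_N})")
```

Whenever the sample variance approaches the sample mean, p_hat collapses and N_hat explodes, which is exactly why the problem is described above as challenging and why the proposed heuristic leans on the structure of the multinomial-logit purchase model and the variety of offer sets.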
Abstract:
The amino acid composition of the protein from three strains of rat (Wistar, Zucker lean and Zucker obese), subjected to reference and high-fat diets, has been used to determine the mean empirical formula, molecular weight and N content of whole-rat protein. The combined whole protein of the rat was uniform across the six experimental groups, with an estimated 17.3% N and a mean aminoacyl residue molecular weight of 103.7. This suggests that the appropriate protein factor for the calculation of rat protein from its N content should be 5.77 instead of the classical 6.25. In addition, an estimate of the size of the non-protein N mass in the whole rat gave a figure of about 5.5% of all N. The combination of the two calculations gives a protein factor of 5.5 for the conversion of total N into rat protein.
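The arithmetic behind the two conversion factors can be reproduced directly (a minimal sketch in Python; the 17.3% N content and the roughly 5.5% non-protein N share are the figures quoted above):

```python
# Grams of protein per gram of protein N, from the measured 17.3% N content
factor_protein_n = 100.0 / 17.3                    # ≈ 5.78; the abstract reports 5.77
# Correcting for non-protein N (≈ 5.5% of total N) when converting total N to protein
factor_total_n = factor_protein_n * (1.0 - 0.055)  # ≈ 5.46, consistent with the rounded 5.5
print(round(factor_protein_n, 2), round(factor_total_n, 2))
```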
Abstract:
The aim of this study is to provide an effective and quick reference guide based on the most useful European formulae recently published for subadult age estimation. All of these formulae derive from studies of postnatal growth of the scapula, innominate, femur, and tibia, based on modern skeletal data (173 ♂, 173 ♀) from five documented collections from Spain, Portugal, and Britain. The formulae were calculated using inverse regression. For this reason, they are especially useful for modern samples from Western Europe and, in particular, for 20th-century human remains from the Iberian Peninsula. Eleven formulae were selected as the most useful because they can be applied to individuals across a wide age range and to individuals of unknown sex. Due to their high reliability, and because they derive from documented European skeletal samples, we recommend these formulae be used on individuals of Caucasoid ancestry from Western Europe.
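A minimal sketch of how an inverse-regression age formula of this kind is built (Python, with entirely hypothetical calibration data; the published formulae and their coefficients are not reproduced here). Age is regressed directly on the skeletal measurement, so the fitted equation can be applied straight to a measured bone length:

```python
import numpy as np

# Hypothetical calibration data: femoral diaphyseal length (mm) against known age (years)
length = np.array([80, 120, 160, 210, 260, 310, 350, 390], dtype=float)
age = np.array([0.2, 1.0, 2.5, 4.5, 7.0, 10.0, 12.5, 15.0])

# Inverse regression: age is the response, the skeletal measurement the predictor
slope, intercept = np.polyfit(length, age, 1)
print(f"age ≈ {intercept:.2f} + {slope:.4f} * femur_length")
print("estimated age for a 230 mm femur:", round(intercept + slope * 230, 1), "years")
```

Regressing age on the measurement, rather than the reverse followed by inversion, is what makes the resulting formula directly usable on remains of unknown age.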
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may lie outside the interval [0,1]. We solve this problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
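The point about the kernel can be illustrated with a small sketch (Python, synthetic data; not the data or code of the comment). A higher-order kernel that takes negative values can push Nadaraya-Watson estimates of the participation probability outside [0, 1], whereas a nonnegative kernel used for local smoothing keeps them in the unit interval by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = (x + rng.normal(size=400) > 0).astype(float)   # binary participation indicator

def nw(x0, x, y, kernel, h=0.25):
    """Nadaraya-Watson estimate of P(y = 1 | x = x0)."""
    w = kernel((x0 - x) / h)
    return float(w @ y / w.sum())

gauss = lambda u: np.exp(-0.5 * u ** 2)   # nonnegative (Gaussian) kernel
# Fourth-order kernel on [-1, 1]: integrates to one, has zero second moment, takes negative values
order4 = lambda u: (15 / 32) * (3 - 10 * u ** 2 + 7 * u ** 4) * (np.abs(u) <= 1)

grid = np.linspace(-2, 2, 9)
p_gauss = [nw(x0, x, y, gauss) for x0 in grid]
p_order4 = [nw(x0, x, y, order4) for x0 in grid]
print("outside [0,1], nonnegative kernel:  ", any(p < 0 or p > 1 for p in p_gauss))
print("outside [0,1], higher-order kernel: ", any(p < 0 or p > 1 for p in p_order4))
```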
Abstract:
Report for the scientific stay at the California Institute of Technology during the summer of 2005. ByoDyn is a tool for simulating the dynamical expression of gene regulatory networks (GRNs) and for parameter estimation in uni- and multicellular models. Software support was implemented for describing GRNs in the Systems Biology Markup Language (SBML), a computer-readable format for representing and storing computational models of biochemical pathways in software tools and databases. Supporting this format gives ByoDyn a wide range of possibilities for studying the dynamical properties of multiple regulatory pathways.
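As an illustration of what SBML support involves (a minimal sketch using the python-libsbml bindings rather than ByoDyn's own interface; the file name is hypothetical), a tool can load an SBML file and walk the species and reactions that define the network:

```python
import libsbml  # python-libsbml, the standard SBML parsing library

doc = libsbml.readSBML("grn_model.xml")   # hypothetical SBML file describing a GRN
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
print("species:", model.getNumSpecies(), "reactions:", model.getNumReactions())
for i in range(model.getNumSpecies()):
    s = model.getSpecies(i)
    print(s.getId(), s.getInitialConcentration())
```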
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans using the Partial Least Squares (PLS) regression technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two methodologies for selecting the variables to be included in the prediction equation: Variable Importance for Projection (VIP) and stepwise selection. The error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and VIP-based selection was 0.82%, and for stepwise selection it was 0.83%. The prediction of the LMP when scanning only the ham had an RMSEPCV of 0.97%, and when the ham and the loin were scanned the RMSEPCV was 0.90%. The results indicate that, for CT data, both VIP and stepwise selection are good variable-selection methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
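A rough outline of the prediction step (Python with scikit-learn and synthetic data; the CT variables, the VIP computation and the stepwise procedure of the paper are not reproduced): fit a PLS model on CT-derived predictors and assess it with a cross-validated root mean squared error of prediction, analogous to the RMSEPCV reported above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))                     # e.g. CT density/volume variables per carcass
lmp = 55 + X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.8, size=120)   # synthetic LMP (%)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, lmp, cv=10).ravel()   # 10-fold cross-validated predictions
rmsepcv = np.sqrt(np.mean((lmp - pred) ** 2))
print(f"RMSEPCV = {rmsepcv:.2f} % LMP")
```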
Abstract:
The properties of GMM estimators for panel data, which have become very popular in the empirical economic growth literature, are not well known when the number of individuals is small. This paper analyses, through Monte Carlo simulations, the properties of various GMM and other estimators when the number of individuals is of the size typically available in country growth studies. It is found that, provided some persistence is present in the series, the system GMM estimator has lower bias and higher efficiency than all the other estimators analysed, including the standard first-differences GMM estimator.
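A stripped-down version of such a Monte Carlo exercise is sketched below (Python, illustrative only; it simulates the dynamic panel data-generating process and documents the small-sample bias of a naive pooled estimator, without reimplementing the difference or system GMM estimators compared in the paper):

```python
import numpy as np

def simulate_panel(n_countries=20, n_periods=25, rho=0.8, seed=None):
    """Dynamic panel DGP: y_it = rho * y_i,t-1 + eta_i + eps_it, with country effects eta_i."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(size=n_countries)
    y = np.zeros((n_countries, n_periods))
    for t in range(1, n_periods):
        y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=n_countries)
    return y

def pooled_ols_rho(y):
    """Naive pooled OLS of y_it on y_i,t-1, ignoring the country effects."""
    x, z = y[:, :-1].ravel(), y[:, 1:].ravel()
    return float(np.cov(x, z)[0, 1] / np.var(x, ddof=1))

estimates = [pooled_ols_rho(simulate_panel(seed=s)) for s in range(500)]
print("true rho = 0.8, mean pooled-OLS estimate =", round(float(np.mean(estimates)), 3))
```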
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to the data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation, chosen from a class of transformations, that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code used in the analysis so that readers, both students and educators, can fully explore the techniques described.
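A compact version of the nonparametric step (in Python rather than the R code provided in the paper's Appendix, and with synthetic claim amounts): estimate the severity density by kernel smoothing on the log scale and transform back, which illustrates why transforming before applying the kernel helps with strongly skewed claims data:

```python
import numpy as np
from scipy.stats import gaussian_kde, lognorm

rng = np.random.default_rng(1)
claims = lognorm(s=1.1, scale=2_000).rvs(size=500, random_state=rng)   # synthetic claim severities

kde_raw = gaussian_kde(claims)           # kernel density estimate on the original scale
kde_log = gaussian_kde(np.log(claims))   # kernel density estimate after a log-transformation

x = np.linspace(100, 20_000, 5)
density_from_log_fit = kde_log(np.log(x)) / x   # back-transform: f_X(x) = f_logX(log x) / x
print(np.round(kde_raw(x), 6))
print(np.round(density_from_log_fit, 6))
```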
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented in which the SCR is calculated according to the Standard Model approach. The requirement is then recalculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To assess the impact of these model assumptions on the SCR, we conduct a sensitivity analysis, examining changes in the correlation matrix between lines of business and the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
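A condensed sketch of the Internal Model step (Python, with made-up marginal distributions and correlation; not the paper's data or calibration): simulate dependent underwriting losses for two lines of business through a Gaussian copula and read the capital requirement off the simulated one-year aggregate at the 99.5% level:

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(7)
n = 100_000
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])   # assumed correlation between the two lines of business

# Gaussian copula: correlated normals -> uniforms -> line-of-business loss marginals
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)
u = norm.cdf(z)
loss_a = lognorm(s=0.6, scale=100).ppf(u[:, 0])   # hypothetical marginal for line A
loss_b = lognorm(s=0.9, scale=60).ppf(u[:, 1])    # hypothetical marginal for line B

total = loss_a + loss_b
scr = np.quantile(total, 0.995) - total.mean()    # capital over the expected result at 99.5%
print(f"simulated SCR ≈ {scr:,.1f}")
```

Changing the correlation matrix or swapping the Gaussian copula for another family is the kind of sensitivity analysis described above.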
Abstract:
This research paper seeks to bring into view the present-day situation of Native-American narrative in English. It is divided into four chapters. The first deals with the emergence of what we might call a Native-American narrative style and its evolution from 1900 up until its particularly forceful expression in 1968 with the appearance of N. Scott Momaday’s novel House Made of Dawn. To trace this evolution, we follow the chronology set forth by Paula Gunn Allen in her anthology Voice of the Turtle: American Indian Literature 1900-1970. In the second chapter we hear various voices from contemporary Native-American literary production as we follow Simon J. Ortiz’s anthology Speaking for the Generations: Native Writers on Writing. Noteworthy among these are Leslie Marmon Silko and Gloria Bird, alongside new voices such as those of Esther G. Belin and Daniel David Moses, and closing with Guatemalan-Mayan Victor D. Montejo, exiled in the United States. These writers’ contributions gravitate around two fundamental notions: the interdependence between human beings and the surrounding landscape, and the struggle for survival, which of necessity involves the deconstruction of the (post-)colonial subject. The third chapter deals with an anthology of short stories and poems by present-day Native-American women writers, edited by Joy Harjo and Gloria Bird and entitled Reinventing the Enemy’s Language: Contemporary Native Women’s Writings of North America. It too exemplifies personal and cultural reaffirmation on a landscape rich in ancestral elements, but also one where one’s own voice takes shape in the language which, historically, is that of the enemy. In the final chapter we see how translation studies provide a critical perspective and fruitful reflection on the literary production of Native-American translative cultures, where a wide range of writers struggle to bring about the affirmative deconstruction of the colonised subject. Thus there comes a turnaround in the function of the “enemy’s language,” giving rise also to the question of cultural incommensurability.
Abstract:
Report for the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first project, we employed Energy-Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes as it relates to the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed that, for the annelated benzenoid systems studied here, electron density is more concentrated on the outer rings than in the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second project, we employed the Turbomole program package and density functionals of the best performance in the current state of the art to explore reaction mechanisms in noble gas chemistry. In particular, we were interested in compounds of the form H--Ng--Ng--F (where Ng (noble gas) = Ar, Kr and Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and harder to observe experimentally.
Abstract:
This paper examines why a financial entity’s solvency capital may be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained under independence between random variables than under comonotonicity. The paper stresses, therefore, the relationship between dependence structures and capital estimation.
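A minimal sketch of that comparison (Python, with illustrative heavy-tailed marginals rather than the paper's cases): compute the Value-at-Risk of an aggregate of two risks under independence and under comonotonicity, where the comonotonic VaR is simply the sum of the marginal VaRs; for sufficiently heavy-tailed marginals the independent aggregate can turn out to be the riskier one, which is the failure of subadditivity discussed above:

```python
import numpy as np
from scipy.stats import pareto

alpha, level, n = 0.9, 0.99, 1_000_000   # tail index below 1 gives a very heavy tail
rng = np.random.default_rng(3)

x = pareto(b=alpha).rvs(size=n, random_state=rng)
y = pareto(b=alpha).rvs(size=n, random_state=rng)

var_independent = np.quantile(x + y, level)        # aggregate VaR under independence
var_comonotonic = 2 * pareto(b=alpha).ppf(level)   # comonotonicity: VaR adds across the two risks
print(f"VaR {level:.0%} under independence:   {var_independent:,.0f}")
print(f"VaR {level:.0%} under comonotonicity: {var_comonotonic:,.0f}")
```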