928 results for python django bootstrap


Relevance:

10.00%

Publisher:

Abstract:

Single nucleotide polymorphisms (SNPs) may be used in biodiversity studies and in commercial tasks such as traceability, paternity testing and selection for suitable genotypes. Twenty-seven SNPs were characterized and genotyped in 250 individuals belonging to eight Italian goat breeds. Multilocus genotype data were used to infer population structure and assign individuals to populations. To estimate the number of groups (K) to test in the population structure analysis we used likelihood values and the variance of the bootstrap samples, deriving the optimal K from a drop in the likelihood and a rise in the variance when plotted against K.
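The K-selection criterion described above can be sketched in a few lines. The data layout (a dict of K to bootstrap log-likelihoods) and the exact stopping rule are illustrative assumptions, not the authors' procedure:

```python
import statistics

def optimal_k(bootstrap_loglik):
    """Pick the number of groups K from bootstrap log-likelihoods:
    return the last K before the mean likelihood drops and the
    bootstrap variance rises (a hypothetical selection rule sketching
    the criterion described above)."""
    ks = sorted(bootstrap_loglik)
    mean = {k: statistics.mean(v) for k, v in bootstrap_loglik.items()}
    var = {k: statistics.variance(v) for k, v in bootstrap_loglik.items()}
    for prev, curr in zip(ks, ks[1:]):
        if mean[curr] < mean[prev] and var[curr] > var[prev]:
            return prev
    return ks[-1]
```

In practice the likelihood and variance curves would be inspected visually as well, since the drop and rise need not coincide at a single K.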

Relevance:

10.00%

Publisher:

Abstract:

In order to explore the genetic diversity within Echinococcus multilocularis (E. multilocularis), the cestode responsible for alveolar echinococcosis (AE) in humans, a microsatellite composed of (CA) and (GA) repeats, designated EmsB, was isolated and characterized with a view to its nature and potential field application. PCR amplification with specific primers exhibited a high degree of size polymorphism between E. multilocularis and the Echinococcus granulosus sheep (G1) and camel (G6) strains. Fluorescent PCR was subsequently performed on a panel of E. multilocularis isolates to assess the intra-species polymorphism level. EmsB provided a multi-peak profile, characterized by tandemly repeated microsatellite sequences in the E. multilocularis genome. This "repetition of repeats" feature gave EmsB a high discriminatory power: eight clusters, supported by bootstrap p-values larger than 95%, could be defined among the tested E. multilocularis samples. We were able not only to differentiate the Alaskan from the European samples, but also to detect distinct European isolate clusters. In total, 25 genotypes were defined within 37 E. multilocularis samples. Despite its complexity, this tandemly repeated multi-locus microsatellite possesses the three important features of a molecular marker, i.e. sensitivity, repeatability and discriminatory power. It will permit assessment of the genetic polymorphism of E. multilocularis and detailed investigation of its spatial distribution.
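Grouping samples by the similarity of their multi-peak profiles can be sketched with a greedy single-linkage pass over Euclidean distances. This toy is a stand-in for the hierarchical clustering with bootstrap support used in the study; the profiles and threshold are illustrative:

```python
import math

def cluster_profiles(profiles, threshold):
    """Greedy single-linkage grouping of multi-peak profiles
    (tuples of peak heights) by Euclidean distance; a profile joins
    the first cluster containing any member within `threshold`."""
    clusters = []
    for p in profiles:
        for c in clusters:
            if any(math.dist(p, q) <= threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

A real analysis would instead build a full dendrogram and attach bootstrap p-values to each split.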

Relevance:

10.00%

Publisher:

Abstract:

The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
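The bootstrap confidence intervals mentioned above follow a generic recipe: resample the data with replacement, recompute the estimator, and take percentiles. This sketch is the standard percentile bootstrap, not the paper's exact procedure:

```python
import random
import statistics

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for an arbitrary
    estimator (e.g. a moment-type estimate of the number of motor
    units); resamples the data with replacement n_boot times."""
    rng = random.Random(seed)
    stats = sorted(
        estimator([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

With `estimator` set to the moment-type estimator of N, the returned pair is the approximate (1 - alpha) interval.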

Relevance:

10.00%

Publisher:

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
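The rotation step described above (multiplying residuals by the Cholesky factor of the inverse marginal variance) can be written directly; under a correct model the rotated residuals have identity covariance, since if V⁻¹ = L Lᵀ then cov(Lᵀ r) = Lᵀ V L = I. This is a minimal numpy sketch of that one step, not the full diagnostic:

```python
import numpy as np

def rotated_residuals(resid, V):
    """Multiply marginal residuals by the transpose of the Cholesky
    factor of V^{-1}; under a correctly specified model the rotated
    residuals are approximately uncorrelated with unit variance."""
    L = np.linalg.cholesky(np.linalg.inv(V))  # V^{-1} = L L^T
    return L.T @ resid                        # cov(L^T r) = L^T V L = I
```

The ECDF of these rotated residuals is then compared against the standard normal CDF, with the parametric bootstrap supplying reference curves.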

Relevance:

10.00%

Publisher:

Abstract:

For various reasons, it is important, if not essential, to integrate the computations and code used in data analyses, methodological descriptions, simulations, etc. with the documents that describe and rely on them. This integration allows readers both to verify and to adapt the statements in the documents. Authors can easily reproduce the results in the future, and they can present the document's contents in a different medium, e.g. with interactive controls. This paper describes a software framework for authoring and distributing these integrated, dynamic documents, which contain text, code, data, and any auxiliary content needed to recreate the computations. The documents are dynamic in that the contents, including figures, tables, etc., can be recalculated each time a view of the document is generated. Our model treats a dynamic document as a master or "source" document from which one can generate different views in the form of traditional, derived documents for different audiences. We introduce the concept of a compendium as both a container for the different elements that make up the document and its computations (i.e. text, code, data, ...) and as a means for distributing, managing and updating the collection. The step from disseminating analyses via a compendium to reproducible research is a small one. By reproducible research, we mean research papers with accompanying software tools that allow the reader to directly reproduce the results and employ the methods that are presented in the research paper. Some of the issues involved in paradigms for the production, distribution and use of such reproducible research are discussed.
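The "recalculated each time a view is generated" idea can be illustrated with a toy renderer that re-evaluates embedded code chunks on every render. The `<<...>>` chunk syntax is invented for this sketch and is not the framework's actual markup:

```python
import re

def render(template, env):
    """Re-evaluate code chunks (<<expr>>) each time a view of the
    document is generated, so derived content always reflects the
    current data; a toy illustration of a dynamic document."""
    return re.sub(
        r"<<(.+?)>>",
        lambda m: str(eval(m.group(1), dict(env))),  # fresh namespace per render
        template,
    )
```

If the underlying data in `env` changes, the next render of the same source template produces an updated view, which is the essential property of the dynamic documents described above.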

Relevance:

10.00%

Publisher:

Abstract:

The construction of a reliable, practically useful prediction rule for future responses is heavily dependent on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for binary outcomes. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
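The apparent error and its cross-validation counterpart can be sketched generically; `fit` and `predict` are user-supplied callbacks standing in for whatever regression model is under evaluation:

```python
import statistics

def apparent_abs_error(y, yhat):
    """Apparent absolute prediction error: mean |observed - predicted|."""
    return statistics.mean(abs(a - b) for a, b in zip(y, yhat))

def loo_abs_error(x, y, fit, predict):
    """Leave-one-out cross-validated absolute prediction error:
    refit the model without observation i, then score it on i."""
    errs = []
    for i in range(len(y)):
        model = fit(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
        errs.append(abs(y[i] - predict(model, x[i])))
    return statistics.mean(errs)
```

The article's interval estimates come from resampling the distribution of these error estimates, which this sketch does not attempt.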

Relevance:

10.00%

Publisher:

Abstract:

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We find that the moment method, without resmoothing via a smaller bandwidth, produces curves with nicks at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
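A bare-bones moment-type kernel estimate of the occurrence rate pools event times over subjects and smooths them with a kernel of bandwidth h. This sketch assumes full follow-up; the censoring adjustment and resmoothing discussed above are omitted:

```python
import math

def smooth_rate(event_times, n_subjects, t, h):
    """Kernel (Gaussian) moment-type estimate of the occurrence rate
    at time t from the pooled recurrent event times of n_subjects;
    a simplified sketch without censoring corrections."""
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(gauss((t - s) / h) for s in event_times) / (n_subjects * h)
```

Bandwidth h would be chosen by the bootstrap procedure described in the abstract rather than fixed by hand.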

Relevance:

10.00%

Publisher:

Abstract:

Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology’s capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument and, as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The LiDAR point clouds contain information that can provide quantitative surface condition information, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, including the location, volume, and area of spalls. The results were visually and numerically displayed in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation.
When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
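Deriving spall area and volume from a point cloud can be sketched as: fit a reference plane to the deck, flag points more than a threshold below it, and accumulate area over occupied grid cells. This plain-numpy toy stands in for the ArcPy workflow; the threshold and cell size are illustrative assumptions:

```python
import numpy as np

def spall_metrics(points, depth_threshold=0.01, cell=0.05):
    """Rough spall area/volume (same units as the cloud) from an
    (n, 3) array of deck points: least-squares plane fit, then flag
    points more than depth_threshold below the plane and sum the
    area of the grid cells they occupy."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    depth = A @ coef - points[:, 2]              # positive = below plane
    spall = depth > depth_threshold
    cells = {tuple(np.floor(p / cell).astype(int))
             for p in points[spall][:, :2]}
    area = len(cells) * cell * cell
    volume = area * (depth[spall].mean() if spall.any() else 0.0)
    return area, volume
```

A production workflow would also handle deck curvature, scan registration, and the density/incidence-angle issues noted above.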

Relevance:

10.00%

Publisher:

Abstract:

Hardwoods comprise about half of the biomass of forestlands in North America and serve many uses, including economic, ecological and aesthetic functions. Forest trees rely on the genetic variation within tree populations to overcome the many biotic, abiotic and anthropogenic factors, further worsened by climate change, that threaten their continued survival and functionality. To harness these inherent genetic variations of tree populations, informed knowledge of genomic resources and techniques, which is currently lacking or very limited, is imperative for forest managers. The current study therefore aimed to develop genomic microsatellite markers for the leguminous tree species honey locust, Gleditsia triacanthos L., and test their applicability in assessing genetic variation, estimating gene flow patterns and identifying a full-sib mapping population. We also aimed to test the usefulness of already developed nuclear and gene-based microsatellite markers in delineating species and taxonomic relationships between four of the taxonomically difficult Section Lobatae species (Quercus coccinea, Q. ellipsoidalis, Q. rubra and Q. velutina). We recorded 100% amplification of G. triacanthos genomic microsatellites developed using Illumina sequencing techniques in a panel of seven unrelated individuals, with 14 of these showing high polymorphism and reproducibility. When characterized in 36 natural population samples, we recorded 20 alleles per locus with no indication of null alleles at 13 of the 14 microsatellites. This is the first report of genomic microsatellites for this species. Honey locust trees occur in fragmented populations on abandoned farmlands and pastures, and the species is described as essentially dioecious. Pollen dispersal is the main source of gene flow within and between populations, with the ability to offset the effects of random genetic drift.
Factors known to influence gene flow include fragmentation and degree of isolation, which make understanding the patterns of gene flow in fragmented populations of honey locust a necessity for their sustainable management. In this follow-up study, we used a subset of nine of the 14 developed gSSRs to estimate gene flow and identify a full-sib mapping population in two isolated fragments of honey locust. Our analyses indicated that the majority of the seedlings (65-100%, at both strict and relaxed assignment thresholds) were sired by pollen from outside the two fragment populations. Only one selfing event was recorded, confirming the functional dioecy of honey locust and that the seed parents are almost completely outcrossed. From the Butternut Valley, TN population, pollen donor genotypes were reconstructed and used in paternity assignment analyses to identify a relatively large full-sib family comprising 149 individuals, demonstrating the usefulness of isolated forest fragments in the identification of full-sib families. In the Ames Plantation stand, contemporary pollen dispersal followed a fat-tailed exponential-power distribution, an indication of effective gene flow. Our estimate of δ was 4,282.28 m, suggesting that insect pollinators of honey locust disperse pollen over very long distances. The high proportion of pollen influx into our sampled population implies that our fragment population forms part of a large effectively reproducing population. The high tendency of oak species to hybridize while still maintaining their species identity makes it difficult to resolve their taxonomic relationships. Oaks of section Lobatae are notorious in this regard and remain unresolved with both morphological and genetic markers. We applied 28 microsatellite markers, including outlier loci with potential roles in reproductive isolation and adaptive divergence between species, to natural populations of four known interfertile red oaks, Q. coccinea, Q. ellipsoidalis, Q. rubra and Q. velutina.
To better resolve the taxonomic relationships in this difficult clade, we assigned individual samples to species, identified hybrids and introgressive forms, and reconstructed phylogenetic relationships among the four species after exclusion of genetically intermediate individuals. Genetic assignment analyses identified four distinct species clusters, with Q. rubra most differentiated from the three other species, but also a comparatively large number of misclassified individuals (7.14%), hybrids (7.14%) and introgressive forms (18.83%) between Q. ellipsoidalis and Q. velutina. After the exclusion of genetically intermediate individuals, Q. ellipsoidalis grouped as sister species to the largely parapatric Q. coccinea with high bootstrap support (91%). Genetically intermediate forms in a mixed-species stand were located proximate to both potential parental species, which supports recent hybridization of Q. velutina with both Q. ellipsoidalis and Q. rubra. Analyses of genome-wide patterns of interspecific differentiation can provide a better understanding of speciation processes and taxonomic relationships in this taxonomically difficult group of red oak species.
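For the exponential-power dispersal kernel mentioned above, p(r) ∝ exp(-(r/a)^b) in two dimensions (fat-tailed when b < 1), the mean dispersal distance is δ = a·Γ(3/b)/Γ(2/b). A short helper evaluates it; the scale a and shape b here are generic symbols, not the study's fitted values:

```python
import math

def mean_dispersal_distance(a, b):
    """Mean dispersal distance (delta) for a 2-D exponential-power
    pollen dispersal kernel p(r) ~ exp(-(r/a)**b):
    delta = a * Gamma(3/b) / Gamma(2/b)."""
    return a * math.gamma(3.0 / b) / math.gamma(2.0 / b)
```

With b = 2 the kernel is Gaussian and δ reduces to a·√π/2; small b (a fat tail) inflates δ sharply, which is how estimates in the kilometre range arise.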

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this project is to take preliminary steps towards the development of a QUAL2Kw model for Silver Bow Creek, MT. These preliminary steps include initial research and familiarization with QUAL2Kw, use of ArcGIS to fill in geospatial data gaps, and integration of QUAL2Kw and ArcGIS. The integration involves improving the QUAL2Kw model output by adding functionality to the model itself, and developing a QUAL2Kw-specific tool in ArcGIS. These improvements are designed to help expedite and simplify viewing QUAL2Kw output data spatially in ArcGIS, as opposed to graphically within QUAL2Kw, and will allow users to quickly and easily view the many output parameters of each model run geographically within ArcGIS. This makes locating potential problem areas or “hot spots” much quicker and easier than interpreting the QUAL2Kw output data from a graph alone. The added functionality of QUAL2Kw was achieved through the development of an Excel macro, and the tool in ArcGIS was developed using Python scripting and the ModelBuilder feature in ArcGIS.
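The core of mapping QUAL2Kw output spatially is joining each model record to its reach geometry. A minimal sketch, with hypothetical field names ('reach_id', 'x', 'y') standing in for whatever the actual tool uses:

```python
def join_output_to_reaches(output_rows, reach_coords):
    """Attach QUAL2Kw output records (dicts) to reach coordinates so
    each model result can be displayed spatially; field names are
    hypothetical placeholders for this sketch."""
    return [dict(row, **reach_coords[row["reach_id"]]) for row in output_rows]
```

The joined rows can then be loaded as a point or line layer in ArcGIS for hot-spot inspection.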

Relevance:

10.00%

Publisher:

Abstract:

Object-oriented modelling languages such as EMOF are often used to specify domain-specific meta-models. However, these modelling languages lack the ability to describe behavior or operational semantics. Several approaches have used a subset of Java mixed with OCL as executable meta-languages. In this experience report we show how we use Smalltalk as an executable meta-language in the context of the Moose reengineering environment. We present how we implemented EMOF and its behavioral aspects. Over the last decade we validated this approach through incrementally building a meta-described reengineering environment. Such an approach bridges the gap between a code-oriented view and a meta-model-driven one. It avoids the creation of yet another language and reuses the infrastructure and run-time of the underlying implementation language. It offers a uniform way of letting developers focus on their tasks while at the same time allowing them to meta-describe their domain model. The advantage of our approach is that developers use the same tools and environment they use for their regular tasks. Still, the approach is not Smalltalk-specific but can be applied to any language offering an introspective API, such as Ruby, Python, CLOS, Java and C#.
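Since the abstract names Python among the languages with a suitable introspective API, the idea can be illustrated there: derive a simple meta-description (attributes and operations) of an ordinary domain class at run time, with no separate meta-language. The `Point` class and the shape of the meta-model are invented for this sketch:

```python
class Point:
    """A plain domain class; its meta-description is derived below
    via Python's introspective API."""
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def dist2(self):
        return self.x ** 2 + self.y ** 2

def describe(obj):
    """Build a simple meta-model (class name, attributes, operations)
    for an object using only introspection."""
    attrs = sorted(vars(obj))
    ops = sorted(k for k, v in type(obj).__dict__.items()
                 if callable(v) and not k.startswith("__"))
    return {"class": type(obj).__name__, "attributes": attrs, "operations": ops}
```

This is the "executable meta-language" pattern in miniature: the host language describes and runs the domain model with one set of tools.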

Relevance:

10.00%

Publisher:

Abstract:

We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During the translation, many aspects of the implementation can be independently tuned, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks that demonstrate performance superior to many similar Smalltalk VMs, though still slower than Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy Sprint in Bern last autumn.
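The "one abstract interpreter definition" that PyPy translates is essentially a dispatch loop over bytecodes. A miniature stack-machine loop conveys the shape; this toy is plain Python with an invented instruction set, not RPython or the Squeak bytecode:

```python
def interpret(code):
    """A miniature stack-machine interpreter: each instruction is an
    (opcode, argument) pair; PyPy translates dispatch loops of roughly
    this shape (written in RPython) into full VMs."""
    stack = []
    for op, arg in code:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]
```

In PyPy, GC and threading choices are applied to such a loop at translation time, which is how many concrete VMs fall out of one definition.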

Relevance:

10.00%

Publisher:

Abstract:

This event study investigates the impact of the Japanese nuclear disaster in Fukushima-Daiichi on the daily stock prices of French, German, Japanese, and U.S. nuclear utility and alternative energy firms. Hypotheses regarding the (cumulative) abnormal returns based on a three-factor model are analyzed through joint tests by multivariate regression models and bootstrapping. Our results show significant abnormal returns for Japanese nuclear utility firms during the one-week event window and the subsequent four-week post-event window. Furthermore, while French and German nuclear utility and alternative energy stocks exhibit significant abnormal returns during the event window, we cannot confirm abnormal returns for U.S. stocks.
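The event-study machinery above reduces to: fit a factor model over an estimation window, then take abnormal returns as realized minus predicted returns over the event window. A generic OLS sketch of that step (not the paper's exact multivariate test procedure):

```python
import numpy as np

def abnormal_returns(stock, factors, est, event):
    """Event-study abnormal returns: OLS fit of a linear factor model
    on the estimation window `est` (a slice), then realized minus
    predicted returns on the `event` window; also returns the CAR."""
    X = np.c_[np.ones(len(factors)), factors]
    beta, *_ = np.linalg.lstsq(X[est], stock[est], rcond=None)
    ar = stock[event] - X[event] @ beta
    return ar, ar.sum()          # abnormal and cumulative abnormal returns
```

The significance tests in the abstract would then be run jointly across firms via multivariate regression and bootstrapping.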

Relevance:

10.00%

Publisher:

Abstract:

We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies. The added value of algorithm REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines spectrum estimation.
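The Lomb-Scargle periodogram at the core of REDFITmc2 can be written compactly from its classic definition. This minimal numpy sketch omits the segment averaging, bias correction, and timescale-error simulations that REDFITmc2 adds on top:

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Classic Lomb-Scargle periodogram for unevenly spaced samples
    (t, y) at angular frequencies `omegas`: for each frequency,
    compute the phase shift tau, then the two quadrature power terms."""
    y = y - y.mean()
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power
```

REDFITmc2 then compares such spectra against bootstrap simulations of an AR(1) red-noise alternative that include the modelled timescale errors.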