900 results for computational statistics


Relevance: 20.00%

Abstract:

Public library statistics are taken from the annual survey. The statistics are used at the local, regional, state, and national levels to compare library performance, justify budget requests, track library data over time, assist in planning and evaluation, and provide valuable information for grants and other library programs. The annual survey collects current information from 543 public libraries about public service outlets, holdings, staffing, income, expenditures, circulation, services, and hours open. Furthermore, it helps provide a total picture of libraries on a state and nationwide basis. This report is authorized by law (Iowa Code 256.51 (H)). Each of the 50 states collects public library information according to guidelines established by the Federal State Cooperative System for public library data (FSCS). The information contained in the Iowa Public Library Statistics is based on definitions approved by FSCS. For additional information, contact Gerry Rowland, State Library, gerry.rowland@lib.state.ia.us; 1-800-248-4483.

Relevance: 20.00%

Abstract:

Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to large-scale characterization of proteomes, their interactions and dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.

Relevance: 20.00%

Abstract:

Meta-analysis of genome-wide association studies (GWASs) has led to the discovery of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges in the meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p values of association tests and effect estimates) from participating studies. This simple strategy is possible because of an insight that multivariate statistics can be recovered from single-variant statistics, together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available.
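A rough sketch of the insight the abstract describes: a gene-level statistic (here a weighted burden test) can be recovered from per-variant z-scores plus their correlation matrix. This is a toy illustration, not the authors' software; the z-scores, weights, and correlation matrix below are invented.

```python
import numpy as np
from scipy import stats

def burden_from_summary(z, R, w=None):
    """Gene-level weighted burden test recovered from single-variant
    z-scores `z` and their correlation matrix `R` (which can be
    estimated from one study or a reference panel).
    Returns the burden z-score and a two-sided p-value."""
    z = np.asarray(z, dtype=float)
    w = np.ones_like(z) if w is None else np.asarray(w, dtype=float)
    # Numerator: weighted sum of per-variant scores; denominator: its
    # standard deviation under the null, which only needs R (no raw data).
    zb = w @ z / np.sqrt(w @ R @ w)
    return zb, 2 * stats.norm.sf(abs(zb))

# Toy example: three mildly correlated rare variants.
R = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.2, 1.0]])
zb, p = burden_from_summary([1.2, 0.8, 1.5], R)
```

Ignoring the off-diagonal entries of `R` would understate the variance of the combined score, which is exactly why the correlation matrix must accompany the single-variant summaries.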

Relevance: 20.00%

Abstract:

OBJECTIVES: The reconstruction of the right ventricular outflow tract (RVOT) with valved conduits remains a challenge. The reoperation rate at 5 years can be as high as 25% and depends on age, type of conduit, conduit diameter and principal heart malformation. The aim of this study is to provide a bench model with computational fluid dynamics to analyse the haemodynamics of the RVOT, pulmonary artery, its bifurcation, and left and right pulmonary arteries that in the future may serve as a tool for analysis and prediction of outcome following RVOT reconstruction. METHODS: Pressure, flow and diameter at the RVOT, pulmonary artery, bifurcation of the pulmonary artery, and left and right pulmonary arteries were measured in five normal pigs with a mean weight of 24.6 ± 0.89 kg. Data obtained were used for a 3D computational fluid-dynamics simulation of flow conditions, focusing on the pressure, flow and shear stress profile from the pulmonary trunk to the level of the left and right pulmonary arteries. RESULTS: Three inlet steady flow profiles were obtained at 0.2, 0.29 and 0.36 m/s, corresponding to flow rates of 1.5, 2.0 and 2.5 l/min at the RVOT. The flow velocity profile was constant from the RVOT down to the bifurcation and decreased at the left and right pulmonary arteries. In all three inlet velocity profiles, low shear stress and low-velocity areas were detected along the left wall of the pulmonary artery, at the pulmonary artery bifurcation and at the ostia of both pulmonary arteries. CONCLUSIONS: This real-time computational fluid-dynamics model provides us with a realistic picture of fluid dynamics in the pulmonary tract area. Deep shear stress areas correspond to a turbulent flow profile that is a predictive factor for the development of vessel wall arteriosclerosis. We believe that this bench model may be a useful tool for further evaluation of RVOT pathology following surgical reconstructions.
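The relation between the reported inlet velocities and flow rates is simply Q = v * A for a circular cross-section. A quick sanity check in Python; note that the 12.6 mm trunk diameter used here is an assumption chosen for illustration, not a value reported in the study.

```python
import math

def flow_rate_l_per_min(velocity_m_s, diameter_mm):
    """Volumetric flow Q = v * A through a circular cross-section,
    converted from m^3/s to l/min."""
    area_m2 = math.pi * (diameter_mm / 1000 / 2) ** 2
    return velocity_m_s * area_m2 * 1000 * 60

# Hypothetical ~12.6 mm pulmonary trunk diameter: a 0.2 m/s inlet
# velocity then corresponds to roughly 1.5 l/min.
q = flow_rate_l_per_min(0.2, 12.6)
```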

Relevance: 20.00%

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information such as Bayesian updating, and combinations of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
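The compositional side of this correspondence is easy to make concrete. Below is a minimal sketch of the centered log-ratio (clr) transform, the Aitchison distance, and perturbation (the simplex analogue of addition), using invented compositions; in clr coordinates perturbation becomes ordinary vector addition.

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: maps a composition (positive parts)
    into the real vector space where Aitchison geometry is Euclidean."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx - logx.mean()

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr images."""
    return np.linalg.norm(clr(x) - clr(y))

def perturb(x, y):
    """Perturbation: componentwise product followed by closure
    (renormalization to unit sum)."""
    z = np.asarray(x, dtype=float) * np.asarray(y, dtype=float)
    return z / z.sum()

x = [0.2, 0.3, 0.5]
y = [0.1, 0.4, 0.5]
d = aitchison_distance(x, y)
```

Because clr is invariant to the closure rescaling, `clr(perturb(x, y))` equals `clr(x) + clr(y)` exactly, which is the "addition/perturbation" identification the abstract relies on.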

Relevance: 20.00%

Abstract:

This publication is a historical record of the most requested statistics on vital events and is a source of information that can be used in further analysis.

Relevance: 20.00%

Abstract:

In this paper we propose a subsampling estimator for the distribution of statistics diverging at either known or unknown rates when the underlying time series is strictly stationary and strong mixing. Based on our results we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety First portfolio selection.
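The two risk tools mentioned, the empirical Value at Risk and Hill's estimator of the tail index, can be sketched on toy heavy-tailed data. This illustrates only those standard estimators, not the paper's subsampling procedure; the Pareto sample below is invented.

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimator of the tail index from the k largest losses."""
    x = np.sort(np.asarray(losses, dtype=float))[::-1]
    # Average log-excess of the top-k order statistics over the
    # (k+1)-th largest value; its reciprocal estimates the tail index.
    return 1.0 / np.mean(np.log(x[:k] / x[k]))

def value_at_risk(losses, alpha=0.99):
    """Empirical VaR: the alpha-quantile of the loss distribution."""
    return np.quantile(losses, alpha)

rng = np.random.default_rng(0)
# Heavy-tailed toy losses: Pareto with scale 1 and tail index 3.
losses = rng.pareto(3.0, size=5000) + 1.0
a_hat = hill_estimator(losses, k=200)
var99 = value_at_risk(losses, 0.99)
```

For a Pareto(1, 3) loss, the true 99% quantile is 100**(1/3), about 4.64, so both estimates can be sanity-checked against the generating distribution.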

Relevance: 20.00%

Abstract:

This paper deals with the impact of "early" nineteenth-century globalization (c.1815-1860) on foreign trade in the Southern Cone (SC). Most of the evidence is drawn from bilateral trades between Britain and the SC, at a time when Britain was the main commercial partner of the new republics. The main conclusion drawn is that early globalization had a positive impact on foreign trade in the SC, and this was due to: improvements in the SC's terms of trade during this period; the SC's per capita consumption of textiles (the main manufacture traded on world markets at that time) increased substantially during this period, at a time when clothing was one of the main items of SC household budgets; British merchants brought with them capital, shipping, insurance, and also facilitated the formation of vast global networks, which further promoted the SC's exports to a wider range of outlets.

Relevance: 20.00%

Abstract:

Using a suitable Hull and White type formula we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and it requires a minimal computational cost.
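The calibration target here is the implied volatility surface. As background, here is a minimal sketch of inverting Black-Scholes for implied volatility by bisection; this is the generic inversion, not the Hull and White type second-order approximation the abstract develops, and all inputs are toy values.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection; the call price
    is strictly increasing in sigma, so bisection converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check at a short maturity (T = 1 month).
p = bs_call(100, 100, 1 / 12, 0.01, 0.2)
iv = implied_vol(p, 100, 100, 1 / 12, 0.01)
```

The practical point of a closed-form approximation is to replace this numerical inversion (and the repeated Heston pricing inside a calibration loop) with a formula that is cheap to evaluate.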

Relevance: 20.00%

Abstract:

In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness against small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and a general type of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
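The fourth-order-moment estimate of $\Gamma$ that the paper seeks to avoid can be sketched directly: it is the empirical covariance of the half-vectorized centered outer products of the observations. A toy illustration with invented data follows (the second-order-moment replacement $\Omega$ is not shown):

```python
import numpy as np

def vech(M):
    """Half-vectorization: stack the lower triangle (with diagonal)."""
    i, j = np.tril_indices(M.shape[0])
    return M[i, j]

def gamma_hat(X):
    """Distribution-free estimate of Gamma, the asymptotic covariance
    matrix of the sample second-order moments: the empirical covariance
    of vech((x_i - xbar)(x_i - xbar)'), which involves fourth-order
    sample moments of the data."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    D = np.array([vech(np.outer(row, row)) for row in Xc])
    return np.cov(D, rowvar=False)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))   # n = 500 observations of p = 3 variables
G = gamma_hat(X)                # p(p+1)/2 = 6 distinct second-order moments
```

With p variables there are p(p+1)/2 distinct second-order moments, so `G` is 6 x 6 here; its size and its reliance on fourth-order moments are exactly the computational and small-sample burdens the abstract describes.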