Abstract:
Objective: The aim of this research is to use finite element analysis (FEA) to quantify the effect of sample shape, and of imperfections induced during sample manufacturing, on the bond strength and failure modes of dental adhesive systems measured by the microtensile test. Using the FEA predictions of the individual parameter effects, the expected variation and spread of microtensile bond strength results for different sample geometries are estimated. Methods: The stress distributions estimated by FEA for three sample shapes, hourglass, stick and dumbbell, are used to predict the strength for different fracture modes. Parameters considered include the adhesive thickness, uneven interfaces between the adhesive and the composite and dentin, misalignment of the loading axis, and flaws such as cracks induced while shaping the samples or bubbles created during application of the adhesive. Microtensile experiments are performed in parallel to measure bond strength and failure modes, and these are compared with the FEA results. Results: The relative bond strength and its standard deviation measured through the microtensile tests for specimens with different geometries confirm the FEA findings. Hourglass-shaped samples show lower tensile bond strength and standard deviation than stick- and dumbbell-shaped samples. ANOVA confirms no significant difference between the dumbbell and stick results, and major differences between these two geometries and the values measured for the hourglass shape. Induced flaws in the adhesive and misalignment of the loading angle have a significant effect on the microtensile bond strength. With a higher-modulus adhesive, the differences between the bond strengths of the three sample geometries increase. Significance: The results clarify the importance of the sample geometry chosen for measuring bond strength, and quantify the effect of imperfections on bond strength for each sample geometry through a systematic and comprehensive study. They explain the large spread of microtensile test results reported by researchers working in different laboratories, and the need for standardization of the test method and sample shape used to evaluate dentin-adhesive bonding systems. © 2007 Academy of Dental Materials.
Abstract:
Aim: We used a combination of modelling and genetic approaches to investigate whether Pinguicula grandiflora and Saxifraga spathularis, two species that exhibit disjunct Lusitanian distributions, may have persisted through the Last Glacial Maximum (LGM, c. 21 ka) in separate northern and southern refugia.
Location: Northern and eastern Spain and south-western Ireland.
Methods: Palaeodistribution modelling with Maxent was used to identify putative refugial areas for both species at the LGM, as well as to estimate their distributions during the Last Interglacial (LIG, c. 120 ka). Phylogeographical analysis of samples from across both species' ranges was carried out using one chloroplast and three nuclear loci for each species.
Results: The palaeodistribution models identified very limited suitable habitat for either species during the LIG, followed by expansion during the LGM. A single, large refugium across northern Spain and southern France was postulated for P. grandiflora. Two suitable regions were identified for S. spathularis: one in northern Spain, corresponding to the eastern part of the species' present-day distribution in Iberia, and the other on the continental shelf off the west coast of Brittany, south of the limit of the British–Irish ice sheet. Phylogeographical analyses indicated extremely reduced levels of genetic diversity in Irish populations of P. grandiflora relative to those in mainland Europe, but comparable levels of diversity between Irish and mainland European populations of S. spathularis, including the occurrence of private haplotypes in both regions.
Main conclusions: Modelling and phylogeographical analyses indicate that P. grandiflora persisted through the LGM in a southern refugium, and achieved its current Irish distribution via northward dispersal after the retreat of the ice sheets. Although the results for S. spathularis are more equivocal, a similar recolonization scenario also seems the most likely explanation for the species' current distribution.
Abstract:
Bradykinin-related peptides (BRPs) are significant components of the defensive skin secretions of many anuran amphibians, and these secretions represent the source of the most diverse spectrum of such peptides so far encountered in nature. Of the many families of bioactive peptides that have been identified from this source, the BRPs uniquely appear to represent homologues of counterparts that have specific distributions and receptor targets within discrete vertebrate taxa, ranging from fishes through mammals. Their broad spectra of actions, including pain and inflammation induction and smooth muscle effects, make these peptides ideal weapons in predator deterrence. Here, we describe a novel 12-mer BRP, RVALPPGFTPLR (RVAL-(L1, T6, L8)-bradykinin), from the skin secretion of the Fujian large-headed frog (Limnonectes fujianensis). The C-terminal 9 residues of this BRP (-LPPGFTPLR) exhibit three amino acid substitutions (L/R at Position 1, T/S at Position 6 and L/F at Position 8) when compared to canonical mammalian bradykinin (BK), but are identical to the kinin sequence present within the cloned kininogen-2 from the Chinese soft-shelled turtle (Pelodiscus sinensis), and differ from that encoded by kininogen-2 of the Tibetan ground tit (Pseudopodoces humilis) at just a single site (F/L at Position 8). These data would imply that the novel BRP is an amphibian defensive agent against predation by sympatric turtles, and also that the primary structure of the avian BK, ornithokinin (RPPGFTPLR), is not invariant within this taxon. Synthetic RVAL-(L1, T6, L8)-bradykinin was found to be an antagonist of BK-induced rat tail artery smooth muscle relaxation acting via the B2 receptor.
Abstract:
We present the latest analysis and results from SEPPCoN (Survey of Ensemble Physical Properties of Cometary Nuclei). This ongoing survey involves studying 100 Jupiter-family comets (JFCs), about 25% of the known population, at both mid-infrared and visible wavelengths to constrain the distributions of sizes, shapes, spins, and albedos of this population. Having earlier reported results from measuring the thermal emission of our sample nuclei [1,2,3,4], we report here progress on the visible-wavelength observations that we have obtained at many ground-based facilities in Chile, Spain, and the United States. To date we have attempted observations of 91% of our sample of 100 JFCs, and at least 64 of those were successfully detected. In most cases the comets were at heliocentric distances between 3.0 and 6.5 AU so as to decrease the odds of a comet having a coma. Of the 64 detected comets, 48 were apparently bare, showing no extended emission. Our datasets are further augmented by archival data and photometry from the NEAT program [5]. An important goal of SEPPCoN is to accumulate a large, comprehensive set of high-quality physical data on cometary nuclei in order to make accurate statistical comparisons with other minor-body populations such as Trojans, Centaurs, and Kuiper-belt objects. Information on the size, shape, spin-rate, albedo, and color distributions is critical for understanding the origins of these bodies and the evolutionary processes affecting them.
Abstract:
We present new results from SEPPCoN, a Survey of Ensemble Physical Properties of Cometary Nuclei. This project is currently surveying 100 Jupiter-family comets (JFCs) to measure the mid-infrared thermal emission and visible reflected sunlight of the nuclei. The scientific goal is to determine the distributions of radius, geometric albedo, thermal inertia, axial ratio, and color among the JFC nuclei. In the past we have presented results from the completed mid-IR observations of our sample [1]; here we present preliminary results from ongoing broadband visible-wavelength observations of nuclei obtained at a variety of ground-based facilities (Mauna Kea, Cerro Pachon, La Silla, La Palma, Apache Point, Table Mtn., and Palomar Mtn.), including contributions from the Near Earth Asteroid Tracking (NEAT) archive. The nuclei were observed at high heliocentric distance (usually over 4 AU), so many comets show little or no contamination from dust coma. While several nuclei have been observed only as snapshots, we have multi-epoch photometry for many of our targets. With these datasets we are building a large database of photometry; such a database is essential for deriving the albedos and shapes of a large number of nuclei and for understanding biases in the survey. Support for this work was provided by NSF and the NASA Planetary Astronomy program. Reference: [1] Fernandez, Y.R., et al. 2007, BAAS 39, 827.
Abstract:
We put constraints on the properties of the progenitors of peculiar calcium-rich transients using the distribution of their locations within their host galaxies. We confirm that this class of transients does not follow the galaxy stellar mass profile and is more likely to be found in remote locations of the apparent hosts. We test the hypothesis that these transients come from low-metallicity progenitors by comparing their spatial distributions with the predictions of self-consistent cosmological simulations that include star formation and chemical enrichment. We find that while metal-poor stars and our transient sample show a consistent preference for large offsets, metallicity alone cannot explain the extreme cases. Invoking a lower age limit on the progenitor improves the match, indicating these events may result from a very old, metal-poor population. We also investigate the radial distribution of globular cluster systems, and show that it too is consistent with that of the calcium-rich transients. Because photometric upper limits on globular clusters exist for some members of the class, a production mechanism related to the dense environment of globular clusters is not favoured for the calcium-rich events. However, the methods developed in this paper may be used in the future to constrain the effects of low metallicity on radially distant core-collapse events or to help establish a correlation with globular clusters for other classes of peculiar explosions.
Abstract:
BACKGROUND: Disability-adjusted life-years (DALYs) are an indicator of mortality, morbidity, and disability. We calculated DALYs for cancer in middle-aged and older adults participating in the Consortium on Health and Ageing Network of Cohorts in Europe and the United States (CHANCES) consortium.
METHODS: A total of 90 199 participants from five European cohorts with 10 455 incident cancers and 4399 deaths were included in this study. DALYs were calculated as the sum of the years of life lost because of premature mortality (YLLs) and the years lost because of disability (YLDs). Population-attributable fractions (PAFs) were also estimated for five cancer risk factors, i.e., smoking, adiposity, physical inactivity, alcohol intake, and type II diabetes.
RESULTS: After a median follow-up of 12 years, the total number of DALYs lost from cancer was 34 474 (382 per 1000 individuals) with a similar distribution by sex. Lung cancer was responsible for the largest number of lost DALYs (22.9%), followed by colorectal (15.3%), prostate (10.2%), and breast cancer (8.7%). Mortality (81.6% of DALYs) predominated over disability. Ever cigarette smoking was the risk factor responsible for the greatest total cancer burden (24.0%, 95% confidence interval [CI] = 22.2% to 26.0%), followed by physical inactivity (4.9%, 95% CI = 0.8% to 8.1%) and adiposity (1.8%, 95% CI = 0.2% to 2.8%).
CONCLUSIONS: DALYs lost from cancer were substantial in this large European sample of middle-aged and older adults. Although the cancer burden is predominantly driven by mortality, some cancers have sizeable consequences for disability. Smoking remained the predominant risk factor for total cancer burden.
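A minimal sketch of the DALY and PAF arithmetic summarized above, with purely hypothetical inputs (the study's life tables, disability weights, prevalences, and relative risks are not reproduced here):

```python
# DALY = YLL + YLD: years of life lost to premature mortality plus
# years lived with disability.
def dalys(deaths, life_exp_at_death, cases, disability_weight, duration_yrs):
    yll = deaths * life_exp_at_death                # premature mortality
    yld = cases * disability_weight * duration_yrs  # disability
    return yll + yld

# Population-attributable fraction via Levin's formula, given the
# exposure prevalence p and the relative risk rr.
def paf(p, rr):
    return p * (rr - 1) / (1 + p * (rr - 1))

# Hypothetical example: the lung-cancer burden of a small cohort.
total = dalys(deaths=300, life_exp_at_death=15.0,
              cases=400, disability_weight=0.45, duration_yrs=2.5)
print(f"DALYs = {total:,.0f}")                 # 4,950
print(f"PAF(smoking) = {paf(0.30, 3.0):.1%}")  # 37.5%, assumed p and rr
```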
Abstract:
The work presented focuses on determining the construction costs of small- and medium-diameter high-density polyethylene (HDPE) pipelines for basic sanitation, based on the methodology described in the book Custos de Construção e Exploração – Volume 9 of the series Gestão de Sistemas de Saneamento Básico, by Lencastre et al. (1994). This methodology, described in the book referred to above, follows construction-management procedures, and to that end unit costs were estimated for several groups of works. According to Lencastre et al. (1994), "these groups concern earthworks, piping, fittings and the corresponding operating devices, paving, and the construction site, with the ancillary works related to the job included in the site component." The costs were obtained by analysing several budgets for sanitation works resulting from recently held public works tenders. To turn this methodology into an effective tool, spreadsheets were built that make it possible to obtain realistic estimates of the execution costs of a given work at stages prior to design development, namely when preparing a system master plan or when carrying out economic and financial feasibility studies, that is, even before any preliminary sizing of the system components exists. Another technique implemented to assess the input data was "Robust Data Analysis", Pestana (1992). This methodology allowed the data to be examined in greater detail before formulating hypotheses for the risk analysis. The main idea is a highly flexible examination of the data, often even before comparing them with a probabilistic model. Thus, for a large data set, this technique made it possible to analyse the dispersion of the values found for the various works referred to above. With the data collected and processed, a risk-analysis methodology was then applied through Monte Carlo simulation. This risk analysis was carried out with a Palisade software tool, @Risk, available at the Department of Civil Engineering. This quantitative risk-analysis technique makes it possible to capture the uncertainty of the input data, represented through the probability distributions provided by the software. To put this methodology into practice, the spreadsheets built following the approach proposed in Lencastre et al. (1994) were used. The preparation and analysis of these estimates may support decisions on the viability of the work or works to be carried out, particularly with regard to economic aspects, allowing a well-founded decision analysis regarding the investments.
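A minimal sketch, in Python, of the Monte Carlo cost-risk step described above. The study used @Risk spreadsheets built on the Lencastre et al. (1994) work groups; the triangular unit-cost distributions, pipeline length, and site-overhead share below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo iterations

# Hypothetical unit-cost distributions (EUR/m), triangular(min, mode, max),
# standing in for the @Risk input distributions fitted to tender data.
earthworks = rng.triangular(20.0, 28.0, 45.0, N)
piping     = rng.triangular(35.0, 42.0, 60.0, N)
paving     = rng.triangular(15.0, 18.0, 30.0, N)

length_m = 1_500.0    # assumed pipeline length
site_share = 0.10     # assumed construction-site overhead share

total = (earthworks + piping + paving) * length_m * (1 + site_share)

# Summaries analogous to a risk-analysis output report.
p5, p50, p95 = np.percentile(total, [5, 50, 95])
print(f"mean = {total.mean():,.0f} EUR")
print(f"P5 = {p5:,.0f}   P50 = {p50:,.0f}   P95 = {p95:,.0f}")
```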
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's degree in Economics from the NOVA – School of Business and Economics.
Abstract:
PURPOSE: The Cancer Vaccine Consortium of the Cancer Research Institute (CVC-CRI) conducted a multicenter HLA-peptide multimer proficiency panel (MPP) with a group of 27 laboratories to assess the performance of the assay. EXPERIMENTAL DESIGN: Participants used commercially available HLA-peptide multimers and a well-characterized common source of peripheral blood mononuclear cells (PBMC). The frequency of CD8+ T cells specific for two HLA-A2-restricted model antigens was measured by flow cytometry. The panel design allowed participants to use their preferred staining reagents and locally established protocols for cell labeling, data acquisition, and analysis. RESULTS: We observed significant differences across laboratories in both the performance characteristics of the assay and the reported frequencies of specific T cells. These results emphasize the need to identify the critical variables responsible for the observed variability in order to harmonize the technique across institutions. CONCLUSIONS: Three key recommendations emerged that would likely reduce assay variability and thus move toward harmonization of this assay: (1) use more than two colors for staining, (2) collect at least 100,000 CD8+ T cells, and (3) use a background control sample to appropriately set the analytical gates. We also provide more insight into the limitations of the assay and identify additional protocol steps that potentially impact the quality of the data generated and should therefore serve as primary targets for systematic analysis in future panels. Finally, we propose initial guidelines for harmonizing assay performance, which include the introduction of standard operating protocols to allow for adequate training of technical staff and auditing of test analysis procedures.
Abstract:
BACKGROUND: Studies on the association between homocysteine levels and depression have shown conflicting results. We examined the association between serum total homocysteine (tHcy) levels and major depressive disorder (MDD) in a large community sample with an extended age range. METHODS: A total of 3392 men and women aged 35-66 years participating in the CoLaus study and its psychiatric arm (PsyCoLaus) were included in the analyses. High tHcy, measured from fasting blood samples, was defined as a concentration ≥15 μmol/L. MDD was assessed using the semi-structured Diagnostic Interview for Genetic Studies. RESULTS: In multivariate analyses, elevated tHcy levels were associated with greater odds of meeting the diagnostic criteria for lifetime MDD among men (OR=1.71; 95% CI, 1.18-2.50). This was particularly the case for remitted MDD. Among women, there was no significant association between tHcy levels and MDD, and the association tended to be in the opposite direction (OR=0.61; 95% CI, 0.34-1.08). CONCLUSIONS: In this large population-based study, elevated tHcy concentrations were associated with lifetime MDD, and particularly with remitted MDD, among men.
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometrics and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations, which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard tests and the new tests suggested. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) unidentified-nuisance-parameter problems under the null hypothesis. The method proposed works in exactly the same way with Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
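To illustrate the Monte Carlo test technique exploited above, a minimal sketch of a Goldfeld-Quandt-type homoskedasticity test with Gaussian errors: because the variance-ratio statistic computed from OLS residuals is invariant to the regression coefficients and error scale, the simulated p-value is provably exact. The design and parameter values are hypothetical, not those of the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def gq_statistic(resid, n1, n2):
    # Goldfeld-Quandt-type ratio of residual variances between the
    # last n2 and first n1 observations (data ordered by the regressor).
    return np.var(resid[-n2:], ddof=1) / np.var(resid[:n1], ddof=1)

def mc_pvalue(y, X, n1, n2, N=99):
    # Exact Monte Carlo p-value: the statistic is pivotal under the
    # null, so its distribution can be simulated with standard normal
    # errors and any coefficient values.
    n = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    s0 = gq_statistic(y - X @ b, n1, n2)
    exceed = 0
    for _ in range(N):
        e = rng.standard_normal(n)                 # errors under H0
        be, *_ = np.linalg.lstsq(X, e, rcond=None)
        if gq_statistic(e - X @ be, n1, n2) >= s0:
            exceed += 1
    return (exceed + 1) / (N + 1)                  # exact by construction

# Hypothetical design: variance increasing with one exogenous variable.
n = 60
x = np.sort(rng.uniform(0.0, 1.0, n))
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n) * (0.5 + 2.0 * x)
print(mc_pvalue(y, X, n1=20, n2=20))
```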
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since the non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], in which the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized with respect to these nuisance parameters to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite-sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit-root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
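A minimal sketch of the maximized MC idea: when the null distribution of the statistic depends on a nuisance parameter, the MC p-value is maximized over that parameter, so rejecting only when the maximized p-value is small controls the level for every admissible parameter value. The AR(1) example, grid, and observed value are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pvalue(s0, simulate_stat, theta, N=99):
    # MC p-value for observed statistic s0, with the null
    # distribution simulated at nuisance-parameter value theta.
    sims = np.array([simulate_stat(theta, rng) for _ in range(N)])
    return (np.sum(sims >= s0) + 1) / (N + 1)

def mmc_pvalue(s0, simulate_stat, theta_grid, N=99):
    # Maximized MC p-value: a coarse grid search stands in for the
    # full optimization over the nuisance parameter.
    return max(mc_pvalue(s0, simulate_stat, th, N) for th in theta_grid)

def stat(rho, rng, n=50):
    # Example statistic: |sample mean| of an AR(1) process whose null
    # distribution depends on the autoregressive coefficient rho.
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return abs(x.mean())

s_obs = 0.35  # assumed observed value of the statistic
print(mmc_pvalue(s_obs, stat, np.linspace(0.0, 0.9, 10)))
```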
Abstract:
We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite-sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
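A minimal sketch of the principal-components estimation of the factor space discussed above, run on hypothetical data with a known factor structure rather than the DSGE-generated series used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, r = 200, 100, 3  # periods, series, assumed number of factors

# Simulate a factor structure: X = F L' + idiosyncratic noise.
F = rng.standard_normal((T, r))
L = rng.standard_normal((n, r))
X = F @ L.T + 0.5 * rng.standard_normal((T, n))

# Principal-components estimate of the factor space: the first r left
# singular vectors of the (centered) data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]  # normalization: F_hat'F_hat / T = I

# The estimates span the true factor space only up to an invertible
# rotation, so check the fit by regressing true factors on estimates.
Fc = F - F.mean(axis=0)
coef, res, *_ = np.linalg.lstsq(F_hat, Fc, rcond=None)
r2 = 1.0 - res.sum() / (Fc ** 2).sum()
print(f"R^2 of true factors on PC estimates: {r2:.3f}")
```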