970 results for Zero-inflated Count Data
Abstract:
A numerical renormalization-group study of the conductance through a quantum wire containing noninteracting electrons side-coupled to a quantum dot is reported. The temperature and the dot-energy dependence of the conductance are examined in the light of a recently derived linear mapping between the temperature-dependent conductance and the universal function describing the conductance for the symmetric Anderson model of a quantum wire with an embedded quantum dot. Two conduction paths, one traversing the wire, the other a bypass through the quantum dot, are identified. A gate potential applied to the quantum wire is shown to control the current through the bypass. When the potential favors transport through the wire, the conductance in the Kondo regime rises from nearly zero at low temperatures to nearly ballistic at high temperatures. When it favors the dot, the pattern is reversed: the conductance decays from nearly ballistic to nearly zero. When comparable currents flow through the two channels, the conductance is nearly temperature independent in the Kondo regime, and Fano antiresonances in the fixed-temperature plots of the conductance as a function of the dot-energy signal interference between them. Throughout the Kondo regime and, at low temperatures, even in the mixed-valence regime, the numerical data are in excellent agreement with the universal mapping.
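Schematically, such a linear mapping can be written in the following illustrative form (a sketch only; the precise coefficients and the definition of the universal conductance curve are those of the cited derivation, not reproduced here):

```latex
% Illustrative form of a linear mapping between the temperature-dependent
% conductance and the universal conductance curve G_U(T/T_K) of the symmetric
% Anderson model. G_1 and G_2 denote the limiting conductances set by the two
% conduction paths (wire and dot bypass), and T_K is the Kondo temperature.
\begin{equation}
  G(T) \;\approx\; G_2 + \bigl(G_1 - G_2\bigr)\, G_U\!\left(\frac{T}{T_K}\right)
\end{equation}
```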
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes whose variability can be explained by factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, under-represent the variation in the data and may therefore lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion is modeled by a random effect that depends on some noise factors. The joint posterior density was sampled using Markov chain Monte Carlo algorithms, allowing inference on the model parameters. An application to a data set on apple tissue culture is presented, showing that the Bayesian approach is quite feasible even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
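As an illustration of the kind of model described (a minimal sketch with assumed variable names and a PyMC sampler, not the authors' implementation or data), overdispersed proportion data can be handled with a logit-scale random effect:

```python
# Sketch: Bayesian overdispersed model for proportion data. A unit-level random
# effect b on the logit scale captures extra-binomial (overdispersion) variation;
# the posterior is sampled by MCMC. All data and names below are illustrative.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_units = 40
trials = rng.integers(20, 60, size=n_units)                  # e.g. explants per vessel
x = rng.normal(size=n_units)                                  # a noise covariate
p_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + 0.6 * rng.normal(size=n_units))))
successes = rng.binomial(trials, p_true)

with pm.Model():
    beta0 = pm.Normal("beta0", mu=0, sigma=10)                # weakly informative priors
    beta1 = pm.Normal("beta1", mu=0, sigma=10)
    sigma_b = pm.HalfNormal("sigma_b", sigma=1)               # overdispersion scale
    b = pm.Normal("b", mu=0, sigma=sigma_b, shape=n_units)    # random effect
    p = pm.math.invlogit(beta0 + beta1 * x + b)
    pm.Binomial("y", n=trials, p=p, observed=successes)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["sigma_b"].mean()))               # posterior mean of the overdispersion scale
```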
Abstract:
In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because either the rule is tested on tissue samples that were used in the first instance to select the genes in the rule, or the cross-validation of the rule is not external to the selection process; that is, gene selection is not re-performed in training the rule at each stage of the cross-validation. We describe how in practice the selection bias can be assessed and corrected for, either by performing cross-validation external to the selection process or by applying the bootstrap external to it. We recommend 10-fold rather than leave-one-out cross-validation and, for the bootstrap, the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when the selection bias is corrected for, the cross-validated error is no longer zero for a subset of only a few genes.
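The distinction between internal and external cross-validation can be made concrete with a short sketch (illustrative names and synthetic noise data, not the published data sets); gene selection is repeated inside every training fold only in the second estimate:

```python
# Sketch: selection bias from choosing genes on the full data vs. an "external"
# cross-validation in which gene selection is redone within each training fold.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))        # 60 tissue samples, 2000 genes, pure noise
y = rng.integers(0, 2, size=60)        # two tumour classes

# Internal (biased) estimate: genes selected once on ALL samples, then cross-validated.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(KNeighborsClassifier(1), X_sel, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))

# External (honest) estimate: selection happens inside each training fold.
pipe = Pipeline([("select", SelectKBest(f_classif, k=10)),
                 ("clf", KNeighborsClassifier(1))])
honest = cross_val_score(pipe, X, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))

print("internal CV accuracy:", biased.mean())   # optimistic, well above chance on noise
print("external CV accuracy:", honest.mean())   # close to 0.5 on noise, as it should be
```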
Abstract:
Multi-frequency bioimpedance analysis (MFBIA) was used to determine the impedance, reactance and resistance of 103 lamb carcasses (17.1-34.2 kg) immediately after slaughter and evisceration. Carcasses were halved and frozen, and one half was subsequently homogenized and analysed for water, crude protein and fat content. Three measures of carcass length were obtained. Diagonal length between the electrodes (right-side biceps femoris to left side of neck) explained a greater proportion of the variance in water mass than did estimates of spinal length and was selected for use in the index L²/Z to predict the mass of chemical components in the carcass. Use of impedance (Z) measured at the characteristic frequency (Z_c) instead of at 50 kHz (Z_50) did not improve the power of the model to predict the mass of water, protein or fat in the carcass. While L²/Z_50 explained a significant proportion of the variation in the masses of body water (r² = 0.64), protein (r² = 0.34) and fat (r² = 0.35), its inclusion in multivariate indices offered little or no increase in predictive capacity when hot carcass weight (HCW) and a measure of rib fat depth (GR) were present in the model. Optimized equations were able to account for 65-90% of the variance observed in the weight of chemical components in the carcass. It is concluded that single-frequency impedance data do not provide better prediction of carcass composition than can be obtained from measures of HCW and GR. Indices of intracellular water mass derived from impedance at zero frequency and the characteristic frequency explained a similar proportion of the variance in carcass protein mass as did the index L²/Z_50.
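A minimal sketch of the kind of multivariate index described above (toy numbers and assumed variable names, not the study's measurements) is:

```python
# Sketch: predicting carcass water mass from the impedance index L^2/Z50
# together with hot carcass weight (HCW) and rib fat depth (GR) by ordinary
# least squares. All values below are illustrative placeholders.
import numpy as np

L   = np.array([95.0, 102.0, 88.0, 110.0, 99.0])      # diagonal length, cm
Z50 = np.array([310.0, 290.0, 335.0, 270.0, 300.0])   # impedance at 50 kHz, ohm
HCW = np.array([18.5, 22.0, 16.0, 25.5, 20.0])        # hot carcass weight, kg
GR  = np.array([9.0, 14.0, 7.0, 18.0, 12.0])          # rib fat depth, mm
water = np.array([11.2, 13.1, 9.8, 15.0, 12.0])       # chemically determined water, kg

index = L**2 / Z50                                     # the L^2/Z50 impedance index
X = np.column_stack([np.ones_like(index), index, HCW, GR])
coef, *_ = np.linalg.lstsq(X, water, rcond=None)

residuals = water - X @ coef
r2 = 1 - residuals.var() / water.var()
print("coefficients:", coef, "R^2:", round(r2, 3))
```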
Abstract:
Background/Aims: To present a protocol of immediate surgical repair of myelomeningocele (MMC) after birth ("time zero") and to compare this surgical outcome with that of surgery performed after the newborn's admission to the nursery. Methods: Data from the medical files of 31 patients with MMC who underwent surgery after birth and after admission to the nursery (group I) were compared with a group of 23 patients with MMC admitted and prospectively followed, who underwent surgery immediately after birth, "at time zero" (group II). Results: Preoperative rupture of the MMC occurred more frequently in group I (67 vs. 39%, p < 0.05). The need for a ventriculoperitoneal shunt was 84% in group I and 65% in group II, and 4 of the shunts were placed during the same anesthetic session as the immediate MMC repair, with no statistically significant difference. Group I had a higher incidence of small dehiscences than group II (29 vs. 13%, p < 0.05); however, there was no statistically significant difference regarding infections. After 1 year of follow-up, 61% of group I showed neurodevelopmental delay, whereas only 35% of group II did. Conclusions: Surgical intervention carried out immediately after birth showed benefits in terms of a lower incidence of preoperative rupture of the MMC, fewer postoperative dehiscences and a lower incidence of neurodevelopmental delay 1 year after birth. Copyright (C) 2009 S. Karger AG, Basel
Abstract:
In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method offers considerably improved performance compared with two previously proposed approaches and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and thereby reducing product development costs. The simulation results have been compared with experimental tests, evidencing the advantage of the proposed method. The proposed approach of considering the local thickness distribution in FEM crash simulations therefore has high potential in the product development process of complex and highly demanding injection molded and cast parts and is currently being used by Ford Motor Company.
Abstract:
The financial literature and the financial industry often use zero coupon yield curves as input for testing hypotheses, pricing assets or managing risk, and they assume the provided data to be accurate. We analyse the implications of the methodology and of the sample selection criteria used to estimate the zero coupon bond yield term structure for the resulting volatility of spot rates with different maturities. We obtain the volatility term structure using historical volatilities and EGARCH volatilities. As input for these volatilities we consider our own spot-rate estimation from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero coupon bond yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe differences that are relevant in economic terms when the volatilities are used to price derivatives.
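For illustration (synthetic series and assumed package usage, not the GovPX, H15 or Bloomberg data), the two volatility measures mentioned can be computed along these lines:

```python
# Sketch: historical (rolling) volatility and EGARCH(1,1) conditional volatility
# of daily changes in a spot rate, as used to build a volatility term structure.
# The spot-rate series below is simulated; the `arch` package is assumed available.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
spot = pd.Series(4.0 + np.cumsum(rng.normal(0, 0.02, size=500)))    # toy 5-year spot rate, %
changes = spot.diff().dropna() * 100                                 # daily changes, basis points

hist_vol = changes.rolling(60).std() * np.sqrt(252)                  # annualised rolling volatility

egarch = arch_model(changes, mean="Zero", vol="EGARCH", p=1, o=1, q=1)
res = egarch.fit(disp="off")
egarch_vol = np.asarray(res.conditional_volatility) * np.sqrt(252)   # annualised conditional vol

print(float(hist_vol.iloc[-1]), float(egarch_vol[-1]))
```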
Abstract:
This work arose within the scope of the Master's dissertation in Sustainable Energies at the Instituto Superior de Engenharia do Porto, and was supervised by advisors from the Laboratório Ecotermolab of the Instituto de Soldadura e Qualidade and from the Instituto Superior de Engenharia do Porto, so as to keep the work on track with the proposed objectives. The thesis addressed the impact of fresh (outdoor) air on the air conditioning of buildings, supporting the analysis with dynamic simulation of the building under real conditions in a suitable program accredited under ASHRAE Standard 140-2004. The work sought to show the impact of fresh air on the air conditioning of a building under the combination of several factors, such as occupancy, activities and usage patterns (schedules), lighting and equipment, and also to study the possibility of operating the system in "Free-Cooling" mode. The central question was to determine to what extent a space can be conditioned solely by the introduction of fresh air in "Free-Cooling" mode, through an all-air Variable Air Volume (VAV) system, without the support of any other auxiliary air-conditioning system located in the space, while respecting the minimum airflow rates imposed by the RSECE (Decreto-Lei 79/2006). In a first phase, all the data needed to determine the thermal loads of the building were identified, taking into account every factor contributing to the thermal load, such as heat transmission and its components, lighting, ventilation, equipment use and occupancy levels. Subsequently, several dynamic simulations were carried out with the EnergyPlus program integrated in DesignBuilder, combining variables ranging from the envelopes and the architecture itself to occupancy profiles, equipment and air change rates in the different spaces of the building under study. Several models were obtained in order to support an in-depth comparative study of the impact of fresh air on the air conditioning of the building and of the system's ability to operate in "Free-Cooling" mode. The analysis and comparison of the results led to the following conclusions. For very high cooling demands, daytime "Free-Cooling" proved to be of little or almost no effectiveness for the type of climate found in Portugal, because the temperature differential between outdoors and indoors is not large enough to remove the loads and bring the indoor temperature down to the comfort range. Night-time or after-hours "Free-Cooling", in contrast, proved much more efficient: very interesting performance was obtained, especially during the heating season and mid-season, given that cooling needs exist even during the heating season. Regarding night ventilation, that is, during the early-morning hours and while the building is closed, it was concluded that it helps to remove the heat accumulated during the day in the building's construction materials, heat that is later released back into the spaces.
Of the two variables considered, namely an increase in the supplied fresh-air flow rate and the temperature differential between the outdoor and indoor air, the latter was shown to contribute more to heat removal. Finally, it is clear that, in general, an air-conditioning system will always be indispensable because of high internal loads and indoor temperature and humidity requirements; nevertheless, "Free-Cooling" is recommended as a viable option to incorporate in the air-conditioning solution, promoting natural cooling, reducing energy consumption and actively introducing fresh air.
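The point about the temperature differential dominating the heat-removal capacity of fresh air can be illustrated with a back-of-the-envelope sketch (values are illustrative, not taken from the simulations):

```python
# Sketch: sensible cooling delivered by outdoor air, Q = rho * V * cp * (Tin - Tout).
# Doubling the airflow doubles Q, but a small outdoor-indoor differential caps the
# benefit regardless of airflow, which is why daytime free-cooling underperforms.
def free_cooling_power_kw(airflow_m3_h, t_indoor_c, t_outdoor_c,
                          rho=1.2, cp=1.006):
    """Sensible heat removal in kW (rho in kg/m3, cp in kJ/(kg.K))."""
    airflow_m3_s = airflow_m3_h / 3600.0
    return rho * airflow_m3_s * cp * (t_indoor_c - t_outdoor_c)

# Daytime: 2 K differential, large airflow, little heat removed (~3.4 kW).
print(free_cooling_power_kw(5000, t_indoor_c=25.0, t_outdoor_c=23.0))
# Night-time: 9 K differential, same airflow, several times more (~15 kW).
print(free_cooling_power_kw(5000, t_indoor_c=25.0, t_outdoor_c=16.0))
```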
Abstract:
Myocardial Perfusion Gated Single Photon Emission Tomography (Gated-SPET) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. The purpose of this study is to evaluate the influence of the total number of counts acquired from the myocardium on the calculation of myocardial functional parameters using routine software procedures. Methods: Gated-SPET studies were simulated using the Monte Carlo GATE package and the NURBS phantom. Simulated data were reconstructed and processed using the commercial software package Quantitative Gated-SPECT. The Bland-Altman and Mann-Whitney-Wilcoxon tests were used to analyze the influence of the total number of counts on the calculation of LV myocardial functional parameters. Results: In studies simulated with 3 MBq in the myocardium there were significant differences in the functional parameters left ventricular ejection fraction (LVEF), end-systolic volume (ESV), motility and thickness between studies acquired with 15 s/projection and 30 s/projection. Simulations with 4.2 MBq showed significant differences in LVEF, end-diastolic volume (EDV) and thickness, while in the simulations with 5.4 MBq and 8.4 MBq the differences were statistically significant for motility and thickness. Conclusion: The total number of counts per simulation does not significantly interfere with the determination of Gated-SPET functional parameters for the administered average activity of 450 MBq, corresponding to 5.4 MBq in the myocardium.
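The statistical comparison described can be sketched as follows (placeholder numbers, not the simulated Gated-SPET results):

```python
# Sketch: Mann-Whitney-Wilcoxon test comparing LVEF estimates obtained from
# 15 s/projection and 30 s/projection acquisitions. Values are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

lvef_15s = np.array([52.0, 55.0, 49.0, 58.0, 51.0, 54.0])
lvef_30s = np.array([57.0, 60.0, 55.0, 62.0, 58.0, 59.0])

stat, p_value = mannwhitneyu(lvef_15s, lvef_30s, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")   # p < 0.05 would flag a count-dependent difference
```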
Abstract:
This manuscript analyses the data generated by a Zero Length Column (ZLC) diffusion experimental set-up for 1,3-diisopropylbenzene in a 100% alumina matrix with variable particle size. The time evolution of the phenomena resembles that of fractional-order systems, namely a fast initial transient followed by long and slow tails. The experimental measurements are best fitted by the Harris model, revealing a power-law behavior.
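The slow-tail behaviour referred to above can be illustrated with a simple power-law fit (synthetic data; this is only a sketch of the tail analysis, not the Harris-model fit itself):

```python
# Sketch: fitting a power-law tail c(t) ~ A * t**(-alpha) to the long-time part
# of a ZLC desorption curve, the slow-tail signature associated here with
# fractional-order dynamics. Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(1.0, 200.0, 400)                            # time after the fast transient
c = 0.8 * t**-0.6 * (1 + 0.02 * rng.normal(size=t.size))    # noisy power-law tail

def power_law(t, amplitude, alpha):
    return amplitude * t**(-alpha)

(amplitude, alpha), _ = curve_fit(power_law, t, c, p0=(1.0, 0.5))
print(f"fitted tail exponent alpha = {alpha:.2f}")           # ~0.6 for this synthetic example
```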
Abstract:
In an attempt at explaining the observed neutrino mass-squared differences and leptonic mixing, lepton mass matrices with zero textures have been widely studied. In the weak basis where the charged lepton mass matrix is diagonal, various neutrino mass matrices with two zeros have been shown to be consistent with the current experimental data. Using the canonical and Smith normal form methods, we construct the minimal Abelian symmetry realizations of these phenomenological two-zero neutrino textures. The implementation of these symmetries in the context of the seesaw mechanism for Majorana neutrino masses is also discussed. (C) 2014 The Authors. Published by Elsevier B.V.
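For concreteness, one of the two-zero patterns in question (in the basis where the charged-lepton mass matrix is diagonal) has the form below; the pattern shown, with vanishing (1,1) and (1,2) entries, is the one usually labelled A1 in the literature:

```latex
% Example of a two-zero Majorana neutrino mass-matrix texture; crosses denote
% a priori nonvanishing entries, and the matrix is symmetric.
\begin{equation}
  M_\nu \;\sim\;
  \begin{pmatrix}
    0 & 0 & \times \\
    0 & \times & \times \\
    \times & \times & \times
  \end{pmatrix},
  \qquad M_\nu = M_\nu^{\mathsf{T}}
\end{equation}
```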
Abstract:
Several popular Ansätze of lepton mass matrices that contain texture zeros are confronted with current neutrino observational data. We perform a systematic χ² analysis in a wide class of schemes, considering arbitrary Hermitian charged-lepton mass matrices and symmetric mass matrices for Majorana neutrinos or Hermitian mass matrices for Dirac neutrinos. Our study reveals that several patterns are still consistent with all the observations at the 68.27% confidence level, while some others are disfavored or excluded by the experimental data. The well-known Frampton-Glashow-Marfatia two-zero textures, hybrid textures, and parallel structures (among others) are considered.
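The χ² function used in such fits has the generic form below (a schematic expression; the specific observables and uncertainties are those of the analysis):

```latex
% Generic chi-squared: the observables O_i (mass-squared differences, mixing
% angles, ...) predicted by a texture with parameters p are compared with their
% measured central values \bar{O}_i and uncertainties \sigma_i.
\begin{equation}
  \chi^2(p) \;=\; \sum_i
  \frac{\bigl[\,O_i(p) - \bar{O}_i\,\bigr]^2}{\sigma_i^{2}}
\end{equation}
```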
Abstract:
A reduced schedule for anti-rabies post-exposure immunization with newborn-mouse nervous tissue vaccine (Fuenzalida & Palacios) was re-evaluated in a group of 30 non-exposed volunteers. The vaccine was administered by intramuscular injection in the deltoid area on days zero, 2, 4, 16 and 27. Antibody levels were determined by a simplified serum neutralization microtest on days zero, 16 and 37. On days 16 and 37 the antibody levels of the whole group were >0.5 IU/ml and >1.0 IU/ml, respectively. Cell-mediated immunity was detected early (on day 4) by the delayed-type hypersensitivity skin test. Our results show that this reduced schedule elicited an early and effective humoral and cellular immune response. However, further studies with larger groups of vaccinees are needed to reach a definitive conclusion.
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management
Abstract:
Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values are obtained from the distributed computation of functions like count, sum and average. Application examples include determining the network size, total storage capacity, average load, majorities and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, message and time complexity. Due to the considerable number and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques are more appropriate for specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
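As a concrete illustration of the family of techniques surveyed (a generic averaging-gossip sketch under assumed full connectivity, not a specific algorithm from the survey), nodes can estimate a network-wide average by repeatedly averaging with random partners:

```python
# Sketch: pairwise averaging gossip. Each round, two nodes average their values;
# the global sum is conserved, so every node converges to the network-wide
# average, from which aggregates such as sums or counts can be derived.
import random

random.seed(0)
n = 50
values = [random.uniform(0.0, 100.0) for _ in range(n)]   # local values, e.g. node load
true_average = sum(values) / n

for _ in range(2000):                        # gossip rounds
    i, j = random.sample(range(n), 2)        # assume any pair of nodes may interact
    values[i] = values[j] = (values[i] + values[j]) / 2.0

print(round(true_average, 3), round(values[0], 3))   # node 0 now holds ~ the true average
```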