Abstract:
In this paper, we present approximate distributions for the ratio of cumulative wavelet periodograms of stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution to test whether two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application to financial microdata illustrates the usefulness of the test. We conclude by advocating the use of these approximate distributions instead of those obtained through randomizations, mainly in the case of irregular time series. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Background: This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly active antiretroviral therapy (HAART). Methods: Cox proportional hazards modeling was used to assess the adjusted risk for World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART for >= 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies per milliliter, with model fit evaluated on the basis of the minimum Akaike information criterion (AIC) value, a standard model fit statistic. Results: Models were based on 67 subjects with WHO events out of 550 subjects in the study. The VL cut-points of >2600 and >32,000 copies per milliliter corresponded to the lowest AIC values and were associated with the highest hazard ratios (2.0, P = 0.015; and 2.1, P = 0.0058, respectively) for WHO events. Conclusions: In HIV-infected Latin American children on stable HAART, 2 distinct VL thresholds (>2600 and >32,000 copies/mL) were identified for predicting children at significantly increased risk for HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors.
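The AIC-driven cut-point search described above can be sketched as follows. This is a toy illustration using a simple two-group exponential survival model fitted to synthetic data, not the study's Cox model with time-varying covariates; all variable names and numbers here are hypothetical:

```python
import numpy as np

def exp_loglik(time, event, rate):
    # log-likelihood of an exponential survival model with right censoring
    return np.sum(event * np.log(rate) - rate * time)

def aic_for_cutpoint(time, event, vl, cut):
    """Dichotomize viral load at `cut`, fit one exponential hazard per group,
    and return the AIC (2k - 2*loglik, with k = 2 rate parameters)."""
    hi = vl > cut
    ll = 0.0
    for grp in (hi, ~hi):
        d, t = event[grp].sum(), time[grp].sum()
        if d == 0 or t == 0:                 # degenerate split carries no information
            return np.inf
        ll += exp_loglik(time[grp], event[grp], d / t)  # MLE rate = events / exposure
    return 2 * 2 - 2 * ll

rng = np.random.default_rng(0)
n = 400
vl = rng.uniform(400, 50_000, n)                 # copies/mL
true_rate = np.where(vl > 2_600, 0.30, 0.10)     # higher hazard above the "true" threshold
raw = rng.exponential(1 / true_rate)
event = (raw < 5.0).astype(float)                # administrative censoring at 5 years
time = np.minimum(raw, 5.0)

cutpoints = [1_000, 2_600, 10_000, 32_000]
aics = {c: aic_for_cutpoint(time, event, vl, c) for c in cutpoints}
best = min(aics, key=aics.get)
print(best)                                      # cut-point with minimum AIC
```

The principle is the same as in the study: refit the model under each candidate threshold and keep the one with the smallest AIC.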
Abstract:
The asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^(-1/2), n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property. The power performance of all four criteria in the one-parameter exponential family is examined.
Abstract:
A common interest in gene expression data analysis is to identify, from a large pool of candidate genes, the genes that present significant changes in expression level between a treatment and a control biological condition. Usually this is done with a test statistic and a cutoff value that separate the genes into differentially and non-differentially expressed groups. In this paper, we propose a Bayesian approach that identifies differentially expressed genes by sequentially calculating credibility intervals from predictive densities; these densities are constructed using the sampled mean treatment effect of all genes in the study, excluding the treatment effects of genes previously identified as showing statistical evidence of difference. We compare our Bayesian approach with the standard ones based on the t-test and modified t-tests via a simulation study, using the small sample sizes that are common in gene expression data analysis. The results provide evidence that the proposed approach performs better than the standard ones, especially in cases with mean differences and with increases in treatment variance relative to the control variance. We also apply the methodologies to a well-known publicly available data set on the Escherichia coli bacterium.
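The sequential idea above, build an interval from the genes not yet declared differentially expressed, flag genes falling outside it, and repeat, can be sketched as follows. This is a deliberately simplified normal-approximation stand-in for the paper's predictive-density construction; the 99% level, the synthetic data, and the function name are all illustrative assumptions:

```python
import numpy as np

def sequential_flagging(diff, z=2.576, max_iter=20):
    """Iteratively flag genes whose mean treatment-control difference falls
    outside an interval built from the genes not yet flagged.
    (Simplified normal approximation, not the paper's predictive density.)"""
    flagged = np.zeros(diff.size, dtype=bool)
    for _ in range(max_iter):
        ref = diff[~flagged]                  # genes without evidence so far
        lo = ref.mean() - z * ref.std()
        hi = ref.mean() + z * ref.std()
        new = (~flagged) & ((diff < lo) | (diff > hi))
        if not new.any():                     # interval stabilized
            break
        flagged |= new
    return flagged

rng = np.random.default_rng(1)
diff = rng.normal(0.0, 1.0, 1000)             # 990 null genes ...
diff[:10] += 8.0                              # ... and 10 truly changed genes
flagged = sequential_flagging(diff)
print(int(flagged.sum()))
```

Excluding already-flagged genes at each pass keeps strongly expressed genes from inflating the reference interval, which is the motivation for the sequential construction in the paper.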
Abstract:
Background: The biorhythm of serum uric acid was evaluated in a large sample from a clinical laboratory database by spectral analysis, and the influence of gender and age on uric acid variability was assessed. Methods: Serum uric acid values were extracted from a large clinical laboratory database covering May 2000 to August 2006. Outlier values were excluded from the analysis and the remaining data (n = 73,925) were grouped by gender and age range. Rhythm components were obtained by the Lomb-Scargle method and Cosinor analysis. Results: Serum uric acid was higher in men than in women older than 13 years (p<0.05). Compared with the 0-12 year group, uric acid increased in men but not in women older than 13 years (p<0.05). Circannual (12-month) and transyear (17-month) rhythm components were detected, but they were significant only in adult individuals (>26 years, p<0.05). Cosinor analysis showed that midline estimating statistic of rhythm (MESOR) values were higher in men (range: 353-368 µmol/L) than in women (range: 240-278 µmol/L; p<0.05), independent of age and rhythm component. The extent of predictable change within a cycle, approximated by the double amplitude, represented up to 20% of the corresponding MESOR. Conclusions: The serum uric acid biorhythm depends on gender and age, and it may have a relevant influence on the preanalytical variability of clinical laboratory results.
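Cosinor analysis fits y(t) = MESOR + A·cos(2πt/τ + φ) by linearizing the cosine into sine/cosine regressors and solving an ordinary least-squares problem. A minimal sketch with synthetic, noise-free data; the MESOR and amplitude values below are invented for illustration:

```python
import numpy as np

def cosinor(t, y, period):
    """Single-component cosinor: y ~ M + b*cos(wt) + g*sin(wt).
    Returns (MESOR, amplitude, acrophase in radians)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    M, b, g = np.linalg.lstsq(X, y, rcond=None)[0]
    # A*cos(wt + phi) = A*cos(phi)*cos(wt) - A*sin(phi)*sin(wt)
    return M, np.hypot(b, g), np.arctan2(-g, b)

t = np.arange(0.0, 72.0, 0.5)                        # six years, sampled in months
y = 360 + 36 * np.cos(2 * np.pi * t / 12 + 1.0)      # 12-month rhythm, MESOR 360
mesor, amp, phi = cosinor(t, y, period=12)
print(round(mesor, 1), round(amp, 1))                # → 360.0 36.0
```

Here the double amplitude 2A = 72 is the "extent of predictable change within a cycle" mentioned above, i.e. 20% of the MESOR of 360.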
Abstract:
The self-consistency of a thermodynamical theory for hadronic systems based on the non-extensive statistics is investigated. We show that it is possible to obtain a self-consistent theory according to the asymptotic bootstrap principle if the mass spectrum and the energy density increase q-exponentially. A direct consequence is the existence of a limiting effective temperature for the hadronic system. We show that this result is in agreement with experiments. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A chaotic encryption algorithm is proposed based on "Life-like" cellular automata (CA), which act as a pseudo-random number generator (PRNG). The paper's main focus is the application of chaos theory to cryptography, so the CA were examined for this "chaos" property. Accordingly, the manuscript concentrates on tests such as the Lyapunov exponent, entropy and Hamming distance to measure chaos in CA, as well as statistical analyses such as the DIEHARD and ENT suites. Our results achieved higher randomness quality than other ciphers in the literature. These results reinforce the supposition of a strong relationship between chaos and randomness quality. Thus, the "chaos" property of CA is a good reason to employ them in cryptography, in addition to their simplicity, low implementation cost and respectable encryption power. (C) 2012 Elsevier Ltd. All rights reserved.
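Two of the cited measures are easy to illustrate. The sketch below computes the Shannon entropy of a bit stream and the normalized Hamming distance between a stream and a one-bit perturbation of it (an avalanche-style comparison); NumPy's generator stands in for the CA-based PRNG and is purely illustrative:

```python
import numpy as np

def shannon_entropy(bits):
    """Shannon entropy in bits/symbol of a binary sequence (1.0 = ideal)."""
    p1 = bits.mean()
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return float(-(p0 * np.log2(p0) + p1 * np.log2(p1)))

def hamming(a, b):
    """Normalized Hamming distance between equal-length bit arrays."""
    return float(np.mean(a != b))

rng = np.random.default_rng(42)               # stand-in for the CA-based PRNG
stream = rng.integers(0, 2, 10_000)
perturbed = stream.copy()
perturbed[0] ^= 1                             # flip a single bit
print(round(shannon_entropy(stream), 3))      # close to 1.0 for a good generator
print(hamming(stream, perturbed))             # → 0.0001
```

For an actual cipher one would compare ciphertexts produced from plaintexts differing in one bit, where a Hamming distance near 0.5 indicates good diffusion.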
Abstract:
Abstract Background To establish the correlation between quantitative analysis based on B-mode ultrasound images of vulnerable carotid plaque and histological examination of the surgically removed plaque, on the basis of videodensitometric digital texture characterization. Methods Twenty-five patients (18 males, mean age 67 ± 6.9 years) admitted for carotid endarterectomy for extracranial high-grade internal carotid artery stenosis (≥ 70% luminal narrowing) underwent quantitative ultrasonic tissue characterization of the carotid plaque before surgery. A computer software package (Carotid Plaque Analysis Software) was developed to perform the videodensitometric analysis. The patients were divided into 2 groups according to symptomatology (group I, 15 symptomatic patients; group II, 10 asymptomatic patients). Tissue specimens were analysed for lipid, fibromuscular tissue and calcium. Results The first-order statistic parameter mean gray level was able to distinguish groups I and II (p = 0.04). The second-order parameter energy was also able to distinguish the groups (p = 0.02). Histological correlation showed a tendency of the mean gray level to have progressively greater values from specimens with <50% fibrosis to those with >75% fibrosis. Conclusion Videodensitometric computer analysis of scan images may be used to identify vulnerable and potentially unstable lipid-rich carotid plaques, which are less echogenic than stable or asymptomatic, more densely fibrotic plaques.
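The two texture features named above have standard definitions: the first-order "mean gray level" is the average pixel intensity, and the second-order "energy" is the sum of squared entries of a normalized gray-level co-occurrence matrix (GLCM). A minimal sketch on synthetic patches; the offset, quantization and patch sizes are illustrative choices, not the Carotid Plaque Analysis Software's settings:

```python
import numpy as np

def glcm_energy(img, dx=1, dy=0, levels=8):
    """Energy of a gray-level co-occurrence matrix for one pixel offset."""
    q = (img * levels // (img.max() + 1)).astype(int)   # quantize gray levels
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)          # count co-occurring pairs
    glcm /= glcm.sum()
    return float((glcm ** 2).sum())

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64))    # heterogeneous, lipid-rich-like patch
flat = np.full((64, 64), 128)             # homogeneous, densely-fibrotic-like patch
print(flat.mean())                        # first-order mean gray level → 128.0
print(glcm_energy(noisy) < glcm_energy(flat))   # homogeneous texture has higher energy → True
```

Higher energy indicates a more uniform texture, which is consistent with the abstract's finding that these features separate echolucent from densely fibrotic plaques.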
Abstract:
Background UCP2 (uncoupling protein 2) plays an important role in cardiovascular diseases, and recent studies have suggested that the A55V polymorphism can cause UCP2 dysfunction. The main aim was to investigate the association of the A55V polymorphism with cardiovascular events in a group of 611 patients enrolled in the Medical, Angioplasty or Surgery Study II (MASS II), a randomized trial comparing treatments for patients with coronary artery disease and preserved left ventricular function. Methods The participants of the MASS II were genotyped for the A55V polymorphism using an allele-specific PCR assay. Survival curves were calculated with the Kaplan–Meier method and evaluated with the log-rank statistic. The relationship between baseline variables and the composite end-point of cardiac death, acute myocardial infarction (AMI), refractory angina requiring revascularization, and cerebrovascular accident was assessed using a Cox proportional hazards survival model. Results There were no significant differences in baseline variables according to genotype. After 2 years of follow-up, dysglycemic patients harboring the VV genotype had a higher occurrence of AMI (p=0.026), death+AMI (p=0.033), new revascularization interventions (p=0.009) and combined events (p=0.037) compared with patients carrying the other genotypes. This association was not evident in normoglycemic patients. Conclusions These findings support the hypothesis that the A55V polymorphism is associated with UCP2 functional alterations that increase the risk of cardiovascular events in patients with previous coronary artery disease and dysglycemia.
Abstract:
In this paper, a procedure for the on-line process control of variables is proposed. This procedure consists of inspecting the m-th item of every m items produced and deciding, at each inspection, whether the process is out of control. Two sets of limits, warning (µ0 ± W) and control (µ0 ± C), are used. If the value of the monitored statistic falls beyond the control limits, or if a sequence of h observations falls between the warning limits and the control limits, production is stopped for adjustment; otherwise, production goes on. The properties of an ergodic Markov chain are used to obtain an expression for the average cost per item. The parameters (the sampling interval m; the warning and control limit widths W and C, with W < C; and the sequence length h) are optimized by minimizing the cost function. A numerical example illustrates the proposed procedure.
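The ergodic-Markov-chain machinery behind such a cost expression can be illustrated directly: find the stationary distribution π from πP = π, then average a per-state cost against it. The three-state chain and the cost vector below are invented for illustration and are not the paper's chain:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic chain: solve pi @ P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # balance equations + normalization
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# states: 0 = in control, 1 = in the warning zone, 2 = stopped for adjustment
P = np.array([[0.90, 0.08, 0.02],
              [0.60, 0.30, 0.10],
              [1.00, 0.00, 0.00]])      # adjustment returns the process to control
cost = np.array([1.0, 1.5, 25.0])       # hypothetical cost incurred in each state
pi = stationary(P)
print(round(float(pi @ cost), 3))       # long-run average cost per inspected item
```

In the paper this average is then minimized over the design parameters (m, W, C, h); each parameter choice induces a different transition matrix and cost vector.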
Abstract:
We deal with homogeneous isotropic turbulence and use the two-point velocity correlation tensor field (parametrized by the time variable t) of the velocity fluctuations to equip an affine space K3 of the correlation vectors with a family of metrics. It was shown in Grebenev and Oberlack (J Nonlinear Math Phys 18:109–120, 2011) that a special form of this tensor field generates the so-called semi-reducible pseudo-Riemannian metrics ds2(t) in K3. This construction provides the template for embedding the couple (K3, ds2(t)) into the Euclidean space R3 with the standard metric. This allows us to introduce the length function between the fluid particles; the accompanying problem, which is the central interest of the paper, is to find out which transformations leave the length statistic invariant. We also classify the geometry of the particle configuration, at least locally, for a positive Gaussian curvature of this configuration, and comment on the case of a negative Gaussian curvature.
Abstract:
Although the scientific literature has demonstrated the relevance of oral hygiene with chlorhexidine in preventing ventilation-associated pneumonia, there is wide variation in the concentrations, frequency and techniques used when applying the antiseptic. The aim of this research was to assess the best chlorhexidine concentration for oral hygiene to prevent ventilation-associated pneumonia. A systematic review followed by four meta-analyses using chlorhexidine concentration as the criterion was carried out. Articles in English, Spanish or Portuguese indexed in the Cochrane, Embase, Lilacs, PubMed/Medline and Ovid electronic databases were selected. The search was carried out from May to June 2011. The primary outcome measure of interest was ventilation-associated pneumonia. Ten primary studies were divided into four groups (G1-4) based on the chlorhexidine concentration criterion. G1 (5 primary studies, chlorhexidine 0.12%) showed homogeneity among studies, and the use of chlorhexidine represented a protective factor. G2 (3 primary studies, chlorhexidine 0.20%) showed heterogeneity among studies, and chlorhexidine did not represent a protective factor. G3 (2 primary studies, chlorhexidine 2.00%) showed homogeneity among studies, and the use of chlorhexidine was significant. G4 (10 primary studies with different chlorhexidine concentrations) showed homogeneity among studies, and the common relative risk was significant. Statistical analyses showed a protective effect of oral hygiene with chlorhexidine in preventing ventilation-associated pneumonia. However, it was not possible to identify a standard for establishing the optimal chlorhexidine concentration.
Abstract:
Introduction: several factors are associated with cold ischemia (CI) and warm reperfusion (WR) injury in liver transplantation (LT), such as neutrophil and lympho-plasmacytic infiltrates, the release of inflammatory cytokines, and apoptosis. However, little is known about the role of CI/WR in steatotic grafts. Objective: to evaluate the role of CI/WR injury in human LT by comparing steatotic and non-steatotic grafts. Methods: between May 2002 and March 2007, 84 post-reperfusion biopsies (2 h after WR) and 18 pre-reperfusion biopsies were performed, for a total of 84 LTs in 82 patients. The biopsies were divided into 5 groups according to the degree of macro- and microsteatosis: GEL, mild (<30%); GEM, moderate (30-59%); GEG, severe (≥60%); GEA, no steatosis; GPR, pre-reperfusion. In the 102 biopsies the following were analysed: percentages of macro- and microsteatosis, degree of neutrophil exudate (0-3) and of portal lympho-plasmacytic infiltrate (0-3), apoptosis indices (TUNEL and caspase-3 methods), and ICAM-1 expression. Macrovesicular (n=49) and microvesicular (n=74) steatosis were analysed individually and classified as mild (G1), moderate (G2), severe (G3) or absent (G4). Results: the apoptosis index (TUNEL) was: GEL=0.262±0.111, GEM=0.278±0.113, GEG=0.244±0.117, GEA=0.275±0.094 and GPR=0.181±0.123, p=0.07. In the macrosteatosis group the apoptosis index (TUNEL) was: G1=0.284±0.106, G2+3=0.160±0.109, G4=0.275±0.094, p=0.05; and in the microsteatosis group, G1=0.222±0.123, G2+3=0.293±0.108, G4=0.275±0.094, p=0.049. GEG expressed ICAM-1 diffusely in 83% of cases. There was no statistical difference between the groups when analysing the apoptosis index by caspase-3 or ICAM-1. Conclusion: GEG and the macrosteatosis group (moderate and severe) showed a significant reduction in the apoptosis index, whereas the microsteatosis group (moderate and severe) showed a significant increase. Moreover, GEG showed diffuse ICAM-1 expression; these markers may be involved in the hepatic ischemia/reperfusion injury of steatotic grafts.
Abstract:
In recent years, longevity has become a topic of considerable interest in several scientific fields. Research investigating the mechanisms that regulate the factors of longevity has multiplied recently, involving, in different ways, some regions of Italy. The study presented in this thesis aims to identify territorial clusters characterized by a significant propensity for longevity in the Emilia-Romagna region, using spatial clustering methodologies, some of them recently implemented. The population under study consists of the individuals resident in Emilia-Romagna in the five-year period 2000-2004, divided by age class, sex and municipality. The analysis is purely spatial, with the municipality as the elementary geographic unit, and was conducted separately for the two sexes. The identification of high-longevity areas of the region was carried out using four spatial clustering methodologies, based on maximum likelihood theory, which differ in how they search for potential clusters. The difference lies in the ability to identify territorial clusters of regular shape (spatial scan statistic, Kulldorff and Nagarwalla, 1995; Kulldorff, 1997, 1999) or of "free" geometric form (flexible scan statistic, Tango and Takahashi, 2005; genetic algorithm, Duczmal et al., 2007; greedy growth search, Yiannakoulias et al., 2007). The characteristics of each methodology thus make it possible to "capture" the possible geographic shapes of the clusters present in the territory, and the common underlying statistical theory allows a straightforward comparison of the results obtained. The persistence of an area characterized by a high propensity for longevity indeed suggests that the identified cluster is of considerable interest for further investigation.
The criterion used to assess the persistence of a cluster was derived from graph theory, with particular reference to multigraphs. The idea is to compare, for the same search parameters, the graphs associated with the spatial clusters identified by the different methodologies, by evaluating the occurrences of the links between pairs of vertices. Demographic considerations and a review of the existing literature on longevity studies led to the definition of an (open) age class to represent the phenomenon in our research: individuals aged 95 years or older (denoted 95+) were considered. The summary measure used to describe the phenomenon is a specific longevity indicator, borrowed from demography, called the Centenarian Rate (CR) (Robine and Caselli, 2005). It is defined as the ratio between the 95+ population and the resident population, in the same municipality, at the 1961 census. The idea behind the CR is to compare the long-lived individuals at a given time with those present in the same area about 40 years before the observation, under the assumption that the migratory effect of a population can be considered negligible beyond 60 years of age. The propensity for longevity affects the areas of Emilia-Romagna to different degrees. The provinces of the region characterized by greater longevity are Bologna, Ravenna and part of Forlì-Cesena, while the province of Ferrara stands out for a low level of the phenomenon. The distinction by sex is not clear-cut: men aged 95+, numerically fewer than women, reside mainly in the municipalities of the provinces of Bologna and Ravenna, with some extension into the Forlì area, similarly to the female population, which however shows a greater prevalence in the territories of Bologna and Forlì-Cesena, including some areas of the Rimini province.
The western provinces of the region, on the other hand, are not significantly affected by this phenomenon. The cluster detection methodologies used in the study produced broadly similar results, albeit with different search criteria. The spatial scan statistic proves to be an effective and fast methodology, but the regular geometric constraint imposed on the cluster limits its use, revealing poor adaptability in identifying irregular clusters. The FSC methodology showed good search capability and execution speed, complemented by a clear and detailed description of the results and by the possibility of displaying the final clusters graphically, albeit with a minimal level of detail. The main limitation of this methodology is the reduced size of the final cluster: the excessive computational effort required by the procedure forces the maximum limit to be set below 30 areas, making it usable only in studies where a limited territorial extent of the phenomenon is hypothesized. The genetic algorithm (GA) proves effective in identifying clusters of any shape and extent, although with a lower execution speed than the procedures described so far. Without an adequate selection of the search parameters, the procedure can identify very irregular and extended clusters, which suggests using a non-zero penalty during the search. The choice of the search parameters is in any case neither easy nor immediate and is often left to the experience of the researcher. This way of proceeding, in addition to the lack of a priori information on the phenomenon, increases the degree of subjectivity introduced in the selection of the parameters, influencing the final results.
Finally, the GGS methodology requires a markedly higher computational load than the other methodologies used, and the introduction of two control parameters allows greater arbitrariness in the selection of adequate search values; moreover, the recent implementation of the procedure and the lack of studies on real data call for a greater number of trials during the cluster search phase.
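The Centenarian Rate described above is a simple per-municipality ratio. A minimal sketch with invented counts (the municipality names and numbers are purely illustrative):

```python
import pandas as pd

# hypothetical data: 95+ residents (2000-2004) and resident population at the 1961 census
df = pd.DataFrame({
    "municipality": ["A", "B", "C"],
    "pop_95plus":   [12, 30, 5],
    "pop_1961":     [8000, 15000, 2500],
})
df["CR"] = df["pop_95plus"] / df["pop_1961"]   # Centenarian Rate (Robine and Caselli, 2005)
print(df["CR"].round(4).tolist())              # → [0.0015, 0.002, 0.002]
```

In the study these per-municipality rates are the input to the spatial scan methodologies, which search for contiguous sets of municipalities with unusually high CR.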
Abstract:
Background. The surgical treatment of dysfunctional hips is a severe condition for the patient and a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, with particular relevance to young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically; revision is thus the only delayed, yet reliable, end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted in the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined, including the variability of the anatomy, bone densitometry, surgical uncertainties and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models' bone stress predictions (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of the predictions of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the models' predictions to uncertainties in the modelling parameters was below 8.4%.
The analysis of the successful design resulted in very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research onto modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device, and to support the design optimisation phase, providing important information on critical characteristics of the patients, when applied to a new prosthesis. The presented approach has a generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in developing new orthopaedic devices.