975 results for LIKELIHOOD RATIO TEST
Abstract:
We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in the generation of discrete-time series observations from the sampling of data from an underlying continuous-time process that has undergone structural change. To focus the discussion, we utilize the problem of estimating the location of abrupt shifts in some simple time series models. This approach permits us to address salient issues relating to distortions induced by the inherent aggregation associated with discrete-time sampling of continuous-time processes experiencing structural change. We also address the issue of how time-irreversible structures may be generated within the smooth transition processes. (c) 2005 Elsevier Inc. All rights reserved.
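To make the aggregation mechanism concrete, here is a minimal simulation sketch (not the authors' model): a fine-resolution series with an abrupt level shift is block-averaged into coarser observations, and the break appears as a smoothed, gradual transition at the sampled scale.

```python
# Minimal sketch: an abrupt shift at the fine ("continuous-time") scale
# looks like a smooth transition after discrete-time aggregation.
import numpy as np

rng = np.random.default_rng(0)

fine_n = 10_000            # fine-resolution grid points
agg = 100                  # points averaged into one discrete observation
break_point = 5_550        # abrupt shift located inside an aggregation window

x = rng.normal(0.0, 0.2, fine_n)
x[break_point:] += 2.0     # abrupt structural change at the fine scale

# Discrete-time observations: block averages of the fine process
y = x.reshape(-1, agg).mean(axis=1)

# The observation containing the break takes an intermediate value,
# so the shift looks gradual rather than abrupt at the sampled scale.
k = break_point // agg
print(y[k - 1], y[k], y[k + 1])   # roughly 0.0, ~1.0, ~2.0
```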
Abstract:
An emerging issue in the field of astronomy is the integration, management and utilization of databases from around the world to facilitate scientific discovery. In this paper, we investigate the application of the machine learning techniques of support vector machines and neural networks to the problem of amalgamating catalogues of galaxies from two disparate data sources: radio and optical. Formulating this as a classification problem presents several challenges, including dealing with a highly unbalanced data set. Unlike the conventional approach to the problem (which is based on a likelihood ratio), machine learning does not require density estimation and is shown here to provide a significant improvement in performance. We also report some experiments that explore the importance of the radio and optical data features for the matching problem.
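As a hedged illustration of the classification setup, the sketch below uses scikit-learn's SVC with class weighting on synthetic, highly unbalanced "match vs. non-match" data; the features and class proportions are invented, not the paper's catalogues.

```python
# Sketch: class weighting is one standard way to handle a highly
# unbalanced match / non-match classification problem with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Fake radio/optical feature vectors: 2000 non-matches, 100 matches
X = np.vstack([rng.normal(0, 1, (2000, 4)),
               rng.normal(1.5, 1, (100, 4))])
y = np.array([0] * 2000 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" reweights errors inversely to class frequency
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```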
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations: Bartlett's test may be too sensitive, but Levene's and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met, e.g., lack of normality or non-additivity of treatment effects, then it may be better either to transform the data or to carry out a non-parametric test.
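The tests discussed here are all available in scipy.stats; the following short sketch (made-up data) shows the two-sample variance-ratio (F) test alongside Bartlett's, Levene's and the Brown-Forsythe tests.

```python
# Sketch of the variance-comparison tests on illustrative data
# (not from the Statnote).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10, 1.0, 30)
b = rng.normal(10, 2.0, 30)
c = rng.normal(10, 1.0, 30)

# Variance-ratio test for two groups: F = larger s^2 / smaller s^2
f = np.var(b, ddof=1) / np.var(a, ddof=1)
p = 2 * stats.f.sf(f, len(b) - 1, len(a) - 1)   # two-sided p-value
print(f"F = {f:.2f}, p = {p:.4f}")

# Bartlett's test (sensitive to departures from normality)
print(stats.bartlett(a, b, c))

# Levene's test (mean-centred) and Brown-Forsythe (median-centred)
print(stats.levene(a, b, c, center="mean"))
print(stats.levene(a, b, c, center="median"))
```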
Abstract:
When a suspect's DNA profile is admitted into court as a match to evidence, the probability of the perpetrator being another individual must be calculated from database allele frequencies. The two methods used for this calculation are phenotypic frequency and likelihood ratio. Neither of these calculations takes into account substructuring within populations. In substructured populations the frequency of homozygotes increases and that of heterozygotes usually decreases. The departure from Hardy-Weinberg expectation in a sample population can be estimated using Sewall Wright's Fst statistic. Fst values were calculated in four populations of African descent by comparing allele frequencies at three short tandem repeat loci. This was done by amplifying the three loci in each sample using the polymerase chain reaction and separating the fragments using polyacrylamide gel electrophoresis. The gels were then silver stained and autoradiograms taken, from which allele frequencies were estimated. Fst values averaged 0.007 ± 0.005 within populations of African descent and 0.02 ± 0.01 between white and black populations.
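A minimal sketch of the Fst computation from allele frequencies, under the simplest textbook definition Fst = (Ht - Hs)/Ht with equal subpopulation weights and no sample-size correction (illustrative frequencies, not the study's data):

```python
# Sketch of Wright's Fst at one locus from made-up allele frequencies:
# Hs = mean within-population expected heterozygosity,
# Ht = expected heterozygosity of the pooled population.
import numpy as np

# Rows: subpopulations; columns: allele frequencies at one STR locus
p = np.array([
    [0.50, 0.30, 0.20],
    [0.45, 0.35, 0.20],
    [0.60, 0.25, 0.15],
])

hs = np.mean(1.0 - np.sum(p**2, axis=1))   # mean within-population H_exp
p_bar = p.mean(axis=0)                     # pooled allele frequencies
ht = 1.0 - np.sum(p_bar**2)                # total expected heterozygosity

fst = (ht - hs) / ht
print(f"Hs = {hs:.4f}, Ht = {ht:.4f}, Fst = {fst:.4f}")
```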
Abstract:
Time series analysis has played an increasingly important role in weather and climate studies. The success of these studies depends crucially on knowledge of the quality of climate data such as, for instance, air temperature and rainfall data. For this reason, one of the main challenges for researchers in this field is to obtain homogeneous series. A time series of climate data is considered homogeneous when the values of the observed data change only due to climatic factors, i.e., without any interference from external non-climatic factors. Such non-climatic factors may produce undesirable effects in the time series, such as unrealistic homogeneity breaks, trends and jumps. In the present work, climatic time series for the city of Natal, RN, namely air temperature and rainfall series for the period from 1961 to 2012, were investigated. The main purpose was to check for the occurrence of homogeneity breaks or trends in the series under investigation. To this end, some basic statistical procedures, such as normality and independence tests, were applied. The occurrence of trends was investigated by linear regression analysis, as well as by the Spearman and Mann-Kendall tests. Homogeneity was investigated by the SNHT, as well as by the Easterling-Peterson and Mann-Whitney-Pettitt tests. The normality tests diverged in their results. The von Neumann ratio test showed that the air temperature series is not independent and identically distributed (iid), whereas the rainfall series is iid. According to the applied tests, both series display trends: the mean air temperature series displays an increasing trend, whereas the rainfall series shows a decreasing trend. Finally, the homogeneity tests revealed that all series under investigation present inhomogeneities, although the breaks depend on the applied test. In summary, the results showed that the chosen techniques may be applied to verify how well the studied time series are characterized. These results should therefore serve as a guide for further investigations into the statistical climatology of Natal or of any other place.
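As an illustration of the trend-testing step, here is a compact sketch of the Mann-Kendall test (simplified, without the tie correction) together with Spearman's rank test from scipy, applied to a fabricated temperature series; the homogeneity tests (SNHT, Pettitt) are not shown.

```python
# Sketch of the Mann-Kendall trend test (no tie correction) plus
# Spearman's rank correlation as a second trend check.
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return (S, Z, p) for the Mann-Kendall trend test, ignoring ties."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * stats.norm.sf(abs(z))
    return s, z, p

rng = np.random.default_rng(3)
temp = 26 + 0.02 * np.arange(52) + rng.normal(0, 0.3, 52)  # fake warming trend

print(mann_kendall(temp))
print(stats.spearmanr(np.arange(52), temp))   # rank-based trend check
```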
Abstract:
Survival models deal with the modelling of time-to-event data. In certain situations, a share of the population can no longer be subject to the occurrence of the event; cure fraction models emerged in this context. Among the models that incorporate a cured fraction, one of the best known is the promotion time model. In the present study we discuss hypothesis testing in the promotion time model with a Weibull distribution for the failure times of susceptible individuals. Hypothesis testing in this model may be performed based on the likelihood ratio, gradient, score or Wald statistics. The critical values are obtained from asymptotic approximations, which may result in size distortions in finite sample sizes. This study proposes bootstrap corrections to the aforementioned tests and a bootstrap Bartlett correction to the likelihood ratio statistic in the Weibull promotion time model. Using Monte Carlo simulations, we compared the finite-sample performance of the proposed corrections with that of the usual tests. The numerical evidence favors the proposed corrected tests. An empirical application is presented at the end of the work.
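The bootstrap correction idea can be sketched on a simpler model than the promotion time model: estimate the null distribution of the likelihood ratio statistic by resampling under the fitted null model instead of relying on the asymptotic chi-squared law. The example below tests an exponential null against a Weibull alternative (illustrative only, not the paper's model).

```python
# Schematic sketch of a parametric bootstrap for an LR test:
# H0: Weibull shape = 1 (exponential) vs H1: free shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def lr_stat(t):
    """LR statistic for exponential null vs Weibull alternative."""
    shape, _, scale = stats.weibull_min.fit(t, floc=0)
    ll_full = stats.weibull_min.logpdf(t, shape, 0, scale).sum()
    ll_null = stats.expon.logpdf(t, 0, t.mean()).sum()  # MLE scale = mean
    return 2 * (ll_full - ll_null)

t = stats.weibull_min.rvs(1.3, scale=2.0, size=40, random_state=rng)
lr_obs = lr_stat(t)

# Parametric bootstrap: resample under the fitted null (exponential) model
boot = [lr_stat(stats.expon.rvs(scale=t.mean(), size=len(t), random_state=rng))
        for _ in range(500)]

p_asym = stats.chi2.sf(lr_obs, df=1)          # asymptotic p-value
p_boot = np.mean(np.array(boot) >= lr_obs)    # bootstrap p-value
print(f"LR = {lr_obs:.3f}, asymptotic p = {p_asym:.4f}, bootstrap p = {p_boot:.4f}")
```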
Abstract:
This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification.
In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector and derived the likelihood ratio for the proposed short-time Fourier transform (STFT) detector. Experimental results showed that the proposed detector outperforms spectrogram-based detectors; because it retains phase information, it is also more sensitive to environmental changes.
In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp-rate information and a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and Mel-frequency cepstral coefficients (MFCCs) when applied to our collected data.
Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear.
We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale vocalization data set. The word error rate of the DCTNet feature is similar to that of the MFSC in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.
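A small sketch of the linear-versus-nonlinear embedding comparison, with scikit-learn's S-curve toy manifold standing in for the whale acoustic features: PCA and MDS are the linear mappings, while Isomap and the Laplacian eigenmap (SpectralEmbedding in scikit-learn) can unfold nonlinear structure.

```python
# Sketch: compare linear and nonlinear dimensionality reductions on a
# toy nonlinear manifold (stand-in for the whale acoustic data).
from sklearn import datasets, decomposition, manifold

X, _ = datasets.make_s_curve(n_samples=1000, random_state=0)

embeddings = {
    "PCA": decomposition.PCA(n_components=2),
    "MDS": manifold.MDS(n_components=2),
    "Isomap": manifold.Isomap(n_components=2, n_neighbors=10),
    "Laplacian eigenmap": manifold.SpectralEmbedding(n_components=2,
                                                     n_neighbors=10),
}

for name, emb in embeddings.items():
    Y = emb.fit_transform(X)    # 2-D embedding of the 3-D S-curve
    print(f"{name}: embedded shape {Y.shape}")
```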
Abstract:
The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from local sensors, but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
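The fusion-center test builds on Wald's sequential probability ratio test; a minimal single-sensor sketch with Gaussian mean-shift hypotheses (not the thesis' decentralized scheme) looks like this:

```python
# Sketch of Wald's SPRT: accumulate the log-likelihood ratio sample by
# sample and stop when it crosses either threshold.
import numpy as np
from scipy import stats

alpha, beta = 0.01, 0.01                  # false-alarm / miss probabilities
A = np.log((1 - beta) / alpha)            # upper (accept H1) threshold
B = np.log(beta / (1 - alpha))            # lower (accept H0) threshold

rng = np.random.default_rng(5)
mu0, mu1, sigma = 0.0, 0.5, 1.0

llr = 0.0
for n in range(1, 10_000):
    x = rng.normal(mu1, sigma)            # data generated under H1
    llr += stats.norm.logpdf(x, mu1, sigma) - stats.norm.logpdf(x, mu0, sigma)
    if llr >= A:
        print(f"accept H1 after {n} samples")
        break
    if llr <= B:
        print(f"accept H0 after {n} samples")
        break
```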
Abstract:
Trypanosomiasis has been identified as a neglected tropical disease in both humans and animals in many regions of sub-Saharan Africa. Whilst assessments of the biology of trypanosomes, vectors, vertebrate hosts and the environment have provided useful information about life cycles, transmission and pathogenesis of the parasites that could be used for treatment and control, less information is available about the effects of interactions among multiple intrinsic factors on trypanosome presence in tsetse flies from different sites. It is known that multiple species of tsetse flies can transmit trypanosomes, but differences in their vector competence have normally been studied in relation to individual factors in isolation, such as: intrinsic factors of the flies (e.g. age, sex); habitat characteristics; presence of endosymbionts (e.g. Wigglesworthia glossinidia, Sodalis glossinidius); feeding pattern; host communities that the flies feed on; and which species of trypanosomes are transmitted. The purpose of this study was to take a more integrated approach to investigating trypanosome prevalence in tsetse flies. In chapter 2, techniques were optimised for using the polymerase chain reaction (PCR) to identify species of trypanosomes (Trypanosoma vivax, T. congolense, T. brucei, T. simiae, and T. godfreyi) present in four species of tsetse flies (Glossina austeni, G. brevipalpis, G. longipennis and G. pallidipes) from two regions of eastern Kenya (the Shimba Hills and Nguruman). Based on universal primers targeting the internal transcribed spacer 1 region (ITS-1), T. vivax was the predominant pathogenic species detected in flies, both singly and in combination with other species of trypanosomes. Using generalised linear models (GLMs) and likelihood ratio tests to choose the best-fitting models, presence of T. vivax was significantly associated with an interaction between subpopulation (a combination of collection site and species of Glossina) and sex of the flies (χ² = 7.52, df = 21, P = 0.0061); prevalence in females overall was higher than in males, but this was not consistent across subpopulations. Similarly, T. congolense was significantly associated only with subpopulation (χ² = 18.77, df = 1, P = 0.0046); prevalence was higher overall in the Shimba Hills than in Nguruman, but this pattern varied by species of tsetse fly. When associations were analysed in individual species of tsetse flies, there were no consistent associations between trypanosome prevalence and any single factor (site, sex, age), and different combinations of interactions were found to be significant for each. The results thus demonstrated complex interactions between vectors and trypanosome prevalence related to both the distribution and intrinsic factors of tsetse flies. The potential influence of the presence of S. glossinidius on trypanosome presence in tsetse flies was studied in chapter 3. A high number of Sodalis-positive flies was found in the Shimba Hills, while there were only two positive flies from Nguruman. Presence or absence of Sodalis was significantly associated with subpopulation, while trypanosome presence showed a significant association with age (χ² = 4.65, df = 14, P = 0.0310) and an interaction between subpopulation and sex (χ² = 18.94, df = 10, P = 0.0043). However, the specific associations that were significant varied across species of trypanosomes, with T. congolense and T. brucei, but not T. vivax, showing significant interactions involving Sodalis.
Although it has previously been concluded that presence of Sodalis increases susceptibility to trypanosomes, the results presented here suggest a more complicated relationship, which may be biased by differences in the distribution and intrinsic factors of tsetse flies, as well as by which trypanosome species are considered. In chapter 4, trypanosome status was studied in relation to blood meal sources, feeding status and feeding patterns of G. pallidipes (the predominant fly species collected for this study), as determined by sequencing the mitochondrial cytochrome B gene using DNA extracted from abdomen samples. African buffalo and African elephants were the main sources of blood meals, but antelopes, warthogs, humans, giraffes and hyenas were also identified. Feeding on multiple hosts was common in flies sampled from the Shimba Hills, but most flies from Nguruman had fed on a single host species. Based on multiple correspondence analysis (MCA), host-feeding patterns showed a correlation with site of sample collection and Sodalis status, while trypanosome status was correlated with sex and age of the flies, suggesting that recent host-feeding patterns derived from blood meal analysis cannot predict trypanosome status. In conclusion, the complexity of the interactions found suggests that strategies of tsetse fly control should be specific to particular epidemic areas. Future studies should include laboratory experiments that use local colonies of tsetse flies, local strains of trypanosomes and local S. glossinidius under controlled environmental conditions to tease out the factors that affect vector competence and the relative influence of external environmental factors on the dynamics of these interactions.
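A schematic sketch of the model-selection approach described above: fit nested binomial GLMs for trypanosome presence and compare them with a likelihood ratio test. The data frame below is fabricated, with 'site' and 'sex' as stand-ins for the study's subpopulation and sex factors.

```python
# Sketch: LR test for an interaction term in a logistic GLM
# (fabricated toy data, not the study's flies).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "site": rng.choice(["ShimbaHills", "Nguruman"], n),
    "sex": rng.choice(["F", "M"], n),
})
logit = -1.0 + 0.8 * (df["site"] == "ShimbaHills") + 0.5 * (df["sex"] == "F")
df["positive"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

full = smf.logit("positive ~ site * sex", data=df).fit(disp=0)
reduced = smf.logit("positive ~ site + sex", data=df).fit(disp=0)

# LR test for the interaction: 2 * (llf_full - llf_reduced) ~ chi2(df)
lr = 2 * (full.llf - reduced.llf)
ddf = full.df_model - reduced.df_model
print(f"LR = {lr:.3f}, df = {ddf:.0f}, p = {stats.chi2.sf(lr, ddf):.4f}")
```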
Abstract:
Thyroid neoplasia drives the search for diagnostic methods that allow early diagnosis and timely treatment, enabling longer survival and better quality of life. Objective: to determine the correlation between cytological and histopathological findings in the diagnosis of thyroid neoplasia in patients treated at SOLCA - Cuenca during 2009-2013. Methods: an observational, retrospective, analytical study of diagnostic correlation, based on the clinical records of patients who underwent fine-needle aspiration (FNA) for cytology, reported according to the Bethesda System, and histopathology for the diagnosis of thyroid neoplasia. Results: the study included 415 patients with thyroid neoplasia, 89.2% of them women; mean age was 51.8 ± 15.2 years, with 41-55 years the largest category (36.9%); 47.2% came from Cuenca and 37.8% from neighbouring provinces. Married patients were the most frequent group, 269 (64.8%), and housewives were the most affected occupation, 231 (55.7%). Histology confirmed 96.4% of cytological diagnoses in Bethesda category 6. There was a significant correlation (r = 0.49) and moderate agreement (kappa = 0.337) between cytology and histology. Sensitivity = 63% (95% CI: 58-69), specificity = 94% (95% CI: 89-98), LR+ = 10.9 (95% CI: 5-22) and LR- = 0.39 (95% CI: 0.3-0.4). Conclusions: FNA cytology is a useful tool for the study and diagnosis of patients with thyroid disease. Performed by experts, the puncture is a fast, inexpensive and well-tolerated technique that produces reliable results, and the Bethesda categorization is a reliable, valid system for reporting thyroid cytology.
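A worked sketch of the reported diagnostic measures from a 2x2 table; the counts below are hypothetical, chosen only to roughly reproduce the reported sensitivity and specificity.

```python
# Sketch: sensitivity, specificity and likelihood ratios from a 2x2 table
# (hypothetical counts, not SOLCA's data).
tp, fn = 126, 74    # diseased patients: test positive / test negative
fp, tn = 13, 202    # healthy patients:  test positive / test negative

se = tp / (tp + fn)                 # sensitivity
sp = tn / (tn + fp)                 # specificity
lr_pos = se / (1 - sp)              # LR+ = Se / (1 - Sp)
lr_neg = (1 - se) / sp              # LR- = (1 - Se) / Sp

print(f"Sensitivity = {se:.2f}, Specificity = {sp:.2f}")
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
```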
Abstract:
This study aimed to determine the diagnostic agreement between the acetic acid visualization test (PVAA), cytology and histopathology in the diagnosis of cervical lesions. PVAA and cytology were performed on 130 women aged 20 to 89 years enrolled in the MSP's DOC programme at Health Centre No. 1 in Cuenca and the José F. Valdivieso Hospital in Santa Isabel from January 1 to December 31, 2004. Results: biopsy was performed only in the 22 acetowhite-positive cases. The sensitivity of the PVAA was 100%, equal to biopsy (the gold standard), but specificity was 86.4% (95% CI: 79.9-92.8) owing to the high proportion of false positives (17/22 = 77.27%). The positive predictive value was 22.7% (95% CI: 2.9-42.5) but the negative predictive value was 100%. Agreement by the kappa index was 32.8% (95% CI: 16-33) and efficacy by Youden's index was 86% (95% CI: 80-92). The diagnostic probability calculation gave a positive likelihood ratio of 7.35 (95% CI: 4.73-11.4) and an LR- of 0.0 (95% CI: 0.0-0.4). Papanicolaou results were identical to biopsy. Premalignant histological lesions were detected in 3.9% of cases: 0.8% low-grade SIL (CIN I) and 3.1% high-grade SIL (CIN II and CIN III). Implications: the PVAA is highly sensitive and improves the detection rate of precancerous lesions compared with conventional cytology, but the high proportion of false positives raises concerns. Its simplicity and low cost make it attractive for low-resource settings such as the one sampled.
Abstract:
Total hip arthroplasty (THA) and total knee arthroplasty (TKA) are often offered to patients with severe joint degeneration. Although effective for the majority of patients, these interventions lead to suboptimal outcomes in many cases, and it remains difficult to identify at-risk patients in advance. Identifying these patients before surgery could make it possible to optimize the range of care and services offered and possibly improve their surgical outcomes. The objectives of this thesis were: 1) to conduct a systematic review of the determinants of patient-reported pain and functional limitations in the medium term after these two types of arthroplasty, and 2) to develop clinical prediction models to identify patients at risk of poor outcomes in terms of pain and functional limitations after THA and TKA. A systematic literature review of the determinants of pain and function after THA and TKA was carried out in four databases up to April 2015 and October 2014, respectively. To develop a prediction algorithm capable of identifying patients at risk of suboptimal outcomes, we also used retrospective data from 265 patients who underwent THA at Hôpital Maisonneuve-Rosemont (HMR) from 2004 to 2010. Finally, prospective data on 141 patients, recruited at the time of their placement on a waiting list for TKA in three university hospitals in Quebec City, Canada, and followed up to six months after surgery, allowed the development of a clinical prediction rule identifying patients at risk of poor outcomes in terms of pain and functional limitations. Twenty-two (22) studies of average-to-excellent methodological quality were included in the review. The main determinants of pain and functional limitations after THA included: preoperative levels of pain and function, higher body mass index, greater medical comorbidity, poorer general health, lower education, less severe radiographic osteoarthritis, and osteoarthritis of the contralateral hip. Thirty-four (34) studies of average-to-excellent methodological quality assessing the determinants of pain and functional limitations after TKA were evaluated, and the following determinants were identified: preoperative levels of pain and function, greater medical comorbidity, poorer general health, higher levels of anxiety and/or depressive symptoms, back pain, more catastrophic thinking, and lower socioeconomic status.
For the creation of a clinical prediction rule, a preliminary algorithm composed of age, sex, body mass index and three preoperative WOMAC questions identified patients at risk of suboptimal surgical outcomes (worst quartile of the postoperative WOMAC and perceiving their operated hip as artificial with minor or major functional limitations) at a mean ± standard deviation of 446 ± 171 days after THA, with a sensitivity of 75.0% (95% CI: 59.8-85.8), a specificity of 77.8% (95% CI: 71.9-82.7) and a positive likelihood ratio of 3.38 (98% CI: 2.49-4.57). A clinical prediction rule formed from five items of the preoperative WOMAC questionnaire identified patients awaiting TKA at risk of poor outcomes (worst quintile of the postoperative WOMAC) six months after TKA, with a sensitivity of 82.1% (95% CI: 66.7-95.8), a specificity of 71.7% (95% CI: 62.8-79.8) and a positive likelihood ratio of 2.9 (95% CI: 1.8-4.7). The results of this thesis identified, from the literature, the determinants of pain and functional limitations after THA and TKA with the highest level of evidence to date. In addition, two prediction models with very good predictive ability were developed to identify patients at risk of poor surgical outcomes after THA and TKA. Identifying these patients before surgery could help optimize their care and possibly improve their surgical outcomes.
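A schematic sketch of deriving such a prediction rule (synthetic stand-in data, not the thesis cohorts): fit a logistic model on preoperative predictors, choose a score cut-off by Youden's index, and report sensitivity, specificity and the positive likelihood ratio.

```python
# Sketch: a simple prediction-rule derivation on synthetic data,
# with age/sex/BMI/WOMAC stand-ins for the thesis' predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.normal(65, 10, n),        # age
    rng.integers(0, 2, n),        # sex
    rng.normal(30, 5, n),         # body mass index
    rng.normal(50, 15, n),        # preoperative WOMAC summary score
])
risk = 1 / (1 + np.exp(-(0.06 * (X[:, 3] - 50) - 0.5)))
y = (rng.random(n) < risk).astype(int)   # 1 = poor outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

fpr, tpr, thr = roc_curve(y, scores)
best = np.argmax(tpr - fpr)              # Youden's J picks the cut-off
se, sp = tpr[best], 1 - fpr[best]
print(f"cut-off = {thr[best]:.2f}, Se = {se:.2f}, Sp = {sp:.2f}, "
      f"LR+ = {se / (1 - sp):.2f}")
```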
Abstract:
Cardiovascular disease is one of the leading causes of death around the world. Resting heart rate has been shown to be a strong and independent risk marker for adverse cardiovascular events and mortality, and yet its role as a predictor of risk is somewhat overlooked in clinical practice. With the aim of highlighting its prognostic value, the role of resting heart rate as a risk marker for death and other adverse outcomes was further examined in a number of different patient populations. A systematic review of studies that previously assessed the prognostic value of resting heart rate for mortality and other adverse cardiovascular outcomes is presented. New analyses of nine clinical trials were carried out. Both the original Cox model and the extended Cox model, which allows for the analysis of time-dependent covariates, were used to evaluate and compare the predictive value of baseline and time-updated heart rate measurements for adverse outcomes in the CAPRICORN, EUROPA, PROSPER, PERFORM, BEAUTIFUL and SHIFT populations. Pooled individual-patient meta-analyses of the CAPRICORN, EPHESUS, OPTIMAAL and VALIANT trials, and of the BEAUTIFUL and SHIFT trials, were also performed. The discrimination and calibration of the models were evaluated using Harrell's C-statistic and likelihood ratio tests, respectively. Finally, following on from the systematic review, meta-analyses of the relation between baseline and time-updated heart rate and the risk of death from any cause and from cardiovascular causes were conducted. Both elevated baseline and elevated time-updated resting heart rates were found to be associated with an increased risk of mortality and other adverse cardiovascular events in all of the populations analysed. In some cases, elevated time-updated heart rate was associated with risk of events where baseline heart rate was not. Time-updated heart rate also contributed additional information about the risk of certain events beyond knowledge of baseline heart rate or previous heart rate measurements. Adding resting heart rate to the models where it was associated with risk of outcome improved both discrimination and calibration, and in general the models including time-updated heart rate along with baseline or the previous heart rate measurement had the highest (and mutually similar) C-statistics, and thus the greatest discriminative ability. The meta-analyses demonstrated that a 5 bpm higher baseline heart rate was associated with a 7.9% and an 8.0% increase in the risk of all-cause and cardiovascular death, respectively (both p < 0.001). Additionally, a 5 bpm higher time-updated heart rate (adjusted for baseline heart rate in eight of the ten studies included in the analyses) was associated with a 12.8% (p < 0.001) and a 10.9% (p < 0.001) increase in the risk of all-cause and cardiovascular death, respectively. These findings may motivate health care professionals to routinely assess resting heart rate in order to identify individuals at higher risk of adverse events. The fact that the addition of time-updated resting heart rate improved the discrimination and calibration of models for certain outcomes, even if only modestly, strengthens the case for adding it to traditional risk models. The findings are of particular importance, and have the greatest implications, for the clinical management of patients with pre-existing disease.
An elevated or increasing heart rate over time could be used as a tool, potentially alongside other established risk scores, to help doctors identify patient deterioration or patients at higher risk who might benefit from more intensive monitoring or treatment re-evaluation. Further exploration of the role of continuous recording of resting heart rate, for example when patients are at home, would be informative. In addition, investigation into the cost-effectiveness and optimal frequency of resting heart rate measurement is required. One of the most vital areas for future research is establishing an objective cut-off value defining a high resting heart rate.
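A minimal sketch of the extended Cox model with a time-updated covariate, using lifelines' CoxTimeVaryingFitter on fabricated long-format data (one row per patient per follow-up interval); this illustrates the modelling approach, not the thesis' analyses.

```python
# Sketch: extended Cox model with a time-updated heart rate covariate
# on fabricated long-format data.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(8)
rows = []
for pid in range(200):
    hr = rng.normal(72, 10)                    # baseline resting heart rate
    for start in range(0, 24, 6):              # 6-month visit intervals
        hr += rng.normal(0, 4)                 # time-updated measurement
        hazard = 0.004 * np.exp(0.03 * (hr - 70))
        event = rng.random() < hazard * 6      # event in this interval?
        rows.append({"id": pid, "start": start, "stop": start + 6,
                     "hr": hr, "event": int(event)})
        if event:
            break

df = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()

# Hazard ratio per 5 bpm higher time-updated heart rate
print(f"HR per 5 bpm: {np.exp(5 * ctv.params_['hr']):.3f}")
```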
Abstract:
Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risks, odds ratios, receiver operating characteristic curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and the integrated discrimination improvement. These measures serve to determine whether a new test has added clinical value on top of conventional risk testing, and how this can be verified statistically. Two clinically meaningful examples illustrate the novel approaches. This work serves as a review and as groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
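The posttest risk calculation based on likelihood ratios can be shown in a few lines (illustrative numbers only): convert the pretest probability to odds, multiply by the test's likelihood ratio, and convert back to a probability.

```python
# Sketch: Bayesian update of a pretest probability with a likelihood ratio.
def posttest_probability(pretest_p, likelihood_ratio):
    """Update a pretest probability with a likelihood ratio via Bayes."""
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. 10% pretest cardiovascular risk, new imaging test with LR+ = 4.0
print(posttest_probability(0.10, 4.0))   # ~0.31
```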