882 results for WEIGHTED MOVING AVERAGES
Abstract:
Principal coordinates analysis and multiple regression analysis were used to determine the environmental factors associated with the decline in phytoplankton production during and after the 1977 drought for the San Francisco Bay-Delta Estuary. Physical, chemical and biological data were collected semimonthly or monthly during the spring-summer between 1973 and 1982 from 15 sampling sites located throughout the Bay-Delta. A decline in phytoplankton community diversity and density during the 1977 drought and subsequent years (1978 through 1981) was described using principal coordinates analysis. The best multiple regression which described the changes in phytoplankton community succession contained the variables water temperature, wind velocity and ortho-phosphate concentration. Together these variables accounted for 61 percent of the variation in the phytoplankton community among years described by principal coordinates analysis. An increase in water temperature, wind velocity and ortho-phosphate concentration within the Bay-Delta, beginning in June 1976 and continuing through 1981, was demonstrated using weighted moving averages. From the strong association between phytoplankton community succession and climatic variables it was hypothesized that the decline in phytoplankton production during and after the 1977 drought was associated with climatic changes within the northeast Pacific.
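As context for the smoothing technique named in this abstract, here is a minimal weighted-moving-average sketch in Python; the window, weights and temperature values are illustrative assumptions, not the study's Bay-Delta data or parameters.

```python
import numpy as np

def weighted_moving_average(series, weights):
    """Centred weighted moving average whose window length equals len(weights)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                  # normalise weights to sum to 1
    return np.convolve(series, w, mode="valid")   # symmetric weights, so no flip issue

# Illustrative water-temperature readings (°C); not the study's data
temps = np.array([14.1, 15.0, 16.2, 17.5, 18.1, 18.9, 19.4, 18.7, 17.9])
print(weighted_moving_average(temps, weights=[1, 2, 3, 2, 1]))
```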
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in colors and textures. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented, and two classification approaches based on neural network architectures are proposed. The classification performance of the proposed approaches is compared with the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
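A minimal sketch of grey-level histogram entropy as an image texture feature, in the spirit of the entropy-based features described above; the patch, bin count and use of a random array are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy of a grey-level histogram, a simple texture descriptor."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

# Illustrative 8-bit patch; a real pipeline would compute this per aggregate region
patch = np.random.randint(0, 256, size=(64, 64))
print(image_entropy(patch))
```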
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
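A minimal sketch of an exponentially weighted moving-average estimate of the minimum-variance hedge ratio, the quantity compared in the abstract; the decay factor 0.94 and the simulated returns are assumptions, not the paper's data or settings.

```python
import numpy as np

def ewma_hedge_ratio(spot_returns, futures_returns, lam=0.94):
    """Minimum-variance hedge ratio h = Cov(s, f) / Var(f), with both moments
    estimated by exponentially weighted moving averages (decay factor lam)."""
    cov = var = 0.0
    for rs, rf in zip(spot_returns, futures_returns):
        cov = lam * cov + (1 - lam) * rs * rf
        var = lam * var + (1 - lam) * rf * rf
    return cov / var

# Illustrative daily currency returns, not the study's data
rng = np.random.default_rng(0)
fut = rng.normal(0.0, 0.006, 250)
spot = 0.9 * fut + rng.normal(0.0, 0.002, 250)
print(ewma_hedge_ratio(spot, fut))
```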
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended toward slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series variables alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with the use of risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values being outside the accepted normal range, is associated with some improvement in model performance.
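Because the abstract leans on the Kappa statistic to cope with the unbalanced class distribution, here is a small sketch of how Cohen's kappa corrects observed agreement for chance; the labels are illustrative, not the study's data.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance, useful with unbalanced classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_obs - p_exp) / (1 - p_exp)

y_true = [0] * 90 + [1] * 10                       # dominant negative class
y_pred = [0] * 85 + [1] * 5 + [1] * 7 + [0] * 3    # 92% raw accuracy
print(round(cohens_kappa(y_true, y_pred), 3))      # ~0.59 after chance correction
```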
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI), for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765; Hosmer–Lemeshow p-value: 0.11). Cumulative sum (CUSUM), exponentially weighted moving average (EWMA) and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring of lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
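A minimal sketch of one way a risk-adjusted EWMA chart can be driven by model-expected failure probabilities of the kind described above; the smoothing constant and the simulated risks and outcomes are assumptions, not the paper's implementation.

```python
import numpy as np

def risk_adjusted_ewma(outcomes, expected_probs, lam=0.1):
    """EWMA of observed-minus-expected lesion failures; sustained drift above
    zero flags more failures than the risk-adjustment model predicts."""
    stat, path = 0.0, []
    for obs, exp in zip(outcomes, expected_probs):
        stat = lam * (obs - exp) + (1 - lam) * stat
        path.append(stat)
    return np.array(path)

# Illustrative stream of model-expected risks and observed outcomes (0/1)
rng = np.random.default_rng(1)
risks = rng.uniform(0.02, 0.15, 200)
observed = rng.binomial(1, risks)
print(risk_adjusted_ewma(observed, risks)[-5:])
```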
Abstract:
In a multi-target complex network, the links (Lij) represent the interactions between a drug (di) and a target (tj), characterized by different experimental measures (Ki, Km, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (cj). In this work, we use Shannon entropy measures to develop a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model correctly predicts >8300 experimental outcomes, with Accuracy, Specificity, and Sensitivity above 80%-90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols involving >150 molecular and cellular targets in 11 different organisms (including human). We also report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not described in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) assay in the presence of glutamate; and (3) assay in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a large number of pharmacological tests not carried out experimentally.
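A minimal sketch of the moving-average idea behind models of this type: a descriptor is expressed as its deviation from the mean of that descriptor over all compounds assayed under the same experimental protocol. The descriptor, compounds and protocol labels here are hypothetical; this is not the CHEMBL data or the ALMA model itself.

```python
import pandas as pd

# Illustrative table: one row per (compound, assay protocol) pair.
# 'entropy' stands in for a Shannon-entropy molecular descriptor.
df = pd.DataFrame({
    "compound": ["c1", "c2", "c3", "c1", "c2"],
    "protocol": ["IC50_hum", "IC50_hum", "IC50_hum", "Ki_rat", "Ki_rat"],
    "entropy":  [2.10, 1.85, 2.40, 2.10, 1.85],
})
# Moving-average style descriptor: deviation from the protocol-wise mean
df["d_entropy"] = df["entropy"] - df.groupby("protocol")["entropy"].transform("mean")
print(df)
```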
Abstract:
This thesis investigates the acute effects of air pollution on the peak expiratory flow (PEF) of schoolchildren aged 6 to 15 years living in municipalities of the Brazilian Amazon. The first article evaluated the effects of fine particulate matter (PM2.5) on the PEF of 309 schoolchildren in the municipality of Alta Floresta, Mato Grosso (MT), during the 2006 dry season. Mixed-effects models were estimated for the whole sample and stratified by school shift and by the presence of asthma symptoms. The second article presents the strategies used to determine the variance function of the random error in the mixed-effects models. The third article analyses data from a panel study of 234 schoolchildren carried out in the 2008 dry season in Tangará da Serra, MT. Linear and distributed-lag (PDLM) effects of inhalable particulate matter (PM10), PM2.5 and black carbon (BC) on PEF were evaluated for all schoolchildren and stratified by age group. In all three articles, the mixed-effects models were adjusted for temporal trend, temperature, humidity and individual characteristics. The models also accounted for residual autocorrelation and for the variance function of the random error. Regarding exposure, the effects of 5-hour, 6-hour, 12-hour and 24-hour exposures were evaluated for the current day, with lags of 1 to 5 days, and as 2- and 3-day moving averages. For Alta Floresta, the models for all children indicated reductions in PEF ranging from 0.26 l/min (95% CI: 0.49; 0.04) to 0.38 l/min (95% CI: 0.71; 0.04) for each 10 µg/m³ increase in PM2.5. No significant effects of pollution were observed in the group of asthmatic children. The 24-hour exposure had a significant effect in the afternoon-shift group and in the non-asthmatic group. Exposure from 0:00 to 5:30 was significant for both morning- and afternoon-shift pupils. In Tangará da Serra, the results showed significant reductions in PEF for 10-unit increases in the pollutant, mainly for lags of 3, 4 and 5 days. For PM10, reductions ranged from 0.15 (95% CI: 0.29; 0.01) to 0.25 l/min (95% CI: 0.40; 0.10). For PM2.5, reductions were between 0.46 l/min (95% CI: 0.86; 0.06) and 0.54 l/min (95% CI: 0.95; 0.14). For BC, the reduction was approximately 0.014 l/min. Regarding the PDLM, the most important effects were observed in models based on exposure from the current day up to 5 days earlier. The overall effect was significant only for PM10, with a PEF reduction of 0.31 l/min (95% CI: 0.56; 0.05). This approach also indicated significant lagged effects for all pollutants. Finally, the study identified children aged 6 to 8 years as the group most sensitive to the effects of pollution. The findings of this thesis suggest that air pollution from biomass burning is associated with reduced PEF in children and adolescents aged 6 to 15 years living in the Brazilian Amazon.
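A small sketch of building the lagged and moving-average exposure variables described above with pandas; the PM2.5 values and dates are illustrative, not the study's monitoring data.

```python
import pandas as pd

# Illustrative daily PM2.5 series (µg/m³), not the study's monitoring data
pm25 = pd.Series([22.0, 35.5, 41.2, 30.8, 27.4, 52.1, 44.3, 38.9],
                 index=pd.date_range("2008-07-01", periods=8))

exposures = pd.DataFrame({
    "lag0": pm25,                        # current-day exposure
    "lag1": pm25.shift(1),               # previous-day exposure
    "lag3": pm25.shift(3),
    "ma2":  pm25.rolling(2).mean(),      # 2-day moving average
    "ma3":  pm25.rolling(3).mean(),      # 3-day moving average
})
print(exposures)
```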
Abstract:
In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme for recognizing legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real time to determine the trust level of the current user, using real data derived from user interaction with the device. The proposed scheme is designed to operate completely in the background, require a minimal training period, enable a high user recognition rate for implicit authentication, and promptly detect abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard-deviation-based and EWMA (exponentially weighted moving average) based algorithms. The results of extensive experiments on user data collected over a period of several weeks from an Android phone indicate that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
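A minimal sketch of an EWMA-style aggregate trust score compared against a fixed threshold, in the spirit of the scheme described above; the smoothing factor, threshold and behaviour scores are assumptions, not the paper's algorithm or data.

```python
def ewma_trust(scores, alpha=0.2, threshold=0.6):
    """Aggregate an EWMA trust score over behaviour events; return (score, locked)
    pairs, where locked=True means explicit re-authentication would be requested."""
    trust = scores[0]
    out = []
    for s in scores[1:]:
        trust = alpha * s + (1 - alpha) * trust   # exponentially weighted update
        out.append((round(trust, 3), trust < threshold))
    return out

# Illustrative per-event behaviour scores; the tail mimics abnormal activity
print(ewma_trust([0.82, 0.79, 0.85, 0.30, 0.25, 0.20]))
```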
Abstract:
Nowadays, one of the main goals of companies is to achieve efficient management. In particular, companies that handle large volumes of stock need to optimise the quantities of their stored products in order, among other things, to reduce the associated costs. The work documented here describes a new model developed for order management at a leading transport-solutions company. The efficiency of the model was achieved through the use of several mathematical forecasting methods. Noteworthy are the Croston, Teunter and Syntetos-Boylan methods, suited to items with intermittent demand, and the use of more traditional methods such as moving averages and exponential smoothing. The concepts of lead time, safety stock, reorder point and economic order quantity were explored and underpin the developed model. Safety stock received special attention: a new calculation formula was established in accordance with the company's real needs. The efficiency of the model was tested by monitoring the evolution of actual stock. In addition to a significant reduction in the value of stored stock, the viability of the model is reflected in the service level achieved.
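A minimal sketch of Croston's method for intermittent demand, one of the forecasting methods mentioned above; the smoothing constant, initialisation and demand history are illustrative assumptions, not the company's model.

```python
def croston(demand, alpha=0.1):
    """Croston's forecast for intermittent demand: smooth non-zero demand sizes
    and inter-demand intervals separately; forecast = size / interval."""
    size = interval = None
    periods_since = 1
    for d in demand:
        if d > 0:
            if size is None:                                 # initialise on first demand
                size, interval = d, periods_since
            else:
                size = alpha * d + (1 - alpha) * size
                interval = alpha * periods_since + (1 - alpha) * interval
            periods_since = 1
        else:
            periods_since += 1
    return size / interval if size is not None else 0.0

# Illustrative intermittent demand history for a spare part (units per period)
print(croston([0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1]))
```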
Abstract:
The Central Governor Model (CGM) suggests that perturbations in the rate of heat storage (HS) are centrally integrated to regulate exercise intensity in a feed-forward fashion to prevent excessive thermal strain. We directly tested the CGM by manipulating ambient temperature (Tam) at 20-minute intervals from 20°C to 35°C, and returning to 20°C, while participants cycled at a set rate of perceived exertion (RPE). The synchronicity of power output (PO) with changes in HS and Tam was quantified using Auto-Regressive Integrated Moving Average (ARIMA) analysis. PO fluctuated irregularly but was not significantly correlated with changes in thermophysiological status. Repeated-measures analysis indicated no changes in lactate accumulation. In conclusion, real-time dynamic sensation of Tam and integration of HS does not directly influence voluntary pacing strategies during sub-maximal cycling at a constant RPE, while non-significant changes in blood lactate suggest an absence of peripheral fatigue.
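A minimal sketch of an ARIMA fit with ambient temperature as an exogenous regressor, one way to probe whether power output tracks temperature; the series are simulated and the model order is an assumption, so this is not the study's analysis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative minute-by-minute series, not the study's measurements
rng = np.random.default_rng(2)
t_amb = np.repeat([20.0, 35.0, 20.0], 20)        # stepped ambient temperature (°C)
power = 210 + np.cumsum(rng.normal(0, 2, 60))    # power output (W) at a fixed RPE

# ARIMA with ambient temperature as an exogenous regressor: a coefficient on
# t_amb near zero would indicate power output does not track temperature
result = ARIMA(power, exog=t_amb, order=(1, 1, 1)).fit()
print(result.params)
```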
Abstract:
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Between 10% and 65% of patients who have never experienced AF develop it, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in about 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer and more costly hospital stays. The mechanisms responsible for postoperative AF are not well understood. Identifying patients at high risk of AF after coronary bypass surgery would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after aortocoronary bypass surgery. The first objective of the research is to study whether the recordings display typical changes before the onset of AF. The second objective is to identify predictive factors that make it possible to identify which patients will develop AF. The recordings were made by Dr Pierre Pagé's team on 137 patients treated by coronary bypass. Three unipolar electrodes were sutured onto the atrial epicardium to record continuously during the first 4 postoperative days. The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimised on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all the recordings that were later used in the analyses. It was complemented by tools to form, label and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. Preoperative clinical data were then analysed to establish the preoperative risk of AF. Age, serum creatinine level and a diagnosis of myocardial infarction proved to be the most important predictive factors. Although the level of preoperative risk can to some extent predict who will develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first prolonged AF were analysed. This first prolonged AF was always triggered by a PAA, most often originating in the left atrium. However, during the two pre-AF hours, the distribution of PAAs, and of the fraction of them originating in the left atrium, was wide and inhomogeneous across patients.
The number of PAAs, the duration of transient arrhythmias, the sinus heart rate and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before the onset of AF. The last step was to compare patients with and without prolonged AF in order to find factors able to discriminate the two groups. Five types of logistic regression models were compared. They had similar sensitivity, specificity and receiver operating characteristic curves, and all predicted patients without AF very poorly. A moving-average method was proposed to improve the discrimination, especially for patients without AF. Two models were retained, selected on criteria of robustness, accuracy and applicability. Around 70% of patients without AF and 75% of patients with AF were correctly identified in the last hour before AF. The rate of PAAs, the fraction of PAAs initiated in the left atrium, the pNN50, the atrioventricular conduction time, and the correlation between the latter and heart rate were the predictive variables common to both models.
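A minimal sketch of the moving-average idea for stabilising per-interval predictions before classifying a patient, in the spirit of the method proposed above; the window length, threshold and probabilities are illustrative assumptions, not the retained models.

```python
import numpy as np

def smoothed_decision(probs, window=5, threshold=0.5):
    """Average the last `window` predicted probabilities before classifying,
    so isolated high predictions do not flag a patient as pre-AF."""
    probs = np.asarray(probs, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(probs, kernel, mode="valid")   # simple moving average
    return smoothed, smoothed > threshold

# Illustrative per-interval probabilities from a logistic model (not study data)
p = [0.35, 0.62, 0.41, 0.38, 0.44, 0.58, 0.63, 0.71, 0.69, 0.74]
print(smoothed_decision(p))
```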