940 results for Remediation time prediction
Abstract:
Raw measurement data does not always immediately convey useful information, but applying statistical analysis tools to measurement data can improve the situation. Data analysis can offer benefits like acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, tools such as Qlucore Omics Explorer (QOE) and sparse Bayesian regression (SB) are used. Linear regression is then used to build a model based on the subset of variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model fits the whole available dataset well. For future work, it is therefore proposed either to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
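As an illustration of the modeling pipeline this abstract describes, the minimal sketch below ranks variables with a sparse Bayesian (ARD) regression and then fits an ordinary linear model on the highest-weight subset. It uses scikit-learn and synthetic stand-in data; the plant measurements and target are hypothetical, and QOE itself is a GUI tool with no scripting shown here.

    import numpy as np
    from sklearn.linear_model import ARDRegression, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                 # stand-in for plant measurements
    y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200)   # "product quality"

    # Sparse Bayesian (ARD) regression drives irrelevant weights toward zero.
    sb = ARDRegression().fit(X, y)
    top = np.argsort(np.abs(sb.coef_))[-3:]        # keep the 3 largest-weight variables

    # Ordinary linear regression on the selected subset, mirroring the thesis workflow.
    lr = LinearRegression().fit(X[:, top], y)
    print("selected variables:", top, " R^2:", round(lr.score(X[:, top], y), 3))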
Abstract:
This study evaluates the application of an intelligent hybrid system for time-series forecasting of atmospheric pollutant concentration levels. The proposed method consists of an artificial neural network combined with a particle swarm optimization algorithm. The method not only searches for the relevant time lags needed to characterize the time series correctly, but also determines the best neural network architecture. An experimental analysis is performed using four real time series, and the results are reported in terms of six performance measures. The experimental results demonstrate that the proposed methodology achieves a fair prediction of the pollutant time series using compact networks.
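The abstract combines a neural network with particle swarm optimization (PSO) to pick time lags and architecture. Below is a heavily simplified, hypothetical sketch of that idea: a tiny PSO searches over (number of lags, hidden units) for an MLP one-step forecaster on a synthetic series. The swarm size, inertia and acceleration constants are illustrative defaults, not the paper's settings.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    series = np.sin(np.arange(300) / 8.0) + rng.normal(scale=0.1, size=300)

    def fitness(n_lags, hidden):
        # Validation MSE of a one-hidden-layer MLP using the given lag window.
        n_lags, hidden = max(1, int(round(n_lags))), max(1, int(round(hidden)))
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        y = series[n_lags:]
        split = int(0.8 * len(y))
        mlp = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500,
                           random_state=0).fit(X[:split], y[:split])
        return np.mean((mlp.predict(X[split:]) - y[split:]) ** 2)

    # Tiny PSO over (max lag, hidden units); positions are continuous and rounded.
    low, high = np.array([1.0, 2.0]), np.array([12.0, 20.0])
    pos = rng.uniform(low, high, size=(6, 2))
    vel = np.zeros_like(pos)
    best_p, best_f = pos.copy(), np.array([fitness(*p) for p in pos])
    g = best_p[best_f.argmin()].copy()
    for _ in range(10):
        r1, r2 = rng.random((2, 6, 2))
        vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, low, high)
        f = np.array([fitness(*p) for p in pos])
        better = f < best_f
        best_p[better], best_f[better] = pos[better], f[better]
        g = best_p[best_f.argmin()].copy()
    print("best (lags, hidden units):", np.round(g).astype(int), "val MSE:", best_f.min().round(4))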
Abstract:
PURPOSE: The aim of this longitudinal study was to investigate the value of uterine artery Doppler sonography during the second and third trimesters in the prediction of adverse pregnancy outcome in low-risk women. METHODS: From July 2011 to August 2012, a total of 205 singleton pregnant women presenting at our antenatal clinic were enrolled in this prospective study and were assessed for baseline demographic and obstetric data. They underwent ultrasound evaluation in the second and third trimesters, both of which included Doppler assessment of the bilateral uterine arteries to determine the pulsatility index (PI), the resistance index (RI), and the presence of an early diastolic notch. The endpoint of this study was the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of Doppler ultrasonography of the uterine artery for the prediction of adverse pregnancy outcomes, including preeclampsia, stillbirth, placental abruption and preterm labor. RESULTS: The mean age of the participants was 26.4±5.11 years. The uterine artery PI and RI values in both the second trimester (PI: 1.1±0.42 versus 1.53±0.59, p=0.002; RI: 0.55±0.09 versus 0.72±0.13, p<0.001) and the third trimester (PI: 0.77±0.31 versus 1.09±0.46, p<0.001; RI: 0.46±0.10 versus 0.60±0.14, p=0.010) were significantly higher in patients with adverse pregnancy outcomes than in normal women. The combination of PI and RI above the 95th percentile and the presence of a bilateral notch yielded a sensitivity and specificity of 36.1% and 97%, respectively, in the second trimester, and of 57.5% and 98.2% in the third trimester. CONCLUSIONS: Our study suggests that uterine artery Doppler may be a valuable tool for the prediction of a variety of adverse outcomes in the second and third trimesters.
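For readers unfamiliar with the screening statistics reported above, the snippet below computes sensitivity, specificity, PPV and NPV from a 2x2 confusion table. The counts are invented for illustration and are not the study's data.

    # Hypothetical counts: screen-positive/negative vs. adverse outcome present/absent.
    tp, fn, fp, tn = 21, 37, 4, 143

    sensitivity = tp / (tp + fn)   # true positives among all with the outcome
    specificity = tn / (tn + fp)   # true negatives among all without the outcome
    ppv = tp / (tp + fp)           # outcome rate among screen-positives
    npv = tn / (tn + fn)           # outcome-free rate among screen-negatives
    print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")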
Abstract:
Cyanobacteria are unicellular, non-nitrogen-fixing prokaryotes, which perform photosynthesis similarly to higher plants. The cyanobacterium Synechocystis sp. strain PCC 6803 is used as a model organism in photosynthesis research. My research described herein aims at understanding the function of the photosynthetic machinery and how it responds to changes in the environment. Detailed knowledge of the regulation of photosynthesis in cyanobacteria can be utilized for biotechnological purposes, for example in the harnessing of solar energy for biofuel production. In photosynthesis, iron participates in electron transfer. Here, we focused on iron transport in Synechocystis sp. strain PCC 6803 and particularly on the environmental regulation of the genes encoding the FutA2BC ferric iron transporter, which belongs to the ABC transporter family. A homology model built for the ATP-binding subunit FutC indicates that it has a functional ATP-binding site as well as conserved interactions with the channel-forming subunit FutB in the transporter complex. Polyamines are important for cell proliferation, differentiation and apoptosis in prokaryotic and eukaryotic cells. In plants, polyamines have special roles in the stress response and in plant survival. The polyamine metabolism of cyanobacteria in response to environmental stress is of interest in research on the stress tolerance of higher plants. In this thesis, the potD gene, encoding a polyamine transporter subunit from Synechocystis sp. strain PCC 6803, was characterized for the first time. A homology model built for the PotD protein indicated that it is capable of binding polyamines, with a preference for spermidine. Furthermore, in order to investigate the structural features underlying the substrate specificity, polyamines were docked into the binding site. Spermidine was positioned very similarly in Synechocystis PotD as in the template structure and had the most favorable interactions of the docked polyamines. Based on the homology model, experimental work was conducted, which confirmed the binding preference. Flavodiiron proteins (Flv) are enzymes that protect the cell against the toxicity of oxygen and/or nitric oxide by reduction. In this thesis, we present a novel type of photoprotection mechanism in cyanobacteria, mediated by the Flv2/Flv4 heterodimer. The constructed homology model of Flv2/Flv4 suggests a functional heterodimer capable of rapid electron transfer. The protein of unknown function Sll0218, encoded by the flv2-flv4 operon, is assumed to facilitate the interaction of the Flv2/Flv4 heterodimer and energy transfer between the phycobilisome and PSII. Flv2/Flv4 provides an alternative electron transfer pathway and functions as an electron sink in PSII electron transfer.
Abstract:
Two ultrasound-based fertility prediction methods were tested prior to embryo transfer (ET) and artificial insemination (AI) in cattle. Female bovines were subjected to estrous synchronization prior to ET and AI. Animals were scanned immediately before the ET and AI procedures to assess follicle and corpus luteum (CL) size and vascularity. In addition, inseminated animals were scanned again eleven days after insemination to assess CL size and vascularity. All data were compared with fertility, determined by pregnancy diagnosis 35 days after ovulation. Prior to ET, CL vascularity showed a positive correlation with fertility, and no pregnancy occurred in animals with less than 40% CL vascularity. Prior to AI, and also eleven days after AI, no relationship with fertility was seen for any of the parameters analyzed (follicle and CL size and vascularity); on the contrary, cows with CL vascularity greater than 70% exhibited lower fertility. In inseminated animals, follicle size and vascularity were positively related to CL size and vascularity, as shown by the finding that larger, more vascularized CLs originated from follicles that were themselves larger and more vascularized. This is the first time that ultrasound-based fertility prediction methods have been tested prior to ET and AI; they showed an application in ET, but not in AI, programs. Further studies, including hormone profile evaluation, are needed to strengthen these conclusions.
Abstract:
The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used Reliasoft's Weibull++ software, which is based on the Weibull method, but the Company has noticed that the analysis has not given correct results. This study was conducted by making Weibull simulations in different profit centers of the Case Company and then comparing actual costs against forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all Qlikview parameters of the profit centers. This study is constructive research, and solutions for the company's problems are worked out in this master's thesis. The theory part introduces quality issues, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major concerns in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting are also introduced, as are the Weibull method, its mathematical properties, and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecasted too-high costs when calculating the provision. Although some forecasted values for profit centers were lower than the actual values, the method works better for planning purposes. One reason is that quality improvement, or alternatively quality decline, does not show up in the results of the analysis in the short run. The other reason for the too-high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, the method was developed for homogeneous data. The most important finding is therefore that the analysis should be made at the product level, not the profit-center level, where the data is more homogeneous.
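A minimal sketch of the kind of Weibull warranty calculation the thesis evaluates: fit a two-parameter Weibull to failure times and convert the fitted distribution into an expected in-warranty failure fraction. The data, fleet size and repair cost are synthetic; real warranty data would also be right-censored, which Weibull++ handles but this toy fit ignores.

    import numpy as np
    from scipy.stats import weibull_min

    # Synthetic times-to-failure in months (shape 1.8, scale 30 are invented).
    failures = weibull_min.rvs(1.8, scale=30.0, size=150, random_state=3)

    shape, loc, scale = weibull_min.fit(failures, floc=0)   # fix location at zero
    units, unit_repair_cost = 5000, 120.0                   # hypothetical fleet and cost

    # Expected fraction of units failing inside a 24-month warranty window.
    p_fail = weibull_min.cdf(24, shape, loc=loc, scale=scale)
    print(f"shape={shape:.2f}  scale={scale:.1f} months  P(fail within 24 mo)={p_fail:.1%}")
    print(f"expected warranty cost: {units * p_fail * unit_repair_cost:,.0f}")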
Abstract:
Electrokinetics has emerged as a promising technique for in situ soil remediation, and it is especially valuable for its ability to work in low-permeability soil. In electrokinetic remediation, non-polar contaminants like most organic compounds are transported primarily by electroosmosis, so the process is effective only if the contaminants are soluble in the pore fluid. Enhancement is therefore needed to improve the mobility of these hydrophobic compounds, which tend to adsorb strongly to the soil. On the other hand, as a novel and rapidly growing field, the applications of ultrasound in environmental technology hold a promising future. Compared to conventional methods, ultrasonication can bring several benefits such as environmental friendliness (no toxic chemicals are used or produced), low cost, and compact instrumentation, and it can be applied on site. Ultrasonic energy applied to contaminated soils can increase desorption and mobilization of contaminants, as well as soil porosity and permeability, through the development of cavitation. This research investigated the coupling effect of combining these two techniques, electrokinetics and ultrasonication, for persistent organic pollutant removal from contaminated low-permeability clayey soil (with kaolin as a model medium). A preliminary study checked the feasibility of ultrasonic treatment of kaolin highly contaminated by persistent organic pollutants (POPs). Laboratory experiments were conducted under various conditions (moisture, frequency, power, duration, initial concentration) to examine the effects of these parameters on the treatment process. The experimental results showed that ultrasonication has the potential to remove POPs, although the removal efficiencies were not high for short treatment durations; the study suggests intermittent ultrasonication over a longer time as an effective means to increase removal efficiency. Experiments were then conducted to compare the performance of the electrokinetic process alone against electrokinetic processes combined with surfactant addition and, mainly, with ultrasonication, both in designed cylinders (with filter cloth separating the central part from the electrolyte parts) and in open pans. Combined electrokinetic and ultrasonic treatment did show a positive coupling effect compared to each single process alone, though the level of enhancement was modest. The assistance of ultrasound in electrokinetic remediation helps remove POPs from clayey soil by improving the mobility of hydrophobic organic compounds and degrading these contaminants through pyrolysis and oxidation; ultrasonication also sustains a higher current and increases electroosmotic flow. The initial contaminant concentration is an essential input parameter that can affect the removal effectiveness.
Abstract:
In studies of cognitive processing, the allocation of attention has been consistently linked to subtle, phasic adjustments in autonomic control. Both autonomic control of heart rate and control of the allocation of attention are known to decline with age. It is not known, however, whether characteristic individual differences in autonomic control and the ability to control attention are closely linked. To test this, a measure of parasympathetic function, vagal tone (VT), was computed from cardiac recordings taken from older and younger adults before and during performance of two attention-demanding tasks - the Eriksen visual flanker task and the source memory task. Both tasks elicited event-related potentials (ERPs) that accompany errors, i.e., error-related negativities (ERNs) and error positivities (Pe's). The ERN is a negative deflection in the ERP signal, time-locked to responses made on incorrect trials and likely generated in the anterior cingulate. It is followed immediately by the Pe, a broad, positive deflection which may reflect conscious awareness of having committed an error. Age-attenuation of ERN amplitude has previously been found in paradigms with simple stimulus-response mappings, such as the flanker task, but has rarely been examined in more complex, conceptual tasks, and until now there have been no reports of its being investigated in a source monitoring task. Age-attenuation of the ERN component was observed in both tasks. Results also indicated that the ERNs generated in these two tasks were generally comparable for young adults. For older adults, however, the ERN from the source monitoring task was not only shallower, but incorporated more frontal processing, apparently reflecting task demands. The error positivities elicited by the two tasks were not comparable, however, and age-attenuation of the Pe was seen only in the more perceptual flanker task. For younger adults, it was Pe scalp topography that seemed to reflect task demands, being maximal over central parietal areas in the flanker task, but over very frontal areas in the source monitoring task. With respect to vagal tone, in the flanker task, neither the number of errors nor ERP amplitudes were predicted by baseline or on-task vagal tone measures. However, in the more difficult source memory task, lower VT was marginally associated with greater numbers of source memory errors in the older group. Thus, for older adults, relatively low levels of parasympathetic control over cardiac response coincided with poorer source memory discrimination. In both groups, lower levels of baseline VT were associated with larger-amplitude ERNs and smaller-amplitude Pe's. Thus, low VT was associated in a conceptual task with a greater "emergency response" to errors and, at the same time, reduced awareness of having made them. The efficiency of an individual's complex cognitive processing was therefore associated with the flexibility of parasympathetic control of heart rate in response to a cognitively challenging task.
Abstract:
Objective: To evaluate the effectiveness of screening for gestational hypertension using maternal demographic characteristics, serum biomarkers, and uterine artery Doppler in the first and second trimesters of pregnancy, and to develop predictive models for gestational hypertension based on these parameters. Methods: This was a prospective cohort study of 598 nulliparous women. The uterine arteries were examined by transabdominal Doppler ultrasound between 11+0 and 13+6 weeks (first trimester) and between 17+0 and 21+6 weeks (second trimester). All serum samples for the measurement of several placental biomarkers were collected in the first trimester, and maternal demographic characteristics were recorded at the same time. ROC curves and predictive values were used to analyze the predictive power of the above parameters; different combinations and their logistic regression models were also analyzed. Results: Among the 598 women, there were 20 cases of preeclampsia (3.3%), 7 of early-onset preeclampsia (1.2%), 52 of gestational hypertension (8.7%), and 10 of gestational hypertension before 37 weeks (1.7%). The second-trimester uterine artery pulsatility index was the best predictor. In multivariate logistic regression analysis, the best predictive value in the first and second trimesters was obtained for the prediction of early-onset preeclampsia. Combined screening showed markedly better results than maternal parameters or Doppler alone. Conclusion: As a single marker, second-trimester uterine artery Doppler has the best predictive value for hypertension, preterm birth, and growth restriction. Combining maternal demographic characteristics, maternal serum biomarkers, and uterine Doppler improves screening effectiveness, particularly for preeclampsia requiring preterm delivery.
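A hedged sketch of the combined-screening idea described above: a logistic regression over maternal characteristics plus a second-trimester uterine artery pulsatility index, scored by ROC AUC. All features, coefficients and outcomes are simulated placeholders, not the study's cohort.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 598
    maternal = rng.normal(size=(n, 3))      # stand-ins for, e.g., age, BMI, blood pressure
    pi = rng.normal(size=n)                 # second-trimester uterine-artery PI (standardized)
    risk = 1 / (1 + np.exp(-(1.2 * pi + 0.4 * maternal[:, 0] - 3.0)))
    outcome = rng.random(n) < risk          # gestational hypertension (synthetic)

    X = np.column_stack([maternal, pi])
    model = LogisticRegression().fit(X, outcome)
    auc = roc_auc_score(outcome, model.predict_proba(X)[:, 1])
    print(f"combined-model ROC AUC: {auc:.2f}")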
Abstract:
Understanding how stem and progenitor cells choose between alternative cell fates is a major challenge in developmental biology. Efforts to tackle this problem have been hampered by the scarcity of markers that can be used to predict cell division outcomes. Here we present a computational method, based on algorithmic information theory, to analyze dynamic features of living cells over time. Using this method, we asked whether rat retinal progenitor cells (RPCs) display characteristic phenotypes before undergoing mitosis that could foretell their fate. We predicted whether RPCs will undergo a self-renewing or terminal division with 99% accuracy, or whether they will produce two photoreceptors or another combination of offspring with 87% accuracy. Our implementation can segment, track and generate predictions for 40 cells simultaneously on a standard computer at 5 min per frame. This method could be used to isolate cell populations with specific developmental potential, enabling previously impossible investigations.
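One simple, self-contained way to compute an algorithmic-information-style similarity, in the spirit of the method described above, is the normalized compression distance (NCD) approximated with a real compressor. The published method is more elaborate; the per-frame phenotype strings below are invented stand-ins for tracked cell features.

    import zlib

    def ncd(a: bytes, b: bytes) -> float:
        # Normalized compression distance: small for similar sequences, near 1 otherwise.
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        cab = len(zlib.compress(a + b))
        return (cab - min(ca, cb)) / max(ca, cb)

    cell_a = b"small,round,bright;" * 20                       # pre-mitotic history A
    cell_b = b"small,round,bright;" * 18 + b"large,flat,dim;" * 2
    cell_c = b"large,flat,dim;" * 20
    print("A vs B:", round(ncd(cell_a, cell_b), 3))            # similar histories
    print("A vs C:", round(ncd(cell_a, cell_c), 3))            # dissimilar histories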
Abstract:
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain, or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Between 10% and 65% of patients who have never experienced AF develop it after surgery, most often on the second or third postoperative day. AF is particularly common after mitral valve surgery, occurring in about 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer, costlier hospital stays. The mechanisms responsible for postoperative AF are not well understood, and identifying patients at high risk of AF after bypass surgery would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after coronary artery bypass surgery. The first objective of the research is to determine whether the recordings display characteristic changes before the onset of AF; the second is to identify predictive factors that single out the patients who will develop AF. The recordings were made by Dr. Pierre Pagé's team on 137 patients treated by coronary bypass surgery. Three unipolar electrodes were sutured onto the atrial epicardium to record continuously during the first 4 postoperative days. The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all recordings used later in the analyses; it was complemented with tools to form, label, and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. Preoperative clinical data were then analyzed to establish the preoperative risk of AF. Age, serum creatinine level, and a diagnosis of myocardial infarction proved to be the most important predictive factors. Although the level of preoperative risk could to some extent predict who would develop AF, it was not correlated with the time of onset of postoperative AF. For all patients with at least one AF episode lasting 10 minutes or more, the two hours preceding the first sustained AF were analyzed. This first sustained AF was always triggered by a PAA, most often originating in the left atrium. However, over the two pre-AF hours, the distribution of PAAs, and of the fraction arising from the left atrium, was broad and inhomogeneous across patients.
The number of PAAs, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart-rate variability (LF portion) showed significant changes in the last hour before the onset of AF. The final step was to compare patients with and without sustained AF in order to find factors discriminating the two groups. Five types of logistic regression models were compared; they had similar sensitivity, specificity and receiver-operating characteristics, and all predicted the patients without AF very poorly. A moving-average method was proposed to improve the discrimination, especially for patients without AF. Two models were retained, selected on criteria of robustness, accuracy, and applicability. About 70% of patients without AF and 75% of patients with AF were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, the pNN50, the atrioventricular conduction time, and the correlation between the latter and heart rate were the predictive variables common to these two models.
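A minimal sketch of the final modeling step described above: a logistic regression on per-window features followed by a moving average of the predicted probabilities to stabilize the per-window decision. The three features stand in for quantities like the PAA rate, the left-atrial PAA fraction and pNN50; all data are synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    # Hypothetical per-window features (one row per recording window).
    X = rng.normal(size=(400, 3))
    y = (X @ np.array([1.0, 0.8, -0.5]) + rng.normal(size=400)) > 0.5   # AF vs. no AF

    model = LogisticRegression().fit(X, y)
    prob = model.predict_proba(X)[:, 1]

    # Moving average over consecutive windows smooths the per-window probability.
    k = 5
    smoothed = np.convolve(prob, np.ones(k) / k, mode="valid")
    print("raw accuracy:", round((model.predict(X) == y).mean(), 2),
          "| first smoothed probabilities:", smoothed[:3].round(2))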
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of the area under the curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for a satisfactory estimation of the AUC. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling period, and particular attention should be paid to their adequate development and validation. It is also important to note that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS, an impact which, to date, had not been studied. This doctoral thesis addresses these issues in order to allow a precise and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use for AUC estimation, and the performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. Once the Pop-PK analysis was performed, a two-compartment structural model with a lag time was retained; however, the final model, notably with covariates, did not improve B-LSS performance compared with the structural models (without covariates). Furthermore, we showed that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual errors, which we called the "underlying AUC", than for the observed AUC calculated directly from the measured concentrations. Finally, our results proved that irregularity in blood sampling times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved.
We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS can perform similarly when based on nominal times, their tolerances to sampling-time errors can differ widely; in fact, adequate consideration of the impact of these errors can lead to more reliable selection and use of R-LSS. Through a thorough investigation of various aspects underlying limited sampling strategies, this thesis provides notable methodological improvements and proposes new avenues to ensure their reliable and informed use, while promoting their suitability for clinical practice.
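A schematic of a regression-based limited sampling strategy (R-LSS) of the kind developed in the thesis: simulate full concentration-time profiles, compute a reference trapezoidal AUC, and regress it on four early post-dose concentrations. The one-compartment pharmacokinetic parameters and sampling times are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0, 12, 49)              # hours post-dose
    ka, ke = 1.5, 0.25                      # invented absorption/elimination rates

    aucs, samples = [], []
    for _ in range(60):                     # 60 simulated patients
        dose = rng.uniform(80, 120)
        c = dose * (np.exp(-ke * t) - np.exp(-ka * t))
        c *= rng.lognormal(sigma=0.1, size=c.shape)             # residual variability
        aucs.append(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))  # trapezoidal AUC
        idx = [np.argmin(np.abs(t - s)) for s in (0.5, 1.0, 2.0, 4.0)]
        samples.append(c[idx])

    # R-LSS: least-squares regression of the reference AUC on the 4 concentrations.
    A = np.column_stack([np.ones(60), np.array(samples)])
    coef, *_ = np.linalg.lstsq(A, np.array(aucs), rcond=None)
    print("R-LSS coefficients (intercept + 4 time points):", coef.round(2))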
Abstract:
The thesis has covered various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. In the present thesis we mainly study the estimation and prediction of a signal-plus-noise model, where the signal and noise are assumed to follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are extensively discussed in the second chapter, where we also survey the existing theories and methods for infinite-variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment; here both the signal and the noise are stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, where we use higher-order Yule-Walker type estimation based on the auto-covariation function and illustrate the methods by simulation and by application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived. In the fifth chapter we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapters to signal processing. Frequency estimation of sinusoidal signals observed in a symmetric stable noisy environment is discussed in this context. We introduce a parametric spectrum analysis and frequency estimate using the power transfer function, whose estimate is obtained with the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
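For orientation, the snippet below shows the classical finite-variance Yule-Walker estimate of AR(p) coefficients; the thesis replaces autocovariances with auto-covariation functions (and adds higher-order equations) to cover the infinite-variance stable case.

    import numpy as np

    rng = np.random.default_rng(7)
    n, phi = 2000, np.array([0.6, -0.3])        # true AR(2) coefficients
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

    def acov(x, lag):
        # Sample autocovariance at the given lag (biased normalization by n).
        xc = x - x.mean()
        return np.dot(xc[: len(x) - lag], xc[lag:]) / len(x)

    p = 2
    r = np.array([acov(x, k) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])   # Toeplitz matrix
    phi_hat = np.linalg.solve(R, r[1:])         # solve the Yule-Walker system
    print("estimated AR coefficients:", phi_hat.round(3))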
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
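A toy rendering of the sieve-bootstrap idea: fit a long autoregression to the squared returns (a linear, Box-Jenkins-style representation of a GARCH process), resample its residuals, and read a prediction interval off the bootstrap distribution of one-step forecasts. The simulated series, lag order and interval level are illustrative choices, not the paper's specification.

    import numpy as np

    rng = np.random.default_rng(8)
    # Simulate a GARCH(1,1)-like return series (parameters are illustrative).
    n, omega, a, b = 1500, 0.05, 0.1, 0.85
    sig2, r = np.empty(n), np.empty(n)
    sig2[0] = omega / (1 - a - b)
    r[0] = rng.normal() * np.sqrt(sig2[0])
    for t in range(1, n):
        sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]
        r[t] = rng.normal() * np.sqrt(sig2[t])

    # Sieve step: long AR(p) on squared returns via least squares.
    p = 10
    y = r ** 2
    X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
    A = np.column_stack([np.ones(n - p), X])
    beta, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    resid = y[p:] - A @ beta

    # Bootstrap one-step-ahead forecasts of the squared return.
    last = np.concatenate([[1.0], y[-1 : -p - 1 : -1]])
    boot = last @ beta + rng.choice(resid, size=2000, replace=True)
    lo, hi = np.percentile(np.clip(boot, 0, None), [5, 95])
    print(f"90% one-step interval for the squared return: [{lo:.3f}, {hi:.3f}]")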
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or more specific tasks. Children with learning disabilities are neither slow learners nor intellectually impaired; the disorder simply makes it difficult for a child to learn as quickly, or in the same way, as a child who is not affected. An affected child can have normal or above-average intelligence, but may have difficulty paying attention, with reading or letter recognition, or with mathematics. Many children who have learning disabilities are in fact more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them, and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning and data mining techniques, informed by the knowledge obtained from clinical information, is proposed. The basic idea of the developed tool is to increase the accuracy of learning disability assessment and to reduce the time it requires. Different statistical machine learning techniques in data mining are used in the study. Identifying the important parameters of LD prediction with data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom are also objectives of this research work. The developed tool has many advantages over the traditional checklist-based methods of determining learning disabilities. To improve the performance of various classifiers, we developed preprocessing methods for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and here too the importance of preprocessing is studied. A Graphical User Interface (GUI) is designed for the integrated knowledge-based tool, which predicts LD as well as its degree; the tool stores the details of the children in a student database and retrieves their LD reports as required. The present study demonstrates the effectiveness of the tool developed with various machine learning techniques; it identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several contributions in technical, general and social areas. The results are beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and arrange proper treatment or counseling at the right time, avoiding academic and social losses.
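A minimal sketch of the classification step such a tool might use: a decision tree trained on binary symptom-checklist features, whose feature importances play the role of the "relative significance" of symptoms. The features and records are invented placeholders, not the thesis's clinical data.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(9)
    # Columns stand in for checklist items, e.g. difficulty with reading,
    # attention, letter recognition, mathematics (1 = symptom present).
    X = rng.integers(0, 2, size=(300, 4))
    y = ((X[:, 0] & X[:, 2]) | X[:, 1]).astype(int)    # synthetic LD label

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print("symptom importances:", clf.feature_importances_.round(2))
    print("training accuracy:", round(clf.score(X, y), 2))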