951 results for Least square methods
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
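The skewer-projection step described above is compact enough to sketch directly. The following is a minimal NumPy illustration under stated assumptions (Gaussian random skewers, counting the two extremes per skewer), not the published PPI implementation:

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel-purity-index sketch. X has shape (n_pixels, n_bands).
    A pixel's score counts how often it is an extreme of the
    projection of all pixels onto a random skewer direction."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    scores = np.zeros(n_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=n_bands)   # random direction
        proj = X @ skewer
        scores[proj.argmax()] += 1          # extremes for this skewer
        scores[proj.argmin()] += 1
    return scores
```

The pixels with the highest scores are taken as the purest ones, mirroring the cumulative account described above.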
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
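A highly simplified sketch of this iterative orthogonal-projection idea (not the full VCA algorithm, which also handles dimensionality reduction and noise) might look like the following, assuming pure pixels are present in the data:

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Iterative endmember-extraction sketch. X is (n_pixels, n_bands).
    Each step projects the data onto a direction orthogonal to the
    subspace spanned by the endmembers found so far and keeps the pixel
    at the extreme of that projection."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    indices = []
    E = np.zeros((n_bands, 0))          # endmember signatures as columns
    for _ in range(p):
        w = rng.normal(size=n_bands)    # random direction
        if E.shape[1] > 0:              # remove components lying in span(E)
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        k = int(np.abs(X @ w).argmax()) # extreme of the projection
        indices.append(k)
        E = np.column_stack([E, X[k]])
    return indices
```

Because the extreme of a linear projection over a simplex is always attained at a vertex, each iteration picks a pure pixel when one is present.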
Abstract:
Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied, aiming to discriminate eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with a simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% leave-one-out cross-validation correct classification, improving on the discrimination capability of the individual use of sensor profiles or sensory attributes (70% and 57% leave-one-out correct classifications, respectively). Thus, human sensory evaluation and electronic tongue analysis may be used as complementary tools allowing successful monovarietal olive oil discrimination.
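As an illustration of the pipeline described (low-level data fusion by concatenating the two signal blocks, then linear discriminant analysis scored by leave-one-out cross-validation), here is a minimal NumPy sketch; the data shapes and the omission of the simulated annealing variable-selection step are simplifications:

```python
import numpy as np

def lda_fit_predict(Xtr, ytr, Xte):
    """Minimal linear discriminant classifier: class means plus a pooled
    within-class covariance, scored by linear discriminant functions."""
    classes = np.unique(ytr)
    means = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    n, d = Xtr.shape
    S = np.zeros((d, d))
    for c in classes:
        Xc = Xtr[ytr == c] - Xtr[ytr == c].mean(axis=0)
        S += Xc.T @ Xc
    S /= n - len(classes)                       # pooled covariance
    Sinv = np.linalg.pinv(S)
    scores = Xte @ Sinv @ means.T - 0.5 * np.einsum('ij,jk,ik->i', means, Sinv, means)
    return classes[np.argmax(scores, axis=1)]

def loo_accuracy(X, y):
    """Leave-one-out cross-validated classification accuracy."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        hits += int(lda_fit_predict(X[mask], y[mask], X[i:i + 1])[0] == y[i])
    return hits / len(y)
```

Low-level fusion here is simply `np.hstack([tongue_signals, sensory_attributes])` before fitting.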
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for computed tomography (CT) scans with the partial least squares regression (PLS) technique in order to predict the LMP of the carcass and of the different cuts, and to compare two variable-selection methodologies (variable importance for projection, VIP, and stepwise) for choosing the variables to be included in the prediction equation. The error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and selection based on the VIP value was 0.82%, and for stepwise selection it was 0.83%. Predicting the LMP by scanning only the ham gave an RMSEPCV of 0.97%; if both the ham and the loin were scanned, the RMSEPCV was 0.90%. The results indicate that for CT data both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
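To make the VIP-based selection concrete, the following sketch implements PLS1 (NIPALS) together with the usual VIP score, assuming mean-centered data; it is an illustration, not the calibration software used in the study. Variables with VIP above a threshold (commonly 1) would be retained:

```python
import numpy as np

def pls1_vip(X, y, n_comp):
    """PLS1 via NIPALS plus VIP scores. X (n, p) and y (n,) are assumed
    mean-centered. Returns regression coefficients and one VIP score
    per variable."""
    Xc, yc = X.copy(), y.astype(float).copy()
    n, p = X.shape
    W, T, P, q = [], [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        pa = Xc.T @ t / (t @ t)
        qa = yc @ t / (t @ t)
        Xc -= np.outer(t, pa)                      # deflate X
        yc -= qa * t                               # deflate y
        W.append(w); T.append(t); P.append(pa); q.append(qa)
    W, T, P, q = map(np.array, (W, T, P, q))       # shapes (A,p),(A,n),(A,p),(A,)
    B = W.T @ np.linalg.solve(P @ W.T, q)          # regression coefficients
    ssy = q**2 * np.einsum('an,an->a', T, T)       # y-variance explained per component
    vip = np.sqrt(p * (ssy @ W**2) / ssy.sum())    # rows of W are unit norm
    return B, vip
```

By construction the VIP scores satisfy mean(VIP²) = 1, which is why VIP ≈ 1 is the natural cutoff.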
Abstract:
BACKGROUND: Pathogen reduction of platelets (PRT-PLTs) using riboflavin and ultraviolet light treatment has undergone Phase 1 and 2 studies examining efficacy and safety. This randomized controlled clinical trial (RCT) assessed the efficacy and safety of PRT-PLTs using the 1-hour corrected count increment (CCI(1hour)) as the primary outcome. STUDY DESIGN AND METHODS: A noninferiority RCT was performed where patients with chemotherapy-induced thrombocytopenia (six centers) were randomly allocated to receive PRT-PLTs (Mirasol PRT, CaridianBCT Biotechnologies) or reference platelet (PLT) products. The treatment period was 28 days followed by a 28-day follow-up (safety) period. The primary outcome was the CCI(1hour) determined using up to the first eight on-protocol PLT transfusions given during the treatment period. RESULTS: A total of 118 patients were randomly assigned (60 to PRT-PLTs; 58 to reference). Four patients per group did not require PLT transfusions, leaving 110 patients in the analysis (56 PRT-PLTs; 54 reference). A total of 541 on-protocol PLT transfusions were given (303 PRT-PLTs; 238 reference). The least-squares mean CCI was 11,725 (standard error [SE], 1.140) for PRT-PLTs and 16,939 (SE, 1.149) for the reference group (difference, -5214; 95% confidence interval, -7542 to -2887; p<0.0001 for a test of the null hypothesis of no difference between the two groups). CONCLUSION: The study failed to show noninferiority of PRT-PLTs based on predefined CCI criteria. PLT and red blood cell utilization in the two groups was not significantly different, suggesting that the slightly lower CCIs (PRT-PLTs) did not increase blood product utilization. Safety data showed similar findings in the two groups. Further studies are required to determine if the lower CCI observed with PRT-PLTs translates into an increased risk of bleeding.
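For reference, the corrected count increment used as the primary outcome is conventionally computed as the platelet count increment scaled by body surface area and platelet dose; a one-line sketch of the standard formula (the exact convention in the trial protocol may differ):

```python
def corrected_count_increment(pre_count, post_count, bsa_m2, dose_1e11):
    """CCI = (post - pre platelet count, per microliter) * body surface
    area (m^2) / platelet dose (in units of 1e11 platelets)."""
    return (post_count - pre_count) * bsa_m2 / dose_1e11

# e.g. a rise of 30,000/uL in a 1.8 m^2 patient given 3e11 platelets
# gives CCI = 18,000
```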
Abstract:
Background/Purpose: The primary treatment goals for gouty arthritis (GA) are rapid relief of pain and inflammation during acute attacks, and long-term hyperuricemia management. A post-hoc analysis of 2 pivotal trials was performed to assess the efficacy and safety of canakinumab (CAN), a fully human monoclonal anti-IL-1β antibody, vs triamcinolone acetonide (TA) in GA patients unable to use NSAIDs and colchicine, and who were on stable urate-lowering therapy (ULT) or unable to use ULT. Methods: In these 12-week, randomized, multicenter, double-blind, double-dummy, active-controlled studies (β-RELIEVED and β-RELIEVED II), patients had to have frequent attacks (≥3 attacks in the previous year) meeting preliminary GA ACR 1977 criteria, and were unresponsive, intolerant, or contraindicated to NSAIDs and/or colchicine, and if on ULT, ULT was stable. Patients were randomized during an acute attack to single-dose CAN 150 mg s.c. or TA 40 mg i.m. and were redosed "on demand" for each new attack. Patients completing the core studies were enrolled into blinded 12-week extension studies to further investigate on-demand use of CAN vs TA for new attacks. The subpopulation selected for this post-hoc analysis was (a) unable to use NSAIDs and colchicine due to contraindication, intolerance, or lack of efficacy for these drugs, and (b) currently on ULT, or with contraindication to or previous failure of ULT, as determined by investigators. The subpopulation comprised 101 patients (51 CAN; 50 TA) out of 454 total. Results: Several co-morbidities, including hypertension (56%), obesity (56%), diabetes (18%), and ischemic heart disease (13%), were reported in 90% of this subpopulation. Pain intensity (VAS 100 mm scale) was comparable between the CAN and TA treatment groups at baseline (least-squares [LS] mean 74.6 and 74.4 mm, respectively).
A significantly lower pain score was reported with CAN vs TA at 72 hours post dose (1st co-primary endpoint on baseline flare; LS mean, 23.5 vs 33.6 mm; difference −10.2 mm; 95% CI, −19.9 to −0.4; P=0.0208 [1-sided]). CAN significantly reduced the risk of a first new attack by 61% vs TA (HR 0.39; 95% CI, 0.17–0.91; P=0.0151 [1-sided]) for the first 12 weeks (2nd co-primary endpoint), and by 61% vs TA (HR 0.39; 95% CI, 0.19–0.79; P=0.0047 [1-sided]) over 24 weeks. Serum urate levels increased for CAN vs TA, with mean change from baseline reaching a maximum of +0.7 ± 2.0 vs −0.1 ± 1.8 mg/dL at 8 weeks, and +0.3 ± 2.0 vs −0.2 ± 1.4 mg/dL at end of study (all had a GA attack at baseline). Adverse events (AEs) were reported in 33 (66%) CAN and 24 (47.1%) TA patients. Infections and infestations were the most common AEs, reported in 10 (20%) and 5 (10%) patients treated with CAN and TA, respectively. The incidence of SAEs was comparable between the CAN (gastritis, gastroenteritis, chronic renal failure) and TA (aortic valve incompetence, cardiomyopathy, aortic stenosis, diarrhoea, nausea, vomiting, bicuspid aortic valve) groups (2 [4.0%] vs 2 [3.9%]). Conclusion: CAN provided superior pain relief and reduced the risk of new attacks in highly comorbid GA patients unable to use NSAIDs and colchicine, and who were currently on stable ULT or unable to use ULT. The safety profile in this post-hoc subpopulation was consistent with the overall β-RELIEVED and β-RELIEVED II population.
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P >.1). Both deconvolution methods were equivalent (P >.1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
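The singular-value-decomposition deconvolution mentioned in the methods can be sketched generically: the arterial input function (AIF) is arranged as a lower-triangular convolution matrix, small singular values are truncated to stabilize the inverse, and the peak of the deconvolved residue curve is proportional to CBF. This is an illustration with assumed curves and parameters, not the study's processing chain:

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, rel_thresh=0.2):
    """Deconvolve tissue(t) = dt * (aif * k)(t) by truncated SVD.
    Returns k(t) = CBF * R(t); CBF is then estimated as k.max()."""
    n = len(aif)
    # lower-triangular (causal) convolution matrix built from the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > rel_thresh * s[0]      # truncate small singular values
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ tissue))
```

The relative threshold trades noise amplification against bias; with noisy clinical data a value around 0.2 is a common starting point, while noise-free data tolerates a much smaller cutoff.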
Abstract:
In this thesis, different parameters influencing critical flux in protein ultrafiltration and membrane fouling were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and the basic theory of partial least squares (PLS) analysis are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes, and laboratory-scale apparatuses. Fouling was studied by flux, streaming potential, and FTIR-ATR measurements. Critical flux was evaluated by different kinds of stepwise procedures and by both constant-pressure and constant-flux methods. The critical flux was affected by transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity, and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric point of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but not above this region. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling in all charge conditions and, furthermore, especially at the beginning of the experiment, even at very low transmembrane pressures. Fouling of these membranes was thought to be due to protein adsorption by hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling. They became fouled at higher transmembrane pressures because of pore blocking. In this thesis some new aspects of critical flux are presented that are important for ultrafiltration and fractionation of proteins.
Abstract:
OBJECTIVE: The objective of this study was to compare posttreatment seizure severity in a phase III clinical trial of eslicarbazepine acetate (ESL) as adjunctive treatment of refractory partial-onset seizures. METHODS: The Seizure Severity Questionnaire (SSQ) was administered at baseline and posttreatment. The SSQ total score (TS) and component scores (frequency and helpfulness of warning signs before seizures [BS]; severity and bothersomeness of ictal movement and altered consciousness during seizures [DS]; cognitive, emotional, and physical aspects of postictal recovery after seizures [AS]; and overall severity and bothersomeness [SB]) were calculated for the per-protocol population. Analysis of covariance, adjusted for baseline scores, estimated differences in posttreatment least-squares means between treatment arms. RESULTS: Out of 547 per-protocol patients, 441 had valid SSQ TS both at baseline and posttreatment. The mean posttreatment TS for ESL 1200 mg/day was significantly lower than that for placebo (2.68 vs 3.20, p<0.001), exceeding the minimal clinically important difference (MCID: 0.48). Mean DS, AS, and SB were also significantly lower with ESL 1200 mg/day; differences in AS and SB exceeded the MCIDs. The TS, DS, AS, and SB were lower for ESL 800 mg/day than for placebo; only SB was significant (p=0.013). For both ESL arms combined versus placebo, mean scores differed significantly for TS (p=0.006), DS (p=0.031), and SB (p=0.001). CONCLUSIONS: Therapeutic ESL doses led to clinically meaningful, dose-dependent reductions in seizure severity, as measured by SSQ scores. CLASSIFICATION OF EVIDENCE: This study presents Class I evidence that adjunctive ESL (800 and 1200 mg/day) led to clinically meaningful, dose-dependent seizure severity reductions, measured by the SSQ.
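The baseline-adjusted analysis used here (analysis of covariance, with least-squares means compared at the overall mean baseline score) can be sketched for two arms as follows; variable names and the two-arm restriction are assumptions for illustration:

```python
import numpy as np

def ancova_ls_means(baseline, post, group):
    """ANCOVA sketch for two arms: post ~ intercept + group + baseline.
    Least-squares means are the fitted values for each arm evaluated
    at the overall mean baseline score; group is coded 0/1."""
    X = np.column_stack([np.ones(len(baseline)), group, baseline])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    b0, b_group, b_base = beta
    xbar = baseline.mean()
    lsm_ref = b0 + b_base * xbar             # reference arm (group = 0)
    lsm_trt = b0 + b_group + b_base * xbar   # treatment arm (group = 1)
    return lsm_ref, lsm_trt, b_group
```

The difference between the two least-squares means equals the group coefficient, i.e. the baseline-adjusted treatment effect.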
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain.
Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
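The regularized least-squares core of such methods is compact. A minimal kernel RLS sketch, with an assumed RBF kernel, that scores items so that sorting the predictions induces a ranking (the thesis's specific preference-learning algorithms are not reproduced here):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_rls_fit(X, y, lam=1.0, gamma=1.0):
    """Kernel regularized least squares: alpha = (K + lam*I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_rls_predict(Xtr, alpha, Xte, gamma=1.0):
    """Scores for new items; np.argsort of the scores gives a ranking."""
    return rbf_kernel(Xte, Xtr, gamma) @ alpha
```

The closed-form solve is what gives RLS its attractive computational properties, e.g. efficient leave-one-out estimates and regularization-path computations from a single eigendecomposition of K.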
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics were studied in order to model the nonlinear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the nonlinear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C.
Rosin oil is formed during the decarboxylation reaction step causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses in a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
Abstract:
The goal of this work is the development and validation of an analytical method for fast quantification of sibutramine in pharmaceutical formulations, using diffuse reflectance infrared spectroscopy and partial least squares regression. The multivariate model was elaborated from 22 mixtures containing sibutramine and excipients (lactose, microcrystalline cellulose, colloidal silicon dioxide, and magnesium stearate), using fragmented (750-1150/1350-1500/1850-1950/2600-2900 cm-1) and smoothed spectral data. Using 10 latent variables, excellent predictive capacity was observed in the calibration (n=20, RMSEC=0.004, R=0.999) and external validation (n=5, RMSEC=9.36, R=0.999) phases. In the analysis of synthetic mixtures the precision (SD=3.47%) was compatible with the rules of the Agência Nacional de Vigilância Sanitária (ANVISA-Brazil). In the analysis of commercial drugs, good agreement was observed between the spectroscopic and chromatographic methods.
Abstract:
Multivariate models were developed using an Artificial Neural Network (ANN) and Least Squares Support Vector Machines (LS-SVM) for estimating the lignin syringyl/guaiacyl ratio and the contents of cellulose, hemicelluloses, and lignin in eucalyptus wood by pyrolysis coupled with gas chromatography and mass spectrometry (Py-GC/MS). The results obtained by the two calibration methods were in agreement with those of the reference methods. However, a comparison indicated that the LS-SVM model presented better predictive capacity for the cellulose and lignin contents, while the ANN model was more adequate for estimating the hemicelluloses content and the lignin syringyl/guaiacyl ratio.
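For illustration, LS-SVM regression replaces the SVM's inequality constraints with equalities, so training reduces to solving a single linear system. A NumPy sketch with an assumed RBF kernel (the study's actual calibration setup is not reproduced):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, width=1.0):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha]
    = [0; y] for the bias b and dual coefficients alpha."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * width**2))            # RBF kernel matrix
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # ridge term from the loss
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias, dual coefficients

def lssvm_predict(Xtr, b, alpha, Xte, width=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2)) @ alpha + b
```

Because every training point becomes a support vector, LS-SVM trades the sparsity of the classical SVM for this closed-form training step.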
Abstract:
Acetylation was performed to reduce the polarity of wood and increase its compatibility with polymer matrices for the production of composites. These reactions were performed first as a function of the acetic acid and anhydride concentration in a mixture catalyzed by sulfuric acid. A concentration of 50%/50% (v/v) of acetic acid and anhydride was found to produce the highest conversion rate between the functional groups. After these reactions, the kinetics were investigated by varying times and temperatures using a 3² factorial design, which showed that time was the most relevant parameter in determining the conversion of hydroxyl into carbonyl groups.
Abstract:
The thesis has covered various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlighted the limitations of classical time series analysis tools, explored some generalized tools, and organized the approach parallel to the classical setup. In the present thesis we mainly studied the estimation and prediction of signal-plus-noise models. Here we assumed that the signal and noise follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are extensively discussed in the second chapter. We also survey the existing theories and methods corresponding to infinite-variance models in the same chapter. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations for estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite-variance innovations. We derived semi-infinite, doubly infinite, and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we used higher-order Yule-Walker-type estimation based on the auto-covariation function and exemplify the methods by simulation and by application to sea surface temperature data.
We increased the number of Yule-Walker equations and proposed an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived. In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and frequency estimate using the power transfer function. The estimate of the power transfer function is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal. We use a modified version of the proposed information criterion for this purpose.
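The classical (finite-variance) analogue of this overdetermined Yule-Walker idea is easy to sketch: take more moment equations than parameters and solve the resulting system by ordinary least squares. The stable-innovations version in the thesis would replace autocovariances with auto-covariations, which this illustration does not do:

```python
import numpy as np

def acvf(x, max_lag):
    """Sample autocovariances r(0), ..., r(max_lag)."""
    x = x - x.mean()
    n = len(x)
    return np.array([(x[:n - k] * x[k:]).sum() / n for k in range(max_lag + 1)])

def ols_yule_walker(x, p, m):
    """Overdetermined Yule-Walker for an AR(p): use m >= p equations
    r(k) = sum_j a_j r(k - j), k = 1..m, solved by ordinary least squares."""
    r = acvf(x, m)
    R = np.array([[r[abs(k - j)] for j in range(1, p + 1)]
                  for k in range(1, m + 1)])
    a, *_ = np.linalg.lstsq(R, r[1:m + 1], rcond=None)
    return a
```

With m = p this reduces to the standard Yule-Walker equations; the extra equations generally improve robustness of the parameter estimates.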
Abstract:
This thesis presents the Kou model, a double-exponential jump-diffusion, for the valuation of European call options on oil prices as the underlying asset. Numerical computations are presented for the formulation of analytical expressions, which are solved by implementing efficient numerical algorithms that lead to the theoretical prices of the evaluated options. The advantages of methods such as the Fourier transform are then discussed, given the relative simplicity of their programming compared with other numerical techniques. This method is used together with a nonparametric regularized calibration exercise which, by minimizing the squared errors subject to a penalty based on the concept of relative entropy, yields prices for call options on oil, with the model better able to assign fair prices relative to those traded in the market.
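Under Kou's model, the log-price adds a compensated double-exponential jump component to the usual diffusion. As a plain sanity check alongside the Fourier-transform pricing described, here is a Monte Carlo sketch of a European call under assumed parameters (the thesis's Fourier and entropy-regularized calibration machinery is not reproduced):

```python
import numpy as np

def kou_call_mc(S0, K, T, r, sigma, lam, p_up, eta1, eta2,
                n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under Kou's double-exponential
    jump-diffusion. Upward log-jumps are Exp(1/eta1) with probability p_up,
    downward jumps Exp(1/eta2) otherwise; the drift is compensated so the
    discounted price is a martingale. Requires eta1 > 1."""
    rng = np.random.default_rng(seed)
    # E[e^Y] - 1 for a double-exponential jump Y (martingale compensator)
    kappa = p_up * eta1 / (eta1 - 1) + (1 - p_up) * eta2 / (eta2 + 1) - 1
    mu = r - 0.5 * sigma**2 - lam * kappa
    Z = rng.normal(size=n_paths)
    N = rng.poisson(lam * T, size=n_paths)      # number of jumps per path
    J = np.zeros(n_paths)
    for i in np.nonzero(N)[0]:                  # total log-jump per path
        up = rng.random(N[i]) < p_up
        J[i] = np.where(up, rng.exponential(1 / eta1, N[i]),
                        -rng.exponential(1 / eta2, N[i])).sum()
    ST = S0 * np.exp(mu * T + sigma * np.sqrt(T) * Z + J)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

With the jump intensity set to zero the model collapses to Black-Scholes, which provides a convenient consistency check on any implementation.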