23 results for estimation methods
Abstract:
Interest rate risk is one of the major financial risks faced by banks due to the very nature of the banking business. The most common approach in the literature has been to estimate the impact of interest rate risk on banks using a simple linear regression model. However, the relationship between interest rate changes and bank stock returns need not be exclusively linear. This article provides a comprehensive analysis of the interest rate exposure of the Spanish banking industry employing both parametric and nonparametric estimation methods. Its main contribution is to use, for the first time in the context of banks’ interest rate risk, a nonparametric regression technique that avoids the assumption of a specific functional form. On the one hand, it is found that the Spanish banking sector exhibits a remarkable degree of interest rate exposure, although the impact of interest rate changes on bank stock returns has significantly declined following the introduction of the euro. Further, a pattern of positive exposure emerges during the post-euro period. On the other hand, the results of the nonparametric model support extending the conventional linear model in an attempt to gain greater insight into the actual degree of exposure.
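As a point of reference, the conventional linear specification referred to above is typically a two-index model of the form (a sketch; the exact variables and indices used in the article are not given in the abstract):

R_{it} = \alpha_i + \beta_i R_{mt} + \gamma_i \Delta I_t + \varepsilon_{it}

where R_{it} is the bank (or bank portfolio) stock return, R_{mt} the market return, \Delta I_t the change in the interest rate, and \gamma_i the interest rate exposure coefficient. The nonparametric alternative replaces the term \gamma_i \Delta I_t with an unspecified smooth function f(\Delta I_t) estimated from the data.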
Abstract:
As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing is collected at a few translational degrees of freedom (DOF), due to applied forces, using hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modeling, one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer, and the demand for efficient techniques remains. In this work, we propose a technique for expanding measured frequency response functions (FRF) over the entire set of DOFs. This technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those that are due to applied moments.
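For context, the classical Kidder expansion on which the modified method builds can be sketched as follows, assuming the finite element mass and stiffness matrices M and K are partitioned into measured (m) and unmeasured (s) DOFs (the abstract does not give the authors' exact formulation):

\phi_s = -\left(K_{ss} - \omega_r^2 M_{ss}\right)^{-1} \left(K_{sm} - \omega_r^2 M_{sm}\right) \phi_m

i.e., the unmeasured partition of each mode shape (or, in an FRF-based variant, of each measured response vector at frequency \omega_r) is recovered from the measured partition using the analytical model.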
Abstract:
In hyperspectral imagery, a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used for the estimation of the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image by an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
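In the linear mixing model underlying this framework, each observed pixel spectrum can be written as

y = M a + n, \quad a \succeq 0, \quad \mathbf{1}^{T} a = 1,

where y is the L-band pixel spectrum, M is the L x p matrix whose columns are the p endmember signatures, a is the vector of abundance fractions, and n is noise. SISAL estimates M (and hence a) by searching for a minimum-volume simplex enclosing the data, while the MDL-based step selects p. The notation here is a generic sketch, since the abstract does not fix symbols.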
Abstract:
This paper addresses the estimation of object boundaries from a set of 3D points. An extension of the constrained clustering algorithm developed by Abrantes and Marques in the context of edge linking is presented. The object surface is approximated using rectangular meshes and simplex nets. Centroid-based forces are used for attracting the model nodes towards the data, using competitive learning methods. It is shown that competitive learning improves the model performance in the presence of concavities and makes it possible to discriminate close surfaces. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound images).
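The centroid-based, competitive-learning attraction mentioned above can be illustrated with a minimal sketch. This is a generic winner-take-all centroid update in Python, not the authors' constrained clustering algorithm; the names and parameters are illustrative:

```python
import numpy as np

def competitive_update(nodes, data, lr=0.1):
    """One pass of a generic competitive-learning update:
    each data point attracts only its nearest (winning) node."""
    nodes = nodes.copy()
    for x in data:
        winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
        nodes[winner] += lr * (x - nodes[winner])  # pull the winner toward the point
    return nodes

# Toy usage: three model nodes adapting to a cloud of 3D points.
rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))
nodes = rng.normal(size=(3, 3))
for _ in range(20):
    nodes = competitive_update(nodes, points)
```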
Abstract:
This paper is an elaboration of the simplex identification via split augmented Lagrangian (SISAL) algorithm (Bioucas-Dias, 2009) to blindly unmix hyperspectral data. SISAL is a linear hyperspectral unmixing method of the minimum volume class. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. With respect to SISAL, we introduce a dimensionality estimation method based on the minimum description length (MDL) principle. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
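Generically, an MDL-based order selection of this kind picks the number of endmembers p that minimizes a description length of the form

\hat{p} = \arg\min_p \left[ -\log \mathcal{L}(\mathbf{Y} \mid \hat{\theta}_p) + \tfrac{\kappa(p)}{2} \log N \right],

where \mathcal{L} is the data likelihood under the p-endmember model, \kappa(p) is the number of free parameters, and N is the number of pixels. This is the standard form of the MDL criterion; the paper's specific likelihood and penalty terms are not spelled out in the abstract.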
Abstract:
In this work, 14 primary schools in the city of Lisbon, Portugal, answered a questionnaire of the ISAAC (International Study of Asthma and Allergies in Childhood) Program in 2009/2010. The questionnaire contained questions to identify children with respiratory diseases (wheeze, asthma and rhinitis). Total particulate matter (TPM) was passively collected inside two classrooms of each of the 14 primary schools. Two types of filter matrices were used to collect TPM: Millipore (Isopore™) polycarbonate and quartz. Three campaigns were selected for the measurement of TPM: spring, autumn and winter. The main difference between the two types of filters was that the mass of collected particles was higher on quartz filters than on polycarbonate filters, even though their correlation is excellent. The highest TPM depositions occurred between October 2009 and March 2010, when related to the proportion of rhinitis. Rhinitis was found to be related to TPM when the data were grouped seasonally and averaged over all the schools. For the 2006/2007 data, the seasonal variation was found to be related to outdoor particle deposition (below 10 μm).
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each of the industries in the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard and Poor's data set. Swap spreads are evaluated by Monte Carlo simulations. Along with an actuarially fair spread, a least-squares spread is considered.
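How a Monte Carlo evaluation of an actuarially fair spread can work may be sketched as follows. The snippet is a deliberately simplified single-chain illustration in Python, not the paper's coupled-Markov-chain model: one hypothetical rating transition matrix is shared by all names, premiums are paid annually, and accrual effects are ignored.

```python
import numpy as np

# Hypothetical annual rating transition matrix:
# state 0 = investment grade, 1 = speculative grade, 2 = default (absorbing).
P = np.array([[0.95, 0.04, 0.01],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])

def default_year(start, horizon, rng):
    """Year of default (1-based) or None if the name survives the horizon."""
    state = start
    for year in range(1, horizon + 1):
        state = rng.choice(3, p=P[state])
        if state == 2:
            return year
    return None

def fair_spread(n_names=125, horizon=5, recovery=0.4, rate=0.03,
                n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    disc = np.exp(-rate * np.arange(1, horizon + 1))   # discount factors per premium date
    pv_loss = pv_annuity = 0.0
    for _ in range(n_sims):
        alive = np.full(horizon, float(n_names))       # surviving notional at each premium date
        loss = 0.0
        for _name in range(n_names):
            tau = default_year(0, horizon, rng)
            if tau is not None:
                loss += (1.0 - recovery) * disc[tau - 1]
                alive[tau - 1:] -= 1.0                 # the name stops paying premium from year tau on
        pv_loss += loss / n_names
        pv_annuity += float(disc @ (alive / n_names))
    # The fair spread equates the expected premium leg and the expected protection leg.
    return pv_loss / pv_annuity

print(f"actuarially fair spread ~ {1e4 * fair_spread():.0f} bp per year")
```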
Abstract:
Chromium dioxide (CrO2) has been extensively used in the magnetic recording industry. However, it is its ferromagnetic half-metallic nature that has more recently attracted much attention, primarily for the development of spintronic devices. CrO2 is the only stoichiometric binary oxide theoretically predicted to be fully spin polarized at the Fermi level. It presents a Curie temperature of ∼396 K, i.e. well above room temperature, and a magnetic moment of 2 μB per formula unit. However, an antiferromagnetic native insulating layer of Cr2O3 is always present on the CrO2 surface, which enhances the CrO2 magnetoresistance and might be used as a barrier in magnetic tunnel junctions.
Abstract:
Background: With the decrease of DNA sequencing costs, sequence-based typing methods are rapidly becoming the gold standard for epidemiological surveillance. These methods provide the reproducible and comparable results needed for a global-scale bacterial population analysis, while retaining their usefulness for local epidemiological surveys. Online databases that collect the generated allelic profiles and associated epidemiological data are available, but this wealth of data remains underused and is frequently poorly annotated, since no user-friendly tool exists to analyze and explore it. Results: PHYLOViZ is platform-independent Java software that allows the integrated analysis of sequence-based typing methods, including SNP data generated from whole genome sequencing approaches, and associated epidemiological data. goeBURST and its Minimum Spanning Tree expansion are used for visualizing the possible evolutionary relationships between isolates. The results can be displayed as an annotated graph overlaying the query results of any other epidemiological data available. Conclusions: PHYLOViZ is user-friendly software that allows the combined analysis of multiple data sources for microbial epidemiological and population studies. It is freely available at http://www.phyloviz.net.
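As an illustration of the minimum-spanning-tree idea underlying such visualizations, the sketch below builds an MST over Hamming distances between hypothetical allelic profiles using SciPy. goeBURST itself applies additional, specific tie-break rules that are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical MLST allelic profiles (rows = isolates, columns = loci).
profiles = np.array([
    [1, 3, 1, 1, 4, 1, 3],
    [1, 3, 1, 1, 4, 1, 1],
    [1, 3, 2, 1, 4, 1, 1],
    [4, 1, 1, 3, 2, 2, 1],
])

# Pairwise Hamming distance = number of loci at which two profiles differ.
n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = np.count_nonzero(profiles[i] != profiles[j])

mst = minimum_spanning_tree(dist)              # sparse matrix holding the tree edges
for i, j in zip(*mst.nonzero()):
    print(f"isolate {i} -- isolate {j}: {int(mst[i, j])} differing loci")
```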
Abstract:
Personal memories composed of digital pictures are very popular at the moment. To retrieve these media items, annotation is required. In recent years, several approaches have been proposed to overcome the image annotation problem. This paper presents our proposals to address this problem. Automatic and semi-automatic learning methods for semantic concepts are presented. The automatic method is based on semantic concepts estimated using visual content, context metadata and audio information. The semi-automatic method is based on results provided by a computer game. The paper describes our proposals and presents their evaluations.
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to precisely estimate the value of LVEF, a process that can be done with myocardial perfusion scintigraphy. Therefore, the present study aimed to establish and compare the estimation performance of the quantitative parameters of the reconstruction methods filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel, combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. Results: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. This same pattern is verified with the OSEM reconstruction. However, with OSEM there is a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders of 5, 10, or 15 for FBP, as the best estimations for the left ventricular volumes and ejection fraction quantification in myocardial perfusion scintigraphy.
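For reference, the ejection fraction discussed above is computed from the two ventricular volumes as

LVEF = (EDV − ESV) / EDV × 100%,

so any bias that a reconstruction method introduces in the end-diastolic (EDV) or end-systolic (ESV) volume estimates propagates directly to the LVEF value.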
Abstract:
Tomographic images can be degraded, partly by patient-based attenuation. The aim of this paper is to quantitatively verify the effects of the Chang and CT-based attenuation correction methods in 111In studies, through the analysis of profiles from abdominal SPECT corresponding to an organ with uniform radionuclide uptake, the left kidney.
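For context, the first-order Chang method applies, to each reconstructed pixel, a multiplicative correction factor of the form

C(x, y) = \left[ \frac{1}{M} \sum_{i=1}^{M} e^{-\mu\, l_i(x, y)} \right]^{-1},

where l_i(x, y) is the path length from the pixel to the body contour along projection angle i and \mu is the assumed uniform linear attenuation coefficient; CT-based correction instead uses the measured attenuation map. This is the standard formulation, given here as a sketch, since the abstract does not detail the implementations compared.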
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications, such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and not anymore of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods, after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the side information creation methods provide better rate-distortion (RD) performance depending on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution, for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Epidemiological studies have shown an increased prevalence of respiratory symptoms and adverse changes in pulmonary function parameters in poultry workers, corroborating the increased exposure to risk factors such as fungal load and fungal metabolites. This study aimed to determine the occupational exposure threat due to fungal contamination caused by toxigenic isolates belonging to the Aspergillus flavus species complex and also by isolates from the Aspergillus fumigatus species complex. The study was carried out in seven Portuguese poultry farms, using cultural and molecular methodologies. For the conventional/cultural methods, air, surface, and litter samples were collected by the impaction method using the Millipore Air Sampler. For the molecular analysis, air samples were collected by the impinger method using the Coriolis μ air sampler. After DNA extraction, samples were analyzed by real-time PCR using specific primers and probes for toxigenic strains of the Aspergillus flavus complex and for detection of isolates from the Aspergillus fumigatus complex. Through conventional methods, and within the Aspergillus genus, different prevalences were detected for the Aspergillus flavus and Aspergillus fumigatus species complexes, namely: 74.5 versus 1.0% in the air samples, 24.0 versus 16.0% on the surfaces, 0 versus 32.6% in new litter, and 9.9 versus 15.9% in used litter. Through molecular biology, we were able to detect the presence of aflatoxigenic strains in pavilions in which Aspergillus flavus did not grow in culture. Aspergillus fumigatus was only found in one indoor air sample by conventional methods. Using molecular methodologies, however, the Aspergillus fumigatus complex was detected in seven indoor samples from three different poultry units. The characterization of the fungal contamination caused by Aspergillus flavus and Aspergillus fumigatus raises concern about an occupational threat, not only due to the detected fungal load but also because of the toxigenic potential of these species.