19 results for density estimation
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each industry in the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard & Poor's data set. Swap spreads are evaluated by Monte Carlo simulation. Along with an actuarially fair spread, a least squares spread is considered.
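The pricing step described above can be sketched as a Monte Carlo simulation over rating chains. This is a simplified toy version: the chains below evolve independently (the paper couples them per industry), and the transition matrix is illustrative rather than the Standard & Poor's estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state rating chain: 0 = investment grade, 1 = speculative,
# 2 = default (absorbing). Values are illustrative, not estimated parameters.
P = np.array([[0.95, 0.04, 0.01],
              [0.05, 0.90, 0.05],
              [0.00, 0.00, 1.00]])

def simulate_default_fraction(n_names=125, n_steps=5, n_paths=10_000):
    """Monte Carlo estimate of the expected defaulted fraction of the portfolio,
    from which a fair swap spread could be derived."""
    fractions = np.empty(n_paths)
    for k in range(n_paths):
        states = np.zeros(n_names, dtype=int)
        for _ in range(n_steps):
            # Inverse-CDF sampling of the next state for every name at once.
            u = rng.random(n_names)
            cum = P[states].cumsum(axis=1)
            states = (u[:, None] < cum).argmax(axis=1)
        fractions[k] = (states == 2).mean()
    return fractions.mean()

print(simulate_default_fraction())
```

The expected loss fraction estimated this way feeds the actuarially fair spread; the least squares spread mentioned in the abstract would require an additional optimization step not shown here.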
Abstract:
We investigate, via numerical simulations, mean field theory, and density functional theory, the magnetic response of a dipolar hard sphere fluid at low temperatures and densities, in the region of strong association. The proposed parameter-free theory is able to capture both the density and temperature dependence of the ring-chain equilibrium and the contribution to the susceptibility of a chain of generic length. The theory predicts a nonmonotonic temperature dependence of the initial (zero field) magnetic susceptibility, arising from the competition between magnetically inert particle rings and magnetically active chains. Monte Carlo simulation results closely agree with the theoretical findings. DOI: 10.1103/PhysRevLett.110.148306
Abstract:
The measurement of room impulse response (RIR) under high background noise levels frequently means one must deal with very low signal-to-noise ratios (SNR). In such cases, the measurement might yield unreliable results, even when synchronous averaging techniques are used. Furthermore, if there are non-linearities in the apparatus or time variances in the system, the final SNR can be severely degraded. The test signals used in RIR measurement are often disturbed by non-stationary ambient noise components. A novel approach based on the energy analysis of ambient noise, both in time and in frequency, was considered. A modified maximum length sequence (MLS) measurement technique, referred to herein as the hybrid MLS technique, was developed for use in room acoustics. The technique consists of reducing the noise energy of the captured sequences before applying the averaging technique, in order to improve the overall SNR and frequency response accuracy. Experiments were conducted under real conditions with different types of underlying ambient noise. Results are shown and discussed. Advantages and disadvantages of the hybrid MLS technique over the standard MLS technique are evaluated and discussed. Our findings show that the new technique leads to a significant increase in the overall SNR. (C) 2008 Elsevier Ltd. All rights reserved.
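The core idea of reducing the noise energy of the captured sequences before averaging can be sketched as an energy-gating step on repeated captures. The threshold rule below is a hypothetical simplification for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_average(captures, reject_factor=2.0):
    """Average repeated MLS captures, discarding periods whose residual energy
    (relative to the median capture) exceeds a threshold. Captures hit by
    non-stationary noise bursts are rejected before synchronous averaging."""
    captures = np.asarray(captures)
    ref = np.median(captures, axis=0)              # robust reference capture
    residual_energy = ((captures - ref) ** 2).sum(axis=1)
    keep = residual_energy <= reject_factor * np.median(residual_energy)
    return captures[keep].mean(axis=0), keep

# Synthetic demo: a clean response plus stationary noise, with one
# burst-corrupted capture that plain averaging would smear into the result.
clean = np.sin(2 * np.pi * np.arange(256) / 32)
captures = clean + 0.05 * rng.standard_normal((8, 256))
captures[3] += 5.0 * rng.standard_normal(256)      # non-stationary noise burst
avg, keep = hybrid_average(captures)
print(keep)  # the corrupted capture should be rejected
```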
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to change the compression ratio of the source (DCT coefficient bitplane) gracefully, according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance over a wide range of source correlations and a better RD performance than the popular turbo codes.
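Merging two parity-check nodes amounts to adding the corresponding rows of the parity-check matrix modulo 2, which yields one fewer syndrome bit per bitplane and hence a higher compression ratio. A minimal sketch of that row operation (the actual codes select which checks to merge far more carefully):

```python
import numpy as np

def merge_check_nodes(H, i, j):
    """Merge parity-check rows i and j of a binary matrix H by adding them
    modulo 2. Any codeword satisfying both original checks also satisfies
    the merged check, so the null space can only grow (rate adaptation)."""
    H = np.asarray(H) % 2
    merged = (H[i] + H[j]) % 2
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

# Toy parity-check matrix with 3 checks on 6 bits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 1, 0, 1]])
H2 = merge_check_nodes(H, 0, 1)
print(H2.shape)  # (2, 6): one fewer syndrome bit per codeword
```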
Abstract:
The construction industry keeps on demanding huge quantities of natural resources, mainly minerals for mortar and concrete production. The depletion of many quarries and environmental concerns about reducing the dumping of construction and demolition waste in quarries have led to an increase in the procurement and use of recycled aggregates from this type of waste. If they are to be incorporated in concrete and mortars, it is essential to know their properties to guarantee the adequate performance of the end products, in both mechanical and durability-related terms. Existing regulated tests were developed for natural aggregates, however, and several problems arise when they are applied to recycled aggregates, especially fine recycled aggregates (FRA). This paper describes the main problems encountered with these tests and proposes an alternative method to determine the density and water absorption of FRA that avoids them. The use of sodium hexametaphosphate solutions in the water absorption test has proven to improve its efficiency, minimizing cohesion between particles and helping to release entrained air.
Abstract:
The MCNPX code was used to calculate the TG-43U1 recommended parameters in water and prostate tissue, in order to quantify the dosimetric impact, in 30 patients treated with (125)I prostate implants, of replacing the TG-43U1 formalism parameters calculated in water by those for a prostate-like medium in the planning system (PS), and to evaluate the uncertainties associated with Monte Carlo (MC) calculations. The prostate density was obtained from the CT scans of 100 patients with prostate cancer. The deviations between our results for water and the TG-43U1 consensus dataset values were -2.6% for prostate V100, -13.0% for V150, and -5.8% for D90; -2.0% for rectum V100, and -5.1% for D0.1; -5.0% for urethra D10, and -5.1% for D30. The same differences between our water and prostate results were all under 0.3%. Uncertainty estimates were up to 2.9% for the gL(r) function, 13.4% for the F(r,θ) function, and 7.0% for Λ, mainly due to seed geometry uncertainties. Uncertainties in extracting the TG-43U1 parameters in the MC simulations, as well as in the literature comparison, are of the same order of magnitude as the differences between dose distributions computed for water and the prostate-like medium. The selection of the parameters for the PS should be done carefully, as it may considerably affect the dose distributions. The seeds' internal geometry uncertainties are a major limiting factor in deducing the MC parameters.
Abstract:
The article reports density measurements of dipropyl (DPA), dibutyl (DBA) and bis(2-ethylhexyl) (DEHA) adipates, using a vibrating U-tube densimeter, model DMA HP, from Anton Paar GmbH. The measurements were performed in the temperature range (293 to 373) K and at pressures up to about 68 MPa, except for DPA, for which the upper limits were 363 K and 65 MPa, respectively. The density data for each liquid were correlated with temperature and pressure using a modified Tait equation. The expanded uncertainty of the present density results is estimated as 0.2% at a 95% confidence level. No literature density data at pressures higher than 0.1 MPa could be found. DEHA literature data at atmospheric pressure agree with the correlation of the present measurements, in the corresponding temperature range, within +/- 0.11%. The isothermal compressibility and the isobaric thermal expansion were calculated by differentiation of the modified Tait correlation equation. These two parameters were also calculated for dimethyl adipate (DMA), from density data reported in a previous work. The uncertainties of the isothermal compressibility and the isobaric thermal expansion are estimated to be less than +/- 1.7% and +/- 1.1%, respectively, at a 95% confidence level. Literature data of isothermal compressibility and isobaric thermal expansivity for DMA agree within +/- 1% and +/- 2.4%, respectively, with the results calculated in this work. (C) 2014 Elsevier B.V. All rights reserved.
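A modified Tait correlation, and the isothermal compressibility derived from it by differentiation, can be sketched as follows. The functional forms for rho0(T) and B(T) and all numeric parameters below are hypothetical placeholders, not the fitted values from the paper.

```python
import numpy as np

P_REF = 0.1  # MPa, atmospheric reference pressure

def rho0(T):   # density at the reference pressure, kg/m^3 (illustrative fit)
    return 950.0 - 0.7 * (T - 293.15)

def B(T):      # Tait pressure parameter, MPa (illustrative fit)
    return 200.0 - 0.5 * (T - 293.15)

C = 0.088      # dimensionless Tait constant (illustrative)

def density(T, p):
    """Modified Tait equation: rho = rho0 / (1 - C*ln((B + p)/(B + p_ref)))."""
    return rho0(T) / (1.0 - C * np.log((B(T) + p) / (B(T) + P_REF)))

def kappa_T(T, p, dp=1e-3):
    """Isothermal compressibility kappa_T = (1/rho)(d rho/d p)_T,
    evaluated by central finite difference on the correlation."""
    r = density(T, p)
    return (density(T, p + dp) - density(T, p - dp)) / (2 * dp * r)

print(density(313.15, 50.0), kappa_T(313.15, 50.0))
```

In the paper the derivatives are taken analytically from the fitted correlation; the finite difference here is only a convenient stand-in.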
Abstract:
In Part I of the present work we describe the viscosity measurements performed on tris(2-ethylhexyl) trimellitate, or 1,2,4-benzenetricarboxylic acid, tris(2-ethylhexyl) ester (TOTM), up to 65 MPa and at six temperatures from (303 to 373) K, using a new vibrating-wire instrument. The main aim is to contribute to the proposal of that liquid as a potential reference fluid for high viscosity, high pressure and high temperature. The present Part II reports the density measurements of TOTM necessary not only to compute the viscosity data presented in Part I, but also as complementary data for the mentioned proposal. The density measurements were obtained using a vibrating U-tube densimeter, model DMA HP, with a model DMA5000 as a reading unit, both instruments from Anton Paar GmbH. The measurements were performed along five isotherms from (293 to 373) K and at eleven different pressures up to 68 MPa. As far as the authors are aware, the viscosity and density results are the first above atmospheric pressure to be published for TOTM. Due to TOTM's high viscosity, the density data were corrected for the viscosity effect on the U-tube density measurements. This effect was estimated using two Newtonian viscosity standard liquids, 20 AW and 200 GW. The density data were correlated with temperature and pressure using a modified Tait equation, with deviations within +/- 0.25%; the expanded uncertainty of the present density results is estimated as +/- 0.2% at a 95% confidence level. Furthermore, the isothermal compressibility, K-T, and the isobaric thermal expansivity, alpha(p), were obtained by differentiation of the modified Tait equation used to correlate the density data. The corresponding uncertainties, at a 95% confidence level, are estimated to be less than +/- 1.5% and +/- 1.2%, respectively.
No isobaric thermal expansivity or isothermal compressibility data for TOTM were found in the literature. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
We investigate the behavior of a patchy particle model close to a hard wall via Monte Carlo simulation and density functional theory (DFT). Two DFT approaches, based on the homogeneous and inhomogeneous versions of Wertheim's first-order perturbation theory for the association free energy, are used. We evaluate, by simulation and theory, the equilibrium bulk phase diagram of the fluid and analyze the surface properties for two isochores, one of which is close to the liquid side of the gas-liquid coexistence curve. We find that the density profile near the wall crosses over from a typical high-temperature adsorption profile to a low-temperature desorption one, for the isochore close to coexistence. We relate this behavior to the properties of the bulk network liquid and find that the theoretical descriptions are reasonably accurate in this regime. At very low temperatures, however, an almost fully bonded network is formed, and the simulations reveal a second adsorption regime which is not captured by DFT. We trace this failure to the neglect of orientational correlations of the particles, which are found to exhibit surface-induced orientational order in this regime.
Abstract:
Master's final project for obtaining the degree of Master in Mechanical Engineering / Energy
Abstract:
The design of magnetic cores can be carried out by taking into account the optimization of different parameters in accordance with the application requirements. Considering the specifications of the fast field cycling nuclear magnetic resonance (FFC-NMR) technique, the magnetic flux density distribution at the sample insertion volume is one of the core parameters that needs to be evaluated. Recently, it has been shown that FFC-NMR magnets can be built on the basis of solenoid coils with ferromagnetic cores. Since this type of apparatus requires magnets with high magnetic flux density uniformity, a new type of magnet using a ferromagnetic core, copper coils, and superconducting blocks was designed with improved magnetic flux density distribution. In this paper, the design aspects of the magnet are described and discussed, with emphasis on the improvement of the magnetic flux density homogeneity (Delta B/B-0) in the air gap. The magnetic flux density distribution is analyzed based on 3-D simulations and NMR experimental results.
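A homogeneity figure like Delta B/B-0 is often computed as the peak-to-peak variation of the flux density over the sampled volume relative to its mean. The convention below is a common one chosen for illustration; the paper's exact definition of B-0 may differ.

```python
import numpy as np

def flux_homogeneity(B):
    """Relative flux-density homogeneity Delta B / B0 over sampled |B| values,
    taken here as (max - min) / mean (one common convention)."""
    B = np.asarray(B, dtype=float)
    return (B.max() - B.min()) / B.mean()

# Three field samples across a hypothetical air gap, in tesla.
print(flux_homogeneity([0.199, 0.200, 0.201]))  # ~0.01, i.e. 1% homogeneity
```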
Abstract:
In this work, we present results from teleseismic P-wave receiver functions (PRFs) obtained in Portugal, Western Iberia. A dense seismic station deployment conducted between 2010 and 2012, in the scope of the WILAS project and covering the entire country, allowed the most spatially extensive probing of the bulk crustal seismic properties of Portugal to date. The application of the H-κ stacking algorithm to the PRFs enabled us to estimate the crustal thickness (H) and the average crustal ratio of P- and S-wave velocities, Vp/Vs (κ), for the region. Observations of Moho conversions indicate that this interface is relatively smooth, with the crustal thickness ranging between 24 and 34 km and averaging 30 km. The highest Vp/Vs values are found in the Mesozoic-Cenozoic crust beneath the western and southern coastal domain of Portugal, whereas the lowest values correspond to the Palaeozoic crust underlying the remaining part of the study area. The average Vp/Vs is 1.72, ranging from 1.63 to 1.86 across the study area, indicating a predominantly felsic composition. Overall, we systematically observe a decrease of Vp/Vs with increasing crustal thickness. Taken as a whole, our results indicate a clear distinction between the geological zones of the Variscan Iberian Massif in Portugal, with the overall shape of the anomalies conditioned by the shape of the Ibero-Armorican Arc and the associated Late Paleozoic suture zones, and the Meso-Cenozoic basin associated with the Atlantic rifting stages. Thickened crust (30-34 km) across the studied region may be inherited from continental collision during the Paleozoic Variscan orogeny. An anomalous crustal thinning to around 28 km is observed beneath the central part of the Central Iberian Zone and the eastern part of the South Portuguese Zone.
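The H-κ stacking step can be sketched as a grid search that stacks receiver-function amplitudes at the arrival times predicted for the Ps, PpPs, and PpSs+PsPs Moho phases. This is a minimal single-trace version with illustrative weights and velocities; the study stacks many events per station.

```python
import numpy as np

def phase_times(H, k, p, vp):
    """Predicted delays of the Moho conversions for ray slowness p (s/km)."""
    vs = vp / k
    qs = np.sqrt(1.0 / vs**2 - p**2)
    qp = np.sqrt(1.0 / vp**2 - p**2)
    return H * (qs - qp), H * (qs + qp), 2.0 * H * qs  # Ps, PpPs, PpSs+PsPs

def hk_stack(rf, t, p, vp=6.3, w=(0.7, 0.2, 0.1)):
    """Zhu & Kanamori style grid search over crustal thickness H and Vp/Vs (k):
    sum weighted RF amplitudes at predicted phase times, keep the maximum."""
    best, best_Hk = -np.inf, (None, None)
    for H in np.linspace(24.0, 40.0, 81):
        for k in np.linspace(1.6, 1.9, 61):
            tPs, tPpPs, tPpSs = phase_times(H, k, p, vp)
            amp = lambda tt: np.interp(tt, t, rf)
            # The multiple PpSs+PsPs arrives with reversed polarity.
            s = w[0] * amp(tPs) + w[1] * amp(tPpPs) - w[2] * amp(tPpSs)
            if s > best:
                best, best_Hk = s, (H, k)
    return best_Hk

# Synthetic check: build a receiver function with pulses at the times
# predicted for H = 30 km and k = 1.72, then recover those values.
p, vp = 0.06, 6.3
t = np.linspace(0, 30, 3001)
tPs, tPpPs, tPpSs = phase_times(30.0, 1.72, p, vp)
rf = (np.exp(-((t - tPs) / 0.2)**2) + 0.5 * np.exp(-((t - tPpPs) / 0.2)**2)
      - 0.5 * np.exp(-((t - tPpSs) / 0.2)**2))
print(hk_stack(rf, t, p, vp))
```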
Abstract:
A new algorithm for the velocity vector estimation of moving ships using Single Look Complex (SLC) SAR data in strip map acquisition mode is proposed. The algorithm exploits both the amplitude and phase information of the Doppler decompressed data spectrum, with the aim of estimating both the azimuth antenna pattern and the backscattering coefficient as a function of the look angle. The antenna pattern estimation provides information about the target velocity; the backscattering coefficient can be used for vessel classification. The range velocity is retrieved in the slow-time frequency domain by estimating the antenna pattern effects induced by the target motion, while the azimuth velocity is calculated from the estimated range velocity and the ship orientation. Finally, the algorithm is tested on simulated SAR SLC data.
Abstract:
This paper extends the by-now classic sensor fusion complementary filter (CF) design, involving two sensors, to the case where three sensors providing measurements in different bands are available. The paper shows that the use of classical CF techniques to tackle a generic three-sensor fusion problem, based solely on the sensors' frequency domain characteristics, leads to a minimal-realization, stable, sub-optimal solution, denoted Complementary Filters3 (CF3). Then, a new approach to the estimation problem at hand is used, based on optimal linear Kalman filtering techniques. Moreover, the solution is shown to preserve the complementary property, i.e., the sum of the three transfer functions of the respective sensors adds up to one, in both the continuous and discrete time domains. This new class of filters is denoted Complementary Kalman Filters3 (CKF3). The attitude estimation of a mobile robot is addressed, based on data from a rate gyroscope, a digital compass, and odometry. The experimental results obtained are reported.
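The complementary property, with the three effective filters summing to one, can be illustrated with a simple three-band decomposition built from first-order low-pass filters. This is a generic CF3-style sketch under assumed filter constants, not the paper's realization (and not the Kalman-based CKF3).

```python
import numpy as np

def lowpass(x, alpha):
    """First-order IIR low-pass: y[n] = alpha*y[n-1] + (1-alpha)*x[n]."""
    y = np.empty_like(x, dtype=float)
    acc = x[0]
    for n, v in enumerate(x):
        acc = alpha * acc + (1 - alpha) * v
        y[n] = acc
    return y

def cf3(x_low, x_mid, x_high, a_slow=0.99, a_fast=0.9):
    """Three-sensor complementary fusion: take the low band from x_low, the
    mid band from x_mid, and the high band from x_high. By construction the
    three effective filters sum to the identity."""
    s1 = lowpass(x_low, a_slow)
    s2 = lowpass(x_mid, a_fast) - lowpass(x_mid, a_slow)
    s3 = x_high - lowpass(x_high, a_fast)
    return s1 + s2 + s3

# Sanity check of the complementary property: if all three sensors see the
# same clean signal, the fused output reproduces that signal.
t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 3 * t)
fused = cf3(sig, sig, sig)
print(np.max(np.abs(fused - sig)))  # ~0, up to floating-point error
```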
Abstract:
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model's performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
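The competing-points idea can be sketched as follows for the curve (2D) case: nodes on a deformable polyline compete for each data point, and the winner and its neighbors move toward it. The update rule below is a generic competitive-learning scheme chosen for illustration, not the exact model of the cited framework.

```python
import numpy as np

rng = np.random.default_rng(2)

def competitive_curve(data, n_nodes=10, epochs=50, lr=0.1, neighbor=0.5):
    """Fit a polyline to 2D data by competitive learning: for each sample,
    the closest node wins and is pulled toward it; its immediate neighbors
    are dragged along, which keeps the nodes ordered along the curve."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    nodes = np.linspace(lo, hi, n_nodes)            # initial straight curve
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.linalg.norm(nodes - x, axis=1)
            w = d.argmin()                          # winning node
            nodes[w] += lr * (x - nodes[w])
            for nb in (w - 1, w + 1):               # neighborhood update
                if 0 <= nb < n_nodes:
                    nodes[nb] += lr * neighbor * (x - nodes[nb])
    return nodes

# Toy data: noisy samples of the curve y = x^2 on [0, 1].
xs = rng.random(300)
data = np.column_stack([xs, xs**2 + 0.02 * rng.standard_normal(300)])
nodes = competitive_curve(data)
print(nodes.round(2))
```

The surface (3D) extension discussed in the paper replaces the polyline by a node mesh, with the same winner-takes-most competition.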