871 results for estimation method
Abstract:
It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning the cache among virtual machines or reducing the leakage power dissipated in an over-allocated cache by switching it OFF. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch OFF the over-allocated cache ways in Static and Dynamic Non-Uniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, this approach scales better with the number of cores present on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings on SNUCA compared to the AMAL and CMR heuristics, respectively.
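As a rough illustration of the way-shutdown decision sketched in the abstract, a minimal sketch of a per-controller policy (all parameters and the policy itself are hypothetical; the abstract does not specify how each L2 controller maps an estimated WSS to an associativity):

```python
import math

def ways_to_keep(twss_bytes, num_sets, line_size, max_ways):
    """Number of cache ways needed to hold an estimated working set.

    Any ways beyond this count are candidates for switching OFF
    to save leakage power in an over-allocated cache.
    """
    way_capacity = num_sets * line_size           # bytes held by one way
    needed = math.ceil(twss_bytes / way_capacity)
    return max(1, min(max_ways, needed))          # keep at least one way on

# e.g. a 256 KB estimated working set in a 16-way cache
# with 512 sets of 64 B lines (one way = 32 KB)
ways = ways_to_keep(256 * 1024, num_sets=512, line_size=64, max_ways=16)
```

Here `ways` comes out as 8, so half of the 16 ways could be powered down under these assumed parameters.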
Abstract:
In this paper, a decimative spectral estimation method based on eigenanalysis and SVD (Singular Value Decomposition) is presented and applied to speech signals in order to estimate formant/bandwidth values. The underlying model decomposes a signal into complex damped sinusoids. The algorithm is applied not only to raw speech samples but also to a small number of autocorrelation coefficients of a speech frame, for finer estimation. Correct estimation of formant/bandwidth values depends on the model order and thus on the requested number of poles. Overall, experimental results indicate that the proposed methodology successfully estimates formant trajectories and their respective bandwidths.
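Once the eigenanalysis/SVD step has delivered the poles of the damped-sinusoid model, each pole maps to a formant frequency and bandwidth through standard conversion formulas; a minimal sketch (the sampling rate and pole values below are assumed, not taken from the paper):

```python
import numpy as np

fs = 8000.0                      # sampling rate in Hz (assumed)
F, B = 500.0, 60.0               # a formant frequency / bandwidth in Hz

# pole of the corresponding complex damped sinusoid z = e^{(-pi*B + j*2*pi*F)/fs}
z = np.exp(-np.pi * B / fs + 1j * 2 * np.pi * F / fs)

# inverse mapping applied once the SVD-based analysis has delivered the poles:
F_hat = np.angle(z) * fs / (2 * np.pi)      # formant frequency from pole angle
B_hat = -np.log(np.abs(z)) * fs / np.pi     # bandwidth from pole radius
```

Round-tripping through the pole recovers `F_hat = 500` Hz and `B_hat = 60` Hz, which is the per-pole computation a formant tracker repeats for each retained model pole.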
Abstract:
Channel estimation is a key issue in MIMO systems. In recent years, many papers on subspace (SS)-based blind channel estimation have been published; in this paper, combining the SS method with a space-time coding scheme, we propose a novel blind channel estimation method for MIMO systems. Simulation results demonstrate the effectiveness of this method.
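The orthogonality property that subspace-based blind estimators exploit can be sketched as follows (toy channel dimensions and an idealized covariance are assumed; this illustrates the principle, not the paper's full algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2))        # toy channel: 4 rx antennas, 2 tx streams
R = H @ H.T + 0.01 * np.eye(4)         # received covariance for unit-power symbols
                                       # plus a small white-noise floor

w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
noise_subspace = V[:, :2]              # eigenvectors of the 2 smallest eigenvalues

# The channel columns lie in the signal subspace, i.e. they are orthogonal
# to the noise subspace; SS-based methods solve this orthogonality condition
# for H (up to an ambiguity that e.g. space-time code structure can resolve).
orthogonality_error = np.linalg.norm(noise_subspace.T @ H)
```

In this noiseless-covariance toy, `orthogonality_error` is zero up to floating point; with sample covariances it is only approximately zero, which is why the estimate is solved in a least-squares sense.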
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort plays a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work is focused on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) in order to retrieve HNO3 spectra observed by means of GBMS. I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM providing, however, additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step has been the validation of GBMS measurements of tropospheric opacity, which is a necessary tool in the calibration of any GBMS spectra. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV) since, in the spectral region observed by GBMS, the tropospheric opacity is almost entirely due to water vapor absorption.
In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference of ~9% between observations. After this initial validation, GBMS HNO3 retrievals have been compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare Aura MLS observations with GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W) during polar winter 2008/09. A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out with the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels.
Results show that, except for measurements carried out at Thule, the ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
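For a linear forward model, the Optimal Estimation Method retrieval referred to above reduces to combining the a priori profile with the measurement, weighted by their covariances; a toy sketch (the Jacobian and covariances below are illustrative placeholders, not GBMS values):

```python
import numpy as np

# Linear OEM retrieval (Rodgers, 2000): x_hat = x_a + G (y - K x_a),
# with gain matrix G = S_a K^T (K S_a K^T + S_e)^(-1).
K = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.3, 1.0]])        # forward-model Jacobian (toy, 3 levels)
x_a = np.zeros(3)                      # a priori profile
S_a = 100.0 * np.eye(3)                # loose a priori covariance
S_e = 1e-6 * np.eye(3)                 # measurement-noise covariance

x_true = np.array([2.0, 5.0, 3.0])     # "true" profile for the synthetic test
y = K @ x_true                         # noise-free synthetic measurement

G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + G @ (y - K @ x_a)        # retrieved profile
A = G @ K                              # averaging kernel matrix
```

With a loose prior and low noise, `x_hat` essentially reproduces `x_true` and the averaging kernel `A` approaches the identity; the averaging kernel is exactly the "additional information" the OEM provides over the iterative Matrix Inversion method for characterizing the retrieval.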
Abstract:
In warm and dry climates, the use of porous training systems should be considered in order to allow a better leaf distribution inside the plant, creating more space in the cluster area and enhancing certain physiological processes both in the leaf (photosynthesis, ventilation, transpiration) and in the berry (growth and maturation). Plant geometry indexes, yield and must composition have been studied in three different systems: sprawl with 12 shoots/m (S1); sprawl with 18 shoots/m (S2); and a vertically positioned system, or VSP, with 12 shoots/m (VSP1). Total leaf area increases as the crop load does; however, surface area depends on two factors, crop load and the training system (VSP vs. sprawl), which can produce differences in leaf exposure efficiency. The main objective of this study was to validate digital photography measurements used to compare porosity differences among treatments and how they affect plant microclimate and, therefore, yield and berry quality. All previously studied indexes (LAI, SA, SFEr) tended to overestimate the relationship between exposed leaf surface and porosity of each treatment, but the digital method proved to be an effective tool for assessing canopy porosity. Results showed that the non-positioned, free systems (sprawl) scored 25-50% more porosity in the cluster area than the fixed vertical system (VSP), which resulted in a better plant microclimate under the test conditions, mainly by improving the exposure of internal clusters and internal canopy ventilation. On the other hand, the higher crop load treatment (S2) showed a real increase in yield (16%) without any relevant change in must composition, even improving total anthocyanin content in the berry during ripening.
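The digital-photography porosity measurement can be sketched as counting gap pixels in a binarised canopy image (the toy image and the simple thresholded representation are assumptions; the paper's actual image-processing chain is not specified here):

```python
import numpy as np

# Toy binarised canopy photograph: 1 = leaf/shoot pixel, 0 = gap (sky) pixel.
img = np.array([[1, 0, 0, 1],
                [1, 1, 0, 0],
                [1, 0, 0, 0],
                [1, 1, 1, 0]])

# Canopy porosity as the percentage of gap pixels in the assessed region.
porosity = 100.0 * (img == 0).mean()
```

For this 4x4 toy image half the pixels are gaps, so `porosity` is 50.0%; comparing such percentages between the cluster-area regions of sprawl and VSP photographs is the kind of treatment comparison the study reports.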
Abstract:
The measurement of lifetime prevalence of depression in cross-sectional surveys is biased by recall problems. We estimated it indirectly for two countries using modelling, and quantified the underestimation in the empirical estimate for one. A microsimulation model was used to generate population-based epidemiological measures of depression. We fitted the model to 1- and 12-month prevalence data from the Netherlands Mental Health Survey and Incidence Study (NEMESIS) and the Australian Adult Mental Health and Wellbeing Survey. The lowest proportion of cases ever having an episode in their life is 30% of men and 40% of women, for both countries. This corresponds to a lifetime prevalence of 20% and 30%, respectively, in a cross-sectional setting (ages 15-65). The NEMESIS data were 38% lower than these estimates. We conclude that modelling enabled us to estimate the lifetime prevalence of depression indirectly. This method is useful in the absence of direct measurement, but it also showed that direct estimates are lowered by recall bias and by the cross-sectional setting.
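The microsimulation idea, turning an annual episode probability into lifetime and 12-month prevalences for a simulated cohort, can be sketched as follows (the incidence value and cohort size are illustrative placeholders, not fitted NEMESIS parameters):

```python
import random
random.seed(42)

P_EPISODE = 0.02          # assumed annual probability of a depressive episode
N, YEARS = 10_000, 50     # cohort simulated over ages 15-65

ever = recent = 0
for _ in range(N):
    episode_years = [y for y in range(YEARS) if random.random() < P_EPISODE]
    if episode_years:
        ever += 1                          # ever had an episode (lifetime)
        if episode_years[-1] == YEARS - 1:
            recent += 1                    # episode in the final 12 months

lifetime_prev = ever / N
twelve_month_prev = recent / N
```

Even this toy model reproduces the qualitative point: the true lifetime prevalence (here around 0.64, close to the analytic 1 - 0.98^50) is far larger than the 12-month figure, so a survey relying on recall of any past episode will sit somewhere between the two.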
Abstract:
The Asian International Input-Output (IO) Table compiled by the Institute of Developing Economies-JETRO (IDE) is constructed in Isard-type form and therefore requires a long time to publish. In order to avoid this time-lag problem and establish a simpler compilation technique, this paper concentrates on verifying the possibility of using the Chenery-Moses type estimation technique. If feasible, applying the Chenery-Moses type instead of the Isard type would be effective for both impact and linkage analysis (except for some countries, such as Malaysia and Singapore, and some primary sectors). Using the Chenery-Moses estimation method, production of the Asian International IO Table can be shortened by two years. Moreover, this method could also be applied to the updating exercise of the Asian IO Table.
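The Chenery-Moses simplification replaces the full bilateral sector-to-sector flows of the Isard type with country-level import shares applied uniformly across using sectors; a toy two-sector, two-source sketch (all coefficient values assumed):

```python
import numpy as np

# Destination country's total input coefficients a_ij (2 sectors, toy values).
A_s = np.array([[0.20, 0.10],
                [0.30, 0.25]])

# Import shares m[r, i]: share of sector-i inputs sourced from country r
# (columns sum to 1 over the two source countries for each good i).
M = np.array([[0.7, 0.4],
              [0.3, 0.6]])

# Chenery-Moses assumption: A^{rs}_{ij} = m^{rs}_i * a^{s}_{ij}, i.e. the
# same sourcing pattern for good i regardless of which sector j uses it.
A_rs = M[:, :, None] * A_s[None, :, :]   # shape (source r, input i, user j)

# The bilateral blocks must aggregate back to the total coefficient matrix.
assert np.allclose(A_rs.sum(axis=0), A_s)
```

The Isard type would instead require observing each `A_rs[r, i, j]` block directly, which is exactly the data burden that slows publication; under Chenery-Moses, only `A_s` and the import shares `M` are needed.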
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about error behaviour, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behaviour necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method permits one to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution, as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
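In notation commonly used for τ-estimation (the symbols here are assumed, not taken verbatim from the thesis), the truncation error of the order-N discretization is estimated by injecting a higher-order solution into the order-N residual, with a correction term that makes the estimate valid before full time convergence:

```latex
\tau^{N} \;\approx\; R^{N}\!\left(I_{P}^{N}\,\tilde{u}_{P}\right)\;-\;I_{P}^{N}\,R^{P}\!\left(\tilde{u}_{P}\right), \qquad P > N,
```

where $\tilde{u}_{P}$ is the partially converged order-$P$ solution, $R^{N}$ and $R^{P}$ are the discrete residual operators, and $I_{P}^{N}$ interpolates from polynomial order $P$ down to $N$. When $\tilde{u}_{P}$ is fully converged, $R^{P}(\tilde{u}_{P}) = 0$ and the usual a posteriori estimate is recovered, which is why the correction term is the hallmark of the quasi-a priori variant.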
Abstract:
This paper presents a method of recovering the 6-DoF pose (Cartesian position and angular rotation) of a range sensor mounted on a mobile platform. The method utilises point targets in a local scene and optimises over the error between their absolute positions and their apparent positions as observed by the range sensor. The analysis includes an investigation into the sensitivity and robustness of the method. Practical results were collected using a SICK LRS2100 mounted on a P&H electric mining shovel, and the errors in scan data are presented relative to an independent 3D scan of the scene. A comparison with directly measuring the sensor pose is presented and shows the significant accuracy improvement in scene reconstruction obtained with this pose estimation method.
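The target-based pose recovery can be sketched as a nonlinear least-squares problem over the six pose parameters (the Euler-angle convention, target layout, and Gauss-Newton solver below are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def rot(rx, ry, rz):
    """Rotation matrix from Euler angles (Z-Y-X composition, assumed)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(p, world, sensor):
    """Error between absolute target positions and re-projected observations."""
    R, t = rot(*p[:3]), p[3:]
    return ((sensor @ R.T) + t - world).ravel()

# Synthetic point targets (absolute positions) and a known pose to recover.
world = np.array([[0., 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2], [1, 1, 1]])
p_true = np.array([0.05, -0.1, 0.08, 0.5, -0.3, 1.2])   # (rx, ry, rz, tx, ty, tz)
sensor = (world - p_true[3:]) @ rot(*p_true[:3])         # targets in sensor frame

# Gauss-Newton iteration with a forward-difference Jacobian.
p = np.zeros(6)
for _ in range(20):
    r = residuals(p, world, sensor)
    J = np.empty((r.size, 6))
    for k in range(6):
        dp = np.zeros(6)
        dp[k] = 1e-6
        J[:, k] = (residuals(p + dp, world, sensor) - r) / 1e-6
    p = p - np.linalg.lstsq(J, r, rcond=None)[0]
```

With noise-free synthetic observations the solver recovers `p_true`; in practice the residual at convergence quantifies scan-data error of the kind the paper reports against the independent 3D scan.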
Abstract:
In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.
Abstract:
A cost estimation method is required to estimate the life cycle cost of a product family at the early stage of product development in order to evaluate the product family design. Existing cost estimation techniques have difficulty estimating the life cycle cost of a product family at this early stage. This paper proposes a framework that combines a knowledge-based system with activity-based costing techniques to estimate the life cycle cost of a product family at the early stage of product development. The inputs of the framework are the product family structure and its sub-functions. The output is the life cycle cost of the product family, consisting of all costs at each product family level and the costs of each product life cycle stage. The proposed framework provides a life cycle cost estimation tool for a product family at the early stage of product development using high-level information as its input. It makes it possible to estimate the life cycle cost of various product families that use any type of product structure, and it provides detailed information on the activity and resource costs of both parts and products, which can assist the designer in analyzing the cost of the product family design. In addition, it can reduce the amount of information and the time required to construct the cost estimation system.
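The cost roll-up at the heart of an activity-based costing framework can be sketched as follows (the activity rates, cost drivers, and product variants are illustrative placeholders, not values from the paper):

```python
# Activity-based costing roll-up for a toy two-variant product family.
# Each life-cycle activity has a cost rate per unit of its cost driver.
activity_rates = {"machining": 40.0, "assembly": 25.0, "disposal": 5.0}

# Driver quantities consumed by each product variant over its life cycle.
products = {
    "variant_A": {"machining": 2.0, "assembly": 1.5, "disposal": 1.0},
    "variant_B": {"machining": 3.0, "assembly": 2.0, "disposal": 1.0},
}

def life_cycle_cost(drivers):
    """Sum activity costs: rate x driver quantity over all activities."""
    return sum(activity_rates[a] * q for a, q in drivers.items())

costs = {name: life_cycle_cost(d) for name, d in products.items()}
family_cost = sum(costs.values())      # cost at the family level
```

The per-activity terms inside `life_cycle_cost` are what let the designer see which activity or life-cycle stage drives each variant's cost, which is the analysis capability the framework emphasizes.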