832 results for accuracy analysis


Relevance: 30.00%

Abstract:

Purpose: To evaluate the comparative efficiency of graphite furnace atomic absorption spectrometry (GFAAS) and hydride generation atomic absorption spectrometry (HGAAS) for trace analysis of arsenic (As) in natural herbal products (NHPs). Method: Arsenic analysis in natural herbal products and standard reference material was conducted using atomic absorption spectrometry (AAS), namely hydride generation AAS (HGAAS) and graphite furnace AAS (GFAAS). The samples were digested with HNO3–H2O2 in a ratio of 4:1 using microwave-assisted acid digestion. The methods were validated with the aid of the NIST standard reference material (SRM) 1515 Apple Leaves. Results: Mean recovery for three different samples of NHPs ranged from 89.3 to 91.4% using HGAAS and from 91.7 to 93.0% using GFAAS. The difference between the two methods was insignificant for samples A (p = 0.5), B (p = 0.4) and C (p = 0.88). Relative standard deviation (RSD), i.e., precision, was 2.5–6.5% for HGAAS and 2.3–6.7% for GFAAS. Recovery of arsenic in the SRM was 98% by GFAAS and 102% by HGAAS. Conclusion: GFAAS demonstrates acceptable levels of precision and accuracy. Both techniques possess comparable accuracy and repeatability. Thus, either method is recommended for trace analysis of arsenic in natural herbal products.
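
As a quick illustration of the validation metrics quoted above, here is a minimal sketch of how percent recovery and relative standard deviation (RSD) are computed from replicate measurements. The replicate values and the certified concentration below are hypothetical placeholders, not data from the study.

```python
import statistics

def percent_recovery(measured_mean: float, certified: float) -> float:
    """Recovery (%) = measured mean / certified (or spiked) value * 100."""
    return 100.0 * measured_mean / certified

def percent_rsd(replicates: list[float]) -> float:
    """RSD (%) = sample standard deviation / mean * 100 (a precision measure)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate As concentrations (ug/kg) and an assumed certified
# value -- illustrative numbers only, not the study's measurements.
replicates = [37.2, 38.1, 36.8, 37.6, 38.4]
certified = 38.0

print(f"Recovery: {percent_recovery(statistics.mean(replicates), certified):.1f}%")
print(f"RSD:      {percent_rsd(replicates):.1f}%")
```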

Relevance: 30.00%

Abstract:

The technique of delineating Populus tremuloides (Michx.) clonal colonies based on morphology and phenology has been utilized in many studies and forestry applications since the 1950s. Recently, the availability and robustness of molecular markers have challenged the validity of such approaches for accurate clonal identification. However, genetically sampling an entire stand is largely impractical or impossible. For that reason, it is often necessary to delineate putative genet boundaries for a more selective approach when genetically analyzing a clonal population. Here I re-evaluated the usefulness of phenotypic delineation by: (1) genetically identifying clonal colonies using nuclear microsatellite markers, (2) assessing phenotypic inter- and intraclonal agreement, and (3) determining the accuracy of visible characters in correctly assigning ramets to their respective genets. Long-term soil productivity study plot 28, located in the Ottawa National Forest, MI (46° 37'60.0" N, 89° 12'42.7" W), was chosen for analysis. In total, 32 genets were identified from 181 stems using seven microsatellite markers. The average genet size was 5.5 ramets, and six of the largest genets were selected for phenotypic analyses. Phenotypic analyses included budbreak timing, DBH, bark thickness, bark color or brightness, leaf senescence, leaf serrations, and leaf length ratio. All phenotypic characters except DBH were useful for the analysis of inter- and intraclonal variation and phenotypic delineation. Generally, phenotypic expression was related to genotype, with multiple response permutation procedure (MRPP) intraclonal distance values ranging from 0.148 to 0.427 and an observed MRPP delta of 0.221 against an expected delta of 0.5. The phenotypic traits, though, overlapped significantly among some clones. When stems were assigned to phenotypic groups, six groups were identified, each containing a dominant genotype or clonal colony. All phenotypic groups contained stems from at least two clonal colonies, and no clonal colony was entirely contained within one phenotypic group. These results demonstrate that phenotype varies with genotype and that stand clonality can be determined using phenotypic characters, but phenotypic delineation is less precise. I therefore recommend that some genetic identification follow any phenotypic delineation. The amount of genetic identification required for clonal confirmation is likely to vary with stand and environmental conditions. Further analysis, however, is needed to test these findings in other forest stands and populations.
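
The MRPP summary quoted above (an observed delta compared with an expected delta from permutations) can be sketched as follows. The trait matrix and genet labels are hypothetical stand-ins; in practice an established implementation such as mrpp in R's vegan package would be preferred.

```python
import itertools
import random

import numpy as np

def mean_within_distance(points: np.ndarray) -> float:
    """Mean pairwise Euclidean distance within one group."""
    pairs = itertools.combinations(range(len(points)), 2)
    return float(np.mean([np.linalg.norm(points[i] - points[j]) for i, j in pairs]))

def mrpp_delta(X: np.ndarray, groups: list[int]) -> float:
    """MRPP delta: group-size-weighted mean of within-group distances."""
    n = len(groups)
    delta = 0.0
    for g in sorted(set(groups)):
        idx = [i for i, lab in enumerate(groups) if lab == g]
        delta += (len(idx) / n) * mean_within_distance(X[idx])
    return delta

# Hypothetical trait matrix (rows = ramets, cols = standardized phenotypic
# traits) and genet labels -- illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 4))
groups = [i // 4 for i in range(24)]          # six genets of four ramets

observed = mrpp_delta(X, groups)
perm_deltas = []
for _ in range(999):                           # permutation null distribution
    shuffled = groups[:]
    random.shuffle(shuffled)
    perm_deltas.append(mrpp_delta(X, shuffled))

expected = float(np.mean(perm_deltas))
p_value = (1 + sum(d <= observed for d in perm_deltas)) / 1000
print(f"observed delta={observed:.3f}, expected delta={expected:.3f}, p={p_value:.3f}")
```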

Relevance: 30.00%

Abstract:

To compare the accuracy of different forecasting approaches, an error measure is required. Many error measures have been proposed in the literature; however, in practice there are situations where different measures yield different decisions on forecasting approach selection, and there is no agreement on which measure should be used. Generally, forecasting error measures are ratios or percentages that provide an overall picture of how well a forecasting technique fits the observations. This paper proposes a multiplicative Data Envelopment Analysis (DEA) model to rank several forecasting techniques. We demonstrate the proposed model by applying it to the set of yearly time series from the M3 competition. The usefulness of the proposed approach is tested on the M3 competition data, where five error measures are computed and aggregated into a single DEA score.
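
For context, the sketch below computes a handful of error measures of the kind this literature compares. The paper does not list its five measures here, so MAE, RMSE, MAPE, sMAPE and MASE are assumed as representative choices, and the series values are hypothetical.

```python
import numpy as np

def error_measures(actual, forecast, insample):
    """Common forecast error measures; `insample` is the history used
    to scale MASE via the naive one-step forecast."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = 100 * np.mean(np.abs(e / actual))
    smape = 100 * np.mean(2 * np.abs(e) / (np.abs(actual) + np.abs(forecast)))
    naive_mae = np.mean(np.abs(np.diff(insample)))   # scale for MASE
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape,
            "sMAPE": smape, "MASE": mae / naive_mae}

# Hypothetical yearly series: history, hold-out actuals, one method's forecasts.
history = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0]
actual = [148.0, 148.0, 136.0]
forecast = [140.0, 145.0, 150.0]
print(error_measures(actual, forecast, history))
```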

Relevance: 30.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial to satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is key to choosing appropriate algorithms for writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than a 100-fold speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
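
A minimal sketch of the second modelling idea: estimating average-case energy from the expected number of comparisons in insertion sort. The comparison-count approximation and the per-comparison and fixed energy costs below are illustrative assumptions; in the thesis the count is derived via MOQA and the parameters are measured on a LEON3 core.

```python
def avg_comparisons_insertion_sort(n: int) -> float:
    """Expected comparisons on a uniformly random permutation: roughly
    n*(n-1)/4 (the expected number of inversions) plus about n boundary
    comparisons -- an approximation; exact terms depend on the analysis."""
    return n * (n - 1) / 4.0 + (n - 1)

def estimate_energy_nj(n: int, e_cmp_nj: float, e_fixed_nj: float) -> float:
    """Average-case energy (nJ) = fixed overhead + per-comparison cost."""
    return e_fixed_nj + e_cmp_nj * avg_comparisons_insertion_sort(n)

# Hypothetical per-comparison and fixed energy costs in nanojoules -- the
# real values would come from measurements on the target processor core.
E_CMP, E_FIXED = 2.1, 350.0
for n in (16, 64, 256):
    print(f"n={n:4d}: ~{estimate_energy_nj(n, E_CMP, E_FIXED) / 1000:.2f} uJ")
```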

Relevance: 30.00%

Abstract:

Mechanical fatigue is a failure phenomenon that occurs due to the repeated application of mechanical loads. Very High Cycle Fatigue (VHCF) is considered the domain of fatigue lives greater than 10 million load cycles. An increasing number of structural components have service lives in the VHCF regime, for instance in automotive and high-speed train transportation, gas turbine disks, and components of paper production machinery. Safe and reliable operation of these components depends on knowledge of their VHCF properties. In this thesis, both experimental tools and theoretical modelling were utilized to develop a better understanding of VHCF phenomena. In the experimental part, ultrasonic fatigue testing at 20 kHz of cold rolled and hot rolled stainless steel grades was conducted, and fatigue strengths in the VHCF regime were obtained. The mechanisms of fatigue crack initiation and short crack growth were investigated using electron microscopy. For the cold rolled stainless steels, crack initiation and early growth occurred through the formation of the Fine Granular Area (FGA), observed on the fracture surface and in TEM observations of cross-sections. Crack growth in the FGA appears to control more than 90% of the total fatigue life. For the hot rolled duplex stainless steels, fatigue crack initiation occurred due to the accumulation of plastic fatigue damage at the external surface, and early crack growth proceeded through a crystallographic growth mechanism. Theoretical modelling of complex cracks involving kinks and branches in an elastic half-plane under static loading was carried out using the Distributed Dislocation Dipole Technique (DDDT), implemented for 2D crack problems. Both fully open and partially closed crack cases were analyzed. The main aim of developing the DDDT was to compute stress intensity factors. The computations attained accuracy within 2% of solutions obtained by the Finite Element Method.
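
For context on the quantity the DDDT computes, the sketch below evaluates the textbook mode-I stress intensity factor K_I = sigma * sqrt(pi * a) for a straight through crack in an infinite plate under remote tension; kinked and branched geometries, the subject of the thesis, have no such closed form and require numerical techniques. The numbers are illustrative.

```python
import math

def k1_center_crack(sigma_mpa: float, a_m: float) -> float:
    """Mode-I stress intensity factor for a through crack of half-length a
    in an infinite plate under remote tension sigma: K_I = sigma*sqrt(pi*a).
    Returns MPa*sqrt(m)."""
    return sigma_mpa * math.sqrt(math.pi * a_m)

# Illustrative values: 100 MPa remote stress, 2 mm half crack length.
sigma, a = 100.0, 2e-3
print(f"K_I = {k1_center_crack(sigma, a):.2f} MPa*sqrt(m)")
```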

Relevance: 30.00%

Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and is still undergoing, like statistical analysis in general, a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
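
A minimal univariate sketch of the CUSUM idea underlying these methods: the statistic max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)) becomes large when the mean shifts, and its argmax estimates the change point. This scalar version only illustrates the principle; it omits the Hilbert-space machinery, long-run variance estimation and Darling-Erdős normalization of the thesis, and the data are simulated.

```python
import numpy as np

def cusum_statistic(x: np.ndarray) -> tuple[float, int]:
    """Classical change-in-mean CUSUM: returns (max statistic, argmax index).
    Normalizing by the overall sample standard deviation is a simplification;
    under dependence a long-run variance estimator would be used instead."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    cusum = np.abs(s - (k / n) * s[-1]) / (x.std(ddof=1) * np.sqrt(n))
    return float(cusum.max()), int(cusum.argmax()) + 1

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100),   # mean 0 before the break
                    rng.normal(0.8, 1.0, 100)])  # mean shift afterwards
stat, khat = cusum_statistic(x)
print(f"max CUSUM = {stat:.2f}, estimated change point = {khat}")
```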

Relevance: 30.00%

Abstract:

Estimating the height of plants with greater precision and accuracy has been a challenge for the scientific community. The objective of this study is to evaluate the spatial variation of tree heights at different spatial scales in areas of the city of Recife, Brazil, using LiDAR remote sensing data. The LiDAR data were processed in the QT Modeler (Quick Terrain Modeler v. 8.0.2) software from Applied Imagery. The TreeVaW software was used to estimate the heights and crown diameters of trees. The tree heights obtained were consistent with field measurements.
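
Height-extraction tools of this kind generally operate on a canopy height model (CHM), the digital surface model minus the digital terrain model, and locate treetops as local maxima. The sketch below shows that generic pipeline; it is not TreeVaW's actual algorithm, and the rasters, window size and height threshold are hypothetical.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def treetops_from_chm(dsm: np.ndarray, dtm: np.ndarray,
                      window: int = 3, min_height: float = 2.0):
    """Derive a canopy height model and return (row, col, height) of cells
    that are local maxima above `min_height`."""
    chm = dsm - dtm                                   # canopy height model
    local_max = maximum_filter(chm, size=window) == chm
    rows, cols = np.nonzero(local_max & (chm >= min_height))
    return [(r, c, float(chm[r, c])) for r, c in zip(rows, cols)]

# Tiny synthetic rasters for illustration only.
rng = np.random.default_rng(2)
dtm = np.zeros((20, 20))
dsm = rng.uniform(0.0, 1.0, (20, 20))
dsm[5, 7] += 12.0                                     # one conspicuous "tree"
for r, c, h in treetops_from_chm(dsm, dtm):
    print(f"treetop at ({r},{c}), height {h:.1f} m")
```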

Relevance: 20.00%

Abstract:

Raman spectroscopy of formamide-intercalated kaolinites treated using controlled-rate thermal analysis technology (CRTA), which allows the separation of adsorbed formamide from intercalated formamide, is reported. The Raman spectra of the CRTA-treated formamide-intercalated kaolinites are significantly different from those of the untreated intercalated kaolinites, which display a combination of both intercalated and adsorbed formamide. An intense band is observed at 3629 cm⁻¹, attributed to the inner surface hydroxyls hydrogen bonded to the formamide. Broad bands are observed at 3600 and 3639 cm⁻¹, assigned to the inner surface hydroxyls hydrogen bonded to adsorbed water molecules. The hydroxyl-stretching band of the inner hydroxyl is observed at 3621 cm⁻¹ in the Raman spectra of the CRTA-treated formamide-intercalated kaolinites. The results of thermal analysis show that the amount of formamide intercalated between the kaolinite layers is independent of the presence of water. Significant differences are observed in the C=O stretching region between the adsorbed and intercalated formamide.

Relevance: 20.00%

Abstract:

Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a new research field, and the theoretical analysis of the associated numerical methods is not yet fully developed. In this paper an explicit conservative difference approximation (ECDA) for the TFDE is proposed. We give a detailed analysis of this ECDA and generate discrete random walk models suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for the TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the technique.
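
As an illustration of an explicit scheme for the TFDE, the sketch below uses the standard L1 discretization of the Caputo derivative with the spatial term taken at the previous time level. This is one common explicit variant, not necessarily the exact ECDA of the paper, and the grid parameters are chosen to respect the scheme's conditional stability.

```python
import math
import numpy as np

def tfde_explicit(alpha=0.7, K=1.0, L=1.0, nx=11, nt=400, T=0.1):
    """Explicit L1-type finite-difference scheme for D_t^alpha u = K u_xx
    on (0, L) with zero Dirichlet boundaries.  Conditionally stable: the
    ratio mu = Gamma(2-alpha)*K*tau^alpha/h^2 must be kept small."""
    h, tau = L / (nx - 1), T / nt
    mu = math.gamma(2.0 - alpha) * K * tau ** alpha / h ** 2
    x = np.linspace(0.0, L, nx)
    # L1 weights b_j = (j+1)^{1-alpha} - j^{1-alpha}
    j = np.arange(nt + 1)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    hist = [np.sin(np.pi * x)]                       # initial condition
    for n in range(nt):
        u = hist[-1]
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]     # second difference
        # memory term: sum_{j=1}^{n} b_j (u^{n+1-j} - u^{n-j})
        mem = sum(b[k] * (hist[n + 1 - k] - hist[n - k]) for k in range(1, n + 1))
        new = u - mem + mu * lap
        new[0] = new[-1] = 0.0                       # Dirichlet boundaries
        hist.append(new)
    return x, hist[-1]

x, u = tfde_explicit()
print(f"max u at final time: {u.max():.4f}")
```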

Relevance: 20.00%

Abstract:

This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques are examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also considered. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed.
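
To make the staggered leapfrog idea concrete, here is a minimal 1D Yee-style update for Maxwell's equations in normalized units, with E and H offset by half a cell and half a time step. This is the classic staggered-grid FDTD baseline rather than the cell-centred finite-volume schemes studied in this work; all parameters are illustrative.

```python
import numpy as np

def leapfrog_maxwell_1d(nx=200, nt=400, courant=0.5):
    """1D staggered leapfrog (Yee) update in normalized units (c = 1,
    eps0 = mu0 = 1): Ez lives on integer nodes, Hy on half nodes."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)
    for n in range(nt):
        hy += courant * np.diff(ez)            # H update (half-step offset)
        ez[1:-1] += courant * np.diff(hy)      # E update; ends stay PEC (Ez=0)
        ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez

ez = leapfrog_maxwell_1d()
print(f"field energy proxy: {np.sum(ez ** 2):.3f}")
```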