46 results for Accuracy and precision
Abstract:
This paper proposes a novel way to combine different observation models in a particle filter framework. This so-called auto-adjustable observation model enhances the particle filter's accuracy when the tracked objects overlap, without imposing a large runtime penalty on the whole tracking system. The approach has been tested in two important real-world situations related to animal behavior: mice and larvae tracking. The proposal was compared to state-of-the-art approaches, and the results show, on the datasets tested, that a good trade-off between accuracy and runtime can be achieved using an auto-adjustable observation model. (C) 2009 Elsevier B.V. All rights reserved.
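The switching idea the abstract describes can be sketched in a few lines. This is a minimal 1-D sketch under stated assumptions: the two likelihood functions and the overlap flag below are illustrative stand-ins, not the paper's actual observation models.

```python
import random

# Hypothetical observation models for a 1-D particle filter; the paper's
# actual models for mice/larvae tracking are not reproduced here.

def cheap_likelihood(particle, measurement):
    # Fast model: penalise squared distance to the measurement.
    return 1.0 / (1.0 + (particle - measurement) ** 2)

def robust_likelihood(particle, measurement):
    # Heavier-tailed model, more tolerant when targets overlap.
    return 1.0 / (1.0 + abs(particle - measurement))

def update(particles, measurement, targets_overlap):
    # Auto-adjustment: choose the observation model per update step.
    likelihood = robust_likelihood if targets_overlap else cheap_likelihood
    weights = [likelihood(p, measurement) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling with the normalised weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Switching the weighting function per step, rather than per run, is what keeps the expensive model's cost confined to the overlap frames.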
Abstract:
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry with a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput and improved the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used only to inject the solution into the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h⁻¹. Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s⁻¹. For a concentration range between 0.010 and 0.25 mg L⁻¹, the current (i_p, µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L⁻¹): i_p = (−20.5 ± 0.3)·C_paraquat − (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L⁻¹, respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
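The reported calibration line can be inverted to recover a concentration from a measured peak current. The helper below simply rearranges the published equation (central values: slope −20.5 µA per mg L⁻¹, intercept −0.02 µA) and guards against readings below the reported limit of detection; it is an illustration, not part of the paper's procedure.

```python
# Reported calibration: i_p = (-20.5)·C - 0.02, with i_p in µA and the
# paraquat concentration C in mg/L (central values only; the reported
# uncertainties on slope and intercept are ignored in this sketch).

SLOPE = -20.5      # µA per (mg/L)
INTERCEPT = -0.02  # µA
LOD = 0.0020       # mg/L (the reported 2.0 µg/L limit of detection)

def paraquat_concentration(peak_current_uA):
    """Invert the linear calibration to recover concentration in mg/L."""
    c = (peak_current_uA - INTERCEPT) / SLOPE
    if c < LOD:
        raise ValueError("reading below the reported limit of detection")
    return c
```

For example, a peak current of −2.07 µA maps back to 0.10 mg L⁻¹, inside the stated linear range.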
Abstract:
The electrochemical oxidation of promethazine hydrochloride was performed on highly boron-doped diamond electrodes. Cyclic voltammetry experiments showed that the oxidation mechanism involves the formation of an adsorbed product that is more readily oxidized, producing a new peak at lower potential values whose intensity can be increased by applying the accumulation potential for given times. The parameters were optimized, and the highest current intensities were obtained by applying +0.78 V for 30 seconds. The square-wave adsorptive voltammetry results obtained in BR buffer showed two well-defined peaks, dependent on the pH and on the voltammetric parameters. The best responses were obtained at pH 4.0, a frequency of 50 s⁻¹, a step of 2 mV, and an amplitude of 50 mV. Under these conditions, linear responses were obtained for concentrations from 5.96 × 10⁻⁷ to 4.76 × 10⁻⁶ mol L⁻¹, with calculated detection limits of 2.66 × 10⁻⁸ mol L⁻¹ (8.51 µg L⁻¹) for peak 1 and 4.61 × 10⁻⁸ mol L⁻¹ (14.77 µg L⁻¹) for peak 2. Precision and accuracy were evaluated by repeatability and reproducibility experiments, which yielded values of less than 5.00% for both voltammetric peaks. The applicability of this procedure was tested on commercial formulations of promethazine hydrochloride by observing the stability, specificity, recovery and precision of the procedure in complex samples. All results were compared to the procedure recommended by the British Pharmacopoeia. The voltammetric results indicate that the proposed procedure is stable and sensitive, with good reproducibility even when the accumulation steps involve short times. It is therefore very suitable for the development of an electroanalytical procedure, providing adequate sensitivity and a reliable method.
Abstract:
Soil organic matter (SOM) plays an important role in the physical, chemical and biological properties of soil; the amount of SOM is therefore important for soil management in sustainable agriculture. The objective of this work was to evaluate the amount of SOM in oxisols by different methods and to compare them, using principal component analysis, with regard to their limitations. The methods used were Walkley-Black, elemental analysis, total organic carbon (TOC) and thermogravimetry. According to our results, TOC and elemental analysis were the most satisfactory methods for carbon quantification, owing to their better accuracy and reproducibility.
Abstract:
Research in the field of ophthalmology is a dynamic process that requires dissemination, especially through written reports. Scientific writing relies on Greco-Latin terminology. Aspects of clarity, objectivity and precision in writing are discussed, and a succinct glossary of these terms as applied to scientific texts is offered.
Abstract:
The relatively large number of nearby radio-quiet, thermally emitting isolated neutron stars (INSs) discovered in the ROSAT All-Sky Survey, dubbed the "Magnificent Seven", suggests that they belong to a formerly neglected major component of the overall INS population. So far, attempts to discover similar INSs beyond the solar vicinity have failed to confirm any reliable candidate. The good positional accuracy and soft X-ray sensitivity of the EPIC cameras onboard the XMM-Newton satellite allow us to search efficiently for new thermally emitting INSs. We used the 2XMMp catalogue to select sources with no catalogued candidate counterparts and with X-ray spectra similar to those of the Magnificent Seven, but seen at greater distances and thus undergoing higher interstellar absorption. Identifications in more than 170 astronomical catalogues and visual screening allowed us to select fewer than 30 good INS candidates. In order to rule out alternative identifications, we obtained deep ESO-VLT and SOAR optical imaging for the X-ray brightest candidates. We report here on the optical follow-up results of our search and discuss the possible nature of 8 of our candidates. A high X-ray-to-optical flux ratio, together with a stable flux and a soft X-ray spectrum, makes the brightest source of our sample, 2XMM J104608.7-594306, a newly discovered thermally emitting INS. The X-ray source 2XMM J010642.3+005032 has no evident optical counterpart and should be investigated further. The remaining X-ray sources are most probably identified with cataclysmic variables and active galactic nuclei, as inferred from the colours and flux ratios of their likely optical counterparts. Beyond the finding of new thermally emitting INSs, our study aims at constraining the space density of this Galactic population at great distances and at determining whether their apparently high density is a local anomaly or not.
Abstract:
Least squares collocation is a mathematical technique used in geodesy to represent the Earth's anomalous gravity field from data that are heterogeneous in type and precision. The use of this technique to represent the gravity field requires the statistical characteristics of the data, expressed through a covariance function. The covariances reflect the behavior of the gravity field in magnitude and roughness. From the statistical point of view, the covariance function represents the statistical dependence among quantities of the gravity field at distinct points or, in other words, their tendency to have the same magnitude and the same sign. Determining the covariance functions is necessary both to describe the behavior of the gravity field and to evaluate its functionals. This paper presents the results of a study on plane and spherical covariance functions for determining gravimetric geoid models.
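As a rough illustration of the plane case only, an empirical covariance function can be estimated by averaging products of anomaly pairs binned by horizontal distance. The point format and bin width below are assumptions made for this sketch, not the paper's procedure.

```python
import math
from collections import defaultdict

# Minimal sketch of an empirical plane covariance function: average the
# products of gravity-anomaly pairs, binned by horizontal separation.
# Each point is assumed to be a (x, y, anomaly) triple.

def empirical_covariance(points, bin_width):
    """Return {binned_distance: mean product of anomaly pairs}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, (xi, yi, gi) in enumerate(points):
        for xj, yj, gj in points[i:]:  # includes the lag-0 (variance) term
            d = math.hypot(xi - xj, yi - yj)
            b = round(d / bin_width) * bin_width
            sums[b] += gi * gj
            counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}
```

The value at zero distance estimates the signal variance; the decay of the curve with distance is what the fitted analytical covariance model must reproduce.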
Abstract:
We study quasinormal modes and scattering properties, via calculation of the S-matrix, for scalar and electromagnetic fields propagating in the background of spherically symmetric and axially symmetric traversable Lorentzian wormholes of generic shape. Such wormholes are described by the general Morris-Thorne ansatz. The properties of quasinormal ringing and scattering are shown to be determined by the behavior of the wormhole's shape function b(r) and shift factor Phi(r) near the throat. In particular, wormholes whose shape function satisfies db/dr ≈ 1 near the throat have very long-lived quasinormal modes in the spectrum. We have proved that axially symmetric traversable Lorentzian wormholes, unlike black holes and other compact rotating objects, do not allow for superradiance. As a by-product, we have shown that the 6th-order WKB formula used for scattering problems of black holes or wormholes gives quite high accuracy and can thus be used for accurate calculations of Hawking radiation processes around various black holes.
Abstract:
This work presents the study and development of a combined fault location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originating from fault situations registered at the terminals of a system. By processing these signals with the WT, it is possible to determine the arrival times of the travelling voltage and/or current waves from the fault point to the terminals, as well as to estimate the fundamental frequency components. The new approach yields a reliable and accurate fault location scheme by combining several different solutions: the main idea is a decision routine that selects which method should be used in each situation presented to the algorithm. The combined algorithm was tested for different fault conditions through simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability for the proposed method.
Abstract:
Most ordinary finite element formulations for 3D frame analysis do not consider the warping of cross-sections as part of their kinematics. The torsional stiffness must therefore be introduced directly by the user into the computational software, and the bar is treated as if it worked under the no-warping hypothesis. This approach does not give good results for general structural elements used in engineering: both displacement and stress calculations reveal significant deficiencies in linear as well as non-linear applications. For linear analysis, displacements can be corrected by assuming a stiffness that results in acceptable global displacements of the analyzed structure; however, the stress calculation will be far from reality. For nonlinear analysis, the deficiencies are even worse. In the past forty years, special structural matrix analysis and finite element formulations have been proposed in the literature to include warping and bending-torsion effects in 3D general frame analysis for both linear and non-linear situations. In this work, using a kinematics improvement technique, the degree of freedom "warping intensity" is introduced following a new approach for 3D frame elements. This degree of freedom is associated with the warping basic mode, a geometric characteristic of the cross-section. It does not have a direct relation to the rate of twist rotation along the longitudinal axis, as in existing formulations. Moreover, a linear strain variation mode is provided for the geometrically non-linear approach, for which a complete 3D constitutive relation (Saint-Venant-Kirchhoff) is adopted. The proposed technique allows the consideration of inhomogeneous cross-sections of any geometry. Various examples demonstrate the accuracy and applicability of the proposed formulation. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
In this paper, a new boundary element method formulation for the elastoplastic analysis of plates with geometrical nonlinearities is presented. The von Mises criterion with linear isotropic hardening is used to evaluate the plastic zone. Large deflections are assumed, but within the context of small strains. To derive the boundary integral equations, von Kármán's hypothesis is taken into account. An initial stress field is applied to correct the true stresses according to the adopted criterion. Isoparametric linear elements are used to approximate the boundary unknowns, while triangular internal cells with linear shape functions are adopted to evaluate the domain influences. The nonlinear system of equations is solved using an implicit scheme together with the consistent tangent operator derived in the paper. Numerical examples demonstrate the accuracy and validity of the proposed formulation.
Abstract:
Purpose - The purpose of this paper is to identify the key elements of a new rapid prototyping process that involves layer-by-layer deposition of liquid-state material while an ultraviolet line source cures the deposited material. The paper reports studies of filament behaviour, deposition accuracy, filament interaction and the functional feasibility of the system. Additionally, the author describes the proposed process, the equipment used in these studies and the material developed for this application. Design/methodology/approach - The research was separated into three study areas according to their goals. In the first, both filament behaviour and deposition accuracy were studied. The design of the experiment is described with a focus on four response factors (bead width, filament quality, deposition accuracy and deposition continuity) as a function of three control factors (deposition height, deposition velocity and extrusion velocity). The author also studied the interaction between filaments as a function of bead centre distance. In addition, two test samples were prepared to serve as a proof of the methodology and to verify the functional feasibility of the process under study. Findings - The results show that the proposed process is functionally feasible and that it is possible to identify the main effects of the control factors on the response factors. That analysis is used to predict the condition of the process as a function of its controlling parameters. Bead centre distances resulting in specific behaviours were also identified, and the types of interaction between filaments were analysed and sorted into union, separation and indeterminate. Finally, the functional feasibility of the process was proved by building two test parts.
Originality/value - This paper proposes a new rapid prototyping process and presents test studies related to it. The author focused on filament behaviour, deposition accuracy and the interaction between filaments, and studied the functional feasibility of the process to provide new information that is also useful for the development of other rapid prototyping processes.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, AME tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
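The insert-or-remove idea can be sketched as a generic greedy loop. The `score` callback below is a stand-in for the MaxEnt training objective, which is an assumption: the real AME operates inside MaxEnt training, not over an arbitrary black-box score.

```python
# Greedy single-feature adaptation sketch: each iteration evaluates every
# possible one-feature insertion or removal and keeps the best move only
# if it improves the current score (a hypothetical model-quality callback).

def adaptive_feature_selection(features, score, max_iters=20):
    active, best = set(), score(set())
    for _ in range(max_iters):
        moves = [active | {f} for f in features if f not in active]
        moves += [active - {f} for f in active]
        if not moves:
            break
        cand = max(moves, key=score)
        s = score(cand)
        if s <= best:
            break  # converged: no single-feature change helps
        active, best = cand, s
    return active, best
```

Changing one feature per iteration keeps each step cheap, which is the same trade-off the abstract credits for the faster convergence.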
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix that would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
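The computational advantage of separable (Kronecker-structured) operators can be illustrated with the standard identity (A ⊗ B)vec(X) = vec(B X Aᵀ). This shows the general principle behind such fast transforms only, not the KAT itself:

```python
import numpy as np

# Applying a Kronecker-structured operator via two small matrix products,
# instead of forming the large product np.kron(A, B) explicitly.

def kron_apply(A, B, X):
    """Compute (A ⊗ B) vec(X) as vec(B @ X @ A.T), column-major vec."""
    return (B @ X @ A.T).flatten(order="F")

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 3))  # rows must match B's column count

fast = kron_apply(A, B, X)                    # O(small matrix products)
slow = np.kron(A, B) @ X.flatten(order="F")   # O(large dense product)
assert np.allclose(fast, slow)
```

For an m×m A and n×n B, the left side costs roughly O(mn(m+n)) instead of O(m²n²), which is the kind of saving that makes separable designs attractive.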
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD); its diagnosis must be based on clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). New approaches are therefore necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study, we used a Machine Learning (ML) technique, the Support Vector Machine (SVM), to search for patterns in EEG epochs that differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for the automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The analysis of individual EEG epochs achieved 79.9% accuracy and 83.2% sensitivity; the analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
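For reference, the reported per-epoch and per-patient figures follow the usual confusion-matrix definitions of accuracy and sensitivity. The helpers below just encode those definitions; the true/false positive and negative counts are inputs, not the study's data.

```python
# Standard classification metrics, with AD patients taken as the positive
# class: tp/fn are AD patients classified correctly/incorrectly, tn/fp are
# controls classified correctly/incorrectly.

def accuracy(tp, tn, fp, fn):
    """Fraction of all subjects classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """True-positive rate: fraction of AD patients detected."""
    return tp / (tp + fn)
```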