999 results for Freinet, Techniques


Relevance:

20.00%

Publisher:

Abstract:

The present paper reports definite evidence for the significance of wavelength positioning accuracy in multicomponent analysis techniques for the correction of line interferences in inductively coupled plasma atomic emission spectrometry (ICP-AES). Using scanning spectrometers commercially available today, a large relative error, Δ(A), may occur in the estimated analyte concentration owing to wavelength positioning errors, unless a procedure for data processing can eliminate the problem of optical instability. The emphasis is on the effect of the positioning error (δλ) in a model scan, which is evaluated theoretically and determined experimentally. A quantitative relation between Δ(A) and δλ, the peak distance, and the effective widths of the analysis and interfering lines is established under the assumption of Gaussian line profiles. The agreement between calculated and experimental Δ(A) is also illustrated. The Δ(A) originating from δλ is independent of the net analyte/interferent signal ratio; this contrasts with the situation for the positioning error (dλ) in a sample scan, where Δ(A) decreases as the ratio increases. Compared with dλ, the effect of δλ is generally less significant.
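
A minimal numerical sketch of the effect described above, assuming Gaussian line profiles and a simple two-component least-squares fit; all line positions, widths and concentrations are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian(wl, center, width):
    """Gaussian line profile with effective half-width `width`."""
    return np.exp(-((wl - center) / width) ** 2)

# Illustrative wavelength axis (nm): analyte line plus a nearby interferent.
wl = np.linspace(-0.05, 0.05, 401)
c_analyte, c_interferent = 1.0, 10.0              # true concentrations (a.u.)
sample = (c_analyte * gaussian(wl, 0.000, 0.010)
          + c_interferent * gaussian(wl, 0.015, 0.010))

def relative_error(delta_lambda):
    """Delta(A) when the *model* scans are shifted by delta_lambda
    relative to the sample scan before the least-squares MCA fit."""
    model = np.column_stack([
        gaussian(wl + delta_lambda, 0.000, 0.010),
        gaussian(wl + delta_lambda, 0.015, 0.010),
    ])
    est, *_ = np.linalg.lstsq(model, sample, rcond=None)
    return (est[0] - c_analyte) / c_analyte

for dl in (0.0, 0.001, 0.002, 0.005):             # delta_lambda in nm
    print(f"delta_lambda = {dl:.3f} nm -> Delta(A) = {relative_error(dl):+.3f}")
```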

Relevance:

20.00%

Publisher:

Abstract:

The present paper deals with the evaluation of the relative error (Δ(A)) in estimated analyte concentrations originating from the wavelength positioning error in a sample scan when multicomponent analysis (MCA) techniques are used for correcting line interferences in inductively coupled plasma atomic emission spectrometry. In the theoretical part, a quantitative relation of Δ(A) to the extent of line overlap, the bandwidth and the magnitude of the positioning error is developed under the assumption of Gaussian line profiles. Measurements of eleven samples covering various typical line interferences showed that the calculated Δ(A) generally agrees well with the experimental one. An expression for the true detection limit associated with MCA techniques was thus formulated. With MCA techniques, the determinations of the analyte and interferent concentrations depend on each other, while with conventional correction techniques, such as the three-point method, the estimate of the interfering signals is independent of the analyte signals. Therefore, a given positioning error results in a larger Δ(A), and hence a higher true detection limit, for MCA techniques than for conventional correction methods, although the latter can be a reasonable approximation of the former when the peak distance, expressed in units of the effective width of the interfering line, is larger than 0.4. In the light of the effect of wavelength positioning errors, MCA techniques have no advantage over conventional correction methods unless they can bring an essential reduction of the positioning error.
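
For contrast with MCA, a minimal sketch of the conventional three-point correction mentioned above: the background under the analysis line is interpolated linearly between two off-peak points, so the interferent estimate never feeds back into the analyte estimate. All line parameters are illustrative:

```python
import numpy as np

def gaussian(wl, center, width):
    return np.exp(-((wl - center) / width) ** 2)

wl = np.linspace(-0.05, 0.05, 401)
signal = 1.0 * gaussian(wl, 0.0, 0.010) + 10.0 * gaussian(wl, 0.03, 0.010)

def three_point(wl, signal, peak, left, right):
    """Net analyte signal at `peak` after subtracting a background
    interpolated linearly between the off-peak points `left` and `right`."""
    s = lambda w: np.interp(w, wl, signal)
    background = s(left) + (s(right) - s(left)) * (peak - left) / (right - left)
    return s(peak) - background

print(three_point(wl, signal, peak=0.0, left=-0.02, right=0.02))
```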

Relevance:

20.00%

Publisher:

Abstract:

One of the most attractive features of derivative spectrometry is its higher resolving power. In the present paper, numerical derivative techniques are evaluated from the viewpoint of the increase in selectivity, the latter being expressed in terms of the interferent equivalent concentration (IEC). Typical spectral interferences are covered, including flat background, sloped background, simple curved background and various types of line overlap with different overlapping degrees, the overlapping degree being defined as the ratio of the net interfering signal at the analysis wavelength to the peak signal of the interfering line. The IECs in the derivative spectra are decreased by one to two orders of magnitude compared with those in the original spectra and, in most cases, fall below the conventional detection limits. The overlapping degree is the dominant factor that determines whether an analysis line can be resolved from an interfering line with the derivative techniques. Generally, the second-derivative technique is effective only for line overlap with an overlapping degree of less than 0.8. The effects of other factors, such as line shape, data smoothing, step size and the analyte-to-interferent intensity ratio, on the performance of the derivative techniques are also discussed. All results are illustrated with practical examples.
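
A minimal sketch of the second-derivative idea on two overlapping Gaussian lines, using a Savitzky-Golay filter as the numerical derivative; line positions, widths and the smoothing window are illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-1, 1, 1001)
analyte = gaussian(x, 0.00, 0.05)
interferent = gaussian(x, 0.12, 0.05)                  # overlapping degree < 0.8
spectrum = analyte + 5.0 * interferent + 0.5 * x + 1.0  # plus sloped background

# Second derivative: the linear background has zero second derivative,
# and the analyte peak is sharpened relative to the overlapping interferent.
d2 = savgol_filter(spectrum, window_length=51, polyorder=3, deriv=2)

i0 = np.argmin(np.abs(x))                              # analyte wavelength
print("original signal at analyte wavelength:", spectrum[i0])
print("2nd-derivative signal at analyte wavelength:", d2[i0])
```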

Relevance:

20.00%

Publisher:

Abstract:

Carbonaceous deposits formed during the temperature-programmed surface reaction (TPSR) of methane dehydroaromatization (MDA) over Mo/HZSM-5 catalysts have been investigated by TPH, TPCO2 and TPO, in combination with thermogravimetric analysis (TG). The TPO profiles of the coked catalyst after TPSR of MDA show two temperature peaks: one at about 776 K and the other at about 865 K. Subsequent TPH experiments reduced only the area of the high-temperature peak and had no effect on the area of the low-temperature peak. On the other hand, the TPO profiles of the coked catalyst after subsequent TPCO2 experiments showed an obvious reduction in the areas of both the high- and low-temperature peaks, particularly in the area of the low-temperature peak. On the basis of the TPSR, TPH and TPCO2 experiments and the corresponding TG analysis, a quantitative analysis of the coke and the kinetics of its burning-off process have been carried out.
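
One simple way to quantify the two coke species from such a two-peak TPO profile is to deconvolute it into two Gaussian components and compare their areas. A minimal sketch on synthetic data, with the peak positions taken from the abstract and every other value illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(T, a1, T1, w1, a2, T2, w2):
    """Sum of two Gaussian combustion peaks on a temperature axis."""
    g = lambda a, mu, w: a * np.exp(-0.5 * ((T - mu) / w) ** 2)
    return g(a1, T1, w1) + g(a2, T2, w2)

# Synthetic TPO profile with peaks near 776 K and 865 K.
T = np.linspace(600, 1000, 400)
rng = np.random.default_rng(0)
profile = (two_peaks(T, 1.0, 776, 25, 0.6, 865, 30)
           + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(two_peaks, T, profile, p0=(1, 770, 20, 0.5, 860, 20))
a1, T1, w1, a2, T2, w2 = popt
# Peak areas are proportional to the amounts of the two coke species.
print("low-T coke area :", a1 * w1 * np.sqrt(2 * np.pi))
print("high-T coke area:", a2 * w2 * np.sqrt(2 * np.pi))
```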

Relevance:

20.00%

Publisher:

Abstract:

This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across the surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point.

Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on the possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo.

The visual inspection of surface defects in metal castings is considered, and two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane. Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent in the casting process translate into constraints on object shape; to exploit these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
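
A minimal sketch of the photometric stereo idea under the simplest assumptions (a Lambertian surface and three known, non-coplanar light directions); the light directions and the test normal below are illustrative:

```python
import numpy as np

# Rows: three known illumination directions (unit vectors, non-coplanar).
L = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 0.866],
    [0.0, 0.5, 0.866],
])

def photometric_stereo(I):
    """Recover albedo and unit surface normal at one pixel from the three
    intensities I measured under the light directions in L.
    Lambertian model: I = albedo * (L @ n)."""
    g = np.linalg.solve(L, I)        # g = albedo * n
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Intensities synthesized for a known normal, then recovered.
n_true = np.array([0.2, -0.1, 0.97])
n_true /= np.linalg.norm(n_true)
I = 0.8 * (L @ n_true)
print(photometric_stereo(I))         # ~ (0.8, n_true)
```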

Relevance:

20.00%

Publisher:

Abstract:

C.R. Bull, R. Zwiggelaar and R.D. Speller, 'Review of inspection techniques based on the elastic and inelastic scattering of X-rays and their potential in the food and agricultural industry', Journal of Food Engineering 33 (1-2), 167-179 (1997)

Relevance:

20.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, constrained by their reliance on partial information and by the heavy computation they incur, which limits their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an iterative Expectation-Maximization (EM) algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
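
A minimal sketch of an EM-style update for this estimation problem, assuming Poisson-distributed origin-destination demands observed only through link counts y = A x. The multiplicative update below is the standard EM iteration for a Poisson linear model, not necessarily the paper's exact variant; the topology, counts and `x0` (which plays the role of an informed prior/starting point) are illustrative:

```python
import numpy as np

# Routing matrix A: rows = links, columns = origin-destination demands.
# A[i, j] = 1 if demand j traverses link i (toy 3-link, 4-demand topology).
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

y = np.array([300.0, 380.0, 260.0])   # measured link counts (SNMP-style)

def em_poisson(A, y, x0, iters=200):
    """EM iteration for y ~ Poisson(A x); x0 is the starting point,
    e.g. an informed prior from packet traces or a gravity model."""
    x = x0.astype(float).copy()
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        ratio = y / (A @ x)            # how far each link count is off
        x *= (A.T @ ratio) / col_sums  # multiplicative EM update
    return x

x0 = np.full(4, 100.0)                 # uninformed start, for illustration
print(em_poisson(A, y, x0))            # estimated demands, with A @ x ~= y
```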

Relevance:

20.00%

Publisher:

Abstract:

Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant. Redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify training data clusters and thus reduce the training data and the training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping. The demonstration shows a six-fold decrease in training time with little or no loss of accuracy on evaluation data.
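
A minimal sketch of the reduction step, using a simple grid-quantization equality test in place of the paper's ART2-A and Generalized Equality Classifier; the tolerance and the sampled mapping are illustrative:

```python
import numpy as np

def reduce_training_set(X, Y, tol=0.1):
    """Keep one input/output pair per quantization cell: pairs that agree
    to within `tol` in every coordinate are treated as redundant."""
    seen, keep = set(), []
    for i, row in enumerate(np.hstack([X, Y])):
        key = tuple(np.round(row / tol).astype(int))
        if key not in seen:
            seen.add(key)
            keep.append(i)
    return X[keep], Y[keep]

# A deliberately redundant sampling of a nonlinear mapping.
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(5000, 2))
Y = (np.sin(X[:, 0]) * np.cos(X[:, 1]))[:, None]
Xr, Yr = reduce_training_set(X, Y, tol=0.1)
print(len(X), "->", len(Xr), "training pairs")
```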

Relevance:

20.00%

Publisher:

Abstract:

A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term for this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers (ADMM) is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared when applied to a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
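
A minimal sketch of the ADMM iteration underlying this class of iterative distributed MPC, for the simplest case of two agents minimizing private quadratic costs subject to agreeing on a shared variable. The costs and penalty parameter are illustrative; a real distributed MPC would carry dynamics and constraints inside each local problem:

```python
import numpy as np

# Two agents with private quadratic costs f_i(x) = 0.5 * a_i * (x - b_i)**2
# must agree on a shared setpoint z (consensus form of ADMM).
a = np.array([1.0, 3.0])
b = np.array([4.0, 0.0])
rho = 1.0                              # ADMM penalty / step size

x = np.zeros(2)                        # local copies of the shared variable
z = 0.0                                # consensus variable
u = np.zeros(2)                        # scaled dual variables

for k in range(50):
    # Each agent solves its local problem (in parallel in a real network).
    x = (a * b + rho * (z - u)) / (a + rho)
    # Coordinator averages; only x_i + u_i needs to be communicated.
    z = np.mean(x + u)
    # Dual update drives the local copies toward consensus.
    u += x - z

print("consensus z:", z)               # analytic optimum: sum(a*b)/sum(a) = 1.0
```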

Relevance:

20.00%

Publisher:

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. These orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step; iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
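
A minimal sketch of the non-uniqueness point above, using SymPy to compute Gröbner bases of the same ideal under two different term orders; the polynomials are an arbitrary illustrative example:

```python
from sympy import groebner, symbols

x, y = symbols('x y')
polys = [x**2 + y**2 - 1, x*y - 2]

# Same ideal, two term orders: the bases (and their leading terms) differ.
print(groebner(polys, x, y, order='lex'))
print(groebner(polys, x, y, order='grevlex'))
```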

Relevance:

20.00%

Publisher:

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, and elsewhere. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; in the late 1950s, however, researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared with the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; the proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, which are a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, and from these the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to that of RS codes) in multihop wireless sensor network (WSN) applications.
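
A minimal sketch of evaluation-based RS encoding over a small prime field; the field size and code parameters are illustrative toy values, and the decoder side is omitted:

```python
# Reed-Solomon encoding by evaluation over GF(p), p prime.
P = 13                                   # field GF(13)
K, N = 3, 7                              # message length k, code length n <= p

def rs_encode(msg):
    """Evaluate the degree-(K-1) message polynomial at N distinct points.
    Any K of the N values determine the polynomial, so the minimum
    distance is N - K + 1."""
    assert len(msg) == K and N <= P
    return [sum(c * pow(a, i, P) for i, c in enumerate(msg)) % P
            for a in range(N)]           # evaluation points 0..N-1

codeword = rs_encode([5, 1, 7])          # message coefficients (c0, c1, c2)
print(codeword)
```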