990 results for sample processing
Abstract:
A flow injection spectrophotometric system is proposed for phosphite determination in fertilizers by the molybdenum blue method, after processing each sample twice on-line, without and with an oxidizing step. The flow system was designed to add sulfuric acid or permanganate solution alternately into the system by simply displacing the injector-commutator from one resting position to the other, allowing the determination of phosphate and total phosphate, respectively. The phosphite concentration is then obtained as the difference between the two measurements. The influence of flow rates, sample volume, and the dimensions of the flow line connecting the injector-commutator to the main analytical channel was evaluated. The proposed method was applied to phosphite determination in commercial liquid fertilizers. Results obtained with the proposed FIA system were not statistically different from those obtained by titrimetry at the 95% confidence level. In addition, recoveries between 94 and 100% were found for spiked fertilizers. The relative standard deviation (n = 12) for the phosphite-converted-phosphate peak alone was <= 3.5% for an 800 mg L-1 P (phosphite) solution. The precision of the difference between total phosphate and phosphate was 1.1% for a 10 mg L-1 P (phosphate) + 3000 mg L-1 P (phosphite) solution. The sampling rate was calculated as 15 determinations per hour, and the reagent consumption was about 6.3 mg of KMnO4, 200 mg of (NH4)6Mo7O24·4H2O, and 40 mg of ascorbic acid per measurement.
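The difference calculation at the heart of this method is simple enough to sketch in Python. This is a minimal illustration assuming a single shared linear molybdenum-blue calibration for both channels; the function name, readings and calibration values are hypothetical, not taken from the paper.

def phosphite_by_difference(abs_total, abs_phosphate, slope, intercept):
    """Convert the two molybdenum-blue absorbances to phosphite-derived P.

    abs_total:     absorbance after the permanganate (oxidizing) step,
                   i.e. phosphate plus oxidized phosphite
    abs_phosphate: absorbance after the sulfuric acid (non-oxidizing) step
    slope, intercept: linear calibration A = slope * C + intercept
    """
    c_total = (abs_total - intercept) / slope        # total P, mg L-1
    c_phosphate = (abs_phosphate - intercept) / slope
    return c_total - c_phosphate                     # phosphite-derived P, mg L-1

# Hypothetical readings: 0.52 A on the oxidized channel, 0.11 A on the other.
print(phosphite_by_difference(0.52, 0.11, slope=0.0005, intercept=0.01))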
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
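The closing decomposition can be written compactly. One plausible LaTeX rendering is sketched below; the symbols are chosen here for illustration and are not the paper's own notation.

% error of a designed filter psi_n (notation assumed, not the paper's):
\varepsilon(\psi_n) \;=\; \varepsilon_{\mathrm{Bayes}}
  \;+\; \Delta_{\mathrm{constraint}}
  \;+\; \Delta_{\mathrm{compression}}
  \;+\; \Delta_{\mathrm{design}}
  \;-\; \Delta_{\mathrm{prior}}

Here \varepsilon_{\mathrm{Bayes}} is the error of the unconstrained optimal filter, \Delta_{\mathrm{constraint}} the cost of constraint, \Delta_{\mathrm{compression}} the cost of compressing the original signal distribution, \Delta_{\mathrm{design}} the finite-sample design cost, and \Delta_{\mathrm{prior}} the error reduction attributable to prior knowledge.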
Abstract:
CHARACTERIZATION STUDY OF CAMBUCI FRUIT [Campomanesia phaea (O. Berg.) Landrum] AND ITS APPLICATION IN JELLY PROCESSING
The objective of this paper was to study possible differences among the cambuci fruit varieties reported by producers, by means of fruit characterization and jelly preparation. The fruits were divided into four possible variety groups, named A, B, C and D, and analyzed for weight, ash, moisture, pH, soluble solids, acidity, ratio, ascorbic acid and water activity. Variety A was chosen for the preparation of the jelly, with two formulations: 50%:50% and 40%:60% fruit to sugar, respectively. Among the possible cambuci varieties, only water activity was not significantly different. The acidity and ratio of variety B are noteworthy, because it differed from the others, presenting a more satisfactory result for in natura consumption. All other tests showed statistically significant differences for at least one variety, but since these data can be influenced by edaphoclimatic conditions they must be interpreted with care. Lower results for pH, acidity, lightness and hue angle were observed for the 60% sugar jelly sample, which contributed to its better results in the preference test for the attributes appearance, color, flavor and texture. The aroma evaluations of the two samples (50% and 60% sugar) did not differ.
Abstract:
Gnocchi is a typical Italian potato-based fresh pasta that can be either homemade or industrially manufactured. The homemade traditional product is consumed fresh on the day it is produced, whereas the industrially manufactured one is vacuum-packed in polyethylene and usually stored under refrigerated conditions. At the industrial level, most kinds of gnocchi are produced from potato derivatives (i.e. flakes, dehydrated products or flour) to which soft wheat flour, salt, emulsifiers and aromas are added. Recently, a novel type of gnocchi emerged on the Italian pasta market, intended to be as similar as possible to the traditional homemade product. It is industrially produced from fresh potatoes as the main ingredient, together with soft wheat flour, pasteurized liquid eggs and salt, and undergoes industrial steam-cooking and mashing treatments. Neither preservatives nor emulsifiers are included in the recipe. The main aim of this work was to examine the industrial manufacture of gnocchi in order to improve the quality characteristics of the final product, by studying the main steps of production, starting from the raw and steam-cooked tubers and moving through the semi-finished materials, such as the potato puree and the formulated dough. For this purpose, the enzymatic activity of the raw and steam-cooked potatoes, the main characteristics of the puree (colour, texture and starch), the interactions among the ingredients of differently formulated doughs, and the basic quality aspects of the final product were investigated. Results obtained in this work indicated that steam cooking influenced the analysed enzymes (pectin methylesterase, PME, and α- and β-amylases) differently in different tissues of the tuber. PME remained active in the cortex and may therefore affect the texture of cooked potatoes used as the main ingredient in the production of gnocchi. The starch-degrading enzymes (α- and β-amylases) were inactivated both in the cortex and in the pith of the tuber. The study performed on the potato puree showed that, of the two analysed samples, the product prepared with two lower-pressure treatments seemed the most suitable for the production of gnocchi, in terms of its better physicochemical and textural properties; it did not show the aggregation phenomena responsible for the hard lumps that may occur in this kind of semi-finished product. The textural properties of the gnocchi doughs were not influenced by the different formulations as expected. Among the ingredients involved in the preparation of the different samples, soft wheat flour seemed to be the most important in determining the quality features of the gnocchi doughs. As a consequence of the interactive effect of the ingredients on the physicochemical and textural characteristics of the different doughs, a uniform and well-defined separation among samples was not obtained. In the comparison of different kinds of gnocchi, the best physicochemical and textural properties were found in the sample made with fresh tubers. This was probably due not only to the use of fresh steam-cooked potatoes, but also to the pasteurized liquid eggs and to the absence of any emulsifier, additive or preservative.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
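As a point of reference, the classical steady-state power balance that the first model generalises can be sketched in LaTeX as follows. The notation is assumed here rather than taken from the thesis, and in the multi-layer extension layer-resolved absorption and properties would replace the single coefficients:

% classical single-layer cutting power balance (notation assumed):
A\,P \;=\; \rho\, v\, w\, d \left[\, c_p\,(T_v - T_0) \;+\; L_m \;+\; L_v \,\right] \;+\; P_{\mathrm{cond}}

where A is the absorptivity, P the beam power, \rho the density, v the cutting speed, w the kerf width, d the layer thickness, c_p the specific heat, T_v and T_0 the vaporisation and ambient temperatures, L_m and L_v the latent heats of melting and vaporisation, and P_{\mathrm{cond}} the conduction loss.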
Abstract:
This paper describes informatics for cross-sample analysis with comprehensive two-dimensional gas chromatography (GCxGC) and high-resolution mass spectrometry (HRMS). GCxGC-HRMS analysis produces large data sets that are rich with information, but highly complex. The size of the data and volume of information require automated processing for comprehensive cross-sample analysis, but the complexity poses a challenge for developing robust methods. The approach developed here analyzes GCxGC-HRMS data from multiple samples to extract a feature template that comprehensively captures the pattern of peaks detected in the retention-time plane. Then, for each sample chromatogram, the template is geometrically transformed to align with the detected peak pattern and generate a set of feature measurements for cross-sample analyses such as sample classification and biomarker discovery. The approach avoids the intractable problem of comprehensive peak matching by using a few reliable peaks for alignment and peak-based retention-plane windows to define comprehensive features that can be reliably matched for cross-sample analysis. The informatics are demonstrated with a set of 18 samples from breast-cancer tumors, each from a different individual, six for each of Grades 1-3. The features allow classification that matches grading by a cancer pathologist with 78% success in leave-one-out cross-validation experiments. The HRMS signatures of the features of interest can be examined for determining elemental compositions and identifying compounds.
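To make the cross-validation experiment concrete, here is a minimal Python sketch of leave-one-out classification on template-derived feature vectors. The data, feature count, and nearest-neighbour classifier are illustrative assumptions; the paper's feature extraction is not reproduced.

import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 50))    # 18 chromatograms x 50 template features (synthetic)
y = np.repeat([1, 2, 3], 6)      # tumor grades 1-3, six samples each

correct = 0
for train, test in LeaveOneOut().split(X):
    # refit the classifier with one sample held out, then score that sample
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
print(f"leave-one-out accuracy: {correct / len(y):.2f}")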
Abstract:
Nitrogen (N) saturation is an environmental concern for forests in the eastern U.S. Although several watersheds of the Fernow Experimental Forest (FEF), West Virginia exhibit symptoms of N saturation, many watersheds display a high degree of spatial variability in soil N processing. This study examined the effects of temperature on net N mineralization and nitrification in N-saturated soils from FEF, and how these effects varied between high N-processing vs. low N-processing soils collected from two watersheds, WS3 (fertilized with [NH4]2SO4) and WS4 (untreated control). Samples of forest floor material (O2 horizon) and mineral soil (to a 5-cm depth) were taken from three subplots within each of four plots that represented the extremes of highest and lowest rates of net N mineralization and nitrification (hereafter, high N and low N, respectively) of untreated WS4 and N-treated WS3: control/low N, control/high N, N-treated/low N, N-treated/high N. Forest floor material was analyzed for carbon (C), lignin, and N. Subsamples of mineral soil were extracted immediately with 1 N KCl and analyzed for NH4+ and NO3- to determine preincubation levels. Extracts were also analyzed for Mg, Ca, Al, and pH. To test the hypothesis that the lack of net nitrification observed in field incubations on the untreated/low N plot was the result of absence of nitrifier populations, we characterized the bacterial community involved in N cycling by amplification of amoA genes. Remaining soil was incubated for 28 d at three temperatures (10, 20, and 30°C), followed by 1 N KCl extraction and analysis for NH4+ and NO3-. Net nitrification was essentially 100% of net N mineralization for all samples combined. Nitrification rates from lab incubations at all temperatures supported earlier observations based on field incubations. At 30°C, rates from N-treated/high N were three times those of N-treated/low N. Highest rates were found for untreated/high N (two times greater than those of N-treated/high N), whereas untreated/low N exhibited no net nitrification. However, soils exhibiting no net nitrification tested positive for presence of nitrifying bacteria, causing us to reject our initial hypothesis. We hypothesize that nitrifier populations in such soil are being inhibited by a combination of low Ca:Al ratios in mineral soil and allelopathic interactions with mycorrhizae of ericaceous species in the herbaceous layer.
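For reference, the net rates behind these comparisons follow from a standard before/after calculation on the KCl extractions. The Python sketch below is a generic version of that arithmetic; the function and values are hypothetical, not the study's data.

def net_rates(nh4_0, no3_0, nh4_t, no3_t, days=28):
    """Net N mineralization and nitrification (mg N kg-1 d-1) from
    pre-incubation (nh4_0, no3_0) and post-incubation (nh4_t, no3_t)
    inorganic-N concentrations."""
    mineralization = ((nh4_t + no3_t) - (nh4_0 + no3_0)) / days
    nitrification = (no3_t - no3_0) / days
    return mineralization, nitrification

# Hypothetical 28-d incubation: nearly all mineralized N ends up as NO3-.
m, n = net_rates(nh4_0=4.0, no3_0=1.0, nh4_t=4.5, no3_t=8.5)
print(f"net mineralization {m:.2f}, net nitrification {n:.2f} mg N kg-1 d-1")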
Abstract:
Recognizing the increasing amount of information shared on Social Networking Sites (SNS), in this study we aim to explore the information-processing strategies of users on Facebook. Specifically, we investigate the impact of various factors on user attitudes towards the posts in their Newsfeed. To collect the data, we programmed a Facebook application that allows users to evaluate posts in real time. Applying Structural Equation Modeling to a sample of 857 observations, we find that it is mostly the affective attitude that shapes user behavior on the network. This attitude, in turn, is mainly determined by the communication intensity between users, overriding the comprehensibility of the post and almost neglecting post length and user posting frequency.
Abstract:
The present article analyzed how need for cognition (NFC) influences the formation of performance expectancies. When processing information, individuals lower in NFC often rely on salient information and shortcuts compared to individuals higher in NFC. We assume that these processing preferences also make individuals low in NFC more responsive to salient achievement-related cues, because processing salient cues is cognitively less demanding than processing non-salient cues. Therefore, individuals lower in NFC should tend to draw wider-ranging inferences from salient achievement-related information. In a sample of N = 197 secondary school students, achievement-related feedback (the grade on an English examination) affected changes in expectancies in non-corresponding academic subjects (e.g., the expected final grade in mathematics or history) when NFC was lower, whereas for students with higher NFC, changes in expectancies in non-corresponding academic subjects were not affected.
Abstract:
The effect of a traditional Ethiopian lupin processing method on the chemical composition of lupin seed samples was studied. Two sampling districts, namely Mecha and Sekela, representing the mid- and high-altitude areas of north-western Ethiopia, respectively, were randomly selected. Different types of traditionally processed and marketed lupin seed samples (raw, roasted, and finished) were collected in six replications from each district. Raw samples are unprocessed, and roasted samples are roasted using firewood. Finished samples are those ready for human consumption as a snack. Thousand-seed weight for raw and roasted samples within a study district was similar (P > 0.05), but it was lower (P < 0.01) for finished samples compared to raw and roasted samples. The crude fibre content of the finished lupin seed sample from Mecha was lower (P < 0.01) than that of the raw and roasted samples. However, the different lupin samples from Sekela had similar crude fibre content (P > 0.05). The crude protein and crude fat contents of finished samples within a study district were higher (P < 0.01) than those of raw and roasted samples, respectively. Roasting had no effect on the crude protein content of lupin seed samples. The crude ash content of raw and roasted lupin samples within a study district was higher (P < 0.01) than that of finished lupin samples from the respective study district. The quinolizidine alkaloid content of finished lupin samples was lower than that of raw and roasted samples. There was also an interaction effect between location and lupin sample type. The traditional processing method for lupin seeds in Ethiopia contributes positively to improving the crude protein and crude fat content and to lowering the alkaloid content of the finished product. The study showed the possibility of adopting the traditional processing method to process bitter white lupin for use as a protein supplement in livestock feed in Ethiopia, but further work has to be done on the processing method and on animal evaluation.
Abstract:
XMapTools is a MATLAB-based graphical user interface program for electron microprobe X-ray image processing, which can be used to estimate the pressure–temperature conditions of crystallization of minerals in metamorphic rocks. This program (available online at http://www.xmaptools.com) provides a method to standardize raw electron microprobe data and includes functions to calculate oxide weight percent compositions for various minerals. A set of external functions is provided to calculate structural formulae from the standardized analyses, as well as to estimate pressure–temperature conditions of crystallization using empirical and semi-empirical thermobarometers from the literature. Two graphical user interface modules, Chem2D and Triplot3D, are used to plot mineral compositions in binary and ternary diagrams. As an example, the software is used to study a high-pressure Himalayan eclogite sample from the Stak massif in Pakistan. The high-pressure paragenesis, consisting of omphacite and garnet, has been retrogressed to a symplectitic assemblage of amphibole, plagioclase and clinopyroxene. Mineral compositions corresponding to ~165,000 analyses yield estimates for the eclogitic pressure–temperature retrograde path from 25 kbar to 9 kbar. The corresponding pressure–temperature maps were plotted and used to interpret the link between the equilibrium conditions of crystallization and the symplectitic microstructures. This example illustrates the usefulness of XMapTools for studying variations in the chemical composition of minerals and for retrieving information on metamorphic conditions at the microscale, towards the computation of continuous pressure–temperature and relative-time paths in zoned metamorphic minerals not affected by post-crystallization diffusion.
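As a rough illustration of the standardization idea, a raw count map can be scaled to concentrations with a linear calibration fitted against spot analyses. The Python sketch below is a toy version under that assumption; it is not XMapTools' actual algorithm, and the function and values are hypothetical.

import numpy as np

def standardize_map(counts, spot_counts, spot_wt_pct):
    """Fit wt% = a * counts (line through the origin, least squares)
    from calibration spots, then apply it to the whole count map."""
    a = np.dot(spot_counts, spot_wt_pct) / np.dot(spot_counts, spot_counts)
    return a * counts

raw_sio2 = np.array([[1200.0, 1350.0], [900.0, 1100.0]])  # counts per pixel
spots = np.array([1000.0, 1400.0])                        # counts at analysed spots
wt = np.array([38.0, 53.0])                               # SiO2 wt% at those spots
print(standardize_map(raw_sio2, spots, wt))               # oxide wt% map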
Abstract:
Dielectrophoresis (DEP) has been used to manipulate cells in low-conductivity suspending media using AC electric fields generated on micro-fabricated electrode arrays. This has created the possibility of automatically performing, on a micro-scale, cell processing more sophisticated than what currently requires substantial laboratory equipment, reagent volumes, time, and human intervention. In this research, the manipulation of aqueous droplets in an immiscible, low-permittivity suspending medium is described to complement previous work on dielectrophoretic cell manipulation. Such droplets can be used as carriers not only for air- and water-borne samples, contaminants, chemical reagents, viral and gene products, and cells, but also for the reagents needed to process and characterize these samples. A long-term goal of this area of research is to perform chemical and biological assays on automated, micro-scaled devices at or near the point of care, which will increase the availability of modern medicine to people who do not have ready access to large medical institutions and decrease the cost and delays associated with that lack of access. In this research I present proofs-of-concept for droplet manipulation and droplet-based biochemical analysis using dielectrophoresis as the motive force. Proofs-of-concept developed for the first time in this research include: (1) showing droplet movement on a two-dimensional array of electrodes, (2) achieving controlled dielectric droplet injection, (3) fusing and reacting droplets, and (4) demonstrating a protein fluorescence assay using micro-droplets.
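The motive force referred to here is the standard time-averaged DEP force on a small sphere, a textbook expression written below in LaTeX with notation chosen for this sketch:

\langle \mathbf{F}_{\mathrm{DEP}} \rangle \;=\; 2\pi \varepsilon_m r^3 \,\mathrm{Re}\!\left[K(\omega)\right]\, \nabla |\mathbf{E}_{\mathrm{rms}}|^2,
\qquad
K(\omega) \;=\; \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*}

where r is the particle (or droplet) radius, \varepsilon_m the permittivity of the medium, \varepsilon_p^* and \varepsilon_m^* the complex permittivities of particle and medium, and K(\omega) the Clausius-Mossotti factor. An aqueous droplet in a low-permittivity oil has Re[K] near its positive maximum, which is why such droplets respond strongly to positive DEP.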
Abstract:
Identifying accurate numbers of soldiers determined to be medically not ready after completing soldier readiness processing may help inform Army leadership about ongoing pressures on a military involved in long conflict with regular deployment. In Army soldiers screened using the SRP checklist for deployment, what is the prevalence of soldiers determined to be medically not ready? Study group: 15,289 soldiers screened at all 25 Army deployment platform sites with the eSRP checklist over a 4-month period (June 20, 2009 to October 20, 2009). The data included for analysis comprised age, rank, component, gender and final deployment medical readiness status from the MEDPROS database. Methods: this information was compiled, and univariate analysis using chi-square was conducted for each of the key variables by medical readiness status. Results, descriptive epidemiology: of the total sample, 1548 (9.7%) were female and 14319 (90.2%) were male. Enlisted soldiers made up 13,543 (88.6%) of the sample and officers 1,746 (11.4%). In the sample, 1533 (10.0%) were soldiers over the age of 40 and 13756 (90.0%) were aged 18-40. Reserve, National Guard and Active Duty made up 1,931 (12.6%), 2,942 (19.2%) and 10,416 (68.1%), respectively. Univariate analysis: overall, 1226 (8.0%) of the soldiers screened were determined to be medically not ready for deployment. The biggest predictive factor was female gender, OR 2.8 (2.57-3.28), p<0.001, followed by enlisted rank, OR 2.01 (1.60-2.53), p<0.001; Reserve component, OR 1.33 (1.16-1.53), p<0.001; and Guard, OR 0.37 (0.30-0.46), p<0.001. Age > 40 demonstrated OR 1.2 (1.09-1.50), p<0.003. Overall, the results underscore that there may be key demographic groups relating to medical readiness that can be targeted with programs and funding to improve overall military medical readiness.
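Odds ratios like those reported here come from 2x2 contingency tables of exposure (e.g., female gender) against outcome (medically not ready). A minimal Python sketch of that calculation, with Wald confidence limits and hypothetical counts rather than the study's data:

from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald confidence interval for a 2x2 table:
    a = exposed & not ready,   b = exposed & ready,
    c = unexposed & not ready, d = unexposed & ready."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts for illustration only.
print(odds_ratio_ci(a=260, b=1288, c=966, d=12775))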