942 results for arbitrary matching-to-sample
Abstract:
A flow system coupled to a tungsten coil atomizer in an atomic absorption spectrometer (TCA-AAS) was developed for As(III) determination in waters, by extraction with sodium diethyldithiocarbamate (NaDDTC) as complexing agent, sorption of the As(III)-DDTC complex in a micro-column filled with 5 mg of C18 reversed phase (10 µL dry sorbent), and elution with ethanol. A complete pre-concentration/elution cycle took 208 s, with a 30 s sample load time (1.7 mL) and a 4 s elution time (71 µL). The interface and software for the synchronous control of two peristaltic pumps (RUN/STOP), an autosampler arm, seven solenoid valves, one injection valve, the electrothermal atomizer and the spectrometer Read function were constructed. The system was characterized and validated by analytical recovery studies performed both in synthetic solutions and in natural waters. Using a 30 s pre-concentration period, the working curve was linear between 0.25 and 6.0 µg L-1 (r = 0.9976), the retention efficiency was 94±1% (6.0 µg L-1), and the pre-concentration coefficient was 28.9. The characteristic mass was 58 pg, the mean repeatability (expressed as the coefficient of variation) was 3.4% (n = 5), the detection limit was 0.058 µg L-1 (4.1 pg in the 71 µL of eluate injected into the coil), and the mean analytical recovery in natural waters was 92.6 ± 9.5% (n = 15). The procedure is simple, economical, and less prone to sample loss and contamination, and the useful lifetime of the micro-column was between 200 and 300 pre-concentration cycles.
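The absolute detection limit quoted in the abstract follows directly from the concentration detection limit and the injected eluate volume; a minimal arithmetic sketch (values taken from the abstract, unit conversion being the only step added here):

```python
# Values reported in the abstract
detection_limit_ug_per_L = 0.058   # concentration detection limit
eluate_volume_uL = 71.0            # volume of eluate injected into the coil

# 1 ug/L equals 1 pg/uL, so the absolute detection limit in picograms
# is simply the concentration limit times the injected volume.
absolute_dl_pg = detection_limit_ug_per_L * eluate_volume_uL
print(f"absolute detection limit: {absolute_dl_pg:.1f} pg")  # -> 4.1 pg
```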
Abstract:
Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. Here we propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
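The simulated indirect likelihood idea can be illustrated on a deliberately trivial model (a normal mean, with the sample mean as the auxiliary statistic Z_n); this is a toy sketch of the mechanics only, not the authors' auction or DSGE applications, and the Gaussian density approximation for Z_n is an assumption made here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: data x_i ~ N(theta, 1); auxiliary statistic Z_n = sample mean.
n = 200
theta_true = 1.5
data = rng.normal(theta_true, 1.0, n)
z_obs = data.mean()

def simulated_indirect_loglik(theta, n_sims=500):
    # Simulate the sampling distribution of Z_n under a candidate theta,
    # then evaluate a Gaussian approximation of its density at z_obs.
    sims = rng.normal(theta, 1.0, (n_sims, n)).mean(axis=1)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * ((z_obs - mu) / sd) ** 2 - np.log(sd)

# Grid-search maximizer stands in for a proper optimizer.
grid = np.linspace(0.5, 2.5, 201)
logliks = [simulated_indirect_loglik(t) for t in grid]
theta_mil = grid[int(np.argmax(logliks))]
print(f"SMIL estimate: {theta_mil:.2f} (true value {theta_true})")
```

In this toy case the estimator simply tracks z_obs; the interesting gains described in the abstract arise when Z_n is a richer statistic whose full distribution carries information beyond its mean.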
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer “Repe” Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled “Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors”. At that time, the thesis introduced new technology being applied to sample handling and analysis of flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of the analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. 
The editorial team had a great time during the planning phase and during the “hard work editorial phase” of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an “old school hard cover book” – and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future, more and more teaching in Finnish universities will be delivered in English. To facilitate this process and encourage students to develop good language skills, we decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland – particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book – and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
This paper presents the impact of integrating interventions such as nutrition gardening, livestock rearing, product diversification and allied income-generation activities in small and marginal coconut homesteads, along with nutrition education, in improving the food and nutritional security as well as the income of the family members. The activities were carried out through registered Community Based Organizations (CBOs) in three locations in Kerala, India during 2005-2008. Data were collected before and after the project period through interviews using a pre-tested questionnaire containing statements indicating the adequacy, quality and diversity of food materials. Fifty respondents each were randomly selected from the three communities, resulting in a total sample size of 150. The data were analysed using SPSS with statistical tools such as frequency, average, percentage analysis, t-test and regression. Participatory planning and implementation of diverse interventions, notably intercropping and off-farm activities, along with nutrition education brought about significant improvements in food and nutritional security, in terms of frequency and quantity of consumption as well as diet diversity. At the end of the project, 96% of the members had become completely food secure and 72% nutritionally secure. The overall consumption of fruits, vegetables and milk by both children and adults, and of eggs by children, recorded an increase over the project period. Consumption of fish was above the Recommended Dietary Intake (RDI) level during both the pre- and post-project periods. Project interventions such as nutrition gardening brought about surplus consumption of vegetables (35%) and fruits (10%) relative to the RDI. In spite of the increased consumption of green leafy vegetables and milk and milk products over the project period, the levels of consumption were still below the RDI levels. 
CBO-wise analysis of the consumption patterns revealed the need for location-specific interventions matched to the needs and preferences of the communities.
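The pre/post comparison described above is the classic setting for a paired t-test; a minimal sketch with hypothetical scores (the numbers below are invented for illustration and are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre/post consumption scores for 50 respondents in one CBO.
pre = rng.normal(3.0, 1.0, 50)
post = pre + rng.normal(0.8, 0.5, 50)  # simulated post-project improvement

# Paired t statistic: mean difference divided by its standard error.
d = post - pre
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"paired t = {t_stat:.2f} (n = {len(d)})")
```

A large positive t here would indicate that the post-project scores are systematically higher, which is the kind of evidence behind the significance claims in the abstract.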
Abstract:
This paper presents a case study to illustrate the range of decisions involved in designing a sampling strategy for a complex, longitudinal research study. It is based on experience from the Young Lives project and identifies the approaches used to sample children for longitudinal follow-up in four less developed countries (LDCs). The rationale for the decisions made and the resulting benefits and limitations of the approaches adopted are discussed. Of particular importance is the choice of a sampling approach that yields useful analysis; specific examples are presented of how this informed the design of the Young Lives sampling strategy.
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
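For the non-informative-prior case the abstract notes the Bayesian answer coincides with the standard frequentist one; a minimal sketch of that frequentist benchmark (the usual two-arm, known-variance z-test formula, stated here as background rather than as the paper's own method):

```python
from statistics import NormalDist

def two_arm_sample_size(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm n for a two-sided z-test comparing two normal means.

    delta: clinically relevant difference; sigma: common known SD.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power requirement
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

# Detecting a half-SD difference with 90% power at the 5% level:
print(round(two_arm_sample_size(delta=0.5, sigma=1.0)))  # -> 84 per arm
```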
Abstract:
Giant planets helped to shape the conditions we see in the Solar System today, and they account for more than 99% of the mass of the Sun’s planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system consisting of a system of inner natural satellites and a complex ring system, five major natural icy satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System. These have clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics in Uranus’ atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described, and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency’s call for science themes for its large-class mission programme in 2013.
Abstract:
Small local earthquakes from two aftershock sequences in Porto dos Gaúchos, Amazon craton, Brazil, were used to estimate the coda-wave attenuation in the frequency band of 1 to 24 Hz. The time-domain coda-decay method of a single-backscattering model is employed to estimate the frequency dependence of the quality factor (Q_c) of coda waves, modeled using Q_c(f) = Q_0 f^η, where Q_0 is the coda quality factor at a frequency of 1 Hz and η is the frequency parameter. We also used the independent frequency-model approach (Morozov, Geophys J Int, 175:239-252, 2008), based on the temporal attenuation coefficient χ(f) instead of Q(f), for the calculation of the geometrical attenuation (γ) and the effective attenuation. Q_c values have been computed at central frequencies (and bands) of 1.5 (1-2), 3.0 (2-4), 6.0 (4-8), 9.0 (6-12), 12 (8-16), and 18 (12-24) Hz for five different datasets, selected according to the geotectonic environment as well as the ability to sample shallow or deeper structures, particularly the sediments of the Parecis basin and the crystalline basement of the Amazon craton. Frequency-dependent relations were obtained for the Parecis basin, for the surrounding shield, and for the whole region of Porto dos Gaúchos. Using the independent frequency model, we found: for the cratonic zone, γ = 0.014 s⁻¹, ν ≈ 1.12; for the basin zone with sediments of ~500 m, γ = 0.031 s⁻¹, ν ≈ 1.27; and for the Parecis basin with sediments of ~1,000 m, γ = 0.047 s⁻¹, ν ≈ 1.42. Analysis of the attenuation factor (Q_c) for different values of the geometrical spreading parameter (ν) indicated that an increase of ν generally causes an increase in Q_c, both in the basin and in the craton, but the differences in attenuation between different geological environments are maintained for different models of geometrical spreading. 
It was shown that the energy of coda waves is attenuated more strongly in the sediments (in the deepest part of the basin) than in the basement (in the craton). Thus, coda-wave analysis can contribute to studies of geological structures in the upper crust, as the average coda quality factor depends on the thickness of the sedimentary layer.
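Estimating Q_0 and η from the power law Q_c(f) = Q_0 f^η is a straight line fit in log-log space; a minimal sketch using the central frequencies from the abstract with synthetic Q_c values (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

# Central frequencies (Hz) from the abstract's analysis bands.
f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])

# Synthetic Q_c observations following Q_c(f) = Q_0 * f**eta
# with assumed Q_0 = 120 and eta = 1.1 (illustrative values only).
q = 120.0 * f ** 1.1

# log Q_c = log Q_0 + eta * log f, so a degree-1 polynomial fit
# in log-log space recovers both parameters.
eta, log_q0 = np.polyfit(np.log(f), np.log(q), 1)
q0 = np.exp(log_q0)
print(f"Q0 = {q0:.0f}, eta = {eta:.2f}")  # -> Q0 = 120, eta = 1.10
```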
Abstract:
Dung beetles (Coleoptera: Scarabaeidae: Scarabaeinae) are very useful insects, as they improve the chemo-physical properties of soil, clean pastures from dung pads, and help control symbovine flies associated with bovine cattle. Their importance makes it fundamental to sample and survey them adequately. The objectives of the present study were to determine the influence of decaying insects trapped in pitfalls on the attractiveness of Moura pig Sus scrofa L. (Suidae) and collared peccary Tayassu tajacu (L.) (Tayassuidae) dung used as baits to lure dung beetles, and to establish how long these baits remain attractive to dung beetles when used in these traps. Some dung beetle species seemed to be able to discriminate against foul smell from decaying insects within the first 24 h, hence decreasing trap efficiency. This was more evident in peccary dung-baited traps, which proved to be the least attractive bait. Attractiveness lasted only 24 h for peccary dung, after which it became unattractive, whereas the pig dung bait was highly attractive for 48 h, after which its attractiveness diminished but was not completely lost.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy that is based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution, among other applications. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. In the literature, many NMR studies have been reported on different types of food such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. These methods measure or select a single descriptive variable from the whole spectrum, and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to various problems, due especially to the complexity of an NMR spectrum. In fact, the latter is composed of different signals belonging to different molecules, but it is also true that the same molecule can be represented by different signals, generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. 
Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered as a whole, and their analysis must take the entire data matrix into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and the aims can be divided into three main groups: • data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g. PCA); • regression and prediction (PLS); • classification and prediction of class belonging for new samples (LDA, PLS-DA and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful to point out the metabolic consequences of a specific modification of foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study. This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation, due to different instrumental set-up, manual spectra processing and sample preparation artefacts; A2) application of multivariate chemometric models in data analysis.
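The explorative step mentioned above (PCA on a samples-by-variables data matrix) can be sketched in a few lines via the singular value decomposition of the mean-centered matrix; the "spectra" below are synthetic toy data standing in for real NMR measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data matrix: 20 samples x 100 spectral variables, with two groups
# that differ in a few correlated "metabolite" signals (illustrative only).
X = rng.normal(0, 0.1, (20, 100))
X[:10, 10:15] += 1.0   # signals elevated in group A
X[10:, 40:45] += 1.0   # signals elevated in group B

# PCA: mean-center the columns, then take the SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                      # sample scores on the PCs
explained = s**2 / np.sum(s**2)     # variance fraction per component
print(f"PC1 explains {explained[0]:.0%} of the variance")
```

Here the two groups separate along PC1 because the concerted signal variation dominates the noise, which is exactly the kind of structure the thesis exploits before moving on to supervised methods such as PLS-DA.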
Abstract:
Numerous studies have shown that animals have a sense of quantity and can distinguish between relative amounts. The concepts of relative numerousness, estimation, and subitizing are well established in species as diverse as chimpanzees and salamanders. Mobile animals have practical use for an understanding of number in common situations such as predation, mating, and competition. However, the ability to identify discrete quantities has only been firmly established in humans. The purpose of this study was to test for such “absolute numerousness” judgments in three lion-tailed macaques (Macaca silenus), a non-human primate. The three macaques tested had previously been trained on a computerized match-to-sample (MTS) task using geometric shapes. In this study, they were introduced to an MTS task containing a numerical cue, which required the monkeys to match stimuli containing either one or two items for rewards. If monkeys were successful at the initial matching task, they were tested with stimuli in which first the position of the items and then the surface area of the items were controlled. If the monkeys could match successfully without using these non-numerical cues, they would demonstrate the capability to make absolute numerousness judgments. None of the monkeys matched successfully using the numerical cue, so no evidence of absolute numerosity was found. Each macaque progressed through the experiment in an individualized manner, attempting a variety of strategies to obtain rewards. These included side preferences and an alternating-side strategy that were unrelated to the numerical cues in the stimuli. When it became clear that the monkeys were not matching based on a stimulus-based cue, they were tested again on matching geometric shapes. All three macaques stopped using their alternate strategies and were able to match shapes successfully, demonstrating that they were still capable of completing the matching task. 
The data suggest that the monkeys could not transfer this ability to the numerical stimuli. This indicates that the macaques lack a sense of exact quantity, or that they could not recognize the numerical cues in the stimuli as being relevant to the task.
Abstract:
Fish behaviourists are increasingly turning to non-invasive measurement of steroid hormones in holding water, as opposed to blood plasma. When some of us met at a workshop in Faro, Portugal, in September, 2007, we realised that there were still many issues concerning the application of this procedure that needed resolution, including: Why do we measure release rates rather than just concentrations of steroids in the water? How does one interpret steroid release rates when dealing with fish of different sizes? What are the merits of measuring conjugated as well as free steroids in water? In the ‘static’ sampling procedure, where fish are placed in a separate container for a short period of time, does this affect steroid release—and, if so, how can it be minimised? After exposing a fish to a behavioural stimulus, when is the optimal time to sample? What is the minimum amount of validation when applying the procedure to a new species? The purpose of this review is to attempt to answer these questions and, in doing so, to emphasize that application of the non-invasive procedure requires more planning and validation than conventional plasma sampling. However, we consider that the rewards justify the extra effort.
Abstract:
An accurate and efficient determination of the highly toxic Cr(VI) in solid materials is important to determine the total Cr(VI) inventory of contaminated sites and the Cr(VI) release potential from such sites into the environment. Most commonly, total Cr(VI) is extracted from solid materials following a hot alkaline extraction procedure (US EPA method 3060A), where a complete release of water-extractable and sparingly soluble Cr(VI) phases is achieved. This work presents an evaluation of matrix effects that may occur during the hot alkaline extraction and in the determination of the total Cr(VI) inventory of variably composed contaminated soils and industrial materials (cement, fly ash), and the results are compared to water-extractable Cr(VI) results. Method validation, including multiple extractions and matrix spiking along with chemical and mineralogical characterization, showed satisfying results for total Cr(VI) contents for most of the tested materials. However, unreliable results were obtained by applying method 3060A to anoxic soils, due to the degradation of organic material and/or reactions with Fe2+-bearing mineral phases. In addition, in certain samples discrepant spike recoveries also have to be attributed to sample heterogeneity. Separation of possibly extracted Cr(III) by applying cation-exchange cartridges prior to solution analysis further shows that under the hot alkaline extraction conditions only Cr(VI) is present in solution in measurable amounts, whereas Cr(III) gets precipitated as amorphous Cr(OH)3(am). It is concluded that prior to routine application of method 3060A to a new material type, spiking tests are recommended for the identification of matrix effects. In addition, the mass of extracted solid material should be well adjusted to the heterogeneity of the Cr(VI) distribution in the material in question.
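The spike-recovery check described above is a simple ratio; a minimal sketch (the numeric values and the 75-125% screening band are illustrative assumptions, not figures from this study):

```python
def spike_recovery(measured_spiked, measured_unspiked, spike_added):
    # Matrix-spike recovery in percent: the fraction of the added spike
    # that is actually recovered after extraction. Recoveries far outside
    # roughly 75-125% would flag possible matrix effects, e.g. Cr(VI)
    # reduction in anoxic soils (band chosen here for illustration).
    return 100.0 * (measured_spiked - measured_unspiked) / spike_added

# Example: unspiked extract 10 mg/kg, spiked extract 48 mg/kg,
# 40 mg/kg Cr(VI) added -> (48 - 10) / 40 = 95% recovery.
print(f"{spike_recovery(48.0, 10.0, 40.0):.0f}%")  # -> 95%
```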
Abstract:
This paper examines the contribution of job matching to wage growth in the U.S. and Germany using data drawn from the Panel Study of Income Dynamics and the German Socio-Economic Panel from 1984 through 1992. Using a symmetrical set of variables and data-handling procedures, real wage growth is found to be higher in the U.S. than in Germany during this period. Also, using two different estimators, job matches are found to enhance wage growth in the U.S. and retard it in Germany. The relationship of general skills to employment in each country appears responsible for this result.