973 results for AUTOMATED SAMPLE PREPARATION
Abstract:
The characterisation of mineral texture has been a major concern for process mineralogists, as the liberation characteristics of ores are intimately related to mineralogical texture. While great effort has been made to automatically characterise texture in unbroken ores, the characterisation of textural attributes in mineral particles is usually descriptive. However, the quantitative characterisation of texture in mineral particles is essential to improve and predict the performance of minerallurgical processes (i.e. all the processes involved in the liberation and separation of the mineral of interest) and to achieve a more accurate geometallurgical model. Driven by this need for a more complete characterisation of textural attributes in mineral particles, a methodology has recently been developed to automatically characterise the type of intergrowth between mineral phases within particles by means of digital image analysis. In this methodology, a set of minerallurgical indices has been developed to quantify different mineralogical features and to identify the intergrowth pattern by discriminant analysis. The paper shows the application of the methodology to the textural characterisation of chalcopyrite in the rougher concentrate of the Kansanshi copper mine (Zambia). In this sample, the variety of intergrowth patterns of chalcopyrite with the other minerals has been used to illustrate the methodology. The results obtained show that the method identifies the intergrowth type and provides quantitative information to achieve a complete and detailed mineralogical characterisation. Therefore, the use of this methodology as a routine tool in automated mineralogy would contribute to a better understanding of ore behaviour during liberation and separation processes.
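The abstract does not give the indices or the discriminant-analysis details, so the following is only a minimal sketch of the general idea: classifying the intergrowth type of a particle from per-particle indices with linear discriminant analysis. The index names, values, and class labels are hypothetical placeholders, and scikit-learn's LDA stands in for whatever discriminant procedure the authors used.

```python
# Minimal sketch (not the paper's implementation): identifying the
# intergrowth type of a particle from per-particle minerallurgical indices
# with linear discriminant analysis. Index names, values and class labels
# below are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row: indices measured by image analysis for one particle,
# e.g. [contact index, boundary complexity, area fraction of the phase].
X_train = np.array([
    [0.82, 1.05, 0.40], [0.78, 1.08, 0.55], [0.80, 1.02, 0.47],  # simple intergrowth
    [0.35, 1.60, 0.15], [0.30, 1.55, 0.12], [0.33, 1.62, 0.18],  # stockwork / veined
    [0.10, 1.10, 0.05], [0.12, 1.15, 0.07], [0.09, 1.12, 0.06],  # emulsion-like inclusions
])
y_train = ["simple"] * 3 + ["stockwork"] * 3 + ["emulsion"] * 3

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Classify a new chalcopyrite-bearing particle from its indices.
new_particle = np.array([[0.75, 1.07, 0.48]])
print(clf.predict(new_particle))   # expected: ['simple']
```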
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into the sample reservoirs using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescence detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case, using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, are normally completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentrations. To understand the fundamentals of these unique features, we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
Environmentally friendly sulfonated black carbon (BC) catalysts were prepared from a biodiesel waste product, glycerol. These black carbons (BCs) contain a high amount of acidic groups, mainly sulfonated and oxygenated groups. Furthermore, these catalysts show high catalytic activity in the etherification of glycerol with tert-butyl alcohol, the activity being larger for the sample prepared with the higher proportion of sulfuric acid (glycerol:sulfuric acid ratio of 1:3). The yields of mono-tert-butyl glycerol (MTBG), di-tert-butyl glycerol (DTBG) and tri-tert-butyl glycerol (TTBG) were very similar to those obtained using a commercial resin, Amberlyst-15. Furthermore, experimental results show that the carbon with the lowest content of acidic surface groups, the BC prepared with the lowest proportion of sulfuric acid (glycerol:sulfuric acid ratio of 10:1), can be chemically treated after carbonization to achieve improved catalytic activity. The activity of all the BCs is high and very similar, about 50% and 20% for MTBG and DTBG + TTBG, respectively.
Abstract:
Some recipes include wine or liquor as an ingredient. Sample recipes: Beef tea, with oatmeal or rice; Poached eggs on toast; Isinglass wine jelly.
Abstract:
Bio-oil has successfully been utilized to prepare carbon-silica composites (CSCs) from mesoporous silicas such as SBA-15, MCM-41, KIT-6 and MMSBA frameworks. These CSCs comprise a thin film of carbon dispersed over the silica matrix and exhibit porosity similar to that of the parent silica. The surface properties of the resulting materials can be simply tuned by varying the preparation temperature, leading to a continuum of functionalities ranging from polar, hydroxyl-rich surfaces to carbonaceous aromatic surfaces, as reflected in solid-state NMR, XPS and DRIFT analyses. N2 porosimetry, TEM and SEM images demonstrate that the composites still possess ordered mesostructures similar to those of the parent silica sample. A modification mechanism is also proposed: silica samples are impregnated with bio-oil (generated from the pyrolysis of waste paper) until the pores are filled, followed by carbonization at a series of temperatures. Increasing the temperature leads to the formation of a carbonaceous layer over the silica surface. The complex mixture of compounds within the bio-oil (including molecules bearing alcohol, aliphatic, carbonyl and aromatic groups) gives rise to the functionality of the CSCs.
Abstract:
This study explores factors related to prompt difficulty in Automated Essay Scoring. The sample was composed of 6,924 students. For each student, there were 1-4 essays, across 20 different writing prompts, for a total of 20,243 essays. The E-rater® v.2 essay scoring engine developed by the Educational Testing Service was used to score the essays. The scoring engine employs a statistical model that incorporates 10 predictors associated with writing characteristics, of which 8 were used. A Rasch partial credit analysis was applied to the scores to determine the difficulty levels of the prompts. In addition, the scores were used as outcomes in a series of hierarchical linear models (HLM) in which students and prompts constituted the cross-classification levels. This methodology was used to explore the partitioning of the essay score variance. The results indicated significant differences in prompt difficulty levels due to genre: descriptive prompts, as a group, were found to be more difficult than persuasive prompts. In addition, the essay score variance was partitioned between students and prompts. The amount of the essay score variance that lies between prompts was found to be relatively small (4 to 7 percent). When essay-level, student-level, and prompt-level predictors were included, the model was able to explain almost all of the variance that lies between prompts. Since most high-stakes writing assessments use only 1-2 prompts per student, the essay score variance that lies between prompts represents an undesirable or "noise" variation. Identifying factors associated with this "noise" variance may prove to be important for prompt writing and for constructing Automated Essay Scoring mechanisms that weight prompt difficulty when assigning essay scores.
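As a minimal sketch of the variance-partitioning idea described above: in a cross-classified model the share of essay-score variance attributable to prompts is simply the prompt variance component divided by the total. The variance components below are hypothetical placeholders, not the study's estimates.

```python
# Minimal sketch (illustrative numbers only): partitioning essay-score
# variance between students, prompts and the residual in a
# cross-classified model. Values are hypothetical placeholders.
var_students = 0.55   # variance between students
var_prompts  = 0.04   # variance between prompts
var_residual = 0.41   # essay-level (within-cell) residual variance

total = var_students + var_prompts + var_residual
share_prompts = var_prompts / total
print(f"Share of variance between prompts: {share_prompts:.1%}")  # ~4%
```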
Abstract:
In the wake of the 9/11 terrorist attacks, the U.S. Government has turned to information technology (IT) to address a lack of information sharing among law enforcement agencies. This research determined if and how information-sharing technology helps law enforcement by examining the differences in perception of the value of IT between law enforcement officers who have access to automated regional information sharing and those who do not. It also examined the effect of potential intervening variables, such as user characteristics, training, and experience, on the officers' evaluation of IT. The sample was limited to 588 officers from two sheriff's offices; one of them (the study group) uses information-sharing technology, the other (the comparison group) does not. Triangulated methodologies included surveys, interviews, direct observation, and a review of agency records. Data analysis involved the following statistical methods: descriptive statistics, Chi-Square, factor analysis, principal component analysis, Cronbach's Alpha, Mann-Whitney tests, analysis of variance (ANOVA), and Scheffé post hoc analysis. Results indicated a significant difference between groups: the study group perceived information-sharing technology as being a greater factor in solving crime and in increasing officer productivity. The study group was also more satisfied with the data available to it. As to the number of arrests made, information-sharing technology did not make a difference. Analysis of the potential intervening variables revealed several remarkable results. The presence of a strong performance management imperative (in the comparison sheriff's office) appeared to be a factor in case clearances and arrests, technology notwithstanding. As to the influence of user characteristics, level of education did not influence a user's satisfaction with technology, but user-satisfaction scores differed significantly by years of experience as a law enforcement officer and by amount of computer training, suggesting a significant but weak relationship. Therefore, this study finds that information-sharing technology assists law enforcement officers in doing their jobs. It also suggests that other variables, such as computer training, experience, and management climate, should be accounted for when assessing the impact of information technology.
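One of the between-group comparisons named in the abstract is the Mann-Whitney test. The sketch below shows how such a comparison could look for perceived IT value between the study and comparison groups; the Likert-style scores are synthetic, not the survey data.

```python
# Minimal sketch (synthetic data): Mann-Whitney comparison of perceived
# IT value between officers with access to information sharing (study
# group) and those without (comparison group). Scores are hypothetical
# 5-point Likert responses.
from scipy.stats import mannwhitneyu

study_group      = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
comparison_group = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

stat, p_value = mannwhitneyu(study_group, comparison_group,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```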
Abstract:
How children rate vegetables may be influenced by the preparation method. The primary objective of this study was for first-grade students to be involved in a cooking demonstration and to taste and rate vegetables raw and cooked. First-grade children from two classes (N = 52; 18 boys and 34 girls, approximately half Hispanic) who had assented and had signed parental consent participated in the study. The degree of liking of five commonly eaten vegetables, tasted first raw (pre-demonstration) and then cooked (post-demonstration), was recorded by the students using a hedonic scale. A food habit questionnaire was filled out by parents to evaluate their mealtime practices and beliefs about their child's eating habits. Paired sample t-tests revealed significant differences in preferences for vegetables in their raw and cooked states. Several mealtime characteristics were significantly associated with children's vegetable preferences. Parents who reported being satisfied with how often the family eats evening meals together were more likely to report that their child eats adequate vegetables for their health (p = 0.026). Parents who stated that they were satisfied with their child's eating habits were more likely to report that their child was trying new foods (p < .001). Cooking demonstrations by nutrition professionals may be an important strategy that parents and teachers can use to promote vegetable intake. It is important that nutrition professionals provide guidance to parents to encourage consumption of vegetables, so that parents can model healthy food consumption for their children.
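For illustration, a paired-sample t-test of the kind named in the abstract compares each child's rating of the same vegetable raw and cooked. The ratings below are hypothetical 5-point hedonic scores, not the study's data.

```python
# Minimal sketch (synthetic ratings): paired-sample t-test comparing
# hedonic ratings of a vegetable tasted raw (pre-demonstration) and
# cooked (post-demonstration) by the same children.
from scipy.stats import ttest_rel

raw_ratings    = [2, 3, 2, 4, 1, 3, 2, 2, 3, 2]
cooked_ratings = [4, 4, 3, 5, 3, 4, 3, 4, 4, 3]

t_stat, p_value = ttest_rel(raw_ratings, cooked_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```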
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing the variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results for different releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source-code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. In addition, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a regression model for performance was developed to indicate the properties of commits that are more likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source-code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
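The last study fits a regression model over commit properties and evaluates it by the area under the ROC curve. The sketch below is only an illustration of that setup with synthetic data and hypothetical feature names; it is not the thesis's model or feature set.

```python
# Minimal sketch (synthetic data, hypothetical features): a logistic
# regression that predicts whether a commit degrades performance,
# evaluated by the area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Per-commit features: [days_before_release, day_of_week, lines_changed]
X = np.array([
    [30, 1,  12], [2, 5, 250], [14, 3,  40], [1, 6, 480],
    [45, 2,   8], [3, 4, 300], [21, 1,  25], [5, 5, 150],
])
# 1 = commit degraded a scenario's execution time, 0 = no impact
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print(f"ROC AUC (on training data): {roc_auc_score(y, scores):.2f}")
```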
Abstract:
We present the stellar calibrator sample and the conversion from instrumental to physical units for the 24 μm channel of the Multiband Imaging Photometer for Spitzer (MIPS). The primary calibrators are A stars, and the calibration factor based on those stars is 4.54 × 10^-2 MJy sr^-1 (DN/s)^-1, with a nominal uncertainty of 2%. We discuss the data reduction procedures required to attain this accuracy; without these procedures, the calibration factor obtained using the automated pipeline at the Spitzer Science Center is 1.6% ± 0.6% lower. We extend this work to predict 24 μm flux densities for a sample of 238 stars that covers a larger range of flux densities and spectral types. We present a total of 348 measurements of 141 stars at 24 μm. This sample covers a factor of ~460 in 24 μm flux density, from 8.6 mJy up to 4.0 Jy. We show that the calibration is linear over that range with respect to target flux and background level. The calibration is based on observations made using 3 s exposures; a preliminary analysis shows that the calibration factor may be 1% and 2% lower for 10 and 30 s exposures, respectively. We also demonstrate that the calibration is very stable: over the course of the mission, repeated measurements of our routine calibrator, HD 159330, show an rms scatter of only 0.4%. Finally, we show that the point-spread function (PSF) is well measured and allows us to calibrate extended sources accurately; Infrared Astronomical Satellite (IRAS) and MIPS measurements of a sample of nearby galaxies are identical within the uncertainties.
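As a minimal worked example of how the quoted calibration factor converts an instrumental signal to physical units: multiplying a background-subtracted signal in DN/s by 4.54 × 10^-2 gives a surface brightness in MJy/sr, which can then be turned into a per-pixel flux density using the pixel solid angle. The signal value and the pixel scale below are assumptions for illustration, not numbers from the paper.

```python
# Minimal sketch (illustrative): converting a MIPS 24 micron instrumental
# signal in DN/s to a surface brightness in MJy/sr using the calibration
# factor quoted in the abstract, then to a per-pixel flux density.
CAL_FACTOR = 4.54e-2           # MJy sr^-1 per DN/s (from the abstract)
PIXEL_SCALE_ARCSEC = 2.45      # assumed pixel scale in arcsec/pixel (hypothetical)

signal_dn_per_s = 120.0        # hypothetical background-subtracted pixel signal

# Surface brightness of the pixel.
surface_brightness = CAL_FACTOR * signal_dn_per_s           # MJy/sr

# Solid angle of one pixel, to express the same pixel as a flux density.
ARCSEC_TO_RAD = 4.8481368e-6
pixel_solid_angle_sr = (PIXEL_SCALE_ARCSEC * ARCSEC_TO_RAD) ** 2
flux_density_jy = surface_brightness * 1e6 * pixel_solid_angle_sr  # 1 MJy = 1e6 Jy

print(f"{surface_brightness:.3f} MJy/sr, {flux_density_jy * 1e3:.3f} mJy per pixel")
```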