995 results for "testes de equipamentos" (equipment testing)
Abstract:
The evaluation of seed vigor is an important factor in identifying high-quality seed lots, so the development of procedures to evaluate physiological potential has become an important tool in seed quality control programs. The study therefore aimed to adapt the accelerated aging, electrical conductivity and potassium leaching methodologies to evaluate the vigor of Moringa oleifera Lam. seeds. Four lots of moringa seeds were subjected to germination, seedling emergence, emergence speed index, first count of emergence, seedling length and dry mass, and cold tests for their physiological characterization, in addition to accelerated aging, electrical conductivity and potassium leaching. The experimental design was completely randomized with four replications of 50 seeds, and the means were compared by the Tukey test at 5% probability. For accelerated aging, aging periods of 12, 24 and 72 hours at 40, 42 and 45°C were studied. For the electrical conductivity test, a temperature of 25°C was used for immersion periods of 4, 8, 12, 16 and 24 hours in 75 or 125 mL of distilled water, with 25 or 50 seeds; for the potassium leaching test, samples of 25 or 50 seeds were placed in plastic cups containing 70 or 100 mL of distilled water at 25°C for periods of 1, 2, 3, 4, 5 and 6 hours. From the results obtained, it can be inferred that the best-fitting methods for the accelerated aging test of moringa seeds were 40°C for 12 to 72 hours, 42°C for 72 hours and 45°C for 24 hours. In the electrical conductivity test, the combinations of 50 seeds in 75 mL of distilled water for 4 hours of immersion and 50 seeds in 125 mL for 4 hours were efficient in differentiating moringa seed lots as to vigor; in the potassium leaching test, the combination of 50 seeds in 100 mL of distilled water allowed the separation of the lots into four vigor levels at 2 hours of immersion, showing promise for evaluating the quality of moringa seeds.
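As an illustration of the statistical comparison described above (Tukey's test at 5% probability applied to lot means), a minimal sketch in Python; the conductivity readings for the four lots are made-up values, not data from the study:

```python
import numpy as np
from scipy.stats import tukey_hsd

# Hypothetical electrical conductivity readings (uS/cm/g), four replications per lot.
lot_1 = [41.2, 39.8, 40.5, 42.1]
lot_2 = [48.3, 47.1, 49.0, 48.8]
lot_3 = [55.6, 54.2, 56.1, 55.0]
lot_4 = [62.4, 61.8, 63.0, 62.1]

# Tukey's HSD compares every pair of lot means at a joint 5% significance level.
result = tukey_hsd(lot_1, lot_2, lot_3, lot_4)
print(result)   # pairs with p < 0.05 differ in vigor under this criterion
```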
Abstract:
The expansion of areas cultivated with genetically modified (GM) crops is a worldwide phenomenon, prompting regulatory authorities to implement strict procedures to monitor and verify the presence of GM varieties in agricultural crops. With the constant growth of cultivated areas all over the world, consumption of aflatoxin-contaminated food has also increased. Aflatoxins are a class of highly toxic contaminants found in agricultural products that can have harmful effects on human and animal health. Therefore, the safety and quality evaluation of agricultural products are important issues for consumers. Lateral flow tests (strip tests) are a promising method for detecting both proteins expressed in GM crops and aflatoxins in contaminated food samples. The advantages of this technique include its simplicity, rapidity and cost-effectiveness compared with conventional methods. In this study, two novel and sensitive strip test assays were developed for the identification of (i) Cry1Ac and Cry8Ka5 proteins expressed in GM cotton crops and (ii) aflatoxins in agricultural products. The first strip test was developed in a sandwich format, while the second used a competitive format. Colloidal gold nanoparticles coated with monoclonal antibodies were used as the detector reagent. An anti-species antibody was sprayed onto the nitrocellulose membrane to serve as the control line. To validate the first strip test, GM (Bollgard I® and Planta 50 - EMBRAPA) and non-GM (Cooker 312) cotton leaves were used. The results showed that the strip containing antibodies for the identification of Cry1Ac and Cry8Ka5 proteins correctly distinguished GM samples (positive result) from non-GM samples (negative result) with high sensitivity. To validate the second strip test, soybean artificially contaminated with Aspergillus flavus (an aflatoxin-producing fungus) was employed. Food samples such as milk and soybean were also evaluated for the presence of aflatoxins. The strip test was able to distinguish between samples with and without aflatoxins at a detection sensitivity of 0.5 μg/kg. These results suggest that the strip tests developed in this study are potential tools for the rapid and cost-effective detection of insect-resistant GM crops expressing Cry1Ac and Cry8Ka5 and of aflatoxins in food samples.
Abstract:
With advances in device technology and in the ways energy is generated and used, power quality parameters have come to influence the various kinds of power consumers more significantly. There are currently many types of devices that analyze power quality. However, there is a need for devices that not only perform measurements and calculate parameters, but also find faults, suggest changes and support the management of the installation, and such devices must remain affordable. To maintain this balance, a magnitude measurement method that does not require large processing or memory resources should be used. This work shows that applying the Goertzel algorithm, compared with the commonly used FFT, allows measurements to be made with far fewer hardware resources, freeing memory space to implement management functions. The first part of the work surveys the problems most common among low-voltage consumers. A functional diagram is then proposed indicating what will be measured and calculated, which problems will be detected and which solutions can be found. By simulating the Goertzel algorithm in Scilab, it is possible to calculate the frequency components of a distorted signal with satisfactory results. Finally, the prototype is assembled and tests are carried out, adjusting the necessary parameters to obtain a reliable device without increasing its cost.
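A minimal sketch of the Goertzel recursion discussed above, in Python rather than the Scilab used in the work; the sampling rate and signal are illustrative. Because each desired frequency bin costs only one multiply-accumulate recursion per sample, the method needs far less memory and processing than a full FFT when only a few harmonics are monitored:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of the length-N signal x, via the Goertzel recursion."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Distorted 60 Hz signal with a 3rd-harmonic component (illustrative values).
fs, N = 3840, 3840                      # 64 samples per 60 Hz cycle, 1 s window
t = np.arange(N) / fs
x = 311.0 * np.sin(2*np.pi*60*t) + 25.0 * np.sin(2*np.pi*180*t)

for f in (60, 180, 300):
    k = int(round(f * N / fs))          # bin index of the harmonic of interest
    amplitude = 2.0 * np.sqrt(goertzel_power(x, k)) / N
    print(f"{f:3d} Hz  ->  amplitude ~ {amplitude:.1f} V")
```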
Abstract:
A significant observational effort has been directed at investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the Hubble rate H(z) (compiled by Meng et al. in 2015); distance moduli of 580 Type Ia supernovae (Union 2.1 compilation catalog, 2011); and observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB) through the so-called CMB/BAO ratio for six BAO peaks (one determined from 6dFGS survey data, two from the SDSS and three from WiggleZ). The statistical analysis used was χ² minimization (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω and δω0. These tests were applied to two parameterizations of the parameter ω in the dark energy equation of state, p = ωρ (here, p is the pressure and ρ is the energy density of the component). In one, ω is considered constant and less than -1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, which we call the GS model. This last model is based on arguments arising from the theory of cosmological inflation. For comparison, the ΛCDM model was also analyzed. Comparing cosmological models with different observations leads to different best-fit settings; thus, to rank the observational viability of the different theoretical models, we used two information criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our tests to provide the uncertainty of the parameters of each theoretical model. We found that the complementarity of the tests is necessary so that the parametric spaces are not degenerate. From the minimization process we found (at 68% confidence), for the XCDM model, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fits are Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059. Performing a marginalization we found (at 68% confidence), for the XCDM model, Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059.
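For reference, the standard definitions behind the statistics named above (textbook forms, not equations quoted from the dissertation; L_max is the maximum likelihood, k the number of free parameters and N the number of data points):

```latex
\chi^{2}(\theta)=\sum_{i}\frac{\left[\mathcal{O}_{i}^{\mathrm{obs}}-\mathcal{O}_{i}^{\mathrm{th}}(\theta)\right]^{2}}{\sigma_{i}^{2}},
\qquad
\mathrm{AIC}=-2\ln L_{\mathrm{max}}+2k,
\qquad
\mathrm{BIC}=-2\ln L_{\mathrm{max}}+k\ln N
```

For Gaussian errors, L_max ∝ exp(−χ²_min/2), so the model with the lowest AIC/BIC balances goodness of fit against the number of free parameters.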
Abstract:
Survival models deal with the modelling of time-to-event data. In certain situations, a share of the population may never be subject to the occurrence of the event; in this context, cure fraction models emerged. Among the models that incorporate a cured fraction, one of the best known is the promotion time model. In the present study we discuss hypothesis testing in the promotion time model with a Weibull distribution for the failure times of susceptible individuals. Hypothesis testing in this model may be performed using the likelihood ratio, gradient, score or Wald statistics. The critical values are obtained from asymptotic approximations, which may result in size distortions for finite sample sizes. This study proposes bootstrap corrections to the aforementioned tests and a bootstrap Bartlett correction to the likelihood ratio statistic in the Weibull promotion time model. Using Monte Carlo simulations, we compared the finite-sample performance of the proposed corrections with that of the usual tests. The numerical evidence favors the proposed corrected tests. An empirical application is presented at the end of the work.
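A minimal sketch of the parametric bootstrap idea behind the proposed corrections, applied here to a plain Weibull model (testing H0: shape = 1) rather than to the Weibull promotion time cure model of the dissertation; the data and sample size are illustrative:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
times = rng.weibull(1.3, size=80) * 2.0          # toy, fully observed failure times

def negloglik(params, x):
    shape, scale = params
    return -stats.weibull_min.logpdf(x, shape, scale=scale).sum()

def fit(x, shape_fixed=None):
    """Maximum-likelihood Weibull fit; fixing the shape gives the null (exponential) model."""
    if shape_fixed is None:
        res = optimize.minimize(negloglik, x0=[1.0, x.mean()], args=(x,),
                                bounds=[(1e-3, None), (1e-3, None)])
        return res.x, -res.fun
    res = optimize.minimize(lambda s: negloglik([shape_fixed, s[0]], x),
                            x0=[x.mean()], bounds=[(1e-3, None)])
    return np.array([shape_fixed, res.x[0]]), -res.fun

# Observed likelihood-ratio statistic for H0: shape = 1.
theta0, ll0 = fit(times, shape_fixed=1.0)
_,      ll1 = fit(times)
lr_obs = 2.0 * (ll1 - ll0)

# Parametric bootstrap: resample under the fitted null model and recompute the statistic;
# the bootstrap p-value replaces the asymptotic chi-square approximation.
B, lr_boot = 500, []
for _ in range(B):
    xb = stats.weibull_min.rvs(theta0[0], scale=theta0[1], size=times.size, random_state=rng)
    lr_boot.append(2.0 * (fit(xb)[1] - fit(xb, shape_fixed=1.0)[1]))

print("asymptotic p :", stats.chi2.sf(lr_obs, df=1))
print("bootstrap  p :", (np.sum(np.array(lr_boot) >= lr_obs) + 1) / (B + 1))
```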
Abstract:
People bedridden by paralysis or motor disability are subject to several problems caused by lack of movement. It is therefore necessary to use equipment that enables bedridden people to stand up and walk, so as to reduce the problems arising from lack of movement and the rehabilitation time, with a direct impact on their quality of life. The aim of this work is the development of an exoskeleton to produce locomotion for people with paralysis or motor impairment of the lower limbs, activated by the user without the help of third parties. To provide support, stability and safety to the user, a structure formed by four legs was adopted, each leg consisting of a parallel kinematic chain. The gait was obtained by combining the movement of two mechanisms: a crank-rocker, responsible for the oscillatory motion of the leg, and a cam-follower, responsible for moving the foot along the desired trajectory. To achieve this aim, a study was conducted on the types of exoskeletons for locomotion and rehabilitation of people with lower limb paralysis, together with a study of the movement of the lower limb joints. A mathematical model to obtain the desired path of the exoskeleton foot, the static model and the design of the structural elements are also presented. Finally, the simulation of the movements of a person during gait, experimental tests and a comparison with the gait of a person without disabilities are presented.
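To make the crank-rocker part of the gait mechanism concrete, a minimal kinematic sketch in Python: it solves the planar four-bar loop-closure equations numerically and traces a coupler point standing in for the foot attachment. The link lengths are hypothetical and are not the dimensions designed in this work:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative link lengths in metres: ground, crank, coupler, rocker.
# They satisfy the Grashof condition, so the crank rotates fully (crank-rocker behaviour).
r1, r2, r3, r4 = 0.30, 0.08, 0.28, 0.22

def loop_closure(unknowns, theta2):
    """Planar four-bar vector loop: crank + coupler - rocker - ground = 0."""
    theta3, theta4 = unknowns
    return [r2*np.cos(theta2) + r3*np.cos(theta3) - r4*np.cos(theta4) - r1,
            r2*np.sin(theta2) + r3*np.sin(theta3) - r4*np.sin(theta4)]

guess = [0.9, 1.8]                       # starting estimate for (theta3, theta4)
for theta2 in np.linspace(0.0, 2*np.pi, 12, endpoint=False):
    theta3, theta4 = fsolve(loop_closure, guess, args=(theta2,))
    guess = [theta3, theta4]             # warm-start the next crank position
    # A point fixed on the coupler (here its midpoint) stands in for the foot attachment:
    foot_x = r2*np.cos(theta2) + 0.5*r3*np.cos(theta3)
    foot_y = r2*np.sin(theta2) + 0.5*r3*np.sin(theta3)
    print(f"theta2 = {theta2:5.2f} rad   foot = ({foot_x: .3f}, {foot_y: .3f}) m")
```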
Abstract:
This study aims to evaluate the uncertainty associated with measurements made by an aneroid sphygmomanometer, a neonatal electronic balance and an electrocautery unit. To this end, repeatability tests were performed on all devices, followed by normality tests using the Shapiro-Wilk test; identification of the influence factors that affect the measurement result of each instrument; proposition of mathematical models to calculate the measurement uncertainty associated with the quantities evaluated for all equipment, and the calibration uncertainty for the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in the Java language to systematize the estimation of calibration and measurement uncertainties. A 2³ factorial design was proposed and carried out for the aneroid sphygmomanometer to investigate the effects of the temperature, patient and operator factors, and a 3² design for the electrocautery unit, in which the effects of the temperature and electrical output power factors were investigated. The expanded uncertainty associated with blood pressure measurement significantly reduced the extent of the patient classification ranges. In turn, the expanded uncertainty associated with mass measurement on the neonatal balance indicated a variation of about 1% in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant, inversely proportional effects of the temperature factor on the cutting and coagulation power values delivered by the electrocautery unit, and no significant effect of the investigated factors for the aneroid sphygmomanometer.
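A minimal sketch of how an expanded measurement uncertainty of the kind evaluated above can be composed (GUM-style Type A plus Type B combination with coverage factor k = 2). The readings and the 0.1 g resolution are made-up illustrative values; the thesis's actual models and Java program are not reproduced here:

```python
import numpy as np

# Hypothetical repeatability readings from a neonatal balance, in grams.
readings = np.array([3502.1, 3501.8, 3502.4, 3502.0, 3501.9, 3502.3,
                     3502.2, 3501.7, 3502.1, 3502.0])

# Type A standard uncertainty: experimental standard deviation of the mean.
u_repeat = readings.std(ddof=1) / np.sqrt(readings.size)

# Type B standard uncertainty from the display resolution (0.1 g assumed),
# modelled as a rectangular distribution of half-width resolution/2.
resolution = 0.1
u_resolution = (resolution / 2) / np.sqrt(3)

# Combined standard uncertainty (uncorrelated contributions) and
# expanded uncertainty with coverage factor k = 2 (~95 % coverage).
u_combined = np.sqrt(u_repeat**2 + u_resolution**2)
U_expanded = 2 * u_combined

print(f"mean = {readings.mean():.2f} g, U (k=2) = {U_expanded:.3f} g")
```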
Abstract:
The hydrocycloning operation aims to separate solid-liquid suspensions and liquid-liquid emulsions through the action of centrifugal force. Hydrocyclones are compact devices used in both clarification and thickening. They are applied in many areas, such as petrochemical and mineral processing, and offer advantages such as versatility and low maintenance cost. However, the demand to improve processes and reduce costs has motivated several equipment optimization studies. The filtering hydrocyclone is a non-conventional device developed at FEQUI/UFU with the objective of improving hydrocycloning separation efficiency. The purpose of this study is to evaluate the effect of the operating conditions of feed concentration and underflow diameter on the performance of a filtering geometry optimized for the minimization of energy costs. The filtration effect was investigated by comparing the performance of the Optimized Filtering Hydrocyclone (HCOF) and the Optimized Concentrator Hydrocyclone (HCO). Because the two hydrocyclones performed similarly, filtration did not have a significant effect on the performance of the HCOF. It was found that, in this geometry, decreasing the underflow diameter was very favorable to the thickening operation: the concentration of a quartzite suspension at 1.0% solids by volume was increased about 42 times when the 3 mm underflow diameter was used. Increasing the feed solids percentage reduced the energy expenditure, so that a minimum Euler number of 730 was achieved at CVA = 10.0% by volume. However, a greater amount of solids in suspension leads to lower equipment efficiency. Therefore, to minimize the underflow-to-throughput ratio and keep a high efficiency level, it is recommended to work with a dilute suspension (CVA = 1.0%) and the 3 mm underflow diameter (η = 67%); if it is necessary to work with a high feed concentration, the 5 mm underflow diameter provides a rise in efficiency. The HCO hydrocyclone was compared with the traditional Rietema family of hydrocyclones and presented advantages such as higher efficiency (34% higher on average) and lower energy costs (20% lower on average). Finally, efficiency curves and design equations were obtained for the HCO hydrocyclone, each with a satisfactory fit.
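For context, a commonly used form of the Euler number for hydrocyclones, which relates the pressure drop (the energy cost mentioned above) to the feed velocity; definitions vary slightly between authors, so treat this as a generic textbook form rather than the exact one adopted in the thesis:

```latex
Eu=\frac{\Delta P}{\tfrac{1}{2}\,\rho\,u_{c}^{2}},
\qquad
u_{c}=\frac{4Q}{\pi D_{c}^{2}},
\qquad
R_{L}=\frac{Q_{u}}{Q}
```

Here ΔP is the pressure drop across the hydrocyclone, ρ the liquid density, Q and Q_u the feed and underflow volumetric flow rates, and D_c the diameter of the cylindrical section; a lower Euler number at fixed throughput means lower energy expenditure, and R_L is the underflow-to-throughput ratio cited above.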
Abstract:
Multiple classifiers are processes that use a set of models, each obtained by applying a learning process to a given problem. They combine several individual classifiers, each of which is trained on data that generates a different decision boundary. The decisions produced by the individual classifiers contain errors, which the multiple classifier combines so as to reduce the total error. These approaches have gained increasing importance mainly because they achieve better performance than any of the component models, especially when the correlations between the errors made by the base models are low, and research in this area has grown into an important field. However, for the performance to exceed that obtained by each classifier individually, each of them must produce a different decision, giving rise to classification diversity. This diversity can be obtained both by using different data sets for the individual training of each classifier and by using different training parameters for the different classifiers. Even so, using multiple classifiers in real-world applications can be costly and time-consuming. Web development has been growing exponentially, as has the use of databases. Thus, combining the widespread use of the R language for statistical computing with the growing adoption of web technologies, a prototype was implemented to facilitate the use of multiple classifiers; more precisely, a web application was developed to allow testing of learning with multiple classifiers, using the PHP, R and MySQL technologies. With this application, it should be possible to test algorithms regardless of the software in which they are developed, not necessarily written in R. In this dissertation the expression "multiple classifiers" is used because it is the most common, although it is reductive and other more generic terms exist, such as multiple models and ensemble learning.
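A minimal sketch of the multiple-classifier idea described above, in Python with scikit-learn (the dissertation's prototype is a PHP/R/MySQL web application; this code only illustrates the concept). Diversity is induced by training each base classifier on a different bootstrap sample, and the individual decisions are combined by majority vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(25):
    # Each member sees a different bootstrap sample -> a different decision boundary.
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    members.append(DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx]))

# Combine the individual decisions by majority vote to reduce the total error.
votes = np.stack([m.predict(X_te) for m in members])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)

print("single tree :", accuracy_score(y_te, members[0].predict(X_te)))
print("ensemble    :", accuracy_score(y_te, ensemble_pred))
```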