964 results for experimental work
Abstract:
In this work, a solid-phase preconcentration system for Co2+ ions, followed by their determination by GFAAS, is proposed, in which a fractional factorial design and response surface methodology (RSM) were used to optimize the variables associated with the performance of the preconcentration system. The method is based on the extraction of cobalt as the Co2+-PAN (1:2) complex in a mini-column of polyurethane foam (PUF) impregnated with 1-(2-pyridylazo)-2-naphthol (PAN), followed by elution with HCl solution and determination by GFAAS. The chemical and flow variables studied were pH, buffer concentration, eluent concentration, and the preconcentration and elution flow rates. Results from the 2^(5-1) fractional factorial design showed that, based on analysis of variance (ANOVA), only pH, buffer concentration and the interaction (pH x buffer concentration) were statistically significant at the 95% confidence level. Under optimized conditions, the method provided an enrichment factor of 11.6-fold, limits of detection and quantification of 38 and 130 ng L-1, respectively, and a linear range from 0.13 to 10 µg L-1. The precision (n = 9), assessed as relative standard deviation (RSD), was 5.18 and 2.87% for cobalt concentrations of 0.3 and 3.0 µg L-1, respectively.
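The 2^(5-1) design mentioned above can be generated mechanically: four factors take all sign combinations and the fifth is confounded with their product. A minimal sketch in Python; the generator E = ABCD is an assumption, since the abstract does not state which defining relation was used:

```python
from itertools import product

# Coded factors, per the abstract: A = pH, B = buffer concentration,
# C = eluent concentration, D = preconcentration flow rate,
# E = elution flow rate (assumed here to be the generated factor).
def fractional_factorial_2_5_1():
    """Return the 16 runs of a 2^(5-1) design with generator E = ABCD."""
    runs = []
    for a, b, c, d in product((-1, +1), repeat=4):
        e = a * b * c * d  # defining relation I = ABCDE
        runs.append((a, b, c, d, e))
    return runs

design = fractional_factorial_2_5_1()
```

Each of the 16 runs satisfies the defining relation, which is what lets five factors be screened in half the runs of a full 2^5 design.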
Abstract:
The results shown in this thesis are based on selected publications of the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed combustors by developing and applying appropriate experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of the combustion and emission performance, and of the availability and controls, of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied in combination with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as a function of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and on an analysis of the fluctuation of the flue gas oxygen concentration.
The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers to apply oxygen-firing technology for CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis shows the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot). CFB technology has been scaled up successfully from industrial scale to utility scale during the last decade. The work shown in the thesis has, for its part, supported this development by producing new, detailed information on combustion under CFB conditions.
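The impulse-change experiments described above lend themselves to a very simple lumped picture: a fuel-feed impulse makes the flue gas O2 dip, and the dip decays as the extra char burns out. The sketch below is not the one-dimensional model of the thesis, just a hedged first-order illustration with hypothetical parameter values:

```python
import math

def o2_deviation(t, tau=30.0, depth=0.8):
    """Flue-gas O2 deviation (vol-%) after a fuel-feed impulse at t = 0,
    modelled as a first-order decay: the O2 level dips by `depth` and
    recovers with char-burnout time constant `tau` (seconds).
    Both parameter values are hypothetical."""
    if t < 0:
        return 0.0
    return -depth * math.exp(-t / tau)

# A slower-burning fuel (larger tau) shows a longer O2 recovery, which
# is the qualitative signature such experiments compare across fuels:
recovery_slow = o2_deviation(60.0, tau=60.0)
recovery_fast = o2_deviation(60.0, tau=15.0)
```

Fitting `tau` to the measured oxygen response is one way such a model yields a quantitative combustion-rate characteristic per fuel.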
Abstract:
Fuel elements of PWR-type nuclear reactors consist of rod bundles, arranged in a square array and held by spacer grids. The coolant flows mainly axially along the rods. Although such elements are laterally open, experiments are performed in closed test sections, which gives rise to subchannels with different geometries. In the present work, using a test section with two bundles of 4x4 pins each, experiments were performed to determine the friction and grid drag coefficients for the different subchannels and to observe the effect of the grids on the crossflow in cases of inlet flow maldistribution.
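As a rough illustration of the quantities being measured, friction and grid drag coefficients are conventionally reduced from pressure-drop data as below. This sketch uses the standard textbook definitions; the paper's exact data-reduction procedure may differ:

```python
def friction_coefficient(dp, d_h, length, rho, u):
    """Darcy friction factor of a subchannel from the axial pressure
    drop dp (Pa) over `length` (m), hydraulic diameter d_h (m),
    coolant density rho (kg/m^3) and axial velocity u (m/s):
        f = dp * d_h / (0.5 * rho * u**2 * length)"""
    return dp * d_h / (0.5 * rho * u ** 2 * length)

def grid_drag_coefficient(dp_grid, rho, u):
    """Spacer-grid (local loss) coefficient from the pressure drop
    measured across one grid: K = dp_grid / (0.5 * rho * u**2)."""
    return dp_grid / (0.5 * rho * u ** 2)
```

Measuring dp separately in each subchannel type is what lets per-subchannel coefficients be reported despite the shared bundle flow.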
Abstract:
The flow structure of cold and ignited jets issuing into a co-flowing air stream was studied experimentally using a laser Doppler velocimeter. Methane was employed as the jet fluid, discharging from circular and elliptic nozzles with aspect ratios varying from 1.29 to 1.60. The diameter of the circular nozzle was 4.6 mm, and the elliptic nozzles had approximately the same exit area as the circular nozzle. These non-circular nozzles were employed in order to increase the stability of attached jet diffusion flames. The time-averaged velocity and the r.m.s. value of the velocity fluctuation in the streamwise and transverse directions were measured over the range of co-flowing stream velocities corresponding to different modes of flame blowout, identified as either lifted or attached flames. On the basis of these measurements, attempts were made to explain the existence of an apparent optimum aspect ratio for the blowout of attached flames observed at higher co-flowing stream velocities. The insensitivity of the blowout limits of lifted flames to nozzle geometry, observed in our previous work at low co-flowing stream velocities, was also explained. Measurements of the fuel concentration at the jet centerline indicated that the mixing process was enhanced with the 1.38 aspect ratio jet compared with the 1.60 aspect ratio jet. On the basis of the experimental data obtained, it was suggested that the higher blowout limits of attached flames for an elliptic jet of 1.38 aspect ratio were due to higher entrainment rates.
Abstract:
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially; yet there is a dearth of systems that are portable, accurate and low cost. In this work, a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models to measure the end-effector pose relative to a world coordinate system, taking radial distortions into account. Scale factors and the image center are obtained with innovative techniques, making use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper and mounted on the sides of a 90-degree angle plate. Results show that the average accuracy achieved varies from 0.2 mm to 0.4 mm at distances from the target of 600 mm to 1000 mm, respectively, with different camera orientations.
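The camera model behind such a space-resection measurement can be sketched as a pinhole projection with polynomial radial distortion. This is a generic illustration; the parameter names fx, fy, cx, cy, k1, k2 are conventional, not the paper's notation:

```python
def project_point(xc, yc, zc, fx, fy, cx, cy, k1, k2=0.0):
    """Project a point given in the camera frame to pixel coordinates,
    applying focal lengths (fx, fy), principal point (cx, cy) and
    polynomial radial distortion coefficients (k1, k2)."""
    x, y = xc / zc, yc / zc              # normalized image coordinates
    r2 = x * x + y * y                   # squared radial distance
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    return fx * x * d + cx, fy * y * d + cy

# A point on the optical axis lands on the principal point,
# regardless of distortion:
u0, v0 = project_point(0.0, 0.0, 1.0, 800.0, 800.0, 320.0, 240.0, k1=0.1)
```

Space resection inverts this model: given many dot-grid correspondences, the pose and the intrinsic/distortion parameters are estimated together.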
Abstract:
Control of an industrial robot is mainly a problem of dynamics. It involves non-linearities, uncertainties and external perturbations that should be considered in the design of control laws. In this work, two control strategies, based on variable structure controllers (VSC) and on a PD control algorithm, are compared with respect to tracking errors in the presence of friction. The controllers' performance is evaluated by adding a static friction model. Simulations and experimental results show that it is possible to reduce tracking errors by using a model-based friction compensation scheme. A SCARA robot is used to illustrate the conclusions of this paper.
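A minimal single-joint simulation shows the effect the abstract reports: Coulomb/static friction leaves a PD controller with a steady-state error, and feeding the friction model forward removes most of it. All numeric values here are illustrative, not the SCARA parameters of the paper:

```python
def tracking_error(kp, kd, compensate, friction=0.5, q_ref=1.0,
                   inertia=1.0, dt=0.001, steps=5000):
    """Simulate a rigid joint with Coulomb/static friction under PD
    control, optionally adding model-based friction compensation.
    Returns the final position error after `steps` Euler steps."""
    q, dq = 0.0, 0.0
    for _ in range(steps):
        e = q_ref - q
        tau = kp * e - kd * dq
        if compensate:
            # feed the friction model forward along the desired motion
            tau += friction * (1.0 if e > 0 else -1.0 if e < 0 else 0.0)
        if dq != 0.0:
            net = tau - friction * (1.0 if dq > 0 else -1.0)
        else:
            # stiction: friction cancels the applied torque up to its limit
            net = tau - max(-friction, min(friction, tau))
        dq += (net / inertia) * dt
        q += dq * dt
    return abs(q_ref - q)

err_pd = tracking_error(100.0, 20.0, compensate=False)
err_comp = tracking_error(100.0, 20.0, compensate=True)
```

With these gains the uncompensated joint sticks near an error of friction/kp, while the compensated one converges essentially to the reference, which is the qualitative result the paper demonstrates.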
Abstract:
The aim of this master's thesis is to introduce what experimental research is and how researchers can use this method in a business-to-business context. The work was done by analyzing articles from four academic marketing journals from the years 1992-2012. The literature part introduces the nature of experimental research, its terminology and its design. It also discusses the limitations of experimental research and compares experimental research with quasi-experimental designs. The results part reviews how experimental research has been used in the business-to-business context over the past two decades. The analysis introduces the themes, samples, different kinds of variables and main findings. The work offers a good understanding of the nature of experimental research and useful data for organizing a real experimental study.
Abstract:
Since the most characteristic feature of paraquat poisoning is lung damage, a prospective controlled study was performed on excised rat lungs in order to estimate the intensity of the lesion after different doses. Twenty-five male, 2-3-month-old non-SPF Wistar rats, divided into 5 groups, received paraquat dichloride in a single intraperitoneal injection (0, 1, 5, 25, or 50 mg/kg body weight) 24 h before the experiment. Static pressure-volume (PV) curves were obtained in air- and saline-filled lungs; an estimator of surface tension and tissue work was computed by integrating the area of both curves and reported as work per ml of volume displacement. Paraquat induced a dose-dependent increase in inspiratory surface tension work that reached a significant two-fold magnitude at 25 and 50 mg/kg body weight (P<0.05, ANOVA), while sparing lung tissue. This kind of lesion was probably due to functional abnormalities of the surfactant system, as shown by the increase in hysteresis in the paraquat groups at the highest doses. Hence, paraquat poisoning provides a suitable model of acute lung injury with alveolar instability that can easily be used in experimental protocols of mechanical ventilation.
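The work estimator described above amounts to numerically integrating each PV curve and normalizing by the displaced volume. A hedged sketch of that computation (trapezoid rule; variable names and units are assumptions, not the paper's):

```python
def pv_work(volumes, pressures):
    """Area under a pressure-volume curve (trapezoid rule); with
    pressure in cmH2O and volume in ml this gives work in cmH2O*ml."""
    return sum(0.5 * (pressures[i] + pressures[i - 1])
               * (volumes[i] - volumes[i - 1])
               for i in range(1, len(volumes)))

def surface_tension_work_per_ml(volumes, p_air, p_saline):
    """Surface-tension work per ml: the air-filled curve reflects
    tissue plus surface forces, the saline-filled curve tissue only,
    so their difference, normalized by the volume displacement,
    estimates the surfactant-related component."""
    dv = volumes[-1] - volumes[0]
    return (pv_work(volumes, p_air) - pv_work(volumes, p_saline)) / dv
```

Applying this to the inspiratory limbs of both curves gives the work/ml figures the study compares across dose groups.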
Abstract:
The yam (Dioscorea sp.) is a tuber rich in carbohydrates, vitamins and mineral salts, besides several components that serve as raw material for medicines. It grows well in tropical and subtropical climates and develops well in zones with an annual rainfall of around 1300 mm; with proper cultural treatments, its productivity can exceed 30 t/ha. When harvested, the tubers have a moisture content of about 70% and are marketed "in natura" at ambient temperature, which can cause their fast deterioration. The present work studied the drying of yam in the form of slices of 1.0 and 2.5 cm thickness, as well as in the form of fillets of 1.0 x 1.0 x 5.0 cm, with the drying air temperature varying from 40 to 70°C. The process was modelled, allowing the drying to be simulated as a function of the drying air conditions and of the initial and final moisture of the product. The energy expenditure as a function of air temperature was also investigated. Drying in the form of fillets, with air temperatures between 45 and 50°C, was shown to be the most viable process when combining product quality and energy expenditure.
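A common first approximation for modelling such thin-layer drying data is the Lewis (exponential) model; the paper's own equations may differ, and the drying constant used below is hypothetical:

```python
import math

def moisture_ratio(t, k):
    """Lewis thin-layer model, MR = exp(-k*t), with t in hours and
    drying constant k (1/h) increasing with air temperature."""
    return math.exp(-k * t)

def drying_time(m0, m_eq, m_final, k):
    """Hours needed to dry from initial moisture m0 to m_final
    (dry-basis fractions), given the equilibrium moisture m_eq:
    solve MR = (m_final - m_eq) / (m0 - m_eq) for t."""
    mr = (m_final - m_eq) / (m0 - m_eq)
    return -math.log(mr) / k

# e.g. roughly 70% wet basis corresponds to m0 = 0.7/0.3 ~ 2.33 dry basis
t_dry = drying_time(2.33, 0.1, 0.2, k=0.5)
```

Fitting k at each air temperature is what lets drying time and, combined with dryer power, energy expenditure be compared across conditions.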
Abstract:
Poultry carcasses have to be chilled to reduce the central breast temperature from approximately 40 to 4 °C, which is crucial to ensure safe products. This work investigated the cooling of poultry carcasses by water immersion. Poultry carcasses were taken directly from an industrial processing plant and cooled in a pilot chiller, which was built to investigate the influence of the method and of the water stirring intensity on carcass cooling. A simplified empirical mathematical model was used to represent the experimental results. These results indicated clearly that understanding and quantifying the heat transfer between the carcass and the cooling water is crucial to improve processes and equipment. The proposed mathematical model is a useful tool to represent the dynamics of carcass cooling, and it can be used to compare different chiller operating conditions in industrial plants. Therefore, this study reports data and a simple mathematical tool for handling an industrial problem on which little information is available in the literature.
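A simplified empirical model of the kind described can be as small as a lumped (Newtonian) cooling law, with stirring intensity folded into a single coefficient. All values below are hypothetical, not the paper's fitted parameters:

```python
import math

def centre_temperature(t, t_water, t0=40.0, k=0.05):
    """Breast-centre temperature (deg C) after t minutes of immersion:
    T(t) = T_water + (T0 - T_water) * exp(-k*t). The lumped coefficient
    k (1/min) grows with water stirring intensity."""
    return t_water + (t0 - t_water) * math.exp(-k * t)

def time_to_target(t_target, t_water, t0=40.0, k=0.05):
    """Immersion time (min) needed to chill the centre to t_target."""
    return math.log((t0 - t_water) / (t_target - t_water)) / k

# Stronger stirring (larger k) shortens the 40 -> 4 deg C chill:
t_calm = time_to_target(4.0, 1.0, k=0.04)
t_stirred = time_to_target(4.0, 1.0, k=0.08)
```

Fitting k to pilot-chiller temperature records is one way such a model can then rank industrial operating conditions.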
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974).
3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials and, where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment, 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment, 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment.
Those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, the amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
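For readers unfamiliar with the analyses used above, the core of any such ANOVA is the ratio of between-group to within-group mean squares. A one-way sketch follows; the thesis used three- and four-way designs, which partition the sums of squares further:

```python
def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA: between-groups mean square
    divided by the within-groups (error) mean square."""
    observations = [x for g in groups for x in g]
    n, k = len(observations), len(groups)
    grand_mean = sum(observations) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# e.g. output scores under a low vs. high STM load (made-up numbers):
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to judge significance.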
Abstract:
The Dudding group is interested in the application of Density Functional Theory (DFT) to the development of asymmetric methodologies, and the focus of this dissertation is thus the integration of these approaches. Several interrelated subsets of computer-aided design and implementation in catalysis have been addressed during the course of these studies. The first aim rested upon the advancement of methodologies for the synthesis of biologically active C(1)-chiral 3-methylene-indan-1-ols, which in practice led to the use of a sequential asymmetric Yamamoto-Sakurai-Hosomi allylation/Mizoroki-Heck reaction sequence. An important aspect of this work was the utilization of ortho-substituted arylaldehyde reagents, which are known to be a problematic class of substrates for existing asymmetric allylation approaches. The second phase of my research program led to the further development of asymmetric allylation methods using o-arylaldehyde substrates for the synthesis of chiral C(3)-substituted phthalides. Apart from the de novo design of these chemistries in silico, which notably utilized water-tolerant, inexpensive, and relatively environmentally benign indium metal, this work represented the first computational study of a stereoselective indium-mediated process. Following from these discoveries was the advent of a related, yet catalytic, Ag(I)-catalyzed approach for preparing C(3)-substituted phthalides that, from a practical standpoint, was complementary in many ways. Not only did this new methodology build upon my earlier work with the integrated (experimental/computational) use of Ag(I)-catalyzed asymmetric methods in synthesis, it provided fundamental insight, arrived at through DFT calculations, regarding the Yamamoto-Sakurai-Hosomi allylation. The development of ligands for unprecedented asymmetric Lewis base catalysis, especially asymmetric allylations using silver and indium metals, followed as a natural extension of these earlier discoveries.
To this end, there followed the advancement of a family of disubstituted (N-cyclopropenium guanidine/N-imidazoliumyl substituted cyclopropenylimine) nitrogen adducts that has provided fundamental insight into chemical bonding and offered an unprecedented class of phase transfer catalysts (PTC) with far-reaching potential. Salient features of these disubstituted nitrogen species are the unprecedented finding of a cyclopropenium-based C-H•••π(aryl) interaction and the presence of a highly dissociated anion, which projected them to serve as catalysts promoting fluorination reactions. Attracted by the timely development of these disubstituted nitrogen adducts, my last studies as a PhD scholar addressed the utility of one of the synthesized adducts as a valuable catalyst for the benzylation of the Schiff base N-(diphenylmethylene)glycine ethyl ester. Additionally, the catalyst was applied to benzylic fluorination; emerging from this exploration was the successful fluorination of benzyl bromide and its derivatives in high yields. A notable feature of this protocol is the column-free purification of the product and the recovery of the catalyst for use in a further reaction sequence.
Abstract:
Affiliation: Louise Potvin: Groupe de recherche interdisciplinaire en santé, Faculté de médecine, Université de Montréal
Abstract:
My thesis examines four novels of the post-1960 era that draw on the genre of early twentieth-century proletarian literature. Building on recent research on working-class literature, I propose that Pynchon, Doctorow, Ondaatje and Sweatman bring to light the often neglected themes of this class while remaining aesthetically progressive and relevant. To explore the political and formal aspects of these novels, I use Allen Wilde's concept of "midfiction". This concept designates texts that use postmodern techniques and accept the primacy of surface, but that nevertheless attempt to be referential and to establish truths. The first chapter of my thesis proposes that the contemporary proletarian novels I have chosen use narrative strategies generally associated with postmodernism, such as metafiction, irony and an "incoherent" narrative voice, in order to contest the authority of dominant discourses, notably the official histories that tend to minimize the importance of labour movements. The second chapter examines how the novelists use mimetic strategies to achieve a credibility factor that allows the narratives to be tied to concrete historical realities. Referring back to my argument in the first chapter, I explain that these novelists use referentiality and "unreliable" and "incoherent" narrative voices in order to re-politicize the class struggle of the late nineteenth and early twentieth centuries and to question a strict sense of empirical history.
Drawing on evolutionary theories of sympathy, the third chapter proposes that the portrayals of characters from the wealthy ruling class illustrate that the social structures of the period foster a sense of entitlement and a lack of sympathy among the elites that lead them to adopt a quasi-colonial attitude toward the working class. The fourth chapter addresses the way the novels under consideration negotiate the relations between social class, subjectivity and space. This section analyzes how, on the one hand, the representation of space shows that power manifests itself for the benefit of the ruling class, and, on the other, how this space is reclaimed by radical and militant workers to advance their interests. The fifth chapter explores how the neo-proletarian novels ironically subvert the tropes of the earlier proletarian genre, expressing the political ambivalence and generalized cynicism of the late twentieth century.
Abstract:
The current study is an attempt to find a means of lowering oxalate concentration in individuals susceptible to recurrent calcium oxalate stone disease. The formation of renal stones composed of calcium oxalate is a complex process that remains poorly understood; treatment of idiopathic recurrent stone formers is quite difficult, and this area has attracted many researchers. The main objectives of this work are to study the effect of certain mono- and dicarboxylic acids on calcium oxalate crystal growth in vitro; to isolate and characterize oxalate-degrading bacteria; to study the biochemical effect of sodium glycollate and dicarboxylic acids on oxalate metabolism in experimental stone-forming rats; and to investigate the effect of dicarboxylic acids on oxalate metabolism in experimental hyperoxaluric rats. Oxalic acid is one of the most highly oxidized organic compounds widely distributed in the diets of man and animals, and ingestion of plants that contain high concentrations of oxalate may lead to intoxication. Excessive ingestion of dietary oxalate may lead to hyperoxaluria and calcium oxalate stone disease. The formation of calcium oxalate stones in the urine depends on the saturation level of both calcium and oxalate. Thus, the management of one or both of these ions in individuals susceptible to urolithiasis appears to be important. The control of endogenous oxalate synthesis from its precursors in hyperoxaluric situations is likely to yield beneficial results and can be a useful approach in the medical management of urinary stones. A variety of compounds have been investigated to curtail endogenous oxalate synthesis, which is a crucial factor; most of these compounds have not proved to be effective in vivo, and some of them are not free from toxic effects.
The non-operative management of stone disease has been practiced since ancient India in the three famous indigenous systems of medicine, Ayurveda, Unani and Siddha, and has proved to be effective. However, the efficacy of most of these substances is still questionable and demands further study. Man, as well as other mammals, cannot metabolize oxalic acid. An excessive intake of oxalic acid, whether from oxalate-rich food or from its major metabolic precursors glycollate, glyoxylate and ascorbic acid, can lead to acute oxalate toxicity and to increased levels of circulating oxalate, which can result in a variety of diseases including renal failure and oxalate lithiasis. The ability to enzymatically degrade oxalate to the less noxious substances formate and CO2 could benefit a great number of individuals, including those afflicted with hyperoxaluria and calcium oxalate stone disease.