1000 results for Radioanalytical methods
Abstract:
The purpose of this work was to develop an analytical separation method for investigating and analyzing the reaction products formed between an oxidizing agent and a solvent used in a certain manufacturing process. In addition, the aim was to study the safety of the process conditions. The literature section discusses various organic peroxides, their applications, and the considerations related to their use. It also reviews the most common analytical methods that have been used to analyze different peroxides. These methods have mostly been applied to liquid samples; gas and solid samples have been analyzed less often. In the experimental section, an identification method for peroxide compounds was developed on the basis of the literature, and the process samples were examined. Iodometric titration and an HPLC-UV-MS method were chosen as the analytical methods. In addition, test strips suitable for peroxide measurement were used. The study showed that, according to the iodometric titration and the test strips, the samples contained small amounts of peroxides one week after the peroxide addition. According to the HPLC-UV-MS analyses, the analysis of the samples was interfered with by cellulose, which was found in every sample.
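As an aside, the iodometric determination mentioned above reduces to a standard titration calculation; a minimal sketch of the conventional peroxide-value formula (this is the generic textbook expression, not the thesis's procedure, and all numbers are illustrative):

```python
def peroxide_value(v_sample_ml, v_blank_ml, normality, sample_mass_g):
    """Peroxide value in milliequivalents of active oxygen per kg of
    sample, from thiosulfate titrant volumes for sample and blank."""
    return (v_sample_ml - v_blank_ml) * normality * 1000.0 / sample_mass_g

# Illustrative figures: 1.8 mL titrant vs 0.1 mL blank, 0.01 N, 5 g sample.
pv = peroxide_value(1.8, 0.1, 0.01, 5.0)  # meq O2 / kg
```

The 1000 factor converts grams to kilograms; the blank correction removes the titrant consumed by the reagents alone.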
Abstract:
Statistical analyses of measurements that can be described by statistical models are of essence in astronomy and in scientific inquiry in general. The sensitivity of such analyses, modelling approaches, and the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential in estimating the properties of the population of such planets, and in the race to detect Earth-analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness through various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
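As a toy illustration of the posterior sampling such Bayesian analyses rely on, the following random-walk Metropolis sketch estimates a single velocity offset from noisy measurements (a generic textbook algorithm with made-up data, not the specific techniques or Doppler data of this work):

```python
import math
import random

def metropolis(log_post, x0, step, n_samples, seed=42):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)       # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy data: noisy measurements of a constant velocity offset, sigma known.
data = [9.8, 10.3, 9.9, 10.1, 10.4]
sigma = 0.3
log_post = lambda mu: -sum((d - mu) ** 2 for d in data) / (2 * sigma ** 2)  # flat prior
draws = metropolis(log_post, x0=0.0, step=0.5, n_samples=5000)
mean_mu = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

With a flat prior and Gaussian noise, the posterior mean approaches the sample mean of the data (about 10.1 here).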
Abstract:
Different methods for lymphatic mapping in dogs, such as infusing tissues with vital dyes or radioactive substances, have been studied, aiming at the early detection of lymph node metastasis. Early detection would allow therapeutic measures to be anticipated and, consequently, prolong survival and improve the quality of life of the patients. The objectives of this experiment were to locate the nodes responsible for draining the uterine body and horns and to try to establish the relationship between the uterus and the medial iliac lymph nodes, contributing to the early diagnosis and prognosis of uterine disorders. We studied 15 female dogs divided into two groups (5 cadavers and 10 bitches undergoing intraoperative ovariohysterectomy). The dye used was patent blue V (Patent Bleu V®). It was observed that the iliac lymph node chain receives much of the drainage of the uterine horns. This method should be considered for more reliable studies of uterine health. This information suggests that evaluating these lymph nodes will allow correlating changes in their physiological status with uterine pathologies.
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the forming of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index that correlate well with the ground truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images.
Finally, a framework for classifying dirt particles in dried pulp sheets, including the semisynthetic ground truth generation, feature selection, and performance comparison of the state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on the semisynthetic and real-world pulp sheet images. These four contributions assist in developing an integrated factory-level vision-based process control.
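The curl index mentioned in the first contribution is conventionally defined as the ratio of a fiber's contour length to its end-to-end distance, minus one; a minimal sketch on an extracted centerline polyline (this is the standard definition, assumed here, not necessarily the thesis's exact implementation):

```python
import math

def curl_index(points):
    """Curl index of a fiber centerline given as (x, y) points:
    contour length over end-to-end distance, minus one.
    A perfectly straight fiber has curl index 0."""
    contour = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    end_to_end = math.dist(points[0], points[-1])
    return contour / end_to_end - 1.0

straight = curl_index([(0, 0), (1, 0), (2, 0)])   # 0.0
bent = curl_index([(0, 0), (1, 0), (1, 1)])       # sqrt(2) - 1
```

In practice the centerline points would come from skeletonizing the segmented fiber in the microscopic image; that segmentation step is outside this sketch.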
Abstract:
Stochastic approximation methods for stochastic optimization are considered. The main stochastic approximation methods are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer-Wolfowitz algorithm with adaptive rules for both, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model of, and solution to, the retailer's profit optimization problem is suggested, and an application of the SQG algorithm to optimization problems with objective functions given in the form of an ordinary differential equation is considered.
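Of the algorithms listed, SPSA is compact enough to sketch: it estimates the entire gradient from just two function evaluations by perturbing all coordinates simultaneously with random ±1 signs. A minimal illustration (the gain-sequence exponents 0.602 and 0.101 are the commonly recommended defaults; the test function and gains are illustrative):

```python
import random

def spsa(f, theta, n_iter=500, a=0.2, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation.
    Each iteration uses only two evaluations of f, regardless of
    dimension, thanks to the simultaneous Bernoulli perturbation."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                     # step-size gain sequence
        ck = c / k ** 0.101                     # perturbation gain sequence
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = f(plus) - f(minus)                  # two evaluations total
        theta = [t - ak * g / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Smoke test on a noiseless quadratic with minimum at (3, -1).
opt = spsa(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2, [0.0, 0.0])
```

The same loop works when f is only available as a noisy measurement, which is the setting stochastic approximation targets.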
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen binding fragment (Fab) display on filamentous phages in the third article and, in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (ScFv), in order to identify the most suitable display platform for the library at hand. As a result of the studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed.
The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared to traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from a comparison of the display formats was that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as ScFv-p3Δ yielded the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how in directed evolution and contains useful information for scientists in the field to support their undertakings.
Abstract:
This study focused on identifying various system boundaries and evaluating methods of estimating the energy performance of biogas production. First, the output-input ratio method used for evaluating energy performance within the chosen system boundaries was reviewed. Secondly, ways to assess the efficiency of biogas use and the parasitic energy demand were investigated. Thirdly, an approach for comparing biogas production to other energy production methods was evaluated. Data from an existing biogas plant, located in Finland, was used for the evaluation of the methods. The results indicate that calculating and comparing the output-input ratios (Rpr1, Rpr2, Rut, Rpl and Rsy) can be used in evaluating the performance of a biogas production system. In addition, the parasitic energy demand calculations (w) and the efficiency of utilizing the produced biogas (η) provide detailed information on the energy performance of the biogas plant. Furthermore, Rf and the energy output in relation to the total solid mass of feedstock (FO/TS) are useful in comparing biogas production with other energy recovery technologies. In conclusion, for the comparability of biogas plants it is essential that their energy performance be calculated in a more consistent manner in the future.
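The quantities above are simple ratios once a system boundary fixes what counts as input and output; a minimal sketch of the generic forms (the abstract's specific boundary definitions Rpr1, Rpr2, Rut, Rpl and Rsy are not reproduced here, and the figures are illustrative, not from the Finnish plant):

```python
def output_input_ratio(energy_output, energy_input):
    """Generic output-input ratio R = E_out / E_in; both in the
    same units (e.g. MWh/a) for the chosen system boundary."""
    return energy_output / energy_input

def parasitic_share(internal_use, gross_output):
    """Parasitic energy demand w: share of produced energy that
    the plant consumes itself."""
    return internal_use / gross_output

def utilization_efficiency(useful_output, biogas_energy):
    """Efficiency of utilizing the produced biogas (the abstract's η)."""
    return useful_output / biogas_energy

# Illustrative annual figures in MWh.
R = output_input_ratio(4200.0, 1200.0)        # energy out per energy in
w = parasitic_share(350.0, 4200.0)            # fraction of output used internally
eta = utilization_efficiency(3400.0, 4200.0)  # fraction of biogas energy made useful
```

The point of the study is precisely that R only becomes comparable between plants when the boundary behind E_out and E_in is defined consistently.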
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for the proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. The selection of appropriate cost drivers is essential for allocating costs accurately and reliably and for obtaining the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method supported by a theoretical, literature-based method. The main research tools featured in the study include simple and multiple regression analyses, which, together with the literature- and observation-based practicality analysis, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while increasing measurement costs, complexity, and the load of use at the same time. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before adopting them.
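Validating a cost driver with simple regression boils down to fitting cost against the driver and checking how much of the cost variation the driver explains; a minimal ordinary-least-squares sketch (the driver and cost figures are invented for illustration, not the case company's data):

```python
def ols_simple(x, y):
    """Simple linear regression y ≈ a + b*x by ordinary least squares.
    Returns intercept a, slope b, and the coefficient of
    determination R², the usual driver-validity indicator."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical example: transport cost (EUR) vs. number of delivery drops.
drops = [10, 14, 9, 20, 16]
cost = [500, 700, 470, 1000, 800]
a, b, r2 = ols_simple(drops, cost)
```

A high R² together with a practicality check (measurement cost, ease of use) is what the study uses to rank driver candidates; multiple regression extends the same idea to driver combinations.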
Abstract:
Today, lean philosophy has gathered a lot of popularity and interest in many industries. This customer-oriented philosophy helps to understand the customer's value creation, which can be used to improve efficiency. A comprehensive study of lean and lean methods in the service industry was carried out in this research. In the theoretical part, lean philosophy is studied at different levels, which helps to understand its diversity. To support lean, this research also presents the basic concepts of process management. Lastly, the theoretical part presents a development model to support process development in a systematic way. The empirical part of the study was performed by carrying out experimental measurements during the service center's product return process and by analyzing the resulting data. The measurements were used to map out factors that have a negative influence on the process flow. Several development propositions were discussed for removing these factors. Problems occur mainly due to challenges in controlling customers and due to the lack of responsibility and continuous improvement at the operational level. The development propositions concern factors such as changes in the service center's physical environment, standardization of work tasks, and training. These factors will remove waste in the product return process and support the idea of continuous improvement.
Abstract:
The success of conservation systems such as no-till depends on adequate soil cover throughout the year, which is possible through the use of cover crops. For this purpose, species belonging to the genus Urochloa have stood out by virtue of their hardiness and tolerance to drought. With ground cover for the no-till system in mind, the objective was to evaluate the establishment of two species of the genus Urochloa under three sowing methods, their weed suppression, and the sensitivity of these forages to glyphosate. The study design was a randomized block with a 2 x 3 x 3 factorial arrangement, in which factor A was composed of Urochloa ruziziensis and the Urochloa hybrid CIAT 36087 cv. Mulato II, factor B was formed by the sowing methods (sown without embedding, sown with light embedding, and sown in rows), and factor C was composed of three doses of glyphosate (0.975, 1.625 and 2.275 kg ha-1 of acid equivalent). To determine weed suppression, biomass yield and soil cover by brachiaria and by weeds were assessed at 30, 60, 90, 120 and 258 days after sowing. Visual assessment of the desiccation efficiency was performed at 7 and 14 days after herbicide application. It is concluded that embedding Urochloa seeds stands out in relation to sowing on the soil surface. Urochloa ruziziensis is more efficient in dry weight yield and weed suppression, in addition to being more sensitive to the glyphosate herbicide.
Abstract:
Mobility of atrazine in soil has contributed to the detection of levels above the legal limit in surface water and groundwater in Europe and the United States. The use of new formulations can reduce or minimize the impacts caused by the intensive use of this herbicide in Brazil, mainly in regions with higher agricultural intensification. The objective of this study was to compare the leaching of a commercial formulation of atrazine (WG) with a controlled release formulation (xerogel) using bioassay and chromatographic methods of analysis. The experiment was a split plot randomized block design with four replications, in a (2 x 6) + 1 arrangement. The formulations of atrazine (WG and xerogel) were allocated to the main plots, and the herbicide concentrations (0, 3200, 3600, 4200, 5400 and 8000 g ha-1) to the subplots. Leaching was determined comparatively by using bioassays with oat and chromatographic analysis. The results showed a greater concentration of the herbicide in the topsoil (0-4 cm) in the treatment with the xerogel formulation in comparison with the commercial formulation, which contradicts the results obtained with bioassays, probably because the amount of herbicide available for uptake by plants in the xerogel formulation is less than that available in the WG formulation.
Abstract:
A field experiment was conducted for two consecutive years to study the effect of fertilizer application methods and of inter- and intra-row weed-crop competition durations on the density and biomass of different weeds and on the growth, grain yield and yield components of maize. The experimental treatments comprised two fertilizer application methods (side placement and below-seed placement) and inter- and intra-row weed-crop competition durations of 15, 30, 45, and 60 days after emergence (DAE) each, as well as throughout the crop growing period. Fertilizer application method did not affect weed density, weed biomass, or the grain yield of maize. Below-seed fertilizer placement generally resulted in a lower mean weed dry weight and a greater crop leaf area index, growth rate, grain weight per cob and 1000-grain weight. The minimum number of weeds and weed dry weight were recorded with inter-row or intra-row weed-crop competition for 15 DAE. The number of cobs per plant, grain weight per cob, 1000-grain weight and grain yield decreased with an increase in both inter-row and intra-row weed-crop competition durations. Maximum mean grain yields of 6.35 and 6.33 t ha-1 were recorded with inter-row and intra-row weed competition for 15 DAE, respectively.
Abstract:
The purpose of this study was to explore the software development methods and quality assurance practices used by the South Korean software industry. Empirical data was collected by conducting a survey that focused on three main parts: software life cycle models and methods; software quality assurance, including quality standards; and the strengths and weaknesses of the South Korean software industry. The results of the completed survey showed that the use of agile methods is slightly surpassing the use of traditional software development methods. The survey also revealed the interesting result that almost half of the South Korean companies do not use any software quality assurance plan in their projects. Regarding the state of the South Korean software industry, a large number of the respondents thought that, despite the weaknesses, software development in South Korea will improve in the future.
Abstract:
The purpose of this thesis was to study the design of demand forecasting processes and the management of demand. In the literature review, different processes were identified and forecasting methods and techniques were examined. The role of the bullwhip effect in the supply chain was also identified, along with how to manage it through information-sharing operations. The empirical part of the study first describes the current situation and challenges in the case company. It then introduces a new way to handle demand through target budget creation and shows how information sharing for 5 products and a few customers would bring benefits to the company. A new S&OP process was also created within this study, along with an organization for it.
Neuroethologic differences in sleep deprivation induced by the single- and multiple-platform methods
Abstract:
It has been proposed that the multiple-platform method (MP) for desynchronized sleep (DS) deprivation eliminates the stress induced by social isolation and by the restriction of locomotion in the single-platform (SP) method. MP, however, induces a higher increase in plasma corticosterone and ACTH levels than SP. Since deprivation is of heuristic value to identify the functional role of this state of sleep, the objective of the present study was to determine the behavioral differences exhibited by rats during sleep deprivation induced by these two methods. All behavioral patterns exhibited by a group of 7 albino male Wistar rats submitted to 4 days of sleep deprivation by the MP method (15 platforms, spaced 150 mm apart) and by 7 other rats submitted to sleep deprivation by the SP method were recorded in order to elaborate an ethogram. The behavioral patterns were quantitated in 10 replications by naive observers using other groups of 7 rats each submitted to the same deprivation schedule. Each quantification session lasted 35 min and the behavioral patterns presented by each rat over a period of 5 min were counted. 
The results obtained were: a) rats submitted to the MP method changed platforms at a mean rate of 2.62 ± 1.17 platforms h-1 animal-1; b) the number of episodes of noninteractive waking patterns for the MP animals was significantly higher than that for SP animals (1077 vs 768); c) additional episodes of waking patterns (26.9 ± 18.9 episodes/session) were promoted by social interaction in MP animals; d) the cumulative number of sleep episodes observed in the MP test (311) was significantly lower (chi-square test, 1 d.f., P<0.05) than that observed in the SP test (534); e) rats submitted to the MP test did not show the well-known increase in ambulatory activity observed after the end of the SP test; f) comparison of 6 MP and 6 SP rats showed a significantly shorter latency to the onset of DS in MP rats (7.8 ± 4.3 and 29.0 ± 25.0 min, respectively; Student t-test, P<0.05). We conclude that the social interaction occurring in the MP test generates additional stress since it increases the time of forced wakefulness and reduces the time of rest promoted by synchronized sleep.
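The episode-count comparison in (d) can be reproduced with a standard goodness-of-fit chi-square against an equal-split expectation (the authors' exact test construction is not specified in the abstract, so this form is an assumption):

```python
def chi_square_1df(count_a, count_b):
    """Goodness-of-fit chi-square for two counts against an
    equal-split expectation (1 degree of freedom)."""
    expected = (count_a + count_b) / 2.0
    return sum((o - expected) ** 2 / expected for o in (count_a, count_b))

# Cumulative sleep episodes reported in the abstract: MP test vs SP test.
chi2 = chi_square_1df(311, 534)
significant = chi2 > 3.841  # critical value at 1 d.f., alpha = 0.05
```

With these counts the statistic is about 58.9, far above the 3.841 critical value, consistent with the reported P<0.05 difference between the two deprivation methods.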