984 results for Diffusion measurements
Abstract:
The objective of this study was to predict by means of Artificial Neural Network (ANN), multilayer perceptrons, the texture attributes of light cheesecurds perceived by trained judges based on instrumental texture measurements. Inputs to the network were the instrumental texture measurements of light cheesecurd (imitative and fundamental parameters). Output variables were the sensory attributes consistency and spreadability. Nine light cheesecurd formulations composed of different combinations of fat and water were evaluated. The measurements obtained by the instrumental and sensory analyses of these formulations constituted the data set used for training and validation of the network. Network training was performed using a back-propagation algorithm. The network architecture selected was composed of 8-3-9-2 neurons in its layers, which quickly and accurately predicted the sensory texture attributes studied, showing a high correlation between the predicted and experimental values for the validation data set and excellent generalization ability, with a validation RMSE of 0.0506.
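As a rough illustration of the approach, a multilayer perceptron with the reported 8-3-9-2 layout (8 inputs, hidden layers of 3 and 9 neurons, 2 outputs) can be trained by plain back-propagation. The sketch below uses synthetic data in place of the instrumental and sensory measurements; only the layer sizes follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 8 instrumental texture inputs and the 2
# sensory outputs (consistency, spreadability); only the 8-3-9-2 layer
# sizes follow the abstract, the data are invented.
X = rng.normal(size=(60, 8))
Y = np.tanh(X @ rng.normal(size=(8, 2)))          # arbitrary smooth targets

sizes = [8, 3, 9, 2]
Ws = [rng.normal(scale=0.5, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(X):
    acts = [X]
    for k, (W, b) in enumerate(zip(Ws, bs)):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if k < len(Ws) - 1 else z)   # linear output
    return acts

def rmse():
    return float(np.sqrt(np.mean((forward(X)[-1] - Y) ** 2)))

rmse0 = rmse()
lr = 0.05
for _ in range(500):                                # plain back-propagation
    acts = forward(X)
    delta = (acts[-1] - Y) / len(X)                 # output-layer error signal
    for k in reversed(range(len(Ws))):
        gW, gb = acts[k].T @ delta, delta.sum(axis=0)
        if k > 0:
            delta = (delta @ Ws[k].T) * (1 - acts[k] ** 2)  # tanh derivative
        Ws[k] -= lr * gW
        bs[k] -= lr * gb

rmse_final = rmse()
```

Training drives the RMSE down on this toy set, mirroring the role of the validation RMSE (0.0506) reported in the abstract.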
Abstract:
This study aimed to compare wheat flour quality results assessed by the new Wheat Gluten Quality Analyser (WGQA) with those obtained by the extensigraph and farinograph. Fifty-nine wheat samples were evaluated for protein and gluten contents; the rheological properties of gluten and wheat flour were assessed using the WGQA and the extensigraph/farinograph methods, respectively, in addition to the baking test. Principal component analysis (PCA) and linear regression were used to evaluate the results. The parameters of energy and maximum resistance to extension determined by the extensigraph and WGQA showed an acceptable level of linear correlation, within the range from 0.6071 to 0.6511. The PCA results obtained using WGQA and the other rheological apparatus showed values similar to those expected for wheat flours in the baking test. Although all equipment used was effective in assessing the behavior of strong and weak flours, the results for medium strength wheat flour varied. The WGQA was shown to require a smaller sample amount and to be faster and easier to use than the other instruments.
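The analysis pipeline (PCA plus a linear correlation between instruments) can be sketched as follows; the data are synthetic stand-ins for the 59 wheat samples, and the built-in correlation of roughly 0.6-0.7 merely mimics the range reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for 59 samples x three rheological parameters
# (e.g. WGQA energy, extensigraph Rmax, one unrelated variable).
n = 59
wgqa_energy = rng.normal(100, 15, n)
ext_rmax = 0.65 * wgqa_energy + rng.normal(0, 12, n)   # built-in r ~ 0.6
data = np.column_stack([wgqa_energy, ext_rmax, rng.normal(size=n)])

# PCA via SVD of the standardized data matrix
Z = (data - data.mean(0)) / data.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance fraction per component

# Linear correlation between the two instruments, as in the 0.6071-0.6511
# range reported for energy and maximum resistance to extension
r = float(np.corrcoef(wgqa_energy, ext_rmax)[0, 1])
```

The first principal component captures the variance shared by the two correlated instruments, which is the pattern PCA is used to expose in the study.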
Abstract:
The two central goals of this master's thesis are to serve as a guidebook on the determination of uncertainty in efficiency measurements and to investigate sources of uncertainty in efficiency measurements in the field of electric drives by means of a literature review, mathematical modeling and experiments. The influence of individual sources of uncertainty on the total instrumental uncertainty is investigated with the help of mathematical models derived for a balance and a direct air cooled calorimeter. The losses of a frequency converter and an induction motor are measured with the input-output method and a balance calorimeter at 50 % and 100 % loads. Software linking features of Matlab and Excel was created to process measurement data, calculate uncertainties, and calculate and visualize results. The uncertainties are combined with both the worst-case and the realistic perturbation method (RPM), and distributions of uncertainty by source are shown based on experimental results. A comparison of the calculated uncertainties suggests that the balance calorimeter determines losses more accurately than the input-output method, with a relative RPM uncertainty of 1.46 % compared to 3.78-12.74 %, at a 95 % level of confidence and induction motor efficiencies of 93 % or higher. As some principles in uncertainty analysis are open to interpretation, the views and decisions of the analyst can have a noticeable influence on the uncertainty in the measurement result.
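The two combination rules mentioned above can be sketched numerically. The worst-case method adds component uncertainties linearly, while a realistic combination (here the standard root-sum-of-squares rule of the GUM, used only as a stand-in for the thesis's realistic perturbation method) adds them in quadrature. The component values below are invented.

```python
import math

# Hypothetical component standard uncertainties (in % of measured losses);
# the values are illustrative, not those from the thesis.
components = [0.8, 0.5, 0.3, 0.2]

worst_case = sum(abs(u) for u in components)        # linear (worst-case) sum
rss = math.sqrt(sum(u**2 for u in components))      # root-sum-of-squares

# Expanded uncertainty at ~95 % confidence using coverage factor k = 2
U95 = 2 * rss
```

The quadrature sum is always at most the linear sum, which is why worst-case analysis gives the more pessimistic bound.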
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better, more accurate or less disrupting, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. For example, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, measuring how 'close' to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible.
Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
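For reference, the robustness of incompatibility admits a compact convex-geometric formulation. One common way of writing it (generic notation, not necessarily the thesis's conventions) is as the least relative amount of noise that renders the pair compatible:

```latex
R(\mathsf{A},\mathsf{B})
  = \inf\Bigl\{\, t \ge 0 \;\Bigm|\;
    \tfrac{1}{1+t}\,\mathsf{A} + \tfrac{t}{1+t}\,\mathsf{N}
    \ \text{and}\
    \tfrac{1}{1+t}\,\mathsf{B} + \tfrac{t}{1+t}\,\mathsf{N}'
    \ \text{are compatible for some noise devices}\ \mathsf{N},\mathsf{N}' \Bigr\}
```

A compatible pair has R = 0, and larger values mean the pair tolerates more admixed noise before joint measurability is reached.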
Abstract:
Preventive maintenance of frequency converters has been based on pre-planned replacement of wearing or ageing components. Exchange intervals follow component lifetime expectations, which are based on empirical knowledge or on schedules defined by the manufacturer. However, the lifetime of a component can vary significantly, because drives are used in very different operating environments and applications. The main objective of the research was to provide information on how the inverter's operating condition can be measured reliably under field conditions. At first, the research focused on critical components such as current transducers, IGBTs and the DC link capacitor bank, because the aging of these components has already been identified. Of these, the DC link capacitor measurement method was selected for closer examination. With this method, the total capacitance and its total series resistance can be measured. The suitability of the measuring procedure was estimated on the basis of practical measurements. The research was made using a so-called triangulation method, including a literature review, simulations and practical measurements. Based on the results, the new measurement method seems suitable, with some reservations, for practical measurements. However, the measuring method should be further developed in order to improve its reliability.
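A capacitance and series-resistance measurement of the kind described above can be illustrated with a simple discharge test. The sketch below is a hypothetical example, not the thesis's method: it assumes a capacitor bank with series resistance discharged through a known load resistor, recovers the time constant from a log-linear fit, and recovers the series resistance from the initial voltage step. All component values are invented.

```python
import numpy as np

# Hypothetical discharge test of a DC-link capacitor bank through a known
# load resistor; all component values are illustrative.
C_true, esr_true, R_load = 2.2e-3, 0.05, 10.0    # F, ohm, ohm
V0 = 540.0                                        # charged DC-link voltage, V

t = np.linspace(0.0, 0.2, 2000)                   # s
tau = (R_load + esr_true) * C_true
v_cap = V0 * np.exp(-t / tau)                     # internal capacitor voltage
i = v_cap / (R_load + esr_true)                   # discharge current
v_meas = v_cap - i * esr_true                     # terminal voltage incl. ESR drop

# Estimate tau from a log-linear fit, then recover C; series resistance
# from the instantaneous voltage step at t = 0
slope = np.polyfit(t, np.log(v_meas), 1)[0]
tau_est = -1.0 / slope
esr_est = float((V0 - v_meas[0]) / i[0])
C_est = float(tau_est / (R_load + esr_est))
```

On this noise-free synthetic data both parameters are recovered exactly; in field measurements the same fit would be applied to noisy sampled waveforms.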
Abstract:
The thesis focuses on light water reactors (pressurized water reactors, boiling water reactors) and measurement techniques for the basic thermal hydraulics parameters used in a nuclear power plant. The goal of this work is the development of laboratory exercises for basic nuclear thermal hydraulics measurements.
Abstract:
Currency is something people deal with every day in their lives. Contemporary society very much revolves around currencies. Even though technological development has been rapid, the principle of currency has stayed relatively unchanged for a long time. Bitcoin is a digital currency that introduced an alternative to other digital currencies and to traditional physical currencies. Bitcoin is peer-to-peer and open source, and it erases the need for a third party in transactions. Bitcoin has gained a certain fame since its inception, but it has not established itself as a common currency in the world. The purpose of this study was to analyse what potential Bitcoin has to become a widely accepted currency in day-to-day transactions. The main research question was divided into three sub-questions:

• What kind of a process is the diffusion of new innovations?
• What kinds of factors speak for the wider adoption of Bitcoin?
• What kinds of factors speak against the wider adoption of Bitcoin?

The purpose of the study was approached by using diffusion of innovations as the theoretical framework. The four elements in diffusion of innovations are innovation, communication, time, and social system. The theoretical framework is applied to Bitcoin, and the research questions are answered by analysing Bitcoin's potential diffusion prospects. The body of research data consisted of media texts and statistics. Content analysis was the research method. The main findings of the study are that Bitcoin has clear strengths, but it faces a large amount of uncertainty. Bitcoin's strong area is transactions: they are fast, easy, and cheap. From the innovation diffusion perspective Bitcoin is still relatively unknown, and the general public's attitudes towards it are sceptical. The research findings suggest that Bitcoin has potential demand especially when the financial system of a region is dysfunctional, or when there is a financial crisis.
Bitcoin is not widely trusted, and the majority of people do not see a reason to start using it in the future. A large number of people associate it with illegal activities, and people are largely unaware of what Bitcoin is or what its strengths and weaknesses are. Bitcoin is an innovative alternative currency. However, unless people see a major need for Bitcoin due to a financial crisis or dysfunction in the financial system, Bitcoin will not become much more widespread than it is today. Bitcoin's underlying technology can be harnessed for multiple uses; developments in that field are something future researchers could look into.
Abstract:
The aim of this master's thesis is to develop a two-dimensional drift-diffusion model, which describes charge transport in organic solar cells. The main benefit of a two-dimensional model compared to a one-dimensional one is the inclusion of the nanoscale morphology of the active layer of a bulk heterojunction solar cell. The developed model was used to study recombination dynamics at the donor-acceptor interface. In some cases, it was possible to determine effective parameters, which reproduce the results of the two-dimensional model in the one-dimensional case. A summary of the theory of charge transport in semiconductors was presented and discussed in the context of organic materials. Additionally, the normalization and discretization procedures required to find a numerical solution to the charge transport problem were outlined. The charge transport problem was solved by implementing an iterative scheme called successive over-relaxation. The obtained solution is given as position-dependent electric potential, free charge carrier concentrations and current densities in the active layer. An interfacial layer, separating the pure phases, was introduced in order to describe charge dynamics occurring at the interface between the donor and acceptor. For simplicity, an effective generation of free charge carriers in the interfacial layer was implemented. The pure phases simply act as transport layers for the photogenerated charges. Langevin recombination was assumed in the two-dimensional model and an analysis of the apparent recombination rate in the one-dimensional case is presented. The recombination rate in a two-dimensional model is seen to effectively look like reduced Langevin recombination at open circuit. Replicating the J-U curves obtained in the two-dimensional model is, however, not possible by introducing a constant reduction factor in the Langevin recombination rate. The impact of an acceptor domain in the pure donor phase was investigated.
Two cases were considered, one where the acceptor domain is isolated and another where it is connected to the bulk of the acceptor. A comparison to the case where no isolated domains exist was done in order to quantify the observed reduction in the photocurrent. The results show that all charges generated at the isolated domain are lost to recombination, but the domain does not have a major impact on charge transport. Trap-assisted recombination at interfacial trap states was investigated, as well as the surface dipole caused by the trapped charges. A theoretical expression for the ideality factor n_id as a function of generation was derived and shown to agree with simulation data. When the theoretical expression was fitted to simulation data, no interface dipole was observed.
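The successive over-relaxation scheme mentioned above can be illustrated on the simplest relevant problem, a 2D Laplace equation for the electric potential between two contacts. This is a minimal sketch, not the thesis's solver: the grid size, boundary values and relaxation factor are arbitrary.

```python
import numpy as np

# Minimal successive over-relaxation (SOR) sketch for a 2D Laplace problem,
# standing in for the potential equation of a drift-diffusion model; the
# grid, boundary conditions and omega are illustrative only.
n = 30
phi = np.zeros((n, n))
phi[0, :] = 1.0        # fixed potential on one contact
phi[-1, :] = 0.0       # grounded opposite contact (sides held at 0 as well)
omega = 1.8            # over-relaxation factor, 1 < omega < 2

for _ in range(500):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # Gauss-Seidel average of the four neighbours, then over-relax
            gs = 0.25 * (phi[i+1, j] + phi[i-1, j] + phi[i, j+1] + phi[i, j-1])
            phi[i, j] += omega * (gs - phi[i, j])
```

With omega near its optimal value the iteration converges far faster than plain Gauss-Seidel; in the full drift-diffusion problem the same sweep is coupled to the carrier continuity equations.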
Abstract:
We examined three different algorithms used in diffusion Monte Carlo (DMC) to study their precision and accuracy in predicting properties of isolated atoms, namely the H atom ground state, the Be atom ground state and the H atom first excited state. All three algorithms (basic DMC, minimal stochastic reconfiguration DMC, and pure DMC, each with future-walking) were successfully implemented in ground state energy and simple moment calculations with satisfactory results. Pure diffusion Monte Carlo with the future-walking algorithm proved to be the simplest approach with the least variance. Polarizabilities for the Be atom ground state and the H atom first excited state are not satisfactorily estimated with the infinitesimal differentiation approach. Likewise, an approach using the finite field approximation with an unperturbed wavefunction for the latter system also fails. However, accurate estimates of the α-polarizabilities are obtained by using wavefunctions derived from time-independent perturbation theory. This suggests that the flaw in our approach to polarizability estimation for these difficult cases rests with our having assumed the trial function is unaffected by infinitesimal perturbations in the Hamiltonian.
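A minimal version of the basic branching DMC algorithm can be shown on a 1D harmonic oscillator (exact ground-state energy 0.5 in natural units) instead of an atom; the diffusion and birth/death steps are the same in spirit. All parameters here are illustrative, and no importance sampling or future-walking is included.

```python
import numpy as np

rng = np.random.default_rng(2)

# Basic branching DMC on V(x) = x^2/2, whose exact ground-state energy
# is 0.5; a toy stand-in for the atomic calculations of the abstract.
def V(x):
    return 0.5 * x**2

n_target, dt, n_steps = 2000, 0.01, 3000
walkers = rng.normal(size=n_target)
E_ref = float(V(walkers).mean())
trace = []

for step in range(n_steps):
    # 1) diffusion: free Gaussian displacement of every walker
    walkers = walkers + rng.normal(scale=np.sqrt(dt), size=walkers.size)
    # 2) branching: stochastic birth/death with weight exp(-dt (V - E_ref))
    w = np.exp(-dt * (V(walkers) - E_ref))
    walkers = np.repeat(walkers, (w + rng.random(w.size)).astype(int))
    # 3) population control: nudge E_ref to keep the population near target
    E_ref = float(V(walkers).mean()) + 0.1 * np.log(n_target / walkers.size)
    if step >= 1000:                 # discard equilibration steps
        trace.append(E_ref)

E0_est = float(np.mean(trace))       # should land close to 0.5
```

The stochastic-reconfiguration and pure-DMC variants studied in the thesis differ in how step 2 and the statistics gathering are organized, not in the underlying diffusion picture.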
Abstract:
A Czerny Mount double monochromator is used to measure Raman scattered radiation near 90° from a crystalline silicon sample. Incident light is provided by a mixed gas Kr-Ar laser, operating at 5145 Å. The double monochromator is calibrated to true wavelength by comparison of Kr and Ar emission line positions (Å) to the grating position display (Å) [1]. The relationship was found to be linear and can be described by

y = 1.219873x - 1209.32,   (1)

where y is the true wavelength (Å) and x is the grating position display (Å). The Raman emission spectra are collected via C++ encoded software, which displays a mV signal from a photodetector and allows stepping control of the gratings via an A/D interface [2]. The software collection parameters, detector temperature and optics are optimised to yield the best quality spectra. The inclusion of a cryostat allows for temperature-dependent capability ranging from 4 K to approximately 350 K. Silicon Stokes temperature-dependent Raman spectra generally show agreement with literature results [3] in their frequency hardening, FWHM reduction and intensity increase as temperature is reduced. Tests reveal that a re-alignment of the double monochromator is necessary before spectral resolution can approach the literature standard. This has not yet been carried out due to time constraints.
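The calibration line above is easy to apply and invert in code; the coefficients are taken from the abstract, while the helper names are ours.

```python
# Linear calibration from grating-position display x (in Angstrom) to true
# wavelength y (in Angstrom); coefficients reproduced from the abstract.
def true_wavelength(x_display):
    return 1.219873 * x_display - 1209.32

def display_position(y_true):
    """Inverse of the calibration: display position for a known line."""
    return (y_true + 1209.32) / 1.219873

# e.g. locate the display position at which the 5145 Angstrom laser line
# should appear
x_laser = display_position(5145.0)
```

The inverse mapping is what one uses in practice to drive the gratings to a known emission line during calibration checks.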
Abstract:
The diffusion of Co-60 in the body centered cubic beta phase of a Zr50Ti50 alloy has been studied at 900°, 1200°, and 1440°C. The results confirm earlier unpublished data obtained by Kidson [17]. The temperature dependence of the diffusion coefficient is unusual and suggests that at least two and possibly three mechanisms may be operative. Annealing of the specimen in the high B.C.C. region prior to the deposition of the tracer results in a large reduction in the diffusion coefficient. The possible significance of this effect is discussed in terms of rapid transport along the dislocation network.
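A single diffusion mechanism would follow an Arrhenius law, D = D0 exp(-Q/RT), which is a straight line in ln D versus 1/T; curvature in such a plot over the three measured temperatures is one way multiple mechanisms reveal themselves. The sketch below fits a hypothetical single-mechanism data set, with invented D0 and Q, at the abstract's three temperatures.

```python
import numpy as np

# Arrhenius sketch for tracer diffusion: D = D0 * exp(-Q / (R T)).  The
# temperatures follow the abstract; D0 and Q are invented, not the measured
# values for Co-60 in the beta Zr-Ti alloy.
R = 8.314                                   # gas constant, J/(mol K)
D0, Q = 1.0e-6, 150e3                       # m^2/s, J/mol (hypothetical)
T = np.array([900.0, 1200.0, 1440.0]) + 273.15   # K

D = D0 * np.exp(-Q / (R * T))

# A single mechanism gives a straight line in ln D vs 1/T, so the fit
# recovers the activation energy from the slope
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Q_est = float(-slope * R)
D0_est = float(np.exp(intercept))
```

Real data dominated by two or three mechanisms would deviate systematically from this straight line, which is the behaviour the abstract calls unusual.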
Abstract:
A system comprised of a Martin-Puplett type polarizing interferometer and a Helium-3 cryostat was developed to study the transmission of materials in the very-far-infrared region of the spectrum. This region is of significant interest due to the low-energy excitations which many materials exhibit. The experimental transmission spectrum contains information concerning the optical properties of the material. The set-up of this system is described in detail along with the adaptations and improvements which have been made to the system to ensure the best results. Transmission experiments carried out with this new set-up for two different varieties of materials, superconducting thin films of lead and biological proteins, are discussed. Several thin films of lead deposited on fused silica quartz substrates were studied. From the ratio of the transmission in the superconducting state to that in the normal state the superconducting energy gap was determined to be approximately 25 cm⁻¹, which corresponds to 2Δ/kBTc ≈ 5, in agreement with literature data. Furthermore, in agreement with theoretical predictions, the maximum in the transmission ratio was observed to increase as the film thickness was increased. These results provide verification of the system's ability to accurately measure the optical properties of thin low-Tc superconducting films. Transmission measurements were carried out on double deionized water, and a variety of different concentrations by weight of the globular protein, Bovine Serum Albumin, in the sol, gel and crystalline forms. The results of the water study agree well with literature values and thus further illustrate the reproducibility of the system. The results of the protein experiments, although preliminary, indicate that as the concentration increases the samples become more transparent.
Some weak structure in the frequency dependent absorption coefficient, which is more prominent in crystalline samples, may be due to low frequency vibrations of the protein molecules.
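The quoted gap ratio can be checked with a two-line conversion: with the Boltzmann constant expressed in spectroscopic units (about 0.695 cm⁻¹ per K) and the standard critical temperature of lead (7.2 K, a literature value not stated in the abstract), a 25 cm⁻¹ gap gives 2Δ/kBTc close to 5.

```python
# Converting the reported ~25 cm^-1 gap to the ratio 2*Delta/(k_B*T_c) for
# lead; T_c = 7.2 K is the standard literature value for Pb.
k_B = 0.695035          # Boltzmann constant in cm^-1 per K
gap_2delta = 25.0       # 2*Delta from the transmission ratio, cm^-1
T_c = 7.2               # critical temperature of lead, K

ratio = gap_2delta / (k_B * T_c)   # ~5, vs the BCS weak-coupling value 3.53
```

The result sitting well above the weak-coupling BCS value of 3.53 reflects the fact that lead is a strong-coupling superconductor.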
Abstract:
Several recent studies have described the period of impaired alertness and performance known as sleep inertia that occurs upon awakening from a full night of sleep. They report that sleep inertia dissipates in a saturating exponential manner, the exact time course being task dependent, but generally persisting for one to two hours. A number of factors, including sleep architecture, sleep depth and circadian variables are also thought to affect the duration and intensity. The present study sought to replicate their findings for subjective alertness and reaction time and also to examine electrophysiological changes through the use of event-related potentials (ERPs). Secondly, several sleep parameters were examined for potential effects on the initial intensity of sleep inertia. Ten participants spent two consecutive nights and subsequent mornings in the sleep lab. Sleep architecture was recorded for a full nocturnal episode of sleep based on participants' habitual sleep patterns. Subjective alertness and performance were measured for a 90-minute period after awakening. Alertness was measured every five minutes using the Stanford Sleepiness Scale (SSS) and a visual analogue scale (VAS) of sleepiness. An auditory tone also served as the target stimulus for an oddball task designed to examine the N100 and P300 components of the ERP waveform. The five-minute oddball task was presented at 15-minute intervals over the initial 90 minutes after awakening to obtain six measures of average RT and amplitude and latency for N100 and P300. Standard polysomnographic recordings were used to obtain digital EEG and describe the night of sleep. Power spectral analyses (FFT) were used to calculate slow wave activity (SWA) as a measure of sleep depth for the whole night, 90 minutes before awakening and five minutes before awakening.
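The saturating-exponential dissipation of sleep inertia cited above can be written as a recovery toward baseline; the functional form is the one the literature describes, but the parameter values below (baseline, initial deficit, time constant) are invented for illustration.

```python
import numpy as np

# Saturating-exponential dissipation of sleep inertia: alertness recovers
# toward baseline A0 after awakening.  Parameter values are illustrative,
# not fitted data from the study.
def alertness(t_min, A0=100.0, deficit=40.0, tau=20.0):
    """Alertness at t_min minutes after awakening (arbitrary units)."""
    return A0 - deficit * np.exp(-t_min / tau)

t = np.arange(0, 91, 5)     # the study's 90-minute window, 5-minute bins
a = alertness(t)
```

With a time constant of tens of minutes, most of the deficit is gone within the 90-minute testing window, matching the one-to-two-hour persistence the studies report.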
Abstract:
Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, ⟨ψ₀|Â|ψ₀⟩, of multiplicative operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨ψ₀|Â|ψ⟩ are contaminated by ψ, a user-specified approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators, while at the same time eliminating the bias. To compare the efficiency of FW-PDMC and the modified DMC algorithms we calculated simple properties of the H atom, such as various functions of coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, in each case the results are within statistical error of the exact values.
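For context, the contamination discussed above is the familiar mixed-estimator problem: when Â does not commute with the Hamiltonian, the DMC average is a mixed matrix element rather than a pure one. A standard first-order workaround (not the modification developed in this work) is the extrapolated estimator:

```latex
\langle A \rangle_{\mathrm{mixed}}
  = \frac{\langle \psi_0 | \hat{A} | \psi_T \rangle}{\langle \psi_0 | \psi_T \rangle},
\qquad
\langle \psi_0 | \hat{A} | \psi_0 \rangle
  \approx 2\,\langle A \rangle_{\mathrm{mixed}} - \langle A \rangle_{\mathrm{VMC}}
  + \mathcal{O}\!\bigl((\psi_0 - \psi_T)^2\bigr)
```

where ψ_T is the trial wave function. The residual quadratic error in ψ₀ − ψ_T is exactly what pure estimators such as forward-walking, and the modified algorithm of this work, aim to remove entirely.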