17 results for Density measurement (specific gravity)
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
In this thesis three experiments with atomic hydrogen (H) at low temperatures T < 1 K are presented. Experiments were carried out with two- (2D) and three-dimensional (3D) H gas, and with H atoms trapped in a solid H2 matrix. The main focus of this work is on interatomic interactions, which have certain specific features in the three systems considered. A common feature is the very high density of atomic hydrogen; the systems are close to quantum degeneracy. Short-range interactions in collisions between atoms are important in gaseous H. The system of H in H2 differs dramatically because the atoms remain fixed in the H2 lattice and the properties are governed by long-range interactions with the solid matrix and with other H atoms. The main tools in our studies were the methods of magnetic resonance, with electron spin resonance (ESR) at 128 GHz being used as the principal detection method. For the first time in experiments with H in high magnetic fields and at low temperatures, we combined ESR and NMR to perform electron-nuclear double resonance (ENDOR) as well as coherent two-photon spectroscopy. This allowed us to distinguish between different types of interactions in the magnetic resonance spectra. Experiments with 2D H gas utilized the thermal compression method in a homogeneous magnetic field, developed in our laboratory. In this work, methods were developed for direct studies of 3D H at high density and for creating high-density samples of H in H2. We measured magnetic resonance line shifts due to collisions in the 2D and 3D H gases. First we observed that the cold collision shift in a 2D H gas composed of atoms in a single hyperfine state is much smaller than predicted by mean-field theory. This motivated us to carry out similar experiments with 3D H. In 3D H the cold collision shift was found to be an order of magnitude smaller for atoms in a single hyperfine state than for a mixture of atoms in two different hyperfine states. The collisional shifts were found to be in fair agreement with the theory that takes into account the symmetrization of the wave functions of the colliding atoms. The origin of the small shift in 2D H composed of single-hyperfine-state atoms is not yet understood. The measurement of the shift in 3D H provides an experimental determination of the difference between the scattering lengths of ground-state atoms. The experiment with H atoms captured in an H2 matrix at temperatures below 1 K originated from our work with H gas. We found that samples of H in H2 were formed during recombination of gas-phase H, enabling sample preparation at temperatures below 0.5 K. Alternatively, we created the samples by electron-impact dissociation of H2 molecules in situ in the solid. With the latter method we reached the highest densities of H atoms reported so far, 3.5(5)×10^19 cm^-3. The H atoms were found to be stable for weeks at temperatures below 0.5 K. The observation of dipolar interaction effects provides a verification of the density measurement. Our results point to two different sites for H atoms in the H2 lattice. The steady-state nuclear polarizations of the atoms were found to be non-thermal. The possibility of further increasing the impurity H density is considered. At higher densities and lower temperatures it might be possible to observe phenomena related to quantum degeneracy in the solid.
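Where the abstract compares the measured shifts with mean-field theory, the comparison is, in its standard textbook form, against an expression roughly like the one below; the notation and prefactor conventions are generic assumptions, not quoted from the thesis:

```latex
% Sketch of the mean-field cold collision (clock) shift for a transition
% between hyperfine states 1 and 2 in a nondegenerate gas of density n.
% The prefactor includes the factor of two from exchange symmetry in a
% thermal gas; a_11 and a_12 are generic s-wave scattering lengths.
\Delta\nu \;\approx\; \frac{4\hbar\, n}{m_{\mathrm{H}}} \left( a_{12} - a_{11} \right)
```

A measured shift combined with an independently known density n thus fixes the scattering-length difference, which is the sense in which the 3D measurement provides an experimental determination of that difference.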
Abstract:
Summary: Determination of mare colostrum quality by measuring specific gravity
Abstract:
Interest in fine particles has grown strongly because new information has been obtained about their harmful health effects. Numerous scientific studies have been published on the subject, and according to the latest findings, elevated fine particle concentrations have an effect even on cardiovascular diseases. Because of the serious health effects and tightening legislation, there is strong demand for new real-time particle measurement instruments. This work is part of a larger Autotest project carried out at Dekati Oy, in which particle measurement instruments are developed for the automotive industry. The goal of this work was to develop a charger for a particle measurement instrument with considerably lower fine particle losses than in the charger of the electrical low pressure impactor ELPI. Legislation is currently based solely on measuring mass concentration, and no instrument exists that measures mass concentration in real time. In this work a new particle density determination method was tested, which makes it possible to measure mass concentration in real time. The designed charger is easy to use and its charging efficiency meets expectations, but the fine particle losses are still higher than targeted, although lower than in the ELPI. This is partly due to space charge losses and partly due to the effect of the corona electric field on the sample channel. The density measurement gave promising results, but further development is required to filter out disturbances and to improve tolerance to loading.
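The reason a particle density determination method enables real-time mass measurement can be sketched as follows; this is the generic principle of converting a number size distribution into a mass concentration, with invented channel data, not Dekati's actual algorithm:

```python
# Hedged sketch: mass concentration from per-channel number concentrations
# and an effective particle density, assuming spherical particles.
import math

def mass_concentration(diameters_nm, number_conc_cm3, eff_density_g_cm3):
    """Mass concentration (ug/m^3) from a number size distribution."""
    total = 0.0
    for d_nm, n_cm3 in zip(diameters_nm, number_conc_cm3):
        d_cm = d_nm * 1e-7                          # nm -> cm
        v_cm3 = math.pi / 6.0 * d_cm ** 3           # particle volume
        total += n_cm3 * v_cm3 * eff_density_g_cm3  # g per cm^3 of air
    return total * 1e12                             # g/cm^3 -> ug/m^3

# Illustrative channel data only:
print(mass_concentration([50, 100, 300], [1e5, 5e4, 1e3], 1.2))
```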
Abstract:
The aim of this work is to invert the ionospheric electron density profile from riometer (Relative Ionospheric Opacity Meter) measurements. The new riometer instrument KAIRA (Kilpisjärvi Atmospheric Imaging Receiver Array) is used to measure the cosmic HF radio noise absorption that takes place in the D-region ionosphere between 50 and 90 km. In order to invert the electron density profile, synthetic data are used to estimate the unknown parameter Neq with a spline height method, which works by parameterizing the electron density profile at different altitudes. Moreover, a smoothing prior method is also used to sample from the posterior distribution by truncating the prior covariance matrix. The smoothing prior approach makes it easier to find the posterior using the MCMC (Markov Chain Monte Carlo) method.
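A minimal sketch of this kind of inversion is shown below: random-walk Metropolis sampling of a profile under a Gaussian smoothing prior. The forward model, grid, and hyperparameters are illustrative placeholders, not the thesis's actual KAIRA model:

```python
import numpy as np

alts = np.linspace(50, 90, 41)            # altitude grid, km
n = alts.size

# Second-difference operator: the smoothing prior penalizes rough profiles.
D = np.diff(np.eye(n), 2, axis=0)

def log_prior(x, alpha=10.0):
    return -0.5 * alpha * np.sum((D @ x) ** 2)

def forward(x):
    # Placeholder forward model: absorption proportional to density.
    return 0.05 * x

def log_likelihood(x, y_obs, sigma=0.1):
    r = y_obs - forward(x)
    return -0.5 * np.sum((r / sigma) ** 2)

rng = np.random.default_rng(0)
x = np.ones(n)                            # initial profile (normalized units)
y_obs = forward(np.exp(-(alts - 75) ** 2 / 50)) + 0.05 * rng.normal(size=n)

samples = []
lp = log_prior(x) + log_likelihood(x, y_obs)
for _ in range(20000):
    prop = x + 0.02 * rng.normal(size=n)  # random-walk proposal
    lp_prop = log_prior(prop) + log_likelihood(prop, y_obs)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        x, lp = prop, lp_prop
    samples.append(x.copy())

posterior_mean = np.mean(samples[10000:], axis=0)  # discard burn-in
```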
Abstract:
Streptavidin, a tetrameric protein secreted by Streptomyces avidinii, binds tightly to the small growth factor biotin. One of the numerous applications of this high-affinity system comprises the streptavidin-coated surfaces of bioanalytical assays, which serve as universal binders for straightforward immobilization of any biotinylated molecule. Proteins can be immobilized with a lower risk of denaturation using streptavidin-biotin technology than with direct passive adsorption. The purpose of this study was to characterize the properties and effects of streptavidin-coated binding surfaces on the performance of solid-phase immunoassays and to investigate the contributions of surface modifications. Various characterization tools and methods established in the study enabled convenient monitoring and binding capacity determination of streptavidin-coated surfaces. The schematic modeling of the monolayer surface and the quantification of adsorbed streptavidin disclosed the possibilities and the limits of passive adsorption. The determined yield of 250 ng/cm² represented approximately 65% coverage compared with a modeled complete monolayer, which is consistent with theoretical surface models. Modifications such as polymerization and chemical activation of streptavidin resulted in a close to 10-fold increase in the biotin-binding densities of the surface compared with the regular streptavidin coating. In addition, the stability of the surface against leaching was improved by chemical modification. The increased binding densities and capacities enabled wider high-end dynamic ranges in the solid-phase immunoassays, especially when the antigen was bound using fragments of the capture antibodies instead of intact antibodies. The binding capacity of the streptavidin surface was not, by definition, predictive of the low-end performance of the immunoassays or of the assay sensitivity. Other features, such as non-specific binding, variation, and leaching, turned out to be more relevant. Immunoassays that use a direct surface readout of time-resolved fluorescence from a washed surface depend on the density of the labeled antibodies in a defined area on the surface. The binding surface was condensed into a spot by coating streptavidin from liquid droplets into special microtiter wells holding a small circular indentation at the bottom. The condensed binding area enabled a denser packing of the labeled antibodies on the surface. This resulted in a 5- to 6-fold increase in the signal-to-background ratios and an equivalent improvement in the detection limits of the solid-phase immunoassays. This work proved that the properties of streptavidin-coated surfaces can be modified and that the defined properties of streptavidin-based immunocapture surfaces contribute to the performance of heterogeneous immunoassays.
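A back-of-the-envelope check of the coverage figure can be done as below; the molecular weight and footprint of streptavidin are common literature values assumed here, not numbers taken from the thesis:

```python
# Hedged sketch: surface density at 250 ng/cm^2 versus a modeled monolayer.
AVOGADRO = 6.022e23
MW = 52.8e3            # g/mol, tetrameric streptavidin (assumed)
FOOTPRINT_NM2 = 25.0   # ~5 nm x 5 nm per molecule (assumed)

adsorbed = 250e-9 / MW * AVOGADRO                # molecules per cm^2
full_monolayer = 1.0 / (FOOTPRINT_NM2 * 1e-14)   # 1 nm^2 = 1e-14 cm^2

print(f"adsorbed:  {adsorbed:.2e} /cm^2")        # ~2.9e12
print(f"monolayer: {full_monolayer:.2e} /cm^2")  # ~4.0e12
print(f"coverage:  {adsorbed / full_monolayer:.0%}")  # ~70%, same range as
                                                      # the quoted ~65%
```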
Abstract:
The main objective of this research is to create a performance measurement system for the accounting services of a large paper industry company. In this thesis, different performance measurement systems are compared; two systems are then selected, presented, and compared in more detail. The Performance Prism is the framework used in this research. The Performance Prism uses success maps to determine objectives. The model's target areas are divided into five groups: stakeholder satisfaction, stakeholder contribution, strategy, processes, and capabilities. The creation of the measurement system began by identifying stakeholders and defining their objectives. A success map is created based on these objectives, and the measures are derived from the objectives and the success map. The data needed for the measures is then defined. The final measurement system contains just over 40 measures in total, each with a defined target level and owner. The number of measures is fairly large, but as this is the first version of the measurement system, the amount is acceptable.
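The bookkeeping the abstract describes, each measure carrying a target level and an owner under one of the five Performance Prism areas, could be sketched as follows; names and values are invented examples:

```python
from dataclasses import dataclass

AREAS = ("stakeholder satisfaction", "stakeholder contribution",
         "strategy", "processes", "capabilities")

@dataclass
class Measure:
    name: str
    area: str          # one of the five Performance Prism target areas
    target: float      # defined target level
    unit: str
    owner: str         # person accountable for the measure

measures = [
    Measure("invoice processing time", "processes", 2.0, "days", "team lead"),
    Measure("internal customer satisfaction", "stakeholder satisfaction",
            4.0, "score 1-5", "service manager"),
]

for m in measures:
    assert m.area in AREAS   # every measure maps to a target area
```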
Abstract:
The objective of the study is to find out how sales performance should be measured and how sales should be steered in a multinational company. The beginning of the study concentrates on the literature regarding sales, performance measurement, sales performance measurement, and sales steering. The empirical part of the study is a case study, in which the information was acquired from interviews with the key personnel of the company. The results of the interviews and the revealed problems were analyzed, and possible solutions were compared. When measuring sales performance, it is important to discover the specific needs and objectives for such a system. Specific needs should be highlighted in the design of the system. The system should be versatile, and its structure should be in line with the organizational structure. The role of the sales performance measurement system was seen to be important in supporting sales steering. However, personal management, and especially conversations, were seen as truly critical issues in steering. Sales performance measurement could be based on the following perspectives: financial, market, customer, people, and future. That way, the sales department could react to environmental changes more rapidly.
Abstract:
The thermomechanical pulping (TMP) process is a highly energy-intensive process whose specific energy consumption (SEC) is typically 2-3.5 MWh/bdt. About 93% of the energy is consumed in refining, of which roughly two thirds is spent in main-line refining and one third in reject refining. The goal of this work was therefore set on reducing energy consumption specifically in main-line and reject refining. In main-line refining, the effects of refiner segment type, power split, and production rate on the SEC were chosen as research topics. Reject refining was targeted by attempting to reduce the reject flow by means of pressure screening. Since the refining capacity of the TMP3 plant had been increased by 25%, the aim was to raise the capacity of main-line screening by the same amount. A second aim was to reduce the reject ratio in main and reject screening and thus lower the energy consumption in reject refining. These aims were approached by installing TamScreen rotors in the main-line screens and Metso ProFoil rotors in the reject screens and by optimizing the fiber fractions through screen basket and process parameter changes. With a feeding segment type, the SEC could be reduced by 100 kWh/bdt, but the higher refining intensity also led to lower strength properties, higher air permeability, and higher opacity. The power split also affected the SEC: when the first-stage refiner was loaded more heavily, an SEC reduction of up to 70 kWh/bdt was achieved. Problems in measuring the production rate degraded the results of the production rate trials to the extent that no conclusion can be drawn from them as to whether the SEC is production-rate dependent or not. The capacity of main-line screening could be raised by only 18% with the TS rotor, falling slightly short of the target. In reject screening, the reject flow could be reduced considerably with the Metso ProFoil rotor and with screen basket and process parameter changes. The SEC reduction achieved through the screening plant development, estimated from the decrease in the mass reject ratio and the SEC used in reject refining, was about 130 kWh/bdt. In summary, the target of a 300 kWh/bdt SEC reduction can be achieved with the means used in this work, provided that their full potential is exploited in production.
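The estimation logic behind the ~130 kWh/bdt figure can be sketched as the drop in mass reject ratio times the SEC of reject refining, as the abstract states; the numbers below are invented placeholders:

```python
# Hedged sketch of the screening-development SEC saving estimate.
def screening_sec_saving(reject_ratio_before, reject_ratio_after,
                         reject_refining_sec_kwh_bdt):
    """SEC saving (kWh/bdt) from a reduced mass reject ratio."""
    return ((reject_ratio_before - reject_ratio_after)
            * reject_refining_sec_kwh_bdt)

# Placeholder values only, chosen so the arithmetic is easy to follow:
print(screening_sec_saving(0.45, 0.32, 1000))  # -> 130.0 kWh/bdt
```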
Abstract:
Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which might not cause any symptoms whatsoever. Nevertheless, some cases of cancer are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether the intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to assay interference caused by interfering factors present in some blood samples. The optimized assays were shown to work well and were used to study the marker concentrations in the clinical sample panels. The marker levels were measured from preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I, and hK2 were combined in a statistical model which was previously shown to be able to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to models based on established clinical predictors in reference to the biopsy outcome. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with the prostate volume, and the model, when compared to the clinical models, predicted prostate cancer at biopsy equally well. Hence, the measurement of kallikreins in a blood sample could replace the volume measurement, which is time-consuming, needs instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. The analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
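The structure of such a kallikrein-panel model can be sketched as below: fPSA-N is computed as fPSA minus fPSA-I, and the four markers feed a logistic model. The coefficients are invented placeholders, not the published model's parameters:

```python
import math

def fpsa_n(fpsa, fpsa_i):
    """Nicked free PSA, calculated as free PSA minus intact free PSA."""
    return fpsa - fpsa_i

def biopsy_risk(tpsa, fpsa, fpsa_i, hk2,
                coef=(-3.0, 0.25, -0.4, 0.3, 0.5)):  # placeholder values
    b0, b1, b2, b3, b4 = coef
    z = b0 + b1 * tpsa + b2 * fpsa + b3 * fpsa_i + b4 * hk2
    return 1.0 / (1.0 + math.exp(-z))   # logistic link -> probability

print(fpsa_n(fpsa=1.2, fpsa_i=0.8))            # ng/mL, illustrative values
print(f"{biopsy_risk(6.5, 1.2, 0.8, 0.15):.2f}")
```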
Abstract:
Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. The Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided. A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used.
An additional calculation with a v2-f turbulence model showed a significant improvement in the heat transfer results, which is most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries. It is suggested that the viewpoints of numerical modelling be included in the planning of experiments to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physics aspects of the experiments should also be considered and documented in reasonable detail.
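The porosity-mapping idea described above can be sketched as Monte Carlo point sampling of each CFD cell against the DEM sphere positions. The thesis implemented this as a MATLAB code; the Python version below, with a simple box cell and two pebbles, is an illustration, not that code:

```python
import numpy as np

def cell_porosity(cell_min, cell_max, centers, radius, n_pts=20000, seed=0):
    """Porosity (void fraction) of an axis-aligned box cell in a packing."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(cell_min, cell_max, size=(n_pts, 3))
    solid = np.zeros(n_pts, dtype=bool)
    for c in centers:                       # mark points inside any pebble
        solid |= np.sum((pts - c) ** 2, axis=1) < radius ** 2
    return 1.0 - solid.mean()

centers = np.array([[0.0, 0.0, 0.0], [0.06, 0.0, 0.0]])  # two pebbles (m)
print(cell_porosity(np.array([-0.05] * 3), np.array([0.05] * 3),
                    centers, radius=0.03))
```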
Abstract:
The aim of the work presented in this study was to demonstrate the wide applicability of a single-label quenching resonance energy transfer (QRET) assay based on time-resolved lanthanide luminescence. QRET technology is a proximity-dependent method utilizing a weak and unspecific interaction between a soluble quencher molecule and a lanthanide chelate. The interaction between the quencher and the chelate is lost when the ligand binds to its target molecule. The properties of QRET technology are especially useful in high-throughput screening (HTS) assays. At the beginning of this study, only an end-point type QRET technology was available. To enable efficient study of enzymatic reactions, the QRET technology was further developed to enable the measurement of reaction kinetics. This was first done using a protein-deoxyribonucleic acid (DNA) interaction as a tool to monitor reaction kinetics. Later, QRET was used to study nucleotide exchange reaction kinetics and mutation-induced effects on small GTPase activity. Small GTPases act as molecular switches shifting between an active GTP-bound and an inactive GDP-bound conformation. The possibility of monitoring reaction kinetics using the QRET technology was evaluated using two homogeneous assays: a direct growth factor detection assay and a nucleotide exchange monitoring assay with small GTPases. To complete the list, a heterogeneous assay for monitoring GTP hydrolysis by small GTPases was developed. All these small GTPase assays could be performed using nanomolar protein concentrations without GTPase pretreatment. The results from these studies demonstrate that QRET technology can be used to monitor reaction kinetics, which further enables the same method to be used for screening.
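The kinetics readout such an assay enables is typically reduced to fitting a single-exponential model to the time-resolved luminescence to extract an observed exchange rate; the data below are synthetic, and the model choice is a common convention, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, amplitude, k_obs, offset):
    """Signal decay as exchange proceeds: A*exp(-k*t) + C."""
    return amplitude * np.exp(-k_obs * t) + offset

t = np.linspace(0, 600, 61)                        # seconds
rng = np.random.default_rng(1)
signal = single_exp(t, 1000.0, 0.01, 200.0) + rng.normal(0, 15, t.size)

popt, _ = curve_fit(single_exp, t, signal, p0=(800.0, 0.005, 100.0))
print(f"k_obs = {popt[1]:.4f} 1/s")   # ~0.01 for this synthetic data set
```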
Abstract:
Electrical machine drives are the most electrical-energy-consuming systems worldwide. The largest proportion of drives is found in industrial applications. There are, however, many other applications that are also based on the use of electrical machines, because they have a relatively high efficiency, a low noise level, and do not produce local pollution. Electrical machines can be classified into several categories. One of the most commonly used electrical machine types (especially in industry) is the induction motor, also known as the asynchronous machine. Induction motors have a mature production process and a robust rotor construction. However, in a world pursuing higher energy efficiency with reasonable investments, not every application benefits from using this type of motor drive. The main drawback of induction motors is the fact that they need a slip-caused and thus loss-generating current in the rotor, and additional stator current for magnetic field production along with the torque-producing current. This can reduce the electric motor drive efficiency, especially in low-speed, low-power applications. Often, when high torque density is required together with low losses, it is desirable to apply permanent magnet technology, because in this case there is no need to use current to produce the basic excitation of the machine. This promotes the effectiveness of copper use in the stator, and further, there is no rotor current in these machines. Again, if permanent magnets with a high remanent flux density are used, the air gap flux density can be higher than in conventional induction motors. These advantages have raised the popularity of permanent magnet synchronous machines (PMSMs) in some challenging applications, such as hybrid electric vehicles (HEV), wind turbines, and home appliances. Usually, a correctly designed PMSM has a higher efficiency and consequently lower losses than its induction machine counterparts. Therefore, the use of these electrical machines reduces the energy consumption of the whole system to some extent, which can provide good motivation to apply permanent magnet technology to electrical machines. However, the cost of high-performance rare earth permanent magnets in these machines may not be affordable in many industrial applications, because the tight competition between manufacturers dictates the rules of low-cost and highly robust solutions, where asynchronous machines seem to be more feasible at the moment. The two main electromagnetic components of an electrical machine are the stator and the rotor. In the case of a conventional radial flux PMSM, the stator contains the magnetic circuit lamination and the stator winding, and the rotor consists of rotor steel (laminated or solid) and permanent magnets. The lamination itself does not significantly influence the total cost of the machine, even though it can considerably increase the construction complexity, as it requires a special assembly arrangement. However, thin metal sheet processing methods are very effective and economically feasible. Therefore, the cost of the machine is mainly affected by the stator winding and the permanent magnets. The work proposed in this doctoral dissertation comprises a description and analysis of two approaches to PMSM cost reduction: one on the rotor side and the other on the stator side. The first approach, on the rotor side, includes the use of low-cost and abundant ferrite magnets together with a tooth-coil winding topology and an outer rotor construction.
The second approach, on the stator side, exploits the use of a modular stator structure instead of a monolithic one. PMSMs with the proposed structures were thoroughly analysed with finite element method (FEM) based tools. It was found that by implementing the described principles, some favourable characteristics of the machine (mainly concerning the machine size) will inevitably be compromised. However, the main target of the proposed approaches is not to compete with conventional rare earth PMSMs, but to reduce the price at which they can be implemented in industrial applications, keeping their dimensions at the same level as or lower than those of a typical electrical machine used in industry at the moment. The measurement results of the prototypes show that the main performance characteristics of these machines are at an acceptable level. It is shown that with certain specific actions it is possible to achieve a desirable efficiency level of the machine with the proposed cost reduction methods.
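The remanence argument above rests on the standard first-order magnetic-circuit estimate of air-gap flux density from magnet remanence, magnet length, and air-gap length; the sketch below assumes a simple series circuit with no leakage, and the values are illustrative, not from the dissertation:

```python
# Hedged sketch: B_gap ~= B_r / (1 + mu_rec * g / l_m) for a simple
# series magnetic circuit (recoil permeability mu_rec close to 1).
def airgap_flux_density(b_r, magnet_len_mm, gap_mm, mu_rec=1.05):
    """First-order air-gap flux density (T) of a surface-magnet circuit."""
    return b_r / (1.0 + mu_rec * gap_mm / magnet_len_mm)

print(f"ferrite: {airgap_flux_density(0.4, 6.0, 1.0):.2f} T")   # ~0.34 T
print(f"NdFeB:   {airgap_flux_density(1.2, 6.0, 1.0):.2f} T")   # ~1.02 T
```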
Abstract:
Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses, and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; because some of its parameters are unknown, it is calibrated against a USB power meter. The results show that the screen size is the main parameter influencing the power consumption. The power consumption for an identical scenario varies between the wearable devices, meaning that other components, parameters, or processes might impact the power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) impact the energy consumption in different ways. This paper gives recommendations for reducing the energy consumption in healthcare wearable computing applications using the energy model.
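A scenario-based energy model of the kind the paper develops can be sketched as total energy equal to the sum over components of average power times active time in the scenario; the power figures and duty cycles below are invented examples, not measurements from the paper:

```python
SCENARIO_PROFILE = {            # component -> (avg power in mW, duty cycle)
    "display": (400.0, 1.0),
    "cpu":     (250.0, 0.6),
    "wifi":    (300.0, 0.2),
}

def scenario_energy_mj(profile, duration_s):
    """Energy in millijoules for one scenario of the given duration."""
    return sum(p_mw * duty * duration_s for p_mw, duty in profile.values())

# A 60 s "Transfer by Wi-Fi"-style scenario with placeholder numbers:
print(f"{scenario_energy_mj(SCENARIO_PROFILE, 60.0):.0f} mJ")
```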
Abstract:
This thesis examines how content marketing is used in B2B customer acquisition and how a content marketing performance measurement system is built and utilized in this context. Literature related to performance measurement, branding, and buyer behavior is examined in the theoretical part in order to identify the elements influencing content marketing performance measurement design and usage. A qualitative case study approach is chosen in order to gain a deep understanding of the phenomenon studied. The case company is a Finnish software vendor which operates in B2B markets and has practiced content marketing for approximately two years. In-depth interviews were conducted with three employees from the marketing department. According to the findings, the infrastructure of the content marketing performance measurement system is based on the target market's decision-making processes, the company's own customer acquisition process, a marketing automation tool, and analytics solutions. The main roles of the content marketing performance measurement system are measuring performance, strategy management, and learning and improvement. Content marketing objectives in the context of customer acquisition are enhancing brand awareness, influencing brand attitude, and lead generation. Both non-financial and financial outcomes are assessed by single phase-specific metrics, phase-specific overall KPIs, and ratings related to the lead's involvement.
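Phase-specific funnel metrics like those the abstract mentions for customer acquisition can be sketched as conversion rates between consecutive phases plus an overall KPI; the phase names and counts below are invented examples:

```python
FUNNEL = [("awareness", 5000), ("consideration", 800),
          ("lead", 120), ("customer", 15)]

def phase_conversions(funnel):
    """Conversion rate between consecutive funnel phases."""
    return {f"{a}->{b}": nb / na
            for (a, na), (b, nb) in zip(funnel, funnel[1:])}

for phase, rate in phase_conversions(FUNNEL).items():
    print(f"{phase}: {rate:.1%}")
print(f"overall: {FUNNEL[-1][1] / FUNNEL[0][1]:.2%}")   # end-to-end KPI
```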