980 results for ionization probabilities
Abstract:
A scheme for the detection and isolation of actuator faults in linear systems is proposed. A bank of unknown input observers is constructed to generate residual signals which deviate in characteristic ways in the presence of actuator faults. The residual signals are unaffected by the unknown inputs acting on the system, which decreases the false alarm and miss probabilities. The results are illustrated through a simulation study of actuator fault detection and isolation in a pilot plant double-effect evaporator.
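Below is a minimal sketch of observer-based residual generation for actuator fault detection, in the spirit of the scheme described above but using a single Luenberger-type observer rather than the full bank of unknown input observers; the system matrices, observer gain, and fault signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical discrete-time plant: x_{k+1} = A x_k + B u_k, y_k = C x_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.5],
              [1.0]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([[0.4],
                   [0.2]])  # observer gain, chosen so that A - L C is stable

def simulate_residual(steps=100, fault_at=50, fault_bias=0.5):
    """Residual r_k = y_k - C x_hat_k. An actuator bias appearing at step
    `fault_at` makes the residual deviate from zero in a sustained way."""
    x = np.zeros((2, 1))       # true plant state
    x_hat = np.zeros((2, 1))   # observer state
    residuals = []
    for k in range(steps):
        u = np.array([[1.0]])                                   # commanded input
        u_actual = u + (fault_bias if k >= fault_at else 0.0)   # actuator fault
        y = C @ x
        r = y - C @ x_hat                                       # residual signal
        residuals.append(float(r[0, 0]))
        x = A @ x + B @ u_actual                 # plant driven by the faulty actuator
        x_hat = A @ x_hat + B @ u + L_gain @ r   # observer uses the commanded input
    return residuals

res = simulate_residual()
print(f"mean |residual| before fault: {np.mean(np.abs(res[:50])):.4f}")
print(f"mean |residual| after fault:  {np.mean(np.abs(res[50:])):.4f}")
```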
Abstract:
A method used for screening drugs of abuse must be sensitive, selective, simple, fast, and reproducible. The aim of this work was to develop a simple but sensitive sample pretreatment method for the qualitative screening of benzodiazepines and amphetamine derivatives in urine using a micropillar electrospray ionization chip (μPESI), which would offer an alternative to the immunological screening methods whose sensitivity and selectivity are insufficient. At the same time, the aim was to examine the performance of the micropillar electrospray chip in the analysis of biological samples. The pretreatment was optimized separately for benzodiazepines and amphetamine derivatives. The pretreatment methods used were liquid-liquid extraction, solid-phase extraction with an Oasis HLB cartridge and a ZipTip® pipette tip, and dilution and filtration without extraction. Based on the measurements, the work focused on optimizing the ZipTip® extraction. In the optimization, the analytes were spiked into blank urine at their predetermined cut-off concentrations, 200 ng/ml for benzodiazepines and 300 ng/ml for amphetamine derivatives. For the benzodiazepines, each step of the extraction was optimized; as a result, the sample pH was adjusted to 5, the phase was conditioned with acetonitrile, equilibrated and washed with a mixture of water (pH 5) and acetonitrile (10 % v/v), and eluted with a mixture of acetonitrile, formic acid and water (95:1:4 v/v/v). For the amphetamine derivatives, the pH values of the sample and the solvents were optimized; as a result, the sample pH was adjusted to 10, the phase was conditioned with a mixture of water and ammonium bicarbonate (pH 10, 1:1 v/v), equilibrated and washed with a mixture of acetonitrile and water (1:5 v/v), and eluted with methanol. The optimized extractions were tested with authentic urine samples provided by Yhtyneet Medix Laboratoriot, and the results were compared with those of quantitative GC/MS analysis. The benzodiazepine samples were hydrolyzed before extraction to improve sensitivity. The authentic samples were analyzed with a Q-TOF instrument in Viikki. In addition, the hydrolyzed benzodiazepine samples were measured with the TOF instrument of Yhtyneet Medix Laboratoriot. Based on the results, the developed method requires further optimization to work reliably. A particular problem was the scatter of results observed between replicates. The manual sample introduction should be made more reproducible. In the analysis of the authentic benzodiazepine samples the problem was false negative results, and in the analysis of the amphetamine derivatives false positive results. The false negatives are explained by the lack of sensitivity of the method, and the false positives by contamination of the instrument, the chips, or the solvents.
Abstract:
We report numerical and analytic results for the spatial survival probability for fluctuating one-dimensional interfaces with Edwards-Wilkinson or Kardar-Parisi-Zhang dynamics in the steady state. Our numerical results are obtained from analysis of steady-state profiles generated by integrating a spatially discretized form of the Edwards-Wilkinson equation to long times. We show that the survival probability exhibits scaling behavior in its dependence on the system size and the "sampling interval" used in the measurement for both "steady-state" and "finite" initial conditions. Analytic results for the scaling functions are obtained from a path-integral treatment of a formulation of the problem in terms of one-dimensional Brownian motion. A "deterministic approximation" is used to obtain closed-form expressions for survival probabilities from the formally exact analytic treatment. The resulting approximate analytic results provide a fairly good description of the numerical data.
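As an illustration of the numerical approach mentioned above, here is a minimal sketch of Euler-Maruyama integration of a spatially discretized Edwards-Wilkinson equation with periodic boundaries, followed by a crude sliding-window estimate of a spatial survival probability; the lattice size, time step, and survival criterion are illustrative assumptions rather than the settings of the study.

```python
import numpy as np

def evolve_ew(L=256, nu=1.0, D=1.0, dt=0.01, steps=50_000, seed=0):
    """Integrate dh/dt = nu * d2h/dx2 + noise on a ring of L sites
    (lattice spacing 1), so the noise term has standard deviation
    sqrt(2 * D * dt) per site and per step."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L)
    for _ in range(steps):
        lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h
        h += dt * nu * lap + np.sqrt(2.0 * D * dt) * rng.standard_normal(L)
    return h - h.mean()  # steady-state profile measured from its mean

def spatial_survival(h, level=0.0, max_x=64):
    """P(x): fraction of starting points from which the profile stays
    above `level` over a window of length x along the ring."""
    L = len(h)
    above = h > level
    return np.array([
        np.mean([above[np.arange(i, i + x) % L].all() for i in range(L)])
        for x in range(1, max_x + 1)
    ])

profile = evolve_ew()
print(spatial_survival(profile)[:10])
```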
Abstract:
Aerosol particles deteriorate air quality, atmospheric visibility and our health. They affect the Earth's climate by absorbing and scattering sunlight, forming clouds, and also via several feedback mechanisms. The net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases. However, particles are one of the poorly known pieces in the climate puzzle. Some of the airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. The molecular-level understanding of new particle formation is still inadequate, mainly due to the lack of suitable measurement techniques to detect the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nano-particles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested in both laboratory and field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and that besides measuring concentrations the counters can be used to obtain size information. In addition to atmospheric research, the particle counters could have various applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites, which represent two different kinds of environments: the boreal forest and the Atlantic coastline, both of which are known to be hot spots for new particle formation. The contribution of ions to the total concentrations in this size range was estimated, and it could be concluded that the fraction of ions was usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the amount of cosmic rays entering the atmosphere, the relative contribution of neutral and charged nucleation mechanisms extends beyond academic interest, and links the research directly to the current climate debate.
Abstract:
Microchips for use in biomolecular analysis show great promise for medical diagnostics and biomedical basic research. Among the potential advantages are more sensitive and faster analyses as well as reduced cost and sample consumption. Due to scaling laws, the surface-area-to-volume ratios of microfluidic chips are very high. Because of this, tailoring the surface properties and surface functionalization are very important technical issues for microchip development. This thesis studies two different types of functional surfaces, surfaces for open surface capillary microfluidics and surfaces for surface assisted laser desorption ionization mass spectrometry, and combinations thereof. Open surface capillary microfluidics can be used to transport and control liquid samples on easily accessible open surfaces simply based on surface forces, without any connections to pumps or electrical power sources. Capillary filling of open partially wetting grooves is shown to be possible with certain geometries, aspect ratios and contact angles, and a theoretical model is developed to identify complete channel filling domains as well as partial filling domains. On the other hand, partially wetting surfaces with triangular microstructures can be used for achieving directional wetting, where water droplets do not spread isotropically but instead spread only into a predetermined sector. Furthermore, by patterning completely wetting and superhydrophobic areas on the same surface, complex droplet shapes are achieved, as the water stretches to make contact with the wetting surface but does not enter the superhydrophobic domains. Surfaces for surface assisted laser desorption ionization mass spectrometry are developed by applying various active thin film coatings on multiple substrates, in order to separate surface and bulk effects. Clear differences are observed between both surface and substrate layers. The best-performing surfaces consisted of an amorphous silicon coating and an inorganic-organic hybrid substrate with nanopillars and nanopores. These surfaces are used for matrix-free ionization of drugs, peptides and proteins, and for some analytes the detection limits were in the high-attomole range. Microfluidics and laser desorption ionization surfaces are combined on functionalized drying platforms, where the surface is used to control the shape of the deposited analyte droplet, and the shape of the initial analyte droplet affects the dried droplet solute deposition pattern. The deposited droplets can then be directly detected by mass spectrometry. Using this approach, analyte concentration, splitting and separation are demonstrated.
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k greater than or equal to 3. The best known polynomial time approximation algorithms require n^δ (for a positive constant δ depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type of algorithms. Our algorithms work as long as p(n) ≥ n^(-1+ε), where ε is a constant greater than 1/4.
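A minimal sketch of the planted random model described above: the vertex set is first partitioned into k colour classes and each allowed (inter-class) edge is then included independently with probability p(n); the class assignment and parameter values are illustrative assumptions.

```python
import random

def planted_k_colourable_graph(n, k, p, seed=0):
    """Assign each vertex to one of k colour classes, then include each
    edge between differently coloured vertices independently with
    probability p. The planted assignment is a proper k-colouring."""
    rng = random.Random(seed)
    colour = [rng.randrange(k) for _ in range(n)]
    edges = [(u, v)
             for u in range(n) for v in range(u + 1, n)
             if colour[u] != colour[v] and rng.random() < p]
    return colour, edges

# Example: p(n) = n^(-1 + eps) with eps = 0.3, i.e. eps greater than 1/4.
n, k, eps = 1000, 3, 0.3
colouring, edges = planted_k_colourable_graph(n, k, p=n ** (-1 + eps))
print(f"{len(edges)} edges; the planted colouring is proper by construction")
```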
Abstract:
The analysis of lipid compositions from biological samples has become increasingly important. Lipids have a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. In addition, the mechanisms regulating cell membrane lipid compositions are poorly understood, partly because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, the differences in the instrument response of different lipids and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of the thesis is the development of methods for mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentration affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids. The fourth article applies the methods developed by simultaneously measuring the progress curve of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
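As an illustration of the kind of computation the second article addresses, here is a minimal sketch that builds a theoretical isotopic peak distribution by repeatedly convolving elemental isotope patterns; it is not the algorithm of the article, and the isotope masses and abundances (standard values for C, H, N, O, P) and the example formula are included only for the demonstration.

```python
# (mass, natural abundance) for the stable isotopes of a few elements.
ISOTOPES = {
    "C": [(12.0000, 0.9893), (13.00335, 0.0107)],
    "H": [(1.00783, 0.99988), (2.01410, 0.00012)],
    "N": [(14.00307, 0.99636), (15.00011, 0.00364)],
    "O": [(15.99491, 0.99757), (16.99913, 0.00038), (17.99916, 0.00205)],
    "P": [(30.97376, 1.0)],
}

def convolve(dist_a, dist_b, prune=1e-10):
    """Combine two (mass, probability) lists; peaks with the same rounded
    mass are merged and negligible peaks are pruned."""
    out = {}
    for ma, pa in dist_a:
        for mb, pb in dist_b:
            key = round(ma + mb, 4)
            out[key] = out.get(key, 0.0) + pa * pb
    return [(m, p) for m, p in sorted(out.items()) if p > prune]

def isotope_distribution(formula):
    """formula: dict mapping element symbol -> atom count."""
    dist = [(0.0, 1.0)]
    for element, count in formula.items():
        for _ in range(count):
            dist = convolve(dist, ISOTOPES[element])
    return dist

# Illustrative lipid-like composition (hypothetical molecular formula).
for mass, prob in isotope_distribution({"C": 40, "H": 80, "N": 1, "O": 8, "P": 1})[:5]:
    print(f"{mass:10.4f}  {prob:.4f}")
```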
Abstract:
Escherichia coli RNA polymerase is a multi-subunit enzyme containing α2ββ'ωσ, which transcribes a DNA template into an intermediate RNA product in a sequence-specific manner. Although most of the subunits are essential for its function, the smallest subunit omega (average molecular mass ~10,105 Da) can be deleted without affecting bacterial growth. Creating a mutant of the omega subunit can aid in improving the understanding of its role. Sequencing of the rpoZ gene that codes for the omega subunit from a mutant variant suggested a substitution mutation at position 60 of the protein: asparagine (N) → aspartic acid (D). This mutation was verified at the protein level by following a typical mass spectrometry (MS) based bottom-up proteomic approach. Characterization of in-gel trypsin digested samples by reverse phase liquid chromatography (LC) coupled to electrospray ionization (ESI) tandem mass spectrometry (MS/MS) enabled this mutation to be ascertained. Electron transfer dissociation (ETD) of the triply charged [M + 3H]3+ tryptic peptides (residues 53-67), EIEEGLINNQILDVR from the wild type and EIEEGLIDNQILDVR from the mutant, made it possible to unambiguously determine the site of mutation at residue 60.
Abstract:
The Thesis presents a state-space model for a basketball league and a Kalman filter algorithm for the estimation of the state of the league. In the state-space model, each of the basketball teams is associated with a rating that represents its strength compared to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least squares optimal estimates of the team strengths and predictions of the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home court advantage observed in the scores. The Thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The Thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and the regular seasons played from 2005 through 2010. Additionally, the 2009-2010 season is discussed in full detail, including the playoffs.
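A minimal sketch of the recursive estimation described above: the team ratings form the state vector, they follow a Gaussian random walk between games, and each game score margin is a linear observation of the rating difference plus a home court advantage; all variance values and the home advantage constant are illustrative assumptions, not the fitted parameters of the Thesis.

```python
import numpy as np

class LeagueKalman:
    """Kalman filter over a vector of team ratings with scalar observations
    (one observation per game: the home-team score margin)."""

    def __init__(self, n_teams, sigma_rating=3.0, sigma_process=0.5,
                 sigma_obs=11.0, home_advantage=3.0):
        self.r = np.zeros(n_teams)                      # rating means
        self.P = np.eye(n_teams) * sigma_rating ** 2    # rating covariance
        self.Q = np.eye(n_teams) * sigma_process ** 2   # random-walk increments
        self.R = sigma_obs ** 2                         # score noise variance
        self.h = home_advantage

    def predicted_margin(self, home, away):
        return self.r[home] - self.r[away] + self.h

    def update(self, home, away, margin):
        """Time update (random walk) followed by the measurement update."""
        self.P = self.P + self.Q
        H = np.zeros(len(self.r))
        H[home], H[away] = 1.0, -1.0       # margin = r_home - r_away + h + noise
        innovation = margin - self.predicted_margin(home, away)
        S = H @ self.P @ H + self.R        # innovation variance (scalar)
        K = self.P @ H / S                 # Kalman gain (vector)
        self.r = self.r + K * innovation
        self.P = self.P - np.outer(K, H @ self.P)

# Made-up games: (home team, away team, home margin).
kf = LeagueKalman(n_teams=4)
for home, away, margin in [(0, 1, 7), (2, 3, -4), (1, 2, 12), (3, 0, -9)]:
    kf.update(home, away, margin)
print("ratings:", np.round(kf.r, 2))
print("predicted margin, 0 (home) vs 3:", round(kf.predicted_margin(0, 3), 2))
```

Under the Gaussian assumptions, a winning probability for a future game then follows from the normal CDF of the predicted margin divided by its standard deviation.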
Abstract:
Vegetation maps and bioclimatic zone classifications communicate the vegetation of an area and are used to explain how the environment regulates the occurrence of plants on large scales. Many practices and methods for dividing the world's vegetation into smaller entities have been presented. Climatic parameters, floristic characteristics, or edaphic features have been relied upon as decisive factors, and plant species have been used as indicators for vegetation types or zones. Systems depicting vegetation patterns that mainly reflect climatic variation are termed 'bioclimatic' vegetation maps. Based on these it has been judged logical to deduce that plants moved between corresponding bioclimatic areas should thrive in the target location, whereas plants moved from a different zone should languish. This principle is routinely applied in forestry and horticulture, but actual tests of the validity of bioclimatic maps in this sense seem scanty. In this study I tested the Finnish bioclimatic vegetation zone system (BZS). Relying on the Kumpula collection of the Helsinki University Botanic Garden, which according to the BZS is situated at the northern limit of the hemiboreal zone, I aimed to test how the plants' survival depends on their provenance. My expectation was that plants from the hemiboreal or southern boreal zones should do best in Kumpula, whereas plants from more southern and more northern zones should show progressively lower survival probabilities. I estimated the probability of survival using collection database information on plant accessions of known wild origin grown in Kumpula since the mid 1990s, and logistic regression models. The total number of accessions included in the analyses was 494. Because of problems with some accessions, I chose to separately analyse a subset of the complete data, which included 379 accessions. I also analysed different growth forms separately in order to identify differences in probability of survival due to different life strategies. In most analyses, accessions of temperate and hemiarctic origin showed lower survival probability than those originating from any of the boreal subzones, which among themselves exhibited rather uniformly high probabilities. Exceptionally mild and wet winters during the study period may have killed off hemiarctic plants. Some winters may have been too harsh for temperate accessions. Trees behaved differently: they showed an almost steadily increasing survival probability from temperate to northern boreal origins. Various factors that could not be controlled for may have affected the results, some of which were difficult to interpret. This was the case in particular with herbs, for which the reliability of the analysis suffered because of difficulties in managing their curatorial data. In all, the results gave some support to the BZS, and especially to its hierarchical zonation. However, I question the validity of the formulation of the hypothesis I tested, since it may not be entirely justified by the BZS, which was designed for intercontinental comparison of vegetation zones but not specifically for transcontinental provenance trials. I conclude that botanic gardens should pay due attention to information management and curatorial practices to ensure the widest possible applicability of their plant collections.
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique for hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been performed to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
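A minimal sketch of how basic-event probabilities propagate through AND/OR gates to a top-event probability under the independence assumption, together with a crude sensitivity check; the tree structure and the numerical values are illustrative, not the chlorine-release fault tree of the paper.

```python
def and_gate(probs):
    """All independent inputs must occur: product of probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one independent input occurs: 1 - product of complements."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

# Illustrative basic-event probabilities (assumed values, per year).
basic = {"valve_leak": 1e-3, "gasket_failure": 5e-4, "operator_error": 2e-3}
mitigation_failure = 1e-2

def top_event(events):
    """Release occurs if (any leak path exists) AND (mitigation fails)."""
    return and_gate([or_gate(list(events.values())), mitigation_failure])

release = top_event(basic)
print(f"P(chlorine release) = {release:.2e}")

# Contribution of each basic event: remove it and compare the top event.
for name in basic:
    reduced = top_event({k: v for k, v in basic.items() if k != name})
    print(f"{name}: contributes {100 * (1 - reduced / release):.1f}% of the top event")
```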
Abstract:
Questions of the small size of non-industrial private forest (NIPF) holdings in Finland are considered and factors affecting their partitioning are analyzed. This work arises out of Finnish forest policy statements in which the small average size of holdings has been seen to have a negative influence on the economics of forestry. A survey of the literature indicates that the size of holdings is an important factor determining the costs of logging and silvicultural operations, while its influence on the timber supply is slight. The empirical data are based on a sample of 314 holdings collected by interviewing forest owners in the years 1980-86. In 1990-91 the same holdings were resurveyed by means of a postal inquiry and partly by interviewing forest owners. The principal objective in compiling the data is to assist in quantifying ownership factors that influence partitioning among different kinds of NIPF holdings. Thus the mechanisms of partitioning were described and a maximum likelihood logistic regression model was constructed using seven independent holding and ownership variables. One out of four holdings had undergone partitioning in conjunction with a change in ownership: one fifth among family-owned holdings and nearly a half among jointly owned holdings. The results of the logistic regression model indicate, for instance, that the odds of partitioning are about three times greater for jointly owned holdings than for family-owned ones. The probabilities of partitioning were also estimated, and the impact of the independent dichotomous variables on the probability of partitioning ranged between 0.02 and 0.10. The low value of the Hosmer-Lemeshow test statistic indicates a good fit of the model, and the rate of correct classification was estimated to be 88 per cent with a cutoff point of 0.5. The average size of holdings undergoing ownership changes decreased from 29.9 ha to 28.7 ha over the approximate interval 1983-90. In addition, the transition probability matrix showed that the shifts towards smaller size categories mostly occurred within the small size categories of less than 20 ha. The results of the study can be used in assessing the effects of the small size of holdings on forestry, and in cases where the purpose is to influence partitioning through forest or rural policy.
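A minimal sketch of how a fitted logistic-regression coefficient maps to the odds ratio and the partitioning probabilities discussed above; the intercept and the coefficient (set to ln 3, so that jointly owned holdings have about three times the odds) are illustrative, not the estimates of the study.

```python
import math

def partition_probability(intercept, coef_joint, jointly_owned):
    """Logistic model: log-odds = intercept + coef_joint * jointly_owned."""
    log_odds = intercept + coef_joint * jointly_owned
    return 1.0 / (1.0 + math.exp(-log_odds))

intercept = math.log(0.2 / 0.8)   # assumed baseline: family-owned probability 0.2
coef_joint = math.log(3.0)        # odds ratio of 3 for jointly owned holdings

p_family = partition_probability(intercept, coef_joint, jointly_owned=0)
p_joint = partition_probability(intercept, coef_joint, jointly_owned=1)
odds_ratio = (p_joint / (1 - p_joint)) / (p_family / (1 - p_family))
print(f"P(partition | family-owned)  = {p_family:.2f}")
print(f"P(partition | jointly owned) = {p_joint:.2f}")
print(f"odds ratio = {odds_ratio:.1f}")  # 3.0 by construction
```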
Abstract:
The effect of dipolar cross correlation in 1H–1H nuclear Overhauser effect experiments is investigated by detailed calculation in an ABX spin system. It is found that in weakly coupled spin systems, the cross-correlation effects are limited to single-quantum transition probabilities and decrease in magnitude as ωτc increases. Strong coupling, however, mixes the states and the cross correlations affect the zero-quantum and double-quantum transition probabilities as well. The effect of cross correlation in steady-state and transient NOE experiments is studied as a function of strong coupling and ωτc. The results for steady-state NOE experiments are calculated analytically and those for transient NOE experiments are calculated numerically. The NOE values for the A and B spins have been calculated by assuming nonselective perturbation of all the transitions of the X spin. A significant effect of cross correlation is found in transient NOE experiments of weakly as well as strongly coupled spins when the multiplets are resolved. Cross correlation manifests itself largely as a multiplet effect in the transient NOE of weakly coupled spins for nonselective perturbation of all X transitions. This effect disappears for a measuring pulse of 90° or when the multiplets are not resolved. For steady-state experiments, the effect of cross correlation is analytically zero for weakly coupled spins and small for strongly coupled spins.
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample would be a miniature of the population, and it is still prevailing. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and in the beginning of the 20th century carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential idea is to draw samples repeatedly from the same population, under the assumption that the population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, in addition to sufficient accuracy in estimation.
Abstract:
Photoelectron spectroscopy (PES) provides valuable information on the ionization energies of atoms and molecules. The ionization energy (IE) is given by the relation hν = IE + T, where hν is the energy of the radiation and T is the kinetic energy of the electron. The IEs are directly related to the orbital energies (Koopmans' theorem). By employing UV radiation (HeI, 21.2 eV, or HeII, 40.8 eV), extensive data on the ionization of valence electrons in organic molecules have been obtained in recent years. These studies of UV photoelectron spectroscopy, originated by Turner, have provided a direct probe into the energy levels of organic molecules. Molecular orbital calculations of various degrees of sophistication are generally employed to make assignments of the PES bands. Analysis of the vibrational structure of PES bands has not only provided structural information on the molecular ions, but has also been of value in band assignments. Dewar and co-workers [1, 2] presented summaries of available PES data on organic molecules in 1969 and 1970. Turner et al. [3] published a handbook of HeI spectra of organic molecules in 1970. Since then, a few books [4-7] discussing the principles and applications of UV photoelectron spectroscopy have appeared, of which special mention should be made of the recent article by Heilbronner and Maier [7].
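A short worked example of the relation above; the measured kinetic energy value is hypothetical.

```latex
% He I radiation: h\nu = 21.2\,\mathrm{eV}. For a band observed at a
% (hypothetical) electron kinetic energy T = 12.0\,\mathrm{eV}:
\mathrm{IE} = h\nu - T = 21.2\,\mathrm{eV} - 12.0\,\mathrm{eV} = 9.2\,\mathrm{eV}
```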