949 results for work function measurements
Abstract:
The problem is solved using the Love function and Flügge shell theory. Numerical work has been done with a computer for various values of shell geometry parameters and elastic constants.
Abstract:
Volatile organic compounds (VOCs) are emitted into the atmosphere from natural and anthropogenic sources, vegetation being the dominant source on a global scale. Some of these reactive compounds are deemed major contributors or inhibitors to aerosol particle formation and growth, thus making VOC measurements essential for current climate change research. This thesis discusses ecosystem-scale VOC fluxes measured above a boreal Scots pine dominated forest in southern Finland. The flux measurements were performed using the micrometeorological disjunct eddy covariance (DEC) method combined with proton transfer reaction mass spectrometry (PTR-MS), an online technique for measuring VOC concentrations. The measurement, calibration, and calculation procedures developed in this work proved to be well suited to long-term VOC concentration and flux measurements with PTR-MS. A new averaging approach based on running averaged covariance functions improved the determination of the lag time between wind and concentration measurements, a common challenge in DEC when measuring fluxes near the detection limit. The ecosystem-scale emissions of methanol, acetaldehyde, and acetone were substantial. These three oxygenated VOCs made up about half of the total emissions, with the rest consisting of monoterpenes. Contrary to the traditional assumption that monoterpene emissions from Scots pine originate mainly as evaporation from specialized storage pools, the DEC measurements indicated a significant contribution from de novo biosynthesis to the ecosystem-scale monoterpene emissions. This thesis offers practical guidelines for long-term DEC measurements with PTR-MS. In particular, the new averaging approach to lag time determination seems useful in the automation of DEC flux calculations. Seasonal variation in monoterpene biosynthesis and the detailed structure of a revised hybrid algorithm describing both de novo and pool emissions should be determined in further studies to improve biological realism in the modelling of monoterpene emissions from Scots pine forests. The increasing number of DEC measurements of oxygenated VOCs will probably enable better estimates of the role of these compounds in plant physiology and tropospheric chemistry. Keywords: disjunct eddy covariance, lag time determination, long-term flux measurements, proton transfer reaction mass spectrometry, Scots pine forests, volatile organic compounds
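The lag-time idea lends itself to a short illustration. The sketch below is not the thesis's implementation; the window splitting, lag range, and function name are assumptions. It averages cross-covariance functions between vertical wind speed and concentration over consecutive periods and picks the lag at the peak of the averaged function, which is more robust near the detection limit than locating the peak in each period separately.

```python
import numpy as np

def lag_by_running_averaged_covariance(w, c, max_lag, n_periods=10):
    """Estimate the wind->concentration lag (in samples) as the peak of a
    cross-covariance function averaged over consecutive averaging periods.
    Illustrative sketch only; window and lag-range choices are assumptions."""
    w = np.asarray(w, float)
    c = np.asarray(c, float)
    segments = np.array_split(np.arange(len(w)), n_periods)
    lags = np.arange(-max_lag, max_lag + 1)
    cov_sum = np.zeros(len(lags))
    for idx in segments:
        dw = w[idx] - w[idx].mean()          # fluctuations within this period
        dc = c[idx] - c[idx].mean()
        n = len(dw)
        for j, L in enumerate(lags):
            if L >= 0:
                a, b = dw[:n - L], dc[L:]
            else:
                a, b = dw[-L:], dc[:n + L]
            cov_sum[j] += np.mean(a * b)
    cov_avg = cov_sum / n_periods            # running-averaged covariance function
    return int(lags[np.argmax(np.abs(cov_avg))])
```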
Abstract:
A better understanding of the limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest of clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory by Frenkel, Band, and Bijl. Our results do not indicate a need for a size dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, surface tension size dependence and planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
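For reference, these are the standard liquid-drop expressions of Classical Nucleation Theory against which such simulations are compared (textbook forms, not results from the thesis), with σ the planar surface tension, v_l the molecular volume in the liquid, and S the saturation ratio:

```latex
% Work of forming an n-molecule liquid cluster from vapour at saturation ratio S:
W(n) = -n\,k_B T \ln S + \sigma A(n), \qquad
A(n) = (36\pi)^{1/3} v_l^{2/3}\, n^{2/3}
% Maximising W(n) over n gives the barrier height and the critical cluster size:
\Delta W^{*} = \frac{16\pi \sigma^{3} v_l^{2}}{3\,(k_B T \ln S)^{2}}, \qquad
n^{*} = \frac{32\pi \sigma^{3} v_l^{2}}{3\,(k_B T \ln S)^{3}}
```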
Abstract:
Electronic transport in the high-temperature paramagnetic regime of the colossal magnetoresistive oxides La(1-x)A(x)MnO(3), A = Ca, Sr, Ba, x ≈ 0.1-0.3, has been investigated using resistivity measurements. The main motivation for this work is to re-examine the actual magnitude of the activation energy for transport in a number of manganites and to study its variation as a function of hole doping (x), average A-site cation radius (⟨r_A⟩), cationic disorder (σ²) and strain (ε_zz). We show that, contrary to current practice, the description of a single activation energy in this phase is not entirely accurate. Our results clearly reveal a strong dependence of the activation energy on the hole doping as well as disorder. Comparing the results across different substituent species with different ⟨r_A⟩ reveals the importance of σ² as a metric to qualify any analysis based on ⟨r_A⟩.
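For context, these are the activated-transport forms commonly fitted to ρ(T) in the paramagnetic phase of manganites to extract an activation energy E_a (standard expressions, not specific to this paper):

```latex
% Simple thermally activated transport:
\rho(T) = \rho_0\, e^{E_a / k_B T}
% Adiabatic small-polaron hopping, often preferred for paramagnetic manganites:
\rho(T) = \rho_\alpha\, T\, e^{E_a / k_B T}
```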
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings is contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving the fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural network and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in a noise reduction of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities. This shows the potential of soft computing methods for specific signal-processing applications.
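A weighted recursive median filter of the kind described can be sketched as follows. Integer weights act as replication counts inside the median window, and past window positions take already-filtered outputs (the recursive part). This is a minimal illustration with placeholder weights, not the GA-optimized filter of the study.

```python
import numpy as np

def weighted_recursive_median(x, weights_past, w_center, weights_future):
    """Weighted recursive median (WRM) filter sketch.
    Past window samples use the already-filtered outputs (recursion);
    integer weights replicate samples inside the median."""
    x = np.asarray(x, float)
    y = x.copy()                                   # boundaries left unfiltered
    P, F = len(weights_past), len(weights_future)
    for n in range(P, len(x) - F):
        window = []
        for k, w in enumerate(weights_past):       # previous outputs
            window += [y[n - P + k]] * w
        window += [x[n]] * w_center                # current input
        for k, w in enumerate(weights_future):     # future inputs
            window += [x[n + 1 + k]] * w
        y[n] = np.median(window)
    return y

# Example with placeholder integer weights (window of 5 samples):
# y = weighted_recursive_median(x, weights_past=[1, 2], w_center=3,
#                               weights_future=[2, 1])
```

Medians preserve step edges from abrupt faults while rejecting non-Gaussian outliers, which is why this family suits damage indicator signals better than moving-average FIR smoothing.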
Abstract:
We have developed a novel nanoparticle-tracking-based interface microrheology technique to perform in situ studies on confined complex fluids. To demonstrate the power of this technique, we show, for the first time, how in situ glass formation in polymers confined at the air-water interface can be directly probed by monitoring the variation of the mean square displacement of embedded nanoparticles as a function of surface density. We have further quantified the appearance of dynamic heterogeneity, and hence vitrification, in poly(methyl methacrylate) monolayers above a certain surface density through the variation of the non-Gaussian parameter of the probes. [doi:10.1063/1.3471584]
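The two probe statistics mentioned, the mean square displacement and the non-Gaussian parameter, have standard estimators from tracked positions. A minimal sketch follows; the (frames × 2) input format is an assumption, and this is generic tracking analysis rather than the paper's pipeline.

```python
import numpy as np

def mean_square_displacement(positions, max_lag):
    """Time-averaged MSD of one tracked particle.
    positions: (T, 2) array of x, y coordinates per frame (assumed format)."""
    r = np.asarray(positions, float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = r[lag:] - r[:-lag]                      # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(d**2, axis=1))
    return msd

def non_gaussian_parameter(positions, lag):
    """2D non-Gaussian parameter alpha_2 = <dr^4> / (2 <dr^2>^2) - 1;
    zero for Gaussian (homogeneous) dynamics, positive when heterogeneous."""
    r = np.asarray(positions, float)
    d = r[lag:] - r[:-lag]
    dr2 = np.sum(d**2, axis=1)
    return np.mean(dr2**2) / (2 * np.mean(dr2)**2) - 1
```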
Abstract:
Gene expression is one of the most critical factors influencing the phenotype of a cell. As a result of several technological advances, measuring gene expression levels has become one of the most common molecular biological measurements to study the behaviour of cells. The scientific community has produced an enormous and constantly increasing collection of gene expression data from various human cells, both from healthy and pathological conditions. However, while each of these studies is informative and enlightening in its own context and research setup, diverging methods and terminologies make it very challenging to integrate existing gene expression data into a more comprehensive view of human transcriptome function. On the other hand, bioinformatic science advances only through data integration and synthesis. The aim of this study was to develop biological and mathematical methods to overcome these challenges, to construct an integrated database of the human transcriptome, and to demonstrate its usage. Methods developed in this study can be divided into two distinct parts. First, the biological and medical annotation of the existing gene expression measurements needed to be encoded by systematic vocabularies. There was no single existing biomedical ontology or vocabulary suitable for this purpose. Thus, new annotation terminology was developed as a part of this work. The second part was to develop mathematical methods to correct the noise and systematic differences and errors in the data caused by the various array generations. Additionally, there was a need to develop suitable computational methods for sample collection and archiving, unique sample identification, database structures, data retrieval and visualization. Bioinformatic methods were developed to analyze gene expression levels and putative functional associations of human genes by using the integrated gene expression data. A method to interpret individual gene expression profiles across all the healthy and pathological tissues of the reference database was also developed. As a result of this work, 9783 human gene expression samples measured by Affymetrix microarrays were integrated to form a unique human transcriptome resource, GeneSapiens. This makes it possible to analyse the expression levels of 17330 genes across 175 types of healthy and pathological human tissues. Application of this resource to interpret individual gene expression measurements allowed identification of the tissue of origin with 92.0% accuracy among 44 healthy tissue types. Systematic analysis of the transcriptional activity levels of 459 kinase genes was performed across 44 healthy and 55 pathological tissue types, and a genome-wide analysis of kinase gene co-expression networks was done. This analysis revealed biologically and medically interesting data on putative kinase gene functions in health and disease. Finally, we developed a method for alignment of gene expression profiles (AGEP) to analyse individual patient samples and pinpoint gene- and pathway-specific changes in the test sample in relation to the reference transcriptome database. We also showed how large-scale gene expression data resources can be used to quantitatively characterize changes in the transcriptomic program of differentiating stem cells. Taken together, these studies indicate the power of systematic bioinformatic analyses to infer biological and medical insights from existing published datasets as well as to facilitate the interpretation of new molecular profiling data from individual patients.
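As a loose illustration of a tissue-of-origin call, the hypothetical sketch below assigns a new expression profile to the reference tissue whose mean profile it correlates with best. It is a stand-in for the kind of interpretation described above, not the GeneSapiens/AGEP method itself; the data structures are assumptions.

```python
import numpy as np

def classify_tissue(sample, reference_profiles):
    """Hypothetical nearest-centroid tissue-of-origin call.
    sample: 1D expression vector; reference_profiles: dict mapping a tissue
    name to its mean expression vector (genes aligned to the sample)."""
    best_tissue, best_r = None, -np.inf
    for tissue, profile in reference_profiles.items():
        r = np.corrcoef(sample, profile)[0, 1]   # Pearson correlation
        if r > best_r:
            best_tissue, best_r = tissue, r
    return best_tissue, best_r
```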
Abstract:
Using path integrals, we derive an exact expression, valid at all times t, for the distribution P(Q,t) of the heat fluctuations Q of a Brownian particle trapped in a stationary harmonic well. We find that P(Q,t) can be expressed in terms of a modified Bessel function of zeroth order that in the limit t → ∞ exactly recovers the heat distribution function obtained recently by Imparato et al. [Phys. Rev. E 76, 050101(R) (2007)] from the approximate solution to a Fokker-Planck equation. This long-time result is in very good agreement with experimental measurements carried out by the same group on the heat effects produced by single micron-sized polystyrene beads in a stationary optical trap. An earlier exact calculation of the heat distribution function of a trapped particle moving at a constant speed v was carried out by van Zon and Cohen [Phys. Rev. E 69, 056121 (2004)]; however, this calculation does not provide an expression for P(Q,t) itself, but only its Fourier transform (which cannot be analytically inverted), nor can it be used to obtain P(Q,t) for the case v = 0.
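The v = 0 setting can be probed numerically with standard stochastic energetics: for a stationary trap no work is done on the particle, so the heat absorbed over [0, t] is Q = U(x_t) − U(x_0). A minimal Euler-Maruyama sketch (parameter values are arbitrary illustrations, not from the paper) that generates samples of Q for comparison with P(Q,t):

```python
import numpy as np

rng = np.random.default_rng(0)
k, gamma, kBT = 1.0, 1.0, 1.0           # trap stiffness, friction, temperature
dt, n_steps, n_traj = 1e-3, 5000, 20000

# Equilibrium initial condition in the harmonic trap U(x) = k x^2 / 2
x = rng.normal(0.0, np.sqrt(kBT / k), n_traj)
x0 = x.copy()

# Overdamped Langevin dynamics: dx = -(k/gamma) x dt + sqrt(2 kBT / gamma) dW
noise_amp = np.sqrt(2 * kBT * dt / gamma)
for _ in range(n_steps):
    x += -(k / gamma) * x * dt + noise_amp * rng.normal(size=n_traj)

# Heat absorbed over [0, t]; no work is done since the trap is stationary
Q = 0.5 * k * (x**2 - x0**2)
# A histogram of Q can now be compared with the Bessel-function form of P(Q, t).
```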
Abstract:
The accompanying collective research report is the result of a research project carried out in 1986-90 between the Finnish Academy and the former Soviet Academy of Sciences. The project was organized around common field work in Finland and in the former Soviet Union and theoretical analyses of the processes determining tree growth. Based on the theoretical analyses, dynamic stand growth models were constructed and their parameters were determined using the field results. The annual cycle affects tree growth. Our theoretical approach was based on adaptation to local climate conditions from Lapland to South Russia. The initiation of growth was described with a simple model driven by low and high temperature accumulation. Linking the theoretical model with long-term temperature data allowed us to analyze what type of temperature response produced a favorable outcome in different climates. Initiation of growth consumes the carbohydrate reserves in plants. We measured the dynamics of insoluble and soluble sugars in the far northern and Karelian conditions. A clear cyclical pattern was observed, but the differences between locations were surprisingly small. Analysis of field measurements of CO2 exchange showed that irradiance is the dominating factor causing variation in photosynthetic rate in natural conditions during summer. The effect of other factors is so small that they can be omitted without any considerable loss of accuracy. A special experiment carried out in Hyytiälä showed that the needle living space, defined as the ratio between the shoot cylindric volume and needle surface area, correlates with the shoot photosynthesis. The penetration of irradiance into a Scots pine canopy is a complicated phenomenon because of the movement of the sun across the sky and the complicated structure of branches and needles. A moderately simple but balanced forest radiation regime submodel was constructed. It consists of the tree crown and forest structure, the gap probability calculation, and the consideration of spatial and temporal variation of radiation inside the forest. The common field excursions in different geographical regions produced a large body of experimental data on regularities of woody structures. Water transport seems to be a good common factor for analysing these properties of tree structure. There are evident regressions between cross-sectional areas measured at different locations along the water pathway from fine roots to needles. The observed regressions have clear geographical trends. For example, the same cross-sectional area can support a three times higher needle mass in South Russia than in Lapland. Geographical trends can also be seen in shoot and needle structure. Analysis of data published by several Russian authors shows that one ton of needles transpires 42 tons of water a year. This annual amount of transpiration seems to be independent of geographical location, year and site conditions. The theoretical and experimental material produced is utilised in the development of a stand growth model that describes the growth and development of Scots pine stands in Finland and the former Soviet Union. The core of the model is carbon and nutrient balances. This means that carbon obtained in photosynthesis is consumed for growth and maintenance, and nutrients are taken up according to the metabolic needs. The annual photosynthetic production by trees in the stand is determined as a function of irradiance and shading during the active period.
The utilisation of the annual photosynthetic production for the growth of different components of trees is based on structural regularities. Since the fundamental metabolic processes are the same in all locations, the same growth model structure can be applied across the large range of Scots pine. The annual photosynthetic production and the structural regularities determining the allocation of resources have geographical features. The common field measurements enable the application of the model to the analysis of growth and development of stands growing at the five experimental locations. The model enables the analysis of geographical differences in the growth of Scots pine. For example, the annual photosynthetic production of a 100-year-old stand at Voronez is 3.5 times higher than in Lapland. The share consumed for needle growth (30%) and for growth of branches (5%) seems to be the same in all locations. In contrast, the share of fine roots decreases when moving from north to south: it is 20% in Lapland, 15% in Hyytiälä (Central Finland) and Kentjärvi (Karelia), and 15% in Voronez (South Russia). The stem masses (113-115 ton/ha) are rather similar in Hyytiälä, Kentjärvi and Voronez, but rather low (50 ton/ha) in Lapland. In Voronez the trees reach a height of 29 m, compared with 22 m in Hyytiälä and Kentjärvi and only 14 m in Lapland. The present approach enables the utilization of structural and functional knowledge, gained in places of intensive research, in the analysis of growth and development of any stand. This opens new possibilities for growth research and also for applications in forestry practice.
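The carbon-balance bookkeeping at the model's core can be illustrated with a toy allocation routine. The allocation shares come from the text (the Lapland fine-root share is used as the default), while the maintenance-respiration fraction and the function itself are placeholder assumptions, not the report's model.

```python
def annual_growth(photosynthesis, maintenance_fraction=0.5, shares=None):
    """Toy carbon-balance allocation: split annual photosynthetic production
    (e.g. ton C/ha) between tree components after maintenance respiration.
    maintenance_fraction is an assumed placeholder, not a value from the report."""
    if shares is None:
        # Allocation shares quoted in the text (Lapland fine-root share)
        shares = {"needles": 0.30, "branches": 0.05, "fine_roots": 0.20}
    available = photosynthesis * (1 - maintenance_fraction)
    growth = {part: available * s for part, s in shares.items()}
    # Remainder goes to the stem and coarse roots
    growth["stem_and_coarse_roots"] = available * (1 - sum(shares.values()))
    return growth

# Example: annual_growth(10.0) allocates 5.0 ton C/ha of growth among components.
```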
Abstract:
Toeplitz operators are among the most important classes of concrete operators, with applications to several branches of pure and applied mathematics. This doctoral thesis deals with Toeplitz operators on analytic Bergman, Bloch and Fock spaces. Usually, a Toeplitz operator is a composition of multiplication by a function and a suitable projection. The present work generalizes the notion to the case where the function is replaced by a distributional symbol. Fredholm theory for Toeplitz operators with matrix-valued symbols is also considered. The subject of this thesis belongs to the areas of complex analysis, functional analysis and operator theory. This work contains five research articles. Articles one, three and four deal with finding suitable distributional symbol classes in Bergman, Fock and Bloch spaces, respectively. In each case the symbol class turns out to be a certain weighted Sobolev-type space of distributions. The Bergman space setting is the most straightforward. When dealing with Fock spaces, some difficulties arise from the unboundedness of the complex plane and the properties of the Gaussian measure in the definition. In the Bloch-type spaces an additional logarithmic weight must be introduced. Sufficient conditions for boundedness and compactness are derived. Article two contains a part showing that, under additional assumptions, the condition for Bergman spaces is also necessary. The fifth article deals with Fredholm theory for Toeplitz operators having matrix-valued symbols. The essential spectra and index theorems are obtained with the help of Hardy space factorization and the Berezin transform, for instance. Article two also has a part dealing with matrix-valued symbols in a non-reflexive Bergman space, in which case a condition on the oscillation of the symbol (a logarithmic VMO condition) must be added.
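For orientation, this is the standard Bergman-space definition from which the distributional generalization starts (with dA the normalized area measure on the unit disk):

```latex
% Toeplitz operator with symbol f on the Bergman space A^2(\mathbb{D}),
% where P is the Bergman projection from L^2(\mathbb{D}) onto A^2(\mathbb{D}):
T_f g = P(fg), \qquad
(Pg)(z) = \int_{\mathbb{D}} \frac{g(w)}{(1 - z\bar{w})^{2}}\, dA(w)
```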
Abstract:
Interest in the applicability of fluctuation theorems to the thermodynamics of single molecules in external potentials has recently led to calculations of the work and total entropy distributions of Brownian oscillators in static and time-dependent electromagnetic fields. These calculations, which are based on solutions to a Smoluchowski equation, are not easily extended to a consideration of the other thermodynamic quantity of interest in such systems, the heat exchanges of the particle alone, because of the nonlinear dependence of the heat on a particle's stochastic trajectory. In this paper, we show that a path integral approach provides an exact expression for the distribution of the heat fluctuations of a charged Brownian oscillator in a static magnetic field. This approach is an extension of a similar path integral approach applied earlier by our group to the calculation of the heat distribution function of a trapped Brownian particle, which was found, in the limit of long times, to be consistent with experimental data on the thermal interactions of single micron-sized colloids in a viscous solvent.
Abstract:
Bulk glasses of Ge(20)Se(80-x)In(x) (0 ≤ x ≤ 18) have been used for measurements of the heat capacity at constant pressure (C_p) using a differential scanning calorimeter. These measurements reveal the chemical threshold in these glasses as a function of composition. The results are discussed in the light of microscopic phase separation in these glasses.
Abstract:
We propose, for the first time, a reinforcement learning (RL) algorithm with function approximation for traffic signal control. Our algorithm incorporates state-action features and is easily implementable in high-dimensional settings. Prior work, e.g., the work of Abdulhai et al., on the application of RL to traffic signal control requires full-state representations and cannot be implemented, even in moderate-sized road networks, because the computational complexity grows exponentially in the numbers of lanes and junctions. We tackle this curse of dimensionality by using feature-based state representations that broadly characterize the level of congestion as low, medium, or high. One advantage of our algorithm is that, unlike prior work based on RL, it does not require precise information on queue lengths and elapsed times at each lane but instead works with the aforementioned features. The number of features that our algorithm requires is linear in the number of signaled lanes, leading to several orders of magnitude reduction in the computational complexity. We implement our algorithm in various settings and compare its performance with other algorithms in the literature, including the works of Abdulhai et al. and Cools et al., as well as the fixed-timing and longest-queue algorithms. For comparison, we also develop an RL algorithm that uses a full-state representation and incorporates prioritization of traffic, unlike the work of Abdulhai et al. We observe that our algorithm outperforms all the other algorithms on all the road network settings that we consider.
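A minimal sketch of Q-learning with linear function approximation over such congestion features follows. The one-hot coding per lane, the network sizes, and all hyperparameters are illustrative assumptions, not the authors' settings; note that the feature count grows linearly with the number of lanes.

```python
import numpy as np

N_LANES, N_ACTIONS = 8, 4           # e.g. 4 signal phases at one junction (assumed)
N_FEATURES = 3 * N_LANES            # one-hot congestion level per lane

def features(congestion_levels):
    """congestion_levels: one entry per lane, 0 (low), 1 (medium) or 2 (high)."""
    phi = np.zeros(N_FEATURES)
    for lane, level in enumerate(congestion_levels):
        phi[3 * lane + level] = 1.0
    return phi

theta = np.zeros((N_ACTIONS, N_FEATURES))   # one weight vector per action
alpha, discount, eps = 0.05, 0.95, 0.1      # illustrative hyperparameters
rng = np.random.default_rng(0)

def choose_action(phi):
    """Epsilon-greedy action selection over the approximated Q-values."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(theta @ phi))

def update(phi, action, reward, phi_next):
    """One Q-learning step: theta_a += alpha * TD-error * phi."""
    td_error = reward + discount * (theta @ phi_next).max() - theta[action] @ phi
    theta[action] += alpha * td_error * phi
```

Because Q-values are a dot product of weights with the congestion features, the per-step cost is linear in the number of lanes rather than exponential in the joint state space.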
Abstract:
A novel series of vesicle-forming ion-paired amphiphiles, bis(hexadecyldimethylammonium)alkane dipalmitates (1a-1h), containing four chains were synthesized with two isolated headgroups. In each of these amphiphiles, the two headgroup charges are separated by a flexible polymethylene spacer chain -[(CH2)m]- of varying length (m), such that the length and the conformation of the spacer chain determine the intra-"monomer" headgroup separation. Transmission electron microscopy indicated that each of these forms bilayer membranes upon dispersion in aqueous media. The vesicular properties of these aggregates have been examined by differential scanning calorimetry and temperature-dependent fluorescence anisotropy measurements. Interestingly, their T_m values decreased with increasing m. Thus, while the apparent T_m of the lipid with m = 2 (1a) is 74.1 °C, the corresponding value observed for the lipid with m = 12 (1h) is 38.9 °C. The fluorescence anisotropy values (r) for 1b-1g were quite high (r ≈ 0.3) compared to that of 1h (r ≈ 0.23) at 20-30 °C in their gel states. On the other hand, the r value for vesicular 1b beyond melting was higher (0.1) than any of those for 1c-1h (≈ 0.04-0.06). X-ray diffraction of the cast films was performed to understand the nature and the thickness of these membrane organizations. The membrane widths ranged from 30 to 51 Å as the m values varied. The entrapment of a small water-soluble solute, riboflavin, by the individual vesicular aggregates, and its retention under an imposed transmembrane pH gradient, have also been examined. These results show that all the lipid vesicles entrap riboflavin and that, generally, the resistance to OH- permeation decreases with increasing m. Finally, all the above observations were comparatively analyzed, and on the basis of the calculated structures of these lipids, it was possible to conclude that membrane properties can be modulated by varying the spacer chain length of the ion-paired amphiphiles.