Abstract:
Pressure wave refrigerators (PWRs) cool gas through periodic expansion waves. Due to their simple structure and robustness, PWRs may have many potential applications if their efficiency becomes competitive with that of existing alternative devices. In order to improve the efficiency, the characteristics of wave propagation in a PWR are studied by experiment, numerical simulation, and theoretical analysis. Based on the experimental results and numerical simulations, a simplified model is suggested, which includes the assumptions of flux equilibrium and conservation of free energy. This allows the operation parameters and design specifics to be analyzed independently. Furthermore, the optimum operating condition can be deduced. Some considerations for improving PWR efficiency are also given.
Abstract:
Two important issues in electron beam physical vapor deposition (EBPVD) are addressed. The first is the validity condition of the classical cosine law widely used in the engineering context, which requires a breakdown criterion for the free molecular assumption on which the cosine law is established. Using the analytical solution of free molecular effusion flow, the number of collisions (N_c) experienced by a particle moving from the evaporative source to the substrate is estimated and shown to be inversely proportional to the local Knudsen number at the evaporation surface. N_c = 1 is adopted as the breakdown criterion of the free molecular assumption, and it is verified by experimental data and DSMC results. The second issue is how to achieve uniform distributions of thickness and composition over a large-area thin film. Our analysis shows that at relatively low evaporation rates this goal is easily achieved by arranging the evaporative source positions properly and rotating the substrate.
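The N_c criterion above lends itself to a quick numerical check. The following sketch is not from the paper; the hard-sphere mean free path formula is standard, while the proportionality constant c_prop and all numerical inputs are assumptions for illustration only.

    # Illustrative sketch (not the paper's code): apply the N_c = 1 breakdown test for the
    # free molecular (cosine law) assumption, taking N_c proportional to 1/Kn at the source.
    import math

    def mean_free_path(temperature_K, pressure_Pa, molecule_diameter_m):
        # Hard-sphere mean free path: lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
        k_B = 1.380649e-23  # J/K
        return k_B * temperature_K / (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_Pa)

    def collisions_estimate(Kn, c_prop=1.0):
        # N_c ~ c_prop / Kn; c_prop is an assumed order-one constant, not the paper's value.
        return c_prop / Kn

    # Made-up vapor conditions near the source:
    lam = mean_free_path(temperature_K=2000.0, pressure_Pa=1.0, molecule_diameter_m=3e-10)
    Kn = lam / 0.01                     # local Knudsen number with an assumed 1 cm source scale
    N_c = collisions_estimate(Kn)
    print(f"Kn = {Kn:.2f}, N_c = {N_c:.2f}, cosine law expected valid: {N_c < 1.0}")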
Abstract:
179 p.
Abstract:
A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CVs) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed. (PDF contains 70 pages)
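To illustrate the two-stage structure described above (this is not the report's estimator; the variance components and sample sizes are invented), the usual two-stage approximation Var(mean) ≈ s²_between/n + s²_within/(n·m) shows why increasing the number of wells n reduces the variance faster than increasing the fish measured per well m.

    # Hedged sketch: variance of a mean length under two-stage sampling,
    #   Var(mean) ~ s2_between / n + s2_within / (n * m),
    # with n primary units (wells) and m secondary units (fish per well).
    # The variance components below are invented, not values from the 1988 data.

    def two_stage_variance(s2_between, s2_within, n_wells, m_fish):
        return s2_between / n_wells + s2_within / (n_wells * m_fish)

    s2_between, s2_within = 16.0, 25.0   # assumed between-well and within-well variances (cm^2)
    base     = two_stage_variance(s2_between, s2_within, n_wells=6,  m_fish=50)
    double_n = two_stage_variance(s2_between, s2_within, n_wells=12, m_fish=50)
    double_m = two_stage_variance(s2_between, s2_within, n_wells=6,  m_fish=100)
    print(f"baseline {base:.2f}, double n {double_n:.2f}, double m {double_m:.2f}")
    # Doubling n shrinks both terms; doubling m shrinks only the smaller within-well term.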
Abstract:
The thermal fluctuation approach is widely used to monitor the association kinetics of surface-bound receptor-ligand interactions. Various protocols, such as sliding standard deviation (SD) analysis (SSA) and Page's test analysis (PTA), have been used to estimate two-dimensional (2D) kinetic rates from the time course of the displacement of the molecular carrier. In the current work, we compared the estimations from both SSA and a modified PTA using measured data from an optical trap assay and simulated data from a random number generator. Our results indicated that both SSA and PTA were reliable in estimating 2D kinetic rates. Parametric analysis also demonstrated that these estimations were sensitive to parameters such as sampling rate, sliding window size, and threshold. These results further the understanding of how to quantify the biophysics of receptor-ligand interactions.
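As an illustration of a sliding standard deviation pass of the kind the abstract refers to (the window size, threshold, and the rule that low SD marks a bound state are assumptions for illustration, not the authors' exact protocol):

    import numpy as np

    def sliding_sd(displacement, window):
        # Standard deviation of the carrier displacement in each sliding window.
        x = np.asarray(displacement, dtype=float)
        return np.array([x[i:i + window].std() for i in range(len(x) - window + 1)])

    def detect_bound_intervals(displacement, window, threshold):
        # Flag windows whose SD falls below `threshold` (assumed to indicate a bound state).
        sd = sliding_sd(displacement, window)
        return sd, sd < threshold

    # Synthetic trace: free fluctuations (high SD) with a quieter 'bound' stretch in the middle.
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(0, 5.0, 2000),   # free
                            rng.normal(0, 1.5, 1000),   # bound (stiffer tether)
                            rng.normal(0, 5.0, 2000)])  # free again
    sd, bound = detect_bound_intervals(trace, window=200, threshold=3.0)
    print(f"fraction of windows flagged as bound: {bound.mean():.2f}")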
Abstract:
Fluid diffusion in glassy polymers proceeds in ways that are not explained by the standard diffusion model. Although the reasons for the anomalous effects are not known, much of the observed behavior is attributed to the long times that polymers below their glass transition temperature take to adjust to changes in their condition. The slow internal relaxations of the polymer chains ensure that the material properties are history-dependent, and also allow both local inhomogeneities and differential swelling to occur. Two models are developed in this thesis with the intent of accounting for these effects in the diffusion process.
In Part I, a model is developed to account for both the history dependence of the glassy polymer, and the dual sorption which occurs when gas molecules are immobilized by the local heterogeneities. A preliminary study of a special case of this model is conducted, showing the existence of travelling wave solutions and using perturbation techniques to investigate the effect of generalized diffusion mechanisms on their form. An integral averaging method is used to estimate the penetrant front position.
In Part II, a model is developed for particle diffusion coupled with displacements in isotropic viscoelastic materials. The nonlinear dependence of the material properties on the fluid concentration is taken into account, while pure displacements are assumed to remain in the range of linear viscoelasticity. A fairly general model is obtained for three-dimensional irrotational motion, with the development of the model based on the assumptions of irreversible thermodynamics. With the help of some dimensional analysis, this model is simplified to a version which is proposed for the study of Case II behavior.
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
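The abstract does not give REM's update rules; for orientation, the following sketch shows plain EM for a one-dimensional Gaussian mixture, the kind of latent variable fit that REM is described as extending with a relaxation strategy to escape poor local maxima. All model choices here are illustrative.

    import numpy as np

    def em_gaussian_mixture(x, n_components, n_iters=100, seed=0):
        # Plain EM for a 1-D Gaussian mixture (the baseline that REM is described as improving on).
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        mu = rng.choice(x, n_components)           # initial means drawn from the data
        var = np.full(n_components, x.var())       # initial variances
        pi = np.full(n_components, 1.0 / n_components)
        for _ in range(n_iters):
            # E-step: responsibilities of each component for each point.
            log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                     - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
            r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances.
            n_k = r.sum(axis=0)
            pi = n_k / len(x)
            mu = (r * x[:, None]).sum(axis=0) / n_k
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k + 1e-8
        return pi, mu, var

    data = np.concatenate([np.random.default_rng(1).normal(-2, 1.0, 500),
                           np.random.default_rng(2).normal(3, 0.5, 500)])
    print(em_gaussian_mixture(data, n_components=2))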
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
Abstract:
This thesis consists of three separate studies of roles that black holes might play in our universe.
In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
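For intuition about the distance-redshift method described above: at low redshift the relation reduces to d_L ≈ cz/H_0, so a luminosity distance read off the waveform together with an inferred redshift yields H_0 directly. The sketch below uses that low-redshift limit with invented numbers and ignores the lensing and measurement errors analyzed in the thesis.

    # Low-redshift sketch: H0 ~ c * z / d_L. The binary's numbers are illustrative only.
    C_KM_S = 299792.458  # speed of light, km/s

    def hubble_constant_low_z(redshift, luminosity_distance_Mpc):
        # Estimate H0 (km/s/Mpc) from one 'standard siren' in the low-z limit.
        return C_KM_S * redshift / luminosity_distance_Mpc

    # A hypothetical binary with d_L = 430 Mpc (from the waveform) and z = 0.1 (inferred):
    print(f"H0 ~ {hubble_constant_low_z(0.1, 430.0):.1f} km/s/Mpc")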
In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.
In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.
Abstract:
The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.
The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ, which are constrained by existing experimental limits and relevant to future experiments.
The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.
Abstract:
47 p.
Abstract:
The laminar-to-turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates of 90%, 76%, and 63% of the boundary layer edge velocity are observed for the leading edge, centroid, and trailing edge of the spots, respectively. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance. Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates and compared with both no-injection and chemically inert argon injection cases. While CO2 injection seems to delay transition, and argon injection seems to promote it, the experimental results are inconclusive, and matching computations do not predict a reduction in N factor for any CO2 injection condition computed.
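As a small illustration of the quoted spot convection fractions (the edge velocity and spot origin below are invented values, not T5 data), the following sketch tracks how a turbulent spot stretches as its leading edge, centroid, and trailing edge convect at 90%, 76%, and 63% of the edge velocity:

    # Illustrative spot-propagation sketch using the reported convection fractions.
    # The edge velocity and spot origin are made-up numbers, not measurements.
    U_EDGE = 4000.0          # assumed boundary layer edge velocity, m/s
    FRACTIONS = {"leading edge": 0.90, "centroid": 0.76, "trailing edge": 0.63}

    def spot_positions(x_origin_m, elapsed_s):
        # Downstream position of each part of a spot born at x_origin_m after elapsed_s.
        return {name: x_origin_m + frac * U_EDGE * elapsed_s
                for name, frac in FRACTIONS.items()}

    pos = spot_positions(x_origin_m=0.4, elapsed_s=50e-6)
    length = pos["leading edge"] - pos["trailing edge"]
    print({k: round(v, 3) for k, v in pos.items()}, f"spot length ~ {length * 1000:.1f} mm")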
Abstract:
We have investigated the spectra of electromagnetically induced transparency (EIT) when the cell is filled with a buffer gas. Our theoretical results show that the buffer gas can induce a narrower spectral line and steeper dispersion than those of the usual EIT case, in both homogeneously broadened and Doppler-broadened systems. The linewidth decreases with increasing buffer gas pressure. This narrow spectrum may be applied to quantum information processing, nonlinear optics, and atomic frequency standards.
Abstract:
Theoretical and experimental studies of a gas laser amplifier are presented, assuming the amplifier is operating with a saturating optical frequency signal. The analysis is primarily concerned with the effects of the gas pressure and the presence of an axial magnetic field on the characteristics of the amplifying medium. Semiclassical radiation theory is used, along with a density matrix description of the atomic medium which relates the motion of single atoms to the macroscopic observables. A two-level description of the atom, using phenomenological source rates and decay rates, forms the basis of our analysis of the gas laser medium. Pressure effects are taken into account to a large extent through suitable choices of decay rate parameters.
Two methods for calculating the induced polarization of the atomic medium are used. The first method utilizes a perturbation expansion which is valid for signal intensities that barely reach saturation strength, and it is quite general in applicability. The second method is valid for arbitrarily strong signals, but it yields tractable solutions only for zero magnetic field or for axial magnetic fields large enough that the Zeeman splitting is much larger than the power-broadened homogeneous linewidth of the laser transition. The effects of pressure broadening of the homogeneous spectral linewidth are included in both the weak-signal and strong-signal theories; however, the effects of Zeeman sublevel-mixing collisions are taken into account only in the weak-signal theory.
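A rough feel for the "large axial field" regime of the strong-signal method can be had by comparing the Zeeman splitting g_J·μ_B·B/h with the power-broadened homogeneous linewidth; the g-factor, field, and linewidth below are assumed values for illustration only.

    # Illustrative check of the strong-signal validity condition:
    # the Zeeman splitting (in Hz) should greatly exceed the power-broadened linewidth.
    MU_B_OVER_H = 1.3996e10   # Bohr magneton / Planck constant, Hz per tesla

    def zeeman_splitting_hz(g_factor, field_tesla):
        return g_factor * MU_B_OVER_H * field_tesla

    splitting = zeeman_splitting_hz(g_factor=1.3, field_tesla=0.1)   # assumed g_J and 0.1 T field
    linewidth = 100e6                                                # assumed 100 MHz linewidth
    print(f"splitting {splitting / 1e6:.0f} MHz vs linewidth {linewidth / 1e6:.0f} MHz, "
          f"ratio = {splitting / linewidth:.1f}")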
The behavior of a He-Ne gas laser amplifier in the presence of an axial magnetic field has been studied experimentally by measuring gain and Faraday rotation of linearly polarized resonant laser signals for various values of input signal intensity, and by measuring nonlinearity-induced anisotropy for elliptically polarized resonant laser signals of various input intensities. Two high-gain transitions in the 3.39-μm region were used for study: a J = 1 to J = 2 (3s2 → 3p4) transition and a J = 1 to J = 1 (3s2 → 3p2) transition. The input signals were tuned to the centers of their respective resonant gain lines.
The experimental results agree quite well with corresponding theoretical expressions which have been developed to include the nonlinear effects of saturation-strength signals. The experimental results clearly show saturation of Faraday rotation, and for the J = 1 to J = 1 transition a Faraday rotation reversal and a traveling-wave gain dip are seen for small values of axial magnetic field. The nonlinearity-induced anisotropy shows a marked dependence on the gas pressure in the amplifier tube for the J = 1 to J = 2 transition; this dependence agrees with the predictions of the general perturbational (weak-signal) theory when allowances are made for the effects of Zeeman sublevel-mixing collisions. The results provide a method for measuring the upper (neon 3s2) level quadrupole moment decay rate, the dipole moment decay rates for the 3s2 → 3p4 and 3s2 → 3p2 transitions, and the effects of various types of collision processes on these decay rates.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
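SteelConverter's internals are not described here; purely to illustrate the kind of mapping such a converter performs, the sketch below translates a hypothetical ETABS-style frame member record (connectivity, section, end releases) into a hypothetical STEEL-style text card. Every field name and the card format are invented, not the actual SteelConverter or STEEL formats.

    # Hypothetical sketch of an ETABS-to-STEEL member translation.
    # All field names and the output card format are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class EtabsFrameMember:
        name: str
        node_i: int
        node_j: int
        section: str
        release_i: bool   # moment release at end i
        release_j: bool   # moment release at end j

    def to_steel_card(m: EtabsFrameMember) -> str:
        # Emit a one-line, fixed-field text card for the (hypothetical) STEEL input deck.
        rel = f"{int(m.release_i)}{int(m.release_j)}"
        return f"MEMBER {m.name:<8s} {m.node_i:6d} {m.node_j:6d} {m.section:<10s} REL={rel}"

    beam = EtabsFrameMember("B1", 101, 102, "W24X76", release_i=False, release_j=True)
    print(to_steel_card(beam))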
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL, it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users can log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the use of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles in which defaults associated with their most commonly run models are saved, and to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining such a cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed a very strong agreement between the two programs on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in Perform, a program more capable of conducting highly nonlinear analysis. These analyses again showed a very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
The dispersion compensation effect of the chirped fiber grating (CFG) is analyzed theoretically, and analytic expressions are derived for composite second-order (CSO) distortion in analog modulated sub-carrier multiplexed (AM-SCM) cable television (CATV) systems with externally and directly modulated transmitters. Simulations are given for the two kinds of modulation and for standard single-mode fiber and non-zero dispersion-shifted fiber (NZDSF) systems. The results show that a CFG could be used as a dispersion compensator in directly modulated systems, but its dispersion coefficient must be adjusted much more precisely than in the externally modulated system. The requirements for the NZDSF system can be relaxed considerably. It is proposed that a directly modulated source, combined with a precisely adjusted tunable CFG dispersion compensator, may be used as a transmitter in CATV systems, which may be more cost-effective than external modulation technology. (c) 2006 Elsevier GmbH. All rights reserved.
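As a back-of-the-envelope illustration of why the CFG must be matched more tightly for a directly modulated link: the residual dispersion left after the grating compensates a fiber span grows with any mistuning. The 17 ps/(nm·km) figure below is the usual value for standard single-mode fiber near 1550 nm; the span length and mistuning percentages are placeholders, not results from the paper.

    # Residual-dispersion sketch for CFG compensation of a fiber span.
    D_FIBER_PS_NM_KM = 17.0      # ps/(nm*km), typical standard single-mode fiber near 1550 nm
    SPAN_KM = 50.0               # assumed span length

    def residual_dispersion(cfg_dispersion_ps_nm):
        # Net link dispersion (ps/nm) after adding a CFG of the given (negative) dispersion.
        return D_FIBER_PS_NM_KM * SPAN_KM + cfg_dispersion_ps_nm

    accumulated = D_FIBER_PS_NM_KM * SPAN_KM          # 850 ps/nm to compensate
    for error_pct in (0.5, 2.0, 10.0):                # CFG mistuning as a percentage
        cfg = -accumulated * (1 - error_pct / 100.0)
        print(f"{error_pct:4.1f}% mistuned CFG -> residual {residual_dispersion(cfg):6.1f} ps/nm")
    # A directly modulated (chirped) source tolerates less residual dispersion before CSO
    # degrades, hence the tighter adjustment requirement noted above.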