21 results for the least number heuristic
in CaltechTHESIS
Abstract:
Part I
Potassium bis-(tricyanovinyl) amine, K+N[C(CN)=C(CN)2]2-, crystallizes in the monoclinic system with space group Cc and lattice constants a = 13.346 ± 0.003 Å, c = 8.992 ± 0.003 Å, β = 114.42 ± 0.02°, and Z = 4. Three-dimensional intensity data were collected in layers perpendicular to the b* and c* axes. The crystal structure was refined by the least-squares method with anisotropic temperature factors to an R value of 0.064.
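For reference, the R value quoted is presumably the conventional crystallographic residual over observed and calculated structure-factor amplitudes:

```latex
R = \frac{\sum_{hkl} \bigl|\, |F_{o}| - |F_{c}| \,\bigr|}{\sum_{hkl} |F_{o}|}
```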
The average carbon-carbon and carbon-nitrogen bond distances in –C–C≡N are 1.441 ± 0.016 Å and 1.146 ± 0.014 Å respectively. The bis-(tricyanovinyl) amine anion is approximately planar. The coordination number of the potassium ion is eight, with bond distances from 2.890 Å to 3.408 Å. The bond angle C-N-C of the amine nitrogen is 132.4 ± 1.9°. Among the six cyano groups in the molecule, two are bent by what appear to be significant amounts (5.0° and 7.2°). The remaining four are linear within the experimental error. The bending can probably be explained by molecular packing forces in the crystals.
Part II
The nuclear magnetic resonance of 81Br and 127I in aqueous solutions was studied. Cation-halide ion interactions were probed by measuring the effect of Li+, Na+, K+, Mg++, and Cs+ on the line width of the halide ions, and solvent-halide ion interactions by measuring the effects of methanol, acetonitrile, and acetone on the line width of 81Br and 127I in aqueous solution. It was found that viscosity plays a very important role in the halide ion line width. There is no specific cation-halide ion interaction for ions such as Mg++, Li+, Na+, and K+, whereas the Cs+-halide ion interaction is strong. The effect of organic solvents on the halide ion line width in aqueous solution follows the order acetone > acetonitrile > methanol. It is suggested that halide ions form stable complexes with the solvent molecules and that Cs+ can replace one of the ligands in the solvent-halide ion complex.
Part III
An unusually large isotope effect on the bridge hydrogen chemical shift of the enol form of pentanedione-2,4 (acetylacetone) and 3-methylpentanedione-2,4 has been observed. An attempt has been made to interpret this effect. It is suggested, from the deuterium isotope effect studies, the temperature dependence of the bridge hydrogen chemical shift, IR studies in the OH, OD, and C=O stretch regions, and the HMO calculations, that there are probably two structures for the enol form of acetylacetone. The difference between these two structures arises mainly from the electronic structure of the π-system. The relative populations of these two structures were calculated at various temperatures for normal acetylacetone and at room temperature for the deuterated acetylacetone.
Abstract:
Be it a physical object or a mathematical model, a nonlinear dynamical system can display complicated aperiodic behavior, or "chaos." In many cases, this chaos is associated with motion on a strange attractor in the system's phase space. And the dimension of the strange attractor indicates the effective number of degrees of freedom in the dynamical system.
In this thesis, we investigate numerical issues involved with estimating the dimension of a strange attractor from a finite time series of measurements on the dynamical system.
Of the various definitions of dimension, we argue that the correlation dimension is the most efficiently calculable and we remark further that it is the most commonly calculated. We are concerned with the practical problems that arise in attempting to compute the correlation dimension. We deal with geometrical effects (due to the inexact self-similarity of the attractor), dynamical effects (due to the nonindependence of points generated by the dynamical system that defines the attractor), and statistical effects (due to the finite number of points that sample the attractor). We propose a modification of the standard algorithm, which eliminates a specific effect due to autocorrelation, and a new implementation of the correlation algorithm, which is computationally efficient.
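As a rough illustration of the standard correlation-dimension estimate discussed here, the sketch below implements the basic Grassberger-Procaccia approach (not the modified algorithm developed in the thesis); the embedding parameters and the Hénon-map test signal are illustrative assumptions.

```python
import numpy as np

def correlation_dimension(x, m=2, tau=1):
    """Estimate the correlation dimension of a scalar time series x
    via delay embedding and the correlation sum C(r) ~ r**D2."""
    # Delay-embed the series into m-dimensional vectors.
    N = len(x) - (m - 1) * tau
    X = np.column_stack([x[i * tau:i * tau + N] for i in range(m)])
    # Pairwise distances between embedded points (upper triangle only).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(N, k=1)]
    r_values = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
    # Correlation sum: fraction of pairs closer than r.
    C = np.array([(d < r).mean() for r in r_values])
    # D2 is the slope of log C(r) vs log r; a real analysis would restrict
    # the fit to the scaling region rather than the full range used here.
    mask = C > 0
    slope, _ = np.polyfit(np.log(r_values[mask]), np.log(C[mask]), 1)
    return slope

# Toy example: chaotic Hénon map (literature correlation dimension ~1.2).
x = np.zeros(1500); y = np.zeros(1500)
for n in range(1, 1500):
    x[n] = 1 - 1.4 * x[n - 1] ** 2 + y[n - 1]
    y[n] = 0.3 * x[n - 1]
print(correlation_dimension(x[500:], m=2, tau=1))
```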
Finally, we apply the algorithm to chaotic data from the Caltech tokamak and the Texas tokamak (TEXT); we conclude that plasma turbulence is not a low-dimensional phenomenon.
Abstract:
Escherichia coli is one of the best studied living organisms and a model system for many biophysical investigations. Despite countless discoveries of the details of its physiology, we still lack a holistic understanding of how these bacteria react to changes in their environment. One of the most important examples is their response to osmotic shock. One of the mechanistic elements protecting cell integrity upon exposure to sudden changes of osmolarity is the presence of mechanosensitive channels in the cell membrane. These channels are believed to act as tension release valves protecting the inner membrane from rupturing. This thesis presents an experimental study of various aspects of mechanosensation in bacteria. We examine cell survival after osmotic shock and how the number of MscL (Mechanosensitive channel of Large conductance) channels expressed in a cell influences its physiology. We developed an assay that allows real-time monitoring of the rate of the osmotic challenge and direct observation of cell morphology during and after the exposure to osmolarity change. The work described in this thesis introduces tools that can be used to quantitatively determine at the single-cell level the number of expressed proteins (in this case MscL channels) as a function of, e.g., growth conditions. The improvement in our quantitative description of mechanosensation in bacteria allows us to address many so-far unsolved problems, such as the minimal number of channels needed for survival, and can begin to paint a clearer picture of why there are so many distinct types of mechanosensitive channels.
Abstract:
A research program was designed (1) to map regional lithological units of the lunar surface based on measurements of spatial variations in spectral reflectance, and (2) to establish the sequence of formation of such lithological units from measurements of the accumulated effects of impacting bodies.
Spectral reflectance data were obtained by scanning luminance variations over the lunar surface at three wavelengths (0.4µ, 0.52µ, and 0.7µ). These luminance measurements were reduced to normalized spectral reflectance values relative to a standard area in Mare Serenitatis. The spectral type of each lunar area was identified from the shape of its reflectance spectrum. From these data, lithological units or regions of constant color were identified. The maria fall into two major spectral classes: circular maria like Mare Serenitatis contain S-type or red material, and thin, irregular, expansive maria like Mare Tranquillitatis contain T-type or blue material. Four distinct subtypes of S-type reflectances and two of T-type reflectances exist. As these six subtypes occur in a number of lunar regions, it is concluded that they represent specific types of material rather than some homologous set of a few end members.
The relative ages or sequence of formation of these mare units were established from measurements of the accumulated impacts which have occurred since mare formation. A model was developed which relates the integrated flux of particles which have impacted a surface to the distribution of craters as functions of size and shape. Erosion of craters is caused chiefly by small bodies which produce negligible individual changes in crater shape. Hence the shape of a crater can be used to estimate the total number of small impacts that have occurred since the crater was formed. Relative ages of a surface can then be obtained from measurements of the slopes of the walls of the oldest craters formed on the surface. The results show that different maria and regions within them were emplaced at different times. An approximate absolute time scale was derived from Apollo 11 crystallization ages under an assumption of a constant rate of impacting for the last 4 x 10^9 yrs. Assuming constant flux, the period of mare formation lasted from over 4 x 10^9 yrs ago to about 1.5 x 10^9 yrs ago.
A synthesis of the results of relative age measurements and of spectral reflectance mapping shows that (1) the formation of the lunar maria occurred in three stages; material of only one spectral type was deposited in each stage, (2) two distinct kinds of maria exist, each type distinguished by morphology, structure, gravity anomalies, time of formation, and spectral reflectance type, and (3) individual maria have complicated histories; they contain a variety of lithic units emplaced at different times.
Abstract:
Cooperative director fluctuations in lipid bilayers have been postulated for many years. ^2H-NMR T_1^(-1), T_(1ρ)^(-1), and T_2^(-1) measurements have been used to identify these motions and to determine the origin of increased slow bilayer motion upon addition of unlike lipids or proteins to a pure lipid bilayer.
The contribution of cooperative director fluctuations to NMR relaxation in lipid bilayers has been expressed mathematically using the approach of Doane et al.^1 and Pace and Chan.^2 The T_2^(-1)'s of pure dimyristoyllecithin (DML) bilayers deuterated at the 2, 9 and 10, and all positions on both lipid hydrocarbon chains have been measured. Several characteristics of these measurements indicate the presence of cooperative director fluctuations. First of all, T_2^(-1) exhibits a linear dependence on S_(CD)^2. Secondly, T_2^(-1) varies across the ^2H-NMR powder pattern as sin^2(2β), where β is the angle between the average bilayer director and the external magnetic field. Furthermore, these fluctuations are restricted near the lecithin head group, suggesting that the head group does not participate in these motions but, rather, anchors the hydrocarbon chains in the bilayer.
T_2^(-1) has been measured for selectively deuterated liquid crystalline DML bilayers to which a host of other lipids and proteins have been added. The T_2^(-1) of the DML bilayer is found to increase drastically when chlorophyll a (chl a) and Gramicidin A' (GA') are added to the bilayer. Both these molecules interfere with the lecithin head group spacing in the bilayer. Molecules such as myristic acid, distearoyllecithin (DSL), phytol, and cholesterol, whose hydrocarbon regions are quite different from DML but which have small, neutral polar head groups, leave cooperative fluctuations in the DML bilayer unchanged.
The effect of chl a on cooperative fluctuations in the DML bilayer has been examined in detail using ^2H-NMR T_1^(-1), T_(1ρ)^(-1), and T_2^(-1) measurements. Cooperative fluctuations have been modelled using the continuum theory of the nematic state of liquid crystals. Chl a is found to decrease both the correlation length and the elastic constants in the DML bilayer.
A mismatch between the hydrophobic length of a lipid bilayer and that of an added protein has also been found to change the cooperative properties of the lecithin bilayer. Hydrophobic mismatch has been studied in a series of GA'/lecithin bilayers. The dependence of ^2H-NMR order parameters and relaxation rates on GA' concentration has been measured in selectively deuterated DML, dipalmitoyllecithin (DPL), and DSL systems. Order parameters, cooperative lengths, and elastic constants of the DML bilayer are most disrupted by GA', while the DSL bilayer is the least perturbed by GA'. Thus, it is concluded that the hydrophobic length of GA' best matches that of the DSL bilayer. Preliminary Raman spectroscopy and Differential Scanning Calorimetry experiments on GA'/lecithin systems support this conclusion. Accommodation of hydrophobic mismatch is used to rationalize the absence of H_(II) phase formation in GA'/DML systems and the observation of the H_(II) phase in GA'/DPL and GA'/DSL systems.
1. J. W. Doane and D. L. Johnson, Chem. Phys. Lett., 6, 291-295 (1970).
2. R. J. Pace and S. I. Chan, J. Chem. Phys., 16, 4217-4227 (1982).
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
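As context for the efficiency comparison made above, the following is a minimal sketch of the standard Monte Carlo baseline for a first-excursion probability, applied to a simple linear oscillator driven by discretized Gaussian white noise. All parameter values are illustrative assumptions, not figures from the thesis; the high-dimensional uncertainty enters through the ~1000 random excitation samples per realization.

```python
import numpy as np

def first_excursion_probability(n_samples=1000, threshold=0.4, seed=0):
    """Standard Monte Carlo estimate of P(max |x(t)| > threshold) for a
    linear SDOF oscillator under discretized Gaussian white-noise excitation."""
    rng = np.random.default_rng(seed)
    dt, T = 0.01, 10.0             # time step and duration (s)
    wn, zeta = 2 * np.pi, 0.05     # natural frequency (rad/s), damping ratio
    n_steps = int(T / dt)
    failures = 0
    for _ in range(n_samples):
        # One realization of the uncertain excitation: n_steps i.i.d. Gaussians,
        # illustrating the large number of uncertain parameters mentioned above.
        w = rng.standard_normal(n_steps) / np.sqrt(dt)
        x, v, peak = 0.0, 0.0, 0.0
        for k in range(n_steps):   # semi-implicit (symplectic) Euler integration
            a = w[k] - 2 * zeta * wn * v - wn**2 * x
            v += a * dt
            x += v * dt
            peak = max(peak, abs(x))
        failures += peak > threshold
    return failures / n_samples

print(first_excursion_probability())
```

For small failure probabilities the number of samples needed by this plain estimator grows roughly as the inverse of the probability, which is the inefficiency the methods developed in the thesis are designed to avoid.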
Abstract:
Motivated by needs in molecular diagnostics and advances in microfabrication, researchers started to seek help from microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. The most commonly used example of this conversion is digital PCR, where, by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
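A minimal sketch of that Poisson correction; the compartment counts and volume are illustrative assumptions.

```python
import math

def digital_pcr_estimate(positive, total, volume_per_compartment_nl=1.0):
    """Estimate target input from a digital assay using Poisson statistics.

    With targets distributed randomly among compartments, the fraction of
    negative compartments is exp(-lambda), where lambda is the mean number of
    targets per compartment, so lambda = -ln(1 - positive/total)."""
    lam = -math.log(1.0 - positive / total)
    total_copies = lam * total                   # estimated input copies
    conc = lam / volume_per_compartment_nl       # copies per nL
    return lam, total_copies, conc

# Example: 300 of 1000 compartments reacted.
lam, copies, conc = digital_pcr_estimate(300, 1000)
print(f"mean occupancy {lam:.3f}, ~{copies:.0f} input copies, {conc:.3f} copies/nL")
```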
However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy has been examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is the premise for such technology to be practically implemented in diagnosis beyond the laboratory. The two angles are further framed into a "fate" and "rate" determination scheme, where the influence of different parameters is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms have been used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.
This dissertation also contributes towards developing point-of-care (POC) tests for limited-resource settings. On one hand, it adds ease of access to the tests by incorporating low-cost, mass-producible plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, as well as HCV genotyping.
Abstract:
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc>>1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets; these analyses suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc>>1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, it is determined that prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
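To make the derivative-limiting idea concrete, the sketch below shows a generic bounded cubic Hermite interpolation in the spirit of Fritsch-Carlson/PCHIP limiting; it is not the specific bounding scheme developed in the thesis (which also admits single sub-cell extrema), and the sample data are purely illustrative.

```python
import numpy as np

def limited_hermite_interpolate(x, y, xq):
    """Cubic Hermite interpolation with a simple derivative limiter so the
    interpolant stays within the bounds of the nodal data."""
    h = np.diff(x)
    s = np.diff(y) / h                       # secant slopes on each interval
    d = np.zeros_like(y, dtype=float)        # limited nodal derivatives
    denom = s[:-1] + s[1:]
    harmonic = np.divide(2 * s[:-1] * s[1:], denom,
                         out=np.zeros_like(denom), where=denom != 0)
    # Harmonic-mean limiting where slopes agree; flatten at local extrema.
    d[1:-1] = np.where(s[:-1] * s[1:] > 0, harmonic, 0.0)
    d[0], d[-1] = s[0], s[-1]
    # Evaluate the cubic Hermite polynomial on each interval.
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[i]) / h[i]
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y[i] + h10 * h[i] * d[i] + h01 * y[i + 1] + h11 * h[i] * d[i + 1]

# Example: a sharp scalar front stays within [0, 1] instead of overshooting.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
xq = np.linspace(0.0, 4.0, 9)
print(limited_hermite_interpolate(x, y, xq))
```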
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969, and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
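A minimal sketch of the kind of least-cost linear program described here, using scipy.optimize.linprog. The control options, their costs, and the emission-reduction coefficients below are purely hypothetical placeholders, not figures from the thesis; only the base 1975 emission levels are taken from the text.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical control measures: annualized cost ($M) per unit of adoption,
# and tons/day of RHC and NOx removed at full adoption.
cost    = np.array([40.0, 25.0, 60.0])    # used-car devices, aircraft, stationary
rhc_cut = np.array([120.0, 30.0, 200.0])
nox_cut = np.array([60.0, 10.0, 150.0])

base_rhc, base_nox = 670.0, 790.0          # base 1975 emissions (tons/day)
target_rhc, target_nox = 400.0, 600.0      # illustrative emission targets

# Minimize total cost subject to achieving at least the required reductions,
# with each control adopted between 0 (none) and 1 (full adoption).
res = linprog(c=cost,
              A_ub=np.vstack([-rhc_cut, -nox_cut]),
              b_ub=[-(base_rhc - target_rhc), -(base_nox - target_nox)],
              bounds=[(0, 1)] * 3,
              method="highs")
print(res.x, res.fun)
```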
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
The isotope effect on propagation rate was determined for four homogeneous ethylene polymerization systems. The catalytic system Cp_2Ti(Et)Cl + EtAlCl_2 has a k^H_p/k^D_p = 1.035 ± 0.03. This result strongly supports an insertion mechanism which does not involve a hydrogen migration during the rate determining step of propagation (Cossee mechanism). Three metal-alkyl free systems were also studied. The catalyst I_2(PMe_3)_3Ta(neopentylidene)(H) has a k^H_p/k^D_p = 1.709. It is interpreted as a primary isotope effect involving a non-linear α-hydrogen migration during the rate determining step of propagation (Green mechanism). The lanthanide complexes Cp*_2LuMe•Et_2O and Cp*_2YbMe•Et_2O have k^H_p/k^D_p = 1.46 and 1.25, respectively. They are interpreted as primary isotope effects due to a partial hydrogen migration during the rate determining step of propagation.
The presence of a precoordination or other intermediate species during the polymerization of ethylene by the mentioned metal-alkyl free catalysts was sought by low temperature NMR spectroscopy. However, no evidence for such species was found. If they exist, their concentrations are very small or their lifetimes are shorter than the NMR time scale.
Two titanocene (alkenyl)chlorides (hexenyl, 1, and heptenyl, 2) were prepared from titanocene dichloride and a THF solution of the corresponding alkenylmagnesium chloride. They do not cyclize in solution when alone, but cyclization to their respective titanocene (methylcycloalkyl) chlorides occurs readily in the presence of a Lewis acid. It is demonstrated that such cyclization occurs with the alkenyl ligand within the coordination sphere of the titanium atom. Cyclization of 1 with EtAlCl_2 at 0°C occurs in less than 95 msec (ethylene insertion time), as shown by the presence of 97% cyclopentyl-capped oligomers when polymerizing ethylene with this system. Some alkyl exchange occurs (3%). Cyclization of 2 is slower under the same reaction conditions and is not complete in 95 msec, as shown by the presence of both cyclohexyl-capped oligomers (35%) and odd-numbered α-olefin oligomers (50%). Alkyl exchange is more extensive, as evidenced by the even-numbered n-alkanes (15%).
Cyclization of 2-d_1 (titanocene(hept-6-en-1-yl-1-d_1)chloride) with EtAlCl_2 demonstrated that for this system there is no α-hydrogen participation during said process. The cyclization is believed to occur by a Cossee-type mechanism. There was no evidence for precoordination of the alkenyl double bond during the cyclization process.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
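A minimal numerical illustration of point i), comparing the minimum-norm least-squares estimate with the lasso for recovering a sparse signal from limited linear measurements. The problem sizes, regularization level, and the sklearn-based setup are illustrative assumptions, not the thesis's actual experiments.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, sigma = 80, 200, 5, 0.05        # measurements, dimension, sparsity, noise

x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, p)) / np.sqrt(n)   # Gaussian measurement matrix
y = A @ x_true + sigma * rng.standard_normal(n)

# Minimum-norm least squares (underdetermined since n < p) ignores sparsity.
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# The lasso exploits the sparsity of the true signal.
x_lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y).coef_

for name, est in [("least squares", x_ls), ("lasso", x_lasso)]:
    err = np.linalg.norm(est - x_true) / np.linalg.norm(x_true)
    print(f"{name:14s} relative error {err:.3f}")
```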
Abstract:
A substantial amount of important scientific information is contained within astronomical data at the submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, these wavelengths remain among the least explored in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable efforts have been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes.
The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet the necessary requirements in terms of the field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors that use kinetic inductance detector (KID) technology.
This dissertation presents the development of a KID-based instrument including a portion of the millimeter-wave bandpass filters and all aspects of the readout electronics, which together enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this dissertation has been implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), a new instrument for the Caltech Submillimeter Observatory (CSO).
Abstract:
The object of this investigation is to devise a rapid, fairly accurate, colorimetric analysis for HCN to be used in field work for determining instantaneous concentrations of the gas under fumigating canvas. A large amount of money is expended yearly by the citrus industry of this state in attempting to control and to eradicate the scale pests. Although fumigation with HCN has been practiced for many years, the progress made has been anything but satisfactory. The greater portion of the work has always been carried on by contractors, who in a large number of cases have been very unscrupulous. The materials and labor are very expensive, and the growers have been satisfied to adhere to beaten paths and hope for the best results on scale kill with the least attendant foliage injury. One familiar with fumigating, either from the grower's or the operator's viewpoint, knows that very widely varying results are obtained, even under what are apparently identical conditions. Even after discounting for the dishonesty of some operators and the prejudices of the grower, there is still a large variance between desired or expected results and those actually obtained.
Abstract:
Part I
The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization, and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77° and also at 345°F, and are presented in graphical and tabular form.
Part II
Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data is presented for Reynolds Numbers between 1600 and 15,000 based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-heptane and n-octane are the evaporating fluids used in this investigation.
Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.
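As a trivial illustration of the Re^0.55 scaling quoted above (the factor of two in Reynolds number is an arbitrary choice):

```python
# Sh ~ Re**0.55: doubling the free-stream Reynolds number raises the
# Sherwood number (mass transfer rate) by a factor of about 1.46.
print(2 ** 0.55)
```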
Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.
Abstract:
The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.
Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects, or much shallower in terms of depth. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.
We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM∗ with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies recently arrived to the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.
Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.
A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.