4 results for SCALE MODELS

in CaltechTHESIS


Relevance:

60.00%

Abstract:

This thesis presents a set of novel methods to biaxially package planar structures by folding and wrapping. The structure is divided into strips connected by folds that can slip during wrapping to accommodate material thickness. These packaging schemes are highly efficient, with theoretical packaging efficiencies approaching 100%. Packaging tests on meter-scale physical models have demonstrated packaging efficiencies of up to 83%. These methods avoid permanent deformation of the structure, allowing an initially flat structure to be deployed to a flat state.

Also presented are structural architectures and deployment schemes that are compatible with these packaging methods. These structural architectures use either in-plane pretension -- suitable for membrane structures -- or out-of-plane bending stiffness to resist loading. Physical models are constructed to realize these structural architectures. The deployment of these types of structures is shown to be controllable and repeatable by conducting experiments on lab-scale models.

These packaging methods, structural architectures, and deployment schemes are applicable to a variety of spacecraft structures such as solar power arrays, solar sails, antenna arrays, and drag sails; they have the potential to enable larger variants of these structures while reducing the packaging volume required. In this thesis, these methods are applied to the preliminary structural design of a space solar power satellite. This deployable spacecraft, measuring 60 m x 60 m, can be packaged into a cylinder measuring 1.5 m in height and 1 m in diameter. It can be deployed to a flat configuration, where it acts as a stiff lightweight support framework for multifunctional tiles that collect sunlight, generate electric power, and transmit it to a ground station on Earth.
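As a rough numerical check of the packaging-efficiency figure of merit used above (commonly defined as the ratio of material volume to packaged envelope volume), the sketch below evaluates it for the quoted 1 m diameter, 1.5 m tall cylinder; the effective material thickness is an assumed placeholder, not a value taken from the thesis.

```python
import math

# Hypothetical illustration of the packaging-efficiency metric for a deployable
# planar structure: the fraction of the packaged envelope occupied by material.
# The effective thickness below is a placeholder, not a value from the thesis.
side = 60.0                    # deployed side length [m]
t_eff = 0.25e-3                # assumed effective material thickness [m] (placeholder)
height, diameter = 1.5, 1.0    # packaged cylinder dimensions [m]

material_volume = side**2 * t_eff                       # volume of the flat structure
envelope_volume = math.pi * (diameter / 2)**2 * height  # packaged cylinder volume

efficiency = material_volume / envelope_volume
print(f"Packaging efficiency ~ {efficiency:.0%}")       # ~76% with these assumptions
```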

Relevance:

30.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
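For context, the sketch below implements the standard expectation-maximization (EM) iteration for a one-dimensional Gaussian mixture, the baseline that REM generalizes; the relaxation and automatic model-size-selection machinery of REM itself is not reproduced here.

```python
import numpy as np

# Minimal sketch of standard EM for a 1-D Gaussian mixture: alternate between
# computing posterior responsibilities (E-step) and re-estimating parameters
# from the responsibility-weighted data (M-step).
def em_gmm(x, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                 # initial means drawn from the data
    sigma = np.full(k, x.std())           # initial standard deviations
    pi = np.full(k, 1.0 / k)              # initial mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return pi, mu, sigma
```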

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
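As a rough illustration of the spike-sorting setup, the sketch below clusters detected spike waveforms with an off-the-shelf Gaussian mixture on principal-component features; this generic pipeline stands in for, and is much simpler than, the probabilistic model developed in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Illustrative spike-sorting pipeline: reduce aligned spike snippets to a few
# principal components and cluster them with a Gaussian mixture, assigning a
# putative neuron identity to each spike. Array shapes are assumptions.
def sort_spikes(waveforms, n_units, n_components=3):
    """waveforms: array of shape (n_spikes, n_samples) of aligned snippets."""
    features = PCA(n_components=n_components).fit_transform(waveforms)
    gmm = GaussianMixture(n_components=n_units, covariance_type="full")
    labels = gmm.fit_predict(features)    # cluster label per spike
    return labels, gmm
```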

Relevance:

30.00%

Abstract:

The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
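To make the kernel formulation concrete, the sketch below evaluates a single degree-l geoid coefficient as a radial integral of a response kernel against an internal density-anomaly profile; the kernel and density samples, and the omitted constant prefactor, are placeholders rather than the analytically derived kernels of the thesis.

```python
import numpy as np

# Schematic evaluation of a degree-l geoid coefficient as a radial integral,
#     N_lm ~ integral of G_l(r) * drho_lm(r) dr,
# where the kernel G_l(r) accounts for both the direct attraction of the load
# and the compensating boundary deformations. Constant prefactors are omitted.
def geoid_coefficient(kernel_l, drho_lm, r):
    """kernel_l, drho_lm: samples of G_l(r) and the density anomaly on radii r [m]."""
    integrand = kernel_l * drho_lm
    # trapezoidal rule over the radial samples
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
```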

Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
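The "more than 80% of the observed geoid variance" figure can be phrased as a simple variance-reduction statistic on spherical harmonic coefficients, sketched below; the degree range and any coefficient weighting are assumptions.

```python
import numpy as np

# Sketch of a variance-explained score for a model geoid against the observed
# low-degree geoid, working directly on spherical harmonic coefficients.
def variance_explained(observed_coeffs, predicted_coeffs):
    """Both inputs: flat arrays of harmonic coefficients over the same degrees."""
    residual = observed_coeffs - predicted_coeffs
    return 1.0 - np.sum(residual**2) / np.sum(observed_coeffs**2)
```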

Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.

Relevance:

30.00%

Abstract:

The purpose of this thesis is to characterize the behavior of the smallest turbulent scales in high Karlovitz number (Ka) premixed flames. These scales are particularly important in the two-way coupling between turbulence and chemistry, and a better understanding of them will support future modeling efforts using large eddy simulations (LES). The smallest turbulent scales are studied by considering the vorticity vector, ω, and its transport equation.

Due to the complexity of turbulent combustion introduced by the wide range of length and time scales, the two-dimensional vortex-flame interaction is first studied as a simplified test case. Numerical and analytical techniques are used to discern the dominant transport terms and their effects on vorticity based on the initial size and strength of the vortex. This description of the effects of the flame on a vortex provides a foundation for investigating vorticity in turbulent combustion.

Subsequently, enstrophy, ω² = ω · ω, and its transport equation are investigated in premixed turbulent combustion. For this purpose, a series of direct numerical simulations (DNS) of premixed n-heptane/air flames is performed, the conditions of which span a wide range of unburnt Karlovitz numbers and turbulent Reynolds numbers. Theoretical scaling analysis, along with the DNS results, supports that, at high Karlovitz number, enstrophy transport is controlled by the viscous dissipation and vortex stretching/production terms. As a result, vorticity scales throughout the flame with the inverse of the Kolmogorov time scale, τη, just as in homogeneous isotropic turbulence. As τη is only a function of the viscosity and dissipation rate, this supports the validity of Kolmogorov's first similarity hypothesis for sufficiently high Ka numbers (Ka ≳ 100). These conclusions are in contrast to low Karlovitz number behavior, where dilatation and baroclinic torque have a significant impact on vorticity within the flame. The results are unaffected by the transport model, chemical model, turbulent Reynolds number, and physical configuration.
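The scaling argument above can be phrased as a simple diagnostic on DNS output: normalize the local vorticity magnitude by the inverse Kolmogorov time scale, τη = sqrt(ν/ε), and check that the result stays of order one through the flame. The sketch below, with placeholder field names and shapes, illustrates this check.

```python
import numpy as np

# Diagnostic sketch: compare local vorticity magnitude with the inverse
# Kolmogorov time scale tau_eta = sqrt(nu / epsilon), built from the local
# viscosity and dissipation rate. Field names and shapes are placeholders.
def normalized_vorticity(omega, nu, epsilon):
    """omega: vorticity vectors, shape (..., 3); nu, epsilon: matching scalar fields."""
    enstrophy = np.sum(omega**2, axis=-1)   # omega^2 = omega . omega
    tau_eta = np.sqrt(nu / epsilon)         # Kolmogorov time scale
    return np.sqrt(enstrophy) * tau_eta     # ~ O(1) where the scaling holds
```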

Next, the isotropy of vorticity is assessed. It is found that, given a sufficiently large Karlovitz number (Ka ≳ 100), the vorticity is isotropic. At lower Karlovitz numbers, anisotropy develops due to the effects of the flame on the vortex stretching/production term. In this case, the local dynamics of vorticity in the eigenframe of the strain-rate tensor, S, are altered by the flame. At sufficiently high Karlovitz numbers, the dynamics of vorticity in this eigenframe resemble those of homogeneous isotropic turbulence.
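The eigenframe analysis can be illustrated with a short routine that returns the alignment cosines between the vorticity vector and the eigenvectors of the strain-rate tensor S; the function below is a generic sketch with assumed array shapes, not the thesis's post-processing code.

```python
import numpy as np

# Sketch of the eigenframe analysis at a single point: build the strain-rate
# tensor from the velocity gradient, diagonalize it, and measure how the
# vorticity vector aligns with each eigenvector.
def vorticity_alignment(grad_u, omega):
    """grad_u: velocity gradient tensor, shape (3, 3); omega: vorticity vector, shape (3,)."""
    S = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor
    eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    unit_omega = omega / np.linalg.norm(omega)
    return np.abs(eigvecs.T @ unit_omega)    # |cos| of angle with each eigenvector
```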

Combined, the results of this thesis support that both the magnitude and orientation of vorticity resemble the behavior of homogeneous isotropic turbulence, given a sufficiently high Karlovitz number (Ka ≳ 100). This supports the validity of Kolmogorov's first similarity hypothesis and the hypothesis of local isotropy under these conditions. However, dramatically different behavior is found at lower Karlovitz numbers. These conclusions suggest directions for modeling high Karlovitz number premixed flames using LES. With more accurate models, the design of aircraft combustors and other combustion-based devices may better mitigate the detrimental effects of combustion, from reducing CO2 and soot production to increasing engine efficiency.