23 results for Linear rock cutting
in CaltechTHESIS
Abstract:
This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, the intent is to use this model to assess the condition of the system and to predict the response to future excitations.
A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees-of-freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes", which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.
In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
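As a rough illustration of the nonparametric step (the four terms used in the thesis are not specified here, so a cubic stiffness/damping basis is assumed purely for concreteness), a restoring-force fit of this kind can be posed as a linear least-squares problem:

```python
# Minimal sketch (not the thesis code): least-squares fit of a low-order
# nonparametric restoring-force model r(x, v) from a measured modal response.
# The particular four-term basis below (linear/cubic stiffness, linear/cubic
# damping) is an assumption for illustration only.
import numpy as np

def fit_restoring_force(x, v, a, f_ext, mass=1.0):
    """Fit r(x, v) ~ c1*x + c2*x**3 + c3*v + c4*v**3 from m*a + r(x, v) = f_ext."""
    basis = np.column_stack([x, x**3, v, v**3])
    target = f_ext - mass * a          # restoring force inferred from the equation of motion
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return coeffs

# Stand-in "measured" free-decay signal (not a real test record), sampled at dt
dt = 0.01
t = np.arange(0.0, 20.0, dt)
x = np.cos(1.2 * t) * np.exp(-0.05 * t)
v = np.gradient(x, dt)                 # velocity by finite differences
a = np.gradient(v, dt)                 # acceleration by finite differences
f_ext = np.zeros_like(t)               # unforced segment
print(fit_restoring_force(x, v, a, f_ext))
```

A nonhysteretic estimate of this kind is what would then seed the optimization of the two-parameter distributed element model described above.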
The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.
The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.
Abstract:
The various singularities and instabilities which arise in the modulation theory of dispersive wavetrains are studied. Primary interest is in the theory of nonlinear waves, but a study of associated questions in linear theory provides background information and is of independent interest.
The full modulation theory is developed in general terms. In the first approximation for slow modulations, the modulation equations are solved. In both the linear and nonlinear theories, singularities and regions of multivalued modulations are predicted. Higher order effects are considered to evaluate this first order theory. An improved approximation is presented which gives the true behavior in the singular regions. For the linear case, the end result can be interpreted as the overlap of elementary wavetrains. In the nonlinear case, it is found that a sufficiently strong nonlinearity prevents this overlap. Transition zones with a predictable structure replace the singular regions.
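For orientation, the first-order modulation equations for a slowly varying linear wavetrain with dispersion relation ω = W(k) take the standard Whitham form below; this is quoted only to fix notation and is not the specific system analyzed in the thesis.

```latex
% Standard first-order (Whitham) modulation equations for a slowly varying linear
% wavetrain with dispersion relation omega = W(k); quoted only to fix notation.
\begin{align}
  \frac{\partial k}{\partial t} + \frac{\partial}{\partial x}\,W(k) &= 0
    && \text{(conservation of waves)}\\
  \frac{\partial A}{\partial t} + \frac{\partial}{\partial x}\bigl(W'(k)\,A\bigr) &= 0
    && \text{(transport of wave action/energy at the group velocity } W'(k)\text{)}
\end{align}
```

Multivalued modulations of the kind described above arise where the characteristics dx/dt = W'(k) of the first equation cross.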
For linear problems, exact solutions are found by Fourier integrals and other superposition techniques. These show the true behavior when breaking modulations are predicted.
A numerical study is made for the anharmonic lattice to assess the nonlinear theory. This confirms the theoretical predictions of nonlinear group velocities, group splitting, and wavetrain instability, as well as higher order effects in the singular regions.
Abstract:
A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasireversibility method of Lattès and Lions, are appraised from this point of view.
In the former method, Tikhonov provides a useful means for incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This is effected and the general "T-method" is the result.
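As a minimal illustration of the idea (not the thesis's algorithm), the sketch below solves an ill-posed linear system by Tikhonov regularization; swapping the identity penalty operator L for a different constraint operator gives the flavor of the more general T-method.

```python
# Minimal sketch of Tikhonov regularization for an ill-posed linear system
# A x ~ b: minimize ||A x - b||^2 + lam * ||L x||^2.  With L = I this is the
# classical Tikhonov method; other choices of L correspond to other constraints.
# Illustrative only.
import numpy as np

def tikhonov(A, b, lam, L=None):
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Ill-conditioned toy problem: a narrow smoothing kernel on [0, 1]
n = 50
s = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2) / n   # nearly rank-deficient
x_true = np.sin(2 * np.pi * s)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
x_reg = tikhonov(A, b, lam=1e-6)
print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```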
A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.
The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.
Abstract:
The general theory of Whitham for slowly-varying non-linear wavetrains is extended to the case where some of the defining partial differential equations cannot be put into conservation form. Typical examples are considered in plasma dynamics and water waves in which the lack of a conservation form is due to dissipation; an additional non-conservative element, the presence of an external force, is treated for the plasma dynamics example. Certain numerical solutions of the water waves problem (the Korteweg-de Vries equation with dissipation) are considered and compared with perturbation expansions about the linearized solution; it is found that the first correction term in the perturbation expansion is an excellent qualitative indicator of the deviation of the dissipative decay rate from linearity.
A method for deriving necessary and sufficient conditions for the existence of a general uniform wavetrain solution is presented and illustrated in the plasma dynamics problem. Peaking of the plasma wave is demonstrated, and it is shown that the necessary and sufficient existence conditions are essentially equivalent to the statement that no wave may have an amplitude larger than the peaked wave.
A new type of fully non-linear stability criterion is developed for the plasma uniform wavetrain. It is shown explicitly that this wavetrain is stable in the near-linear limit. The nature of this new type of stability is discussed.
Steady shock solutions are also considered. By a quite general method, it is demonstrated that the plasma equations studied here have no steady shock solutions whatsoever. A special type of steady shock is proposed, in which a uniform wavetrain joins across a jump discontinuity to a constant state. Such shocks may indeed exist for the Korteweg-de Vries equation, but are barred from the plasma problem because entropy would decrease across the shock front.
Finally, a way of including the Landau damping mechanism in the plasma equations is given. It involves putting in a dissipation term of convolution integral form, and parallels a similar approach of Whitham in water wave theory. An important application of this would be towards resolving long-standing difficulties about the "collisionless" shock.
Abstract:
Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.
The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.
When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.
For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by doing the appropriate asymptotic expansion with the Laplace variable as parameter.
Abstract:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε)D_x y(x) - A(x,ε)y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)
By ≡ L(ε)y(0) + R(ε)y(1) = g(ε),   ε → 0^+,   (1b)
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
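A minimal numerical illustration of this limit, using a scalar problem rather than the general system (1), is sketched below; the interior solution approaches the reduced problem as ε → 0^+ while a boundary layer forms at x = 0.

```python
# Illustrative sketch (not the general system (1)): the scalar problem
#   eps*y'' + y' = 1,  y(0) = 0,  y(1) = 2,
# written as a first-order system with Omega(eps) = diag(1, eps).  Away from the
# boundary layer at x = 0, the solution approaches the reduced problem y' = 1
# with y(1) = 2, i.e. y(x) = x + 1.
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-2

def rhs(x, y):
    # y[0] = y, y[1] = y'
    return np.vstack([y[1], (1.0 - y[1]) / eps])

def bc(ya, yb):
    return np.array([ya[0] - 0.0, yb[0] - 2.0])

x = np.linspace(0.0, 1.0, 2001)
y0 = np.vstack([x + 1.0, np.ones_like(x)])
sol = solve_bvp(rhs, bc, x, y0, max_nodes=100000)

# Compare with the reduced solution y = x + 1 on a closed subinterval of (0, 1)
mask = (sol.x >= 0.1) & (sol.x <= 0.9)
print(np.max(np.abs(sol.y[0][mask] - (sol.x[mask] + 1.0))))   # small for small eps
```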
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b) it is shown that for the stepsize much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
Furthermore, the existence of a similarity transformation which block diagonalizes a matrix is presented as well as exponential bounds on certain fundamental solution matrices associated with the problem (1).
Abstract:
The problem of the finite-amplitude folding of an isolated, linearly viscous layer under compression and imbedded in a medium of lower viscosity is treated theoretically by using a variational method to derive finite difference equations which are solved on a digital computer. The problem depends on a single physical parameter, the ratio of the fold wavelength, L, to the "dominant wavelength" of the infinitesimal-amplitude treatment, L_d. Therefore, the natural range of physical parameters is covered by the computation of three folds, with L/L_d = 0, 1, and 4.6, up to a maximum dip of 90°.
Significant differences in fold shape are found among the three folds; folds with higher L/L_d have sharper crests. Folds with L/L_d = 0 and L/L_d = 1 become fan folds at high amplitude. A description of the shape in terms of a harmonic analysis of inclination as a function of arc length shows this systematic variation with L/L_d and is relatively insensitive to the initial shape of the layer. This method of shape description is proposed as a convenient way of measuring the shape of natural folds.
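A sketch of this kind of shape description is given below; the digitized profile and the sine-series normalization are placeholders rather than the thesis's exact convention.

```python
# Minimal sketch of the shape descriptor mentioned above: expand the layer's
# inclination as a function of arc length in a Fourier sine series and report
# the leading harmonic amplitudes.  Profile and normalization are placeholders.
import numpy as np

def inclination_harmonics(xs, ys, n_harmonics=3):
    dx, dy = np.diff(xs), np.diff(ys)
    ds = np.hypot(dx, dy)                      # arc-length increments
    s = np.concatenate([[0.0], np.cumsum(ds)])
    theta = np.arctan2(dy, dx)                 # inclination of each segment
    s_mid = 0.5 * (s[:-1] + s[1:])
    L = s[-1]
    # Coefficients of theta(s) ~ sum_n b_n sin(n*pi*s/L) over the digitized span
    return [
        2.0 / L * np.sum(theta * np.sin(n * np.pi * s_mid / L) * ds)
        for n in range(1, n_harmonics + 1)
    ]

# Placeholder sinusoidal fold profile
x = np.linspace(0.0, 1.0, 400)
y = 0.2 * np.sin(2 * np.pi * x)
print(inclination_harmonics(x, y))
```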
The infinitesimal-amplitude treatment does not predict fold-shape development satisfactorily beyond a limb-dip of 5°. A proposed extension of the treatment continues the wavelength-selection mechanism of the infinitesimal treatment up to a limb-dip of 15°; after this stage the wavelength-selection mechanism no longer operates and fold shape is mainly determined by L/L_d and limb-dip.
Strain-rates and finite strains in the medium are calculated for all stages of the L/L_d = 1 and L/L_d = 4.6 folds. At limb-dips greater than 45°, the planes of maximum flattening and maximum flattening rate show the characteristic orientation and fanning of axial-plane cleavage.
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
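For concreteness, a small computation of this kind (not taken from the thesis) is sketched below: the Smith normal form of the edge-versus-triangle incidence matrix of K_4, computed with SymPy.

```python
# Minimal sketch: the Smith normal form of a small incidence matrix, here the
# edge-versus-triangle (K_3 subgraph) incidence matrix of K_4, computed over the
# integers with SymPy.  Purely illustrative of the diagonal-form computations
# discussed above.
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

vertices = range(4)
edges = list(combinations(vertices, 2))
triangles = list(combinations(vertices, 3))

# Rows indexed by edges, columns by triangles; entry 1 if the edge lies in the triangle
N = Matrix([[1 if set(e) <= set(t) else 0 for t in triangles] for e in edges])
print(smith_normal_form(N, domain=ZZ))
```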
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
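A generic scheme of this type, assumed here only for illustration, keeps each entry with probability proportional to its magnitude and rescales the survivors so the sparse matrix is an unbiased estimate of the original.

```python
# Minimal sketch of nonuniform randomized sparsification: keep entry (i, j)
# independently with probability p_ij proportional to |A_ij|, and rescale kept
# entries by 1/p_ij so that E[S] = A.  A generic scheme of the type analyzed
# above, not the thesis's exact one.
import numpy as np

def sparsify(A, target_nnz, rng=np.random.default_rng(0)):
    p = np.minimum(1.0, target_nnz * np.abs(A) / np.abs(A).sum())
    keep = rng.random(A.shape) < p
    S = np.zeros_like(A)
    S[keep] = A[keep] / p[keep]          # rescaling keeps the estimate unbiased
    return S

A = np.random.default_rng(1).standard_normal((200, 200))
S = sparsify(A, target_nnz=5000)
print((S != 0).sum(), np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2))
```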
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
The last class of algorithms considered comprises SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
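One standard member of this family is the Nyström-type sketch built from a random column sample; the sketch below is illustrative and not the thesis's specific construction.

```python
# Minimal sketch of an SPSD "sketch": a Nystrom-type approximation built from a
# random column sample, A ~ C W^+ C^T with C = A S and W = S^T A S, where S
# selects a column subset.  Illustrative only.
import numpy as np

def nystrom(A, k, rng=np.random.default_rng(0)):
    n = A.shape[0]
    cols = rng.choice(n, size=k, replace=False)   # column-sampling sketch
    C = A[:, cols]
    W = A[np.ix_(cols, cols)]
    return C @ np.linalg.pinv(W) @ C.T

# SPSD test matrix with rapidly decaying spectrum
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((300, 300)))
A = U @ np.diag(0.9 ** np.arange(300)) @ U.T
A_hat = nystrom(A, k=40)
print(np.linalg.norm(A - A_hat, 2))
```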
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.
The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, satisfies 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection onto (kernel P_M)⊥ with respect to the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both the domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to ensure that (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken with respect to the dual norms.
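A small numerical illustration (not from the thesis) of these objects for a non-Euclidean but strictly convex norm, the ℓ4 norm on R², is given below; consistent with the dim M = dim H - 1 result, the computed projection function is linear, and its operator norm can be estimated directly.

```python
# Numerical illustration (not from the thesis): the projection function P_M for
# the l4 norm on R^2, with M = span{(1, 1)}.  Since dim M = dim H - 1, the result
# above says P_M is linear; the sketch checks additivity/homogeneity numerically
# and crudely estimates the operator norm ||P_M||.
import numpy as np
from scipy.optimize import minimize_scalar

def l4(v):
    return (np.abs(v) ** 4).sum() ** 0.25

m = np.array([1.0, 1.0])

def P(x):
    """Closest point to x in span{m} with respect to the l4 norm."""
    t = minimize_scalar(lambda t: l4(x - t * m)).x
    return t * m

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.allclose(P(x + y), P(x) + P(y), atol=1e-6),     # additivity
      np.allclose(P(2.5 * x), 2.5 * P(x), atol=1e-6))    # homogeneity

# Crude estimate of ||P_M|| over the l4 unit sphere
angles = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
units = np.column_stack([np.cos(angles), np.sin(angles)])
units /= np.array([l4(u) for u in units])[:, None]
print(max(l4(P(u)) for u in units))
```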
In all results, the real and complex cases are handled in a completely parallel fashion.
Abstract:
An experimental study was made of the interaction of phosphate rock and aqueous inorganic orthophosphate, calcium, and hydroxyl ions. A model of the reaction was developed by observing electron diffraction patterns in conjunction with concentration changes of chemical components. The model was applied in explaining the performance of batch suspensions of powdered phosphate rock and packed columns of granular phosphate rock. In both cases the reaction consisted initially of a rapid nucleation phase that occurred in a time period of minutes. In the batch system the calcium phosphate nuclei then ripened into larger micro-crystals of hydroxyapatite, which eventually became indistinguishable from the original phosphate rock surface. During column operation the high supersaturation ratio that existed after the rapid nucleation phase resulted in a layer of small nuclei that covered a slowly growing hydroxyapatite crystal.
The column steady-state rate constant was found to increase with increasing temperature, pH, and fluoride concentration, and to decrease with increasing concentrations of magnesium sulfate, ammonium chloride, and bicarbonate ion.
An engineering feasibility study indicated that, based on economic considerations, nucleation of apatite on phosphate rock ore has a potential use as a wastewater phosphate removal treatment process.
Abstract:
Isotope dilution thorium and uranium analyses of the Harleton chondrite show a larger scatter than previously observed in equilibrated ordinary chondrites (EOC). The linear correlation of Th/U with 1/U in Harleton (and all EOC data) is produced by variation in the chlorapatite to merrillite mixing ratio. Apatite variations control the U concentrations. Phosphorus variations are compensated by inverse variations in U to preserve the Th/U vs. 1/U correlation. Because the Th/U variations reflect phosphate sampling, a weighted Th/U average should converge to an improved solar system Th/U. We obtain Th/U = 3.53 (1σ mean = 0.10), significantly lower and more precise than previous estimates.
To test whether apatite also produces Th/U variation in CI and CM chondrites, we performed P analyses on the solutions from leaching experiments of Orgueil and Murchison meteorites.
A linear Th/U vs. 1/U correlation in CI can be explained by redistribution of hexavalent U by aqueous fluids into carbonates and sulfates.
Unlike CI and EOC, whole rock Th/U variations in CMs are mostly due to Th variations. A Th/U vs. 1/U linear correlation suggested by previous data for CMs is not real. We distinguish 4 components responsible for the whole rock Th/U variations: (1) P- and actinide-depleted matrix containing small amounts of U-rich carbonate/sulfate phases (similar to CIs); (2) CAIs and (3) chondrules, which are major reservoirs for actinides; and (4) an easily leachable phase of high Th/U, likely carbonate produced by CAI alteration. Phosphates play a minor role as actinide and P carrier phases in CM chondrites.
Using our Th/U and minimum galactic ages from halo globular clusters, we calculate relative supernova production rates for ²³²Th/²³⁸U and ²³⁵U/²³⁸U for different models of r-process nucleosynthesis. For uniform galactic production, the beginning of the r-process nucleosynthesis must be less than 13 Gyr. Exponentially decreasing production is also consistent with a 13 Gyr age, but very slow decay times are required (less than 35 Gyr), approaching the uniform production. The 15 Gyr Galaxy requires either a fast initial production growth (infall time constant less than 0.5 Gyr) followed by a very slow decrease (decay time constant greater than 100 Gyr), or the fastest possible decrease (≈8 Gyr) preceded by slow infall (≈7.5 Gyr).
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. The result is a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, both theoretically and experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
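The sketch below is a heavily simplified, noiseless illustration of EC2-style greedy test selection over hypotheses grouped into theory classes; the actual BROAD procedure additionally handles noisy responses, parametric theories, and accelerated greedy evaluation.

```python
# Simplified, noiseless illustration of EC2-style greedy test selection (not the
# BROAD implementation): hypotheses are grouped into theory classes, edges join
# hypotheses in different classes with weight prior_i * prior_j, and each test is
# scored by the total weight of edges it is guaranteed to cut (its two endpoints
# predict different outcomes).  Observing an outcome eliminates inconsistent
# hypotheses and the posterior is renormalized.
import numpy as np

def ec2_pick_test(predictions, prior, classes):
    n_hyp, n_tests = predictions.shape
    best_test, best_score = None, -1.0
    for t in range(n_tests):
        score = sum(
            prior[i] * prior[j]
            for i in range(n_hyp) for j in range(i + 1, n_hyp)
            if classes[i] != classes[j] and predictions[i, t] != predictions[j, t]
        )
        if score > best_score:
            best_test, best_score = t, score
    return best_test

def update(prior, predictions, test, outcome):
    post = prior * (predictions[:, test] == outcome)
    return post / post.sum()

# Toy setup: 6 hypotheses (2 per "theory"), 8 binary-choice tests, hypothesis 3 true
rng = np.random.default_rng(0)
predictions = rng.integers(0, 2, size=(6, 8))
classes = np.array([0, 0, 1, 1, 2, 2])
prior = np.full(6, 1.0 / 6.0)
truth = 3
for _ in range(4):
    t = ec2_pick_test(predictions, prior, classes)
    prior = update(prior, predictions, t, predictions[truth, t])
print(prior, "-> posterior over theory classes:",
      [prior[classes == c].sum() for c in range(3)])
```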
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
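For reference, the discount functions being compared have the familiar forms sketched below; the parameter values are placeholders, not estimates from the experiment.

```python
# Minimal sketch of the discount functions being compared (parameter values and
# names are placeholders, not estimates from the experiment): exponential,
# hyperbolic, quasi-hyperbolic "present bias", and generalized hyperbolic.
import numpy as np

t = np.linspace(0.0, 52.0, 6)                      # delays, e.g. in weeks

exponential = lambda t, r=0.05: np.exp(-r * t)
hyperbolic = lambda t, k=0.10: 1.0 / (1.0 + k * t)
quasi_hyper = lambda t, beta=0.7, delta=0.99: np.where(t == 0, 1.0, beta * delta ** t)
gen_hyper = lambda t, a=0.5, b=2.0: (1.0 + a * t) ** (-b / a)

for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyper), ("generalized hyperbolic", gen_hyper)]:
    print(f"{name:>22}: {np.round(f(t), 3)}")
```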
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone can explain. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
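A toy version of such a model (with placeholder parameters and reference prices, not the estimated retailer model) illustrates the predicted substitution pattern:

```python
# Minimal sketch of a discrete-choice (logit) model with a reference-dependent,
# loss-averse price term, in the spirit of the analysis above.  All parameter
# values and the reference-price construction are placeholders.
import numpy as np

def loss_averse_utility(price, ref_price, alpha=1.0, eta=0.5, lam=2.5):
    gain = np.maximum(ref_price - price, 0.0)   # paying less than the reference
    loss = np.maximum(price - ref_price, 0.0)   # paying more than the reference
    return -alpha * price + eta * gain - eta * lam * loss

def logit_shares(prices, ref_prices):
    v = loss_averse_utility(prices, ref_prices)
    expv = np.exp(v - v.max())
    return expv / expv.sum()

prices = np.array([12.0, 11.0])                       # item and a close substitute, regular prices
print(logit_shares(prices, np.array([12.0, 11.0])))   # reference = regular prices
print(logit_shares(prices, np.array([10.0, 11.0])))   # item's reference lowered by a recent discount
```

In the second case the item's return to its regular price is coded as a loss, so the substitute's predicted share rises, which is the excess substitution effect described above.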
In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Despite years of research on low-angle detachments, much about them remains enigmatic. This thesis addresses some of the uncertainty regarding two particular detachments, the Mormon Peak detachment in Nevada and the Heart Mountain detachment in Wyoming and Montana.
Constraints on the geometry and kinematics of emplacement of the Mormon Peak detachment are provided by detailed geologic mapping of the Meadow Valley Mountains, along with an analysis of structural data within the allochthon in the Mormon Mountains. Identifiable structures well suited to constrain the kinematics of the detachment include a newly mapped, Sevier-age monoclinal flexure in the hanging wall of the detachment. This flexure, including the syncline at its base and the anticline at its top, can be readily matched to the base and top of the frontal Sevier thrust ramp, which is exposed in the footwall of the detachment to the east in the Mormon Mountains and Tule Springs Hills. The ~12 km of offset of these structural markers precludes the radial sliding hypothesis for emplacement of the allochthon.
The role of fluids in slip along faults is a widely investigated topic, but the use of carbonate clumped-isotope thermometry to investigate these fluids is new. Fault rocks from within ~1 m of the Mormon Peak detachment, including veins, breccias, gouges, and host rocks, were analyzed for carbon, oxygen, and clumped-isotope compositions. The data indicate that much of the carbonate breccia and gouge material along the detachment is comminuted host rock, as expected. Measurements in vein material indicate that the fluid system is dominated by meteoric water, whose temperature indicates circulation to substantial depths (c. 4 km) in the upper crust near the fault zone.
Slip along the subhorizontal Heart Mountain detachment is particularly enigmatic, and many different mechanisms for failure have been proposed, predominantly involving catastrophic failure. Textural evidence of multiple slip events is abundant and includes multiple brecciation events and cross-cutting clastic dikes. Footwall deformation is observed in numerous exposures of the detachment. Stylolitic surfaces and alteration textures within and around “banded grains”, previously interpreted as an indicator of high-temperature fluidization along the fault, suggest their formation instead via low-temperature dissolution and alteration processes. There is abundant textural evidence of the significant role of fluids along the detachment via pressure solution. The process of pressure solution creep may be responsible for enabling multiple slip events on the low-angle detachment, via a local rotation of the stress field.
Clumped-isotope thermometry of fault rocks associated with the Heart Mountain detachment indicates that despite its location on the flanks of a volcano that was active during slip, the majority of carbonate along the Heart Mountain detachment does not record significant heating above ambient temperatures (c. 40-70°C). Instead, cold meteoric fluids infiltrated the detachment breccia, and carbonate precipitated under ambient temperatures controlled by structural depth. Locally, fault gouge does preserve hot temperatures (>200°C), as is observed in both the Mormon Peak detachment and Heart Mountain detachment areas. Samples with very hot temperatures attributable to frictional shear heating are present but rare. They appear to be best preserved in hanging wall structures related to the detachment, rather than along the main detachment.
Evidence is presented for the prevalence of relatively cold, meteoric fluids along both shallow crustal detachments studied, and for protracted histories of slip along both detachments. Frictional heating is evident from both areas, but is a minor component of the preserved fault rock record. Pressure solution is evident, and might play a role in initiating slip on the Heart Mountain fault, and possibly other low-angle detachments.
Abstract:
High-resolution orbital and in situ observations acquired of the Martian surface during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, application of stratigraphic and sedimentologic statistical methods, and use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Imaging Science Experiment (HiRISE) camera on board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.