957 results for Solid Model


Relevance: 100.00%

Publisher:

Abstract:

This paper describes an algorithm for constructing the solid model (boundary representation) from point data measured from the faces of the object. The point data is assumed to be clustered for each face. This algorithm does not require any computer model of the part to exist and does not require any topological information about the part to be input by the user. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using the boolean union operation) to realise the solid model. Results of the implementation are presented.
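A minimal sketch of the convex-construction and containment-check steps, assuming a SciPy-based implementation; the clustering of points into faces, the grouping of clusters into candidates, and the final boolean union are outside this sketch and are not taken from the paper:

    import numpy as np
    from scipy.spatial import ConvexHull

    def candidate_hull(face_clusters):
        """Convex hull of the points from a chosen group of face clusters."""
        return ConvexHull(np.vstack(face_clusters))

    def satisfies_containment(hull, all_points, tol=1e-6):
        """One reading of the containment check: accept the candidate only if
        no measured point lies strictly inside it (on the boundary or outside
        is acceptable)."""
        # hull.equations rows are [nx, ny, nz, d] with outward normals, so a
        # point is strictly inside when n.x + d < 0 for every facet.
        vals = all_points @ hull.equations[:, :3].T + hull.equations[:, 3]
        strictly_inside = np.all(vals < -tol, axis=1)
        return not np.any(strictly_inside)

Candidates passing such a check would then be merged with boolean union operations in a solid modeller to realise the boundary representation.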

Relevance: 100.00%

Publisher:

Abstract:

This work presents a methodology to reconstruct 3D biological organs from image sequences or other scan data using readily available free software, with the final goal of using the organs (3D solids) for finite element analysis. The methodology deals with issues such as segmentation, conversion to polygonal surface meshes, and finally conversion of these meshes to 3D solids. The user is able to control the detail or the level of complexity of the solid constructed. The methodology is illustrated using the 3D reconstruction of a porcine liver as an example. Finally, the reconstructed liver is imported into the commercial software ANSYS and, together with a cyst inside the liver, a nonlinear analysis is performed. The results confirm that the methodology can be used for obtaining the 3D geometry of biological organs. The results also demonstrate that the geometry obtained by following this methodology can be used for nonlinear finite element analysis of organs. The methodology (or the procedure) would be of use in surgery planning and surgery simulation, since both make extensive use of finite elements for numerical simulations, and it is preferable that these simulations are carried out on patient-specific organ geometries. Commercial software that can reconstruct 3D biological organs from scanned image sequences is expensive, whereas the present methodology relies on freely available tools.
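The first two stages of such a pipeline (segmentation and surface-mesh extraction) can be sketched with freely available Python tools; the image volume, the threshold, and the later mesh-to-solid conversion are assumptions here, not the exact tool chain used in the paper:

    import numpy as np
    from skimage import measure  # scikit-image

    def organ_surface_mesh(volume, threshold):
        """Threshold-segment a scalar image volume and extract a triangulated
        surface with marching cubes."""
        mask = (volume > threshold).astype(float)   # crude segmentation
        verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
        return verts, faces

The resulting surface mesh could then be exported (for example as STL), cleaned, and converted to a 3D solid in a CAD or finite element preprocessor such as ANSYS.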

Relevance: 100.00%

Publisher:

Abstract:

The Load-Unload Response Ratio (LURR) method is an intermediate-term earthquake prediction approach that has shown considerable promise. It involves calculating the ratio of a specified energy release measure during loading and unloading, where the loading and unloading periods are determined from the earth-tide-induced perturbations in the Coulomb failure stress on optimally oriented faults. High LURR values are frequently observed a few months to years prior to large earthquakes. These signals may have a similar origin to the accelerating seismic moment release (AMR) observed prior to many large earthquakes, or may be due to critical sensitivity of the crust when a large earthquake is imminent. As a first step towards studying the underlying physical mechanism for the LURR observations, numerical studies are conducted using the particle-based lattice solid model (LSM) to determine whether LURR observations can be reproduced. The model is initialized as a heterogeneous 2-D block made up of random-sized particles bonded by elastic-brittle links. The system is subjected to uniaxial compression from rigid driving plates on the upper and lower edges of the model. Experiments are conducted using both strain and stress control to load the plates. A sinusoidal stress perturbation is added to the gradual compressional loading to simulate loading and unloading cycles, and LURR is calculated. The results reproduce signals similar to those observed in earthquake prediction practice, with a high LURR value followed by a sudden drop prior to macroscopic failure of the sample. The results suggest that LURR provides a good predictor for catastrophic failure in elastic-brittle systems and motivate further research to study the underlying physical mechanisms and statistical properties of high LURR values. The results provide encouragement for earthquake prediction research and the use of advanced simulation models to probe the physics of earthquakes.
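The ratio itself can be sketched as follows; the catalog layout, the Coulomb stress-rate series, and the choice of Benioff strain (energy to the power 1/2) as the release measure are assumptions made for illustration:

    import numpy as np

    def lurr(event_times, event_energies, stress_times, stress_rate, m=0.5):
        """Load-Unload Response Ratio: the energy-release measure summed over
        loading periods (positive tidal Coulomb stress rate) divided by the
        same measure summed over unloading periods."""
        rate_at_events = np.interp(event_times, stress_times, stress_rate)
        measure = event_energies ** m            # m = 0.5 gives Benioff strain
        loading = measure[rate_at_events > 0].sum()
        unloading = measure[rate_at_events <= 0].sum()
        return loading / unloading if unloading > 0 else np.inf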

Relevance: 100.00%

Publisher:

Abstract:

A set of cylindrical porous titanium test samples was produced using the three-dimensional printing and sintering method, with samples sintered at 900 °C, 1000 °C, 1100 °C, 1200 °C or 1300 °C. Following compression testing, it was apparent that the stress-strain curves were similar in shape to the curves that represent cellular solids, despite a relative density twice as high as what is considered the threshold for defining a cellular solid. As the final sintering temperature increased, the compressive behaviour developed from elastic-brittle to elastic-plastic; while Young's modulus remained fairly constant in the region of 1.5 GPa, there was a corresponding increase in the 0.2% proof stress of approximately 40-80 MPa. The cellular solid model consists of two equations that predict Young's modulus and yield or proof stress. By fitting to experimental data and considering the porous morphology, appropriate changes to the geometry constants allow the current models to be modified to predict more accurately the behaviour of porous materials with higher relative densities (lower porosity).
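Assuming the two equations are of the familiar Gibson-Ashby form, fitting the geometry constants to measured data could look like the sketch below; the dense-titanium properties and the data arrays are illustrative placeholders only, not values from the paper:

    import numpy as np
    from scipy.optimize import curve_fit

    E_SOLID = 110e9          # assumed Young's modulus of dense titanium (Pa)
    SIGMA_YS_SOLID = 480e6   # assumed yield stress of dense titanium (Pa)

    def modulus_model(rho_rel, C1):
        # E*/E_s = C1 * (rho*/rho_s)^2
        return C1 * E_SOLID * rho_rel ** 2

    def strength_model(rho_rel, C2):
        # sigma*/sigma_ys = C2 * (rho*/rho_s)^1.5
        return C2 * SIGMA_YS_SOLID * rho_rel ** 1.5

    # placeholder measurements for illustration, not data from the paper
    rho_rel = np.array([0.55, 0.60, 0.65, 0.70])
    E_meas = np.array([1.4e9, 1.5e9, 1.5e9, 1.6e9])
    sigma_meas = np.array([40e6, 55e6, 70e6, 80e6])

    (C1,), _ = curve_fit(modulus_model, rho_rel, E_meas)
    (C2,), _ = curve_fit(strength_model, rho_rel, sigma_meas)

Adjusting the geometry constants (and, if warranted, the exponents) against such data is the kind of modification the abstract describes for materials with higher relative densities.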

Relevance: 100.00%

Publisher:

Abstract:

We obtain a diagonal solution of the dual reflection equation for the elliptic A_{n-1}^{(1)} solid-on-solid model. The isomorphism between the solutions of the reflection equation and its dual is studied. (C) 2004 American Institute of Physics.

Relevance: 100.00%

Publisher:

Abstract:

Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of the previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case, 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uniaxial compression experiments show that before the normalized time of catastrophic failure, the ensemble-average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. One, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The other, AT/k, controls the height of the probability density function (Pdf) of modeled seismicity; as this parameter increases, the Pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong-triggering cases where LURR cannot be calculated due to poor data in unloading cycles, larger events are more likely than smaller ones to occur in periods of high LURR, supporting the LURR hypothesis.
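The two diagnostics described here (the phase shift of the peak seismicity rate relative to the stress peak, and the Pdf of modeled seismicity over the tidal cycle) could be computed along the following lines; the event times and the perturbation period are assumptions for illustration:

    import numpy as np

    def event_phase(event_times, period):
        """Phase in [0, 2*pi) of each event within the sinusoidal loading cycle,
        taking phase 0 at the peak of the perturbation stress."""
        return 2 * np.pi * (event_times % period) / period

    def seismicity_pdf(phases, nbins=36):
        """Normalised histogram (Pdf) of event phases over the tidal cycle."""
        pdf, edges = np.histogram(phases, bins=nbins, range=(0, 2 * np.pi), density=True)
        return pdf, edges

    def phase_shift(phases, nbins=36):
        """Offset of the peak seismicity rate from the stress peak (phase 0)."""
        pdf, edges = seismicity_pdf(phases, nbins)
        centres = 0.5 * (edges[:-1] + edges[1:])
        return centres[np.argmax(pdf)]

A sharper, narrower Pdf corresponds to the strong-triggering regime described above, and the location of its peak gives the phase shift.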

Relevance: 100.00%

Publisher:

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that, in order to make the next step towards more realistic experiments, it will be necessary to use models containing significantly more particles than current models, and thus a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
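The quoted figure follows the usual definition of parallel efficiency, speed-up divided by processor count; the numbers in the usage comment below are illustrative, not benchmark results:

    def parallel_efficiency(t_serial, t_parallel, num_procs):
        """Parallel efficiency E = (T_1 / T_p) / p."""
        return (t_serial / t_parallel) / num_procs

    # e.g. parallel_efficiency(6400.0, 100.0, 80) -> 0.8, i.e. 80% efficiency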

Relevance: 100.00%

Publisher:

Abstract:

In this study, the 3-D Lattice Solid Model (LSMearth, or LSM) was extended by introducing particle-scale rotation. In the new model, each 3-D particle has six degrees of freedom: three for translational motion and three for orientation. Six kinds of relative motion are permitted between two neighbouring particles, and six interactions are transferred, i.e., a radial force, two shearing forces, a twisting torque and two bending torques. By using quaternion algebra, the relative rotation between two particles is decomposed into two sequence-independent rotations, such that all interactions due to the relative motions between interacting rigid bodies can be uniquely determined. After incorporating this mechanism and introducing bond breaking under torsion and bending into the LSM, several tests of 2-D and 3-D rock failure under uniaxial compression are carried out. Compared with simulations without the single-particle rotational mechanism, the new simulation results match experimental results of rock fracture more closely and are hence encouraging. Since more parameters are introduced, an approach for choosing the new parameters is presented.
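One standard, sequence-independent way to split a relative rotation is a swing-twist decomposition about the bond axis; the sketch below illustrates that idea and is an assumption, not necessarily the exact construction used in the paper:

    import numpy as np

    def quat_conj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def quat_mul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def swing_twist(q_rel, bond_axis):
        """Split the unit quaternion q_rel into a twist about bond_axis (driving
        the twisting torque) and the remaining swing (driving bending), with
        q_rel = swing * twist."""
        axis = bond_axis / np.linalg.norm(bond_axis)
        proj = np.dot(q_rel[1:], axis) * axis        # vector part along the bond
        twist = np.array([q_rel[0], *proj])
        twist /= np.linalg.norm(twist)
        swing = quat_mul(q_rel, quat_conj(twist))
        return swing, twist

    # relative orientation of particle b as seen from particle a:
    #   q_rel = quat_mul(quat_conj(q_a), q_b)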

Relevance: 100.00%

Publisher:

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high-performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
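Power-law time-to-failure fits of this kind are commonly written E(t) = A + B(t_f - t)^m; a fitting sketch under that assumption (the time and cumulative-energy arrays would come from the simulations):

    import numpy as np
    from scipy.optimize import curve_fit

    def time_to_failure(t, A, B, t_f, m):
        dt = np.clip(t_f - t, 1e-9, None)   # guard against t beyond the fitted failure time
        return A + B * dt ** m

    def fit_ttf(t, cum_energy):
        """Fit the cumulative energy release; B < 0 and 0 < m < 1 correspond to
        the accelerating release described above."""
        p0 = [cum_energy[-1], -1.0, 1.05 * t[-1], 0.3]   # rough initial guess
        params, _ = curve_fit(time_to_failure, t, cum_energy, p0=p0, maxfev=20000)
        return params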

Relevance: 100.00%

Publisher:

Abstract:

The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamic frictional behaviour). This is achieved using a half-time-step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and to improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
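A minimal sketch of a velocity-Verlet step with a mid-step friction solve, in the spirit of the half-time-step scheme described above; conservative_forces() and solve_static_friction() stand in for model-specific routines and are assumptions, not the paper's actual interfaces:

    import numpy as np

    def verlet_step(x, v, m, dt, conservative_forces, solve_static_friction):
        """One time step: half-step the velocities, solve the frictional forces
        and states of touching particle pairs at the half step (where the
        static/dynamic transition is decided), then complete the step."""
        a = conservative_forces(x) / m
        v_half = v + 0.5 * dt * a
        f_fric = solve_static_friction(x, v_half)
        x_new = x + dt * v_half
        a_new = (conservative_forces(x_new) + f_fric) / m
        v_new = v_half + 0.5 * dt * a_new
        return x_new, v_new

In practice the time step increment would also be adapted to the particle velocities, and the total energy monitored as a consistency check, as the abstract describes.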