46 results for Fracture Criteria
Abstract:
The complete fracture behaviour of a ductile double-edge-notched tension (DENT) specimen is analysed with an approximate model, which is then used to discuss the essential work of fracture (EWF) concept. The model results are compared with experimental results for the aluminium alloy 6082-O. The restrictions on ligament size for valid application of the EWF method are discussed with the aid of the model. The model is also used to suggest an improved method of obtaining the cohesive stress-displacement relationship for the fracture process zone (FPZ).
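For reference, the EWF method discussed above is conventionally based on partitioning the total specific work of fracture of a DENT ligament into an essential (FPZ) term and a plastic-zone term that scales with ligament length; in common EWF notation (which may differ from the paper's),

    w_f = W_f/(\ell t) = w_e + \beta w_p \ell

where \ell is the ligament length, t the sheet thickness, w_e the essential work of fracture per unit area, and \beta w_p the contribution of the outer plastic zone. Extrapolating measured w_f values to \ell -> 0 yields w_e, which is why the valid ligament-size range matters.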
Abstract:
The perceived wisdom about thin sheet fracture is that (i) the crack propagates under mixed mode I and III, giving rise to a slant through-thickness fracture profile, and (ii) the fracture toughness remains constant at low thickness and eventually decreases with increasing thickness. In the present study, fracture tests performed on thin DENT plates of various thicknesses made of stainless steel, mild steel, 6082-O and NS4 aluminium alloys, brass, bronze, lead, and zinc systematically exhibit (i) mode I “bath-tub”, i.e. “cup & cup”, fracture profiles with limited shear lips and significant localized necking (more than 50% thickness reduction), and (ii) a fracture toughness that increases linearly with increasing thickness (in the range of 0.5–5 mm). The different contributions to the work expended during fracture of these materials are separated based on dimensional considerations. The paper emphasises the two parts of the work spent in the fracture process zone: the necking work and the “fracture” work. Experiments show that, as expected, the work of necking per unit area increases linearly with thickness. For a typical thickness of 1 mm, the fracture and necking contributions have the same order of magnitude in most of the metals investigated. A model is developed in order to independently evaluate the work of necking, which successfully predicts the experimental values. Furthermore, it enables the fracture energy to be derived from tests performed with only one specimen thickness. In a second modelling step, the work of fracture is computed using an enhanced void growth model valid in the quasi-plane-stress regime. The fracture energy varies linearly with the yield stress and void spacing and is a strong function of the hardening exponent and initial void volume fraction. The coupling of the two models allows the relative contributions of necking versus fracture to be quantified with respect to (i) the two length scales involved in this problem, i.e. the void spacing and the plate thickness, and (ii) the flow properties of the material. Each term can dominate, depending on the properties of the material, which explains the different behaviours reported in the literature regarding thin plate fracture toughness and its dependence on thickness.
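A compact way to state the decomposition emphasised above, using generic symbols rather than the paper's notation, is

    \Gamma_{tot}(t) = \Gamma_0 + w_{neck}(t), with w_{neck} \propto \sigma_0 t

where \Gamma_0 is the thickness-independent "fracture" work per unit area spent in the fracture process zone, w_{neck} the work of localized necking per unit area, \sigma_0 a representative flow stress, and t the plate thickness; the linear growth of w_{neck} with t reproduces the observed linear increase of toughness with thickness.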
Abstract:
Investigation of the fracture mode for hard and soft wheat endosperm was aimed at gaining a better understanding of the fragmentation process. Fracture mechanical characterization was based on the three-point bending test, which enables stable crack propagation to take place in small rectangular pieces of wheat endosperm. The crack length can be measured in situ using an optical microscope with illumination from the side or the back of the specimen. Two new techniques were developed and used to estimate the fracture toughness of wheat endosperm: a geometric approach and a compliance method. The geometric approach gave average fracture toughness values of 53.10 and 27.0 J m⁻² for hard and soft endosperm, respectively. Fracture toughness estimated using the compliance method gave values of 49.9 and 29.7 J m⁻² for hard and soft endosperm, respectively. Compressive properties of the endosperm along three mutually perpendicular axes revealed that the hard and soft endosperms are isotropic composites. Scanning electron microscopy (SEM) observation of the fracture surfaces and the energy-time curves of loading-unloading cycles revealed that there was plastic flow during crack propagation for both the hard and soft endosperms, and confirmed that the fracture mode is significantly related to the adhesion level between the starch granules and the protein matrix.
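The compliance method referred to above is conventionally based on relating the energy release rate to the change of specimen compliance with crack length; in its generic linear-elastic form (not necessarily the exact expression used in the study),

    G = (P^2 / 2B) dC/da, with C = u/P

where P is the applied load, u the load-point displacement, B the specimen thickness, and a the crack length measured in situ.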
Abstract:
A series of three-point bend tests using single-edge-notched test pieces of pure polycrystalline ice has been performed at three different temperatures (-20°C, -30°C and -40°C). The displacement rate was varied from 1 mm/min to 100 mm/min, producing crack-tip strain rates from about 10⁻³ to 10⁻¹ s⁻¹. The results show that (a) the fracture toughness of pure polycrystalline ice given by the critical stress intensity factor (K_IC) is much lower than that measured from the J-integral under identical conditions; (b) from the determination of K_IC, the fracture toughness of pure polycrystalline ice decreases with increasing strain rate, and there is a good power-law relationship between them; (c) from the measurement of the J-integral, a different tendency appeared: when the crack-tip strain rate exceeds a critical value of 6 × 10⁻³ s⁻¹, the fracture toughness is almost constant, but when the crack-tip strain rate is less than this value, the fracture toughness increases with decreasing crack-tip strain rate. Re-examination of the mechanisms of rate-dependent fracture toughness of pure polycrystalline ice shows that the effect of strain rate is related not only to the blunting of crack tips due to plasticity, creep and stress relaxation, but also to the nucleation and growth of microcracks in the specimen.
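For context, the two toughness measures compared above are commonly evaluated for a single-edge-notched bend specimen with expressions of the generic form (the geometry function f(a/W) and plastic factor \eta depend on the test standard, and the notation here is not necessarily the paper's):

    K_I = (P S / (B W^{3/2})) f(a/W)
    J = K_I^2 (1 - \nu^2)/E + \eta A_{pl} / (B (W - a))

where P is the load, S the support span, B and W the specimen thickness and width, a the crack length, and A_{pl} the plastic area under the load-displacement curve. The plastic term in J is what allows the J-based toughness to exceed the value inferred from K_IC when the ice deforms inelastically at low strain rates.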
Abstract:
Research and informed debate reveal that institutional practices in relation to research degree examining vary considerably across the sector. Within a context of accountability and quality assurance/total quality management, the range and specificity of criteria used to judge doctoral work is of particular relevance. First, a review of the literature indicates that, although interest in and concern about the process is burgeoning, there is little published empirical research from which practitioners can draw guidance. The second part of the paper reviews the available research, drawing conclusions about issues that seem to apply at a general level across disciplines and institutions. Lest the variation be an artefact of discipline differences, the third part of the paper focuses on a within-discipline study. Criteria expected/predicted by supervisors are compared and contrasted with those anticipated and experienced by candidates, and with those implemented and considered important by examiners. The results are disturbing.
Abstract:
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Both the stochastic and the deterministic stability criteria for macrostates rely on macro-level contexts, which makes them sensitive to differences between macro-levels.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tool available in modern interpretation software packages often employs artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited, in conjunction with other geological information, in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
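Purely as an illustrative sketch of how evidence from an attribute-based ANN, a raw-amplitude ANN, and prior geological knowledge might be fused with simple fuzzy rules (the function, score names, and rules below are hypothetical and are not the paper's FIS):

def fuzzy_pick_confidence(attr_score, raw_score, fault_likelihood):
    """Toy fuzzy combination of three evidence sources for one candidate pick.

    attr_score: confidence from an ANN fed with seismic attributes (assumed in 0..1)
    raw_score: confidence from an ANN fed with raw amplitude samples (assumed in 0..1)
    fault_likelihood: prior geological evidence of a fault at this trace (assumed in 0..1)

    Rule 1: if attribute and raw evidence agree, accept strongly (min as fuzzy AND).
    Rule 2: if either source is confident and no fault is expected, accept moderately.
    The rule outputs are aggregated with max (fuzzy OR) into a single confidence.
    """
    rule1 = min(attr_score, raw_score)
    rule2 = 0.5 * min(max(attr_score, raw_score), 1.0 - fault_likelihood)
    return max(rule1, rule2)

print(fuzzy_pick_confidence(0.8, 0.7, 0.1))  # consistent evidence -> high confidence
print(fuzzy_pick_confidence(0.8, 0.2, 0.6))  # conflicting evidence near a fault -> low confidence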
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross-validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of an LOO criterion, either the mean square of the LOO errors or the LOO misclassification rate, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems, respectively. The proposed backward elimination procedures exploit an orthogonalization procedure to enforce orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
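Analytic LOO evaluation of this kind rests on the classical identity for linear-in-parameters models, in which each LOO residual follows from the ordinary residual and the leverage, so no data splitting is required. A minimal numpy sketch of that identity (the general hat-matrix form, not the paper's recursive orthogonal formulation) follows.

import numpy as np

def loo_mse(Phi, y):
    """Analytic leave-one-out MSE for a linear-in-parameters model y ~ Phi @ theta.

    Uses the classical identity e_loo_i = e_i / (1 - h_ii), where h_ii are the
    leverages (diagonal of the hat matrix), so no explicit data splitting is needed.
    """
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares fit
    e = y - Phi @ theta                               # ordinary residuals
    G = np.linalg.solve(Phi.T @ Phi, Phi.T)           # (Phi^T Phi)^{-1} Phi^T
    h = np.einsum('ij,ji->i', Phi, G)                 # diagonal of the hat matrix
    e_loo = e / (1.0 - h)                             # PRESS (LOO) residuals
    return np.mean(e_loo ** 2)

# Example: rank candidate model sizes by their analytic LOO error
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)
for order in (2, 5, 12):
    Phi = np.vander(x, order + 1, increasing=True)    # polynomial regressors
    print(order, loo_mse(Phi, y))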
Abstract:
An analysis of Stochastic Diffusion Search (SDS), a novel and efficient optimisation and search algorithm, is presented, yielding a derivation of the minimum acceptable match that results in stable convergence within a noisy search space. The applicability of SDS to a given problem can therefore be assessed.
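For readers unfamiliar with SDS, a minimal sketch of its test and diffusion phases on a toy string-search instance is given below (illustrative only; the agent count, iteration budget, and problem are assumptions, not the configuration analysed in the paper).

import random

def sds_string_search(text, pattern, n_agents=100, iterations=50, seed=0):
    """Minimal Stochastic Diffusion Search: locate `pattern` inside `text`.

    Each agent holds a candidate offset (hypothesis). In the test phase it checks
    one randomly chosen character of the pattern against the text (a partial
    evaluation); in the diffusion phase inactive agents copy the hypothesis of a
    randomly polled agent if that agent is active, otherwise they re-seed randomly.
    """
    rng = random.Random(seed)
    max_offset = len(text) - len(pattern)
    hyps = [rng.randint(0, max_offset) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iterations):
        # Test phase: partial evaluation of each hypothesis
        for i, h in enumerate(hyps):
            j = rng.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])
        # Diffusion phase: inactive agents poll a random agent
        for i in range(n_agents):
            if not active[i]:
                k = rng.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else rng.randint(0, max_offset)
    # The largest cluster of agents indicates the best-supported offset
    return max(set(hyps), key=hyps.count)

print(sds_string_search("xxxyyhellozzqq", "hello"))  # expected offset: 5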