957 results for Quasi-3D mechanics model


Relevance: 30.00%

Abstract:

In the absence of an external frame of reference (i.e., in background-independent theories such as general relativity), physical degrees of freedom must describe relations between systems. Using a simple model, we investigate how such a relational quantum theory naturally arises by promoting reference systems to the status of dynamical entities. Our goal is twofold. First, we demonstrate using elementary quantum theory how any quantum mechanical experiment admits a purely relational description at a fundamental level. Second, we describe how the original non-relational theory approximately emerges from the fully relational theory when reference systems become semi-classical. Our technique is motivated by a Bayesian approach to quantum mechanics, and relies on the noiseless subsystem method of quantum information science used to protect quantum states against undesired noise. The relational theory naturally predicts a fundamental decoherence mechanism, so an arrow of time emerges from a time-symmetric theory. Moreover, our model circumvents the problem of the collapse of the wave packet, as the probability interpretation is only ever applied to diagonal density operators. Finally, the physical states of the relational theory can be described in terms of spin networks, introduced by Penrose as a combinatorial description of geometry and widely studied in the loop formulation of quantum gravity. Thus, our simple bottom-up approach (starting from the semiclassical limit to derive the fully relational quantum theory) may offer interesting insights on the low-energy limit of quantum gravity.
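
As a toy illustration (not the paper's relational model), the point that the probability interpretation is only ever applied to diagonal density operators can be seen by fully dephasing a qubit: dropping the off-diagonal terms of a pure superposition leaves a classical probability distribution, with no collapse postulate invoked.

```python
import numpy as np

def dephase(rho):
    """Keep only the diagonal of a density matrix (full decoherence)."""
    return np.diag(np.diag(rho))

# |+> = (|0> + |1>) / sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus)          # pure-state density matrix
rho_dec = dephase(rho)              # decohered: diagonal only

probs = np.real(np.diag(rho_dec))   # Born-rule probabilities
print(probs)                        # [0.5 0.5]
```

The trace (total probability) is preserved by the dephasing map, which is what lets the diagonal be read directly as a probability distribution.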

Relevance: 30.00%

Abstract:

The classical strength profile of continents(1,2) is derived from a quasi-static view of their rheological response to stress, one that does not consider dynamic interactions between brittle and ductile layers. Such interactions result in complexities of failure in the brittle-ductile transition and the need to couple energy to understand strain localization. Here we investigate continental deformation by solving the fully coupled energy, momentum and continuum equations. We show that this approach produces unexpected feedback processes, leading to a significantly weaker dynamic strength evolution. In our model, stress localization focused on the brittle-ductile transition leads to the spontaneous development of mid-crustal detachment faults immediately above the strongest crustal layer. We also find that an additional decoupling layer forms between the lower crust and mantle. Our results explain the development of decoupling layers that are observed to accommodate hundreds of kilometres of horizontal motion during continental deformation.

Relevance: 30.00%

Abstract:

Strain localisation is a widespread phenomenon often observed in shear and compressive loading of geomaterials, for example in fault gouge. It is believed that the main mechanisms of strain localisation are strain softening and the mismatch between dilatancy and pressure sensitivity. Observations show that gouge deformation is accompanied by considerable rotations of grains. In our previous work we proposed, as a model for gouge material, a continuum description for an assembly of particles of equal radius in which the particle rotation is treated as an independent degree of freedom. We showed that there exist critical values of the model parameters for which the displacement gradient exhibits a pronounced localisation at the mid-surface layers of the fault, even in the absence of inelasticity. Here, we generalise the model to the case of finite deformations characteristic of gouge deformation. We derive objective constitutive relationships relating the Jaumann rates of stress and moment stress to the relative strain and curvature rates, respectively. The model suggests that the pattern of localisation remains the same as in the linear case. However, the presence of the Jaumann terms leads to the emergence of non-zero normal stresses acting along and perpendicular to the shear layer (with zero hydrostatic pressure), localised along the mid-line of the gouge; these stress components are absent in the linear model of simple shear. These additional normal stresses, albeit small, cause a change in the direction in which the maximal normal stresses act and in which en-echelon fracturing is formed.
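
The effect of the Jaumann terms can be seen in a much simpler setting than the authors' micropolar model: a hypoelastic material with a Jaumann stress rate under simple shear. The spin terms W·σ − σ·W generate equal-and-opposite normal stresses (zero hydrostatic pressure), echoing the normal stresses described above. This is a minimal sketch under that hypoelastic assumption, with arbitrary material parameters.

```python
import numpy as np

mu = 1.0                            # shear modulus (arbitrary units)
gdot, dt, steps = 1.0, 1e-4, 5000   # shear rate, time step, step count

D = 0.5 * gdot * np.array([[0.0, 1.0], [1.0, 0.0]])   # stretching tensor
W = 0.5 * gdot * np.array([[0.0, 1.0], [-1.0, 0.0]])  # spin tensor

sigma = np.zeros((2, 2))
for _ in range(steps):
    # Jaumann rate: sigma_dot = 2*mu*D + W.sigma - sigma.W
    sigma += dt * (2.0 * mu * D + W @ sigma - sigma @ W)

gamma = gdot * dt * steps   # total shear = 0.5
# Closed form for this law: sigma_xy = mu*sin(gamma), sigma_xx = mu*(1 - cos(gamma))
print(sigma[0, 0], sigma[1, 1], sigma[0, 1])
```

Note that sigma_xx = −sigma_yy at every step, so the hydrostatic pressure stays zero while the principal stress directions rotate, which is the qualitative effect the abstract attributes to the Jaumann terms.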

Relevance: 30.00%

Abstract:

This paper presents the creation of 3D statistical shape models of the knee bones and their use to embed information into a segmentation system for MRIs of the knee. We propose utilising the strong spatial relationship between the cartilages and the bones in the knee by embedding this information into the created models. This information can then be used to automate the initialisation of segmentation algorithms for the cartilages. The approach used to automatically generate the 3D statistical shape models of the bones is based on the point distribution model optimisation framework of Davies. Our implementation of this scheme uses a parameterized surface extraction algorithm as the basis for the optimisation scheme that automatically creates the 3D statistical shape models. The current approach is illustrated by generating 3D statistical shape models of the patella, tibia and femur from a segmented database of the knee. The use of these models to embed spatial relationship information to aid in the automation of segmentation algorithms for the cartilages is then illustrated.
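
The core of a point distribution model can be sketched in a few lines: corresponding landmark points from a set of training shapes are stacked into vectors, and PCA captures the main modes of shape variation. The toy data here (2D circles with a random scale mode) is purely illustrative, not knee-bone surfaces, and the single-mode model is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 20, 8

# Synthetic training set: a circle with a random overall scale per shape.
angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
shapes = np.empty((n_shapes, 2 * n_points))
for i in range(n_shapes):
    r = 1.0 + 0.1 * rng.standard_normal()          # one mode: overall scale
    shapes[i] = np.concatenate([r * np.cos(angles), r * np.sin(angles)])

mean_shape = shapes.mean(axis=0)
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                  # largest variance first

# New shapes are generated as: x = mean + P @ b  (b = mode weights)
P = eigvecs[:, order[:1]]                          # keep the dominant mode
b = np.array([2.0 * np.sqrt(eigvals[order[0]])])   # +2 s.d. along mode 1
new_shape = mean_shape + P @ b
print(new_shape.shape)  # (16,)
```

In a full pipeline along the lines described above, the landmark correspondence itself is what the Davies optimisation framework establishes; here correspondence is given by construction.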

Relevance: 30.00%

Abstract:

This paper presents an automated segmentation approach for MR images of the knee bones. Bone segmentation is the first stage of a segmentation system for the knee, primarily aimed at the automated segmentation of the cartilages. The segmentation is performed using 3D active shape models (ASM), which are initialized using an affine registration to an atlas. The 3D ASMs of the bones are created automatically using a point distribution model optimization scheme. The accuracy and robustness of the segmentation approach was experimentally validated using an MR database of fat-suppressed spoiled gradient-recalled images.

Relevance: 30.00%

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
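
The macroscopics the formalism tracks are concrete, cheap-to-compute population statistics. As a toy sketch (not the reviewed formalism itself), here are the first two fitness cumulants and the mean pairwise Hamming distance for a random binary population under an additive, onemax-style fitness, the class of genotype-to-phenotype maps the abstract calls most amenable to analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_size, genome_len = 50, 32
pop = rng.integers(0, 2, size=(pop_size, genome_len))

fitness = pop.sum(axis=1)   # additive (onemax) phenotype
k1 = fitness.mean()         # first cumulant: mean fitness
k2 = fitness.var()          # second cumulant: fitness variance

# Mean Hamming distance over all ordered pairs of distinct individuals.
diffs = (pop[:, None, :] != pop[None, :, :]).sum(axis=2)
pairs = pop_size * (pop_size - 1)
mean_hamming = diffs.sum() / pairs

print(k1, k2, mean_hamming)
```

For a uniformly random population both the mean fitness and the mean Hamming distance are close to genome_len / 2 = 16; tracking how selection and mutation drive these quantities away from their random-population values is exactly what the averaged-trajectory predictions describe.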

Relevance: 30.00%

Abstract:

A framework that connects computational mechanics and molecular dynamics has been developed and described. As the key parts of the framework, the problem of symbolising molecular trajectory and the associated interrelation between microscopic phase space variables and macroscopic observables of the molecular system are considered. Following Shalizi and Moore, it is shown that causal states, the constituent parts of the main construct of computational mechanics, the ε-machine, define areas of the phase space that are optimal in the sense of transferring information from the micro-variables to the macro-observables. We have demonstrated that, based on the decay of their Poincaré return times, these areas can be divided into two classes that characterise the separation of the phase space into resonant and chaotic areas. The first class is characterised by predominantly short time returns, typical of quasi-periodic or periodic trajectories. This class includes a countable number of areas corresponding to resonances. The second class includes trajectories with chaotic behaviour characterised by the exponential decay of return times in accordance with the Poincaré theorem.
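
The two classes of return-time behaviour can be illustrated with textbook one-dimensional maps (this is a toy sketch, not the paper's ε-machine construction): a quasi-periodic circle rotation revisits a small cell at only a few distinct return times (the three-gap theorem), while a chaotic map such as the logistic map realises many, with a roughly exponential spread.

```python
import numpy as np

def return_times(orbit, lo, hi):
    """Times between successive visits of the orbit to the cell [lo, hi)."""
    visits = np.flatnonzero((orbit >= lo) & (orbit < hi))
    return np.diff(visits)

n = 100_000
cell = (0.0, 0.01)

# (a) quasi-periodic: rotation of the circle by the golden mean
alpha = (np.sqrt(5.0) - 1.0) / 2.0
rot = (alpha * np.arange(n)) % 1.0
rt_rot = return_times(rot, *cell)

# (b) chaotic: logistic map x -> 4x(1 - x)
x, orbit = 0.123, np.empty(n)
for i in range(n):
    orbit[i] = x
    x = 4.0 * x * (1.0 - x)
rt_cha = return_times(orbit, *cell)

# The rotation realises only a handful of distinct return times;
# the chaotic orbit realises many.
print(len(set(rt_rot)), len(set(rt_cha)))
```

A histogram of rt_cha decays roughly exponentially, consistent with the Poincaré-theorem behaviour the abstract assigns to the chaotic class.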

Relevance: 30.00%

Abstract:

The work describes the programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260 °C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.

Relevance: 30.00%

Abstract:

Recent studies have stressed the importance of ‘open innovation’ as a means of enhancing innovation performance. The essence of the open innovation model is to take advantage of external as well as internal knowledge sources in developing and commercialising innovation, so avoiding an excessively narrow internal focus in a key area of corporate activity. Although the external aspect of open innovation is often stressed, another key aspect involves maximising the flow of ideas and knowledge from different sources within the firm, for example through knowledge sharing via the use of cross-functional teams. A fully open innovation approach would therefore combine both aspects, i.e. cross-functional teams with boundary-spanning knowledge linkages. This suggests that there should be complementarities between the use of cross-functional teams and boundary-spanning knowledge linkages, i.e. the returns to implementing open innovation in one innovation activity should be greater if open innovation is already in place in another innovation activity. However, our findings – based on a large sample of UK and German manufacturing plants – do not support this view. Our results suggest that in practice the benefits envisaged in the open innovation model are not generally achievable by the majority of plants, and that instead the adoption of open innovation across the whole innovation process is likely to reduce innovation outputs. Our results provide some guidance on the type of activities where the adoption of a market-based governance structure such as open innovation may be most valuable. This is likely to be in innovation activities where search is deterministic, activities are separable, and where the required level of knowledge sharing is correspondingly moderate – in other words those activities which are more routinized. For this type of activity market-based governance mechanisms (i.e. open innovation) may well be more efficient than hierarchical governance structures.
For other innovation activities where outcomes are more uncertain and unpredictable and the risks of knowledge exchange hazards are greater, quasi-market based governance structures such as open innovation are likely to be subject to rapidly diminishing returns in terms of innovation outputs.

Relevance: 30.00%

Abstract:

This research initiates a study of the mechanics of four-roll plate bending and provides a methodology to investigate the process experimentally. To carry out the research a suitable model bender was designed and constructed. The model bender was comprehensively instrumented with ten load cells, three torquemeters and a tachometer. A rudimentary analysis of the four-roll pre-bending mode considered the three critical bending operations. The analysis also gave an assessment of the model bender capacity for the design stage. The analysis indicated that an increase in the coefficient of friction in the contact region of the pinch rolls and the plate would reduce the resultant pinch force required to bend a plate to a particular bend radius. The mechanisms involved in the four-roll plate bending process were investigated and a mathematical model evolved to determine the mechanics of four-roll thin plate bending. A theoretical and experimental investigation was conducted for the bending of HP30 aluminium plates in both single and multipass bending modes. The study indicated that the multipass bending mechanics of the process varied according to the number of bending passes executed and the step decrement of the anticipated finished bend radius in any two successive passes (i.e. the bending route). Experimental results for single-pass bending indicated that the rollers normally exert a higher bending load for steady-continuous bending with the pre-inactive side roll operative. Of the pre-bending mode and the steady-continuous bending mode with the pre-active side roll operative, the former exerted the higher loads. The single-pass results also indicated that the force on the side roll, the torque and the power steadily increased as the anticipated bend radius decreased.
Theoretical predictions for the plate internal resistance to accomplish finished bend radii of between 2500 mm and 500 mm for multipass bending of HP30 aluminium plates suggested that there was a certain bending route which would effectively optimise the bender capacity.

Relevance: 30.00%

Abstract:

A review of published literature was made to establish the fundamental aspects of rolling and allow an experimental programme to be planned. Simulated hot rolling tests, using pure lead as a model material, were performed on a laboratory mill to obtain data on load and torque when rolling square-section stock. Billet metallurgy and consolidation of representative defects were studied when modelling the rolling of continuously cast square stock, with a view to determining optimal reduction schedules that would result in a product having properties to the high level found in fully wrought billets manufactured from large ingots. It is difficult to characterize sufficiently the complexity of the porous central region in a continuously cast billet for accurate modelling. However, holes drilled into a lead billet prior to rolling were found to be a good means of assessing central void consolidation in the laboratory. A rolling schedule of 30% (1.429:1) per pass to a total of 60% (2.5:1) will give a homogeneous, fully recrystallized product. To achieve central consolidation, a total reduction of approximately 70% (3.333:1) is necessary. At the reduction necessary to achieve consolidation, full recrystallization is assured. A theoretical analysis using a simplified variational principle with experimentally derived spread data has been developed for a homogeneous material. An upper bound analysis of a single, centrally situated void has been shown to give good predictions of void closure with reduction, and of the reduction required for void closure, for initial void area fractions of 0.45%. A limited number of tests in the works has indicated compliance with the results for void closure obtained in the laboratory.
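
The paired figures quoted above (30% → 1.429:1, 60% → 2.5:1, 70% → 3.333:1) are two ways of stating the same reduction: a fractional reduction r in cross-section corresponds to an area ratio of 1 / (1 − r). A quick check:

```python
def area_ratio(reduction):
    """Area ratio (initial:final) for a given fractional reduction."""
    return 1.0 / (1.0 - reduction)

for r in (0.30, 0.60, 0.70):
    print(f"{r:.0%} -> {area_ratio(r):.3f}:1")
# 30% -> 1.429:1, 60% -> 2.500:1, 70% -> 3.333:1, matching the text
```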

Relevance: 30.00%

Abstract:

The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present, very little is known about these mechanisms and it is therefore instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution particle centroid forces can be calculated. Then, dividing particle forces by their respective masses gives particle centroid accelerations, from which velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for this method, allows data input and output, and provides testing control. By using this numerical apparatus, tests have been carried out on disc assemblies and many new and interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests.
These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
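
The force → acceleration → velocity → displacement cycle described above can be sketched in one dimension under heavy simplification (two discs, a linear normal spring, explicit time stepping). This is a toy version of the distinct-element cycle, not the 'Ball' program itself, and all parameter values are made up.

```python
import numpy as np

k = 1.0e4       # linear contact stiffness (N/m)
m = 1.0         # particle mass (kg)
radius = 0.5    # disc radius (m)
dt = 1.0e-4     # time step, well below the contact oscillation period

x = np.array([0.0, 0.95])     # centres: slight initial overlap
v = np.array([0.0, 0.0])

for _ in range(2000):
    overlap = 2.0 * radius - (x[1] - x[0])
    f = k * max(overlap, 0.0)           # repulsive normal force on contact
    acc = np.array([-f, f]) / m         # equal and opposite, F = m*a
    v += acc * dt                       # integrate velocities
    x += v * dt                         # integrate positions

print(x[1] - x[0] >= 2.0 * radius)      # particles have pushed apart: True
```

The semi-implicit update (velocities first, then positions) keeps the contact oscillation stable; the time step must stay well below the contact period 2π√(m/k) for the explicit scheme to behave.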

Relevance: 30.00%

Abstract:

Particulate solids are complex redundant systems which consist of discrete particles. The interactions between the particles are complex and have been the subject of many theoretical and experimental investigations. Investigations of particulate material have been restricted by the lack of quantitative information on the mechanisms occurring within an assembly. Laboratory experimentation is limited, as information on the internal behaviour can only be inferred from measurements on the assembly boundary or through the use of intrusive measuring devices. In addition, comparisons between test data are uncertain due to the difficulty of reproducing exact replicas of physical systems. Nevertheless, theoretical and technological advances require more detailed material information. Numerical simulation, however, affords access to information on every particle, and hence on the micro-mechanical behaviour within an assembly, and can replicate desired systems. To use a computer program to numerically simulate material behaviour accurately, it is necessary to incorporate realistic interaction laws. This research programme used the finite difference simulation program 'BALL', developed by Cundall (1971), which employed linear spring force-displacement laws; it was thus necessary to incorporate more realistic interaction laws. Therefore, this research programme was primarily concerned with the implementation of the normal force-displacement law of Hertz (1882) and the tangential force-displacement laws of Mindlin and Deresiewicz (1953). Within this thesis the contact mechanics theories employed in the program are developed, and the adaptations which were necessary to incorporate these laws are detailed. Verification of the new contact force-displacement laws was achieved by simulating a quasi-static oblique contact and a single-particle oblique impact.
Applications of the program to the simulation of large assemblies of particles are given, and the problems encountered in undertaking quasi-static shear tests, along with the results from two successful shear tests, are described.
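
The Hertz (1882) normal law mentioned above replaces the linear spring with a non-linear force-displacement relation, F = (4/3)·E*·√R_eff·δ^(3/2) for two elastic spheres. A minimal sketch, with illustrative material values (roughly steel) that are not from the thesis:

```python
import numpy as np

def hertz_normal_force(delta, R1, R2, E, nu):
    """Hertz normal force for overlap delta between spheres of one material."""
    R_eff = (R1 * R2) / (R1 + R2)           # effective radius
    E_star = E / (2.0 * (1.0 - nu ** 2))    # effective modulus, same material
    return (4.0 / 3.0) * E_star * np.sqrt(R_eff) * delta ** 1.5

F1 = hertz_normal_force(1e-6, 0.01, 0.01, 200e9, 0.3)
F2 = hertz_normal_force(2e-6, 0.01, 0.01, 200e9, 0.3)
# Non-linear stiffening: doubling the overlap raises the force by 2**1.5
print(F2 / F1)   # ~2.828
```

The tangential Mindlin-Deresiewicz laws are load-history dependent and considerably more involved, which is why their incorporation required the adaptations the thesis details.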

Relevance: 30.00%

Abstract:

This thesis presents an analysis of the stability of complex distribution networks. We present a stability analysis against cascading failures. We propose a spin (binary) model based on concepts of statistical mechanics. We test macroscopic properties of distribution networks with respect to various topological structures and distributions of microparameters. The equilibrium properties of the systems are obtained in a statistical mechanics framework by application of the replica method. We demonstrate the validity of our approach by comparing it with Monte Carlo simulations. We analyse the network properties in terms of phase diagrams and find both qualitative and quantitative dependence of the network properties on the network structure and macroparameters. The structure of the phase diagrams points to the existence of phase transitions and the presence of stable and metastable states in the system. We also present an analysis of robustness against overloading in distribution networks. We propose a model that describes a distribution process in a network. The model incorporates the currents between any connected hubs in the network, local constraints in the form of Kirchhoff's law and a global optimization criterion. The flow of currents in the system is driven by the consumption. We study two principal types of model: infinite and finite link capacity. The key properties are the distributions of currents in the system. We again use a statistical mechanics framework to describe the currents in the system in terms of macroscopic parameters. In order to obtain observable properties we apply the replica method. We are able to assess the criticality of the level of demand with respect to the available resources and the architecture of the network. Furthermore, the parts of the system where critical currents may emerge can be identified. This, in turn, provides us with a characteristic description of the spread of overloading in such systems.
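
The thesis validates its replica-method results against Monte Carlo simulation; the kind of simulation involved can be sketched as Metropolis sampling of ±1 spins on a small random network, with a ferromagnetic coupling favouring agreement between neighbours. This is a generic toy, not the thesis's model: the graph, coupling J and inverse temperature beta are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, J, beta = 40, 0.2, 1.0, 2.0   # nodes, edge prob., coupling, inverse temp.

adj = rng.random((n, n)) < p
adj = np.triu(adj, 1)
adj = adj | adj.T                    # symmetric adjacency, no self-loops

spins = rng.choice([-1, 1], size=n)
for _ in range(20_000):
    i = rng.integers(n)
    # Energy change of flipping spin i: dE = 2*J*s_i*sum_j A_ij*s_j
    dE = 2.0 * J * spins[i] * (adj[i] * spins).sum()
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        spins[i] = -spins[i]

m = abs(spins.mean())                # magnetisation: degree of global agreement
print(m)
```

Sweeping beta (or the network parameters) and watching where m jumps is the simulation-side analogue of locating the phase transitions the replica analysis predicts.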

Relevance: 30.00%

Abstract:

This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real-time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture highly detailed deforming 3D surfaces at high frame rates and without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the light setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate the diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
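
The diffuse part of the reflectance model, the quantity the first (RANSAC) step estimates from inlier points, reduces to classical Lambertian photometric stereo: with three known, non-coplanar light directions L and observed intensities I = L·(albedo·n), the scaled normal is recovered by solving the linear system. A minimal sketch with made-up light directions and a made-up surface normal, not the paper's calibration data:

```python
import numpy as np

L = np.array([[0.0, 0.0, 1.0],           # three non-coplanar light directions
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

albedo = 0.8
n_true = np.array([0.1, 0.2, 1.0])
n_true = n_true / np.linalg.norm(n_true)

I = L @ (albedo * n_true)                # Lambertian image intensities
g = np.linalg.solve(L, I)                # scaled normal: albedo * n
n_est = g / np.linalg.norm(g)            # unit normal
albedo_est = np.linalg.norm(g)           # recovered albedo
print(np.allclose(n_est, n_true), albedo_est)   # True 0.8...
```

Points where this linear model fits poorly are exactly the non-Lambertian outliers handed to the second, non-linear fitting step.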