940 results for Molecular Simulation
Abstract:
In recent years, the advent of new tools for musculoskeletal simulation has increased the potential for significantly improving the ergonomic design process and the ergonomic assessment of designs. In this paper we investigate the use of one such tool, ‘The AnyBody Modeling System’, applied to solve a one-parameter, yet complex, ergonomic design problem. The aim of this paper is to investigate the potential of computer-aided musculoskeletal modelling in the ergonomic design process, in the same way as CAE technology has been applied to engineering design.
Abstract:
When compared with other arthroplasties, Total Ankle Joint Replacement (TAR) is much less successful. Attempts to remedy this situation by modifying the implant design, for example by making its form more akin to the original ankle anatomy, have largely met with failure. One of the major obstacles is a gap in current knowledge relating to ankle joint force. Specifically, this is the lack of reliable data quantifying the forces and moments acting on the ankle in both healthy and diseased joints. The limited data that does exist is thought to be inaccurate [1] and is based upon simplistic, discrete two-dimensional techniques that are now outdated.
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) across a wide spectrum of areas, ranging from non-destructive testing for the detection of material defects to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in various modes has made the determination of source location difficult. In order to use the acoustic emission technique for accurate identification of source location, an understanding of the propagation of AE signals at various locations in a plate structure is essential. Furthermore, an understanding of wave propagation can also assist in sensor placement for optimum detection of AE signals. In practice, as the AE signals radiate from the source they propagate as stress waves. Unless the type of stress wave is known, it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location in a steel plate as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
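Since locating a source from sensor arrival times and a known propagation velocity, once the wave mode has been identified, is a standard step in AE analysis, a minimal time-of-arrival sketch is given below. The sensor layout, wave speed and SciPy-based least-squares solver are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: 2D acoustic-emission source location on a plate by fitting
# sensor arrival times to a single assumed wave speed (e.g. the group velocity
# of the mode identified in the FEA simulation). All values are illustrative.
import numpy as np
from scipy.optimize import least_squares

c = 5000.0                                   # assumed wave speed, m/s
sensors = np.array([[0.0, 0.0],
                    [0.5, 0.0],
                    [0.5, 0.5],
                    [0.0, 0.5]])             # sensor positions on the plate, m

def residuals(params, sensors, t_arrival, c):
    """Misfit between measured arrival times and those predicted for a trial
    source position (xs, ys) and emission time t0."""
    xs, ys, t0 = params
    dist = np.hypot(sensors[:, 0] - xs, sensors[:, 1] - ys)
    return t_arrival - (t0 + dist / c)

# Synthetic arrival times for a source at (0.3, 0.2) m emitted at t0 = 0.
true_source = np.array([0.3, 0.2])
t_measured = np.hypot(*(sensors - true_source).T) / c

fit = least_squares(residuals, x0=[0.25, 0.25, 0.0], args=(sensors, t_measured, c))
print("estimated source position (m):", fit.x[:2])
```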
Abstract:
The subtalar joint has been presumed to account for most of the pathologic motion in the foot and ankle, but research has shown that motion at other foot joints is greater than traditionally expected. Although recent research demonstrates the complexity of the kinematic variables in the foot and ankle, it still fails to expand our knowledge of the role of the musculotendinous structures in the biomechanics of the foot and ankle and how this is affected by in-shoe orthoses. The aim of this study was to simulate the effect of in-shoe foot orthoses by manipulating the ground reaction force (GRF) components and centre of pressure (CoP) to demonstrate the resultant effect on muscle force in selected muscles during both the rearfoot loading response and the stance phase of the gait cycle. We found that any medial wedge increases ankle joint load during the gait cycle, while a lateral wedge decreases the joint load during the stance phase.
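As a rough illustration of how manipulating the GRF and CoP translates into joint loading, the sketch below recomputes a frontal-plane ankle moment after shifting the CoP medially or laterally, loosely mimicking a wedge. The joint-centre location, GRF vector and shift magnitudes are invented values, and this is not the musculoskeletal model used in the study.

```python
# Hedged sketch: effect of a medio-lateral centre-of-pressure (CoP) shift on the
# frontal-plane ankle moment, as a crude stand-in for a wedge orthosis.
# Joint-centre position, GRF vector and shift magnitudes are invented values.
import numpy as np

ankle_centre = np.array([0.00, 0.05, 0.08])   # m (x: medial+, y: anterior+, z: up)
grf = np.array([0.0, 0.0, 800.0])             # N, vertical GRF near mid-stance

def frontal_plane_moment(cop_xy):
    """Moment about the anterior-posterior (y) axis at the ankle for a CoP location."""
    cop = np.array([cop_xy[0], cop_xy[1], 0.0])
    lever = cop - ankle_centre
    return np.cross(lever, grf)[1]            # N*m

baseline = frontal_plane_moment((0.00, 0.05))
medial_shift = frontal_plane_moment((0.01, 0.05))    # CoP moved 1 cm medially
lateral_shift = frontal_plane_moment((-0.01, 0.05))  # CoP moved 1 cm laterally
print("baseline, medial, lateral (N*m):", baseline, medial_shift, lateral_shift)
```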
Abstract:
Digital human modelling (DHM) has today matured from research into industrial application. In the automotive domain, DHM has become a commonly used tool in virtual prototyping and human-centred product design. While this generation of DHM supports the ergonomic evaluation of new vehicle designs during the early design stages of a product, by modelling anthropometry, posture and motion or predicting discomfort, the future of DHM will be dominated by CAE methods, realistic 3D design, and musculoskeletal and soft tissue modelling down to the micro-scale of molecular activity within single muscle fibres. As a driving force for DHM development, the automotive industry has traditionally used human models in the manufacturing sector (production ergonomics, e.g. assembly) and the engineering sector (product ergonomics, e.g. safety, packaging). In product ergonomics applications, DHMs share many common characteristics, creating a unique subset of DHM. These models are optimised for a seated posture, interface to a vehicle seat through standardised methods and provide linkages to vehicle controls. As a tool, they need to interface with other analytic instruments and integrate into complex CAD/CAE environments. Important aspects of current DHM research are functional analysis, model integration and task simulation. Digital (virtual, analytic) prototypes or digital mock-ups (DMU) provide expanded support for testing and verification and consider task-dependent performance and motion. Beyond rigid body mechanics, soft tissue modelling is evolving to become standard in future DHM. When addressing advanced issues beyond the physical domain of anthropometry and biomechanics, the modelling of human behaviours and skills is also integrated into DHM. The latest developments include a more comprehensive approach through implementing perceptual, cognitive and performance models, representing human behaviour on a non-physiologic level. Through the integration of algorithms from the artificial intelligence domain, a vision of the virtual human is emerging.
Abstract:
The impact of weather on traffic and its behaviour is not well studied in the literature, primarily due to a lack of integrated traffic and weather data. Weather can significantly affect traffic, and traffic management measures developed for fine weather might not be optimal for adverse weather. Simulation is an efficient tool for analyzing traffic management measures even before their actual implementation. Therefore, in order to develop and test traffic management measures for adverse weather conditions, we need to first analyze the effect of weather on fundamental traffic parameters and thereafter calibrate the simulation model parameters in order to simulate traffic under adverse weather conditions. In this paper we first analyze the impact of weather on motorway traffic flow and drivers’ behaviour using traffic data from Swiss motorways and weather data from MeteoSuisse. Thereafter, we develop a methodology to calibrate a microscopic simulation model, with the aim of using it to simulate traffic under adverse weather conditions. The study is performed using AIMSUN, a microscopic traffic simulator.
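As a hedged sketch of the first step described here, estimating how fundamental traffic parameters shift between fine and adverse weather, the code below fits a Greenshields speed-density relation separately to two sets of synthetic detector observations. The data and the model choice are illustrative assumptions; they are not the Swiss motorway data, the MeteoSuisse records, or AIMSUN's calibration procedure.

```python
# Hedged sketch: estimating fundamental-diagram parameters (free-flow speed,
# jam density) separately for fine and adverse weather, as a first step before
# re-calibrating a microscopic simulator. Data and model are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def greenshields(density, v_free, k_jam):
    """Greenshields speed-density relation: v = v_free * (1 - k / k_jam)."""
    return v_free * (1.0 - density / k_jam)

rng = np.random.default_rng(0)
density = rng.uniform(5, 120, 200)                                  # veh/km
speed_fine = greenshields(density, 110, 160) + rng.normal(0, 4, 200)
speed_rain = greenshields(density, 95, 150) + rng.normal(0, 4, 200)

p_fine, _ = curve_fit(greenshields, density, speed_fine, p0=[100, 150])
p_rain, _ = curve_fit(greenshields, density, speed_rain, p0=[100, 150])
print("fine weather    v_free, k_jam:", np.round(p_fine, 1))
print("adverse weather v_free, k_jam:", np.round(p_rain, 1))
```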
Abstract:
China has experienced an extraordinary level of economic development since the 1990s, following excessive competition between different regions. This has resulted in many resource and environmental problems. Land resources, for example, are either abused or wasted in many regions. The strategy of development priority zoning (DPZ), proposed by the Chinese National 11th Five-Year Plan, provides an opportunity to solve these problems by coordinating regional development and protection. In line with the rational utilization of land, it is proposed that the DPZ strategy should be integrated with regional land use policy. As there has been little research to date on this issue, this paper introduces a system dynamics (SD) model for assessing land use change in China led by the DPZ strategy. Land use is characterized by the prioritization of land development, land utilization, land harness and land protection (D-U-H-P). By using the Delphi method, a corresponding suitable prioritization of D-U-H-P for each of the four types of DPZ, namely optimized development zones (ODZ), key development zones (KDZ), restricted development zones (RDZ), and forbidden development zones (FDZ), is identified. Suichang County is used as a case study in which to conduct the simulation of land use change under the RDZ strategy. The findings enable a conceptualization to be made of DPZ-led land use change and the identification of further implications for land use planning generally. The SD model also provides a potential tool for local government to combine the DPZ strategy at the national level with land use planning at the local level.
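Purely to illustrate what a stock-and-flow system dynamics model of land use change looks like, a toy Euler-integrated sketch follows. The stocks, conversion rates and policy weight are placeholders; they do not represent the paper's SD model or the Suichang County case.

```python
# Hedged sketch: a toy system-dynamics (stock-and-flow) model of land use,
# integrated with a fixed Euler step. Stocks are land areas (km^2); the
# "development priority" weight crudely stands in for a DPZ-style policy lever.
import numpy as np

def simulate(dev_priority, years=20, dt=1.0):
    farmland, built, protected = 800.0, 150.0, 50.0          # km^2, initial stocks
    history = []
    for _ in np.arange(0, years, dt):
        conversion = 0.01 * dev_priority * farmland           # farmland -> built-up flow
        reservation = 0.005 * (1 - dev_priority) * farmland   # farmland -> protected flow
        farmland += dt * (-conversion - reservation)
        built += dt * conversion
        protected += dt * reservation
        history.append((farmland, built, protected))
    return np.array(history)

print("high development priority, final stocks:", simulate(0.9)[-1].round(1))
print("restricted development,    final stocks:", simulate(0.2)[-1].round(1))
```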
Abstract:
Future vehicle navigation for safety applications requires seamless positioning at sub-meter accuracy or better. However, standalone Global Positioning System (GPS) or Differential GPS (DGPS) solutions suffer from outages when used in restricted areas such as high-rise urban areas and tunnels, due to the blockage of satellite signals. Smoothed DGPS can provide sub-meter positioning accuracy, but does not meet the seamlessness requirement. Traditional navigation aids such as Dead Reckoning and Inertial Measurement Units onboard vehicles have the disadvantage of being either not accurate enough, due to error accumulation, or too expensive to be acceptable to mass-market vehicle users. One alternative is to use wireless infrastructure installed at the roadside to locate vehicles in regions where Global Navigation Satellite System (GNSS) signals are not available (for example, inside tunnels, urban canyons and large indoor car parks). Examples of roadside infrastructure that could potentially be used for positioning purposes include Wireless Local Area Network (WLAN)/Wireless Personal Area Network (WPAN) based positioning systems, Ultra-wideband (UWB) based positioning systems, Dedicated Short Range Communication (DSRC) devices, Locata’s positioning technology, and accurate road surface height information over selected road segments such as tunnels. This research reviews and compares the wireless technologies that could possibly be installed along the roadside for positioning purposes. Models and algorithms for integrating the different positioning technologies are also presented. Various simulation schemes are designed to examine the performance benefits of combined GNSS and roadside infrastructure for vehicle positioning. The results from these experimental studies have shown a number of useful findings. It is clear that in the open road environment, where sufficient satellite signals can be obtained, the roadside wireless measurements contribute very little to the improvement of positioning accuracy at the sub-meter level, especially in the dual-constellation cases. In restricted outdoor environments where only a few GPS satellites, such as those at elevations above 45°, can be received, the roadside distance measurements can help improve both positioning accuracy and availability to the sub-meter level. When the vehicle is travelling in tunnels with known tunnel surface heights and roadside distance measurements, sub-meter horizontal positioning accuracy is also achievable. Overall, the simulation results have demonstrated that roadside infrastructure indeed has the potential to provide sub-meter vehicle position solutions for certain road safety applications, provided that properly deployed roadside measurements are obtainable.
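To make the idea of integrating GNSS and roadside ranging concrete, the sketch below performs a single-epoch least-squares fix that fuses a few GNSS pseudoranges (with a receiver clock bias) and ranges to roadside beacons at known positions. The geometry, measurement values and SciPy solver are illustrative assumptions, not the models and algorithms developed in this research.

```python
# Hedged sketch: single-epoch least-squares fix fusing GNSS pseudoranges (with a
# receiver clock bias, expressed in metres) and ranges to roadside beacons at
# known positions. Geometry and measurements are synthetic and illustrative.
import numpy as np
from scipy.optimize import least_squares

satellites = np.array([[15e6, 10e6, 20e6],
                       [-12e6, 8e6, 21e6],
                       [5e6, -14e6, 19e6]])          # satellite positions, m
beacons = np.array([[120.0, -40.0, 5.0],
                    [-80.0, 60.0, 5.0]])             # roadside units, m

def residuals(state, pseudoranges, beacon_ranges):
    """Stack GNSS pseudorange residuals (position + clock bias) and roadside
    beacon range residuals (position only)."""
    pos, clock_bias_m = state[:3], state[3]
    r_sat = np.linalg.norm(satellites - pos, axis=1) + clock_bias_m - pseudoranges
    r_bcn = np.linalg.norm(beacons - pos, axis=1) - beacon_ranges
    return np.concatenate([r_sat, r_bcn])

truth = np.array([10.0, 20.0, 0.0])                                 # true vehicle position, m
pseudoranges = np.linalg.norm(satellites - truth, axis=1) + 300.0   # 300 m clock bias
beacon_ranges = np.linalg.norm(beacons - truth, axis=1)

fit = least_squares(residuals, x0=np.zeros(4), args=(pseudoranges, beacon_ranges))
print("estimated position (m):", fit.x[:3], " clock bias (m):", fit.x[3])
```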
Abstract:
Several studies of the surface effect on the bending properties of nanowires (NWs) have been conducted. However, these analyses are mainly based on theoretical predictions, and there have been few integrated studies combining theoretical predictions with simulation results. Thus, based on molecular dynamics (MD) simulation and different modified beam theories, a comprehensive theoretical and numerical study of the bending properties of nanowires, considering surface/intrinsic stress effects and the axial extension effect, is conducted in this work. The discussion begins with the Euler-Bernoulli beam theory and Timoshenko beam theory augmented with the surface effect. It is found that when the NW possesses a relatively small cross-sectional size, these two theories cannot accurately interpret the true surface effect. The incorporation of the axial extension effect into Euler-Bernoulli beam theory provides a nonlinear solution that agrees with the nonlinear-elastic experimental and MD results. However, it is still found to be inaccurate when the NW cross-sectional size is relatively small. Such inaccuracy is also observed for the Euler-Bernoulli beam theory augmented with contributions from both the surface effect and the axial extension effect. A comprehensive model that fully considers the influences of surface stress, intrinsic stress and axial extension is then proposed, which leads to good agreement with MD simulation results. It is thus concluded that, for NWs with a relatively small cross-sectional size, a simple consideration of the surface stress effect is inappropriate, and a comprehensive consideration of the intrinsic stress effect is required.
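For orientation, one commonly used way of augmenting the Euler-Bernoulli equation with surface effects (a Gurtin-Murdoch/Young-Laplace-type formulation, not necessarily the exact model adopted in this work) is sketched below for a rectangular wire of width b and height h, with surface elastic modulus E^s and residual surface stress tau^0; the coefficients depend on the cross-section and on which faces carry surface layers.

```latex
% Hedged sketch of a surface-augmented Euler-Bernoulli beam equation;
% (EI)^* is the effective flexural rigidity and H accounts for residual surface stress.
\[
  (EI)^{*}\,\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}
  \;-\; H\,\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}} \;=\; q(x),
  \qquad
  (EI)^{*} = \frac{E b h^{3}}{12} + E^{s}\!\left(\frac{b h^{2}}{2} + \frac{h^{3}}{6}\right),
  \qquad
  H = 2\,\tau^{0} b .
\]
```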
Abstract:
Based on the molecular dynamics (MD) method, single-crystalline copper nanowires with different surface defects are investigated through tension simulations. For comparison, MD tension simulations of a perfect nanowire are first carried out at different temperatures, strain rates and sizes. It is concluded that the surface-to-volume ratio significantly affects the mechanical properties of the nanowire. The surface defects on nanowires are then systematically studied, considering different defect orientations and distributions. It is found that the Young’s modulus is insensitive to surface defects. However, the yield strength and yield point show a significant decrease in the presence of the different defects. The different defects are observed to serve as dislocation sources.
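As a rough illustration of the kind of tension test described here, the sketch below strains a small copper nanowire model axially and records the energy-strain curve using ASE with the simple EMT potential. It omits the thermostat, strain-rate control and relaxation of a real MD study, and the wire size, strain steps and potential are illustrative assumptions rather than the simulation setup of this work.

```python
# Hedged sketch: a quasi-static axial "tension test" of a small Cu nanowire with
# ASE and the simple EMT potential, as a stand-in for full MD tension simulations
# (no thermostat, no strain-rate control, no relaxation; sizes are illustrative).
import numpy as np
from ase.build import bulk
from ase.calculators.emt import EMT

wire = bulk('Cu', 'fcc', a=3.61, cubic=True).repeat((3, 3, 8))
wire.pbc = (False, False, True)          # free lateral surfaces, periodic along the axis
wire.center(vacuum=8.0, axis=(0, 1))
wire.calc = EMT()

ref_cell = np.array(wire.get_cell())
ref_positions = wire.get_positions().copy()
strains = np.linspace(0.0, 0.05, 11)
energies = []
for eps in strains:
    cell = ref_cell.copy()
    cell[2, 2] *= 1.0 + eps              # stretch the cell along the wire (z) axis
    positions = ref_positions.copy()
    positions[:, 2] *= 1.0 + eps         # affinely stretch the atoms as well
    wire.set_cell(cell)
    wire.set_positions(positions)
    energies.append(wire.get_potential_energy())

# Rough axial stiffness from the curvature of the energy-strain curve.
quadratic, linear, const = np.polyfit(strains, energies, 2)
print("d2E/d(eps)2 ~", 2 * quadratic, "eV")
```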
Abstract:
Nekoite Ca3Si6O15•7H2O and okenite Ca10Si18O46•18H2O are both hydrated calcium silicates, found respectively in contact metamorphosed limestone and in association with zeolites from the alteration of basalts. The minerals form infinite two-dimensional sheets built from rings other than six-membered ones, namely 3-, 4- or 5-membered rings together with 8-membered rings. The two minerals have been characterised by Raman, near-infrared and infrared spectroscopy. The Raman spectrum of nekoite is characterised by two sharp peaks at 1061 and 1092 cm-1, with bands of lesser intensity at 974, 994, 1023 and 1132 cm-1. The Raman spectrum of okenite shows an intense single Raman band at 1090 cm-1 with a shoulder band at 1075 cm-1. These bands are assigned to the SiO stretching vibrations of Si2O5 units. Raman water stretching bands of nekoite are observed at 3071, 3380, 3502 and 3567 cm-1. The Raman spectrum of okenite shows water stretching bands at 3029, 3284, 3417, 3531 and 3607 cm-1. NIR spectra of the two minerals are subtly different, implying water with different hydrogen bond strengths. By using the Libowitzky empirical formula, hydrogen bond distances were estimated from these OH stretching vibrations. Two types of hydrogen bonds are distinguished: strong hydrogen bonds associated with structural water and weaker hydrogen bonds assigned to space-filling water molecules.
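For readers who want to reproduce the distance estimates in spirit, the sketch below inverts the Libowitzky (1999) correlation, nu = 3592 - 3.04e11 * exp(-d/0.1321) (nu in cm-1, d in angstrom), for the band positions quoted in the abstract. The correlation has appreciable scatter, and bands above the ~3592 cm-1 asymptote are treated as essentially non-hydrogen-bonded OH; the output is illustrative, not the paper's reported distances.

```python
# Hedged sketch: estimating O...O hydrogen bond distances from OH stretching
# wavenumbers via the Libowitzky (1999) correlation,
#   nu = 3592 - 3.04e11 * exp(-d / 0.1321)   (nu in cm^-1, d in angstrom).
# Band positions are taken from the abstract; results are illustrative only.
import numpy as np

def oo_distance(nu_cm1):
    """Invert the Libowitzky correlation to obtain d(O...O) in angstrom."""
    return -0.1321 * np.log((3592.0 - nu_cm1) / 3.04e11)

bands = {
    "nekoite": np.array([3071.0, 3380.0, 3502.0, 3567.0]),          # cm^-1
    "okenite": np.array([3029.0, 3284.0, 3417.0, 3531.0, 3607.0]),  # cm^-1
}

for mineral, nu in bands.items():
    bonded = nu[nu < 3592.0]   # bands above the ~3592 cm^-1 asymptote ~ free OH
    print(mineral, "d(O...O) / angstrom:", np.round(oo_distance(bonded), 2))
```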
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
Abstract:
Determining the temporal scale of biological evolution has traditionally been the preserve of paleontology, with the timing of species originations and major diversifications all being read from the fossil record. However, the ages of the earliest (correctly identified) records will underestimate actual origins due to the incomplete nature of the fossil record and the necessity for lineages to have evolved sufficiently divergent morphologies in order to be distinguished. The possibility of inferring divergence times more accurately has been promoted by the idea that the accumulation of genetic change between modern lineages can be used as a molecular clock (Zuckerkandl and Pauling, 1965). In practice, though, molecular dates have often been so old as to be incongruent even with liberal readings of the fossil record. Prominent examples include inferred diversifications of metazoan phyla hundreds of millions of years before their Cambrian fossil record appearances (e.g., Nei et al., 2001) and a basal split between modern birds (Neoaves) that is almost double the age of their earliest recognizable fossils (e.g., Cooper and Penny, 1997).
Abstract:
In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.
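To clarify what the different models of rate change imply, the sketch below draws branch-specific rates on a toy tree under uncorrelated lognormal, uncorrelated exponential and autocorrelated lognormal models and converts them to expected branch lengths. The tree, rate parameters and random seed are invented for illustration; this is not the nine-taxon simulation protocol of the study.

```python
# Hedged sketch: what "uncorrelated" versus "autocorrelated" rate change means
# when simulating branch rates on a tree. The toy tree (parent index array and
# branch durations) and distribution parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
parent = np.array([-1, 0, 0, 1, 1, 2, 2])          # -1 marks the root
duration = np.array([0.0, 10, 10, 5, 5, 5, 5])     # Myr per branch

def uncorrelated_lognormal(mean_rate=0.01, sigma=0.5):
    return rng.lognormal(np.log(mean_rate), sigma, size=parent.size)

def uncorrelated_exponential(mean_rate=0.01):
    return rng.exponential(mean_rate, size=parent.size)

def autocorrelated_lognormal(root_rate=0.01, sigma=0.3):
    rates = np.empty(parent.size)
    for i, p in enumerate(parent):                 # parents precede children
        rates[i] = root_rate if p < 0 else rng.lognormal(np.log(rates[p]), sigma)
    return rates

for model in (uncorrelated_lognormal, uncorrelated_exponential, autocorrelated_lognormal):
    rates = model()
    print(model.__name__, "branch lengths (subs/site):", np.round(rates * duration, 3))
```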
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate D-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and D-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the D-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
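Because the abstract describes the rate-versus-calibration-age relationship as a vertically translated exponential decay, a minimal curve-fitting sketch may help readers see how such a correction curve is obtained. The (age, rate) pairs below are synthetic placeholders, not the avian or primate estimates of the study, and SciPy's curve_fit is simply one convenient fitting tool.

```python
# Hedged sketch: fitting a "vertically translated exponential decay",
# r(t) = k + a * exp(-b * t), to pairs of (calibration age, estimated rate).
# The data points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def rate_curve(age_myr, k, a, b):
    """Long-term substitution rate k plus a decaying short-term excess."""
    return k + a * np.exp(-b * age_myr)

age = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])            # calibration age, Myr
rate = np.array([0.20, 0.15, 0.08, 0.05, 0.03, 0.016, 0.015, 0.015])   # subs/site/Myr (synthetic)

params, _ = curve_fit(rate_curve, age, rate, p0=[0.015, 0.2, 1.0])
k, a, b = params
print("long-term rate k:", round(k, 4), " short-term excess a:", round(a, 3), " decay b:", round(b, 3))
```

Once such a curve is in hand, a rate read off at the timescale of interest can be used in place of a single extrapolated rate when converting genetic distances to dates.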