975 results for discrete velocity models


Relevance:

30.00%

Publisher:

Abstract:

Recently, Block and coworkers [Visscher, K., Schnitzer, M. J., & Block, S. M. (1999) Nature (London) 400, 184–189 and Schnitzer, M. J., Visscher, K. & Block, S. M. (2000) Nat. Cell Biol. 2, 718–723] have reported extensive observations of individual kinesin molecules moving along microtubules in vitro under controlled loads, F = 1 to 8 pN, with [ATP] = 1 μM to 2 mM. Their measurements of velocity, V, randomness, r, stalling force, and mean run length, L, reveal a need for improved theoretical understanding. We show, presenting explicit formulae that provide a quantitative basis for comparing distinct molecular motors, that their data are satisfactorily described by simple, discrete-state, sequential stochastic models. The simplest (N = 2)-state model, with fixed load-distribution factors and kinetic rate constants concordant with stopped-flow experiments, accounts for the global (V, F, L, [ATP]) interdependence and, further, matches the relative acceleration observed under assisting loads. The randomness, r(F, [ATP]), is accounted for by a waiting-time distribution, ψ₁⁺(t), [for the transition(s) following ATP binding] with a width parameter ν ≡ 〈t〉²/〈(Δt)²〉 ≃ 2.5, indicative of a dispersive stroke of mechanicity ≃0.6 or of a few (≳ν − 1) further, kinetically coupled states: indeed, N = 4 (but not N = 3) models do well. The analysis reveals: (i) a substep of d₀ = 1.8–2.1 nm on ATP binding (consistent with structurally based suggestions); (ii) comparable load dependence for ATP binding and unbinding; (iii) a strong load dependence for reverse hydrolysis and subsequent reverse rates; and (iv) a large (≳50-fold) increase in detachment rate, with a marked load dependence, following ATP binding.
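
As an illustration of the kind of explicit formulae such sequential models yield, the simplest case of two irreversible exponential steps with rates u₁ and u₂ and step size d gives the following (a minimal sketch; the paper's full expressions also include reverse rates and load-distribution factors):

```latex
% Two-state irreversible sequential model (illustrative only):
% step size d, forward rates u_1 (ATP-binding step) and u_2,
% D = effective diffusion constant of the motor along the track.
\begin{align*}
  V &= \frac{d\,u_1 u_2}{u_1 + u_2},
  &
  r &\equiv \frac{2D}{V d}
     = \frac{u_1^{2} + u_2^{2}}{(u_1 + u_2)^{2}} .
\end{align*}
```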

Relevance:

30.00%

Publisher:

Abstract:

We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small-event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with a cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
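
For orientation, the nucleation size scales as the following dimensional estimate (order-one geometric factors and the paper's precise definition omitted; G is the shear modulus, D_c the slip-weakening distance and Δτ the strength drop):

```latex
% Scaling estimate only; not the exact constant used in the paper.
\begin{equation*}
  h^{*} \sim \frac{G\,D_{c}}{\Delta\tau}
\end{equation*}
```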

Relevance:

30.00%

Publisher:

Abstract:

We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults.
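
A minimal sketch, in Python, of the kind of one-dimensional block-and-spring fault model with velocity-weakening stick-slip friction described above; the parameters, friction form and stick-slip handling are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Burridge-Knopoff-style chain of blocks coupled by springs (kc), driven through
# leaf springs (kp) at velocity v_drive, resisted by velocity-weakening friction.
# All parameter values are illustrative assumptions.
N, kc, kp, m, v_drive, dt = 64, 1.0, 0.1, 1.0, 1e-3, 1e-2
f_static = 1.5                      # static friction threshold
x = np.zeros(N)                     # block displacements
v = np.zeros(N)                     # block velocities
t = 0.0

def kinetic_friction(vel, f0=f_static, vw=1.0):
    """Velocity-weakening kinetic friction (illustrative functional form)."""
    return np.sign(vel) * f0 / (1.0 + np.abs(vel) / vw)

for step in range(100_000):
    coupling = kc * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))   # periodic chain
    drive = kp * (v_drive * t - x)
    applied = coupling + drive
    moving = np.abs(v) > 1e-9
    net = np.where(moving, applied - kinetic_friction(v), 0.0)
    # stuck blocks start to slip only once the static threshold is exceeded
    net = np.where(~moving & (np.abs(applied) > f_static), applied, net)
    v += dt * net / m
    x += dt * v
    t += dt
# x now holds a slip history containing stick-slip events of many sizes
```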

Relevance:

30.00%

Publisher:

Abstract:

We consider the electron dynamics and transport properties of one-dimensional continuous models with random, short-range correlated impurities. We develop a generalized Poincaré map formalism to cast the Schrödinger equation for any potential into a discrete set of equations, illustrating its application by means of a specific example. We then concentrate on the case of a Kronig-Penney model with dimer impurities. The previous technique allows us to show that this model presents infinitely many resonances (zeroes of the reflection coefficient at a single dimer) that give rise to a band of extended states, in contradiction with the general viewpoint that all one-dimensional models with random potentials support only localized states. We report on exact transfer-matrix numerical calculations of the transmission coefficient, density of states, and localization length for various strengths of disorder. The most important conclusion so obtained is that this kind of system has a very large number of extended states. Multifractal analysis of very long systems clearly demonstrates the extended character of such states in the thermodynamic limit. In closing, we briefly discuss the relevance of these results in several physical contexts.
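
A sketch, under assumed parameter values, of the Poincaré-map form of the Schrödinger equation for a Kronig-Penney chain of delta potentials with randomly placed dimer impurities; the Lyapunov exponent of the map gives an inverse localization length, which should drop toward zero near a dimer resonance.

```python
import numpy as np

# Poincare map for a Kronig-Penney chain of delta potentials of strength beta_n
# at spacing a (illustrative parameters, not the values used in the paper):
#   psi_{n+1} + psi_{n-1} = [2 cos(k a) + (beta_n / k) sin(k a)] psi_n
rng = np.random.default_rng(0)
N, a = 100_000, 1.0
beta_host, beta_dimer, c_dimer = 1.0, 1.5, 0.2   # host strength, impurity strength, dimer concentration

beta = np.full(N, beta_host)
n = 0
while n < N - 1:                                  # impurities always appear in pairs
    if rng.random() < c_dimer:
        beta[n] = beta[n + 1] = beta_dimer
        n += 2
    else:
        n += 1

def inverse_localization_length(E):
    """Lyapunov exponent (per unit length) of the Poincare map at energy E > 0."""
    k = np.sqrt(E)
    psi_prev, psi = 1.0, 1.0
    log_growth = 0.0
    for b in beta:
        psi_next = (2.0 * np.cos(k * a) + (b / k) * np.sin(k * a)) * psi - psi_prev
        psi_prev, psi = psi, psi_next
        norm = abs(psi) + abs(psi_prev)            # renormalize to avoid overflow
        log_growth += np.log(norm)
        psi_prev /= norm
        psi /= norm
    return log_growth / (N * a)

for E in (0.5, 1.0, 2.0, 4.0):
    print(E, inverse_localization_length(E))
```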

Relevance:

30.00%

Publisher:

Abstract:

The organizational structure of companies in the biomass energy sector, with regard to supply chain management services, can be greatly improved through the use of software decision support tools. These tools should be able to provide real-time alternative scenarios when deviations from the initial production plans are observed. To make this possible it is necessary to have representative production chain process models in which several scenarios and solutions can be evaluated accurately. Due to its nature, this type of process is more adequately represented by means of event-based models. In particular, this work presents the modelling of a typical biomass production chain using the computing platform SIMEVENTS. Throughout the article, details about the conceptual model, as well as simulation results, are provided.
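
As a rough illustration of what an event-based representation of such a production chain looks like (the paper itself builds the model in SIMEVENTS; the stages, durations and Python event queue below are hypothetical):

```python
import heapq

# Minimal event-based sketch of a production-chain stage. Stages and durations
# are made-up illustrations, not the paper's model.
events = []  # priority queue of (time, description)
def schedule(t, what): heapq.heappush(events, (t, what))

HARVEST, TRANSPORT, UNLOAD = 2.0, 1.5, 0.5   # hours, illustrative only
for truck in range(3):
    schedule(truck * HARVEST, f"truck {truck}: harvest complete")

clock = 0.0
while events:
    clock, what = heapq.heappop(events)
    print(f"t={clock:5.1f} h  {what}")
    if "harvest" in what:
        schedule(clock + TRANSPORT, what.replace("harvest complete", "arrived at plant"))
    elif "arrived" in what:
        schedule(clock + UNLOAD, what.replace("arrived at plant", "unloaded"))
```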

Relevance:

30.00%

Publisher:

Abstract:

We report quantitative results from three brittle thrust wedge experiments, comparing numerical results directly with each other and with corresponding analogue results. We first test whether the participating codes reproduce predictions from analytical critical taper theory. Eleven codes pass the stable wedge test, showing negligible internal deformation and maintaining the initial surface slope upon horizontal translation over a frictional interface. Eight codes participated in the unstable wedge test that examines the evolution of a wedge by thrust formation from a subcritical state to the critical taper geometry. The critical taper is recovered, but the models show two deformation modes characterised by either mainly forward-dipping thrusts or a series of thrust pop-ups. We speculate that the two modes are caused by differences in effective basal boundary friction related to different algorithms for modelling boundary friction. The third experiment examines stacking of forward thrusts that are translated upward along a backward thrust. The results of the seven codes that run this experiment show variability in deformation style, number of thrusts, thrust dip angles and surface slope. Overall, our experiments show that numerical models run with different numerical techniques can successfully simulate laboratory brittle thrust wedge models at the cm-scale. In more detail, however, we find that it is challenging to reproduce sandbox-type setups numerically, because of frictional boundary conditions and velocity discontinuities. We recommend that future numerical-analogue comparisons use simple boundary conditions and that the numerical Earth Science community define a plasticity test to resolve the variability in model shear zones.

Relevance:

30.00%

Publisher:

Abstract:

Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling were successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate and bottom tracking unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning due to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.
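
A small illustration of why the slope term matters: the along-slope length of a 3-D camera track exceeds its horizontal projection, which is what positioning-based estimates effectively measure (the track coordinates below are made up):

```python
import numpy as np

# Compare subtransect length measured along a 3-D seafloor model with its
# horizontal (positioning-derived) projection.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.2, 0.4],
                [2.1, 0.3, 0.9],
                [3.0, 0.5, 1.1]])   # x, y, z in metres (hypothetical camera track)

seg = np.diff(pts, axis=0)
length_3d = np.sum(np.linalg.norm(seg, axis=1))          # along-slope length
length_2d = np.sum(np.linalg.norm(seg[:, :2], axis=1))   # horizontal projection
print(length_3d, length_2d, f"{(length_3d / length_2d - 1) * 100:.1f}% longer")
```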

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Preventive maintenance actions over the warranty period have an impact on the warranty servicing cost to the manufacturer and on the cost to the buyer of fixing failures over the life of the product after the warranty expires. However, preventive maintenance costs money and is worthwhile only when the resulting reduction in these other costs exceeds the cost of the maintenance itself. The paper deals with a model to determine when preventive maintenance actions (which rejuvenate the unit), carried out at discrete time instants over the warranty period, are worthwhile. The cost of preventive maintenance is borne by the buyer. (C) 2003 Elsevier Ltd. All rights reserved.
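
Schematically, and with notation assumed here for illustration rather than taken from the paper, the decision criterion is:

```latex
% PM at discrete instants is worthwhile when the expected reduction in warranty
% servicing and post-warranty repair costs exceeds the cost of the PM itself.
\begin{equation*}
  \underbrace{\Delta C_{\mathrm{warranty}} + \Delta C_{\mathrm{post\text{-}warranty}}}_{\text{expected cost reduction from PM}}
  \;>\; C_{\mathrm{PM}}
\end{equation*}
```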

Relevance:

30.00%

Publisher:

Abstract:

Modelling of froth transportation, as part of modelling of froth recovery, provides a scale-up procedure for flotation cell design. It can also assist in improving control of flotation operation. Mathematical models of froth velocity on the surface and of froth residence time distribution in a cylindrical tank flotation cell are proposed, based on a mass balance of the air entering the froth. The models take into account factors such as cell size, concentrate launder configuration, use of a froth crowder, cell operating conditions including froth height and air rate, and bubble bursting on the surface. (C) 2004 Elsevier Ltd. All rights reserved.
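
For orientation, a commonly used air mass-balance estimate of this kind (an illustrative special case, not the paper's full model) relates the mean froth residence time to froth height and superficial gas velocity:

```latex
% J_g = superficial gas velocity, Q_air = air rate, A = cell cross-section,
% h_f = froth height. Simple estimate only; the paper's models also account for
% launder configuration, crowders and bubble bursting.
\begin{equation*}
  J_g = \frac{Q_{\mathrm{air}}}{A}, \qquad
  \tau_{\mathrm{froth}} \approx \frac{h_f}{J_g}
\end{equation*}
```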

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we investigate the effects of potential models on the description of the adsorption equilibria of linear molecules (ethylene and ethane) on graphitized thermal carbon black. GCMC simulation is used as a tool to give adsorption isotherms, isosteric heats of adsorption and the microscopic configurations of these molecules. At the heart of the GCMC are the potential models describing the fluid-fluid and solid-fluid interactions. Here we study the two potential models recently proposed in the literature, UA-TraPPE and AUA4, and discuss their impact on the description of the adsorption behavior of the pure components. Mixtures of these components with nitrogen and argon are also studied. Nitrogen is modeled as a two-site molecule with discrete charges, while argon is modeled as a spherical particle. GCMC simulation is also used to generate the mixture isotherms. It is found that co-operation between species occurs when the surface is fractionally covered, while competition is important when the surface is fully loaded.
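
For reference, the textbook GCMC trial-move acceptance criteria that such simulations are built on (standard forms, not parameters specific to this study) are:

```latex
% Lambda = thermal de Broglie wavelength, mu = chemical potential, beta = 1/k_B T,
% Delta U = potential-energy change of the trial move, N = current particle number.
\begin{align*}
  P_{\mathrm{acc}}^{\mathrm{insert}} &= \min\!\left[1,\;
     \frac{V}{\Lambda^{3}(N+1)}\, e^{\beta(\mu-\Delta U)}\right], \\
  P_{\mathrm{acc}}^{\mathrm{delete}} &= \min\!\left[1,\;
     \frac{\Lambda^{3}N}{V}\, e^{-\beta(\mu+\Delta U)}\right]
\end{align*}
```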

Relevance:

30.00%

Publisher:

Abstract:

DEM modelling of the motion of coarse fractions of the charge inside SAG mills has now been well established for more than a decade. In these models the effect of slurry has broadly been ignored due to its complexity. Smoothed particle hydrodynamics (SPH) provides a particle-based method for modelling complex free surface fluid flows and is well suited to modelling fluid flow in mills. Previous modelling has demonstrated the powerful ability of SPH to capture dynamic fluid flow effects such as lifters crashing into slurry pools, fluid draining from lifters, flow through grates and pulp lifter discharge. However, all these examples were limited to modelling only the slurry in the mill, without the charge. In this paper, we represent the charge as a dynamic porous media through which the SPH fluid is then able to flow. The porous media properties (specifically the spatial distribution of porosity and velocity) are predicted by time averaging the mill charge predicted using a large scale DEM model. This allows prediction of transient and steady state slurry distributions in the mill and allows their variation with operating parameters, slurry viscosity and slurry volume to be explored. (C) 2006 Published by Elsevier Ltd.
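
A minimal sketch of the kind of mapping from DEM particle data to a gridded porosity field implied here (2-D, with made-up particle data; the actual coupling also averages over many DEM timesteps and maps a charge velocity field for the SPH solver):

```python
import numpy as np

# Bin DEM particle cross-sections onto a grid to obtain a porosity field.
# Particle positions and sizes are hypothetical illustrations.
rng = np.random.default_rng(1)
n_particles, r_p = 5_000, 0.005                        # count and radius (made up)
positions = rng.uniform(0.0, 1.0, (n_particles, 2))    # one snapshot of particle centres

nx = ny = 20
cell_area = (1.0 / nx) * (1.0 / ny)
solid_area = np.zeros((nx, ny))
ix = np.minimum((positions[:, 0] * nx).astype(int), nx - 1)
iy = np.minimum((positions[:, 1] * ny).astype(int), ny - 1)
np.add.at(solid_area, (ix, iy), np.pi * r_p**2)        # accumulate solid area per cell

porosity = 1.0 - np.clip(solid_area / cell_area, 0.0, 1.0)
print(porosity.mean())   # time averaging over many snapshots would smooth this field
```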

Relevance:

30.00%

Publisher:

Abstract:

Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models.
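
A minimal sketch of a stochastic simulation algorithm with a fixed delay, for a single species with delayed production and immediate degradation (the rates and delay are hypothetical; the paper applies the idea to the coupled hes1 mRNA/Hes1 protein feedback loop):

```python
import heapq, math, random

# Delayed-production SSA sketch: transcription initiates at rate k_prod, the
# transcript appears tau time units later; degradation is an ordinary reaction.
random.seed(0)
k_prod, k_deg, tau = 1.0, 0.1, 15.0   # initiation rate, degradation rate, delay (assumed)
t, t_end, mRNA = 0.0, 200.0, 0
pending = []                           # completion times of delayed productions

while t < t_end:
    a_total = k_prod + k_deg * mRNA                 # total propensity
    dt = random.expovariate(a_total)
    if pending and pending[0] <= t + dt:
        t = heapq.heappop(pending)                  # a delayed transcript finishes first
        mRNA += 1
        continue
    t += dt
    if random.random() < k_prod / a_total:
        heapq.heappush(pending, t + tau)            # initiate: product appears after tau
    else:
        mRNA -= 1                                   # immediate degradation
print(t, mRNA)
```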

Relevance:

30.00%

Publisher:

Abstract:

Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete-event and discrete-time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
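
A toy sketch of the object-oriented, synthetic style described above, in which discrete compound objects traverse discrete liver "space" objects (the class names, structure and probabilities are hypothetical illustrations, not components of the referent model):

```python
import random

random.seed(0)

class SinusoidSegment:
    """One discrete piece of a flow path; may take up a passing compound."""
    def __init__(self, uptake_prob):
        self.uptake_prob = uptake_prob
    def passes(self, compound):
        # True if the compound exits this segment without being taken up
        return random.random() >= self.uptake_prob

class Compound:
    def __init__(self, name):
        self.name = name

liver = [SinusoidSegment(0.05) for _ in range(20)]        # one flow path, 20 segments
dosed = [Compound("antipyrine") for _ in range(1000)]     # bolus of compound objects
recovered = sum(all(seg.passes(c) for seg in liver) for c in dosed)
print(f"fraction appearing in outflow: {recovered / len(dosed):.2f}")
```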

Relevance:

30.00%

Publisher:

Abstract:

Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of Northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and assessment of the scale of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
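
A schematic of one way such discrete community model outputs can be combined into a composite map, assigning each cell the community with the highest predicted probability (illustrative only; the paper's integration uses a GIS and additional ecological rules):

```python
import numpy as np

# Combine per-community probability surfaces into a single composite map.
# The probability rasters below are random stand-ins, not real model outputs.
rng = np.random.default_rng(2)
n_models, ny, nx = 5, 100, 100
prob = rng.random((n_models, ny, nx))
prob /= prob.sum(axis=0)                    # normalise so each cell sums to 1

composite = np.argmax(prob, axis=0)         # community index per grid cell
confidence = np.max(prob, axis=0)           # useful for flagging transition zones
print(np.bincount(composite.ravel()))
```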