968 results for spatial and stochastic modelling, bio-chemical kinetics, multi-scale simulation, systems biology


Relevance: 100.00%

Abstract:

A large eddy simulation is performed to study the deflagration to detonation transition phenomenon in an obstructed channel containing a premixed stoichiometric hydrogen–air mixture. Two-dimensional filtered reactive Navier–Stokes equations are solved utilizing the artificially thickened flame (ATF) approach for modeling sub-grid scale combustion. To include the effect of induction time, a 27-step detailed mechanism is utilized along with an in situ adaptive tabulation (ISAT) method to reduce the computational cost of the detailed chemistry. The results show that in the slow flame propagation regime, flame–vortex interaction and the resulting flame folding and wrinkling are the main mechanisms for the increase of the flame surface and, consequently, acceleration of the flame. Furthermore, at high speed, the major mechanisms responsible for flame propagation are repeated reflected shock–flame interactions and the resulting baroclinic vorticity. These interactions intensify the rate of heat release and maintain the turbulence and flame speed at a high level. During the flame acceleration, the turbulent flame is seen to enter the ‘thickened reaction zones’ regime. Therefore, a chemistry-based combustion model with detailed chemical kinetics is necessary to properly capture the salient features of fast deflagration propagation.
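
The core idea of ISAT mentioned above — tabulating expensive detailed-chemistry evaluations as the simulation runs and reusing a stored result when a new query lies within a tolerance of a tabulated composition — can be sketched in a few lines. This toy (class name `ISATTable`, linear search, Euclidean tolerance, Arrhenius-like rate function) is illustrative only; production ISAT uses binary trees and ellipsoidal regions of accuracy with rigorous error control.

```python
import numpy as np

class ISATTable:
    """Minimal sketch of in situ adaptive tabulation: cache expensive
    chemistry evaluations and reuse a stored result when a query point
    falls within a tolerance of a previously tabulated composition."""

    def __init__(self, expensive_fn, tol=1e-3):
        self.fn = expensive_fn      # detailed-chemistry source term (expensive)
        self.tol = tol              # retrieval tolerance in composition space
        self.points = []            # tabulated query compositions
        self.values = []            # corresponding stored results
        self.hits = 0               # retrievals served from the table

    def query(self, x):
        x = np.asarray(x, dtype=float)
        for p, v in zip(self.points, self.values):
            if np.linalg.norm(x - p) < self.tol:   # close enough: reuse
                self.hits += 1
                return v
        v = self.fn(x)              # miss: evaluate detailed chemistry, store
        self.points.append(x)
        self.values.append(v)
        return v

# Toy "detailed chemistry": an Arrhenius-like reaction rate in (T, Y) space.
rate = lambda x: np.exp(-1.0 / max(x[0], 1e-9)) * x[1]

table = ISATTable(rate, tol=1e-2)
table.query([0.5, 1.0])
table.query([0.505, 1.0])   # within tolerance: served from the table
```

The payoff in a real LES is that most cells in a given time step query compositions close to ones already seen, so the detailed mechanism is integrated far fewer times than the number of cells.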

Relevance: 100.00%

Abstract:

The structural build-up of fresh cement-based materials has a great impact on their structural performance after casting. Accordingly, the mixture design should be tailored to adapt the kinetics of build-up to the application at hand. The rate of structural build-up of cement-based suspensions at rest is a complex phenomenon affected by both physical and chemical structuration processes. The structuration kinetics are strongly dependent on the mixture's composition, testing parameters, and shear history. Accurate measurement of build-up relies on the efficiency of the applied pre-shear regime in achieving an initially well-dispersed state, as well as on the applied stress during the liquid–solid transition. Studying the physical and chemical mechanisms of build-up of cement suspensions at rest can enhance the fundamental understanding of this phenomenon and, therefore, allow better control of the rheological and time-dependent properties of cement-based materials. This research focused on the use of dynamic rheology to investigate the kinetics of structural build-up of fresh cement pastes. The research program was conducted in three phases. The first phase was devoted to evaluating the dispersing efficiency of various disruptive shear techniques. The investigated shearing profiles included rotational, oscillatory, and a combination of both. The initial and final states of the suspension's structure, before and after disruption, were determined by applying a small-amplitude oscillatory shear (SAOS). The difference between the viscoelastic values before and after disruption was used to express the degree of dispersion. An efficient technique to disperse concentrated cement suspensions was developed. The second phase aimed to establish a rheometric approach to dissociate and monitor the individual physical and chemical mechanisms of build-up of cement paste.
In this regard, non-destructive dynamic rheometry was used to investigate the evolution of both the storage modulus and the phase angle of inert calcium carbonate and cement suspensions. Two independent build-up indices were proposed. The structural build-up of various cement suspensions made with different cement contents, silica fume replacement percentages, and high-range water reducer dosages was evaluated using the proposed indices. These indices were then compared to the well-known thixotropic index (Athix). Furthermore, the proposed indices were correlated to the decay in lateral pressure determined for various cement pastes cast in a pressure column. The proposed pre-shearing protocol and build-up indices (phases 1 and 2) were then used to investigate the effect of mixture parameters on the kinetics of structural build-up in phase 3. The investigated mixture parameters included cement content and fineness, alkali sulfate content, and temperature of the cement suspension. Zeta potential, calorimetric, and spectrometric measurements were performed to explore the corresponding microstructural changes in cement suspensions, such as inter-particle cohesion, rate of Brownian flocculation, and nucleation rate. A model linking the build-up indices and the microstructural characteristics was developed to predict the build-up behaviour of cement-based suspensions. The obtained results showed that oscillatory shear may have a greater effect on dispersing concentrated cement suspensions than rotational shear. Furthermore, increasing the induced shear strain was found to enhance the breakdown of the suspension's structure up to a critical point, after which thickening effects dominate. An effective dispersing method is then proposed.
This consists of applying a rotational shear around the transitional value between the linear and non-linear variations of apparent viscosity with shear rate, followed by an oscillatory shear at the crossover shear strain and a high angular frequency of 100 rad/s. Investigating the evolution of viscoelastic properties of inert calcite-based and cement suspensions allowed establishing two independent build-up indices. The first (the percolation time) represents the rest time needed to form the elastic network. The second (the rigidification rate) describes the increase in stress-bearing capacity of the formed network due to cement hydration. In addition, results showed that combining the percolation time and the rigidification rate can provide deeper insight into the structuration process of cement suspensions. Furthermore, these indices were found to be well correlated with the decay in the lateral pressure of cement suspensions. The variation of the proposed build-up indices with mixture parameters showed that the percolation time is most likely controlled by the frequency of Brownian collisions, the distance between dispersed particles, and the intensity of cohesion between cement particles. On the other hand, a higher rigidification rate can be secured by increasing the number of contact points per unit volume of paste, the nucleation rate of cement hydrates, and the intensity of inter-particle cohesion.

Relevance: 100.00%

Abstract:

Well-designed marine protected area (MPA) networks can deliver a range of ecological, economic and social benefits, and so a great deal of research has focused on developing spatial conservation prioritization tools to help identify important areas. However, whilst these software tools are designed to identify MPA networks that both represent biodiversity and minimize impacts on stakeholders, they do not consider complex ecological processes. Thus, it is difficult to determine the impacts that proposed MPAs could have on marine ecosystem health, fisheries and fisheries sustainability. Using the eastern English Channel as a case study, this paper explores an approach to address these issues by identifying a series of MPA networks using the Marxan and Marxan with Zones conservation planning software and linking them with a spatially explicit ecosystem model developed in Ecopath with Ecosim. We then use these to investigate potential trade-offs associated with adopting different MPA management strategies. Limited-take MPAs, which restrict the use of some fishing gears, could have positive benefits for conservation and fisheries in the eastern English Channel, even though they generally receive far less attention in research on MPA network design. Our findings, however, also clearly indicate that no-take MPAs should form an integral component of proposed MPA networks in the eastern English Channel, as they not only result in substantial increases in ecosystem biomass, fisheries catches and the biomass of commercially valuable target species, but are fundamental to maintaining the sustainability of the fisheries. Synthesis and applications. Using the existing software tools Marxan with Zones and Ecopath with Ecosim in combination provides a powerful policy-screening approach. This could help inform marine spatial planning by identifying potential conflicts and by designing new regulations that better balance conservation objectives and stakeholder interests. 
In addition, it highlights that appropriate combinations of no-take and limited-take marine protected areas might be the most effective when making trade-offs between long-term ecological benefits and short-term political acceptability.

Relevance: 100.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection while leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick (1976) inequalities to re-express constraints involving products of binary variables as an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
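
The exact linearization referred to above rests on the McCormick (1976) envelope: for binary variables x and y, the product z = xy is captured exactly by the linear constraints z ≤ x, z ≤ y, z ≥ x + y − 1, z ≥ 0. A minimal sketch verifying this exactness by enumeration (no MILP solver needed), with the helper name `mccormick_feasible_z` invented for illustration:

```python
from itertools import product

def mccormick_feasible_z(x, y):
    """For binary x, y, return every z in {0, 1} satisfying the McCormick
    envelope constraints z <= x, z <= y, z >= x + y - 1, z >= 0.
    For binaries the envelope is exact: the only feasible z is x*y."""
    return [z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1]

# Enumerate all four binary combinations: the envelope admits exactly z = x*y.
for x, y in product((0, 1), repeat=2):
    assert mccormick_feasible_z(x, y) == [x * y]
```

Because the four inequalities are linear, a bilinear term in a binary model can be replaced by an auxiliary variable z plus these constraints without any loss of exactness, which is what makes the rolling-horizon model solvable by standard mixed-integer linear programming.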

Relevance: 100.00%

Abstract:

Hybrid face recognition, using image (2D) and structural (3D) information, has explored the fusion of Nearest Neighbour classifiers. This paper examines the effectiveness of feature modelling for each individual modality, 2D and 3D. Furthermore, it is demonstrated that the fusion of feature modelling techniques for the 2D and 3D modalities yields performance improvements over the individual classifiers. By fusing the feature modelling classifiers for each modality with equal weights, the average Equal Error Rate improves from 12.60% for the 2D classifier and 12.10% for the 3D classifier to 7.38% for the hybrid 2D+3D classifier.
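
The equal-weight sum-rule fusion described above can be sketched as follows. The score values and the coarse threshold sweep for the Equal Error Rate are illustrative assumptions, not the paper's data, and real systems normalise the per-modality scores before fusing:

```python
import numpy as np

def fuse_scores(score_2d, score_3d, w=0.5):
    """Sum-rule fusion of per-modality match scores with weight w on 2D
    (w = 0.5 gives the equal-weight case); assumes comparable score scales."""
    return w * np.asarray(score_2d, float) + (1 - w) * np.asarray(score_3d, float)

def equal_error_rate(genuine, impostor):
    """EER: operating point where false accept rate (FAR) equals false
    reject rate (FRR), found by a coarse sweep over observed scores."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    best_gap, best_eer = 2.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)          # impostors wrongly accepted
        frr = np.mean(genuine < t)            # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Toy, well-separated genuine/impostor fused scores give an EER of zero.
eer = equal_error_rate([0.7, 0.8, 0.9], [0.1, 0.2, 0.3])
```

The intuition behind the paper's result is that 2D and 3D errors are partly uncorrelated, so averaging the two scores suppresses cases where only one modality fails.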

Relevance: 100.00%

Abstract:

Osteoporotic spinal fractures are a major concern in ageing Western societies. This study develops a multi-scale finite element (FE) model of the osteoporotic lumbar vertebral body to study the mechanics of vertebral compression fracture at both the apparent (whole vertebral body) and micro-structural (internal trabecular bone core) levels. Model predictions were verified against experimental data, and found to provide a reasonably good representation of the mechanics of the osteoporotic vertebral body. This novel modelling methodology will allow detailed investigation of how trabecular bone loss in osteoporosis affects vertebral stiffness and strength in the lumbar spine.

Relevance: 100.00%

Abstract:

Stochastic models for competing clonotypes of T cells, formulated as multivariate, continuous-time, discrete-state Markov processes, have been proposed by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach based on Monte Carlo simulation, partly because the complementary probability density function (PDF) approaches can be expensive. Here, however, we describe some efficient PDF approaches that directly solve the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when evaluating the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
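
As a toy version of the PDF approach described above: truncate the state space (a crude Finite State Projection), assemble the generator of a birth-death process, and evolve the probability vector through the matrix exponential. This sketch substitutes a single-species immigration-death model for the multivariate clonotype model and SciPy's dense `expm` for Krylov methods; the model parameters are invented:

```python
import numpy as np
from scipy.linalg import expm

def birth_death_generator(N, birth, death):
    """Generator matrix of a birth-death process truncated to states 0..N
    (a crude Finite State Projection: probability leaving the window is lost)."""
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = birth(n)        # transition n -> n+1
        if n > 0:
            Q[n, n - 1] = death(n)        # transition n -> n-1
        Q[n, n] = -Q[n].sum()             # rows of a generator sum to zero
    return Q

# Immigration-death: birth rate k, death rate d*n; stationary law is Poisson(k/d).
k, d, N = 2.0, 1.0, 40
Q = birth_death_generator(N, lambda n: k, lambda n: d * n)

p0 = np.zeros(N + 1); p0[0] = 1.0         # start with zero cells
p = expm(Q.T * 20.0) @ p0                 # Master Equation solution at t = 20
mean = float(np.dot(np.arange(N + 1), p)) # approaches k/d = 2
```

With the window chosen large enough, the truncation error (the probability mass lost past state N) is negligible, which is exactly the guarantee the Finite State Projection provides in a controlled way.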

Relevance: 100.00%

Abstract:

Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sufficiently sensitive to describe the kinematics of the foot–shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90 = 2.17–9.36°; tarsometatarsal joint, MDD90 = 1.03–9.29°; and metatarsophalangeal joint, MDD90 = 1.75–9.12°. These proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear.
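
For context on the MDD90 thresholds quoted above: the minimal detectable difference at 90% confidence is conventionally derived from the standard error of measurement. Assuming the usual formulation, MDD90 = 1.645·√2·SEM with SEM = SD·√(1 − ICC) (the study's own values come from its own data; the numbers below are invented):

```python
import math

def mdd90(sd, icc):
    """Minimal detectable difference at 90% confidence from a reliability
    study: SEM = SD * sqrt(1 - ICC), MDD90 = 1.645 * sqrt(2) * SEM.
    1.645 is the one-sided 90% z-value; sqrt(2) accounts for comparing
    two measurements, each carrying measurement error."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.645 * math.sqrt(2.0) * sem

# e.g. a joint angle with between-session SD = 4.0 deg and ICC = 0.90
threshold = mdd90(4.0, 0.90)
```

Any between-condition difference smaller than this threshold cannot be distinguished from measurement noise, which is why reporting MDD alongside reliability matters for footwear comparisons.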

Relevance: 100.00%

Abstract:

Traffic congestion has a significant impact on the economy and environment. Encouraging the use of multimodal transport (public transport, bicycle, park’n’ride, etc.) has been identified by traffic operators as a good strategy to tackle congestion and its detrimental environmental impacts. A multi-modal, multi-objective trip planner provides users with various multi-modal options optimised for the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both a temporal and spatial scale. The computation of multi-modal, multi-objective trips is a complicated mathematical problem, as it must integrate and utilize a diverse range of large data sets, including both road network information and public transport schedules, and optimise for a number of competing objectives, where fully optimising for one objective, such as travel time, can adversely affect others, such as cost. The relationship between these objectives can also be quite subjective, as their priorities vary from user to user. This paper first outlines the various data requirements and formats needed for the multi-modal, multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data to predict traffic flow on the road network and the status of public transport. It then presents information on the graph data structures representing the road and public transport networks within Brisbane that are used in the trip planner to calculate optimal routes. This provides for an investigation into the various shortest-path algorithms researched over the last few decades, and a foundation for the construction of the multi-modal, multi-objective trip planner through the development of innovative new algorithms that can operate on these large, diverse data sets and competing objectives.
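
One simple baseline for the routing problem described above is to collapse the competing objectives into a single scalar using user preference weights and run Dijkstra's algorithm over the multimodal graph; a full multi-objective planner would instead maintain Pareto-optimal label sets per node. A sketch over a made-up toy network (node names, edge values and weights are all invented):

```python
import heapq

def dijkstra_scalarized(graph, src, dst, weights):
    """Shortest path where each edge carries several objectives (e.g. travel
    time, fare) collapsed into one scalar by preference weights.
    Assumes dst is reachable from src; non-negative edge objectives."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, objectives in graph.get(u, []):
            nd = d + sum(w * o for w, o in zip(weights, objectives))
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]                              # walk predecessors back to src
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Edges: (neighbour, (travel_time_min, fare_dollars)) on a toy network.
graph = {
    "home":     [("bus_stop", (5, 0)), ("car_park", (3, 2))],
    "bus_stop": [("city", (20, 3))],
    "car_park": [("city", (15, 8))],
}
fastest, _ = dijkstra_scalarized(graph, "home", "city", (1.0, 0.0))
cheapest, _ = dijkstra_scalarized(graph, "home", "city", (0.0, 1.0))
```

Changing the weight vector changes the chosen mode, which is exactly the subjectivity between objectives that the abstract highlights: there is no single "best" trip, only trips best for a given preference.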

Relevance: 100.00%

Abstract:

Mathematical descriptions of birth-death-movement processes are often calibrated to measurements from cell biology experiments to quantify tissue growth rates. Here we describe and analyze a discrete model of a birth-death-movement process applied to a typical two-dimensional cell biology experiment. We present three different descriptions of the system: (i) a standard mean-field description, which neglects correlation effects and clustering; (ii) a moment dynamics description, which approximately incorporates correlation and clustering effects; and (iii) averaged data from repeated discrete simulations, which directly incorporate correlation and clustering effects. Comparing these three descriptions indicates that the mean-field and moment dynamics approaches are valid only in certain parameter regimes, and that both fail to make accurate predictions for sufficiently fast birth and death rates, where the effects of spatial correlations and clustering are sufficiently strong. Without any method to distinguish between the parameter regimes where these descriptions are valid, either the mean-field or the moment dynamics model could be calibrated to experimental data under inappropriate conditions, leading to errors in parameter estimation. In this work we demonstrate that a simple measurement of agent clustering and correlation, based on coordination number data, provides an indirect measure of agent correlation and clustering effects, and can therefore be used to distinguish between the validity of the different descriptions of the birth-death-movement process.
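
The coordination-number diagnostic mentioned above can be sketched for agents on a square lattice: count each agent's occupied nearest neighbours and compare the average against the mean-field expectation for an uncorrelated population at the same density. The configuration below is a made-up example, not data from the paper:

```python
import numpy as np

def mean_coordination(lattice):
    """Average number of occupied nearest neighbours per agent on a square
    lattice with periodic boundaries. For an uncorrelated population at
    density rho the mean-field expectation is 4*rho; an excess indicates
    clustering, a deficit indicates spatial exclusion."""
    occ = np.asarray(lattice, dtype=float)
    neigh = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0) +
             np.roll(occ, 1, 1) + np.roll(occ, -1, 1))   # 4-neighbour counts
    return float((neigh * occ).sum() / occ.sum())

# A clustered configuration: one solid 2x2 block on a 4x4 lattice.
clustered = np.zeros((4, 4), dtype=int)
clustered[:2, :2] = 1
rho = clustered.mean()                          # density 0.25
excess = mean_coordination(clustered) - 4 * rho # > 0 signals clustering
```

When this excess is large, the mean-field description is suspect and calibration should fall back on moment dynamics or the discrete simulations themselves.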

Relevance: 100.00%

Abstract:

Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This model is used to construct a control policy for navigation to a goal region in a terrain map built using an on-board RGB-D camera. The terrain includes flat ground, small rocks, and non-traversable rocks. We report the results of 200 simulated and 35 experimental trials that validate the approach and demonstrate the value of considering control uncertainty in maintaining platform safety.
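
The learned outcome model above is a Gaussian process regression. A minimal 1-D sketch with a squared-exponential kernel follows; the training data stand in for commanded-versus-achieved turn angles and are invented, and a planner would use the predictive variance to avoid actions whose outcomes are uncertain:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """GP regression posterior mean and pointwise variance (1-D inputs)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)             # K^-1 y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Commanded turn angle (rad) vs achieved heading change over trials (toy data).
x = np.array([0.0, 0.5, 1.0, 1.5])
y = np.array([0.0, 0.42, 0.85, 1.20])               # slip: achieved < commanded
mu, var = gp_predict(x, y, np.array([0.75, 3.0]))   # near vs far from the data
```

The variance at 3.0, well outside the training range, is far larger than at 0.75, mirroring how the planner treats control actions it has little experience of as risky.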

Relevance: 100.00%

Abstract:

The ability to understand and predict how thermal, hydrological, mechanical and chemical (THMC) processes interact is fundamental to many research initiatives and industrial applications. We (1) present a new thermal–hydrological–mechanical–chemical (THMC) coupling formulation based on non-equilibrium thermodynamics; (2) show how THMC feedback is incorporated in the thermodynamic approach; (3) suggest a unifying thermodynamic framework for multi-scaling; and (4) formulate a new rationale for assessing upper and lower bounds of dissipation for THMC processes. The technique is based on deducing time and length scales suitable for separating processes using a macroscopic finite-time thermodynamic approach. We show that if the time and length scales are suitably chosen, the calculation of entropic bounds can be used to describe three different types of material and process uncertainty: geometric uncertainty, stemming from the microstructure; process uncertainty, stemming from the correct derivation of the constitutive behavior; and uncertainty in time evolution, stemming from the path dependence of the time integration of the irreversible entropy production. Although the approach is specifically formulated here for THMC coupling, we suggest that it has much broader applicability. In a general sense, it consists of finding the entropic bounds of the dissipation defined by the product of thermodynamic force and thermodynamic flux, which in materials science corresponds to generalized stress and generalized strain rates, respectively.
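
The closing definition — dissipation as the product of thermodynamic flux and conjugate force, i.e. generalized strain rate and generalized stress — can be written compactly; the notation below is generic and not necessarily the paper's own symbols:

```latex
% Dissipation as a sum over flux-force pairs (second law: non-negative):
\Phi \;=\; \sum_k J_k \, X_k \;\ge\; 0
% Mechanical special case: the pair is the dissipative (generalized) stress
% and the irreversible (generalized) strain rate:
\Phi \;=\; \boldsymbol{\sigma}^{\mathrm{diss}} : \dot{\boldsymbol{\varepsilon}}^{\mathrm{irr}} \;\ge\; 0
```

Bounding \(\Phi\) from above and below over a chosen time and length scale is what yields the entropic bounds on the three uncertainty types listed in the abstract.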