996 results for PHYSICS, MATHEMATICAL
Abstract:
A mathematical model is presented to understand heat transfer processes during the cooling and re-warming of patients during cardiac surgery. Our compartmental model is able to account for many of the qualitative features observed in the cooling of various regions of the body including the central core containing the majority of organs, the rectal region containing the intestines and the outer peripheral region of skin and muscle. In particular, we focus on the issue of afterdrop: a drop in core temperature following patient re-warming, which can lead to serious post-operative complications. Model results for a typical cooling and re-warming procedure during surgery are in qualitative agreement with experimental data in producing the afterdrop effect and the observed dynamical variation in temperature between the core, rectal and peripheral regions. The influence of heat transfer processes and the volume of each compartmental region on the afterdrop effect is discussed. We find that excess fat on the peripheral and rectal regions leads to an increase in the afterdrop effect. Our model predicts that, by allowing constant re-warming after the core temperature has been raised, the afterdrop effect will be reduced.
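The compartmental structure described above lends itself to a small system of coupled heat-balance ODEs. The sketch below is a generic illustration only: the three compartments follow the abstract, but the exchange coefficients, the cooling/re-warming schedule, and the forward-Euler scheme are assumptions, not the authors' model or parameter values.

```python
# Illustrative three-compartment body-temperature model (core, rectal,
# peripheral). All coefficients and the external-temperature schedule are
# hypothetical placeholders, not values from the paper.

def simulate(t_end=240.0, dt=0.1):
    T_core = T_rect = T_peri = 37.0            # starting temperatures, deg C
    k_cr, k_cp, k_pe = 0.02, 0.03, 0.05        # exchange/loss rates (assumed)
    history = []
    for i in range(round(t_end / dt)):
        t = i * dt
        # cooling phase, then re-warming (assumed schedule, minutes)
        T_ext = 20.0 if t < 120.0 else 40.0
        dT_core = -k_cr * (T_core - T_rect) - k_cp * (T_core - T_peri)
        dT_rect = k_cr * (T_core - T_rect)
        dT_peri = k_cp * (T_core - T_peri) - k_pe * (T_peri - T_ext)
        T_core += dt * dT_core
        T_rect += dt * dT_rect
        T_peri += dt * dT_peri
        history.append((t, T_core, T_rect, T_peri))
    return history
```

Because the periphery couples the core to the environment, the core lags the periphery during both cooling and re-warming, which is the qualitative structure behind the afterdrop discussion above.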
Abstract:
Mathematical modeling of bacterial chemotaxis systems has been influential and insightful in helping to understand experimental observations. We provide here a comprehensive overview of the range of mathematical approaches used for modeling, within a single bacterium, chemotactic processes caused by changes to external gradients in its environment. Specific areas of the bacterial system which have been studied and modeled are discussed in detail, including the modeling of adaptation in response to attractant gradients, the intracellular phosphorylation cascade, membrane receptor clustering, and spatial modeling of intracellular protein signal transduction. The importance of producing robust models that address adaptation, gain, and sensitivity are also discussed. This review highlights that while mathematical modeling has aided in understanding bacterial chemotaxis on the individual cell scale and guiding experimental design, no single model succeeds in robustly describing all of the basic elements of the cell. We conclude by discussing the importance of this and the future of modeling in this area.
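One recurring theme in the models surveyed above, perfect adaptation of receptor activity after a step change in attractant, can be illustrated with a minimal integral-feedback sketch. The activity law and the constants below are generic textbook-style assumptions, not any specific model from the review.

```python
# Minimal integral-feedback sketch of perfect adaptation (hypothetical units).
# Receptor activity A rises with methylation m and falls with ligand L;
# methylation slowly integrates the deviation of A from its set point A0.

def adapt(step_time=50.0, t_end=200.0, dt=0.01, A0=0.5, k=0.1):
    m = 0.5
    trace = []
    for i in range(round(t_end / dt)):
        t = i * dt
        L = 0.0 if t < step_time else 1.0    # step increase in attractant (assumed)
        A = max(0.0, m - L)                  # crude receptor-activity law (assumed)
        m += dt * k * (A0 - A)               # integral feedback via methylation
        trace.append((t, A))
    return trace
```

After the attractant step the activity drops transiently and then returns to its set point regardless of the step size, which is the robustness property the review emphasises.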
Abstract:
We review the application of mathematical modeling to understanding the behavior of populations of chemotactic bacteria. The application of continuum mathematical models, in particular generalized Keller-Segel models, is discussed along with attempts to incorporate the microscale (individual) behavior on the macroscale, modeling the interaction between different species of bacteria, the interaction of bacteria with their environment, and methods used to obtain experimentally verified parameter values. We allude briefly to the role of modeling pattern formation in understanding collective behavior within bacterial populations. Various aspects of each model are discussed and areas for possible future research are postulated.
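A minimal Keller-Segel system of the kind reviewed above couples a cell density u to a chemoattractant concentration c that the cells themselves secrete. The explicit finite-difference sketch below, on a 1-D periodic domain, is illustrative only; the grid spacing, time step, and coefficients are all assumed values.

```python
# Minimal 1-D Keller-Segel sketch: cell density u diffuses and drifts up the
# gradient of chemoattractant c; c diffuses, is produced by cells, and decays.
# Explicit finite differences, periodic boundary; all numbers are illustrative.

def step(u, c, dx=0.1, dt=0.001, Du=1.0, Dc=1.0, chi=2.0, alpha=1.0, beta=1.0):
    n = len(u)
    u_new, c_new = u[:], c[:]
    for i in range(n):
        im, ip = (i - 1) % n, (i + 1) % n
        lap_u = (u[ip] - 2.0 * u[i] + u[im]) / dx ** 2
        lap_c = (c[ip] - 2.0 * c[i] + c[im]) / dx ** 2
        # divergence of the chemotactic flux chi * u * dc/dx (face-centred)
        flux_p = chi * 0.5 * (u[ip] + u[i]) * (c[ip] - c[i]) / dx
        flux_m = chi * 0.5 * (u[i] + u[im]) * (c[i] - c[im]) / dx
        u_new[i] = u[i] + dt * (Du * lap_u - (flux_p - flux_m) / dx)
        c_new[i] = c[i] + dt * (Dc * lap_c + alpha * u[i] - beta * c[i])
    return u_new, c_new
```

Cell mass is conserved by construction (both the diffusive and chemotactic fluxes telescope on a periodic grid), which is a convenient sanity check when experimenting with parameters.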
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, including conventional buildings and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstract levels of these models, and their links to the literature for intelligent buildings. The advantages and limitations of the applied mathematical models are identified and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges for the design and management of intelligent buildings and led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, model approaches adopted in neural networks, expert systems, fuzzy logic and genetic models provide a promising method to accommodate these complications as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to the applications to conventional buildings and intelligent buildings. It concluded that mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, that can be used to address model uncertainty, are well suited for modelling intelligent buildings. Despite the progress, the possible future development of intelligent buildings based on the current trends implies some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system with the mathematical models that have been developed for such a system, this review addresses the influences of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings for the future.
Abstract:
This study was an attempt to identify the epistemological roots of knowledge when students carry out hands-on experiments in physics. We found that, within the context of designing a solution to a stated problem, subjects constructed and ran thought experiments intertwined within the processes of conducting physical experiments. We show that the process of alternating between these two modes, empirical experimentation and experimentation in thought, leads towards a convergence on scientifically acceptable concepts. We call this process mutual projection. In the process of mutual projection, external representations were generated. Objects in the physical environment were represented in an imaginary world and these representations were associated with processes in the physical world. It is through this coupling that constituents of both the imaginary world and the physical world gain meaning. We further show that the external representations are rooted in sensory interaction and constitute a semi-symbolic pictorial communication system, a sort of primitive 'language', which is developed as the practical work continues. The constituents of this pictorial communication system are used in the thought experiments taking place in association with the empirical experimentation. The results of this study provide a model of physics learning during hands-on experimentation.
Abstract:
Individuals with elevated levels of plasma low density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process; apolipoprotein B-100 present on LDL particles binds to a specific receptor (LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit comprising the LDL-LDL-R complex is internalized forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell, or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis, consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells, compared to when a continuous supply of LDL particles is available. Whereas the former situation is common to in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the longtime behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
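The endocytotic cascade summarised above (binding in coated pits, internalization, lysosomal degradation) can be caricatured as a linear chain of ODEs. The sketch below is not the authors' model: the rate constants, the single lumped binding step, and the Euler solver are illustrative assumptions. Setting supply_rate to zero mimics a single bolus, while a positive value mimics continuous in vivo delivery.

```python
# Illustrative LDL endocytosis chain (all rate constants hypothetical):
#   extracellular -> receptor-bound -> internalized (endosome) -> degraded

def simulate(bolus=1.0, supply_rate=0.0, t_end=100.0, dt=0.01):
    ext, bound, internal, degraded = bolus, 0.0, 0.0, 0.0
    k_bind, k_int, k_deg = 0.1, 0.2, 0.05     # assumed rate constants
    for _ in range(round(t_end / dt)):
        d_ext = supply_rate - k_bind * ext    # delivery minus binding to pits
        d_bound = k_bind * ext - k_int * bound
        d_int = k_int * bound - k_deg * internal
        d_deg = k_deg * internal
        ext += dt * d_ext
        bound += dt * d_bound
        internal += dt * d_int
        degraded += dt * d_deg
    return ext, bound, internal, degraded
```

With a bolus, the extracellular pool is exhausted and the chain drains; with continuous supply, the pools approach a steady state instead, which is the qualitative contrast between the in vitro and in vivo regimes discussed above.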
Abstract:
Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. 
VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex upward form. In the remainder of published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time delay parameter and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and z, the temperature increase needed to change the D value by a factor of 10, in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates changed as a function of the square root of time would be consistent with a diffusion-limited process.
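The model's central assumption, a first-order process whose specific rate varies inversely with the square root of time, integrates to a survivor curve that is linear in sqrt(t) on a log scale. The sketch below writes this out with a made-up rate constant, together with the shoulder variant obtained by adding a time-delay parameter as the abstract describes.

```python
import math

# Survivor curve implied by dN/dt = -(k / sqrt(t)) * N:
#   log10(N / N0) = -(2k / ln 10) * sqrt(t)
# The rate constant k below is a made-up illustrative value.

def log10_survivors(t, k=0.5):
    return -(2.0 * k / math.log(10.0)) * math.sqrt(t)

def log10_survivors_shoulder(t, k=0.5, t_lag=2.0):
    # shoulder variant: no inactivation before an assumed delay time t_lag
    return 0.0 if t <= t_lag else log10_survivors(t - t_lag, k)
```

Successive equal time increments produce shrinking decrements in log survivors, giving the convex-upward (tailing) shape the abstract describes.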
Abstract:
Molecular size and structure of the gluten polymers that make up the major structural components of wheat are related to their rheological properties via modern polymer rheology concepts. Interactions between polymer chain entanglements and branching are seen to be the key mechanisms determining the rheology of HMW polymers. Recent work confirms the observation that dynamic shear plateau modulus is essentially independent of variations in MW amongst wheat varieties of varying baking performance and is not related to variations in baking performance, and that it is not the size of the soluble glutenin polymers, but the structural and rheological properties of the insoluble polymer fraction that are mainly responsible for variations in baking performance. The rheological properties of gas cell walls in bread doughs are considered to be important in relation to their stability and gas retention during proof and baking, in particular their extensional strain hardening properties. Large deformation rheological properties of gas cell walls were measured using biaxial extension for a number of doughs of varying breadmaking quality at constant strain rate and elevated temperatures in the range 25-60 degrees C. Strain hardening and failure strain of cell walls were both seen to decrease with temperature, with cell walls in good breadmaking doughs remaining stable and retaining their strain hardening properties to higher temperatures (60 degrees C), whilst the cell walls of poor breadmaking doughs became unstable at lower temperatures (45-50 degrees C) and had lower strain hardening. Strain hardening measured at 50 degrees C gave good correlations with baking volume, with the best correlations achieved between those rheological measurements and baking tests which used similar mixing conditions.
As predicted by the Considère failure criterion, a strain hardening value of 1 defines a region below which gas cell walls become unstable, and discriminates well between the baking quality of a range of commercial flour blends of varying quality. This indicates that the stability of gas cell walls during baking is strongly related to their strain hardening properties, and that extensional rheological measurements can be used as predictors of baking quality.
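The Considère criterion invoked above says a stretched wall becomes unstable once true stress grows more slowly than exponentially in strain, i.e. once the strain-hardening index n in a fit of the form stress = K * exp(n * strain) drops below 1 (since d(stress)/d(strain) = n * stress). The code below illustrates this check on synthetic data; the exponential fitting form and all numbers are assumptions, not the authors' measurement protocol.

```python
import math

# Considère-type stability check (illustrative): fit ln(stress) = ln K + n*strain
# by least squares, then n > 1 implies d(stress)/d(strain) > stress, i.e. the
# wall resists localized thinning. Data below are synthetic, not from the paper.

def strain_hardening_index(strains, stresses):
    # least-squares slope of ln(stress) against strain
    m = len(strains)
    xbar = sum(strains) / m
    ybar = sum(math.log(s) for s in stresses) / m
    num = sum((x - xbar) * (math.log(s) - ybar) for x, s in zip(strains, stresses))
    den = sum((x - xbar) ** 2 for x in strains)
    return num / den

def stable_by_considere(strains, stresses):
    return strain_hardening_index(strains, stresses) > 1.0
```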
Abstract:
The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment in which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased the required cooling time decreased. In addition, it was found that the time needed to allow oil absorption during cooling was extremely sensitive to pore radius, indicating the importance of an accurate pore size determination in future studies.
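The sensitivity of cooling time to the convective coefficient noted above can be illustrated with a simple lumped-capacitance estimate. This is deliberately cruder than the paper's models (it ignores the crust/core structure and internal gradients), and all property values below are assumed, not taken from the paper.

```python
import math

# Lumped-capacitance estimate of post-frying cooling time (illustrative only).
# Time for a thin slab, cooled convectively from both faces, to fall from T0
# to T_target in ambient air:
#   t = (rho * c * (thickness/2) / h) * ln((T0 - T_amb) / (T_target - T_amb))
# All material properties and temperatures below are assumed values.

def cooling_time(h, T0=180.0, T_amb=25.0, T_target=60.0,
                 rho=1100.0, c=3000.0, thickness=0.01):
    tau = rho * c * (thickness / 2.0) / h    # characteristic cooling time, s
    return tau * math.log((T0 - T_amb) / (T_target - T_amb))
```

In this estimate the cooling time scales as 1/h, consistent with the abstract's finding that a larger air-side heat transfer coefficient shortens the required cooling time.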
Abstract:
This paper identifies the major challenges in the area of pattern formation. The work is also motivated by the need for development of a single framework to surmount these challenges. A framework based on the control of macroscopic parameters is proposed. The issue of transformation of patterns is specifically considered. A definition for transformation and four special cases, namely elementary and geometrical transformations by repositioning all or some robots in the pattern, are provided. Two feasible tools for pattern transformation, namely a macroscopic parameter method and a mathematical tool, the Möbius transformation (also known as the linear fractional transformation), are introduced. The realization of the unifying framework considering planning and communication is reported.
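The Möbius (linear fractional) transformation mentioned above maps robot positions, treated as points of the complex plane, by z -> (az + b)/(cz + d) with ad - bc != 0. The sketch below is a generic illustration of that map applied to a swarm pattern, not the paper's planning or communication machinery; the coefficients used are arbitrary examples.

```python
# Pattern transformation via a Möbius (linear fractional) map applied to
# robot positions represented as complex numbers. Coefficients are arbitrary
# illustrative values, not taken from the paper.

def moebius(positions, a, b, c, d):
    if a * d - b * c == 0:
        raise ValueError("degenerate map: require a*d - b*c != 0")
    return [(a * z + b) / (c * z + d) for z in positions]
```

With c = 0 and d = 1 the map reduces to a rotation/scaling plus translation (a purely geometrical transformation in the paper's terms); a nonzero c bends straight-line patterns into circular arcs.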
Abstract:
The work reported in this paper is motivated by the need for developing swarm pattern transformation methodologies. Two methods, namely a macroscopic method and a mathematical method are investigated for pattern transformation. The first method is based on macroscopic parameters while the second method is based on both microscopic and macroscopic parameters. A formal definition to pattern transformation considering four special cases of transformation is presented. Simulations on a physics simulation engine are used to confirm the feasibility of the proposed transformation methods. A brief comparison between the two methods is also presented.
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
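The totality claim above, that every quotient including division by zero is defined, can be mimicked in ordinary code. The toy below represents the two transreal infinities with IEEE infinities and nullity (0/0) with IEEE NaN; IEEE semantics do not coincide with transreal arithmetic everywhere (for instance, transreal analysis assigns a definite value to zero to the power of zero, whereas IEEE and Python make their own choices), so this is a stand-in for illustration, not the formal transreal system.

```python
import math

# Toy total division in the spirit of transreal arithmetic: every quotient
# is defined. Signed infinities stand in for the transreal infinities and
# IEEE NaN stands in for nullity; this is an illustrative representation only.

def transreal_div(x, y):
    if y != 0:
        return x / y          # ordinary real quotient
    if x > 0:
        return math.inf       # positive / 0 -> +infinity
    if x < 0:
        return -math.inf      # negative / 0 -> -infinity
    return math.nan           # 0 / 0 -> nullity
```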