41 results for Mathematical physics
at Université de Lausanne, Switzerland
Abstract:
In a thermally fluctuating long linear polymeric chain in solution, the ends approach each other from time to time. At such moments the chain can be regarded as closed and thus as forming a knot, or rather a virtual knot. Several earlier studies of random knotting demonstrated that simpler knots show a higher occurrence for shorter random walks than do more complex knots. However, up to now there have been no rules that could be used to predict the optimal length of a random walk, i.e. the length for which a given knot reaches its highest occurrence. Using numerical simulations, we show here that a power law accurately describes the relation between the optimal lengths of random walks leading to the formation of different knots and the previously characterized lengths of ideal knots of a corresponding type.
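A minimal sketch of how such a power-law relation could be fitted once the optimal walk lengths and the ideal-knot lengths are tabulated; the function name and the arrays it expects are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_power_law(ideal_lengths, optimal_lengths):
    """Fit optimal_length ~ A * ideal_length**b by linear regression in log-log space.

    ideal_lengths:   length-to-diameter ratios of the ideal knots (one per knot type)
    optimal_lengths: random-walk lengths at which each knot's occurrence peaks
    Returns the prefactor A and the exponent b of the power law.
    """
    log_x = np.log(np.asarray(ideal_lengths, dtype=float))
    log_y = np.log(np.asarray(optimal_lengths, dtype=float))
    b, log_A = np.polyfit(log_x, log_y, 1)   # slope = exponent, intercept = log prefactor
    return np.exp(log_A), b
```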
Abstract:
PURPOSE: The purpose of this study was to develop a mathematical model (sine model, SIN) to describe fat oxidation kinetics as a function of the relative exercise intensity [% of maximal oxygen uptake (%VO2max)] during graded exercise, and to determine the exercise intensity (Fatmax) that elicits maximal fat oxidation (MFO) and the intensity at which fat oxidation becomes negligible (Fatmin). This model included three independent variables (dilatation, symmetry, and translation) that incorporated the primary expected modulations of the curve due to training level or body composition. METHODS: Thirty-two healthy volunteers (17 women and 15 men) performed a graded exercise test on a cycle ergometer, with 3-min stages and 20-W increments. Substrate oxidation rates were determined using indirect calorimetry. SIN was compared with measured values (MV) and with other methods currently used [i.e., the RER method (MRER) and third-degree polynomial curves (P3)]. RESULTS: There was no significant difference in fitting accuracy between SIN and P3 (P = 0.157), whereas MRER was less precise than SIN (P < 0.001). Fatmax (44 ± 10% VO2max) and MFO (0.37 ± 0.16 g·min⁻¹) determined using SIN were significantly correlated with MV, P3, and MRER (P < 0.001). The dilatation variable was correlated with Fatmax, Fatmin, and MFO (r = 0.79, r = 0.67, and r = 0.60, respectively; P < 0.001). CONCLUSIONS: The SIN model offers the same precision as the other methods currently used for determining Fatmax and MFO but in addition allows calculation of Fatmin. Moreover, the three independent variables are directly related to the main expected modulations of the fat oxidation curve. SIN therefore seems to be an appropriate tool for analyzing fat oxidation kinetics obtained during graded exercise.
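The abstract does not reproduce the SIN equation itself; the sketch below only illustrates how a sine-shaped fat oxidation curve over %VO2max could be parameterized by dilatation, symmetry and translation, and how Fatmax would be located on it. The functional form, names and numbers are assumptions for illustration, not the authors' model:

```python
import numpy as np

def sin_model(intensity, dilatation, symmetry, translation, mfo=1.0):
    """Illustrative bell-shaped sine curve of fat oxidation vs. exercise intensity (%VO2max).

    dilatation  stretches or compresses the curve horizontally,
    symmetry    skews it (exponent on the sine),
    translation shifts it along the intensity axis,
    mfo         scales the peak (maximal fat oxidation, g/min).
    """
    x = (np.asarray(intensity, dtype=float) - translation) / (100.0 * dilatation)
    x = np.clip(x, 0.0, 1.0)              # fat oxidation assumed zero outside this window
    return mfo * np.sin(np.pi * x) ** symmetry

# Locating Fatmax on a grid of intensities (parameter values are placeholders):
intensities = np.linspace(0, 100, 1001)
rates = sin_model(intensities, dilatation=0.9, symmetry=1.2, translation=5.0, mfo=0.37)
fatmax = intensities[np.argmax(rates)]    # intensity eliciting maximal fat oxidation
```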
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model, proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a bigger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of GR. Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the IC to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics, the application of the Absolute Differential Calculus (ADC) to gravitational theory, meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, the forces of 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was modeled as a spherical joint. Muscle wrapping was modeled around the humeral head, which was assumed to be spherical. The dynamical equations were solved using a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
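A minimal NumPy sketch of the two-step resolution described above (a pseudo-inverse solution followed by a null-space correction toward physiological bounds); the matrix A, the moment vector b and the bound handling are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def solve_muscle_forces(A, b, f_min, f_max, iters=50):
    """Two-step redundancy resolution sketch.

    A:      (m x n) geometry / moment-arm matrix mapping n muscle forces to m net joint moments
    b:      (m,) required net joint moments
    f_min, f_max: (n,) physiological lower and upper bounds on the muscle forces
    """
    A_pinv = np.linalg.pinv(A)
    f = A_pinv @ b                        # minimum-norm (least-squares effort) solution
    N = np.eye(A.shape[1]) - A_pinv @ A   # null-space projector: A @ (N @ z) = 0 for any z
    for _ in range(iters):
        # Push bound violations back, but only through the null space,
        # so the moment equilibrium A @ f = b is preserved exactly.
        violation = np.clip(f, f_min, f_max) - f
        if np.allclose(violation, 0.0, atol=1e-9):
            break
        f = f + N @ violation
    return f
```

The loop may not fully reach the bounds if they conflict with the equilibrium constraint; in that case a dedicated constrained solver would be needed.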
Abstract:
Knots are usually categorized in terms of topological properties that are invariant under changes in a knot's spatial configuration(1-4). Here we approach knot identification from a different angle, by considering the properties of particular geometrical forms which we define as 'ideal'. For a knot with a given topology and assembled from a tube of uniform diameter, the ideal form is the geometrical configuration having the highest ratio of volume to surface area. Practically, this is equivalent to determining the shortest piece of tube that can be closed to form the knot. Because the notion of an ideal form is independent of absolute spatial scale, the length-to-diameter ratio of a tube providing an ideal representation is constant, irrespective of the tube's actual dimensions. We report the results of computer simulations which show that these ideal representations of knots have surprisingly simple geometrical properties. In particular, there is a simple linear relationship between the length-to-diameter ratio and the crossing number, i.e. the number of intersections in a two-dimensional projection of the knot averaged over all directions. We have also found that the average shape of knotted polymeric chains in thermal equilibrium is closely related to the ideal representation of the corresponding knot type. Our observations provide a link between ideal geometrical objects and the behaviour of seemingly disordered systems, and allow the prediction of properties of knotted polymers such as their electrophoretic mobility(5).
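Written out, the reported linear relationship can be summarized as follows, where L/D is the length-to-diameter ratio of the ideal tube, C is the average crossing number, and α and β are fit constants not quoted in the abstract:

```latex
\frac{L}{D} \;\approx\; \alpha\, C + \beta
```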
Abstract:
The purpose of this study was to develop a two-compartment metabolic model of brain metabolism to assess oxidative metabolism from [1-¹¹C]acetate radiotracer experiments, using an approach previously applied in ¹³C magnetic resonance spectroscopy (MRS), and to compare it with a one-tissue compartment model previously used in brain [1-¹¹C]acetate studies. Compared with ¹³C MRS studies, ¹¹C radiotracer measurements provide a single uptake curve representing the sum of all labeled metabolites, without chemical differentiation, but with higher temporal resolution. The reliability of the adjusted metabolic fluxes was analyzed with Monte Carlo simulations using synthetic ¹¹C uptake curves, based on a typical arterial input function and previously published values of the neuroglial fluxes V_tca^g, V_x, V_nt, and V_tca^n measured in dynamic ¹³C MRS experiments. Assuming V_x^g = 10 × V_tca^g and V_x^n = V_tca^n, it was possible to assess the composite glial tricarboxylic acid (TCA) cycle flux V_gt^g (V_gt^g = V_x^g × V_tca^g / (V_x^g + V_tca^g)) and the neurotransmission flux V_nt from ¹¹C tissue-activity curves obtained within 30 minutes in the rat cortex with a beta-probe after a bolus infusion of [1-¹¹C]acetate (n=9), resulting in V_gt^g = 0.136±0.042 and V_nt = 0.170±0.103 μmol/g per minute (mean±s.d. of the group), in good agreement with ¹³C MRS measurements.
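Restated in display form, the composite glial TCA cycle flux and the assumptions quoted above read (only the typesetting is added here):

```latex
V_{gt}^{g} \;=\; \frac{V_{x}^{g}\, V_{tca}^{g}}{V_{x}^{g} + V_{tca}^{g}},
\qquad
V_{x}^{g} = 10\, V_{tca}^{g},
\qquad
V_{x}^{n} = V_{tca}^{n}.
```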
Abstract:
Rapid response to: Ortegón M, Lim S, Chisholm D, Mendis S. Cost effectiveness of strategies to combat cardiovascular disease, diabetes, and tobacco use in sub-Saharan Africa and South East Asia: mathematical modelling study. BMJ. 2012 Mar 2;344:e607. doi: 10.1136/bmj.e607. PMID: 22389337.
Abstract:
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16) n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types K, the ⟨ACN(K)⟩ for each knot type can be described by a fitted function of n with constants a, b and c depending on K, where n0 is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length at which a statistical ensemble of configurations with given knot type K, upon cutting, equilibration and reclosure to a new knot type, does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration Rg.
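In symbols (notation added here), the equilibrium length defined above is the chain length at which the knot-specific profile crosses the profile of all closed walks:

```latex
\langle \mathrm{ACN}(K) \rangle \big( n_{\mathrm{eq}}(K) \big)
\;=\;
\langle \mathrm{ACN} \rangle_{\text{all closed walks}} \big( n_{\mathrm{eq}}(K) \big)
```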
Abstract:
The concept of ideal geometric configurations was recently applied to the classification and characterization of various knots. Different knots in their ideal form (i.e., the one requiring the shortest length of a constant-diameter tube to form a given knot) were shown to have an overall compactness proportional to the time-averaged compactness of thermally agitated knotted polymers forming corresponding knots. This was useful for predicting the relative speed of electrophoretic migration of different DNA knots. Here we characterize the ideal geometric configurations of catenanes (called links by mathematicians), i.e., closed curves in space that are topologically linked to each other. We demonstrate that the ideal configurations of different catenanes show interrelations very similar to those observed in the ideal configurations of knots. By analyzing literature data on electrophoretic separations of torus-type DNA catenanes of increasing complexity, we observed that their electrophoretic migration is roughly proportional to the overall compactness of ideal representations of the corresponding catenanes. This correlation does not apply, however, to electrophoretic migration of certain replication intermediates, believed up to now to represent the simplest torus-type catenanes. We propose, therefore, that freshly replicated circular DNA molecules, in addition to forming regular catenanes, may also form hemicatenanes.
Abstract:
This dissertation investigates the nature of space-time as described by the theory of general relativity. It mainly argues that space-time can be naturally interpreted as a physical structure in the precise sense of a network of concrete space-time relations among concrete space-time points that do not possess any intrinsic properties or any intrinsic identity. Such an interpretation is fundamentally based on two related key features of general relativity, namely substantive general covariance and background independence, where substantive general covariance is understood as a gauge-theoretic invariance under active diffeomorphisms and background independence is understood in the sense that the metric (or gravitational) field is dynamical and that, strictly speaking, it cannot be uniquely split into a purely gravitational part and a fixed purely inertial part or background. More broadly, a precise notion of (physical) structure is developed within the framework of a moderate version of structural realism, understood as a metaphysical claim about what there is in the world. The development of this moderate structural realism pursues two main aims. The first is purely metaphysical, the aim being to develop a coherent metaphysics of structures and of objects (particular attention is paid to the questions of identity and individuality of the latter within this structural realist framework). The second is to argue that moderate structural realism provides a convincing interpretation of the world as described by fundamental physics and in particular of space-time as described by general relativity. This structuralist interpretation of space-time is discussed within the traditional substantivalist-relationalist debate, which is best understood within the broader framework of the question about the relationship between space-time on the one hand and matter on the other. In particular, it is claimed that space-time structuralism does not constitute a 'tertium quid' in the traditional debate. Some new light on the question of the nature of space-time may be shed by the fundamental foundational issue of space-time singularities. Their possible 'non-local' (or global) feature is discussed in some detail, and it is argued that a broad structuralist conception of space-time may provide a physically meaningful understanding of space-time singularities that is not plagued by the conceptual difficulties of the usual atomistic framework. Indeed, part of these difficulties may come from the standard differential geometric description of space-time, which encodes this atomistic framework to some extent; this raises the question of the importance of the mathematical formalism for the interpretation of space-time.
Abstract:
Synchrotron radiation X-ray tomographic microscopy is a nondestructive method providing ultra-high-resolution 3D digital images of rock microstructures. We describe this method and, to demonstrate its wide applicability, we present 3D images of very different rock types: Berea sandstone, Fontainebleau sandstone, dolomite, calcitic dolomite, and three-phase magmatic glasses. For some samples, full and partial saturation scenarios are considered using oil, water, and air. The rock images precisely reveal the 3D rock microstructure, the pore space morphology, and the interfaces between fluids saturating the same pore. We provide the raw image data sets as online supplementary material, along with laboratory data describing the rock properties. By making these data sets available to other research groups, we aim to stimulate work based on digital rock images of high quality and high resolution. We also discuss and suggest possible applications and research directions that can be pursued on the basis of our data.
Abstract:
Despite their limited proliferation capacity, regulatory T cells (T(regs)) constitute a population maintained over the entire lifetime of a human organism. The means by which T(regs) sustain a stable pool in vivo are controversial. Using a mathematical model, we address this issue by evaluating several biological scenarios of the origins and the proliferation capacity of two subsets of T(regs): precursor CD4(+)CD25(+)CD45RO(-) and mature CD4(+)CD25(+)CD45RO(+) cells. The lifelong dynamics of T(regs) are described by a set of ordinary differential equations, driven by a stochastic process representing the major immune reactions involving these cells. The model dynamics are validated using data from human donors of different ages. Analysis of the data led to the identification of two properties of the dynamics: (1) the equilibrium in the CD4(+)CD25(+)FoxP3(+) T(regs) population is maintained over both precursor and mature T(regs) pools together, and (2) the ratio between precursor and mature T(regs) is inverted in the early years of adulthood. Then, using the model, we identified three biologically relevant scenarios that have the above properties: (1) the only source of mature T(regs) is the antigen-driven differentiation of precursors that acquire the mature profile in the periphery, and the proliferation of T(regs) is essential for the development and the maintenance of the pool; or there exist other sources of mature T(regs), such as (2) a homeostatic density-dependent regulation or (3) thymus- or effector-derived T(regs), and in both of these cases antigen-induced proliferation is not necessary for the development of a stable pool of T(regs). This is the first time that a mathematical model built to describe the in vivo dynamics of regulatory T cells has been validated using human data. The model provides an invaluable tool for estimating the number of regulatory T cells as a function of time in the blood of patients who have received a solid organ transplant or who suffer from an autoimmune disease.
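A minimal sketch of a precursor/mature two-compartment ODE system of the kind described above; the rate constants, their values and the function name are placeholders for illustration, not the authors' fitted model, and the stochastic immune-reaction drive is omitted:

```python
import numpy as np
from scipy.integrate import solve_ivp

def treg_model(t, y, s, d_p, d_m, k_diff, k_prol):
    """y = [P, M]: precursor (CD45RO-) and mature (CD45RO+) regulatory T cell pools.

    s       thymic export of precursors
    d_p     death rate of precursors
    d_m     death rate of mature cells
    k_diff  antigen-driven differentiation of precursors into mature cells
    k_prol  peripheral proliferation of mature cells
    """
    P, M = y
    dP = s - k_diff * P - d_p * P
    dM = k_diff * P + k_prol * M - d_m * M
    return [dP, dM]

# Integrate over an illustrative 80-year lifespan (time in years, placeholder parameters).
sol = solve_ivp(treg_model, t_span=(0, 80), y0=[1.0, 0.1],
                args=(0.5, 0.05, 0.08, 0.1, 0.05), dense_output=True)
```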
Abstract:
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = (3 ln 2)/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation result shows that the model in fact works very well for the entire range of p. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.
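In equation form, the walk-walk result quoted above reads:

```latex
\langle \mathrm{ICN} \rangle (n) \;\approx\; \frac{3 \ln 2}{8}\, n \;\approx\; 0.2599\, n
```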