42 results for Fractional mathematical model

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

PURPOSE: The purpose of this study was to develop a mathematical model (sine model, SIN) to describe fat oxidation kinetics as a function of the relative exercise intensity [% of maximal oxygen uptake (%VO2max)] during graded exercise, and to determine the exercise intensity (Fatmax) that elicits maximal fat oxidation (MFO) and the intensity at which fat oxidation becomes negligible (Fatmin). The model includes three independent variables (dilatation, symmetry, and translation) that capture the main expected modulations of the curve due to training level or body composition. METHODS: Thirty-two healthy volunteers (17 women and 15 men) performed a graded exercise test on a cycle ergometer, with 3-min stages and 20-W increments. Substrate oxidation rates were determined using indirect calorimetry. SIN was compared with measured values (MV) and with other methods currently in use [i.e., the RER method (MRER) and third-order polynomial curves (P3)]. RESULTS: There was no significant difference in fitting accuracy between SIN and P3 (P = 0.157), whereas MRER was less precise than SIN (P < 0.001). Fatmax (44 ± 10% VO2max) and MFO (0.37 ± 0.16 g·min⁻¹) determined using SIN were significantly correlated with MV, P3, and MRER (P < 0.001). The dilatation variable was correlated with Fatmax, Fatmin, and MFO (r = 0.79, r = 0.67, and r = 0.60, respectively; P < 0.001). CONCLUSIONS: The SIN model offers the same precision as the other methods currently used to determine Fatmax and MFO but, in addition, allows calculation of Fatmin. Moreover, its three independent variables relate directly to the main expected modulations of the fat oxidation curve. SIN therefore appears to be an appropriate tool for analyzing fat oxidation kinetics obtained during graded exercise.
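
The abstract does not reproduce the SIN equation itself, so the Python sketch below only illustrates one plausible sine-based parameterization in which dilatation widens the curve, symmetry skews its peak, and translation shifts it along the intensity axis; the functional form and all parameter values are assumptions for illustration, not the published model.

```python
import numpy as np

def sin_model(intensity, mfo, dilatation, symmetry, translation):
    """Hypothetical sine-based fat oxidation curve (g/min).

    intensity: relative exercise intensity as a fraction of VO2max (0-1).
    mfo: peak fat oxidation rate; the remaining three shape the curve.
    """
    # Skew the intensity axis (symmetry), then shift (translation) and
    # stretch (dilatation) the half-period of a sine wave.
    x = np.clip(intensity, 0.0, 1.0) ** symmetry
    arg = np.pi * (x - translation) / dilatation
    return np.where((arg > 0.0) & (arg < np.pi), mfo * np.sin(arg), 0.0)

# Locate Fatmax (peak) and Fatmin (where oxidation vanishes) on a grid.
grid = np.linspace(0.0, 1.0, 1001)
rates = sin_model(grid, mfo=0.37, dilatation=1.0, symmetry=1.0, translation=0.0)
fatmax = grid[np.argmax(rates)]
above = grid > fatmax
fatmin = grid[above][np.argmax(rates[above] <= 1e-9)]
print(f"Fatmax ~ {fatmax:.0%} VO2max, Fatmin ~ {fatmin:.0%} VO2max")
```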

Relevance: 100.00%

Abstract:

The purpose of this study was to develop a two-compartment metabolic model of brain metabolism to assess oxidative metabolism from [1-¹¹C]acetate radiotracer experiments, using an approach previously applied in ¹³C magnetic resonance spectroscopy (MRS), and to compare it with a one-tissue compartment model previously used in brain [1-¹¹C]acetate studies. Compared with ¹³C MRS studies, ¹¹C radiotracer measurements provide a single uptake curve representing the sum of all labeled metabolites, without chemical differentiation, but with higher temporal resolution. The reliability of the adjusted metabolic fluxes was analyzed with Monte Carlo simulations using synthetic ¹¹C uptake curves, based on a typical arterial input function and previously published values of the neuroglial fluxes V_TCA(g), V_x, V_NT, and V_TCA(n) measured in dynamic ¹³C MRS experiments. Assuming V_x(g) = 10 × V_TCA(g) and V_x(n) = V_TCA(n), it was possible to assess the composite glial tricarboxylic acid (TCA) cycle flux V_gt(g) = V_x(g) × V_TCA(g) / (V_x(g) + V_TCA(g)) and the neurotransmission flux V_NT from ¹¹C tissue-activity curves obtained within 30 minutes in the rat cortex with a beta-probe after a bolus infusion of [1-¹¹C]acetate (n = 9), resulting in V_gt(g) = 0.136 ± 0.042 and V_NT = 0.170 ± 0.103 μmol/g per minute (mean ± s.d. of the group), in good agreement with ¹³C MRS measurements.
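
The composite glial TCA cycle flux quoted above combines V_x(g) and V_TCA(g) in a harmonic-mean-like expression. The minimal sketch below simply evaluates that published relation; the V_TCA(g) value is an illustrative back-calculation consistent with the reported V_gt(g), not a measured quantity.

```python
def composite_glial_flux(v_x_g, v_tca_g):
    """V_gt(g) = V_x(g) * V_TCA(g) / (V_x(g) + V_TCA(g)), in umol/g/min."""
    return v_x_g * v_tca_g / (v_x_g + v_tca_g)

# Under the abstract's assumption V_x(g) = 10 * V_TCA(g), the composite
# flux reduces to (10/11) * V_TCA(g). An illustrative V_TCA(g) of 0.15
# (assumed, not reported) reproduces the published V_gt(g) of ~0.136.
v_tca_g = 0.15
print(composite_glial_flux(10 * v_tca_g, v_tca_g))  # ~0.136 umol/g/min
```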

Relevance: 100.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since its introduction in the early 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even though a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major risks remains the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis while avoiding unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more probable because of their longer life expectancy.
The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by characterising image quality in musculoskeletal examinations, focusing in particular on the behaviour of image noise and spatial resolution when iterative reconstruction is used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we relied on mathematical model observers, with our experimental parameters determining the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of their incorporation of elements of the human visual system.
This work confirmed that model observers make it possible to assess image quality using a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are indispensable for optimising imaging protocols.
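
The thesis abstract does not specify the observers mathematically. As one common anthropomorphic example, a channelized Hotelling observer (CHO) reduces images to a few channel outputs and computes a detectability index from signal-present and signal-absent samples; the sketch below is a generic CHO with assumed Gaussian channels and toy images, not the exact observers used in this work.

```python
import numpy as np

def gaussian_channels(size, widths=(2.0, 4.0, 8.0)):
    """Radially symmetric Gaussian channel templates for size x size images."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    return np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths])

def cho_detectability(present, absent, channels):
    """Channelized Hotelling detectability d' from flattened image samples."""
    vp = present @ channels.T                       # channel outputs, signal present
    va = absent @ channels.T                        # channel outputs, signal absent
    ds = vp.mean(axis=0) - va.mean(axis=0)          # mean channel signal
    cov = 0.5 * (np.cov(vp.T) + np.cov(va.T))       # pooled channel covariance
    return float(np.sqrt(ds @ np.linalg.solve(cov, ds)))

# Toy demo: a faint blob in white noise; higher noise mimics lower dose.
rng = np.random.default_rng(0)
size, n = 32, 200
channels = gaussian_channels(size)
blob = 0.5 * gaussian_channels(size, widths=(3.0,))[0]
for noise in (1.0, 2.0):
    absent = rng.normal(0.0, noise, (n, size * size))
    present = rng.normal(0.0, noise, (n, size * size)) + blob
    print(f"noise {noise}: d' = {cho_detectability(present, absent, channels):.2f}")
```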

Relevance: 100.00%

Abstract:

Introduction: According to guidelines, patients with coronary artery disease (CAD) should undergo revascularization if myocardial ischemia is present. While coronary angiography (CXA) allows the morphological assessment of CAD, fractional flow reserve (FFR) has proved to be a complementary invasive test to assess the functional significance of CAD, i.e. to detect ischemia. Perfusion cardiac magnetic resonance (CMR) has turned out to be a robust non-invasive technique to assess myocardial ischemia. The objective is to compare the cost-effectiveness ratio, defined as the cost per patient correctly diagnosed, of two algorithms used to diagnose hemodynamically significant CAD in relation to the pretest likelihood of CAD: (1) CMR to assess ischemia before referring positive patients to CXA (CMR + CXA); (2) CXA in all patients, combined with an FFR test in patients with angiographically positive stenoses (CXA + FFR). Methods: The costs, evaluated from the health care system perspective in the Swiss, German, United Kingdom (UK) and United States (US) contexts, included the public prices of the different tests considered as outpatient procedures, the costs of complications, and the costs induced by diagnostic errors (false negatives). The effectiveness criterion was the ability to accurately identify a patient with significant CAD. Test performances used in the model were based on the clinical literature. Using a mathematical model, we compared the cost-effectiveness ratio of both algorithms for hypothetical patient cohorts with different pretest likelihoods of CAD. Results: The cost-effectiveness ratio decreased hyperbolically with increasing pretest likelihood of CAD for both strategies. CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 67% in Germany, 83% in the UK and 84% in the US, with costs of CHF 5'794, EUR 1'472, £2'685 and $2'126 per patient correctly diagnosed, respectively. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. Implications for the health care system, professionals, patients and society: These results facilitate decision making regarding the clinical use of new generations of imaging procedures to detect ischemia. They show to what extent the cost-effectiveness of diagnosing CAD depends on the prevalence of the disease.
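
The abstract defines the cost-effectiveness ratio as cost per patient correctly diagnosed but does not give the model's equations. The sketch below shows a deliberately simplified version of such a ratio for a two-step strategy; sensitivities, specificities and prices are placeholders, not the study's inputs.

```python
def cost_effectiveness(prev, sens, spec, c_first, c_second):
    """Cost per correctly diagnosed patient for a two-step strategy:
    everyone gets the first test; only first-test positives get the
    (assumed confirmatory) second test. Deliberately simplified."""
    p_pos = prev * sens + (1 - prev) * (1 - spec)    # first-test positive rate
    expected_cost = c_first + p_pos * c_second       # expected per-patient cost
    p_correct = prev * sens + (1 - prev) * spec      # correct classifications
    return expected_cost / p_correct

# Placeholder accuracies and prices, not the study's inputs.
for prev in (0.2, 0.4, 0.6, 0.8):
    cmr_cxa = cost_effectiveness(prev, 0.89, 0.87, c_first=600, c_second=1500)
    cxa_ffr = cost_effectiveness(prev, 0.94, 0.92, c_first=1500, c_second=700)
    print(f"pretest {prev:.0%}: CMR+CXA {cmr_cxa:7.0f}  CXA+FFR {cxa_ffr:7.0f}")
```

As in the study, the ratio falls as pretest likelihood rises, because a larger share of the cohort can be correctly identified for roughly the same expenditure.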

Relevance: 90.00%

Abstract:

Folpet is one of the most widely used fungicides in agriculture, typically applied to vegetable, fruit and ornamental crops. Once absorbed into the human body, it is very reactive, especially under acidic conditions. According to various in vitro and in vivo experiments in animals, Folpet is first cleaved at its N-S bond on contact with aqueous solutions and thiol groups. This non-enzymatic process forms phthalimide (PI), which may be used as a biomarker of exposure, along with the short-lived thiophosgene. We built a human toxicokinetic model to account for the biotransformation of Folpet into PI and its subsequent excretion, while accounting for other non-monitored metabolites. The mathematical parameters of the model were determined from best fits to the time courses of PI in the blood and urine of five volunteers administered Folpet orally (1 mg/kg) and dermally (10 mg/kg). In both cases, the mean elimination half-life of PI from the body (through faeces, urine or metabolism) was found to be 31.6 h. The average final fractions of the administered dose recovered in urine as PI after 96 h were 0.025% and 0.002% for oral and dermal administration, respectively. According to the model, when Folpet is administered orally, PI rapidly hydrolyzes to phthalamic and phthalic acids, such that only 0.04% of the PI found in the gastrointestinal tract is absorbed into the blood stream. Likewise, after dermal application, the model predicts that only 7.4% of the applied Folpet dose crosses the epidermis. In the model, PI, the initial metabolite of Folpet, is formed in the dermis and further metabolized before reaching the systemic circulation, such that only 0.125% of the PI formed at the site of entry reaches systemic blood. Our mathematical model agrees with the measured time courses in both blood (R² = 0.57 for dermal and R² = 0.66 for oral) and urine (R² = 0.98 for dermal and R² = 0.99 for oral).
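
The reported mean elimination half-life of 31.6 h corresponds, under first-order kinetics, to a rate constant k = ln(2)/31.6 ≈ 0.022 h⁻¹. The sketch below shows that conversion and the resulting mono-exponential decay; it is a reduced view of the full multi-compartment model, for orientation only.

```python
import math

HALF_LIFE_H = 31.6                   # reported mean PI elimination half-life
k_elim = math.log(2) / HALF_LIFE_H   # first-order rate constant, ~0.022 1/h

def fraction_remaining(t_hours):
    """Fraction of PI remaining t hours after formation, assuming simple
    first-order elimination (a reduced view of the full model)."""
    return math.exp(-k_elim * t_hours)

for t in (24, 48, 96):
    print(f"{t:3d} h: {fraction_remaining(t):.1%} remaining")
```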

Relevance: 90.00%

Abstract:

Astrocytes have recently become a major center of interest in neurochemistry following discoveries of their major role in brain energy metabolism. An interesting way to probe this glial contribution is in vivo ¹³C NMR spectroscopy coupled with the infusion of a labeled glial-specific substrate, such as acetate. In this study, we infused alpha-chloralose-anesthetized rats with [2-¹³C]acetate and followed the dynamics of the fractional enrichment (FE) in positions C4 and C3 of glutamate and glutamine with high sensitivity, using ¹H-[¹³C] magnetic resonance spectroscopy (MRS) at 14.1 T. Applying a two-compartment mathematical model to the measured time courses yielded a glial tricarboxylic acid (TCA) cycle rate (Vg) of 0.27 ± 0.02 μmol/g/min and a glutamatergic neurotransmission rate (VNT) of 0.15 ± 0.01 μmol/g/min. Glial oxidative ATP metabolism thus accounts for 38% of the total oxidative metabolism measured by NMR. The pyruvate carboxylase rate (VPC) was 0.09 ± 0.01 μmol/g/min, corresponding to 37% of the glial glutamine synthesis rate. The glial and neuronal transmitochondrial fluxes (Vx(g) and Vx(n)) were of the same order of magnitude as the respective TCA cycle fluxes. In addition, we estimated a glial glutamate pool size of 0.6 ± 0.1 μmol/g. The effect of spectral data quality on the flux estimates was analyzed by Monte Carlo simulations. In this ¹³C-acetate labeling study, we propose a refined two-compartment analysis of brain energy metabolism based on ¹³C turnover curves of acetate, glutamate and glutamine measured with state-of-the-art in vivo dynamic MRS at high magnetic field in rats, enabling a deeper understanding of the specific role of glial cells in brain oxidative metabolism. In addition, the robustness of the metabolic flux determination relative to MRS data quality was carefully studied.
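
The abstract reports that flux reliability was assessed with Monte Carlo simulations on synthetic data. The sketch below illustrates that general procedure (perturb a synthetic turnover curve with noise, refit, summarize the spread) using a placeholder mono-exponential model rather than the actual two-compartment system; the noise level and pool size are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def turnover(t, v, pool):
    """Placeholder enrichment curve FE(t) = 1 - exp(-v * t / pool).
    The real analysis fits a two-compartment system; this stands in for it."""
    return 1.0 - np.exp(-v * t / pool)

rng = np.random.default_rng(1)
t = np.linspace(0, 120, 60)              # minutes
true_v, true_pool = 0.27, 10.0           # flux (umol/g/min), pool (umol/g), assumed
clean = turnover(t, true_v, true_pool)

# Monte Carlo: add spectral noise, refit, and summarize the flux spread.
fits = []
for _ in range(500):
    noisy = clean + rng.normal(0, 0.02, t.size)   # assumed noise level
    (v, pool), _ = curve_fit(turnover, t, noisy, p0=(0.1, 5.0))
    fits.append(v)
print(f"V = {np.mean(fits):.3f} ± {np.std(fits):.3f} umol/g/min")
```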

Relevance: 90.00%

Abstract:

Despite their limited proliferation capacity, regulatory T cells (Tregs) constitute a population maintained over the entire lifetime of a human organism. The means by which Tregs sustain a stable pool in vivo are controversial. Using a mathematical model, we address this issue by evaluating several biological scenarios of the origins and the proliferation capacity of two subsets of Tregs: precursor CD4+CD25+CD45RO- and mature CD4+CD25+CD45RO+ cells. The lifelong dynamics of Tregs are described by a set of ordinary differential equations driven by a stochastic process representing the major immune reactions involving these cells. The model dynamics are validated using data from human donors of different ages. Analysis of the data led to the identification of two properties of the dynamics: (1) the equilibrium in the CD4+CD25+FoxP3+ Treg population is maintained over both the precursor and mature Treg pools together, and (2) the ratio between precursor and mature Tregs is inverted in the early years of adulthood. Using the model, we then identified three biologically relevant scenarios that reproduce these properties: (1) the unique source of mature Tregs is the antigen-driven differentiation of precursors that acquire the mature profile in the periphery, and the proliferation of Tregs is essential for the development and maintenance of the pool; or there exist other sources of mature Tregs, such as (2) a homeostatic density-dependent regulation or (3) thymus- or effector-derived Tregs, in which cases antigen-induced proliferation is not necessary for the development of a stable pool of Tregs. This is the first time that a mathematical model built to describe the in vivo dynamics of regulatory T cells has been validated using human data. The model provides an invaluable tool for estimating the number of regulatory T cells as a function of time in the blood of patients who have received a solid organ transplant or suffer from an autoimmune disease.
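
The actual equations are not given in the abstract, so the sketch below is only a hypothetical two-pool caricature (precursor P, mature M) of scenario (1): thymic influx, antigen-driven differentiation and proliferation, with the stochastic immune-reaction process omitted. All rate constants are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def treg_ode(t, y, influx=1.0, diff=0.05, prolif=0.02, death_p=0.03, death_m=0.02):
    """Hypothetical precursor/mature Treg dynamics (units arbitrary)."""
    P, M = y
    dP = influx - diff * P - death_p * P        # thymic input, differentiation, death
    dM = diff * P + prolif * M - death_m * M    # maturation plus antigen-driven proliferation
    return [dP, dM]

sol = solve_ivp(treg_ode, (0, 80), [10.0, 1.0], t_eval=np.linspace(0, 80, 81))
P, M = sol.y
# Property (2) above: find when mature cells overtake precursors.
crossing = sol.t[np.argmax(M > P)] if (M > P).any() else None
print(f"precursor/mature ratio inverts around t = {crossing}")
```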

Relevance: 90.00%

Abstract:

Plants maintain stem cells in their meristems as a source of new undifferentiated cells throughout their life. Meristems are small groups of cells that provide the microenvironment that allows stem cells to prosper. Homeostasis of a stem cell domain within a growing meristem is achieved by signalling between stem cells and surrounding cells. Here we simulated the origin and maintenance of a defined stem cell domain at the tip of Arabidopsis shoot meristems, based on the assumption that meristems are self-organizing systems. The model comprises two coupled, feedback-regulated genetic systems that control stem cell behaviour. Using a minimal set of spatial parameters, the mathematical model predicts the generation, shape and size of the stem cell domain and the underlying organizing centre. We use the model to explore the parameter space that allows stem cell maintenance, and to simulate the consequences of mutations, gene misexpression and cell ablations.
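
As a loose illustration of the self-organization idea (not the published model), the toy sketch below couples two feedback-regulated fields on a row of cells: a tip-localized activator A induces a stem cell signal S, and S represses A. All kinetics, parameters and the 1D geometry are invented.

```python
import numpy as np

n, dt, steps = 50, 0.1, 2000
A = np.zeros(n)
S = np.zeros(n)
tip = np.exp(-np.linspace(0, n - 1, n) ** 2 / 30.0)   # production bias at the tip

def diffuse(u, d=0.5):
    """Discrete Laplacian with reflecting (no-flux) boundaries."""
    padded = np.pad(u, 1, mode="edge")
    return d * (padded[:-2] + padded[2:] - 2 * u)

for _ in range(steps):
    A += dt * (tip / (1 + S**2) - 0.1 * A + diffuse(A))   # S represses A
    S += dt * (A**2 / (1 + A**2) - 0.2 * S + diffuse(S))  # A induces S

print("stem cell domain size:", int((S > 0.5 * S.max()).sum()), "cells")
```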

Relevance: 90.00%

Abstract:

We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which the deformation tensor is assumed to admit a multiplicative decomposition into a passive and an active part, the latter carrying the information of the electrical potential propagation and the anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, which govern the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
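
The active strain assumption named above factorizes the deformation gradient as F = F_e F_a. The sketch below shows that decomposition for a transversely isotropic, volume-preserving choice of F_a along the fiber direction; this particular form of F_a and the numbers are assumptions, not necessarily the paper's constitutive choice.

```python
import numpy as np

def active_part(fiber, gamma):
    """F_a = (1+gamma) f⊗f + (1+gamma)^(-1/2) (I - f⊗f), volume-preserving.
    gamma is the activation (fiber shortening) level, assumed form."""
    f = fiber / np.linalg.norm(fiber)
    ff = np.outer(f, f)
    return (1 + gamma) * ff + (1 + gamma) ** -0.5 * (np.eye(3) - ff)

F = np.diag([1.05, 0.98, 0.97])            # hypothetical total deformation
Fa = active_part(np.array([1.0, 0.0, 0.0]), gamma=-0.1)
Fe = F @ np.linalg.inv(Fa)                 # passive (elastic) part: F = Fe @ Fa
print("det F_a =", round(np.linalg.det(Fa), 6))   # ~1: activation is isochoric
```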

Relevance: 80.00%

Abstract:

Bone defects in revision knee arthroplasty are often located in load-bearing regions. The goal of this study was to determine whether a physiological load could be used as an in situ osteogenic signal to the scaffolds filling the bone defects. To answer this question, we proposed a novel translational procedure comprising four steps: (1) determining the mechanical stimulus using the finite element method; (2) designing an animal study to measure bone formation, spatially and temporally, using micro-CT imaging in a scaffold subjected to the estimated mechanical stimulus; (3) identifying bone formation parameters for the loaded and non-loaded cases in a recently developed mathematical model of bone formation in the scaffold; and (4) estimating the stiffness of and bone formation in the bone-scaffold construct. With this procedure, we estimated that after 3 years, mechanical stimulation would increase the bone volume fraction and the stiffness of the scaffold by 1.5- and 2.7-fold, respectively, compared with the non-loaded situation.
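
The bone formation model itself is not reproduced in the abstract; as a stand-in, the sketch below uses a generic saturating ingrowth law with different rate constants for the loaded and non-loaded cases, with rates invented to roughly reproduce the reported 1.5-fold difference at 3 years.

```python
import math

def bone_volume_fraction(t_years, rate, bv_max=0.4):
    """Generic saturating ingrowth law BV/TV(t) = BV_max * (1 - exp(-rate*t)).
    Stand-in for the study's bone formation model; all values assumed."""
    return bv_max * (1 - math.exp(-rate * t_years))

loaded = bone_volume_fraction(3.0, rate=0.60)      # loaded rate, invented
unloaded = bone_volume_fraction(3.0, rate=0.25)    # non-loaded rate, invented
print(f"fold change after 3 years: {loaded / unloaded:.2f}")   # ~1.5
```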

Relevance: 80.00%

Abstract:

We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled the subjects' reliance on egocentric and/or allocentric frames of reference for elaborating a spatial representation of the image locations enabling maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, compared with when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that a change in viewpoint produced by moving images in front of a stationary subject is not equivalent to the subject moving around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments of spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.

Relevance: 80.00%

Abstract:

Although glycogen (Glyc) is the main carbohydrate storage component, the role of Glyc in the brain during prolonged wakefulness is not clear. The aim of this study was to determine the brain Glyc concentration ([Glyc]) and turnover time (τ) in euglycemic conscious and undisturbed rats, compared with rats maintained awake for 5 h. To measure the metabolism of [1-¹³C]-labeled glucose (Glc) into Glyc, 23 rats received a [1-¹³C]-labeled Glc solution as drink (10% weight per volume in tap water) ad libitum as their sole source of exogenous carbon for a "labeling period" of either 5 h (n = 13), 24 h (n = 5) or 48 h (n = 5). Six of the rats labeled for 5 h were continuously maintained awake by acoustic, tactile and olfactory stimuli during the labeling period, which resulted in slightly elevated corticosterone levels. Brain [Glyc], measured biochemically after focused microwave fixation, in the rats maintained awake (3.9 ± 0.2 μmol/g, n = 6) was not significantly different from that of the control group (4.0 ± 0.1 μmol/g, n = 7; t-test, P > 0.5). To account for potential variations in plasma Glc isotopic enrichment (IE), Glyc IE was normalized by N-acetyl-aspartate (NAA) IE. A simple mathematical model was developed to derive a brain Glyc turnover time of 5.3 h (fit error 3.2 h) and an NAA turnover time of 15.6 h (fit error 6.5 h) in the control rats. A faster τ(Glyc) (2.9 h, fit error 1.2 h) was estimated in the rats maintained awake for 5 h. In conclusion, 5 h of prolonged wakefulness mainly activates glycogen metabolism, but has minimal effect on brain [Glyc].
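
The "simple mathematical model" is not written out in the abstract; assuming mono-exponential label incorporation, FE(t) = FE_max * (1 - exp(-t/tau)), the turnover time tau can be fitted from an enrichment time course as sketched below on synthetic values (not the study's data).

```python
import numpy as np
from scipy.optimize import curve_fit

def enrichment(t, tau, fe_max):
    """Assumed mono-exponential label incorporation curve."""
    return fe_max * (1.0 - np.exp(-t / tau))

# Synthetic example (hours); values invented, not the study's data.
t = np.array([0.0, 1, 2, 3, 5, 8, 12, 24])
fe = enrichment(t, tau=5.3, fe_max=0.45) \
    + np.random.default_rng(2).normal(0, 0.01, t.size)
(tau_fit, fe_max_fit), pcov = curve_fit(enrichment, t, fe, p0=(3.0, 0.5))
err = np.sqrt(np.diag(pcov))
print(f"tau = {tau_fit:.1f} ± {err[0]:.1f} h")
```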

Relevance: 80.00%

Abstract:

The capacity to learn to associate sensory perceptions with appropriate motor actions underlies the success of many animal species, from insects to humans. The evolutionary significance of learning has long been a subject of interest for evolutionary biologists, who emphasize the benefit yielded by learning under changing environmental conditions, where it is required to flexibly switch from one behavior to another. However, two unsolved questions are particularly important for improving our knowledge of the evolutionary advantages provided by learning, and are addressed in the present work. First, because it is possible to learn the wrong behavior when a task is too complex, the learning rules and underlying psychological characteristics that generate truly adaptive behavior must be identified with greater precision, and must be linked to the specific ecological problems faced by each species. A framework for predicting behavior from the definition of a learning rule is developed here. Learning rules capture cognitive features such as the tendency to explore, or the ability to infer rewards associated with unchosen actions. It is shown that these features interact in a non-intuitive way to generate adaptive behavior in social interactions where individuals affect each other's fitness. Such behavioral predictions are used in an evolutionary model to demonstrate that, surprisingly, simple trial-and-error learning is not always outcompeted by more computationally demanding inference-based learning when population members interact in pairwise social interactions. A second question in the evolution of learning is its link with, and relative advantage compared to, other simpler forms of phenotypic plasticity. After providing a conceptual clarification of the distinction between genetically determined and learned responses to environmental stimuli, a new factor in the evolution of learning is proposed: environmental complexity. A simple mathematical model shows that a measure of environmental complexity, the number of possible stimuli in one's environment, is critical for the evolution of learning. In conclusion, this work opens roads for modeling interactions between evolving species and their environment in order to predict how natural selection shapes animals' cognitive abilities.
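
To make the contrast between the two rule families concrete, the sketch below pits a plain trial-and-error (epsilon-greedy) value learner against a variant that also infers the reward of the unchosen action, on a two-action task standing in for a social interaction; payoffs, learning rate and exploration level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
payoffs = np.array([0.3, 0.7])     # expected reward of each action (invented)

def play(infer_unchosen, trials=2000, lr=0.1, explore=0.1):
    """Epsilon-greedy value learner. With infer_unchosen=True the rule also
    updates the action it did not take (a crude stand-in for inference-based
    learning); with False it is plain trial-and-error."""
    q = np.zeros(2)
    total = 0.0
    for _ in range(trials):
        a = rng.integers(2) if rng.random() < explore else int(np.argmax(q))
        r = float(rng.random() < payoffs[a])          # Bernoulli reward
        q[a] += lr * (r - q[a])
        if infer_unchosen:                            # infer the foregone payoff
            q[1 - a] += lr * (payoffs[1 - a] - q[1 - a])
        total += r
    return total / trials

print("trial-and-error :", round(play(False), 3))
print("inference-based :", round(play(True), 3))
```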

Relevance: 80.00%

Abstract:

This study aimed to quantitatively describe and compare whole-body fat oxidation kinetics in cycling and running using a sinusoidal mathematical model (SIN). Thirteen moderately trained individuals (7 men and 6 women) performed two graded exercise tests, with 3-min stages and 1 km·h⁻¹ (or 20 W) increments, on a treadmill and on a cycle ergometer. Fat oxidation rates were determined using indirect calorimetry and plotted as a function of exercise intensity. The SIN model, which includes three independent variables (dilatation, symmetry and translation) that account for the main quantitative characteristics of the kinetics, provided a mathematical description of fat oxidation kinetics and allowed determination of the intensity (Fatmax) that elicits maximal fat oxidation (MFO). While the mean fat oxidation kinetics in cycling formed a symmetric parabolic curve, the mean kinetics during running were characterized by a greater dilatation (i.e., widening of the curve, P < 0.001) and a rightward asymmetry (i.e., a shift of the peak of the curve to higher intensities, P = 0.01). Fatmax was significantly higher in running than in cycling (P < 0.001), whereas MFO was not significantly different between exercise modes (P = 0.36). This study showed that whole-body fat oxidation kinetics during running are characterized by a greater dilatation and a rightward asymmetry compared with cycling. The greater dilatation may be mainly related to the larger muscle mass involved in running, while the rightward asymmetry may be induced by the specific type of muscle contraction.

Relevance: 80.00%

Abstract:

Standard proteomics methods allow the relative quantitation of the levels of thousands of proteins in two or more samples. While such methods are invaluable for defining the variations in protein concentrations that follow the perturbation of a biological system, they do not offer information on the mechanisms underlying such changes. Expanding on previous work [1], we developed a pulse-chase (pc) variant of SILAC (stable isotope labeling by amino acids in cell culture). pcSILAC can quantify, in one experiment and for two conditions, the relative levels of proteins newly synthesized in a given time window as well as the relative levels of remaining preexisting proteins. We validated the method by studying the drug-mediated inhibition of the Hsp90 molecular chaperone, which is known to lead to increased synthesis of stress response proteins as well as increased decay of Hsp90 "clients". We showed that pcSILAC can give information on changes in global cellular proteostasis induced by treatment with the inhibitor, which are normally not captured by standard relative quantitation techniques. Furthermore, we have developed a mathematical model and computational framework that uses pcSILAC data to determine degradation rate constants (kd) and synthesis rates (Vs) for proteins in both control and drug-treated cells. The results show that Hsp90 inhibition induced a generalized slowdown of protein synthesis and an increase in protein decay. Treatment with the inhibitor also resulted in widespread protein-specific changes in relative synthesis rates, together with variations in protein decay rates. The latter were more restricted to individual proteins or protein families than the variations in synthesis. Our results establish pcSILAC as a viable workflow for the mechanistic dissection of proteome changes that follow perturbations. Data are available via ProteomeXchange with identifier PXD000538.
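
The abstract names the estimated quantities (kd, Vs) without giving the model; under the standard constant-synthesis, first-order-degradation assumption, old protein decays as P0*exp(-kd*t) while new protein accumulates as (Vs/kd)*(1 - exp(-kd*t)). The sketch below fits both channels of a synthetic pulse-chase time course; all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def old_protein(t, p0, kd):
    """Preexisting (light-labeled) protein: first-order decay."""
    return p0 * np.exp(-kd * t)

def new_protein(t, vs, kd):
    """Newly synthesized (heavy-labeled) protein: accumulation to Vs/kd."""
    return (vs / kd) * (1.0 - np.exp(-kd * t))

rng = np.random.default_rng(4)
t = np.linspace(0, 24, 9)                        # hours after medium switch
old = old_protein(t, 100.0, 0.08) * rng.normal(1, 0.03, t.size)
new = new_protein(t, 6.0, 0.08) * rng.normal(1, 0.03, t.size)

(p0, kd_old), _ = curve_fit(old_protein, t, old, p0=(80, 0.1))
(vs, kd_new), _ = curve_fit(new_protein, t, new, p0=(5, 0.1))
print(f"kd (decay channel) = {kd_old:.3f} 1/h, Vs = {vs:.2f} au/h")
```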