1000 results for Compartmental Modelling
Abstract:
The basic reproduction number is a key parameter in the mathematical modelling of transmissible diseases. From the stability analysis of the disease-free equilibrium, applying the Routh-Hurwitz criteria yields a threshold, which is called the basic reproduction number. However, applying spectral radius theory to the next generation matrix provides a different expression for the basic reproduction number, namely the square root of the previously found formula. If the spectral radius of the next generation matrix is interpreted as the geometric mean of the partial reproduction numbers, while the product of these partial numbers is the basic reproduction number, then both methods provide the same expression. To demonstrate this, dengue transmission models with and without transovarial transmission are considered as a case study. Tuberculosis transmission and sexually transmitted infection models are taken as further examples.
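The relationship between the two expressions can be checked numerically. Below is a minimal sketch for a generic two-type (host-vector) model; the partial reproduction numbers R_hv and R_vh are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative partial reproduction numbers (not from the paper):
# R_hv = host-to-vector, R_vh = vector-to-host.
R_hv, R_vh = 4.0, 1.5

# Next generation matrix for a two-type model with no same-type
# transmission: the diagonal entries are zero.
K = np.array([[0.0, R_hv],
              [R_vh, 0.0]])

# The spectral radius of K equals the geometric mean of the
# partial reproduction numbers, sqrt(R_hv * R_vh)...
rho = max(abs(np.linalg.eigvals(K)))

# ...while the Routh-Hurwitz threshold corresponds to their product,
# i.e. the square of the spectral radius.
print(rho, rho**2)
```

With these values the spectral radius is sqrt(6) while the Routh-Hurwitz threshold is 6, so the two formulations cross unity together.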
Abstract:
The effect of retrofitting an existing pond on removal efficiency and hydraulic performance was modelled using the commercial software Mike21 and compartmental modelling. The Mike21 model had previously been calibrated on the studied pond. Installation of baffles, the addition of culverts under a causeway and removal of an existing island were all studied as possible improvement measures in the pond. The subsequent effect on hydraulic performance and removal of suspended solids was then evaluated. Copper, cadmium, BOD, nitrogen and phosphorus removal were also investigated for the specific improvement measure showing the best results. The outcomes of this study reveal that all measures increase the removal efficiency of suspended solids. The hydraulic efficiency is improved in all cases, except for the case where the island is removed. Compartmental modelling was also used to evaluate hydraulic performance and facilitated a better understanding of the way each of the different measures affected the flow pattern and performance. It was concluded that the installation of baffles is the best of the studied measures, resulting in a reduction of the annual load on the receiving lake by approximately 8,000 kg of suspended solids (25% of the annual load), 2 kg of copper (10% of the annual load) and 600 kg of BOD (10% of the annual load).
Abstract:
Lipotoxicity is a condition in which fatty acids (FAs) are not efficiently stored in adipose tissue and overflow to non-adipose tissue, causing organ damage. A defect in adipose tissue FA storage capability can be the primary culprit in the insulin resistance condition that characterizes many of the severe metabolic diseases affecting people today. Obesity, in this regard, constitutes the gateway and risk factor for the major killers of modern society, such as cardiovascular disease and cancer. A deep understanding of the pathogenetic mechanisms that underlie obesity and the insulin resistance syndrome is a challenge for modern medicine. In the last twenty years of scientific research, FA metabolism and its dysregulation have been the object of numerous studies. Development of more targeted and quantitative methodologies is required, on one hand to investigate and dissect organ metabolism, and on the other hand to test the efficacy and mechanisms of action of novel drugs. The combination of functional and anatomical imaging answers this need, since it provides more understanding and more information than we have ever had. The first purpose of this study was to investigate abnormalities of substrate organ metabolism, with special reference to FA metabolism in obese drug-naïve subjects at an early stage of disease. Secondly, trimetazidine (TMZ), a metabolic drug thought to inhibit FA oxidation (FAO), was evaluated for the first time in obese subjects to test whole-body and organ metabolism improvement, based on the hypothesis that FAO is increased at an early stage of the disease. A third objective was to investigate the relationship between ectopic fat accumulation surrounding the heart and coronaries and impaired myocardial perfusion in patients at risk of coronary artery disease (CAD).
In the current study a new methodology has been developed with PET imaging with 11C-palmitate and compartmental modelling for the non-invasive in vivo study of liver FA metabolism, and a similar approach has been used to study FA metabolism in skeletal muscle, adipose tissue and the heart. The results of the different substudies point in the same direction. Obesity, at an early stage, is associated with an impairment in the esterification of FAs in adipose tissue and skeletal muscle, which is accompanied by upregulation of FAO in skeletal muscle, liver and heart. The inability to store fat may initiate a cascade of events leading to FA oversupply to lean tissue, overload of the oxidative pathway, and accumulation of toxic lipid species and triglycerides, and this was paralleled by a proportional growth in insulin resistance. In subjects with CAD, the accumulation of ectopic fat inside the pericardium is associated with impaired myocardial perfusion, presumably via a paracrine/vasocrine effect. At the beginning of the disease, TMZ is not detrimental to health; on the contrary, at the single-organ level (heart, skeletal muscle and liver) it seems beneficial, while no relevant effects were found on adipose tissue function. Taken together, these findings suggest that adipose tissue storage capability should be preserved, if it is not possible to prevent excessive fat intake in the first place.
Abstract:
Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians with the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans. This radiotracer provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed as a linear convolution between the arterial time-course and the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how this relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined, and the improvement of this model relative to the compartmental model is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics generated in comparing the models is also studied. We explore these models further with simulated studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
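The convolution relationship between the arterial input and the tissue residue can be sketched for the simplest (one-compartment) residue, R(t) = K1·exp(-k2·t); the input function and rate constants below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

dt = 0.1                          # time step, min
t = np.arange(0.0, 60.0, dt)
Ca = t**2 * np.exp(-t / 2.0)      # hypothetical arterial time-course
K1, k2 = 0.1, 0.05                # illustrative rate constants, 1/min
R = K1 * np.exp(-k2 * t)          # one-compartment residue (survival) function

# Tissue time-course = convolution of the arterial input with the residue.
Ct = np.convolve(Ca, R)[:len(t)] * dt

# Functionals of the residue: delivery R(0) = K1, and the volume of
# distribution = integral of the residue (K1/k2 for this form).
VD = R.sum() * dt
```

Estimating the residue nonparametrically amounts to recovering R from measured Ct and Ca without assuming this exponential form.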
Abstract:
Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgements.
There is therefore a major requirement for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools in order to predict exposure levels. In a first step, a survey was made among professionals to define their expectations about modelling tools (what types of results, models and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were considered key parameters. Experimental and modelling studies were also performed in some specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to CO in different situations, and the results were compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate. A classical two-zone model was then used to assess aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion for tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points using direct-reading instruments. It was mainly found that, from a statistical point of view, the compartmental theory makes sense, but the attribution to a given compartment could not be done by simple geometric consideration.
In a further step, the experimental data were completed by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained have been used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of the compartment, thus largely improving the reliability of the predictions. All these investigations helped improve our understanding of modelling tools and identify their limitations. The integration of more accessible determinants, in accordance with experts' needs, may indeed enhance model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but might also improve the conditions in which expert judgements take place, and therefore workers' health protection.
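The classical two-zone (near-field/far-field) model used above can be written as a pair of coupled mass balances and integrated directly; the emission rate, airflows and volumes below are illustrative assumptions, not values from the thesis:

```python
# Illustrative two-zone model parameters (not from the thesis):
G = 100.0               # contaminant emission rate, mg/min
Q = 2.0                 # room supply/exhaust flow, m^3/min
beta = 5.0              # near-field <-> far-field exchange flow, m^3/min
V_nf, V_ff = 1.0, 50.0  # near-field and far-field volumes, m^3

# Forward-Euler integration of the two coupled mass balances.
dt = 0.01
C_nf = C_ff = 0.0       # zone concentrations, mg/m^3
for _ in range(int(600 / dt)):   # 600 min, long enough to equilibrate
    dC_nf = (G + beta * (C_ff - C_nf)) / V_nf
    dC_ff = (beta * (C_nf - C_ff) - Q * C_ff) / V_ff
    C_nf += dC_nf * dt
    C_ff += dC_ff * dt

# Analytic steady state: C_ff = G/Q, C_nf = G/Q + G/beta.
print(round(C_nf, 1), round(C_ff, 1))   # -> 70.0 50.0
```

The near-field excess G/beta is what makes the worker's breathing-zone exposure exceed the well-mixed room estimate, which is why the choice of compartment and its determinants matters so much for prediction.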
Abstract:
Crystallization is a purification method used to obtain crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry due to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. An understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied.
Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
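The ideal-mixing MSMPR baseline referred to above has a closed-form steady-state crystal size distribution, n(L) = n0·exp(-L/(G·tau)); a quick numeric check with illustrative kinetics (the growth rate, residence time and nuclei density below are assumptions, not values from the thesis):

```python
import numpy as np

# Illustrative MSMPR kinetics (not from the thesis):
G = 1e-8       # crystal growth rate, m/s
tau = 3600.0   # residence time, s
n0 = 1e12      # nuclei population density, #/m^4

L = np.linspace(0.0, 5e-4, 200001)   # crystal size grid, m
n = n0 * np.exp(-L / (G * tau))      # steady-state MSMPR size distribution

# The mass-density distribution n(L) * L^3 peaks at the dominant
# crystal size L_D = 3 * G * tau for an ideal MSMPR crystallizer.
L_dom = L[np.argmax(n * L**3)]
print(L_dom)   # ~ 1.08e-4 m for these values
```

Deviations of measured size distributions from this exponential form are one diagnostic for the non-ideal hydrodynamics that the CFD and multiblock models are meant to capture.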
Abstract:
An optimal control strategy for the highly active antiretroviral therapy associated with acquired immunodeficiency syndrome should be designed on the basis of a comprehensive analysis of drug chemotherapy behaviour in the host tissues, from major viral replication sites to viral sanctuary compartments. Such an approach is critical in order to efficiently explore synergistic, competitive and prohibitive relationships among drugs and, hence, to minimize therapy costs and side effects. In this paper, a novel mathematical model for HIV-1 drug chemotherapy dynamics in distinct host anatomic compartments is proposed and theoretically evaluated on fifteen conventional antiretroviral drugs. Rather than an interdependence between a drug's type and its concentration profile in a host tissue, simulated results suggest that such a profile is strongly correlated with the host tissue under consideration. Furthermore, the drug accumulation dynamics are drastically affected by low patient compliance with pharmacotherapy, even when only a single dose is missed.
Abstract:
The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they do not have a direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for segmentation and classification of breast lesions.
The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for segmentation and classification of breast lesions.
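A classic bi-compartmental kinetic model of this kind is often written in the standard Tofts form, where the tissue concentration is the convolution of the plasma input with Ktrans·exp(-kep·t). A sketch with an assumed plasma input function and illustrative parameter values (none of these are from the thesis):

```python
import numpy as np

dt = 0.05                          # min
t = np.arange(0.0, 10.0, dt)
Cp = 5.0 * t * np.exp(-t / 1.5)    # hypothetical plasma input, a.u.
Ktrans, kep = 0.25, 0.60           # illustrative kinetic parameters, 1/min

# Tofts-type model: Ct(t) = Ktrans * integral of Cp(s) exp(-kep (t-s)) ds
Ct = Ktrans * np.convolve(Cp, np.exp(-kep * t))[:len(t)] * dt

# Extravascular-extracellular volume fraction from the parameter ratio.
ve = Ktrans / kep
```

The parametrization question mentioned above is whether to fit (Ktrans, kep) or (Ktrans, ve); the two are algebraically equivalent but behave differently under noisy non-linear regression.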
Abstract:
Background Pelvic inflammatory disease (PID) results from the ascending spread of microorganisms from the vagina and endocervix to the upper genital tract. PID can lead to infertility, ectopic pregnancy and chronic pelvic pain. The timing of development of PID after the sexually transmitted bacterial infection Chlamydia trachomatis (chlamydia) might affect the impact of screening interventions, but is currently unknown. This study investigates three hypothetical processes for the timing of progression: at the start, at the end, or throughout the duration of chlamydia infection. Methods We develop a compartmental model that describes the trial structure of a published randomised controlled trial (RCT) and allows each of the three processes to be examined within the same model structure. The RCT estimated the effect of a single chlamydia screening test on the cumulative incidence of PID up to one year later. The fraction of chlamydia-infected women who progress to PID is obtained for each hypothetical process by maximum likelihood, using the results of the RCT. Results The predicted cumulative incidence of PID cases from all causes after one year depends on the fraction of chlamydia-infected women who progress to PID and on the type of progression. Progression at a constant rate from a chlamydia infection to PID, or at the end of the infection, was compatible with the findings of the RCT. The corresponding estimated fraction of chlamydia-infected women who develop PID is 10% (95% confidence interval 7-13%) in both processes. Conclusions The findings of this study suggest that clinical PID can occur throughout the course of a chlamydia infection, which leaves a window of opportunity for screening to prevent PID.
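Under the constant-rate ("throughout infection") hypothesis, progression and clearance act as competing risks, so the fraction of infected women who ever develop PID is theta/(theta + r). A small sketch, using an assumed clearance rate rather than the study's fitted values:

```python
# Competing-risks view of constant-rate progression (illustrative values):
r = 1.0 / 1.2   # chlamydia clearance rate, 1/years (assumed mean duration 1.2 y)
target = 0.10   # fraction progressing to PID (point estimate reported above)

# Solve theta / (theta + r) = target for the progression rate theta.
theta = target * r / (1.0 - target)
print(theta)    # implied progression rate, 1/years
```

Because progression can then happen at any time during infection, detecting and treating the infection early removes the remaining progression hazard, which is the window of opportunity the conclusion refers to.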
Abstract:
BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (facq, 1/fdur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VEˆacq.dur, (1-POR)×100) best estimates the true VE (VEacq.dur, (1-facq×fdur)×100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VEˆacq.dur first approximates VEacq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses it takes longer after the last dose until VEˆacq.dur approximates VEacq.dur. 
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
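The estimator discussed above, VE = (1 - POR) × 100, can be computed directly from cross-sectional carriage counts; the 2×2 counts below are hypothetical, for illustration only:

```python
# Hypothetical cross-sectional carriage counts at one sampling time:
vacc_carriers, vacc_noncarriers = 80, 920
ctrl_carriers, ctrl_noncarriers = 150, 850

# Prevalence odds ratio and the combined vaccine efficacy estimate
# against acquisition and duration for a leaky vaccine.
por = (vacc_carriers / vacc_noncarriers) / (ctrl_carriers / ctrl_noncarriers)
ve_acq_dur = (1.0 - por) * 100.0
print(round(ve_acq_dur, 1))   # -> 50.7 (percent)
```

The modelling result above says this estimate only approximates the true VEacq.dur = (1 - facq × fdur) × 100 once enough time has elapsed since the last dose, which is why the sampling time matters.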
Abstract:
Diabetes encompasses a series of metabolic diseases characterized by abnormally high blood glucose concentrations. In the case of type 1 diabetes (T1D), this situation is caused by a total absence of endogenous insulin secretion, which impedes the use of glucose by most tissues. In these circumstances, exogenous insulin supplies are necessary to maintain the patient's life, although caution is always needed to avoid acute decays in glycaemia below safe levels. In addition to insulin administration, meal intakes and physical activity are fundamental factors influencing glucose homoeostasis. Consequently, a successful management of T1D should incorporate these two physiological phenomena, based on an appropriate identification and modelling of these events and their corresponding effects on the glucose-insulin balance. In particular, artificial pancreas systems, designed to perform automated control of the patient's glycaemia levels, may benefit from the integration of this type of information. The first part of this PhD thesis covers the characterization of the acute effect of physical activity on glucose profiles. With this aim, a systematic review of the literature and meta-analyses are conducted to determine responses to various exercise modalities in patients with T1D, assessed via rate-of-change magnitudes that quantify temporal variations in glycaemia.
On the other hand, a reliable identification of physical activity periods is an essential prerequisite to feed artificial pancreas systems with information concerning exercise in ambulatory, free-living conditions. For this reason, the second part of this thesis focuses on the proposal and evaluation of an automatic system devised to recognize physical activity, classifying its intensity level (light, moderate or vigorous) and, for vigorous periods, also identifying the exercise modality (aerobic, mixed or resistance), since both aspects have a distinctive influence on the predominant metabolic pathway involved in fuelling exercise and, therefore, on the glycaemic responses in T1D. Various combinations of machine learning and pattern recognition techniques are applied to the fusion of multi-modal signal sources, namely accelerometry and heart rate measurements, which describe both the mechanical aspects of movement and the physiological response of the cardiovascular system to exercise. An additional temporal filtering module is incorporated after recognition in order to exploit the considerable temporal coherence (i.e. redundancy) present in the data, which stems from the fact that, in practice, physical activity trends are often maintained stable over time, instead of fluctuating rapidly and repeatedly. The third block of this PhD thesis addresses meal intakes in the context of T1D. In particular, a number of compartmental models are proposed and compared in terms of their ability to describe mathematically the remote effect of exogenous plasma insulin concentrations on the disposal rates of meal-attributable glucose, an aspect which had not yet been incorporated into the prevailing T1D patient models in the literature. Data were acquired in an experiment conducted at the Institute of Metabolic Science (University of Cambridge, UK) on 16 young patients.
A variable-target glucose clamp replicated their individual glucose profiles, observed during a preliminary visit after ingesting either a high or a low glycaemic-load evening meal. The six mechanistic models under evaluation comprised: a) two-compartment submodels for glucose tracer masses, b) a single-compartment submodel for insulin's remote effect, c) two types of activation for this remote effect (either linear or with a 'cut-off' point), and d) diverse forms of initial conditions.
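The model structure described in b)-d) can be illustrated with a minimal sketch: two glucose tracer compartments coupled to a single remote insulin-effect compartment, with either a linear or a 'cut-off' activation. All parameter values, the constant insulin input and the exact topology are assumptions for illustration, not the thesis's fitted models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not fitted estimates)
P = dict(k12=0.05, k21=0.03, p2=0.02, p3=1e-5, I_b=10.0)

def model(t, y, insulin, p, cutoff=0.0):
    """Two-compartment glucose tracer masses (q1, q2) plus a remote
    insulin-effect compartment x.  cutoff = 0 gives the linear
    activation; a positive value gives the 'cut-off' variant."""
    q1, q2, x = y
    dq1 = -(p['k12'] + x) * q1 + p['k21'] * q2        # accessible pool
    dq2 = p['k12'] * q1 - p['k21'] * q2               # remote pool
    excess = max(insulin(t) - p['I_b'] - cutoff, 0.0) # activation signal
    dx = -p['p2'] * x + p['p3'] * excess              # remote insulin effect
    return [dq1, dq2, dx]

insulin = lambda t: 40.0  # constant plasma insulin (mU/L), an assumption
sol = solve_ivp(model, (0, 300), [100.0, 0.0, 0.0],
                args=(insulin, P), dense_output=True)
```

With the 'cut-off' variant, passing e.g. `cutoff=20.0` suppresses the remote effect until plasma insulin exceeds the basal level by that margin.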
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel®
Resumo:
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel® spreadsheet was implemented with the use of Solver® and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated by the BA method were similar to those of the NONMEM estimation.
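As a rough illustration of the back-analysis idea in its simplest case (one-compartment model, IV bolus), non-compartmental variables such as the AUC and the terminal slope λz map directly onto model parameters. The helper below is a hypothetical sketch of that mapping, not the paper's Excel/Solver implementation:

```python
def one_compartment_from_nca(dose, auc, lambda_z):
    """Convert non-compartmental variables (AUC, terminal slope) to
    one-compartment IV-bolus parameters.  For this model the terminal
    slope equals the elimination rate constant k."""
    cl = dose / auc   # clearance: CL = Dose / AUC
    k = lambda_z      # elimination rate constant
    v = cl / k        # volume of distribution: V = CL / k
    return {'CL': cl, 'k': k, 'V': v, 'C0': dose / v}

params = one_compartment_from_nca(dose=100.0, auc=50.0, lambda_z=0.1)
# CL = 2.0, V = 20.0, C0 = 5.0 (consistent units assumed)
```

The two-compartment case handled by the paper requires solving for the micro rate constants numerically, which is where a tool such as Solver® comes in.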
Resumo:
Shot peening is a cold-working mechanical process in which a shot stream is propelled against a component surface. Its purpose is to introduce compressive residual stresses on component surfaces to increase fatigue resistance. The process is widely applied to springs because of their cyclic load requirements. This paper presents a numerical model of the shot peening process using the finite element method. The results are compared with experimental measurements of the residual stresses, obtained by the X-ray diffraction technique, in leaf springs submitted to this process. Furthermore, the results are compared with empirical and numerical correlations developed by other authors.
Resumo:
This work presents a thermoeconomic optimization methodology for the analysis and design of energy systems. The methodology combines exergy concepts with economic considerations to provide a tool for equipment selection, choice of operating mode and the optimization of thermal plant design. It also reviews the concepts of exergy in general and of thermoeconomics, which combines the principles of the thermal sciences (thermodynamics, heat transfer and fluid mechanics) with engineering economics in order to rationalize investment decisions and the development and operation of energy systems. The paper then develops the thermoeconomic methodology through a simple mathematical model involving thermodynamic parameters and cost evaluation, with the exergetic production cost defined as the objective function. The optimization problem is evaluated for two energy systems: first a vapour-compression refrigeration system and then a cogeneration system using a backpressure steam turbine. (C) 2010 Elsevier Ltd. All rights reserved.
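The idea of taking the exergetic production cost as the objective function can be sketched with a toy trade-off: raising the exergetic efficiency reduces fuel-exergy cost but increases capital cost. All numbers and the capital-cost law below are illustrative assumptions, not taken from the paper:

```python
from scipy.optimize import minimize_scalar

E_P = 1000.0  # product exergy rate (kW) - assumed
c_F = 0.02    # unit cost of fuel exergy ($/kWh) - assumed
a = 5.0       # capital-cost scale ($/h) - assumed

def unit_product_cost(eta):
    """Exergetic production cost: fuel cost plus capital cost per
    unit of product exergy, as a function of exergetic efficiency."""
    fuel_cost = c_F * E_P / eta           # fuel exergy cost rate
    capital_cost = a * eta / (1.0 - eta)  # investment grows with efficiency
    return (fuel_cost + capital_cost) / E_P

res = minimize_scalar(unit_product_cost, bounds=(0.05, 0.95), method='bounded')
# res.x is the cost-optimal exergetic efficiency for these assumed values
```

In a real thermoeconomic analysis the cost balance is written component by component, but the same structure (fuel cost vs. investment cost per unit of product exergy) drives the optimum.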
Resumo:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, for newer machines in particular, generally the most important. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
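Warm-up thermal error is often well approximated by a first-order (exponential) growth law toward a steady-state value. The sketch below fits such a law to synthetic data; the functional form, parameter values and data are illustrative assumptions, not measurements from the machine studied in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def warmup_error(t, e_inf, tau):
    """First-order thermal error growth during warm-up: the error rises
    toward a steady-state value e_inf with time constant tau."""
    return e_inf * (1.0 - np.exp(-t / tau))

# Synthetic warm-up data (minutes vs. micrometres) - illustrative only
t = np.linspace(0, 240, 25)
rng = np.random.default_rng(0)
y = warmup_error(t, 30.0, 60.0) + rng.normal(0.0, 0.5, t.size)

(e_inf_hat, tau_hat), _ = curve_fit(warmup_error, t, y, p0=(20.0, 40.0))
```

Once such a model is identified, the predicted error at the current elapsed warm-up time can be fed to the CNC controller as a compensation offset.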