930 results for Skin absorption -- Mathematical models
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability that contaminant source terms occur and the assessment of the resulting impacts. The approach emphasizes a greater dependence on the potential pollution sources, rather than the traditional approach in which assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated according to the distribution of historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to model grid size, contaminant loading rate, length of stress period, and the historical frequency of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as input to the transport model. The results of applying the method are presented as tables, graphs and spatial maps. Varying the model grid size indicates no significant effect on the simulated groundwater head, and the simulated frequency of daily pollution incidents is likewise independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, increase with increasing grid size, and the contaminant plume migrates faster on coarse grids than on fine grids. The number of daily contaminant source terms generated, and consequently the total contaminant mass within the aquifer, increases nonlinearly with the frequency of pollution events. The risk of pollution from a number of sources all occurring together by chance was evaluated and presented quantitatively as risk maps. This ability to combine the risk to a groundwater feature from numerous potential pollution sources proved to be a major asset of the method and a clear advantage over contemporary risk and vulnerability methods.
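A minimal sketch of the Monte Carlo loop described above, with hypothetical event probabilities, loading rate and threshold, and a stand-in transport solver (the actual method couples MODFLOW-2000 and MT3DMS through a FORTRAN risk model):

```python
import numpy as np

# Hypothetical illustration of the risk loop: in each realisation every
# active cell spawns a synthetic source term with probability p_event;
# `transport` stands in for the MODFLOW-2000/MT3DMS simulation chain.
rng = np.random.default_rng(42)
n_cells, n_realisations = 100, 1000
p_event = 0.05     # assumed daily probability of a pollution event per cell
load = 250.0       # assumed contaminant loading per event (kg/d)
threshold = 1.5    # user-defined concentration limit (mg/L)

def transport(sources):
    # stand-in for the transport model: concentration at a monitoring
    # point as a crude attenuated sum of the generated source terms
    return 0.001 * sources.sum()

exceedances = 0
for _ in range(n_realisations):
    events = rng.random(n_cells) < p_event      # which cells fire today
    conc = transport(load * events)
    exceedances += conc > threshold
print(f"risk of exceedance: {exceedances / n_realisations:.3f}")
```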
Abstract:
This paper focuses on minimizing printed circuit board (PCB) assembly time for a chip shooter machine, which has a movable feeder carrier holding components, a movable X–Y table carrying a PCB, and a rotary turret with multiple assembly heads. The assembly time of the machine depends on two inter-related optimization problems: the component sequencing problem and the feeder arrangement problem. Nevertheless, they have often been treated as two separate problems and solved independently. This paper proposes two complete mathematical models for the integrated problem on this machine. The models are verified with two commercial packages. Finally, a hybrid genetic algorithm previously developed by the authors is presented to solve the model. The algorithm not only quickly generates optimal solutions for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total assembly time.
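As a hedged illustration of why the two subproblems interact, the sketch below (hypothetical timing constants, not the paper's model) evaluates the total assembly time of a given component sequence and feeder arrangement; because the X–Y table, feeder carrier and turret move concurrently, each placement costs the slowest of the three movements, so the sequence and the slot assignment cannot be optimized independently:

```python
# Hypothetical simplified chip shooter timing model: per placement the
# X-Y table, feeder carrier, and turret move concurrently, so the cycle
# time is the maximum of the three movement times.
def assembly_time(sequence, feeder_slot, t_table=0.02, t_feeder=0.03,
                  t_index=0.05):
    total, px, py, prev_slot = 0.0, 0.0, 0.0, 0
    for ctype, x, y in sequence:
        table = t_table * max(abs(x - px), abs(y - py))  # Chebyshev move
        feeder = t_feeder * abs(feeder_slot[ctype] - prev_slot)
        total += max(table, feeder, t_index)             # concurrent moves
        px, py, prev_slot = x, y, feeder_slot[ctype]
    return total

placements = [("R10k", 5, 3), ("C100n", 1, 8), ("R10k", 2, 2)]
slots = {"R10k": 0, "C100n": 4}
print(f"total assembly time: {assembly_time(placements, slots):.3f} s")
```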
Abstract:
The preparation and characterisation of collagen:PCL composites for the manufacture of tissue-engineered skin substitutes and models are reported. Films having collagen:PCL (w/w) ratios of 1:4, 1:8 and 1:20 were prepared by impregnating lyophilised collagen mats with PCL solutions followed by solvent evaporation. In vitro assays of collagen release and residual collagen content revealed the expected inverse relationship between the collagen release rate and the synthetic polymer content of the composite, which may be exploited for controlled presentation and release of biopharmaceuticals such as growth factors. DSC analysis revealed the characteristic melting point of PCL at around 60°C and a tendency for the collagen component, at high loading, to impede crystallinity development within the PCL phase. The preparation of fibroblast/composite constructs was investigated using cell culture as a first stage in mimicking the dermal/epidermal structure of skin. Fibroblasts were found to attach and proliferate on all the composites investigated, reaching a maximum of 2×10^5 cells/cm^2 on 1:20 collagen:PCL materials at day 8, with cell numbers declining thereafter. Keratinocyte growth rates were similar on all types of collagen:PCL materials investigated, reaching a maximum of 6.6×10^4 cells/cm^2 at day 6. The results revealed that composite films of collagen and PCL are favourable substrates for the growth of fibroblasts and keratinocytes and may find utility in skin repair.
Abstract:
* This paper was prepared under the program of fundamental scientific research of the Presidium of the Russian Academy of Sciences «Mathematical simulation and intellectual systems», project "Theoretical foundation of the intellectual systems based on ontologies for intellectual support of scientific researches".
Abstract:
For metal and metal halide vapor lasers excited by a high-frequency pulsed discharge, the thermal effect, caused mainly by the radial temperature distribution, is of considerable importance for stable laser operation and for improving the laser output characteristics. A short survey of the analytical and numerical-analytical mathematical models obtained for the temperature profile in a high-powered He-SrBr2 laser is presented. The models are described by the steady-state heat conduction equation with mixed-type nonlinear boundary conditions for an arbitrary form of the volume power density. A complete model of radial heat flow between the two tubes is established for precise calculation of the inner wall temperature. The models are applied to simulate temperature profiles for a newly designed laser. The author's software prototype LasSim is used to implement the mathematical models and carry out the simulations.
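A brief numerical sketch of the underlying equation, using illustrative property values, a parabolic volume power density, and a fixed outer-wall temperature as a stand-in for the mixed nonlinear boundary conditions treated in the models:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Steady-state radial heat conduction, k*(T'' + T'/r) = -q(r); all
# numbers below are assumptions for illustration, not the laser's data.
k = 0.15      # W/(m K), assumed effective thermal conductivity
R = 0.01      # m, assumed discharge tube radius
q0 = 2.0e6    # W/m^3, assumed peak volume power density

def q(r):
    return q0 * (1.0 - (r / R) ** 2)          # assumed parabolic profile

def rhs(r, y):
    T, dT = y                                  # y[0] = T, y[1] = dT/dr
    geo = np.where(r > 0, dT / np.maximum(r, 1e-12), 0.0)
    return np.vstack([dT, -q(r) / k - geo])

def bc(ya, yb):
    # symmetry at the axis; fixed wall temperature standing in for the
    # mixed nonlinear condition of the actual model
    return np.array([ya[1], yb[0] - 800.0])

r = np.linspace(0.0, R, 200)
y_guess = np.zeros((2, r.size))
y_guess[0] = 800.0
sol = solve_bvp(rhs, bc, r, y_guess)
print(f"axis temperature: {sol.y[0][0]:.0f} K")
```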
Abstract:
The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of the deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operating conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation is commonly not considered in these models; instead, initial values of layer thickness and porosity are regularly assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. Drop-wise early condensation on a cold flat plate exposed, under natural convection, to hot (room-temperature) humid air was modeled. A nucleation rate was found, and the relation of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, the standard value usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top temperature and weight variation signals can now be used to control defrost timing, and the developed early nucleation model can now be used to simulate the entire process of frost growth on any surface material.
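A classical nucleation theory (CNT) sketch of the drop-wise early condensation step, together with a textbook Lewis number estimate; all property values and the supersaturation are assumptions, not the dissertation's data:

```python
import numpy as np

kB = 1.380649e-23      # J/K, Boltzmann constant
sigma = 75.6e-3        # N/m, water surface tension near 0 degC (assumed)
v_m = 2.99e-29         # m^3, volume of one water molecule
T = 268.15             # K, assumed cold-plate temperature
S = 1.6                # assumed supersaturation ratio p_v / p_sat(T)

# CNT: critical (Kelvin) radius and homogeneous nucleation barrier
r_crit = 2 * sigma * v_m / (kB * T * np.log(S))
dG_crit = 16 * np.pi * sigma**3 * v_m**2 / (3 * (kB * T * np.log(S))**2)
print(f"critical droplet radius: {r_crit * 1e9:.2f} nm")
print(f"nucleation barrier / kT: {dG_crit / (kB * T):.0f}")

# Textbook Lewis number estimate for humid air, Le = alpha / D; frosting
# models often assume Le near unity, whereas the dissertation reports
# much smaller values in the early nucleation stage.
alpha, D_ab = 2.2e-5, 2.5e-5   # m^2/s, thermal and vapor diffusivities
print(f"Lewis number alpha/D: {alpha / D_ab:.2f}")
```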
Abstract:
In this work, I studied and found the exact solutions of a mathematical model applied to the cell receptors of the integrin family. In the model, integrins are treated as a two-state system, active and inactive. When integrins are in the inactive state they can diffuse in the membrane, whereas in the active state they are crystallized in the membrane, unable to diffuse. A change in the cell-surface concentration of a substance called the activator triggers integrin activation. Moreover, these heterodimers can bind an inhibitory molecule with control and regulation functions, which we call v; by binding the receptor it increases the production of the activating substance, which we call u. A positive feedback mechanism is thereby triggered. The inhibitor v regulates the production mechanism of u and therefore acts as a modulator: thanks to this fine regulation system, the positive feedback mechanism is able to limit itself. A system of differential equations is then built starting from the simple chemical reactions involved. Once the system of equations is set up, the solutions for the concentrations of the inhibitor and the activator can be derived for a particular case of the parameters. Finally, a test can be run to see what the model predicts in terms of integrins. To do so, I used a step-function activation, inserted it into the system, and evaluated the receptor dynamics. The result agrees with the predictions: bound integrins are found mainly at the edges of the activated zone, while free integrins are depleted within the activated zone.
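A crude finite-difference sketch of the two-state picture (assumed rates, geometry and a step-function activation zone; the thesis derives exact solutions, whereas this only checks the qualitative prediction):

```python
import numpy as np

# Free integrins F diffuse on the membrane; active integrins B are
# immobilised; a step-shaped activation profile a(x) drives F -> B.
L, N = 10.0, 200
dx = L / N
D, k_on, k_off = 1.0, 2.0, 0.1          # assumed rates
dt = 0.2 * dx**2 / D                     # stable explicit time step
x = np.linspace(0.0, L, N)
a = ((x > 4) & (x < 6)).astype(float)    # step-function activated zone

F = np.ones(N)                           # free (diffusing) integrins
B = np.zeros(N)                          # active (immobilised) integrins
for _ in range(20000):
    lap = (np.roll(F, 1) - 2 * F + np.roll(F, -1)) / dx**2  # periodic b.c.
    react = k_on * a * F - k_off * B
    F += dt * (D * lap - react)
    B += dt * react

# Expected pattern: free integrins are depleted inside the activated zone
# while active integrins pile up there, most strongly near the zone edges.
inside, outside = (x > 4) & (x < 6), (x < 4) | (x > 6)
print(f"F inside: {F[inside].mean():.3f}  F outside: {F[outside].mean():.3f}")
```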
Abstract:
In perifusion cell cultures, the culture medium flows continuously through a chamber containing immobilized cells and the effluent is collected at the end. In our main applications, gonadotropin-releasing hormone (GnRH) or oxytocin is introduced into the chamber as the input. They stimulate the cells to secrete luteinizing hormone (LH), which is collected in the effluent. To relate the effluent LH concentration to the cellular processes producing it, we develop and analyze a mathematical model consisting of coupled partial differential equations describing the intracellular signaling and the movement of substances in the cell chamber. We analyze three different data sets and give cellular mechanisms that explain the data. Our model indicates that two negative feedback loops, one fast and one slow, are needed to explain the data, and we give their biological bases. We demonstrate that different LH outcomes in oxytocin and GnRH stimulations might originate from different receptor dynamics. We analyze the model to understand the influence of parameters, such as the rate of medium flow or the fraction collection time, on the experimental outcomes. We investigate how the rates of binding and dissociation of the input hormone to and from its receptor influence its movement down the chamber. Finally, we formulate and analyze simpler models that allow us to predict the distortion of a square pulse due to hormone-receptor interactions and to estimate parameters using perifusion data. We show that in the limit of high binding and dissociation rates the square pulse moves as a diffusing Gaussian, and in this limit the biological parameters can be estimated.
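A rough sketch of the square-pulse distortion experiment (assumed chamber geometry and rates; the full model also couples intracellular signaling to LH secretion):

```python
import numpy as np

# Square input pulse advected down the chamber with reversible binding:
#   dc/dt + v*dc/dx = -k_on*c + k_off*b,   db/dt = k_on*c - k_off*b
L, N = 1.0, 400
dx = L / N
v, k_on, k_off = 1.0, 50.0, 50.0     # fast binding/dissociation limit
dt = 0.5 * dx / v                     # CFL-stable step
x = np.linspace(0.0, L, N)

c = np.zeros(N)                       # free hormone in the medium
b = np.zeros(N)                       # receptor-bound hormone
pulse_steps = int(0.05 / dt)          # square pulse duration at the inlet
for step in range(int(1.1 / dt)):
    c_in = 1.0 if step < pulse_steps else 0.0
    adv = np.empty(N)                 # first-order upwind advection
    adv[0] = (c[0] - c_in) / dx
    adv[1:] = (c[1:] - c[:-1]) / dx
    react = k_on * c - k_off * b
    c += dt * (-v * adv - react)
    b += dt * react

# With fast exchange the pulse travels at roughly v/(1 + k_on/k_off) and
# relaxes toward a spreading Gaussian rather than a square wave.
print(f"peak position: {x[np.argmax(c)]:.3f}, peak height: {c.max():.3f}")
```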
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the sheer size of the data. The focus of this thesis is to provide statistical models that scale to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 addresses another massive-data problem, in which the number of observations of a function is large: an exact algorithm, linear in time, is developed for interpolating methylation levels. Chapters 4 and 5 both concern robust inference. Chapter 4 proposes a new robustness criterion for parameter estimation and shows that several inference approaches satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for practical use.
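As a generic illustration of the emulation idea in Chapter 2 (a toy one-dimensional simulator and an assumed design, nothing from the thesis itself), a Gaussian process can be fit to a handful of expensive runs and then queried cheaply:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # stand-in for an expensive computer model run
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 12)[:, None]   # a few design points
y_train = simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
gp.fit(X_train, y_train)
mean, sd = gp.predict(np.array([[0.7], [1.3]]), return_std=True)
print(f"emulator predictions: {mean}, uncertainties: {sd}")
```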
Abstract:
Fire is a form of uncontrolled combustion which generates heat, smoke, and toxic and irritant gases. All of these products are harmful to man and account for the heavy annual cost of 800 lives and £1,000,000,000 worth of property damage in Britain alone. The new discipline of Fire Safety Engineering has developed as a means of reducing these unacceptable losses. One of the main tools of Fire Safety Engineering is the mathematical model, and over the past 15 years a number of mathematical models have emerged to cater for the needs of the discipline. Part of the difficulty faced by the Fire Safety Engineer is selecting the most appropriate modelling tool for the job. To make an informed choice it is essential to have a good understanding of the various modelling approaches, their capabilities and their limitations. In this paper some of the fundamental modelling tools used to predict fire and evacuation are investigated, as are the issues associated with their use and recent developments in modelling technology.
Abstract:
Research on the B-vitamin nutrition of ruminants shows highly variable results for the amounts of these nutrients available to the animal depending on the nature of the ration. These discrepancies are due to changes in the microbial populations of the rumen caused by the physico-chemical characteristics of the ration. A better understanding of the effects of the nature of the diet on the synthesis and use of B vitamins in the rumen could help identify the conditions under which supplementation with these vitamins would benefit the cow. The aim of this thesis work is therefore to improve understanding of the effects of forage species, maturity, and forage particle length on the B-vitamin supply of the dairy cow. To evaluate each of these variables, the concentrations of thiamine, riboflavin, niacin, vitamin B6, folates and vitamin B12 were measured in the feed and duodenal digesta samples collected during three projects carried out at the University of Michigan by Dr. M. Allen's team. In the first study, the effect of silage forage species was evaluated in two similar experiments in which the cows received a diet based on alfalfa or orchardgrass silage. Alfalfa-based diets were associated with greater ruminal degradation of thiamine and vitamin B6 than orchardgrass silage-based diets. The second study assessed the effects of plant maturity at ensiling on the amounts of B vitamins available to the cow; the two experiments differed in the forage species studied, alfalfa or orchardgrass. Harvesting at a more advanced maturity stage increased the duodenal flows of thiamine, niacin and folates when the cows received alfalfa silage-based diets, but decreased only the duodenal flow of riboflavin in animals receiving orchardgrass silage-based diets. The third study compared the effects of cutting length (10 vs. 19 mm) of alfalfa and orchardgrass silages on the fate of B vitamins in the digestive system of the dairy cow. This study showed that a longer field drying time decreased the B-vitamin concentrations of the silages; however, the particle size of the alfalfa and orchardgrass silages did not affect the amounts of B vitamins reaching the duodenum of the cows. Overall, the results of these studies show a negative correlation between the synthesis of riboflavin, niacin and vitamin B6 and their intake, suggesting possible regulation of the amounts of these B vitamins by the rumen microorganisms. Moreover, starch and nitrogen intakes were positively correlated with the synthesis of thiamine, folates and vitamin B12, and negatively with the synthesis of niacin. These correlations suggest that the microorganisms that preferentially use starch play a major role in the synthesis or degradation of these vitamins, and that the presence of a sufficient amount of nitrogen has a major impact on these processes.
Future work should aim to model these data in order to better understand the physiology of the digestion of these vitamins and to allow the creation of mathematical models able to predict the amounts of vitamins available to cows. Once integrated into ration-formulation software, such models would allow more precise diets to be devised, improving herd health and milk performance and increasing producer profits.
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods depends strongly on proper modeling of connection behavior. The primary challenge in modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches, mathematical models and informational models, are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined, followed by a discussion of their merits and deficiencies. Component-based modeling is a compromise between two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, five critical components of excessive deformation are identified. Constitutive relationships for the angles, the column panel zone, and the contact between angles and column flanges are derived using only material and geometric properties and theoretical mechanics considerations. Those for slip and bolt-hole ovalization are simplified using empirically suggested mathematical representations and expert opinion. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationships of the components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. For a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which use only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires slip and bolt-hole ovalization components, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically generated data and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide insight into the underlying mechanics of the components. In this study, a new hybrid modeling framework is proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence informational alternatives are considered.
This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model the aspects that the mathematical model leaves out. The autoprogressive algorithm and self-learning simulation extract the missing aspects from the system response. In the hybrid framework, experimental data are an integral part of modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as the angles, flange plates, and column panel zone, are idealized in a mathematical model using a complete mechanical approach. Although the mathematical model represents the envelope curves in terms of initial stiffness and yield strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between the angles and column flanges as well as slip between the angles/flange plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
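A toy sketch of the informational-model idea (a synthetic bilinear-hysteresis generator standing in for experimental moment-rotation records; not the dissertation's network or training data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bilinear_moment(theta, k=100.0, My=1.0, a=0.05):
    # toy hysteretic generator: elastic steps clipped by hardening bounds
    m, out, prev = 0.0, [], 0.0
    for th in theta:
        m = np.clip(m + k * (th - prev), -My + a * k * th, My + a * k * th)
        out.append(m)
        prev = th
    return np.array(out)

t = np.linspace(0.0, 8 * np.pi, 2000)
theta = 0.03 * np.sin(t) * (t / t[-1])          # growing cyclic rotation
M = bilinear_moment(theta)

# features: current rotation, rotation increment, previous moment
# (history terms so the network can represent hysteresis)
X = np.column_stack([theta[1:], np.diff(theta), M[:-1]])
y = M[1:]
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(f"training R^2: {net.score(X, y):.3f}")
```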
Abstract:
It is nowadays recognized that the risk of human co-exposure to multiple mycotoxins is real. In recent years, a number of studies have approached the issue of co-exposure and the best way to develop a more precise and realistic assessment. Likewise, the growing concern about the combined effects of mycotoxins and their potential impact on human health has been reflected in the increasing number of toxicological studies on the combined toxicity of these compounds. Nevertheless, risk assessment of these toxins still follows the conventional paradigm of single exposure and single effects, incorporating only the possibility of additivity and not taking into account the complex dynamics associated with interactions between different mycotoxins or between mycotoxins and other food contaminants. Considering that risk assessment is intimately related to the establishment of regulatory guidelines, once the risk assessment is completed, an effort to reduce or manage the risk should follow to protect public health. Risk assessment of combined human exposure to multiple mycotoxins thus poses several challenges to scientists, risk assessors and risk managers, and opens new avenues for research. This presentation aims to give an overview of the different challenges posed by the likelihood of human co-exposure to mycotoxins and the possibility of interactive effects occurring after absorption, towards generating knowledge to support a more accurate human risk assessment and risk management. For this purpose, a physiologically-based framework that includes knowledge on the bioaccessibility, toxicokinetics and toxicodynamics of multiple toxins is proposed. Regarding exposure assessment, the need for harmonized food consumption data, the availability of multi-analyte methods for mycotoxin quantification, the management of left-censored data, and the use of probabilistic models will be highlighted, in order to develop a more precise and realistic exposure assessment. On the other hand, the application of predictive mathematical models to estimate the combined effects of mycotoxins from in vitro toxicity studies will also be discussed. Results from a recent Portuguese project aimed at exploring the toxic effects of mixtures of mycotoxins in infant foods and their potential health impact will be presented as a case study, illustrating the different aspects of risk assessment highlighted in this presentation. Further studies on hazard and exposure assessment of multiple mycotoxins, using harmonized approaches and methodologies, will be crucial for improving data quality and contributing to holistic risk assessment and risk management strategies for multiple mycotoxins in foodstuffs.
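For the step that predicts combined effects from in vitro data, concentration addition (Loewe additivity) is a common default model; the sketch below, with hypothetical mixture fractions and EC50 values, shows the usual calculation:

```python
# Concentration addition (Loewe additivity): predicted EC50 of a mixture
# with mass fractions p_i of toxins having individual EC50_i values.
def ca_ec50(fractions, ec50s):
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec50s))

# hypothetical binary mixture; names and EC50 values are placeholders
mixture = {"toxin A": (0.5, 1.2), "toxin B": (0.5, 3.4)}  # (fraction, EC50)
fractions, ec50s = zip(*mixture.values())
print(f"predicted mixture EC50: {ca_ec50(fractions, ec50s):.2f} uM")
```

Deviations of an observed mixture EC50 from this prediction are then read as synergy or antagonism.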