185 results for Thermal modeling
Abstract:
OBJECTIVE: This study was undertaken to investigate how aging affects dermal microvascular reactivity in skin areas differentially exposed to sunlight, and therefore subject to different degrees of photoaging. METHODS: In young (18-30 years, n = 13) and aged (≥60 years, n = 13) males, we assessed the skin vasodilatory response to local heating (LTH) of the thigh, forearm, and forehead with a laser Doppler imager (LDI). In each subject and at each location, local skin temperature (Tskin) was brought from 34°C (baseline) to 39 or 41°C for 30 minutes to effect submaximal vasodilation, with maximal vasodilation then elicited by further heating to 44°C. RESULTS: Cutaneous vascular conductances (CVCs) evaluated at baseline and after maximal vasodilation (CVCmax) were higher in the forehead than in the two other anatomical locations. At all locations, CVCmax decreased with age, but less markedly in the forehead than in the two other locations. When expressed as % of CVCmax, the plateau increase of CVCs in response to submaximal temperatures (39 and 41°C) did not vary with age and varied only minimally with location. CONCLUSION: Skin aging, whether intrinsic or combined with photoaging, reduces the maximal vasodilatory capacity of the dermal microcirculation but not its reactivity to local heating.
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions.
We apply extreme-value theory and analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
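The first paper's jump-diffusion setup can be sketched numerically: a compound Poisson process (here with Gaussian jump sizes) added to a constant-volatility log-demand process. All parameter values and the function name below are illustrative assumptions, not the thesis's calibration; a minimal sketch:

```python
import numpy as np

def simulate_jump_diffusion(d0=100.0, mu=0.0, sigma=0.1, lam=0.5,
                            jump_mean=-0.3, jump_sd=0.1,
                            horizon=1.0, n_steps=250, n_paths=20000, seed=0):
    """Simulate demand-forecast paths: geometric Brownian motion plus a
    compound Poisson jump term with Gaussian jump sizes (illustrative)."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    log_d = np.full(n_paths, np.log(d0))
    for _ in range(n_steps):
        diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        # Sum of N(t) i.i.d. Gaussian jumps over this step, N(t) ~ Poisson(lam*dt)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = n_jumps * jump_mean + np.sqrt(n_jumps) * jump_sd * rng.standard_normal(n_paths)
        log_d += diffusion + jumps
    return np.exp(log_d)

# Forecast dispersion with and without jumps: jumps widen the terminal
# demand distribution, which is what raises the value of shorter lead times.
with_jumps = simulate_jump_diffusion(lam=0.5)
no_jumps = simulate_jump_diffusion(lam=0.0)
print(np.std(with_jumps) > np.std(no_jumps))  # → True
```

The comparison illustrates the paper's qualitative finding: higher jump intensity or magnitude inflates demand uncertainty over the lead time.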
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
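As an illustration of the Boolean/finite-state formalism described above, here is a toy three-gene network with synchronous updates and an exhaustive attractor search over its state space, analogous to simulating a small digital circuit. The nodes and rules are invented for illustration and are not taken from the chapter:

```python
from itertools import product

# Toy Boolean gene-regulatory network (illustrative rules):
# A activates B, B activates C, C represses A.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}

def step(state):
    """Synchronous update: every node applies its Boolean rule at once."""
    return {node: rule(state) for node, rule in rules.items()}

def attractors():
    """Enumerate all 2^3 states and follow each trajectory until a state
    repeats, mimicking exhaustive state-space analysis of an FSM."""
    found = set()
    for bits in product([False, True], repeat=3):
        state = dict(zip("ABC", bits))
        seen = []
        while state not in seen:
            seen.append(state)
            state = step(state)
        cycle = seen[seen.index(state):]
        # Canonical, order-independent representation of the cycle
        found.add(tuple(sorted(tuple(sorted(s.items())) for s in cycle)))
    return found

print(len(attractors()))  # → 2 (a six-state cycle and a two-state cycle)
```

Steady-state behavior, as discussed in the chapter, corresponds to these attractors; perturbation experiments amount to clamping a node and re-running the search.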
Abstract:
Two diffuse soil CO2 flux surveys from the southern Lakki plain show that CO2 is mainly released from the hydrothermal explosion craters. The correspondence between high CO2 fluxes and elevated soil temperatures suggests that a flux of hot hydrothermal fluids ascends towards the surface. Steam mostly condenses near the surface, and the heat given off is conductively transferred to the atmosphere through the soil, accompanied by a large CO2 flux. It was calculated that 68 t d(-1) of hydrothermal CO2 are released through the total surveyed area of ~1.3 km(2). Assuming that a steam flux of 2200 t d(-1) accompanies this CO2 flux, the thermal energy released through steam condensation amounts to 58 MW.
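The 58 MW figure can be checked from the stated steam flux, assuming a latent heat of condensation for water of about 2.26 MJ/kg (a value not given in the abstract):

```python
# Back-of-envelope check of the abstract's thermal-energy figure.
steam_flux_t_per_day = 2200.0
latent_heat_j_per_kg = 2.26e6          # assumption: latent heat near 100 degC
seconds_per_day = 86_400

power_w = steam_flux_t_per_day * 1000 * latent_heat_j_per_kg / seconds_per_day
print(round(power_w / 1e6))  # → 58 (MW), matching the reported value
```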
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
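A heavily simplified sketch of the trend-plus-residual decomposition behind MLRSS, with a radial-basis-function regression standing in for the MLP/SVR trend models and a plain residual bootstrap standing in for sequential simulation (synthetic 1-D data; none of this reproduces the paper's actual algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "contamination" field: smooth long-range trend + short-range noise.
x = np.linspace(0, 10, 200)
truth = np.sin(x) + 0.5 * x
data = truth + 0.2 * rng.standard_normal(x.size)

def rbf_trend(x, y, centers, width=1.0, ridge=1e-3):
    """Ridge-regularized radial-basis-function regression, a stand-in for
    the paper's ML trend models (not a reproduction of them)."""
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w = np.linalg.solve(phi.T @ phi + ridge * np.eye(centers.size), phi.T @ y)
    return phi @ w

trend = rbf_trend(x, data, centers=np.linspace(0, 10, 15))
residuals = data - trend

# Simulation step, heavily simplified: bootstrap the residuals to build
# equally probable realizations, then summarize the spread as uncertainty.
realizations = trend + rng.choice(residuals, size=(100, x.size), replace=True)
spread = realizations.std(axis=0).mean()
print(spread < 0.5)  # → True (spread tracks the short-range noise level)
```

In the paper the residual simulation is conditional and spatially structured (via variography); the bootstrap here only conveys the idea of attaching stochastic residual realizations to an ML trend.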
Abstract:
To study the influence of the menstrual cycle on whole body thermal balance and on thermoregulatory mechanisms, metabolic heat production (M) was measured by indirect calorimetry and total heat losses (H) were measured by direct calorimetry in nine women during the follicular (F) and the luteal (L) phases of the menstrual cycle. The subjects were studied while exposed for 90 min to neutral environmental conditions (ambient temperature 28 degrees C, relative humidity 40%) in a direct calorimeter. The values of M and H were not modified by the phase of the menstrual cycle. Furthermore, in both phases the subjects were in thermal equilibrium because M was similar to H (69.7 +/- 1.8 and 72.1 +/- 1.8 W in F and 70.4 +/- 1.9 and 71.4 +/- 1.7 W in L phases, respectively). Tympanic temperature (Tty) was 0.24 +/- 0.07 degrees C higher in the L than in the F phase (P < 0.05), whereas mean skin temperature (Tsk) was unchanged. Calculated skin thermal conductance (Ksk) was lower in the L (17.9 +/- 0.6 W.m-2.degrees C-1) than in the F phase (20.1 +/- 1.1 W.m-2.degrees C-1; P < 0.05). Calculated skin blood flow (Fsk) was also lower in the L (0.101 +/- 0.008 l.min-1.m-2) than in the F phase (0.131 +/- 0.015 l.min-1.m-2; P < 0.05). Differences in Tty, Ksk, and Fsk were not correlated with changes in plasma progesterone concentration. It is concluded that, during the L phase, a decreased thermal conductance in women exposed to a neutral environment allows the maintenance of a higher internal temperature.
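The reported conductances are consistent with the standard partitional-calorimetry relation K = H / (A * (Tcore - Tskin)). In the sketch below, body surface area and mean skin temperature are assumed values chosen for illustration; only the ~72 W heat loss and the 0.24 degC core-temperature difference come from the abstract:

```python
def skin_conductance(heat_loss_w, body_area_m2, t_core_c, t_skin_c):
    """Skin thermal conductance K = H / (A * (Tcore - Tskin)),
    in W.m-2.degC-1 (standard partitional-calorimetry relation)."""
    return heat_loss_w / (body_area_m2 * (t_core_c - t_skin_c))

# Illustrative numbers: 1.7 m2 body area and 34.6 degC skin temperature
# are assumptions, not figures reported in the abstract.
k_follicular = skin_conductance(72.1, 1.7, 36.7, 34.6)
k_luteal = skin_conductance(71.4, 1.7, 36.94, 34.6)  # Tty ~0.24 degC higher
print(round(k_follicular, 1), round(k_luteal, 1))  # → 20.2 17.9
```

With these plausible assumed inputs the computed values land close to the reported conductances (20.1 and 17.9 W.m-2.degrees C-1), showing how the higher luteal core temperature at unchanged skin temperature implies a lower conductance.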
Abstract:
Nicotine in a smoky indoor air environment can be determined using graphitized carbon black as a solid sorbent in quartz tubes. The temperature stability, high purity, and heat absorption characteristics of the sorbent, as well as the permeability of the quartz tubes to microwaves, enable thermal desorption by means of microwaves after active sampling. Permeation and dynamic dilution procedures for the generation of nicotine in the vapor phase at low and high concentrations are used to evaluate the performance of the sampler. Tube preparation is described and the microwave desorption temperature is measured. The breakthrough volume is determined to allow sampling at 0.1-1 L/min for definite periods of time. The procedure is tested for the determination of gas- and particulate-phase nicotine in sidestream smoke produced in an experimental chamber.
Abstract:
The nuclear matrix, a proteinaceous network believed to be a scaffolding structure determining the higher-order organization of chromatin, is usually prepared from intact nuclei by a series of extraction steps. In most cell types investigated, the nuclear matrix does not spontaneously resist these treatments but must be stabilized before the application of extracting agents. Incubation of isolated nuclei at 37°C or 42°C in buffers containing Mg++ has been widely employed as a stabilizing step. We have previously demonstrated that heat treatment induces changes in the distribution of three nuclear scaffold proteins in nuclei prepared in the absence of Mg++ ions. We studied whether different concentrations of Mg++ (2.0-5 mM) affect the spatial distribution of nuclear matrix proteins in nuclei isolated from K562 erythroleukemia cells and stabilized by heat at either 37°C or 42°C. Five proteins were studied: two were RNA metabolism-related proteins (a 105-kD component of splicing complexes and an RNP component), one was a 126-kD constituent of a class of nuclear bodies, and two were components of the inner matrix network. The localization of the proteins was determined by immunofluorescent staining and confocal scanning laser microscopy. Mg++ induced significant changes in antigen distribution even at the lowest concentration employed, and these modifications were enhanced in parallel with increases in the concentration of the divalent cation. The different sensitivity of these nuclear proteins to heat stabilization and Mg++ might reflect a different degree of association with the nuclear scaffold and can be closely related to their functional or structural role.
Abstract:
Vegetation has a profound effect on flow and sediment transport processes in natural rivers, increasing both skin friction and form drag. The increase in drag introduces a drag discontinuity between the in-canopy flow and the flow above, which leads to the development of an inflection point in the velocity profile, resembling a free shear layer. Drag therefore acts as the primary driver for the entire canopy system. Most current numerical hydraulic models that incorporate vegetation rely either on simple, static plant forms or on canopy-scaled drag terms. However, it is suggested that these are insufficient, as vegetation canopies represent complex, dynamic, porous blockages within the flow, which are subject to spatially and temporally dynamic drag forces. Here we present a dynamic drag methodology within a CFD framework. Preliminary results for a benchmark cylinder case highlight the accuracy of the method and suggest its applicability to more complex cases.
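The drag terms such models rely on reduce to the standard quadratic drag law; a minimal sketch for a benchmark-style cylinder (the drag coefficient, dimensions, and flow speed below are illustrative assumptions, not values from the study):

```python
def drag_force(rho, cd, frontal_area, velocity):
    """Quadratic drag law F = 0.5 * rho * Cd * A * u * |u|, the standard
    closure used for cylinder/vegetation drag terms in hydraulic CFD."""
    return 0.5 * rho * cd * frontal_area * velocity * abs(velocity)

# Benchmark-style cylinder: Cd ~1.0 (illustrative), 10 cm diameter,
# 1 m submerged length, water (rho = 1000 kg/m3) flowing at 0.5 m/s.
f = drag_force(rho=1000.0, cd=1.0, frontal_area=0.1 * 1.0, velocity=0.5)
print(f)  # → 12.5 (N)
```

A "dynamic drag" treatment, as proposed in the abstract, would update Cd and the frontal area in time and space as the canopy deforms, rather than holding them fixed as here.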
Abstract:
The present dissertation is entitled "Development and Application of Computational Methodologies in Qualitative Modeling". It encompasses the diverse projects that were undertaken during my time as a PhD student.
Instead of a systematic implementation of a framework defined a priori, this thesis should be considered as an exploration of the methods that can help us infer the blueprint of regulatory and signaling processes. This exploration was driven by concrete biological questions rather than theoretical investigation. Even though the projects involved divergent systems (gene regulatory networks of the cell cycle, signaling networks in lung cells), as well as organisms (fission yeast, budding yeast, rat, human), our goals were complementary and coherent. The main project of the thesis is the modeling of the Septation Initiation Network (SIN) in S.pombe. Cytokinesis in fission yeast is controlled by the SIN, a protein kinase signaling network that uses the spindle pole body as a scaffold. In order to describe the qualitative behavior of the system and predict unknown mutant behaviors, we decided to adopt a Boolean modeling approach. In this thesis, we report the construction of an extended Boolean model of the SIN, comprising most SIN components and regulators as individual, experimentally testable nodes. The model uses CDK activity levels as control nodes for the simulation of SIN-related events in different stages of the cell cycle. The model was optimized using single knock-out experiments of known phenotypic effect as a training set, and was able to correctly predict a double knock-out test set. Moreover, the model has made in silico predictions that have been validated in vivo, providing new insights into the regulation and hierarchical organization of the SIN. Another cell cycle-related project that is part of this thesis was to create a qualitative, minimal model of cyclin interplay in S.cerevisiae. Clb proteins in budding yeast present a characteristic, sequential activation and decay during the cell cycle, commonly referred to as Clb waves. This event is coordinated with the inverse activation curve of Sic1, which has an inhibitory role in the system.
To generate minimal qualitative models that can explain this phenomenon, we selected well-defined experiments and constructed all possible minimal models that, when simulated, reproduce the expected results. The models were filtered using standardized qualitative ODE simulations; only the ones reproducing the wave-like phenotype were kept. The set of minimal models can be used to suggest regulatory relations among the participating molecules, which can subsequently be tested experimentally. Finally, during my PhD I participated in the SBV Improver Challenge. The goal was to infer species-specific (human and rat) networks, using phosphoprotein, gene expression, and cytokine data and a reference network provided as prior knowledge. Our solution to the challenge took third place; the approach used is explained in detail in the final chapter of the thesis.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
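The workflow (learning set → FPCA scores → error model → prediction from the proxy alone) can be sketched on synthetic data, with a discrete PCA standing in for functional PCA and a linear least-squares map standing in for the machine-learning error model; all data and dimensions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the paper's setting: 50 "realizations", each
# producing a proxy curve and an exact curve on 40 time steps.
t = np.linspace(0, 1, 40)
params = rng.normal(size=(50, 2))
proxy = params[:, :1] * np.sin(2 * np.pi * t) + params[:, 1:] * t
exact = 1.1 * proxy + 0.3 * params[:, :1] * np.cos(2 * np.pi * t)

def pca_scores(curves, n_comp=2):
    """Discrete stand-in for FPCA: center the curves and project them
    onto the leading right-singular vectors (principal components)."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean)
    comps = vt[:n_comp]
    return (curves - mean) @ comps.T, mean, comps

# Learning set: run both solvers, then learn a map from proxy to exact scores.
sp, _, _ = pca_scores(proxy)
se, mean_e, comps_e = pca_scores(exact)
coef, *_ = np.linalg.lstsq(sp, se, rcond=None)

# Error model: predict the exact curve of each realization from proxy scores only.
pred = mean_e + (sp @ coef) @ comps_e
rel_err = np.linalg.norm(pred - exact) / np.linalg.norm(exact)
print(rel_err < 0.1)  # → True
```

The dimensionality reduction is what makes the regression tractable, and, as the abstract notes, inspecting the retained components and the fit residuals gives a diagnostic of both the learning set and the proxy's fidelity.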