986 results for 3D-modeling
Abstract:
OBJECTIVE: To evaluate the public health impact of statin prescribing strategies based on the Justification for the Use of Statins in Primary Prevention: An Intervention Trial Evaluating Rosuvastatin Study (JUPITER). METHODS: We studied 2268 adults aged 35-75 without cardiovascular disease in a population-based study in Switzerland in 2003-2006. We assessed eligibility for statins according to the Adult Treatment Panel III (ATPIII) guidelines, and by adding "strict" (hs-CRP ≥ 2.0 mg/L and LDL-cholesterol < 3.4 mmol/L) and "extended" (hs-CRP ≥ 2.0 mg/L alone) JUPITER-like criteria. We estimated the proportion of CHD deaths potentially prevented over 10 years in the Swiss population. RESULTS: Fifteen percent were already taking statins, 42% were eligible by ATPIII guidelines, 53% by adding "strict", and 62% by adding "extended" criteria, for a total of 19% newly eligible. The number needed to treat with statins to avoid one CHD death over 10 years was 38 for ATPIII, 84 for "strict", and 92 for "extended" JUPITER-like criteria. ATPIII would prevent 17% of CHD deaths, compared with 20% for ATPIII + "strict" and 23% for ATPIII + "extended" criteria (+6%). CONCLUSION: Implementing JUPITER-like strategies would make statin prescribing for primary prevention more common and less efficient than it is with current guidelines.
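As a back-of-the-envelope illustration of the arithmetic linking these figures, the sketch below converts an eligibility fraction and a number needed to treat (NNT) into expected CHD deaths avoided. The eligibility fractions and NNTs are the values reported above; the population size is hypothetical, and applying each NNT to the full eligible group is a simplifying assumption.

```python
# Minimal sketch: population impact of statin eligibility strategies.
# Eligibility fractions and 10-year NNTs are taken from the abstract;
# the population size, and applying the NNT to the whole eligible group,
# are illustrative assumptions.

def chd_deaths_prevented(n_eligible: float, nnt: float) -> float:
    """Expected CHD deaths avoided over 10 years if all eligible adults are treated."""
    return n_eligible / nnt

population = 100_000  # hypothetical adult population aged 35-75

strategies = {
    # strategy: (fraction eligible, NNT over 10 years)
    "ATPIII":                    (0.42, 38),
    "ATPIII + strict JUPITER":   (0.53, 84),
    "ATPIII + extended JUPITER": (0.62, 92),
}

for name, (eligible, nnt) in strategies.items():
    prevented = chd_deaths_prevented(eligible * population, nnt)
    print(f"{name}: ~{prevented:.0f} CHD deaths prevented per {population:,} adults")
```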
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites on orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts, and on modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and during the period following a fast solar wind stream on 25-26 October 2012, to within a factor of 5 or so. At lower energies, from 10 keV to a few hundred keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 RE on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus determined from multiple satellite measurements, which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients calculated from satellite data at geostationary orbit, which vary with Kp by over four orders of magnitude. Finally, we describe a new automated method, which takes entropy into account, to determine the position on the shock that is magnetically connected to the Earth for modelling solar energetic particle events, and we predict from analytical theory the form of the mean free path in the foreshock and the particle injection efficiency at the shock, both of which can be tested in simulations.
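For reference, one common parametrization of the isotropic kappa distribution mentioned above can be evaluated as in the sketch below; the abstract does not give the exact form used, so this standard form and all parameter values are assumptions. It reduces to a Maxwellian as kappa grows large, and a small kappa gives the hard suprathermal tail that the source distribution requires.

```python
# Sketch: isotropic kappa velocity distribution, a standard parametrization for
# suprathermal source populations (the project's exact form is not given here).
import numpy as np
from scipy.special import gamma

def kappa_vdf(v, n, theta, kappa):
    """Phase-space density f(v) for density n [m^-3], thermal speed
    theta [m/s], and spectral index kappa > 3/2."""
    norm = n / (np.pi * kappa * theta**2) ** 1.5
    norm *= gamma(kappa + 1.0) / gamma(kappa - 0.5)
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** -(kappa + 1.0)

# A hard high-energy tail (small kappa) carries far more phase-space density
# at high speeds than a near-Maxwellian (large kappa); values are illustrative.
v = 3e7  # m/s
for k in (2.0, 4.0, 50.0):
    print(k, kappa_vdf(v, n=1e6, theta=5e6, kappa=k))
```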
Abstract:
The hydrogeological properties and responses of a productive aquifer in northeastern Switzerland are investigated. For this purpose, 3D crosshole electrical resistivity tomography (ERT) is used to define the main lithological structures within the aquifer (through static inversion) and to monitor water infiltration from an adjacent river. During precipitation events and subsequent river flooding, the river water resistivity increases. As a consequence, the electrical characteristics of the infiltrating water can be used as a natural tracer to delineate preferential flow paths and flow velocities. The focus here is primarily on the experimental installation, the data collection strategy, and the structural characterization of the site, together with a brief overview of the ERT monitoring results. The monitoring system comprises 18 boreholes, each equipped with 10 electrodes straddling the entire thickness of the gravel aquifer. A multi-channel resistivity system, programmed to cycle through various four-point electrode configurations of the 180 electrodes in a rolling sequence, allows for the measurement of approximately 15,500 apparent resistivity values every 7 h on a continuous basis. The 3D static ERT inversion of data acquired under stable hydrological conditions provides a base model for future time-lapse inversion studies and the means to investigate the resolving capability of our acquisition scheme. In particular, it enables definition of the main lithological structures within the aquifer. The final ERT static model delineates a relatively high-resistivity, low-porosity, intermediate-depth layer throughout the investigated aquifer volume that is consistent with results from well logging and from seismic and radar tomography models. The next step will be to define and implement an appropriate time-lapse ERT inversion scheme using the river water as a natural tracer. The main challenge will be to separate the superposed time-varying effects of water table height, temperature, and salinity variations associated with the infiltrating water.
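To make the four-point measurement geometry concrete, the sketch below computes an apparent resistivity for one configuration of buried electrodes using the full-space geometric factor; the ground-surface image terms are neglected for brevity, and the electrode coordinates and readings are hypothetical, not from this survey.

```python
# Sketch: apparent resistivity from one four-point configuration (A, B inject
# current; M, N measure potential) of borehole electrodes, using the full-space
# geometric factor. Surface-image terms are neglected; values are illustrative.
import numpy as np

def apparent_resistivity(a, b, m, n, delta_v, current):
    """rho_a = K * dV / I for buried electrodes in a homogeneous full space."""
    r = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    k = 4.0 * np.pi / (1/r(a, m) - 1/r(b, m) - 1/r(a, n) + 1/r(b, n))
    return k * delta_v / current

# Two electrodes in each of two boreholes 5 m apart, at 4 m and 6 m depth:
rho_a = apparent_resistivity(a=(0, 0, 4), b=(0, 0, 6),
                             m=(5, 0, 4), n=(5, 0, 6),
                             delta_v=0.012, current=0.1)  # volts, amps
print(f"apparent resistivity ~ {rho_a:.1f} ohm-m")
```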
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. The thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into the part arising from constant instantaneous volatility and the part arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions. We apply extreme-value theory and analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
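The jump-diffusion idea in the first paper can be illustrated with a short simulation: a geometric Brownian demand-forecast process augmented by a compound Poisson jump term. This is a generic sketch; all parameters are hypothetical, and the thesis's calibration and Edgeworth-expansion pricing step are not reproduced.

```python
# Sketch: demand-forecast paths from a jump-diffusion model -- constant-volatility
# geometric Brownian motion plus a compound Poisson jump term.
# All parameters are illustrative, not the thesis's calibration.
import numpy as np

rng = np.random.default_rng(0)

def jump_diffusion_path(d0, mu, sigma, lam, jump_mu, jump_sigma, t, steps):
    """One path of demand D_t with drift mu, volatility sigma, Poisson jump
    intensity lam, and lognormal jump sizes exp(N(jump_mu, jump_sigma^2))."""
    dt = t / steps
    d = np.empty(steps + 1)
    d[0] = d0
    for i in range(steps):
        diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)                       # usually 0, sometimes 1+
        jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()  # sum of log jump sizes
        d[i + 1] = d[i] * np.exp(diffusion + jumps)
    return d

# Negative jump_mu models sudden, abnormal drops in the demand forecast:
path = jump_diffusion_path(d0=100, mu=0.02, sigma=0.2,
                           lam=0.5, jump_mu=-0.4, jump_sigma=0.1,
                           t=1.0, steps=250)
print(path[-1])  # terminal demand forecast after one year
```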
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
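A minimal sketch of the kind of Boolean/finite-state formalism described: a synchronous network of update rules whose exhaustive state enumeration reveals the steady states. The three-gene rules below are hypothetical, not taken from the chapter.

```python
# Sketch: a synchronous Boolean network treated as a finite-state machine.
# The three-gene update rules are hypothetical; steady states are the
# fixed points of the update map over all 2**3 states.
from itertools import product

def update(state):
    a, b, c = state
    return (
        b and not c,   # A is activated by B and repressed by C
        a,             # B simply follows A
        a and b,       # C requires both A and B
    )

# Exhaustively enumerate the state space and keep the fixed points:
fixed_points = [s for s in product((False, True), repeat=3) if update(s) == s]
print(fixed_points)  # steady-state expression patterns of the toy network
```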
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during the decoding process.
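A rough illustration of the quantization-to-PSNR link is sketched below using the textbook high-rate approximation MSE ≈ Δ²/12 per coefficient, with the error budget spread evenly over the 64 DCT coefficients. This is a deliberate simplification: the paper instead models the DCT coefficients as Laplacian sources and adds per-coefficient visual constraints.

```python
# Sketch: derive a flat JPEG quantization matrix for a target PSNR using the
# high-rate uniform-quantizer approximation MSE ~ step**2 / 12 per coefficient.
# Simplification for illustration only; the paper's Laplacian model and visual
# constraints would give a non-uniform matrix.
import numpy as np

def quant_matrix_for_psnr(target_psnr_db, peak=255.0):
    mse = peak**2 / 10 ** (target_psnr_db / 10)   # PSNR -> per-pixel MSE; the DCT is
                                                  # orthonormal, so this equals the
                                                  # average per-coefficient MSE
    step = np.sqrt(12.0 * mse)                    # uniform quantizer step for that MSE
    return np.full((8, 8), int(np.clip(round(step), 1, 255)), dtype=np.uint8)

print(quant_matrix_for_psnr(35.0))  # e.g. a flat matrix of steps ~16 for 35 dB
```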
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of data-driven ML and model-based geostatistical approaches, can be efficiently used in the decision-making process.
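The detrend-then-simulate idea behind MLRSS can be sketched as follows: fit an ML regressor for the long-range trend, then examine the spatial structure of the residuals with an empirical variogram. The sequential simulation of those residuals is omitted here, and the data are synthetic stand-ins, not the Chernobyl measurements.

```python
# Sketch of the first two MLRSS steps: ML trend modeling and variography of
# the residuals. Sequential simulation is omitted; the data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(500, 2))                    # sample coordinates
z = 0.05 * xy[:, 0] + np.sin(xy[:, 1] / 15) + rng.normal(0, 0.2, 500)

trend = SVR(kernel="rbf", C=10.0).fit(xy, z)               # long-range trend model
residuals = z - trend.predict(xy)                          # should be near-stationary

def empirical_variogram(coords, values, lags):
    """Mean squared half-difference of value pairs, binned by separation distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        diffs = (values[:, None] - values[None, :])[mask]
        gamma.append(0.5 * np.mean(diffs**2))
    return np.array(gamma)

# Structure remaining in the residuals would show as a rising variogram:
print(empirical_variogram(xy, residuals, lags=np.linspace(0, 30, 7)))
```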
Abstract:
THESIS ABSTRACT Garnets are one of the key metamorphic minerals used to study peak metamorphic conditions or crystallization ages. Equilibrium is typically assumed between the garnet and the matrix. This thesis attempts to understand garnet growth in the Zermatt-Saas Fee (ZSF) eclogites, and discusses the consequences for Sm/Nd and Lu/Hf dating and for the equilibrium assumption. All studied garnets from the ZSF eclogites are strongly zoned in Mn, Fe, Mg, and Ca. Methods based on chemical zoning patterns and on 3D spatial statistics indicate different growth mechanisms depending on the sample studied. Garnets from the Pfulwe area grew in a system where surface kinetics likely dominated over intergranular diffusion kinetics. Garnets from two other localities, Nuarsax and Lago di Cignana, seem to have grown in a system where intergranular diffusion kinetics dominated over surface kinetics, at least during initial growth. The garnets reveal strong prograde REE+Y zoning. They contain narrow central peaks for Lu + Yb + Tm ± Er and at least one additional small peak towards the rim. The REE Sm + Eu + Gd + Tb ± Dy are depleted in the cores but show one prominent peak close to the rim. It is shown that these patterns can be explained using a transient matrix diffusion model in which REE uptake is limited by diffusion in the matrix surrounding the porphyroblast. The secondary peaks in the garnet profiles are interpreted to reflect thermally activated diffusion due to a temperature increase during prograde metamorphism. The model predicts anomalously low 176Lu/177Hf and 147Sm/144Nd ratios in garnets whose growth rates are fast compared to diffusion of the REE, which decreases garnet isochron precision. The sharp Lu zoning was further used to constrain maximum Lu volume diffusion rates in garnet. The modeled minimum pre-exponential diffusion coefficient that fits the measured central peak is on the order of D0 = 5.7 × 10^-6 m^2/s, taking an activation energy of 270 kJ/mol, chosen in agreement with experimentally determined values. This can be used to estimate a minimum closure temperature of around 630°C for the ZSF zone. The REE zoning was combined with published Lu/Hf and Sm/Nd age information to redefine the prograde crystallization interval for the Lago di Cignana UHP eclogites. Modeling revealed that a prograde growth interval on the order of 25 m.y. is needed to produce the measured spread in ages.
SUMMARY Garnet is a key metamorphic mineral for determining peak metamorphic conditions and crystallization ages. Equilibrium between the garnet and the matrix is required. This study aims to understand garnet growth in the eclogites of the Zermatt-Saas Fee (ZSF) zone and to examine some consequences for Sm/Nd and Lu/Hf dating. All studied garnets from the ZSF eclogites are strongly zoned in Mn, Fe, and Mg, and partially in Ca. Methods based on the chemical zoning patterns and on 3D spatial statistics indicate different growth mechanisms depending on the sampling locality. Garnets from the Pfulwe area probably grew in a system dominated mainly by surface kinetics at the expense of intergranular diffusion kinetics. Garnets from two other localities, Nuarsax and Lago di Cignana, appear to have crystallized in a system dominated by intergranular diffusion, at least during the early stages of growth. The garnets show strong prograde zoning in the rare earth elements (REE) and in Y. The profiles show a narrow central peak in Lu + Yb + Tm ± Er and at least one additional small peak towards the rim. The garnet cores are depleted in Sm + Eu + Gd + Tb ± Dy, but the rims are marked by a prominent peak in these REE. These profiles are explained by a matrix diffusion model in which the REE supply is limited by diffusion in the matrix surrounding the porphyroblasts. The secondary peaks at the grain rims reflect diffusion activated by the temperature increase during prograde metamorphism. This model predicts anomalously low 176Lu/177Hf and 147Sm/144Nd ratios when growth rates are faster than REE diffusion, which reduces the precision of garnet isochrons. The sharp Lu zoning made it possible to constrain the maximum volume diffusion numerically. The modeled minimum diffusion coefficient consistent with the measured peaks is on the order of D0 = 5.7 × 10^-6 m^2/s, taking an experimentally determined activation energy of ~270 kJ/mol. A minimum closure temperature of around 630°C is thus estimated for the ZSF zone. New REE zoning data are combined with the ages obtained from Lu/Hf and Sm/Nd ratios to redefine the prograde crystallization interval of the Lago di Cignana UHP eclogites. Modeling indicates a prograde growth interval of at least 25 Ma to produce the previously measured spread in ages.
LAY SUMMARY One of the main goals of the metamorphic petrologist is to extract from rocks the information on the temporal, thermal, and barometric evolution they underwent during the formation of a mountain belt. Garnet is one of the key minerals in a wide variety of metamorphic rocks. It has been the subject of numerous studies in terrains of varied origins and in experimental work aimed at understanding its stability fields, its reactions, and its coexistence with other minerals. This makes garnet one of the most attractive minerals for dating rocks. However, when it is used for dating and/or geothermobarometry, garnet is always assumed to grow in equilibrium with the coexisting matrix phases. Yet mineral growth is generally tied to disequilibrium processes. This study aims to understand how garnet grows in the Zermatt-Saas Fee eclogites and thus to assess the degree of disequilibrium. It also seeks to explain the age differences obtained from garnets at different localities of the Zermatt-Saas Fee unit. The main question when studying garnet growth mechanisms is: among the processes at play during garnet growth (dissolution of former minerals, transport of elements to the new garnet, precipitation of a new layer at the mineral surface), which is the slowest and thus determines the degree of disequilibrium? Garnets from one locality (Pfulwe) indicate that surface attachment is the slowest, unlike the garnets from the other localities (Lago di Cignana, Nuarsax), in which the transport processes are the slowest. This shows that the dominant processes vary, even in similar rocks of the same tectonic unit. This implies that the processes must be determined individually for each rock in order to assess the degree of disequilibrium of the garnet in that rock. All the analyzed garnets show a high core concentration of the rare earth elements Lu + Yb + Tm ± Er that decreases towards the grain rim. Conversely, the rare earth elements Sm + Eu + Gd + Tb ± Dy are depleted in the core and concentrated at the grain rim. Modeling reveals that these profiles are due to slow transport kinetics of the rare earth elements. Moreover, the models predict low concentrations of the radiogenic parent elements in certain rocks, which strongly affects the precision of ages obtained by the isochron method. This means that the rocks best suited for dating should contain neither abundant garnet nor very large crystals, because in that case the competition for elements between crystals limits the amount of parent elements in each crystal to low concentrations.
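The Arrhenius relation implied by these numbers is straightforward to evaluate; the sketch below computes the Lu diffusivity D(T) = D0 · exp(−Ea/RT) with the D0 and Ea quoted in the abstract, evaluated near the estimated minimum closure temperature.

```python
# Sketch: Arrhenius evaluation of the Lu volume-diffusion bound from the abstract,
# D(T) = D0 * exp(-Ea / (R * T)), with D0 = 5.7e-6 m^2/s and Ea = 270 kJ/mol.
import math

R = 8.314     # gas constant, J / (mol K)
D0 = 5.7e-6   # m^2/s, modeled minimum pre-exponential coefficient
EA = 270e3    # J/mol, activation energy (experimentally constrained value)

def diffusivity(t_celsius: float) -> float:
    t_kelvin = t_celsius + 273.15
    return D0 * math.exp(-EA / (R * t_kelvin))

# Near the estimated minimum closure temperature for the ZSF zone:
print(f"D(630 C) = {diffusivity(630):.2e} m^2/s")
```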
Abstract:
In recent years the Institut Català d'Arqueologia Clàssica and the Museu d'Història de Tarragona, with the collaboration of the Generalitat de Catalunya, have developed the Planimetría Arqueológica de Tárraco project, aimed at producing a global archaeological plan gathering the interventions and reports relating to the known archaeological finds. This work was published using a GIS built for that purpose as the working platform (Macias et al. 2007). However, a problem that is difficult to solve archaeologically arises from the urban transformations of the city, most of which took place over the 19th and 20th centuries. These caused the irretrievable loss of much of the elevated ground on which the Roman city stood, substantially changing its original appearance. Faced with this situation, and as a project parallel to the Planimetría Arqueológica de Tarragona, ways of filling this gap were explored. This paper presents a methodological proposal for reconstructing the large "topographic voids" created by the urban evolution of Tarragona, by obtaining various types of documentary information and integrating them into a GIS. In these lowered areas it is not possible to obtain stratigraphic or archaeological information, so it is essential to define alternative methodological approaches based on extrapolating data extracted from historical cartography, 16th-century panoramic views, or photographs taken in the 19th and 20th centuries. This technique allows the results to be applied in new interpretative analyses, thereby complementing the archaeological interpretation of the urban topography of the Roman city. From this information, and applying the interpolation functions and techniques available in a GIS, a relief model of the city of Tarraco is proposed here.
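As an illustration of the GIS interpolation step, the sketch below builds a relief surface from scattered elevation points by inverse-distance weighting, one of the standard interpolation techniques available in a GIS. The sample points are hypothetical; the project would use elevations extracted from historical cartography and photographs.

```python
# Sketch: inverse-distance-weighted (IDW) interpolation of scattered elevation
# points onto a regular grid -- the kind of GIS interpolation used to fill
# "topographic voids". Sample points and elevations are illustrative.
import numpy as np

def idw_grid(points, values, xi, yi, power=2.0):
    """Interpolate scattered (x, y) -> z observations onto the grid (xi, yi)."""
    gx, gy = np.meshgrid(xi, yi)
    d = np.hypot(gx[..., None] - points[:, 0], gy[..., None] - points[:, 1])
    d = np.maximum(d, 1e-9)                 # avoid division by zero at data points
    w = 1.0 / d**power                      # closer points weigh more
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 40]], float)
elev = np.array([55.0, 62.0, 48.0, 70.0, 58.5])   # meters a.s.l., illustrative
dem = idw_grid(pts, elev, xi=np.linspace(0, 100, 11), yi=np.linspace(0, 100, 11))
print(dem.round(1))
```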
Abstract:
The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s, together with new development tools and technologies, demands growing attention from everybody working within this industry. Organizations have, however, had a mixture of various processes and process languages, since a standard software development process language has not been available. A promising process meta-model called Software & Systems Process Engineering Meta-Model (SPEM) 2.0 has recently been released. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content, and which aims to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the most well-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced, with a comparison provided between them. The main focus is on the Eclipse Process Framework and SPEM 2.0: their capabilities, usage, and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. The software process modeling also acts as a process improvement activity.
Abstract:
Tumors in non-Hodgkin lymphoma (NHL) patients are often proximal to the major blood vessels in the abdomen or neck. In external-beam radiotherapy, these tumors present a challenge because imaging resolution prevents the beam from being targeted to the tumor lesion without also irradiating the artery wall. This problem has led to potentially life-threatening delayed toxicity. Because radioimmunotherapy has resulted in long-term survival of NHL patients, we investigated whether the absorbed dose (AD) to the artery wall in radioimmunotherapy of NHL is of potential concern for delayed toxicity. SPECT resolution is not sufficient to enable dosimetric analysis of anatomic features as thin as the aortic wall. Therefore, we present a model of aortic wall toxicity based on data from 4 patients treated with (131)I-tositumomab. METHODS: Four NHL patients with periaortic tumors were administered pretherapeutic (131)I-tositumomab. Abdominal SPECT and whole-body planar images were obtained at 48, 72, and 144 h after tracer administration. Blood-pool activity concentrations were obtained from regions of interest drawn on the heart on the planar images. Tumor and blood activity concentrations, scaled to therapeutic administered activities (both standard and myeloablative), were input into a geometry and tracking model (GEANT, version 4) of the aorta. The simulated energy deposited in the arterial walls was collected and fitted, and the AD and biologic effective dose values to the aortic wall and tumors were obtained for standard therapeutic and hypothetical myeloablative administered activities. RESULTS: Arterial wall ADs from standard therapy were lower (0.6-3.7 Gy) than those typical of external-beam therapy, as were the tumor ADs (1.4-10.5 Gy). The ratios of tumor AD to arterial wall AD were greater for radioimmunotherapy by a factor of 1.9-4.0. For myeloablative therapy, artery wall ADs were in general less than those typical of external-beam therapy (9.4-11.4 Gy for 3 of 4 patients) but comparable for 1 patient (32.6 Gy). CONCLUSION: Blood vessel radiation dose can be estimated using the software package 3D-RD combined with GEANT modeling. The dosimetry analysis suggested that arterial wall toxicity is highly unlikely in standard-dose radioimmunotherapy but should be considered a potential concern and limiting factor in myeloablative therapy.
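The biologic effective dose reported alongside the absorbed doses follows, in the standard linear-quadratic formulation, BED = D(1 + d/(α/β)); a minimal sketch is below. The α/β value and the treatment of radioimmunotherapy's continuous low-dose-rate exposure as a single fraction are simplifying assumptions, not the paper's 3D-RD/GEANT dosimetry model.

```python
# Sketch: biologic effective dose (BED) in the standard linear-quadratic model,
# BED = D * (1 + d / (alpha/beta)). The alpha/beta ratio and treating continuous
# low-dose-rate irradiation as one "fraction" are simplifying assumptions.

def bed(total_dose_gy: float, dose_per_fraction_gy: float, alpha_beta_gy: float) -> float:
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# Arterial wall at the top of the reported standard-therapy range (3.7 Gy),
# with alpha/beta = 3 Gy, a value typical of late-responding normal tissue:
print(f"BED ~ {bed(3.7, 3.7, 3.0):.1f} Gy")
```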