994 results for Modeling Methodology
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted to be point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling of diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of the establishment of the Bicoid gradient in Drosophila embryos. The gradient forms by protein translation from an mRNA distribution, followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with an exponentially decreasing profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
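The steady-state backbone of this formalism can be sketched numerically: for 1D diffusion with linear degradation, the Green's function is a double exponential with decay length λ = √(D/k), and the profile is its convolution with the source. A minimal sketch with illustrative parameters (not fitted Bicoid values):

```python
import numpy as np

# Steady state of D*C'' - k*C + s(x) = 0 on a long 1D domain.
# Green's function: G(x, x') = (lam / (2*D)) * exp(-|x - x'| / lam),
# with decay length lam = sqrt(D / k). The profile is G convolved
# with the source s(x). All parameter values here are illustrative.

D, k = 1.0, 0.25
lam = np.sqrt(D / k)                 # decay length = 2.0

x = np.linspace(0.0, 20.0, 401)
dx = x[1] - x[0]

def steady_profile(source):
    """Convolve a source density with the Green's function on the grid."""
    G = (lam / (2.0 * D)) * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)
    return G @ source * dx

# Extended (normally distributed) source near the anterior pole.
gauss = np.exp(-x**2 / (2.0 * 1.5**2))
gauss /= gauss.sum() * dx            # unit total production rate

c = steady_profile(gauss)

# Far from the source, the profile decays as exp(-x / lam):
slope = np.polyfit(x[200:350], np.log(c[200:350]), 1)[0]
fitted_lam = -1.0 / slope
print(f"fitted decay length: {fitted_lam:.2f} (expected {lam:.2f})")
```

Swapping `gauss` for any other source density shows how the far-field decay length stays fixed while the near-source shape changes, which is the point the abstract makes about extended sources.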
Abstract:
The purposes of this report (Phase II of the project) are to specify in mathematical form the individual modules of the conceptual model developed in Phase I, to identify and evaluate sources of data for the model set, and to develop the transport networks necessary to support the models.
Abstract:
OBJECTIVE: To evaluate the public health impact of statin prescribing strategies based on the Justification for the Use of Statins in Primary Prevention: An Intervention Trial Evaluating Rosuvastatin Study (JUPITER). METHODS: We studied 2268 adults aged 35-75 without cardiovascular disease in a population-based study in Switzerland in 2003-2006. We assessed eligibility for statins according to the Adult Treatment Panel III (ATPIII) guidelines, and by adding "strict" (hs-CRP ≥ 2.0 mg/L and LDL-cholesterol < 3.4 mmol/L) and "extended" (hs-CRP ≥ 2.0 mg/L alone) JUPITER-like criteria. We estimated the proportion of CHD deaths potentially prevented over 10 years in the Swiss population. RESULTS: Fifteen percent were already taking statins; 42% were eligible by ATPIII guidelines, 53% by adding the "strict" criteria, and 62% by adding the "extended" criteria, with a total of 19% newly eligible. The number needed to treat with statins to avoid one CHD death over 10 years was 38 for ATPIII, 84 for the "strict", and 92 for the "extended" JUPITER-like criteria. ATPIII would prevent 17% of CHD deaths, compared with 20% for ATPIII + "strict" and 23% for ATPIII + "extended" criteria (+6%). CONCLUSION: Implementing JUPITER-like strategies would make statin prescribing for primary prevention more common and less efficient than it is with current guidelines.
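The efficiency comparison rests on standard number-needed-to-treat arithmetic, NNT = 1 / (absolute risk reduction). A sketch with illustrative placeholder numbers (not the study's data):

```python
# Number needed to treat (NNT) to avoid one CHD death over 10 years:
# NNT = 1 / ARR, where ARR is the absolute risk reduction. The baseline
# risk and relative risk reduction below are illustrative placeholders,
# not values reported by the study.

def nnt(baseline_risk, relative_risk_reduction):
    """NNT = 1 / (baseline 10-year risk * relative risk reduction)."""
    arr = baseline_risk * relative_risk_reduction
    return 1.0 / arr

# e.g. a 10-year CHD death risk of 8% and a 33% relative reduction:
print(round(nnt(0.08, 0.33)))   # 38
```

Because newly eligible groups under broader criteria tend to have a lower baseline risk, their ARR shrinks and the NNT grows, which is exactly the 38 → 84 → 92 pattern the abstract reports.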
Abstract:
Life cycle analysis (LCA) approaches require adaptation to reflect the increasing delocalization of production to emerging countries. This work addresses this challenge by establishing a country-level, spatially explicit life cycle inventory (LCI). This study comprises three separate dimensions. The first dimension is spatial: processes and emissions are allocated to the country in which they take place and modeled to take into account local factors. The emerging economies China and India are the locations of production; consumption occurs in Germany, an Organisation for Economic Co-operation and Development country. The second dimension is the product level: we consider two distinct textile garments, a cotton T-shirt and a polyester jacket, in order to highlight potential differences in the production and use phases. The third dimension is the inventory composition: we track CO2, SO2, NOx, and particulates, four major atmospheric pollutants, as well as energy use. This third dimension enriches the analysis of the spatial differentiation (first dimension) and distinct products (second dimension). We describe the textile production and use processes and define a functional unit for a garment. We then model important processes using a hierarchy of preferential data sources. We place special emphasis on the modeling of the principal local energy processes: electricity and transport in emerging countries. The spatially explicit inventory is disaggregated by the country of location of the emissions and analyzed according to the dimensions of the study: location, product, and pollutant. The inventory shows striking differences between the two products considered, as well as between the different pollutants considered. For the T-shirt, over 70% of the energy use and CO2 emissions occur in the consuming country, whereas for the jacket, more than 70% occur in the producing country. This reversal of proportions is due to differences in the use phase of the garments.
For SO2, in contrast, over two thirds of the emissions occur in the country of production for both T-shirt and jacket. The difference in emission patterns between CO2 and SO2 is due to local electricity processes, justifying our emphasis on local energy infrastructure. The complexity of considering differences in location, product, and pollutant is rewarded by a much richer understanding of a global production-consumption chain. The inclusion of two different products in the LCI highlights the importance of the definition of a product's functional unit in the analysis and implications of results. Several use-phase scenarios demonstrate the importance of consumer behavior over equipment efficiency. The spatial emission patterns of the different pollutants allow us to understand the role of various energy infrastructure elements. The emission patterns furthermore inform the debate on the Environmental Kuznets Curve, which applies only to pollutants which can be easily filtered and does not take into account the effects of production displacement. We also discuss the appropriateness and limitations of applying the LCA methodology in a global context, especially in developing countries. Our spatial LCI method yields important insights in the quantity and pattern of emissions due to different product life cycle stages, dependent on the local technology, emphasizing the importance of consumer behavior. From a life cycle perspective, consumer education promoting air-drying and cool washing is more important than efficient appliances. Spatial LCI with country-specific data is a promising method, necessary for the challenges of globalized production-consumption chains. We recommend inventory reporting of final energy forms, such as electricity, and modular LCA databases, which would allow the easy modification of underlying energy infrastructure.
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites on orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts and modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and the period following a fast solar wind stream on 25-26 October 2012, to within a factor of 5 or so. At lower energies of 10 to a few 100 keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 RE on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus, determined from multiple satellite measurements, which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients calculated from satellite data at geostationary orbit, which vary with Kp by over four orders of magnitude. Finally, for modelling solar energetic particle events, we describe a new automated method, taking entropy into account, to determine the position on the shock that is magnetically connected to the Earth, and we predict from analytical theory the form of the mean free path in the foreshock and the particle injection efficiency at the shock, both of which can be tested in simulations.
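The kappa distribution mentioned for the source population can be sketched in a few lines: it carries a power-law high-energy tail and tends to a Maxwellian as κ → ∞, which is why the fitted tail of the source matters so much for the predicted flux. A minimal illustration (unnormalized forms, illustrative parameters):

```python
import numpy as np

# Unnormalized kappa distribution in energy E (units of E0) vs. a
# Maxwellian. For finite kappa the tail falls off as a power law,
# ~ E^-(kappa+1), instead of exponentially. Parameters are illustrative.

def kappa_dist(E, E0=1.0, kappa=4.0):
    """Unnormalized kappa distribution: (1 + E/(kappa*E0))^-(kappa+1)."""
    return (1.0 + E / (kappa * E0)) ** (-(kappa + 1.0))

def maxwellian(E, E0=1.0):
    """Unnormalized Maxwellian tail: exp(-E/E0)."""
    return np.exp(-E / E0)

# The kappa tail dominates the Maxwellian at high energies:
E = 20.0
ratio = kappa_dist(E) / maxwellian(E)
print(f"tail enhancement at E = 20*E0: {ratio:.1e}")
```

At 20 thermal energies the kappa source supplies orders of magnitude more particles than a Maxwellian with the same core, so a flux model seeded with the wrong tail misses the high-energy population entirely.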
Abstract:
In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility, and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions.
We apply extreme-value theory and analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
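The Boolean/finite-state-machine formalism described above can be sketched concretely: genes are Boolean variables, the update rules play the role of the combinational logic of a synchronous state machine, and steady states are the fixed points of the update map. A toy three-gene network (invented for illustration, not taken from the chapter):

```python
from itertools import product

# Minimal Boolean-network sketch: each gene is a Boolean variable and the
# update rules define a synchronous finite-state machine over 2^n states.
# Steady states are the fixed points of one synchronous update.
# This three-gene network is a made-up toy example.

rules = {
    "A": lambda s: s["A"] and not s["C"],   # A stays on unless repressed by C
    "B": lambda s: s["A"],                   # B is activated by A
    "C": lambda s: s["B"],                   # C is activated by B
}

def step(state):
    """One synchronous update of all genes."""
    return {g: bool(f(state)) for g, f in rules.items()}

# Enumerate all 2^3 states and keep the fixed points (steady states).
genes = list(rules)
fixed = [s for bits in product([False, True], repeat=len(genes))
         for s in [dict(zip(genes, bits))] if step(s) == s]
print(fixed)   # only the all-off state is a fixed point here
```

Gene-perturbation experiments fit the same frame: knocking out a gene just pins its variable to False and the fixed points are recomputed, with no kinetic parameters needed.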
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts can be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during decoding.
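The core relationship the paper exploits, between a Laplacian-modeled DCT coefficient, its uniform quantization error, and the resulting PSNR, can be checked with a short Monte Carlo sketch (the single-coefficient setting and the parameter values are illustrative, not the paper's model):

```python
import numpy as np

# DCT coefficients are commonly modeled as Laplacian sources; uniform
# quantization with step q then yields a predictable MSE (about q^2/12
# for fine steps) and hence a predictable PSNR. Monte Carlo check below;
# the Laplacian scale and step sizes are illustrative.

rng = np.random.default_rng(0)

def quant_mse(scale, q, n=200_000):
    """Empirical MSE of uniform quantization (step q) of Laplacian samples."""
    x = rng.laplace(0.0, scale, n)
    xq = q * np.round(x / q)          # midtread uniform quantizer
    return np.mean((x - xq) ** 2)

def psnr(mse, peak=255.0):
    return 10.0 * np.log10(peak**2 / mse)

# Coarser steps -> larger MSE -> lower PSNR, so a target PSNR can be met
# by searching over q instead of repeated trial-and-error encoding:
for q in (4.0, 8.0, 16.0):
    print(f"q={q:5.1f}  PSNR={psnr(quant_mse(10.0, q)):5.1f} dB")
```

Because MSE is monotone in the step size, the search for a quantization matrix meeting a PSNR target can be done analytically or by bisection, which is the cost reduction the abstract claims.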
Abstract:
This article presents an optimization methodology for batch production processes with shared resources, which relies on a mapping of state events into time events, allowing the straightforward use of well-consolidated scheduling policies developed for manufacturing systems. A technique to generate the timed Petri net representation from a continuous dynamic representation (Differential-Algebraic Equation (DAE) systems) of the production system is presented, together with the main characteristics of a Petri-net-based tool implemented for optimization purposes. This paper also describes how the implemented tool generates the coverability tree and how the tree can be pruned by a general-purpose heuristic. An example of a distillation process with two shared batch resources is used to illustrate the proposed optimization methodology.
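The Petri-net machinery underlying the methodology can be sketched minimally: a marked net with consume/produce vectors per transition, and a breadth-first construction of the reachable markings, which is the graph a coverability tree prunes. A toy two-transition net (invented for illustration):

```python
from collections import deque

# Minimal marked Petri net: places hold token counts, each transition is a
# (consume, produce) vector pair, and the reachability graph is built
# breadth-first from the initial marking. Toy net, not the paper's model.

transitions = {
    "load":   ((1, 0, 0), (0, 1, 0)),   # take a token from place 0 to 1
    "unload": ((0, 1, 0), (0, 0, 1)),   # take a token from place 1 to 2
}

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

def reachable(m0):
    """All markings reachable from m0 (breadth-first search)."""
    seen, queue = {m0}, deque([m0])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions.values():
            if enabled(m, consume):
                m2 = fire(m, consume, produce)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

print(sorted(reachable((2, 0, 0))))   # 6 markings: two tokens over 3 stages
```

A scheduling heuristic of the kind the article describes would prune this search tree during construction, expanding only markings that look promising under the chosen policy.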
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residual sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms, used for modeling long-range spatial trends, followed by sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
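The hybrid scheme, an ML-modeled long-range trend plus stochastic simulation of the residuals, can be sketched in a few lines. For brevity, a polynomial fit stands in for the multilayer perceptron/SVR and i.i.d. residual resampling stands in for sequential geostatistical simulation; the data are synthetic:

```python
import numpy as np

# Hybrid "trend + residual simulation" sketch: a data-driven model captures
# the long-range trend, and stochastic simulation of the residuals supplies
# the uncertainty. Polynomial fit and i.i.d. resampling are simplifying
# stand-ins for the MLP/SVR and sequential simulation; data are synthetic.

rng = np.random.default_rng(1)

x = np.linspace(0.0, 10.0, 200)
truth = 2.0 + 0.5 * x - 0.04 * x**2          # long-range spatial trend
z = truth + rng.normal(0.0, 0.3, x.size)     # observations with local noise

# Step 1: model the long-range trend (stand-in for the ML algorithm).
trend = np.polyval(np.polyfit(x, z, deg=2), x)
resid = z - trend

# Step 2: simulate the residual field many times and add it back, giving
# an ensemble from which probability / risk maps follow directly.
sims = trend + rng.choice(resid, size=(100, x.size), replace=True)

p_exceed = (sims > 3.0).mean(axis=0)         # e.g. P(value > threshold)
print(f"residual std: {resid.std():.2f}, max exceedance prob: {p_exceed.max():.2f}")
```

The exceedance probabilities computed from the ensemble are exactly the kind of probability map the paper applies to the Chernobyl fallout data; a full implementation would replace the resampling step with variogram-based sequential simulation to preserve the residuals' spatial correlation.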
Abstract:
The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s, and new development tools and technologies, demand growing attention from everybody working within this industry. Organizations have, however, had a mixture of various processes and different process languages, since a standard software development process language has not been available. A promising process meta-model called Software & Systems Process Engineering Meta-Model (SPEM) 2.0 has recently been released. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content; its aim is to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the most well-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced, with a comparison provided between them. The main focus is on the Eclipse Process Framework and SPEM 2.0: their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. The software process modeling also acts as a process improvement activity.
Abstract:
This paper presents a relational positioning methodology for flexibly and intuitively specifying offline programmed robot tasks, as well as for assisting the execution of teleoperated tasks demanding precise movements. In relational positioning, the movements of an object can be restricted totally or partially by specifying its allowed positions in terms of a set of geometric constraints. These allowed positions are found by means of a 3D sequential geometric constraint solver called PMF (Positioning Mobile with respect to Fixed). PMF exploits the fact that in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently.
Abstract:
ABSTRACT: BACKGROUND: The Psychiatric arm of the population-based CoLaus study (PsyCoLaus) is designed to: 1) establish the prevalence of threshold and subthreshold psychiatric syndromes in the 35 to 66 year-old population of the city of Lausanne (Switzerland); 2) test the validity of postulated definitions for subthreshold mood and anxiety syndromes; 3) determine the associations between psychiatric disorders, personality traits and cardiovascular diseases (CVD); and 4) identify genetic variants that can modify the risk for psychiatric disorders and determine whether genetic risk factors are shared between psychiatric disorders and CVD. This paper presents the method as well as the somatic and sociodemographic characteristics of the sample. METHODS: All 35 to 66 year-old persons previously selected for the population-based CoLaus survey on risk factors for CVD were asked to participate in a substudy assessing psychiatric conditions. This investigation included the Diagnostic Interview for Genetic Studies to elicit diagnostic criteria for threshold disorders according to DSM-IV and algorithmically defined subthreshold syndromes. Complementary information was gathered on potential risk and protective factors for psychiatric disorders and migraine, and on the morbidity of first-degree family members, whereas the collection of DNA and plasma samples was part of the original somatic study (CoLaus). RESULTS: A total of 3,691 individuals completed the psychiatric evaluation (67% participation). The gender distribution of the sample did not differ significantly from that of the general population in the same age range. Although the youngest 5-year band of the cohort was underrepresented and the oldest 5-year band overrepresented, participants of PsyCoLaus and individuals who refused to participate revealed comparable scores on the General Health Questionnaire, a self-rating instrument completed at the somatic exam.
CONCLUSIONS: Despite limitations resulting from the relatively low participation in the context of a comprehensive and time-consuming investigation, the PsyCoLaus study should significantly contribute to the current understanding of psychiatric disorders and comorbid somatic conditions by: 1) establishing the clinical relevance of specific psychiatric syndromes below the DSM-IV threshold; 2) determining comorbidity between risk factors for CVD and psychiatric disorders; 3) assessing genetic variants associated with common psychiatric disorders and 4) identifying DNA markers shared between CVD and psychiatric disorders.