958 results for Process Modeling


Relevance:

30.00%

Abstract:

The influence of the hip joint formulation on the kinematic response of a model of human gait is investigated throughout this work. To accomplish this goal, the fundamental issues of the process of modeling a planar hip joint under the framework of multibody systems are revisited. In particular, the formulations for ideal, dry, and lubricated revolute joints are described and utilized to model the interaction of the femoral head with the acetabulum of the hip bone. In this process, the main kinematic and dynamic aspects of hip joints are analyzed. In a simple manner, the forces generated during human gait, for both dry and lubricated hip joint models, are computed in terms of the system's state variables and subsequently introduced into the dynamic equations of motion of the multibody system as external generalized forces. Moreover, a human multibody model is considered that incorporates the different approaches for the hip articulation, namely the ideal-joint, dry, and lubricated models. Finally, several computational simulations based on the different approaches are performed, and the main results are presented and compared to identify differences among the methodologies and procedures adopted in this work. The input conditions to the models correspond to experimental data captured from an adult male during normal gait. In general, the results obtained in terms of positions do not differ significantly when the different hip joint models are considered. In sharp contrast, the plotted velocities and accelerations vary significantly. The effect of the hip joint modeling approach is clearly measurable and visible in terms of peaks and oscillations of the velocities and accelerations. In general, with the dry hip model, intra-joint force peaks can be observed, which can be associated with the multiple impacts between the femoral head and the cup. In turn, when the lubricant is present, the system's response tends to be smoother due to the damping effects of the synovial fluid.
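The dry-joint contact force is the technically distinctive ingredient here. As a rough illustration of how such a force can be written as a function of the state variables, the sketch below implements a Hertz-type contact law with hysteresis damping in the spirit of the Lankarani-Nikravesh model, which is commonly used for clearance joints; the parameter values and the specific force law are illustrative assumptions, not the paper's exact formulation.

```python
def dry_joint_contact_force(delta, delta_dot, K=5.0e9, e=0.9, delta_dot_0=0.5, n=1.5):
    """Normal contact force for a dry revolute clearance joint.

    Hertz-type stiffness with a hysteresis damping term in the spirit of the
    Lankarani-Nikravesh model; all parameter values are illustrative assumptions.

    delta       : penetration between femoral head and cup [m]
    delta_dot   : penetration velocity [m/s]
    K           : generalized contact stiffness [N/m^n]
    e           : coefficient of restitution
    delta_dot_0 : initial impact velocity [m/s]
    n           : Hertz exponent (1.5 for sphere-on-sphere contact)
    """
    if delta <= 0.0:  # no penetration means no contact force
        return 0.0
    damping = 3.0 * (1.0 - e**2) / (4.0 * delta_dot_0)
    return K * delta**n * (1.0 + damping * delta_dot)
```

In a multibody code, this scalar force would be projected along the joint's penetration direction and added to the generalized force vector at each time step, exactly in the role the abstract describes.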

Relevance:

30.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance:

30.00%

Abstract:

The main purpose of the poster is to present how the Unified Modeling Language (UML) can be used for diagnosing and optimizing real industrial production systems. Using a car-radio production line as a case study, the poster shows the modeling process that can be followed during the analysis phase of complex control applications. In order to guarantee continuity between the models, the authors propose guidelines for transforming the use-case diagrams into a single object diagram, which is the main diagram for the subsequent phases of the development.

Relevance:

30.00%

Abstract:

This work focuses on the modeling and numerical approximation of population balance equations (PBEs) for the simulation of different phenomena occurring in process engineering. The population balance equation (PBE) is considered to be a statement of continuity: it tracks the change in the particle size distribution as particles are born, die, grow, or leave a given control volume. In population balance models, one independent variable represents time, while the other(s) are property coordinate(s), e.g., the particle volume (size) in the present case. They typically describe the temporal evolution of number density functions and have been used to model various processes such as granulation, crystallization, polymerization, emulsion, and cell dynamics. Semi-discrete high-resolution schemes are proposed for solving PBEs modeling one- and two-dimensional batch crystallization. The schemes are discrete in the property coordinates but continuous in time, and the resulting ordinary differential equations can be solved by any standard ODE solver. To improve the numerical accuracy of the schemes, a moving mesh technique is introduced in both the one- and two-dimensional cases ...
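The semi-discrete idea (discrete in the property coordinate, continuous in time) can be made concrete with a small example. The sketch below applies a flux-limited second-order upwind discretization to a pure-growth PBE, dn/dt + d(G n)/dx = 0, and hands the resulting ODE system to a standard solver; the growth rate, grid, and limiter choice are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pure-growth batch PBE: dn/dt + d(G*n)/dx = 0, with size-independent growth G.
G = 1.0                          # growth rate (illustrative)
x = np.linspace(0.0, 10.0, 201)  # property coordinate (particle size)
dx = x[1] - x[0]

def minmod(a, b):
    """Minmod slope limiter: keeps the scheme high-resolution yet non-oscillatory."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(t, n):
    # MUSCL-type reconstruction: limited slopes, then upwind (G > 0) face values.
    dn_left = np.diff(n, prepend=n[0])
    dn_right = np.diff(n, append=n[-1])
    slope = minmod(dn_left, dn_right)
    n_face = n + 0.5 * slope                 # value at the right face of each cell
    flux = np.concatenate(([0.0], G * n_face))  # zero inflow flux at the left boundary
    return -(flux[1:] - flux[:-1]) / dx

n0 = np.exp(-((x - 2.0) / 0.5) ** 2)         # initial seed distribution (illustrative)
sol = solve_ivp(rhs, (0.0, 4.0), n0, method="RK45", rtol=1e-6, atol=1e-9)
print("particle-number drift:", sol.y[:, -1].sum() * dx - n0.sum() * dx)
```

Because the scheme is written in conservative flux form, the total particle number is preserved up to boundary outflow, which is the property that makes such schemes attractive for crystallization models.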

Relevance:

30.00%

Abstract:

The Mont Collon mafic complex is one of the best preserved examples of the Early Permian magmatism in the Central Alps, related to the intra-continental collapse of the Variscan belt. It mostly consists (> 95 vol.%) of ol+hy-normative plagioclase-wehrlites, olivine- and cpx-gabbros with cumulitic structures, crosscut by acid dikes. Pegmatitic gabbros, troctolites and anorthosites outcrop locally. A well-preserved cumulate sequence is exposed in the Dents de Bertol area (center of the intrusion). PT calculations indicate that this layered magma chamber was emplaced at mid-crustal levels, at about 0.5 GPa and 1100 degrees C. The Mont Collon cumulitic rocks record little magmatic differentiation, as illustrated by the restricted range of clinopyroxene mg-number (Mg#(cpx) = 83-89). Whole-rock incompatible trace-element contents (e.g. Nb, Zr, Ba) vary widely and without correlation with major-element composition. These features are characteristic of an in-situ crystallization process with variable amounts of interstitial liquid L trapped between the cumulus mineral phases. LA-ICPMS measurements show that the trace-element distribution in the latter is homogeneous, pointing to subsolidus re-equilibration between crystals and interstitial melts. A quantitative model based on Langmuir's in-situ crystallization equation successfully reproduced the REE concentrations in cumulitic minerals of all rock facies of the intrusion. The calculated amounts of interstitial liquid L vary between 0 and 35% for degrees of differentiation F of 0 to 20%, relative to the least evolved facies of the intrusion. L values correlate well with the modal proportions of interstitial amphibole and the whole-rock incompatible trace-element concentrations (e.g. Zr, Nb) of the tested samples. However, the in-situ crystallization model reaches its limitations with rocks containing a high modal content of REE-bearing minerals (i.e. zircon), such as the pegmatitic gabbros. Dikes of anorthositic composition, locally crosscutting the layered lithologies, evidence that the Mont Collon rocks evolved in an open system, with mixing of intercumulus liquids of different origins and possibly contrasting compositions. The proposed model cannot resolve these complex open systems, but migrating liquids could be partly responsible for the observed dispersion of points in some correlation diagrams. The absence of significant differentiation, with recurrent lithologies in the cumulate pile of Dents de Bertol, points to an efficiently convective magma chamber with possible periodic replenishment. (c) 2005 Elsevier B.V. All rights reserved.
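The key quantitative idea is that a trapped fraction of interstitial liquid pulls the effective bulk partition coefficient of the cumulate toward 1, enriching it in incompatible elements. The sketch below illustrates this with the simpler Rayleigh-type mass balance D_eff = D(1 - L) + L, not Langmuir's full in-situ crystallization equation used in the paper; the values of D, F, and L are illustrative assumptions.

```python
def rayleigh_liquid(F, D):
    """Residual-liquid concentration C_L/C_0 after crystallizing a mass fraction F
    with bulk solid/liquid partition coefficient D (Rayleigh fractionation)."""
    return (1.0 - F) ** (D - 1.0)

def cumulate_with_trapped_liquid(F, D, L):
    """Bulk cumulate concentration (relative to C_0) when a mass fraction L of
    interstitial liquid is trapped between the cumulus crystals: the trapped
    liquid shifts the effective partition coefficient toward 1."""
    D_eff = D * (1.0 - L) + L
    return D_eff * rayleigh_liquid(F, D)

# Illustrative incompatible trace element (D = 0.05) at 20% differentiation,
# for trapped-liquid fractions spanning the 0-35% range reported in the paper.
for L in (0.0, 0.15, 0.35):
    c = cumulate_with_trapped_liquid(F=0.20, D=0.05, L=L)
    print(f"L = {L:4.2f}: bulk cumulate C/C0 = {c:5.3f}")
```

Even this simplified balance reproduces the qualitative observation: incompatible-element contents of the cumulates vary strongly with L while the cumulus mineral chemistry (controlled by D) barely changes.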

Relevance:

30.00%

Abstract:

This paper uses an infinite hidden Markov model (IHMM) to analyze U.S. inflation dynamics, with a particular focus on the persistence of inflation. The IHMM is a Bayesian nonparametric approach to modeling structural breaks: it allows for an unknown number of breakpoints and is a flexible and attractive alternative to existing methods. We find a clear structural break during the recent financial crisis. Prior to that, inflation persistence was high and fairly constant.
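The model class can be made concrete with a small forward simulation: inflation follows an AR(1) whose mean and persistence switch with a latent regime, and the number of regimes is left open by drawing each row of the transition matrix from a truncated stick-breaking (Dirichlet-process-style) prior. This is a toy generative sketch of an IHMM-like process, not the paper's estimation procedure; all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, K):
    """Truncated stick-breaking weights: a finite stand-in for the IHMM's
    unbounded state space."""
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w / w.sum()

K = 20                                                     # truncation level (assumption)
P = np.vstack([stick_breaking(2.0, K) for _ in range(K)])  # row-stochastic transitions
mu = rng.normal(2.0, 1.0, K)                               # regime-specific mean inflation
rho = rng.uniform(0.0, 0.95, K)                            # regime-specific persistence

T, s, y, states = 300, 0, [2.0], [0]
for t in range(1, T):
    s = rng.choice(K, p=P[s])                              # latent regime path
    states.append(s)
    y.append(mu[s] * (1 - rho[s]) + rho[s] * y[-1] + rng.normal(0.0, 0.5))

print("regimes actually visited:", len(set(states)))
```

Only a handful of the potentially infinite regimes are ever visited, which is exactly the property that lets the IHMM infer the number of breakpoints from the data.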

Relevance:

30.00%

Abstract:

PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use widened to address different tectonic-geomorphic problems. This paper describes several major recent improvements to the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future. (C) 2012 Elsevier B.V. All rights reserved.
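To give a feel for the forward problem PECUBE solves, the sketch below integrates a 1-D version of the heat diffusion-advection equation for a rock column exhumed at a constant rate, then reads a thermochronometric age off the depth at which a surface particle crossed an assumed closure temperature. It is a deliberately minimal stand-in for the 3-D code: the grid, rates, geotherm, and closure temperature are all illustrative assumptions.

```python
import numpy as np

# 1-D exhumation (z positive down): dT/dt = kappa * d2T/dz2 + v * dT/dz
kappa = 25.0              # thermal diffusivity [km^2/Myr] (~1e-6 m^2/s)
v = 0.5                   # exhumation rate [km/Myr] (assumption)
L, nz = 30.0, 301         # model depth [km] and grid size
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
T = z * 20.0              # initial geotherm: 20 C/km (assumption)
dt = 0.4 * dz**2 / kappa  # explicit stability limit

t, t_end = 0.0, 50.0
while t < t_end:
    d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    dTdz = (T[2:] - T[:-2]) / (2 * dz)
    T[1:-1] += dt * (kappa * d2T + v * dTdz)   # advection of heat by upward-moving rock
    T[0], T[-1] = 0.0, 600.0                   # fixed surface and basal temperatures
    t += dt

# Age prediction: a particle at the surface today was at depth v*age when it
# cooled through the closure temperature Tc (apatite-He-like value, assumption).
Tc = 70.0
z_c = np.interp(Tc, T, z)                      # depth of the Tc isotherm
print(f"closure depth ~{z_c:.1f} km -> predicted age ~{z_c / v:.1f} Myr")
```

The full code does the same bookkeeping in 3-D under an evolving topographic boundary condition, which is why relief change leaves a measurable signature in the predicted ages.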

Relevance:

30.00%

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. The traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of the variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, to be interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluation, to a certain extent, of model-selection uncertainty, which is seldom mentioned in current practice.
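The mechanics of the Burnham-Anderson approach are compact enough to sketch: fit each a-priori model, convert AIC differences into Akaike weights, and use the weights for model-averaged prediction. The sketch below does this on simulated data; the predictors, candidate models, and numbers are illustrative assumptions, not the Lausanne dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                    # e.g. task, ventilation, year (assumed)
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)

model_sets = [[0], [1], [0, 1], [0, 1, 2]]     # candidate models defined a priori
x_new = np.array([0.5, -1.0, 0.2])             # a hypothetical new exposure scenario
aic, preds = [], []
for cols in model_sets:
    fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    aic.append(fit.aic)
    row = np.concatenate(([1.0], x_new[cols]))
    preds.append(float(fit.predict(row[None, :])[0]))

delta = np.array(aic) - min(aic)               # AIC differences
w = np.exp(-0.5 * delta)
w /= w.sum()                                   # Akaike weights

print("Akaike weights:", np.round(w, 3))
print("model-averaged prediction:", float(np.dot(w, preds)))
```

The same weights can be summed over all models containing a given predictor to estimate that predictor's relative influence, which is how the approach ranks determinants.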

Relevance:

30.00%

Abstract:

MOTIVATION: In silico modeling of gene regulatory networks has gained momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although much work has been done on the identification of steady states, little work has been reported on the in silico modeling of cellular differentiation processes. RESULTS: In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for the Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models are proposed and their corresponding computational properties analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible with existing software. We thereby provide a framework to analyze the effect of multiple gene-perturbation protocols and their effect on cell differentiation processes. The algorithms were validated on the T-helper model, showing correct steady-state identification and the Th1-Th2 cellular differentiation process. AVAILABILITY: The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
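The underlying task, finding the attractors of a synchronous Boolean network, is easy to show on a toy example. The sketch below enumerates the state space explicitly; the paper's contribution is doing this symbolically with ROBDDs so that large networks become tractable, and the three-gene rules here are illustrative assumptions, not the T-helper model.

```python
from itertools import product

# Toy 3-gene synchronous Boolean network (illustrative rules).
def step(state):
    a, b, c = state
    return (b and not c,   # A is activated by B and repressed by C
            a,             # B follows A
            a or c)        # C is self-sustaining once A has fired

def attractors(n_genes=3):
    """Follow every trajectory until a state repeats; the repeated segment is a
    steady state or cyclic attractor. The paper achieves this symbolically with
    ROBDDs; explicit enumeration only works for small networks like this one."""
    found = set()
    for s0 in product([False, True], repeat=n_genes):
        seen, s = {}, s0
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]   # states from the first revisit onward form the cycle
        cycle = tuple(sorted(st for st, i in seen.items() if i >= cycle_start))
        found.add(cycle)
    return found

for att in attractors():
    kind = "steady state" if len(att) == 1 else f"cyclic attractor (length {len(att)})"
    print(kind, att)
```

Gene perturbations are then modeled by clamping a variable to a fixed value in `step` and recomputing the attractors, which is the protocol analysis the abstract refers to.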

Relevance:

30.00%

Abstract:

Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inference about population parameters. When the missing-data process relates to the trait of interest, valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls Tyto alba, our model indicates that the missing individuals would display large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
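The bias mechanism is easy to demonstrate by simulation: generate phenotypes with an additive genetic component, delete records with a probability that depends (via a logistic link) on the same genetic effect, and compare the naive statistics of the observed records with the truth. This is a toy demonstration of the selection problem, not the paper's animal-model/INLA machinery; all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
sigma_a2, sigma_e2 = 1.0, 1.0              # true additive genetic and residual variances
a = rng.normal(0.0, np.sqrt(sigma_a2), n)  # additive genetic effects
y = a + rng.normal(0.0, np.sqrt(sigma_e2), n)

# Non-random missingness: individuals with low genetic values are more likely
# to go missing (e.g. die) before the trait can be measured.
p_obs = 1.0 / (1.0 + np.exp(-2.0 * a))     # logistic in the genetic effect
observed = rng.random(n) < p_obs

print("true phenotypic variance       :", sigma_a2 + sigma_e2)
print("variance among observed records:", round(float(y[observed].var()), 3))
print("mean genetic value of observed :", round(float(a[observed].mean()), 3))
```

The observed records are both shifted and compressed, which is why a model that ignores the missingness process underestimates the additive genetic variance.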

Relevance:

30.00%

Abstract:

Despite their limited proliferation capacity, regulatory T cells (T(regs)) constitute a population maintained over the entire lifetime of a human organism. The means by which T(regs) sustain a stable pool in vivo are controversial. Using a mathematical model, we address this issue by evaluating several biological scenarios of the origins and the proliferation capacity of two subsets of T(regs): precursor CD4(+)CD25(+)CD45RO(-) and mature CD4(+)CD25(+)CD45RO(+) cells. The lifelong dynamics of T(regs) are described by a set of ordinary differential equations, driven by a stochastic process representing the major immune reactions involving these cells. The model dynamics are validated using data from human donors of different ages. Analysis of the data led to the identification of two properties of the dynamics: (1) the equilibrium in the CD4(+)CD25(+)FoxP3(+) T(regs) population is maintained over both precursor and mature T(regs) pools together, and (2) the ratio between precursor and mature T(regs) is inverted in the early years of adulthood. Then, using the model, we identified three biologically relevant scenarios that have the above properties: (1) the sole source of mature T(regs) is the antigen-driven differentiation of precursors that acquire the mature profile in the periphery, and the proliferation of T(regs) is essential for the development and the maintenance of the pool; or there exist other sources of mature T(regs), such as (2) a homeostatic density-dependent regulation or (3) thymus- or effector-derived T(regs), in which cases antigen-induced proliferation is not necessary for the development of a stable pool of T(regs). This is the first time that a mathematical model built to describe the in vivo dynamics of regulatory T cells has been validated using human data. The application of this model provides an invaluable tool for estimating the number of regulatory T cells as a function of time in the blood of patients who have received a solid organ transplant or are suffering from an autoimmune disease.
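The backbone of such a model is a pair of coupled ODEs for the precursor and mature pools. The sketch below integrates one plausible variant, with thymic supply, antigen-driven differentiation, and density-dependent proliferation of mature cells; the terms and rate constants are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def treg_dynamics(t, y, s=10.0, d=0.05, mu_p=0.02, mu_m=0.03, p=0.1, K=500.0):
    """Precursor (P: CD45RO-) and mature (M: CD45RO+) Treg pools.
    s     : thymic supply of precursors [cells/day]
    d     : antigen-driven differentiation rate P -> M
    mu_p  : precursor death rate; mu_m: mature death rate
    p, K  : density-dependent (logistic) proliferation of mature cells
    All values are illustrative assumptions."""
    P, M = y
    dP = s - d * P - mu_p * P
    dM = d * P + p * M * (1.0 - M / K) - mu_m * M
    return [dP, dM]

sol = solve_ivp(treg_dynamics, (0.0, 80 * 365.0), [50.0, 10.0], dense_output=True)
ages_days = np.array([5, 20, 40, 70]) * 365.0
for age, (P, M) in zip(ages_days / 365.0, sol.sol(ages_days).T):
    print(f"age {age:4.0f} y: precursor/mature ratio = {P / M:.2f}")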

Relevance:

30.00%

Abstract:

This paper presents a two-factor (Vasicek-CIR) model of the term structure of interest rates and develops its pricing and empirical properties. We assume that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread. Assuming a certain process for both factors, a general bond pricing equation is derived and a closed-form expression for bond prices is obtained. Empirical evidence on the model's performance in comparison with a double Vasicek model is presented. The main conclusion is that modeling the volatility in the long-term rate process helps considerably to fit the observed data and improves, to a reasonable extent, the prediction of future movements in the medium- and long-term interest rates. However, for shorter maturities the pricing errors are basically negligible, and it is not so clear which model is best.
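Because both factors are affine, a closed-form discount bond price can be built from the two standard one-factor formulas. The sketch below implements the textbook Vasicek and CIR bond prices and combines them as a product, which is valid under the simplifying assumption of independent factors; the parameter values, and the exact way the paper combines the factors, are assumptions here.

```python
import numpy as np

def vasicek_bond(r, tau, a, theta, sigma):
    """Vasicek zero-coupon bond price P = exp(A - B*r) (standard closed form)."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = (B - tau) * (theta - sigma**2 / (2.0 * a**2)) - sigma**2 * B**2 / (4.0 * a)
    return np.exp(A - B * r)

def cir_bond(r, tau, a, theta, sigma):
    """CIR zero-coupon bond price P = A * exp(-B*r) (standard closed form)."""
    g = np.sqrt(a**2 + 2.0 * sigma**2)
    denom = (g + a) * (np.exp(g * tau) - 1.0) + 2.0 * g
    B = 2.0 * (np.exp(g * tau) - 1.0) / denom
    A = (2.0 * g * np.exp((a + g) * tau / 2.0) / denom) ** (2.0 * a * theta / sigma**2)
    return A * np.exp(-B * r)

# Two-factor price as the product of the factor prices (independence assumed;
# illustrative parameters and state values).
tau = np.array([1.0, 5.0, 10.0])
P = vasicek_bond(0.04, tau, a=0.2, theta=0.05, sigma=0.01) \
    * cir_bond(0.01, tau, a=0.5, theta=0.015, sigma=0.08)
print("zero-coupon yields:", -np.log(P) / tau)
```

Using a CIR process for the second factor is what lets the spread's volatility depend on its level, which is the feature the empirical comparison credits for the improved fit.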

Relevance:

30.00%

Abstract:

A remarkable feature of the carcinogenicity of inorganic arsenic is that while human exposures to high concentrations of inorganic arsenic in drinking water are associated with increases in skin, lung, and bladder cancer, inorganic arsenic has not typically caused tumors in standard laboratory animal test protocols. Inorganic arsenic administered for periods of up to 2 yr to various strains of laboratory mice, including the Swiss CD-1, Swiss CR:NIH(S), C57Bl/6p53(+/-), and C57Bl/6p53(+/+), has not resulted in significant increases in tumor incidence. However, Ng et al. (1999) have reported a 40% tumor incidence in C57Bl/6J mice exposed to arsenic in their drinking water throughout their lifetime, with no tumors reported in controls. In order to investigate the potential role of tissue dosimetry in differential susceptibility to arsenic carcinogenicity, a physiologically based pharmacokinetic (PBPK) model for inorganic arsenic in the rat, hamster, monkey, and human (Mann et al., 1996a, 1996b) was extended to describe the kinetics in the mouse. The PBPK model was parameterized in the mouse using published data from acute exposures of B6C3F1 mice to arsenate, arsenite, monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA) and validated using data from acute exposures of C57Black mice. Predictions of the acute model were then compared with data from chronic exposures. There was no evidence of changes in the apparent volume of distribution or in the tissue-plasma concentration ratios between acute and chronic exposure that might support the possibility of inducible arsenite efflux. The PBPK model was also used to project tissue dosimetry in the C57Bl/6J study, in comparison with tissue levels in studies having shorter duration but higher arsenic treatment concentrations. The model evaluation indicates that pharmacokinetic factors do not provide an explanation for the difference in outcomes across the various mouse bioassays. Other possible explanations may relate to strain-specific differences, or to the different durations of dosing in each of the mouse studies, given the evidence that inorganic arsenic is likely to be active in the later stages of the carcinogenic process. [Authors]
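A PBPK model of this type is a set of mass-balance ODEs over physiological compartments. The sketch below shows the generic flow-limited form for a two-tissue example (liver plus a lumped rest-of-body) with first-order methylation in the liver; the compartments, flows, and rate constants are illustrative assumptions, not the Mann et al. parameterization extended in the paper.

```python
from scipy.integrate import solve_ivp

# Flow-limited PBPK: dA_t/dt = Q_t * (C_art - C_t / P_t), with C_t = A_t / V_t.
pars = dict(
    Q_c=8.0,                    # cardiac output [L/h], mouse-scaled example (assumed)
    Q_liv=0.09, Q_rest=0.91,    # fractions of cardiac output (assumed)
    V_blood=0.0017, V_liv=0.0015, V_rest=0.023,  # volumes [L] (assumed)
    P_liv=5.0, P_rest=2.0,      # tissue:blood partition coefficients (assumed)
    k_met=0.5,                  # first-order methylation in liver [1/h] (assumed)
)

def pbpk(t, y, p=pars):
    A_bl, A_liv, A_rest = y                       # amounts in each compartment
    C_art = A_bl / p["V_blood"]
    C_liv, C_rest = A_liv / p["V_liv"], A_rest / p["V_rest"]
    q_liv, q_rest = p["Q_c"] * p["Q_liv"], p["Q_c"] * p["Q_rest"]
    dA_liv = q_liv * (C_art - C_liv / p["P_liv"]) - p["k_met"] * A_liv
    dA_rest = q_rest * (C_art - C_rest / p["P_rest"])
    dA_bl = -(dA_liv + p["k_met"] * A_liv) - dA_rest  # venous return closes the balance
    return [dA_bl, dA_liv, dA_rest]

sol = solve_ivp(pbpk, (0.0, 24.0), [1.0, 0.0, 0.0], rtol=1e-8)
print("liver:blood concentration ratio at 24 h:",
      (sol.y[1, -1] / pars["V_liv"]) / (sol.y[0, -1] / pars["V_blood"]))
```

Comparing such tissue-to-blood ratios between acute and chronic dosing is exactly the kind of check the paper uses to test for inducible arsenite efflux.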

Relevance:

30.00%

Abstract:

This paper presents a thermal modeling for power management of a new three-dimensional (3-D) thinned-die stacking process. Besides the high concentration of power-dissipating sources, which is the direct consequence of the very attractive increase in integration efficiency, this ultra-compact packaging technology can suffer from the poor thermal conductivity (about 700 times smaller than that of silicon) of the benzocyclobutene (BCB) used as both adhesive and planarization layers in each level of the stack. Thermal simulation was conducted using a three-dimensional (3-D) FEM tool to analyze the specific behaviors of such stacked structures and to optimize the design rules. This study first describes the heat-transfer limitation along the vertical path, examining in particular the case of high-dissipation sources over a small area. First results of characterization in the transient regime, by means of a dedicated test device mounted in a single-level structure, are presented. For the design optimization, the thermal draining capabilities of a copper grid or full copper plate embedded in the intermediate layer of the stacked structure are evaluated as a function of the technological parameters and the physical properties. The results show the benefit of transverse heat extraction under the buffer devices, which dissipate most of the power and are generally located in the peripheral zone, and of temperature uniformization, by a heat-spreading mechanism, in localized regions where the attachment of the thin die is degraded. Finally, all conclusions of this analysis are used for quantitative projections of the thermal performance of a first demonstrator based on a three-level stacking structure for a space application.
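The vertical heat-path limitation can be estimated back-of-the-envelope with a series thermal-resistance model, R = t/(kA) per layer. The sketch below stacks thinned silicon dies and BCB layers under a small hot spot; the thicknesses, footprint, power, and conductivities (k_BCB roughly 700x below silicon, consistent with the text) are illustrative assumptions.

```python
# Series 1-D thermal resistances for a three-level die stack (illustrative numbers).
K_SI, K_BCB = 150.0, 0.2         # W/(m K); ratio ~750, consistent with "about 700x"

area = (500e-6) ** 2             # 500 um x 500 um hot-spot footprint (assumption)
layers = [                       # (name, thickness [m], conductivity [W/(m K)])
    ("die 3 (thinned Si)",       20e-6,  K_SI),
    ("BCB glue/planarization",   5e-6,   K_BCB),
    ("die 2 (thinned Si)",       20e-6,  K_SI),
    ("BCB glue/planarization",   5e-6,   K_BCB),
    ("die 1 (base Si)",          150e-6, K_SI),
]

power = 0.5                      # W dissipated in the hot spot (assumption)
total = 0.0
for name, t, k in layers:
    r = t / (k * area)           # R_th = thickness / (conductivity * area)
    total += r
    print(f"{name:26s} R = {r:8.1f} K/W")
print(f"total R = {total:.1f} K/W -> dT = {power * total:.1f} K at {power} W")
```

Even this crude model (it ignores lateral spreading) shows the BCB layers dominating the vertical resistance, which is why the paper evaluates embedded copper grids and plates for transverse extraction.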

Relevance:

30.00%

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, with the CREAM cognitive behavior model used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint-enrichment hierarchies, which can be used for simulation and analysis optimization within a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management: it may significantly reduce the duration of the assessment process and, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
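At its core, the Petri net formalism used here reduces to places, transitions, and a firing rule. The sketch below implements a minimal token-game simulator for two machines connected by a buffer place; it illustrates the mechanics only and is far simpler than CO-OPN, which additionally attaches data types and measure constraints to the flowing entities (all names and capacities here are assumptions).

```python
# Minimal Petri net token game: two machines interconnected by a buffer place.
marking = {"raw": 5, "m1_free": 1, "buffer": 0, "m2_free": 1, "done": 0}

transitions = {
    # name: (tokens consumed, tokens produced)
    "m1_process": ({"raw": 1, "m1_free": 1}, {"buffer": 1, "m1_free": 1}),
    "m2_process": ({"buffer": 1, "m2_free": 1}, {"done": 1, "m2_free": 1}),
}

def enabled(name):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in transitions[name][0].items())

def fire(name):
    """Firing consumes input tokens and produces output tokens atomically."""
    consumed, produced = transitions[name]
    for p, n in consumed.items():
        marking[p] -= n
    for p, n in produced.items():
        marking[p] += n

# Run the net until no transition is enabled (completion or deadlock).
while any(enabled(t) for t in transitions):
    for t in transitions:
        if enabled(t):
            fire(t)
print("final marking:", marking)
```

In the MORM model, the machine subnets play the role of these transitions and places, man-machine interactions supply the triggering events, and the interesting analyses (reachability, deadlock, hazard states) are run over the composed net.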