904 results for modeling of arrival processes
Abstract:
The tensile deformation behavior of a range of supersaturated Mg-Al solid solutions and an as-cast magnesium alloy AM60 has been studied. The Mg-Al alloys were tested at room temperature while the alloy AM60 was tested in the temperature range 293-573 K. The differences in the deformation behavior of the alloys are discussed in terms of hardening and softening processes. In order to identify which processes were active, the stress dependence of the strain-hardening coefficient was assessed using Lukac and Balik's model of hardening and softening. The analysis indicates that hardening involves solid solution hardening and interaction with forest dislocations and non-dislocation obstacles such as second phase particles. Cross slip is not a significant recovery process in the temperature range 293-423 K. At temperatures between 473 and 523 K the analysis suggests that softening is controlled by cross slip and climb of dislocations. At temperatures above 523 K softening seems to be controlled by dynamic recrystallisation. (C) 2004 Elsevier B.V. All rights reserved.
Experimental Modeling of Twin-Screw Extrusion Processes to Predict Properties of Extruded Composites
Abstract:
Twin-screw extrusion is used to compound fillers into a polymer matrix in order to improve the properties of the final product. The resultant properties of the composite are determined by the operating conditions used during extrusion processing. Changes in the operating conditions affect the physics of the melt flow, inducing unique composite properties. In the following work, the Residence Stress Distribution methodology has been applied to model both the stress behavior and the property response of a twin-screw compounding process as a function of the operating conditions. The compounding of a pigment into a polymer melt has been investigated to determine the effect of stress on the degree of mixing, which will affect the properties of the composite. In addition, the pharmaceutical properties resulting from the compounding of an active pharmaceutical ingredient are modeled as a function of the operating conditions, indicating the physical behavior inducing the property responses.
Abstract:
During the transition from low tide to high tide, an exactly inverse phenomenon occurs and the high-tide delta is formed upstream of the mouth. Increasing the tidal range does not affect the nature of this phenomenon; it only changes its intensity. In this situation, the inlet reaches balance over time. A new relationship between the equilibrium cross section and the tidal prism, for different tidal levels as well as sediment gradings, has been provided, and its results correspond with the results of the numerical modeling. In the combined state, the wave height significantly affects the current and sedimentary patterns, such that the dimensionless wave-height index (Hw/Ht) determines the dominant parameter (the short-period wave or the tide) in the inlet. It is notable that in this state, too, the inlet reaches balance over time. To calculate the sedimentary phenomena of the combined state, the contributions under wave-only and tide-only conditions are each determined individually and then added; the estimated values are similar to the numerical modeling results of the combined state when nonlinear terms are considered. It is also clear that the performance of the wave and tide is directly related to the water level. A change in water level shifts the position of the breaker line and of the sedimentary active area; it changes the current and sedimentary patterns coastward while changing nothing seaward. Based on the modeling results for sediment transport due to the wave, the tide and their combination, the erosion at the mouth due to the wave alone is less than that due to the wave-tide combination. In these situations, the tide and the wave-tide combination increase the low-tide and high-tide delta volumes, respectively. Hence, the tide plays an effective role in changing sedimentary phenomena at the channel and downstream of the mouth, whereas short-period and combined waves play a crucial role in varying the morphology and sediment transport coastward.
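The abstract does not state the new cross-section/tidal-prism relationship explicitly. As a point of reference, the classical O'Brien-type power law between equilibrium inlet cross-sectional area and tidal prism can be sketched as follows; the coefficient and exponent are the commonly cited metric values, not those derived in the study above:

```python
def equilibrium_cross_section(tidal_prism_m3, coeff=4.69e-4, exponent=0.85):
    """Classical O'Brien-type power law A = C * P**n relating the
    equilibrium inlet cross-sectional area A (m^2) to the tidal
    prism P (m^3). The default coefficients are the commonly cited
    metric values for sandy inlets, used here only for illustration."""
    return coeff * tidal_prism_m3 ** exponent

# A larger tidal prism implies a larger equilibrium cross section.
area = equilibrium_cross_section(1.0e8)
```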
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
Abstract:
Unstable density-driven flow can lead to enhanced solute transport in groundwater. Only recently has the complex fingering pattern associated with free convection been documented in field settings. Electrical resistivity (ER) tomography has been used to capture a snapshot of convective instabilities at a single point in time, but a thorough transient analysis is still lacking in the literature. We present the results of a 2 year experimental study at a shallow aquifer in the United Arab Emirates that was designed to specifically explore the transient nature of free convection. ER tomography data documented the presence of convective fingers following a significant rainfall event. We demonstrate that the complex fingering pattern had completely disappeared a year after the rainfall event. The observation is supported by an analysis of the aquifer halite budget and hydrodynamic modeling of the transient character of the fingering instabilities. Modeling results show that the transient dynamics of the gravitational instabilities (their initial development, infiltration into the underlying lower-density groundwater, and subsequent decay) are in agreement with the timing observed in the time-lapse ER measurements. All experimental observations and modeling results are consistent with the hypothesis that a dense brine that infiltrated into the aquifer from a surficial source was the cause of free convection at this site, and that the finite nature of the dense brine source and dispersive mixing led to the decay of instabilities with time. This study highlights the importance of the transience of free convection phenomena and suggests that these processes are more rapid than was previously understood.
Abstract:
Conceptual modeling is an important tool for understanding and revealing weaknesses of business processes. Yet, current practice in reengineering projects often treats the as-is process model simply as a brainstorming tool. This approach relies heavily on the intuition of the participants and lacks a clear description of the quality requirements. Against this background, we identify four generic categories of business process quality and populate them with quality requirements from related research. We refer to the resulting framework as the Quality of Business Process (QoBP) framework. Furthermore, we present the findings from applying the QoBP framework in a case study with a major Australian bank, showing that it helps to systematically fill the white space between as-is and to-be process modeling.
Abstract:
This study tested the utility of a stress and coping model of employee adjustment to a merger. Two hundred and twenty employees completed both questionnaires (Time 1: 3 months after merger implementation; Time 2: 2 years later). Structural equation modeling analyses revealed that positive event characteristics predicted greater appraisals of self-efficacy and less stress at Time 1. Self-efficacy, in turn, predicted greater use of problem-focused coping at Time 2, whereas stress predicted a greater use of problem-focused and avoidance coping. Finally, problem-focused coping predicted higher levels of job satisfaction and identification with the merged organization (Time 2), whereas avoidance coping predicted lower identification.
Abstract:
Over the last 30 years, numerous research groups have attempted to provide mathematical descriptions of the skin wound healing process. The development of theoretical models of the interlinked processes that underlie the healing mechanism has yielded considerable insight into aspects of this critical phenomenon that remain difficult to investigate empirically. In particular, the mathematical modeling of angiogenesis, i.e., capillary sprout growth, has offered new paradigms for the understanding of this highly complex and crucial step in the healing pathway. With the recent advances in imaging and cell tracking, the time is now ripe for an appraisal of the utility and importance of mathematical modeling in wound healing angiogenesis research. The purpose of this review is to pedagogically elucidate the conceptual principles that have underpinned the development of mathematical descriptions of wound healing angiogenesis, specifically those that have utilized a continuum reaction-transport framework, and highlight the contribution that such models have made toward the advancement of research in this field. We aim to draw attention to the common assumptions made when developing models of this nature, thereby bringing into focus the advantages and limitations of this approach. A deeper integration of mathematical modeling techniques into the practice of wound healing angiogenesis research promises new perspectives for advancing our knowledge in this area. To this end we detail several open problems related to the understanding of wound healing angiogenesis, and outline how these issues could be addressed through closer cross-disciplinary collaboration.
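As a minimal illustration of the continuum reaction-transport framework the review discusses, the sketch below advances a single-species density c(x, t) obeying dc/dt = D d2c/dx2 + f(c) by one explicit finite-difference step. The logistic source term and all parameter names are illustrative, not taken from any particular wound-healing model:

```python
def step_reaction_diffusion(c, dt, dx, D, growth):
    """One explicit finite-difference step of a single-species
    continuum reaction-transport equation dc/dt = D d2c/dx2 + growth(c),
    with zero-flux (mirrored) boundaries. Needs at least 3 grid points.
    Illustrative only; not a specific wound-healing angiogenesis model."""
    new = c[:]
    for i in range(len(c)):
        left = c[i - 1] if i > 0 else c[1]
        right = c[i + 1] if i < len(c) - 1 else c[-2]
        lap = (left - 2.0 * c[i] + right) / dx ** 2
        new[i] = c[i] + dt * (D * lap + growth(c[i]))
    return new

# Logistic proliferation, a common choice of source term in such models.
logistic = lambda v: 0.5 * v * (1.0 - v)
```

A uniform profile at carrying capacity is a steady state of this scheme, while a localized seed of cells spreads into its neighbors, the qualitative behavior of a sprouting front.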
Abstract:
A new method of modeling material behavior which accounts for the dynamic metallurgical processes occurring during hot deformation is presented. The approach in this method is to consider the workpiece as a dissipator of power in the total processing system and to evaluate the dissipated power co-content J = ∫₀^σ ε̇ dσ from the constitutive equation relating the strain rate (ε̇) to the flow stress (σ). The optimum processing conditions of temperature and strain rate are those corresponding to the maximum or peak in J. It is shown that J is related to the strain-rate sensitivity (m) of the material and reaches a maximum value (J_max) when m = 1. The efficiency of the power dissipation (J/J_max) through metallurgical processes is shown to be an index of the dynamic behavior of the material and is useful in obtaining a unique combination of temperature and strain rate for processing and also in delineating the regions of internal fracture. In this method of modeling, no a priori knowledge or evaluation of the atomistic mechanisms is required, and the method is effective even when more than one dissipation process occurs, which is particularly advantageous in the hot processing of commercial alloys having complex microstructures. This method has been applied to modeling of the behavior of Ti-6242 during hot forging. The behavior of α + β and β preform microstructures has been examined, and the results show that the optimum condition for hot forging of these preforms is obtained at 927 °C (1200 K) and a strain rate of 10⁻³ s⁻¹. Variations in the efficiency of dissipation with temperature and strain rate are correlated with the dynamic microstructural changes occurring in the material.
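For a power-law material σ = K·ε̇^m, the co-content and dissipation efficiency described above reduce to standard closed forms, J = σ·ε̇·m/(m+1) and J/J_max = 2m/(m+1). A small sketch of these standard processing-map relations (function names are illustrative):

```python
def j_cocontent(sigma, eps_dot, m):
    """Power-dissipation co-content J = integral_0^sigma eps_dot d(sigma).
    For a power-law material sigma = K * eps_dot**m this integrates to
    J = sigma * eps_dot * m / (m + 1)."""
    return sigma * eps_dot * m / (m + 1.0)

def dissipation_efficiency(m):
    """Efficiency eta = J / J_max = 2m / (m + 1), where
    J_max = sigma * eps_dot / 2 is attained at m = 1
    (an ideal linear dissipator)."""
    return 2.0 * m / (m + 1.0)
```

The efficiency increases monotonically with the strain-rate sensitivity and equals 1 exactly at m = 1, consistent with J reaching J_max there.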
Abstract:
Analytical models of IEEE 802.11-based WLANs are invariably based on approximations, such as the well-known mean-field approximations proposed by Bianchi for saturated nodes. In this paper, we provide a new approach for modeling the situation when the nodes are not saturated. We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the CSMA/CA protocol as standardized in the IEEE 802.11 DCF. The approximation is that, when n of the M queues are non-empty, the attempt probability of the n non-empty nodes is given by the long-term attempt probability of n saturated nodes as provided by Bianchi's model. This yields a coupled queue system. When packets arrive to the M queues according to independent Poisson processes, we provide an exact model for the coupled queue system with SDAR service. The main contribution of this paper is to provide an analysis of the coupled queue process by studying a lower dimensional process and by introducing a certain conditional independence approximation. We show that the numerical results obtained from our finite buffer analysis are in excellent agreement with the corresponding results obtained from ns-2 simulations. We replace the CSMA/CA protocol as implemented in the ns-2 simulator with the SDAR service model to show that the SDAR approximation provides an accurate model for the CSMA/CA protocol. We also report the simulation speed-ups thus obtained by our model-based simulation.
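The SDAR approximation described above relies on the long-term attempt probability of n saturated nodes from Bianchi's model. That quantity can be computed from Bianchi's well-known fixed point, sketched below; the CWmin and backoff-stage defaults are illustrative 802.11 values, not parameters taken from this paper:

```python
def bianchi_attempt_prob(n, W=32, m=5):
    """Per-slot attempt probability tau of n saturated IEEE 802.11 DCF
    nodes, from Bianchi's mean-field fixed point:
        tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1 - (2p)**m))
        p   = 1 - (1 - tau)**(n-1)
    W is CWmin, m the maximum backoff stage. Solved by bisection on
    the collision probability p."""
    def tau_of(p):
        return 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
    if n == 1:
        return tau_of(0.0)  # a lone node never collides
    lo, hi = 0.0, 0.4999
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # Root of F(p) = p - (1 - (1 - tau(p))**(n-1)) is the fixed point.
        if mid - (1.0 - (1.0 - tau_of(mid)) ** (n - 1)) < 0.0:
            lo = mid
        else:
            hi = mid
    return tau_of(0.5 * (lo + hi))
```

Under the SDAR approximation, the value returned for n saturated nodes is used as the attempt probability whenever n of the M queues are non-empty.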
Abstract:
Solidification processes are complex in nature, involving multiple phases and several length scales. The properties of solidified products are dictated by the microstructure, the macrostructure, and various defects present in the casting. These, in turn, are governed by the multiphase transport phenomena occurring at different length scales. In order to control and improve the quality of cast products, it is important to have a thorough understanding of the various physical and physicochemical phenomena occurring at various length scales, preferably through predictive models and controlled experiments. In this context, the modeling of transport phenomena during alloy solidification has evolved over the last few decades due to the complex multiscale nature of the problem. Despite this, a model accounting for all the important length scales directly is computationally prohibitive. Thus, in the past, single-phase continuum models have often been employed with respect to a single length scale to model solidification processing. However, continuous development in understanding the physics of solidification at various length scales on one hand and the phenomenal growth of computational power on the other have allowed researchers to use increasingly complex multiphase/multiscale models in recent times. These models have allowed greater understanding of the coupled micro/macro nature of the process and have made it possible to predict solute segregation and microstructure evolution at different length scales. In this paper, a brief overview of the current status of modeling of convection and macrosegregation in alloy solidification processing is presented.
Abstract:
A mathematical model is developed to simulate oxygen consumption, heat generation and cell growth in solid state fermentation (SSF). Fungal growth on the solid substrate particles increases the thickness of the cell film around the particles. The model incorporates this increase in biofilm size, which leads to a decrease in the porosity of the substrate bed and in the diffusivity of oxygen in the bed. The model also takes into account the effect of steric hindrance limitations in SSF. The growth of cells around a single particle, and the resulting expansion of the biofilm around it, is analyzed for simplified zero and first order oxygen consumption kinetics. Under conditions of zero order kinetics, the model predicts an upper limit on cell density. The model simulations for a packed bed of solid particles in a tray bioreactor show distinct limitations on growth due to the simultaneous heat and mass transport phenomena accompanying the solid state fermentation process. The extent of limitation due to heat and/or mass transport phenomena is analyzed during different stages of fermentation. It is expected that the model will lead to a better understanding of the transport processes in SSF and will therefore assist in the optimal design of bioreactors for SSF.
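The zero-order limit mentioned above has a classical closed-form consequence: oxygen penetrates the biofilm only to a finite depth. A sketch using the standard slab result (symbols and default values are illustrative, not taken from the paper's model):

```python
import math

def oxygen_penetration_depth(D_e, C_s, q0):
    """Depth to which oxygen penetrates a biofilm slab under zero-order
    consumption kinetics, from the classical result
        delta = sqrt(2 * D_e * C_s / q0),
    where D_e is the effective diffusivity (m^2/s), C_s the surface O2
    concentration (mol/m^3), and q0 the volumetric uptake rate
    (mol/m^3/s). A textbook relation, not the paper's full model."""
    return math.sqrt(2.0 * D_e * C_s / q0)
```

Cells deeper than this depth see no oxygen, which is one way a model of this kind yields an upper limit on the active cell density.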
Abstract:
Solar UV radiation is harmful for life on planet Earth, but fortunately the atmospheric oxygen and ozone absorb almost entirely the most energetic UVC radiation photons. However, part of the UVB radiation and much of the UVA radiation reaches the surface of the Earth, where it affects human health, the environment and materials, and drives atmospheric and aquatic photochemical processes. In order to quantify these effects and processes there is a need for ground-based UV measurements and radiative transfer modeling to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimation of the surface UV radiation. This work focuses on radiative transfer theory based methods used for estimation of the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm originally developed at NASA Goddard Space Flight Center for estimation of the surface UV irradiance from the measurements of the Dutch-Finnish built Ozone Monitoring Instrument (OMI), to improve the original surface UV algorithm especially in relation to snow cover, to validate the OMI-derived daily surface UV doses against ground-based measurements, and to demonstrate how the satellite-derived surface UV data can be used to study the effects of the UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modeling of the surface UV using satellite data, and the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimation of the surface UV amounts from the OMI measurements as well as the unique Very Fast Delivery processing system developed for processing of the OMI data received at the Sodankylä satellite data centre.
The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents the results of a comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data. It gives an estimate of the expected accuracy of the OMI-derived surface UV doses for various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of the satellite-derived surface UV data. The sixth paper presents an assessment of photochemical decomposition rates in the aquatic environment. The seventh paper presents the use of satellite-derived daily surface UV doses for planning outdoor material weathering tests.
Abstract:
A model of the precipitation process in reverse micelles has been developed to calculate the size of fine particles obtained therein. While the method shares several features of particle nucleation and growth common to precipitation in large systems, complexities arise in describing the process of nucleation, due to the extremely small size of a micelle, and that of particle growth, caused by fusion among the micelles. Occupancy of micelles by solubilized molecules is governed by Poisson statistics, implying that most micelles are empty and cannot nucleate on their own. The model therefore specifies the minimum number of solubilized molecules required to form a nucleus, which is used to calculate the homogeneous nucleation rate. Simultaneously, interaction between micelles is assumed to occur by Brownian collision and instantaneous fusion. Analysis of the time scales of the various events shows the growth of particles to be very fast compared to the other phenomena occurring. This implies that nonempty micelles are either supersaturated or contain a single precipitated particle, and it allows the application of deterministic population balance equations to describe the evolution of the system with time. The model successfully predicts the experimental measurements of Kandori et al. [3] on the size of precipitated CaCO3 particles, obtained by carbonation of reverse micelles containing aqueous Ca(OH)2 solution.
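The Poisson occupancy statistics invoked above can be made concrete: given a mean occupancy, the fraction of micelles carrying at least the minimum nucleus size follows directly. A small sketch (the parameter names are illustrative, and the minimum nucleus size is a model input, not a value stated in the abstract):

```python
import math

def occupancy_pmf(k, mean_occupancy):
    """Poisson probability that a reverse micelle holds exactly k
    solubilized molecules, P(k) = exp(-lam) * lam**k / k!."""
    lam = mean_occupancy
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fraction_nucleating(mean_occupancy, n_min):
    """Fraction of micelles with at least n_min molecules, i.e. those
    able to form a critical nucleus under the model's assumption of a
    minimum nucleus size n_min."""
    below = sum(occupancy_pmf(k, mean_occupancy) for k in range(n_min))
    return 1.0 - below
```

For a mean occupancy of 1, roughly 37% of micelles are empty, which illustrates why most micelles cannot nucleate on their own.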
Abstract:
In this paper, we report an analysis of the protein sequence length distribution for 13 bacteria, four archaea and one eukaryote whose genomes have been completely sequenced. The frequency distributions of protein sequence length for all 18 organisms are remarkably similar, independent of genome size, and can be described in terms of a lognormal probability distribution function. A simple stochastic model based on multiplicative processes has been proposed to explain the sequence length distribution. The stochastic model supports the random-origin hypothesis of protein sequences in genomes. Distributions of large proteins deviate from the overall lognormal behavior: their cumulative distribution follows a power law analogous to Pareto's law used to describe the income distribution of the wealthy. The protein sequence length distribution in the genomes of organisms has important implications for microbial evolution and applications. (C) 1999 Elsevier Science B.V. All rights reserved.
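A lognormal description of sequence lengths like the one above can be moment-matched in a few lines: take logs of the lengths and estimate the mean and standard deviation of the underlying normal. A minimal sketch, not the authors' fitting procedure:

```python
import math
import statistics

def fit_lognormal(lengths):
    """Moment-match a lognormal to positive lengths: return (mu, sigma)
    of the underlying normal, estimated from the log-lengths."""
    logs = [math.log(x) for x in lengths]
    return statistics.mean(logs), statistics.pstdev(logs)

def lognormal_pdf(x, mu, sigma):
    """Lognormal probability density, the functional form used to
    describe the overall length distribution (requires sigma > 0)."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * math.sqrt(2.0 * math.pi))
```

The heavy Pareto-like tail reported for large proteins is precisely where a fit of this kind would underpredict the observed frequencies.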