944 results for computational fluid dynamic
Abstract:
Surface tension forces are significant at millimeter length-scales, causing profoundly different flow morphologies in microchannels than in macroscale flows. The existence and morphology of thin liquid films are particularly relevant for predicting the performance and operational stability of devices containing microscale two-phase flows. Analytical, computational, and experimental methods previously employed in the study of thin liquid films are discussed. Thicknesses before and after a novel film morphology, referred to as a 'shock,' are measured with a new film-thickness measurement technique based on confocal microscopy. Film thicknesses predicted by previous work are compared to experimental results. Methods for increasing the accuracy of the confocal film-thickness measurement technique are discussed.
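A minimal, hypothetical sketch of the kind of film-thickness prediction from previous work that such measurements are typically compared against: the classical Bretherton-type correlation with the Aussillous-Quere saturation term. The channel dimensions and fluid properties below are illustrative assumptions, not values from the study.

    # Classical Bretherton-type prediction of the thin film deposited around a bubble
    # moving through a circular microchannel (illustrative only).
    def bretherton_film_thickness(radius_m, velocity_m_s, viscosity_pa_s, surface_tension_n_m):
        """Return the predicted film thickness (m) for a bubble in a circular tube."""
        ca = viscosity_pa_s * velocity_m_s / surface_tension_n_m   # capillary number
        # Low-Ca Bretherton scaling with the Aussillous-Quere saturation term.
        return radius_m * 1.34 * ca ** (2.0 / 3.0) / (1.0 + 3.35 * ca ** (2.0 / 3.0))

    # Example: water-like fluid in a 250-micron channel at 5 mm/s (hypothetical numbers).
    print(bretherton_film_thickness(250e-6, 5e-3, 1e-3, 0.072))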
Abstract:
Dynamic models for electrophoresis are based on model equations derived from transport concepts in solution together with user-specified conditions. They can predict the movement of ions theoretically and are thus the most versatile tool for exploring the fundamentals of electrokinetic separations. Since its inception three decades ago, dynamic computer simulation software and its use have progressed significantly, and Electrophoresis played a pivotal role in that endeavor, as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF, and EKC, and their combinations, under almost exactly the same conditions used in the laboratory. It has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary, and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples, and a discussion of the applications and achievements of dynamic simulation.
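As a rough illustration of what such model equations look like in the simplest possible setting, the sketch below advances the one-dimensional transport equation for a single ion under a constant electric field with an explicit finite-difference scheme. Real dynamic simulators solve the fully coupled multi-component problem with electroneutrality and current continuity; all values and simplifications here are assumptions.

    # Single-ion 1D transport: dc/dt = -d(mu*E*c)/dx + D*d2c/dx2, constant field E,
    # explicit upwind/central finite differences, periodic boundaries via np.roll.
    import numpy as np

    nx, dx, dt = 400, 1e-5, 1e-4          # grid points, spacing (m), time step (s)
    mu, D, E = 3e-8, 1e-9, 5e3            # mobility (m^2/Vs), diffusivity (m^2/s), field (V/m)
    c = np.zeros(nx); c[180:220] = 1.0    # initial sample plug (arbitrary units)

    for _ in range(2000):
        flux = mu * E * c                                   # electromigration flux
        adv = -(flux - np.roll(flux, 1)) / dx               # upwind divergence (E > 0)
        diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + diff)                              # explicit time update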
Abstract:
Opaque products enable service providers to hide specific characteristics of their service fulfillment from the customer until after purchase. Prominent examples include internet-based service providers selling airline tickets without defining details, such as departure time or operating airline, until the booking has been made. Owing to the resulting flexibility in resource utilization, the traditional revenue management process needs to be modified. In this paper, we extend dynamic programming decomposition techniques widely used for traditional revenue management to develop an intuitive capacity control approach that allows for the incorporation of opaque products. In a simulation study, we show that the developed approach significantly outperforms other well-known capacity control approaches adapted to the opaque product setting. Based on the approach, we also provide computational examples of how the share of opaque products as well as the degree of opacity can influence the results.
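To make the decomposition idea concrete, here is a hedged, textbook-style sketch with made-up numbers: a single-resource dynamic program is solved per flight, and an opaque request is fulfilled on whichever of two parallel flights currently values a seat least (its bid price). This is a generic illustration of dynamic programming decomposition with opaque fulfillment, not the approach developed in the paper.

    import numpy as np

    T, C = 200, 20                      # decision periods, seats per flight (hypothetical)
    p_req, fare = 0.3, 100.0            # per-period request probability and fare

    def single_leg_value(T, C, p, r):
        """Standard single-leg DP: accept a request if the fare exceeds the opportunity cost."""
        V = np.zeros((T + 1, C + 1))
        for t in range(T - 1, -1, -1):
            for x in range(C + 1):
                keep = V[t + 1, x]
                sell = V[t + 1, x - 1] + r if x > 0 else -np.inf
                V[t, x] = (1 - p) * keep + p * max(keep, sell)
        return V

    V = [single_leg_value(T, C, p_req, fare) for _ in range(2)]   # one DP per flight

    def fulfill_opaque(t, x1, x2, opaque_fare):
        """Accept an opaque request if it beats the smaller of the two bid prices."""
        bid1 = V[0][t + 1, x1] - V[0][t + 1, x1 - 1] if x1 > 0 else np.inf
        bid2 = V[1][t + 1, x2] - V[1][t + 1, x2 - 1] if x2 > 0 else np.inf
        leg = 0 if bid1 <= bid2 else 1
        return leg if opaque_fare >= min(bid1, bid2) else None

    print(fulfill_opaque(t=0, x1=20, x2=20, opaque_fare=60.0))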
Abstract:
Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds the dynamic range of a single exposure. Common to all these techniques is the problem that the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final images. This limits the use of HDR and Exposure Fusion techniques because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence with simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and different movement configurations. The result is a ghost-free and highly detailed exposure fused image at a low computational cost.
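A hedged sketch of the general idea of detecting clusters of moving pixels with simple binary operations: threshold intensity differences against a reference exposure, then clean the mask with binary opening and connected-component filtering. The threshold, minimum cluster size, and exposure normalization are assumptions; this is not the authors' exact operator.

    import numpy as np
    from scipy import ndimage

    def motion_clusters(images, ref_index=1, thresh=0.1, min_size=50):
        """images: list of float gray images in [0, 1], roughly exposure-normalized."""
        ref = images[ref_index]
        mask = np.zeros_like(ref, dtype=bool)
        for k, img in enumerate(images):
            if k == ref_index:
                continue
            mask |= np.abs(img - ref) > thresh              # per-exposure binary motion map
        mask = ndimage.binary_opening(mask, iterations=1)   # drop isolated noisy pixels
        labels, n = ndimage.label(mask)                     # connected clusters
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
        return keep                                         # boolean ghost mask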
Abstract:
The tail-withdrawal circuit of Aplysia provides a useful model system for investigating synaptic dynamics. Sensory neurons within the circuit manifest several forms of synaptic plasticity. Here, we developed a model of the circuit and investigated the ways in which depression (DEP) and potentiation (POT) contributed to information processing. DEP limited the amount of motor neuron activity that could be elicited by the monosynaptic pathway alone. POT within the monosynaptic pathway did not compensate for DEP. There was, however, a synergistic interaction between POT and the polysynaptic pathway. This synergism extended the dynamic range of the network, and the interplay between DEP and POT made the circuit respond preferentially to long-duration, low-frequency inputs.
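For illustration only, the sketch below implements a generic synapse with activity-dependent depression and potentiation driven by a presynaptic spike train. The parameters and functional forms are assumptions and do not reproduce the published circuit model, but they show qualitatively how DEP and POT can jointly shape the response to long-duration, low-frequency input.

    import numpy as np

    def synaptic_output(spike_times, t_end=10.0, dt=1e-3,
                        tau_R=2.0, use=0.4, tau_P=5.0, dP=0.2):
        t = np.arange(0.0, t_end, dt)
        R, P = 1.0, 1.0                      # available resource (DEP), potentiation factor (POT)
        out = np.zeros_like(t)
        spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
        for i in range(len(t)):
            R += dt * (1.0 - R) / tau_R      # recovery from depression
            P += dt * (1.0 - P) / tau_P      # decay of potentiation back to baseline
            if i in spikes:
                out[i] = R * use * P         # effective synaptic drive for this spike
                R -= use * R                 # depression: consume resources
                P += dP                      # potentiation: increment efficacy
        return t, out

    # A long, low-frequency train (hypothetical 2 Hz input) accumulates more total drive
    # than a brief high-frequency burst of equal spike count.
    t, drive = synaptic_output(spike_times=np.arange(0.5, 8.5, 0.5))
    print(drive.sum())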
Abstract:
The impact of initial sample distribution on separation and focusing of analytes in a pH 3–11 gradient formed by 101 biprotic carrier ampholytes under concomitant electroosmotic displacement was studied by dynamic high-resolution computer simulation. Data obtained with the analytes applied mixed with the carrier ampholytes (as is customarily done), as a short zone within the initial carrier ampholyte zone, sandwiched between zones of carrier ampholytes, or introduced before or after the initial carrier ampholyte zone were compared. With sampling as a short zone within or adjacent to the carrier ampholytes, separation and focusing of analytes is shown to proceed as a cationic, anionic, or mixed process, and separation of the analytes is predicted to be much faster than the separation of the carrier components. Thus, after the initial separation, analytes continue to separate and eventually reach their focusing locations. This differs from the double-peak approach to equilibrium that takes place when analytes and carrier ampholytes are applied as a homogeneous mixture. Simulation data reveal that sample application between two zones of carrier ampholytes results in the formation of a pH gradient disturbance, as the concentration of the carrier ampholytes within the fluid element initially occupied by the sample will be lower compared to the other parts of the gradient. As a consequence, the properties of this region are sample-matrix dependent, the pH gradient is flatter, and the region is likely to represent a conductance gap (hot spot). Simulation data suggest that placing the sample at the anodic side or at the anodic end of the initial carrier ampholyte zone provides the favorable configurations for capillary isoelectric focusing with electroosmotic zone mobilization.
Impact of epinephrine and norepinephrine on two dynamic indices in a porcine hemorrhagic shock model
Abstract:
BACKGROUND: Pulse pressure variations (PPVs) and stroke volume variations (SVVs) are dynamic indices for predicting fluid responsiveness in intensive care unit patients. These hemodynamic markers underscore the Frank-Starling law, by which volume expansion increases cardiac output (CO). The aim of the present study was to evaluate the impact of the administration of catecholamines on PPV, SVV, and inferior vena cava flow (IVCF). METHODS: In this prospective, physiologic, animal study, hemodynamic parameters were measured in deeply sedated and mechanically ventilated pigs. Systemic hemodynamics and pressure-volume loops obtained by inferior vena cava occlusion were recorded. Measurements were collected under two conditions, normovolemia and hypovolemia, the latter generated by blood removal to obtain a mean arterial pressure lower than 60 mm Hg. Under each condition, CO, IVCF, SVV, and PPV were assessed by catheters and flow meters. Data were compared between the normovolemia and hypovolemia conditions before and after intravenous administration of norepinephrine and epinephrine using a nonparametric Wilcoxon test. RESULTS: Eight pigs were anesthetized, mechanically ventilated, and equipped. Both norepinephrine and epinephrine significantly increased IVCF and decreased PPV and SVV, regardless of volemic conditions (p < 0.05). However, epinephrine was also able to significantly increase CO regardless of volemic conditions. CONCLUSION: The present study demonstrates that intravenous administration of norepinephrine and epinephrine increases IVCF regardless of the volemic conditions. The concomitant decreases in PPV and SVV corroborate the fact that catecholamine administration recruits unstressed blood volume. In this regard, interpreting a decrease in PPV and SVV values after catecholamine administration as an obvious indication of restored volemia could be an outright misinterpretation.
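For reference, PPV and SVV are conventionally computed as the maximum-minus-minimum swing of pulse pressure or stroke volume over one respiratory cycle, normalized by the mean of the two extremes. The sketch below encodes that standard definition with hypothetical example values, not data from the study.

    def variation_index(values_over_one_respiratory_cycle):
        """Return PPV or SVV in percent from beat-to-beat pulse pressures or stroke volumes."""
        vmax = max(values_over_one_respiratory_cycle)
        vmin = min(values_over_one_respiratory_cycle)
        return 100.0 * (vmax - vmin) / ((vmax + vmin) / 2.0)

    # Hypothetical beat-to-beat pulse pressures (mm Hg) over one breath:
    print(variation_index([42, 39, 35, 33, 36, 40]))   # about 24%, in the fluid-responsive range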
Abstract:
The potential and adaptive flexibility of population dynamics P systems (PDP) for studying population dynamics suggests that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a model a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of this species was simulated under different scenarios, ranging from the absence of water-flow change, through a weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented, where reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
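A heavily simplified, hypothetical sketch of the kind of weekly larval population update such scenario comparisons involve, with temperature-dependent spawning and flow-dependent washout. The functional forms and parameters are invented for illustration and are not the PDP model.

    import numpy as np

    def simulate_larvae(weeks=52, adults=1e6, flow_scenario=None, seed=0):
        rng = np.random.default_rng(seed)
        flow = flow_scenario if flow_scenario is not None else np.full(weeks, 0.05)
        larvae, history = 0.0, []
        for w in range(weeks):
            temp = 12 + 10 * np.sin(2 * np.pi * (w - 10) / 52)    # crude seasonal temperature (C)
            spawning = adults * 50 * max(0.0, (temp - 14) / 10)   # spawning only above ~14 C
            survival = 0.7 * (1 - flow[w])                        # mortality plus washout
            larvae = larvae * survival + rng.poisson(spawning)
            history.append(larvae)
        return np.array(history)

    # Compare a constant low outflow with a pulsed weekly flushing scenario:
    base = simulate_larvae()
    pulsed = simulate_larvae(flow_scenario=np.tile([0.02, 0.4], 26))
    print(base.max(), pulsed.max())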
Abstract:
An axisymmetric, elastic pipe is filled with an incompressible fluid and is immersed in a second, coaxial rigid pipe which contains the same fluid. A pressure pulse in the outer fluid annulus deforms the elastic pipe, which induces a fluid motion in the fluid core. The aim of this study is to investigate streaming phenomena in the core that may originate from such a fluid-structure interaction. This work presents a numerical solver for such a configuration. It was developed in the OpenFOAM software environment and is based on the Arbitrary Lagrangian Eulerian (ALE) approach for moving meshes. The solver features a monolithic integration of the one-dimensional, coupled system between the elastic structure and the outer fluid annulus into a dynamic boundary condition for the moving surface of the fluid core. Results indicate that our configuration may serve as a mechanical model of the Tullio Phenomenon (sound-induced vertigo).
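As a very reduced illustration of the coupling idea, and emphatically not the OpenFOAM/ALE solver itself, the sketch below lets an annulus pressure pulse deflect the pipe wall through an assumed linear tube law with a first-order structural response, producing the radial wall velocity that a dynamic boundary condition would impose on the moving surface of the fluid core. All parameters are invented.

    import numpy as np

    nt, dt = 2000, 1e-5
    r0, K, tau = 2e-3, 5e7, 1e-3        # rest radius (m), wall stiffness (Pa), response time (s)
    r = np.full(nt, r0)
    wall_velocity = np.zeros(nt)

    for n in range(1, nt):
        t = n * dt
        p_annulus = 2e3 * np.exp(-((t - 5e-3) / 1e-3) ** 2)   # Gaussian pressure pulse (Pa)
        r_eq = r0 * (1 + p_annulus / K)                        # assumed linear tube law
        r[n] = r[n - 1] + dt * (r_eq - r[n - 1]) / tau         # first-order wall response
        wall_velocity[n] = (r[n] - r[n - 1]) / dt              # radial velocity fed to the core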
Abstract:
Received signal strength-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to the changing environmental dynamics, the behavior of the channel may change after some time; thus, recalibration processes are necessary to maintain the positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore, it can be used in real-time operation for wireless resource-constrained nodes.
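A hedged sketch of least-mean-squares calibration of a per-anchor log-distance channel model. For brevity it fits RSS samples collected at known sniffer positions (the simpler textbook variant) rather than iterating on the positioning error as the proposed method does; all data and step sizes are assumptions.

    import numpy as np

    def lms_calibrate(distances_m, rss_dbm, d0=1.0, mu=0.005, epochs=200):
        """Fit RSS(d) = P0 - 10*n*log10(d/d0) with a least-mean-squares update."""
        P0, n = -40.0, 2.0                      # initial guesses (dBm at d0, path-loss exponent)
        for _ in range(epochs):
            for d, rss in zip(distances_m, rss_dbm):
                x = -10.0 * np.log10(d / d0)    # regressor multiplying n
                err = rss - (P0 + n * x)        # prediction error for this sample
                P0 += mu * err                  # LMS gradient steps
                n += mu * err * x
        return P0, n

    # Hypothetical sniffer data for one anchor: distances (m) and measured RSS (dBm).
    print(lms_calibrate([1, 2, 5, 10, 20], [-41, -47, -55, -61, -67]))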
Abstract:
After the experience gained during the past years, it seems clear that nonlinear analyses of bridges are very important for computing ductility demands and localizing potential hinges. This is especially true for irregular bridges, in which it is not clear whether or not it is possible to use a linear computation followed by a correction using a behaviour factor. To simplify the numerical effort, several approximate methods have been proposed. Among them, the so-called Dynamic Plastic Hinge Method, in which an evolutionary shape function is used to reduce the structure to a single-degree-of-freedom system, seems to maintain a good balance between accuracy and simplicity. This paper presents results obtained in a parametric study conducted under the auspices of the PREC-8 European research program.
Abstract:
"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (the natural frequencies, damping ratios, and mode shapes). This paper presents some modal identification results obtained using a state-of-the-art time-domain system identification method (data-driven stochastic subspace algorithms [2]) applied to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results obtained from the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, at which only vertical and transversal measurements were conducted (biaxial stations). Point estimates and interval estimates have been carried out on the state-space model using these ambient vibration measurements. In the case of parametric models (like state-space models), the dynamic behaviour of a system is described using mathematical models; mathematical relationships can then be established between modal parameters and estimated point parameters (thus, it is common to use experimental modal analysis as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of estimates of the parameters of state-space models (confidence intervals). This approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed. These graphics differentiate system modes from spurious noise modes for a given model order. Additionally, using the modal assurance criterion, the experimental modes obtained have been compared with those evaluated from a finite element analysis. Quite good agreement between numerical and experimental results is observed.
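The following sketch illustrates the nonparametric bootstrap idea for interval estimates of a modal parameter. For brevity the "identification" here is just an FFT peak pick on a synthetic single-channel record, whereas the paper uses the data-driven stochastic subspace algorithm as the estimator, so the estimator, block count, and signal are all stand-in assumptions.

    import numpy as np

    def dominant_frequency(y, fs):
        spec = np.abs(np.fft.rfft(y - y.mean()))
        freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
        return freqs[np.argmax(spec)]

    def bootstrap_frequency(y, fs, n_blocks=20, n_boot=500, seed=0):
        rng = np.random.default_rng(seed)
        blocks = np.array_split(y, n_blocks)                 # split the record into blocks
        estimates = []
        for _ in range(n_boot):
            pick = rng.integers(0, n_blocks, n_blocks)       # resample blocks with replacement
            resampled = np.concatenate([blocks[i] for i in pick])
            estimates.append(dominant_frequency(resampled, fs))
        return np.percentile(estimates, [2.5, 97.5])         # 95% confidence interval

    # Synthetic ambient-like response: a 2.2 Hz mode plus noise, sampled at 100 Hz.
    fs = 100.0
    t = np.arange(0, 600, 1 / fs)
    y = np.sin(2 * np.pi * 2.2 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
    print(bootstrap_frequency(y, fs))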
Abstract:
The problems being addressed involve the dynamic interaction of solids (structure and foundation) with a liquid (water). Various numerical procedures are reviewed and employed to solve the problem of establishing the expected response of a structure subjected to seismic excitations while duly accounting for those interactions. The methodology is applied to the analysis of dams, lock gates, and large storage tanks, incorporating in some cases a comparison with the results produced by means of simplified analytical procedures.
Abstract:
The design of nuclear power plants has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, which incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular accidents, that are considered to be plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. On the other hand, probabilistic safety analysis tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend PSA's scope and quality. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the CSN (Consejo de Seguridad Nuclear) branch of Modelling and Simulation (MOSI). This methodology attempts to extend classical PSA by including accident dynamics analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of this ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamics analysis support through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code-coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. Such techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. Then, the next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given. This is followed by an introduction to the developed techniques, giving full details of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
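As a toy illustration of the estimation problem that drives the need for fewer simulations, and not of SCAIS or the ISA methodology, the sketch below estimates a damage (exceedance) probability by crude Monte Carlo and then by importance sampling, one generic variance-reduction technique that achieves comparable accuracy with fewer simulator runs. The simulator, threshold, and distributions are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    LIMIT = 1200.0                                   # hypothetical damage threshold

    def simulator(x):
        """Stand-in for an expensive accident simulation: peak variable vs. input parameter."""
        return 1000.0 + 60.0 * x                     # damage occurs roughly when x > 3.33

    # Crude Monte Carlo with N runs, x ~ N(0, 1):
    N = 10000
    x = rng.standard_normal(N)
    p_mc = np.mean(simulator(x) > LIMIT)

    # Importance sampling: draw from N(3.5, 1) so damage is frequent, reweight by the
    # likelihood ratio; far fewer runs are needed for a comparable standard error.
    M = 1000
    z = rng.normal(3.5, 1.0, M)
    w = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - 3.5)**2)   # N(0,1) density over N(3.5,1) density
    p_is = np.mean((simulator(z) > LIMIT) * w)

    print(p_mc, p_is)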