984 results for order-flow
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. In recent years, several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the high hazard level involved. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess the sources of error in order to understand how the empirical uncertainties influence the predictions; in addition, the research addresses the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well-documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of empirical relations between event volume and the planimetric and cross-sectional inundated areas, the code delineates the areas affected by an event by taking into account the event volume, the preferential flow path and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves calibrating the model with factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which must necessarily be taken into account and depend mostly on errors in the estimation of the deposited volume.
In addition, in order to test the predictive capabilities of physics-based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), adopts a one-phase approach based on the Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. Regarding hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport and deposition, the seasonal patterns of occurrence, and the role played by bedrock geology in sediment transfer.
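The volume-area scaling that empirical codes of this kind build on can be sketched in a few lines. The power-law form below (inundated area proportional to volume to the two-thirds) follows from dimensional reasoning; the coefficient values used here are illustrative placeholders, not the calibrated DFlowz values.

```python
def inundated_areas(volume_m3, k_b=33.0, k_a=0.08):
    """Empirical scaling of debris-flow inundated areas with event volume V:
        B = k_b * V**(2/3)   (planimetric inundated area, m^2)
        A = k_a * V**(2/3)   (cross-sectional inundated area, m^2)
    k_b and k_a are illustrative values only; in practice they are
    calibrated on documented events, and their uncertainty propagates
    directly into the predicted hazard footprint."""
    scale = volume_m3 ** (2.0 / 3.0)
    return k_b * scale, k_a * scale
```

Because the predicted areas scale with the two-thirds power of volume, a relative error in the estimated deposited volume enters the area prediction damped by a factor of 2/3, which is consistent with volume estimation being the dominant but not catastrophic error source.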
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and the measurements are often widely spaced (~2 km apart), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
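The routing step between consecutive GPS fixes can be illustrated with a minimal A* search on a length-weighted road graph. This is a generic sketch, not the thesis's algorithm: the graph, coordinates, and the straight-line (Euclidean) heuristic, which is admissible for distance-weighted edges, are all assumptions for illustration.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* shortest path on a road graph.

    graph:  {node: [(neighbour, edge_length_m), ...]}
    coords: {node: (x, y)} in metres; the Euclidean distance to the
            goal is the heuristic (it never overestimates the true
            remaining road distance, so the result is optimal).
    Returns (total_length, node_path); (inf, []) if unreachable."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nxt, w in graph.get(node, []):
            g2 = g + w
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return math.inf, []
```

In a map-matching pipeline, a search of this kind is run between each pair of candidate road positions, and the Bayesian step then scores the resulting candidate paths against the observed fixes.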
Abstract:
As land is developed, the impervious surfaces that are created increase the amount of runoff during rainfall events, disrupting the natural hydrologic cycle and increasing both the volume of runoff and the pollutant loadings. Pollutants deposited on or derived from activities on the land surface, such as nutrients, sediment, heavy metals, hydrocarbons, gasoline additives, pathogens, deicers, herbicides and pesticides, will likely end up in stormwater runoff in some concentration. Several of these pollutants are particulate-bound, so sediment removal can provide significant water-quality improvements, and knowledge of the ability of stormwater treatment devices to retain particulate matter is therefore important. For this reason, three different sediment-removal units have been tested in the laboratory. In particular, a roadside gully pot was tested under steady hydraulic conditions while varying the characteristics of the influent solids (diameter, particle size distribution and specific gravity). The efficiency in terms of particles retained was evaluated as a function of influent flow rate and particle characteristics; the results were compared with the efficiency predicted by an overflow rate model. Furthermore, the role of particle settling velocity in determining efficiency was investigated. After the experimental runs on the gully pot, a standard full-scale model of a hydrodynamic separator (HS) was tested under unsteady influent flow rate conditions with constant solids concentration at the inlet. The results presented in this study illustrate that the particle separation efficiency of the unit is predominantly influenced by the operating flow rate, which strongly affects the particle and hydraulic residence times of the system.
The efficiency data were compared with results obtained from a modified overflow rate model; moreover, the residence time distribution was determined experimentally through tracer analyses at several steady flow rates. Finally, three experiments were performed for two different configurations of a full-scale model of a clarifier (linear and crenulated) under unsteady influent flow rate conditions with constant solids concentration at the inlet. The results illustrate that the particle separation efficiency of this unit is predominantly influenced by its configuration. Turbidity measurements were compared with the suspended sediment concentration in order to find a correlation between the two quantities, which would make it possible to estimate the sediment concentration simply by installing a turbidity probe.
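The overflow rate model referred to above can be sketched in its simplest, idealized form: a particle is retained when its settling velocity exceeds the surface loading Q/A, and slower particles are removed in proportion to the ratio of the two. The Stokes settling velocity used here assumes laminar settling of a sphere; the numbers in the usage note are illustrative, not the tested units' data.

```python
def stokes_settling_velocity(d_m, rho_p=2650.0, rho_w=998.0,
                             mu=1.0e-3, g=9.81):
    """Stokes (laminar-regime) settling velocity of a sphere [m/s]:
        v_s = g * (rho_p - rho_w) * d^2 / (18 * mu)
    Defaults: quartz-density particle in water at ~20 C."""
    return g * (rho_p - rho_w) * d_m ** 2 / (18.0 * mu)

def overflow_rate_efficiency(v_s, Q, A):
    """Ideal overflow-rate (surface loading) model: removal efficiency
    of a particle with settling velocity v_s [m/s] in a unit of plan
    surface area A [m^2] at flow rate Q [m^3/s].  Particles with
    v_s >= Q/A are fully retained; slower particles are removed in
    proportion to v_s / (Q/A)."""
    overflow_rate = Q / A
    return min(1.0, v_s / overflow_rate)
```

The model makes explicit why efficiency drops with increasing flow rate: the surface loading Q/A grows while the particle settling velocities are fixed, which is the behavior the gully pot and HS experiments quantify.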
Abstract:
The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. A reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by evaporation and makes it possible to predict wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. The vapor and liquid phases are treated as incompressible and laminar. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but discontinuities at the interface and interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume of Fluid technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. Interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Self-implemented functions, developed ad hoc, are introduced within the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface due to evaporation. Several validation benchmarks demonstrate the improved performance of the extended solver. Various adiabatic configurations are simulated in order to test the capability of the numerical framework to model actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble underlines the dominant effect on the global heat transfer rate of the local transient heat convection in the liquid after the bubble transit. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
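The "mapping of discontinuous fluid properties" in a single-fluid Volume of Fluid formulation amounts to blending the phase values with the local volume fraction. The sketch below shows the standard volume-fraction-weighted (arithmetic) blending commonly used for density in VOF solvers; it is illustrative only, and other properties (e.g. viscosity) may use different blending rules in a given solver.

```python
def mixture_property(alpha, prop_liquid, prop_vapor):
    """Single-fluid (VOF) mapping of a discontinuous property.

    alpha is the liquid volume fraction in a cell: 1 in pure liquid,
    0 in pure vapor, intermediate in interface cells.  The mixture
    value is the volume-fraction-weighted mean of the phase values,
    so properties vary smoothly across the (smeared) interface."""
    return alpha * prop_liquid + (1.0 - alpha) * prop_vapor
```

For example, a cell with alpha = 0.25 containing water (rho = 998 kg/m^3) and vapor (rho = 0.6 kg/m^3) is assigned an intermediate mixture density; the surface tension force and evaporation source terms are then concentrated in exactly these intermediate-alpha cells.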
Abstract:
A fundamental gap in the current understanding of collapsed structures in the universe concerns the thermodynamical evolution of the ordinary, baryonic component. Unopposed radiative cooling of plasma would lead to the cooling catastrophe, a massive inflow of condensing gas toward the centre of galaxies, groups and clusters. The last generation of multiwavelength observations has radically changed our view on baryons, suggesting that the heating linked to the active galactic nucleus (AGN) may be the balancing counterpart of cooling. In this Thesis, I investigate the engine of the heating regulated by the central black hole. I argue that the mechanical feedback, based on massive subrelativistic outflows, is the key to solving the cooling flow problem, i.e. dramatically quenching the cooling rates for several billion years without destroying the cool-core structure. Using an upgraded version of the parallel 3D hydrodynamic code FLASH, I show that anisotropic AGN outflows can further reproduce fundamental observed features, such as buoyant bubbles, cocoon shocks, sonic ripples, metals dredge-up, and subsonic turbulence. The latter is an essential ingredient to drive nonlinear thermal instabilities, which cause cold gas condensation, a residual of the quenched cooling flow and, later, fuel for the AGN feedback engine. The self-regulated outflows are systematically tested on the scales of massive clusters, groups and isolated elliptical galaxies: in lighter less bound objects the feedback needs to be gentler and less efficient, in order to avoid drastic overheating. In this Thesis, I describe in depth the complex hydrodynamics, involving the coupling of the feedback energy to that of the surrounding hot medium. Finally, I present the merits and flaws of all the proposed models, with a critical eye toward observational concordance.
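The timescale argument behind the cooling catastrophe can be made explicit with a textbook estimate (this is standard plasma astrophysics, not a result specific to this work): for a hot plasma with electron and ion number densities $n_e$, $n_i$, temperature $T$, and cooling function $\Lambda(T)$, the radiative cooling time is

```latex
t_{\rm cool} \;\simeq\; \frac{3}{2}\,\frac{(n_e + n_i)\, k_B T}{n_e\, n_i\, \Lambda(T)} .
```

In dense cluster cores $t_{\rm cool}$ falls well below the Hubble time, which is the classical cooling-flow prediction that AGN heating must offset. Simulations of the kind described here commonly monitor the ratio of $t_{\rm cool}$ to the free-fall time $t_{\rm ff}$, with nonlinear thermal instability and cold gas condensation expected when that ratio drops to roughly ten or below.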
Abstract:
A novel design based on electric-field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which permits the production of cost-effective large-area arrays with geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically by generating the required feedback signal from a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure. The first permits their isolation, whereas the second creates a net force which repels the particles from the microwell entrance. To increase the parallelism of the cell-isolation function, a new technique based on coplanar electrodes to detect particle presence was implemented. A lock-in amplification scheme was used to monitor the impedance of the channel perturbed by particles flowing in high-conductivity suspension media. The impedance measurement module was also combined with a dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude due to variation of the particle position within the microchannel.
In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
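The negative-dielectrophoresis (nDEP) trapping principle can be made quantitative with the standard time-averaged DEP force on a small sphere. The expressions below are the textbook dipole approximation (real permittivities only, for simplicity); the numerical values in the test are illustrative, not the device's parameters.

```python
import math

def clausius_mossotti(eps_p, eps_m):
    """Real Clausius-Mossotti factor for particle and medium
    permittivities eps_p, eps_m.  A negative value means nDEP:
    the particle is pushed toward field minima, e.g. into
    field-free microwells."""
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

def dep_force(radius_m, eps_m_abs, cm_factor, grad_E2):
    """Time-averaged DEP force [N] in the dipole approximation:
        F = 2 * pi * eps_m * r^3 * Re[K] * grad(|E|^2)
    eps_m_abs is the absolute medium permittivity [F/m]; grad_E2 is
    the gradient of the squared field magnitude [V^2/m^3]."""
    return 2.0 * math.pi * eps_m_abs * radius_m ** 3 * cm_factor * grad_E2
```

For a cell-like particle in a high-permittivity aqueous medium the Clausius-Mossotti factor is negative, so the force points down the field-intensity gradient: this is why the "repel" field configuration can gate the microwell entrance while the other configuration lets particles in.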
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution to direct and inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested against literature examples. The procedure has been designed with flexibility and efficiency in mind, in order to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements, such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of radionuclide migration risk in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
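One reason PCE is so well suited to global sensitivity analysis is that the mean, variance and Sobol indices follow directly from the expansion coefficients, with no further model runs. The sketch below shows this post-processing for a probabilists'-Hermite PCE with independent standard normal inputs (the basis normalization is the product of factorials of the multi-index); it is a generic illustration, not the MATLAB code described in the work.

```python
import math

def pce_statistics(coeffs):
    """Post-process a probabilists'-Hermite PCE,
        Y = sum_alpha c_alpha * prod_i He_{alpha_i}(xi_i),
    given as {multi_index_tuple: coefficient} for independent standard
    normal inputs xi_i.  Returns (mean, variance, first-order Sobol
    indices).  Assumes at least one nonzero non-constant coefficient."""
    dim = len(next(iter(coeffs)))
    norm = lambda a: math.prod(math.factorial(k) for k in a)  # E[Psi_a^2]
    mean = coeffs.get((0,) * dim, 0.0)          # constant term
    var = sum(c * c * norm(a) for a, c in coeffs.items() if any(a))
    sobol = [sum(c * c * norm(a) for a, c in coeffs.items()
                 if a[i] > 0 and all(k == 0 for j, k in enumerate(a) if j != i))
             / var
             for i in range(dim)]
    return mean, var, sobol
```

Terms whose multi-index involves several inputs contribute to the variance but to no first-order index; the gap between the sum of first-order indices and one therefore measures interaction effects.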
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In this work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, and the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two models is investigated by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
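For readers unfamiliar with eddy-viscosity SGS closures, the basic building block is the Smagorinsky form below, in which the unresolved stresses are modeled through an eddy viscosity proportional to the filter width and the resolved strain rate. Neither the DEVM nor the EASSM is reproduced here; the dynamic model generalizes this form by computing the coefficient on the fly from a test filter instead of fixing it.

```python
import math

def smagorinsky_nu_t(strain_rate_tensor, delta, c_s=0.17):
    """Smagorinsky eddy viscosity:
        nu_t = (C_s * Delta)^2 * |S|,   |S| = sqrt(2 * S_ij * S_ij)
    strain_rate_tensor: 3x3 resolved strain-rate tensor S_ij [1/s]
    delta:              filter width Delta [m]
    c_s:                model constant (0.17 is a common textbook value)"""
    s2 = sum(sij * sij for row in strain_rate_tensor for sij in row)
    return (c_s * delta) ** 2 * math.sqrt(2.0 * s2)
```

Because nu_t is a scalar, the modeled SGS stress is forced to be aligned with the resolved strain rate; the explicit algebraic approach relaxes exactly this alignment assumption, which is why it can represent small-scale anisotropy that eddy-viscosity closures miss.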
Abstract:
Theta burst stimulation (TBS) is a novel variant of repetitive transcranial magnetic stimulation (rTMS) which induces changes in neuronal excitability persisting for up to 1 h. When applied to the primary motor cortex, such physiological modulations might also have an impact on motor behavior. In the present study, we applied TBS in combination with pseudo-continuous arterial spin labeling (pCASL) in order to address the question of whether TBS effects are measurable as changes in physiological parameters such as cerebral blood flow (CBF), and whether TBS-induced plasticity can modify motor behavior. Twelve right-handed healthy subjects were stimulated using an inhibitory TBS protocol at subthreshold stimulation intensity targeted over the right motor cortex. The control condition consisted of a within-subject sham treatment in a crossover design. pCASL was performed before (pre TBS/pre sham) and immediately after treatment (post TBS/post sham). During the pCASL runs, the subjects performed a sequential finger-tapping task with the left hand at individual maximum speed. There was a significant increase of CBF in the primary motor cortex after TBS, but not after sham. It is assumed that inhibitory TBS induced a "local virtual lesion" which led to the mobilization of additional neuronal resources. There was no TBS-specific modulation of motor behavior, which might indicate that acute changes in brain plasticity caused by TBS are immediately compensated. This compensatory reaction seems to be observable at the metabolic, but not at the behavioral, level.
Abstract:
Our society uses a large diversity of co-existing wired and wireless networks in order to satisfy its communication needs. A cooperation between these networks can benefit performance, service availability and deployment ease, and leads to the emergence of hybrid networks. This position paper focuses on a hybrid mobile-sensor network, identifying potential advantages and challenges of its use and defining feasible applications. The main value of the paper, however, is in the proposed analysis approach to evaluate the performance at the mobile network side given the mixed mobile-sensor traffic. The approach combines packet-level analysis with modelling of flow-level behaviour and can be applied to the study of various application scenarios. In this paper we consider two applications with distinct traffic models, namely multimedia traffic and best-effort traffic.
Abstract:
In Alzheimer's disease (AD) patients, episodic memory impairments are apparent, yet semantic memory difficulties are also observed. While the episodic pathology has been thoroughly studied, the neurophysiological mechanisms of the semantic impairments remain obscure. Semantic dementia (SD) is characterized by isolated semantic memory deficits. The present study aimed to find an early marker of mild AD and SD by employing a semantic priming paradigm during electroencephalogram recordings. Event-related potentials (ERP) of early (P1, N1) and late (N400) word processing stages were obtained to measure semantic memory functions. Separately, baseline cerebral blood flow (CBF) was acquired with arterial spin labeling. Thus, the analysis focused on linear regressions of CBF with ERP topographical similarity indices in order to find the brain structures that showed altered baseline functionality associated with deviant ERPs. All participant groups showed semantic priming in their reaction times. Furthermore, decreased CBF in the temporal lobes was associated with abnormal N400 topography. No significant CBF clusters were found for the early ERPs. Taken together, the neurophysiological results suggested that the automatic spread of activation during semantic word processing was preserved in mild dementia, while controlled access to the words was impaired. These findings suggest that N400-topography alterations might be a potential marker for the detection of early dementia. Such a marker could be beneficial for differential diagnosis because of its low cost and non-invasive application, as well as its relationship with semantic memory dysfunctions that are closely associated with the cortical deterioration in regions crucial for semantic word processing.
Abstract:
OBJECTIVE: The use of vasopressors for treatment of hypotension in sepsis may have adverse effects on microcirculatory blood flow in the gastrointestinal tract. The aim of this study was to measure the effects of three vasopressors, commonly used in clinical practice, on microcirculatory blood flow in multiple abdominal organs in sepsis. DESIGN: Random order, cross-over design. SETTING: University laboratory. SUBJECTS: Eight sedated and mechanically ventilated pigs. INTERVENTIONS: Pigs were exposed to fecal peritonitis-induced septic shock. Mesenteric artery flow was measured using ultrasound transit time flowmetry. Microcirculatory flow was measured in gastric, jejunal, and colon mucosa; jejunal muscularis; and pancreas, liver, and kidney using multiple-channel laser Doppler flowmetry. Each animal received a continuous intravenous infusion of epinephrine, norepinephrine, and phenylephrine in a dose increasing mean arterial pressure by 20%. The animals were allowed to recover for 60 mins after each drug before the next was started. MEASUREMENTS AND MAIN RESULTS: During infusion of epinephrine (0.8 +/- 0.2 µg/kg/hr), mean arterial pressure increased from 66 +/- 5 to 83 +/- 5 mm Hg and cardiac index increased by 43 +/- 9%. Norepinephrine (0.7 +/- 0.3 µg/kg/hr) increased mean arterial pressure from 70 +/- 4 to 87 +/- 5 mm Hg and cardiac index by 41 +/- 8%. Both agents caused a significant reduction in superior mesenteric artery flow (11 +/- 4%, p < .05, and 26 +/- 6%, p < .01, respectively) and in microcirculatory blood flow in the jejunal mucosa (21 +/- 5%, p < .01, and 23 +/- 3%, p < .01, respectively) and in the pancreas (16 +/- 3%, p < .05, and 8 +/- 3%, not significant, respectively). Infusion of phenylephrine (3.1 +/- 1.0 µg/kg/min) increased mean arterial pressure from 69 +/- 5 to 85 +/- 6 mm Hg but had no effects on systemic, regional, or microcirculatory flow except for a 30% increase in jejunal muscularis flow (p < .01).
CONCLUSIONS: Administration of the vasopressors phenylephrine, epinephrine, and norepinephrine failed to increase microcirculatory blood flow in most abdominal organs despite increased perfusion pressure and, in the case of epinephrine and norepinephrine, increased systemic blood flow. In fact, norepinephrine and epinephrine appeared to divert blood flow away from the mesenteric circulation and decrease microcirculatory blood flow in the jejunal mucosa and pancreas. Phenylephrine, on the other hand, appeared to increase blood pressure without affecting quantitative blood flow or distribution of blood flow.
Abstract:
INTRODUCTION: The objective was to study the effects of a lung recruitment procedure by stepwise increases of mean airway pressure upon organ blood flow and hemodynamics during high-frequency oscillatory ventilation (HFOV) versus pressure-controlled ventilation (PCV) in experimental lung injury. METHODS: Lung damage was induced by repeated lung lavages in seven anesthetized pigs (23-26 kg). In randomized order, HFOV and PCV were performed with a fixed sequence of mean airway pressure increases (20, 25, and 30 mbar every 30 minutes). The transpulmonary pressure, systemic hemodynamics, intracranial pressure, cerebral perfusion pressure, organ blood flow (fluorescent microspheres), arterial and mixed venous blood gases, and calculated pulmonary shunt were determined at each mean airway pressure setting. RESULTS: The transpulmonary pressure increased during lung recruitment (HFOV, from 15 +/- 3 mbar to 22 +/- 2 mbar, P < 0.05; PCV, from 15 +/- 3 mbar to 23 +/- 2 mbar, P < 0.05), and high airway pressures resulted in elevated left ventricular end-diastolic pressure (HFOV, from 3 +/- 1 mmHg to 6 +/- 3 mmHg, P < 0.05; PCV, from 2 +/- 1 mmHg to 7 +/- 3 mmHg, P < 0.05), pulmonary artery occlusion pressure (HFOV, from 12 +/- 2 mmHg to 16 +/- 2 mmHg, P < 0.05; PCV, from 13 +/- 2 mmHg to 15 +/- 2 mmHg, P < 0.05), and intracranial pressure (HFOV, from 14 +/- 2 mmHg to 16 +/- 2 mmHg, P < 0.05; PCV, from 15 +/- 3 mmHg to 17 +/- 2 mmHg, P < 0.05). Simultaneously, the mean arterial pressure (HFOV, from 89 +/- 7 mmHg to 79 +/- 9 mmHg, P < 0.05; PCV, from 91 +/- 8 mmHg to 81 +/- 8 mmHg, P < 0.05), cardiac output (HFOV, from 3.9 +/- 0.4 l/minute to 3.5 +/- 0.3 l/minute, P < 0.05; PCV, from 3.8 +/- 0.6 l/minute to 3.4 +/- 0.3 l/minute, P < 0.05), and stroke volume (HFOV, from 32 +/- 7 ml to 28 +/- 5 ml, P < 0.05; PCV, from 31 +/- 2 ml to 26 +/- 4 ml, P < 0.05) decreased. Blood flows to the heart, brain, kidneys and jejunum were maintained. 
Oxygenation improved and the pulmonary shunt fraction decreased below 10% (HFOV, P < 0.05; PCV, P < 0.05). We detected no differences between HFOV and PCV at comparable transpulmonary pressures. CONCLUSION: A typical recruitment procedure at the initiation of HFOV improved oxygenation but also decreased systemic hemodynamics at high transpulmonary pressures when no changes of vasoactive drugs and fluid management were performed. Blood flow to the organs was not affected during lung recruitment. These effects were independent of the ventilator mode applied.
Abstract:
Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. In order to improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently used to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merits inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, thereby testing a recent debris flow model experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including the rate of collapse, pore-fluid pressure, and bed load are monitored. The experiments are simplified to axial geometry so that variables act solely in the vertical dimension. Steady-state equations for the motion of the sediment mass are not sufficient to model accurately the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
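The role of vertical acceleration in the bed-normal stress can be illustrated with a one-line force balance. This is a deliberately simplified single-phase relation for intuition only, not the solid-fluid model developed in the work.

```python
def bed_normal_stress(rho, h, a_z, g=9.81):
    """Total bed-normal stress [Pa] under a saturated sediment column
    of bulk density rho [kg/m^3] and thickness h [m] undergoing a
    vertical acceleration a_z [m/s^2], positive upward:
        sigma = rho * h * (g + a_z)
    a_z = 0 recovers the static overburden rho*g*h; a_z = -g
    (free fall) gives zero bed-normal stress, and hence zero
    frictional resistance at the bed."""
    return rho * h * (g + a_z)
```

Even modest vertical accelerations over irregular terrain therefore modulate basal friction in proportion to (g + a_z)/g, which is why depth-averaged models that drop the vertical acceleration term can misestimate resistance.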
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from the various models often show large errors compared with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and extruder channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago. However, that work had only limited success because of the capability of the computers and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, including Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed with the code developed to simulate the two-phase flow in single-screw extruders.
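The geometry optimization described above is, in structure, a constrained maximization over two design variables. The sketch below shows that structure as a plain grid search; the `output_rate` and `exit_temperature` callables are hypothetical stand-ins for the full 3-D two-phase simulation, and the real code's search strategy and mesh partitioning are not reproduced here.

```python
def optimize_screw(output_rate, exit_temperature, t_max, pitches, depths):
    """Search over screw lead (pitch) and metering-channel depth:
    maximize the extruder output rate subject to the constraint that
    the melt temperature at the exit does not exceed t_max.

    output_rate(p, d), exit_temperature(p, d): hypothetical callables
    standing in for the flow simulation at pitch p and depth d.
    Returns (rate, pitch, depth) for the best feasible design, or
    None if no candidate satisfies the temperature constraint."""
    best = None
    for p in pitches:
        for d in depths:
            if exit_temperature(p, d) > t_max:
                continue  # infeasible: exit temperature too high
            q = output_rate(p, d)
            if best is None or q > best[0]:
                best = (q, p, d)
    return best
```

Each candidate evaluation is itself a full simulation, so in practice the cost of the search is dominated by the flow solver rather than by the optimization logic.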