947 results for "Location-dependent control-flow patterns"
Abstract:
In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximate GP and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides reasonably accurate state estimation using a small number of basis functions.
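For context, the Ornstein-Uhlenbeck process used as the test case above can be simulated with a basic Euler-Maruyama scheme, i.e. exactly the kind of fixed time discretisation the BF-VGPA approach is designed to avoid. This is an illustrative sketch; parameter values are invented, not the paper's:

```python
import math
import random

def simulate_ou(theta=2.0, mu=0.0, sigma=0.5, x0=1.0, dt=0.01, n_steps=1000, seed=42):
    """Euler-Maruyama simulation of the OU SDE dX = theta*(mu - X) dt + sigma dW."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        x += theta * (mu - x) * dt + sigma * dw
        path.append(x)
    return path

path = simulate_ou()  # mean-reverts from x0 toward mu
```

The path relaxes toward the stationary distribution (mean mu, standard deviation sigma/sqrt(2*theta)); a smoother such as VGPA would infer the latent path from noisy observations of trajectories like this one.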
Abstract:
In recent years structured packings have become more widely used in the process industries because of their improved volumetric efficiency. Most structured packings consist of corrugated sheets placed in the vertical plane. The corrugations provide a regular network of channels for vapour-liquid contact. Until recently it has been necessary to develop new packings by trial and error, testing new shapes in the laboratory. The orderly, repetitive nature of the channel network produced by a structured packing suggests it may be possible to develop improved structured packings by applying computational fluid dynamics (CFD) to calculate the packing performance and evaluate changes in shape, so as to reduce the need for laboratory testing. In this work the CFD package PHOENICS has been used to predict the flow patterns produced in the vapour phase as it passes through the channel network. A particular novelty of the approach is to set up a method of solving the Navier-Stokes equations for any particular intersection of channels. The flow pattern of the streams leaving the intersection is then made the input to the downstream intersection. In this way the flow pattern within a section of packing can be calculated. The resulting heat or mass transfer performance can be calculated by other standard CFD procedures. The CFD predictions revealed a circulation developing within the channels which produces a loss in mass transfer efficiency. The calculations explained and predicted a change in mass transfer efficiency with the depth of the sheets. This effect was also shown experimentally. New shapes of packing were proposed to remove the circulation, and these were evaluated using CFD. A new shape was chosen and manufactured. This was tested experimentally and found to have a higher mass transfer efficiency than the standard packing.
Abstract:
Finite element simulations have been performed alongside normal-mode linear stability analysis to examine the development of volumetrically heated flow patterns in a horizontal layer controlled by the Prandtl number, Pr, and the Grashof number, Gr. The fluid was bounded by an isothermal plane above an adiabatic plane. In the simulations performed here, a number of convective polygonal planforms occurred as Gr increased above the critical Grashof number, Grc, at Pr = 7, while roll structures were observed for Pr < 1 at 2Grc.
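For reference, one common convention for the two dimensionless groups named above is the following (conventions vary between authors, and for internally heated layers the temperature scale is often built from the volumetric heating rate rather than an imposed temperature difference):

```latex
\mathrm{Pr} = \frac{\nu}{\alpha},
\qquad
\mathrm{Gr} = \frac{g\,\beta\,\Delta T\,L^{3}}{\nu^{2}}
```

where ν is the kinematic viscosity, α the thermal diffusivity, g gravity, β the thermal expansion coefficient, ΔT the temperature scale, and L the layer depth.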
Abstract:
NMDA receptors (NMDAr) are known to undergo recycling and lateral diffusion in postsynaptic spines and dendrites. However, NMDAr are also present as autoreceptors on glutamate terminals, where they act to facilitate glutamate release, but it is not known whether these receptors are also mobile. We have used functional pharmacological approaches to examine whether NMDA receptors at excitatory synapses in the rat entorhinal cortex are mobile at either postsynaptic sites or in presynaptic terminals. When NMDAr-mediated evoked EPSCs (eEPSCs) were blocked by MK-801, they showed no evidence of recovery when the irreversible blocker was removed, suggesting that postsynaptic NMDAr were relatively stably anchored at these synapses. However, using frequency-dependent facilitation of AMPA receptor (AMPAr)-mediated eEPSCs as a reporter of presynaptic NMDAr activity, we found that when facilitation was blocked with MK-801 there was a rapid (approximately 30-40 min) anomalous recovery upon removal of the antagonist. This was not observed when global NMDAr blockade was induced by combined perfusion with MK-801 and NMDA. Anomalous recovery was accompanied by an increase in frequency of spontaneous EPSCs, and a variable increase in frequency-facilitation. Following recovery from blockade of presynaptic NMDAr with a competitive antagonist, frequency-dependent facilitation of AMPAr-mediated eEPSCs was also transiently enhanced. Finally, an increase in frequency of miniature EPSCs induced by NMDA was succeeded by a persistent decrease. Our data provide the first evidence for mobility of NMDAr in presynaptic terminals, and may point to a role of this process in activity-dependent control of glutamate release.
Abstract:
Finite element simulations have been performed alongside Galerkin-type calculations to examine the development of volumetrically heated flow patterns in a horizontal layer controlled by the Prandtl number, Pr, and the Grashof number, Gr. The fluid was bounded by an isothermal plane above an adiabatic plane. In the simulations performed here, a number of convective polygonal planforms occurred as Gr increased above the critical Grashof number, Grc, at Pr = 7, while roll structures were observed for Pr < 1 at 2Grc.
Abstract:
Software product line modeling aims at capturing a set of software products in an economic yet meaningful way. We introduce a class of variability models that capture the sharing between the software artifacts forming the products of a software product line (SPL) in a hierarchical fashion, in terms of commonalities and orthogonalities. Such models are useful when analyzing and verifying all products of an SPL, since they provide a scheme for divide-and-conquer-style decomposition of the analysis or verification problem at hand. We define an abstract class of SPLs for which variability models can be constructed that are optimal w.r.t. the chosen representation of sharing. We show how the constructed models can be fed into a previously developed algorithmic technique for compositional verification of control-flow temporal safety properties, so that the properties to be verified are iteratively decomposed into simpler ones over orthogonal parts of the SPL, and are not re-verified over the shared parts. We provide tool support for our technique, and evaluate our tool on a small but realistic SPL of cash desks.
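The commonality/orthogonality decomposition described above can be illustrated with a toy model in which products are plain feature sets. This sketch, with invented cash-desk feature names, is only an analogy for the paper's formal variability models, not their actual construction:

```python
# Toy SPL: each product is a set of software artifacts (features).
products = {
    "basic":   frozenset({"scan", "pay_cash"}),
    "card":    frozenset({"scan", "pay_cash", "pay_card"}),
    "loyalty": frozenset({"scan", "pay_cash", "loyalty_points"}),
}

# Commonality: artifacts shared by every product.
# In a divide-and-conquer verification, these are verified once, not re-verified per product.
common = frozenset.intersection(*products.values())

# Orthogonal (variable) parts: the per-product remainder, analysed separately.
variable = {name: feats - common for name, feats in products.items()}
```

Here `common` is `{"scan", "pay_cash"}` and each entry of `variable` holds only what is specific to that product, which is the sharing structure a hierarchical variability model makes explicit.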
Abstract:
Light rainfall is the baseline input to the annual water budget in mountainous landscapes throughout the tropics and at mid-latitudes. In the Southern Appalachians, the contribution from light rainfall ranges from 50-60% during wet years to 80-90% during dry years, with convective activity and tropical cyclone input providing most of the interannual variability. The Southern Appalachians is a region characterized by rich biodiversity that is vulnerable to land use/land cover changes due to its proximity to a rapidly growing population. Persistent near-surface moisture and the associated microclimates observed in this region have been well documented since the colonization of the area in terms of species health, fire frequency, and overall biodiversity. The overarching objective of this research is to elucidate the microphysics of light rainfall and the dynamics of low-level moisture in the inner region of the Southern Appalachians during the warm season, with a focus on orographically mediated processes. The overarching research hypothesis is that the physical processes leading to and governing the life cycle of orographic fog, low-level clouds, and precipitation, and their interactions, are strongly tied to landform, land cover, and the diurnal cycles of flow patterns, radiative forcing, and surface fluxes at the ridge-valley scale. The following science questions are addressed specifically: 1) How do orographic clouds and fog affect the hydrometeorological regime from event to annual scale and as a function of terrain characteristics and land cover? 2) What are the source areas, governing processes, and relevant time scales of near-surface moisture convergence patterns in the region? 3) What are the four-dimensional microphysical and dynamical characteristics, including variability and controlling factors and processes, of fog and light rainfall?
The research was conducted with two major components: 1) ground-based high-quality observations using multi-sensor platforms, and 2) interpretive numerical modeling guided by analysis of the in situ data collection. Findings illuminate a high level of spatial (down to the ridge scale) and temporal (from event to annual scale) heterogeneity in the observations, and a significant impact on the hydrological regime as a result of seeder-feeder interactions among fog, low-level clouds, and stratiform rainfall that enhance coalescence efficiency and lead to significantly higher rainfall rates at the land surface. Specifically, results show that enhancement of up to one order of magnitude in short-term accumulation can occur as a result of concurrent fog presence. Results also show that events are modulated strongly by terrain characteristics, including elevation, slope, geometry, and land cover. These factors produce interactions between highly localized flows and gradients of temperature and moisture with larger-scale circulations. The resulting observations of drop size distributions (DSD) and rainfall patterns are stratified by region and altitude and exhibit clear diurnal and seasonal cycles.
Abstract:
Computational fluid dynamic (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings is currently unfeasible due to their complex, computationally intensive and time-consuming nature. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off between physiological accuracy, patient-specificity, simplicity and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometrical manipulation techniques to obtain a model of the patient's vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distributions of blood flow through each outlet branch. As long as the appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use and more practical than their impedance counterparts, are sufficient to study aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities.
The only situations in which impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where the use of resistance boundary conditions leads to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may be suitable to study patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
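A resistance outlet boundary condition ties outlet pressure to flow through P = R·Q. For outlet branches that share a common upstream pressure, this implies a flow split inversely proportional to the resistances, which is the "relative distribution of blood flow through each outlet branch" the study identifies as the dominant factor. A minimal sketch with illustrative resistance values (not patient data):

```python
def flow_split(resistances, q_total):
    """Split a total inflow among parallel outlets, each obeying P = R * Q,
    assuming every outlet sees the same upstream pressure P.

    From P = R_i * Q_i and sum(Q_i) = q_total:
        Q_i = q_total * (1/R_i) / sum(1/R_j)
    """
    conductances = [1.0 / r for r in resistances]
    total = sum(conductances)
    return [q_total * g / total for g in conductances]

# Two outlet branches, one with twice the resistance of the other:
flows = flow_split([1.0, 2.0], q_total=6.0)  # approximately [4.0, 2.0]
```

Choosing the R_i so that these splits match physiological flow distributions is what makes the simpler resistance conditions reproduce the impedance-based results for shear stress and velocity fields.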
Abstract:
This paper focuses on two basic issues: the anxiety-generating nature of the interpreting task and the relevance of interpreter trainees' academic self-concept. The first has already been acknowledged, although not extensively researched, in several papers, and the second has only been mentioned briefly in the interpreting literature. This study seeks to examine the relationship between the anxiety and academic self-concept constructs among interpreter trainees. An adapted version of the Foreign Language Anxiety Scale (Horwitz et al., 1986), the Academic Autoconcept Scale (Schmidt, Messoulam & Molina, 2008) and a background information questionnaire were used to collect data. Student's t-test results indicated that female students reported experiencing significantly higher levels of anxiety than male students. No significant gender difference in self-concept levels was found. Correlation analysis results suggested, on the one hand, that younger would-be interpreters suffered from higher anxiety levels and students with higher marks tended to have lower anxiety levels; and, on the other hand, that younger students had lower self-concept levels and higher-ability students held higher self-concept levels. In addition, the results revealed that students with higher anxiety levels tended to have lower self-concept levels. Based on these findings, recommendations for interpreting pedagogy are discussed.
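The group comparison reported above rests on a two-sample t-test. A minimal sketch of the statistic, using Welch's unequal-variance variant and made-up anxiety scores (not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Illustrative anxiety scores (higher = more anxious), invented for the sketch:
female_scores = [62, 70, 68, 75, 66, 71]
male_scores = [55, 58, 60, 52, 57, 61]
t = welch_t(female_scores, male_scores)  # positive t: first group scores higher
```

A significance decision would then compare `t` against the t distribution with Welch-Satterthwaite degrees of freedom; in practice a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` does both steps.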
Abstract:
Understanding the population structure and patterns of gene flow within species is of fundamental importance to the study of evolution. In the fields of population and evolutionary genetics, measures of genetic differentiation are commonly used to gather this information. One potential caveat is that these measures assume gene flow to be symmetric. However, asymmetric gene flow is common in nature, especially in systems driven by physical processes such as wind or water currents. As information about levels of asymmetric gene flow among populations is essential for the correct interpretation of the distribution of contemporary genetic diversity within species, this should not be overlooked. To obtain information on asymmetric migration patterns from genetic data, complex models based on maximum-likelihood or Bayesian approaches generally need to be employed, often at great computational cost. Here, a new simpler and more efficient approach for understanding gene flow patterns is presented. This approach allows the estimation of directional components of genetic divergence between pairs of populations at low computational effort, using any of the classical or modern measures of genetic differentiation. These directional measures of genetic differentiation can further be used to calculate directional relative migration and to detect asymmetries in gene flow patterns. This can be done in a user-friendly web application called divMigrate-online introduced in this study. Using simulated data sets with known gene flow regimes, we demonstrate that the method is capable of resolving complex migration patterns under a range of study designs.
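The directional idea described above can be sketched in a few lines: compare each population of a pair against a hypothetical pooled population, so that the two comparisons can differ. The differentiation measure and numbers below are illustrative only, not divMigrate's exact formulas:

```python
def differentiation(p, q):
    """A simple differentiation measure between two allele-frequency vectors:
    mean squared frequency difference, scaled by expected heterozygosity
    of the pooled frequencies (illustrative, not a standard estimator)."""
    n = len(p)
    msd = sum((a - b) ** 2 for a, b in zip(p, q)) / n
    mid = [(a + b) / 2 for a, b in zip(p, q)]
    het = sum(2 * f * (1 - f) for f in mid) / n
    return msd / het if het > 0 else 0.0

def directional_divergence(pop_a, pop_b):
    """Divergence of each population from the hypothetical pooled population.
    Asymmetry between the two values hints at asymmetric gene flow."""
    pool = [(a + b) / 2 for a, b in zip(pop_a, pop_b)]
    return differentiation(pop_a, pool), differentiation(pop_b, pool)

# Two populations at two loci (allele frequencies are invented):
d_a, d_b = directional_divergence([0.9, 0.8], [0.5, 0.5])
```

Because both populations sit at the same distance from the midpoint pool, any asymmetry in `d_a` versus `d_b` here comes from the heterozygosity scaling; the real method converts such directional divergences into relative migration rates.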
Abstract:
Modern software systems are large, complex, and critical. The need for quality demands extensive testing, which consumes large amounts of resources during the development and maintenance of these systems. Various techniques make it possible to reduce the costs associated with testing activities. Our work falls within this context and aims to direct the testing effort toward the software components most at risk, using certain source code attributes. Through several empirical studies conducted on large open-source software systems developed with object-oriented technology, we identified and studied the metrics that characterize unit testing effort from several angles. We also studied the links between this testing effort and the metrics of software classes, including quality indicators. Quality indicators are a synthetic metric, introduced in our earlier work, that captures control flow as well as various characteristics of the software. We explored several techniques for directing the testing effort toward at-risk components based on these source code attributes, using machine learning algorithms. By grouping software metrics into families, we proposed an approach based on risk analysis of software classes. The results we obtained show the links between unit testing effort and source code attributes, including quality indicators, and suggest the possibility of directing the testing effort using metrics.
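The idea of directing test effort toward risky classes based on source code metrics can be sketched as a simple threshold-based ranking. Metric names, thresholds, and class data below are invented for illustration and are not the thesis's quality indicators or models:

```python
# Toy risk ranking of classes from source-code metrics (all values invented).
classes = [
    {"name": "Parser", "loc": 1200, "coupling": 14, "complexity": 35},
    {"name": "Config", "loc": 150, "coupling": 2, "complexity": 4},
    {"name": "Scheduler", "loc": 800, "coupling": 6, "complexity": 22},
]

# Illustrative thresholds above which a metric flags risk.
THRESHOLDS = {"loc": 500, "coupling": 8, "complexity": 15}

def risk_score(cls):
    """Count how many metrics exceed their threshold."""
    return sum(cls[m] > t for m, t in THRESHOLDS.items())

# Rank classes so the unit-testing effort goes to the riskiest first.
ranked = sorted(classes, key=risk_score, reverse=True)
```

A machine-learning variant would replace `risk_score` with a classifier trained on historical metric/test-effort data, but the ranking-and-prioritizing workflow stays the same.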
Abstract:
The building envelope is the principal means of interaction between the indoors and the environment, with a direct influence on the thermal and energy performance of the building. By intervening in the envelope, through the proposal of specific architectural elements, it is possible to promote the use of passive conditioning strategies such as natural ventilation. Cross ventilation is recommended by NBR 15220-3 as the main bioclimatic strategy for the hot and humid climate of Natal/RN, offering, among other benefits, the thermal comfort of occupants. The analysis tools for natural ventilation, on the other hand, cover a variety of techniques, from simplified calculation methods to computational fluid dynamics, whose limitations are discussed in several papers, but without detailing the problems encountered. In this sense, the present study aims to evaluate the potential of wind catchers, envelope elements used to increase natural ventilation in the building, through simplified CFD simulation. Moreover, it seeks to quantify the limitations encountered during the analysis. The procedure adopted to evaluate the implementation and efficiency of the elements was CFD (Computational Fluid Dynamics) simulation with the software DesignBuilder CFD. A base case was defined, to which wind catchers were added in various configurations, in order to compare them with each other and assess the differences in the flows and air speeds obtained. Initially, sensitivity tests were performed to become familiar with the software and to observe simulation patterns, mapping the settings used and the simulation time for each case. The results show the limitations encountered during the simulation process, as well as an overview of the efficiency and potential of wind catchers: an increase in ventilation with the use of catchers, differences in air flow patterns, and a significant increase in indoor air speeds, besides changes due to different element geometries.
It is considered that the software used can help designers during preliminary analysis in the early stages of design.
Abstract:
Normal grain growth of calcite was investigated by combining grain size analysis of calcite across the contact aureole of the Adamello pluton with grain growth modeling based on a thermal model of the surroundings of the pluton. In an unbiased model system, i.e., one with only location-dependent variations in the temperature-time path, 2/3 of grain growth occurs during prograde and 1/3 during retrograde metamorphism at all locations. In contrast to this idealized situation, in the field example three groups can be distinguished, which are characterized by variations in their grain size versus temperature relationships: Group I occurs at low temperatures and the grain size remains constant because nano-scale second phase particles of organic origin inhibit grain growth in the calcite aggregates under these conditions. In the presence of an aqueous fluid, these second phases decay at a temperature of about 350 °C, enabling the onset of grain growth in calcite. In the following growth period, fluid-enhanced group II and slower group III growth occurs. For group II a continuous and intense grain size increase with T is typical, while grain growth decreases with T for group III. None of the observed trends correlate with experimentally based grain growth kinetics, probably due to differences between nature and experiment which have not yet been investigated (e.g., porosity, second phases). Therefore, grain growth modeling was used to iteratively improve the correlation between measured and modeled grain sizes by optimizing the activation energy (Q), pre-exponential factor (k0) and grain size exponent (n). For n = 2, Q of 350 kJ/mol with k0 of 1.7×10^21 μm^n s^-1, and Q of 35 kJ/mol with k0 of 2.5×10^-5 μm^n s^-1 were obtained for groups II and III, respectively.
With respect to future work, field-data based grain growth modeling might be a promising tool for investigating the influences of secondary effects like porosity and second phases on grain growth in nature, and to unravel differences between nature and experiment.
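A common form of the normal grain growth law behind such modeling is d^n − d0^n = k0·exp(−Q/(R·T))·t. The sketch below plugs in the group II parameters quoted in the abstract; the initial grain size, temperature, and duration are illustrative assumptions, not values from the study:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def grown_size(d0_um, t_s, temp_k, n, q_j_per_mol, k0):
    """Normal grain growth law: d^n - d0^n = k0 * exp(-Q/(R*T)) * t.
    Returns the final grain size in the same length unit as d0 (here um)."""
    rate = k0 * math.exp(-q_j_per_mol / (R_GAS * temp_k))  # um^n per second
    return (d0_um ** n + rate * t_s) ** (1.0 / n)

# Group II parameters from the abstract: n=2, Q=350 kJ/mol, k0=1.7e21 um^n/s.
# Assumed scenario: 10 um starting size held ~1 Myr at ~350 degC (623 K).
d_final = grown_size(d0_um=10.0, t_s=3.15e13, temp_k=623.0, n=2,
                     q_j_per_mol=350e3, k0=1.7e21)
```

With these assumptions the law yields a final size of a few hundred microns, and the strong exp(−Q/RT) dependence shows why the grain size versus temperature trend across the aureole constrains Q and k0.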
Abstract:
Gas-liquid two-phase flow is very common in industrial applications, especially in the oil and gas, chemical, and nuclear industries. As operating conditions change, such as the flow rates of the phases, the pipe diameter and the physical properties of the fluids, different configurations called flow patterns take place. In the case of oil production, the most frequent pattern found is slug flow, in which continuous liquid plugs (liquid slugs) and gas-dominated regions (elongated bubbles) alternate. Offshore scenarios where the pipe lies on the seabed with slight changes of direction are extremely common. With those scenarios and issues in mind, this work presents an experimental study of two-phase gas-liquid slug flows in a duct with a slight change of direction, represented by a horizontal section followed by a downward-sloping pipe stretch. The experiments were carried out at NUEM (Núcleo de Escoamentos Multifásicos, UTFPR). The flow initiated and developed under controlled conditions, and its characteristic parameters were measured with resistive sensors installed at four pipe sections. Two high-speed cameras were also used. With the measured results, the influence of a slight direction change on the slug flow structures and on the transition between slug flow and stratified flow in the downward section was evaluated.
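One classic estimate for a key slug-flow parameter, the translational velocity of the elongated bubbles, is a Nicklin-type correlation, U_TB = C0·U_m + 0.35·√(g·D). This is a standard textbook relation, not necessarily the model used in this particular study, and the numbers below are illustrative:

```python
import math

def bubble_velocity(u_mixture, d_pipe, c0=1.2, g=9.81):
    """Nicklin-type correlation for the elongated-bubble translational velocity
    in slug flow: U_TB = C0 * U_m + 0.35 * sqrt(g * D).
    u_mixture in m/s, d_pipe in m; C0 ~ 1.2 for turbulent flow."""
    return c0 * u_mixture + 0.35 * math.sqrt(g * d_pipe)

# Example: mixture velocity 1.5 m/s in a 26 mm pipe.
u_tb = bubble_velocity(u_mixture=1.5, d_pipe=0.026)  # roughly 2 m/s
```

Comparing velocities measured by the resistive sensors against such a baseline correlation is one way changes in slug structure across a direction change can be quantified.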