976 results for Work flow


Relevance: 30.00%

Abstract:

Experimental two-phase frictional pressure drop and flow boiling heat transfer results are presented for a horizontal 2.32-mm ID stainless-steel tube using R245fa as working fluid. The frictional pressure drop data were obtained under adiabatic and diabatic conditions. Experiments were performed for mass velocities ranging from 100 to 700 kg m⁻² s⁻¹, heat fluxes from 0 to 55 kW m⁻², exit saturation temperatures of 31 and 41 °C, and vapor qualities from 0.10 to 0.99. Pressure drop gradients from 1 to 70 kPa m⁻¹ and heat transfer coefficients from 1 to 7 kW m⁻² K⁻¹ were measured. The heat transfer coefficient was found to be a strong function of the heat flux, mass velocity, and vapor quality. Five frictional pressure drop predictive methods were compared against the experimental database; the Cioncolini et al. (2009) method performed best. Six flow boiling heat transfer predictive methods were also compared against the present database. Liu and Winterton (1991), Zhang et al. (2004), and Saitoh et al. (2007) were ranked as the best methods, predicting the experimental flow boiling heat transfer data with an average error of around 19%.

Relevance: 30.00%

Abstract:

The aim of this study was to compare indirect immunofluorescence assay (IFA) and flow cytometry against the clinical and laboratory evaluation of patients before and after clinical cure, and to evaluate the applicability of flow cytometry in the post-therapeutic monitoring of patients with American tegumentary leishmaniasis (ATL). Sera from 14 patients before treatment (BT), 13 patients 1 year after treatment (AT), and 10 patients 2 and 5 years AT were evaluated. The flow cytometry results were expressed as levels of IgG reactivity, based on the percentage of positive fluorescent parasites (PPFP). The 1:256 sample dilution allowed us to differentiate individuals BT and AT. Comparative analysis of IFA and flow cytometry by ROC (receiver operating characteristic) curve showed, respectively, AUC (area under the curve) = 0.8 (95% CI = 0.64–0.89) and AUC = 0.90 (95% CI = 0.75–0.95), demonstrating that flow cytometry had equivalent accuracy. Our data showed that a PPFP of 20% was the best cut-off point identified by the ROC curve for the flow cytometry assay. This test showed a sensitivity of 86% and a specificity of 77%, while the IFA had a sensitivity of 78% and a specificity of 85%. After-treatment screening at 1, 2 and 5 years AT, assessed through comparative analysis of the performance indexes of the two techniques, showed flow cytometry performing on a par with IFA. However, flow cytometry appears to be the better diagnostic alternative when applied to the cure criterion in ATL. The information obtained in this work opens perspectives for monitoring cure after the treatment of ATL.
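
As an illustration of how such a ROC-derived cut-off is usually obtained (a minimal sketch with made-up PPFP values; the paper does not state which criterion selected the threshold, so Youden's J statistic here is an assumption):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical PPFP readings (%) and cure status (1 = after clinical cure)
ppfp  = np.array([35, 28, 22, 40, 12,  8, 15,  9, 31,  6])
cured = np.array([ 0,  0,  0,  0,  1,  1,  1,  1,  0,  1])

# Disease-positive class = not cured; PPFP is the score
fpr, tpr, thresholds = roc_curve(1 - cured, ppfp)
print("AUC:", roc_auc_score(1 - cured, ppfp))

best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print("cut-off (PPFP %):", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])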

Relevance: 30.00%

Abstract:

[EN] The accuracy and performance of current variational optical flow methods have increased considerably in recent years. The complexity of these techniques is high, and care must be taken with their implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase the performance, we use a simple strategy to parallelize the execution on multi-core processors.
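
An energy of the kind described, written schematically (a sketch under the usual conventions; the exact weights γ, α and the robust penalty Ψ are choices of the paper, not reproduced here), combines the two constancy terms with the smoothness term:

\[
E(u, v) = \int_\Omega \Psi\!\big(|I_2(\mathbf{x} + \mathbf{u}) - I_1(\mathbf{x})|^2\big)\, d\mathbf{x}
        + \gamma \int_\Omega \Psi\!\big(|\nabla I_2(\mathbf{x} + \mathbf{u}) - \nabla I_1(\mathbf{x})|^2\big)\, d\mathbf{x}
        + \alpha \int_\Omega \Psi\!\big(|\nabla u|^2 + |\nabla v|^2\big)\, d\mathbf{x}
\]

where u = (u, v) is the flow field between images I₁ and I₂.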

Relevance: 30.00%

Abstract:

[EN] In this work we propose a new variational model for the consistent estimation of motion fields. The aim of this work is to develop appropriate spatio-temporal coherence models. To this end, we propose two main contributions: a nonlinear flow constancy assumption, similar in spirit to the nonlinear brightness constancy assumption, which conveniently relates flow fields at different time instants; and a nonlinear temporal regularization scheme, which complements the spatial regularization and can cope with piecewise-continuous motion fields. These contributions yield a congruent variational model, since all the energy terms except the spatial regularization are based on nonlinear warpings of the flow field. This model is more general than its spatial counterpart, provides more accurate solutions and preserves the continuity of optical flows in time. In the experimental results, we show that the method attains better results and, in particular, considerably improves the accuracy in the presence of large displacements.
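
By analogy with the nonlinear brightness constancy assumption I₂(x + uₜ(x)) ≈ I₁(x), one plausible reading of the flow constancy assumption (an interpretation, not necessarily the paper's exact formulation) is that the flow at time t + 1, warped by the flow at time t, should match the flow at time t:

\[
\mathbf{u}_{t+1}\big(\mathbf{x} + \mathbf{u}_t(\mathbf{x})\big) \approx \mathbf{u}_t(\mathbf{x})
\]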

Relevance: 30.00%

Abstract:

[EN] The aim of this work is to propose a new method for estimating the backward flow directly from the optical flow. We assume that the optical flow has already been computed and we need to estimate the inverse mapping. This mapping is not bijective due to the presence of occlusions and disocclusions, so it is not possible to estimate the inverse function over the whole domain; values in these regions have to be guessed from the available information. We propose an accurate algorithm to calculate the backward flow uniquely from the optical flow, using a simple relation. Occlusions are filled by selecting the maximum motion, and disocclusions are filled with two different strategies: a min-fill strategy, which fills each disoccluded region with the minimum value around the region; and a restricted min-fill approach, which selects the minimum value in a close neighborhood. In the experimental results, we show the accuracy of the method and compare the results using these two strategies.
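
A minimal sketch of this kind of inversion (assuming nearest-pixel splatting and a crude global min-fill; the paper's min-fill strategies operate around each disoccluded region, and all names here are hypothetical):

import numpy as np

def backward_flow(fwd):
    # fwd: forward flow of shape (H, W, 2); pixel (x, y) maps to (x, y) + fwd[y, x].
    h, w, _ = fwd.shape
    bwd = np.full((h, w, 2), np.nan)
    best = np.full((h, w), -1.0)  # largest motion magnitude splatted so far
    for y in range(h):
        for x in range(w):
            u, v = fwd[y, x]
            tx, ty = int(round(x + u)), int(round(y + v))
            if 0 <= tx < w and 0 <= ty < h:
                mag = np.hypot(u, v)
                if mag > best[ty, tx]:        # occlusion: keep the maximum motion
                    best[ty, tx] = mag
                    bwd[ty, tx] = (-u, -v)    # simple relation: bwd(x + u) = -u
    holes = np.isnan(bwd[..., 0])             # disocclusions: no source mapped here
    if holes.any() and (~holes).any():
        mags = np.hypot(bwd[..., 0], bwd[..., 1])   # NaN at holes, ignored below
        bwd[holes] = bwd.reshape(-1, 2)[np.nanargmin(mags)]
    return bwd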

Relevance: 30.00%

Abstract:

[EN] The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework where the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow with the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions; in the presence of large displacements, it fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formulation, we linearize it and solve the method iteratively at each scale. In this sense, there are two common approaches: one that computes the motion increment in the iterations, and the one we follow, which computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
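
For reference, the linearized optical flow constraint and the Horn and Schunck energy take the standard form (Iₓ, I_y, Iₜ are the image derivatives; α weights the regularity term):

\[
I_x u + I_y v + I_t = 0, \qquad
E(u, v) = \int_\Omega \big(I_x u + I_y v + I_t\big)^2\, d\mathbf{x}
        + \alpha \int_\Omega \big(|\nabla u|^2 + |\nabla v|^2\big)\, d\mathbf{x}
\]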

Relevance: 30.00%

Abstract:

[EN] The aim of this work is to propose a model for computing the optical flow in a sequence of images. We introduce a new temporal regularizer that is suitable for large displacements. We propose to decouple the spatial and temporal regularizations to avoid an incongruous formulation. For the spatial regularization we use the Nagel-Enkelmann operator, together with a newly designed temporal regularization. Our model is based on an energy functional that yields a partial differential equation (PDE). This PDE is embedded into a multi-pyramidal strategy to recover large displacements. A gradient descent technique is applied at each scale to reach the minimum.
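
Schematically (a sketch; β and the exact temporal term are the paper's design choices, not reproduced here), the decoupled energy adds a separate temporal term to a spatial term regularized with the Nagel-Enkelmann diffusion tensor D(∇I), which inhibits smoothing across image edges:

\[
E(\mathbf{u}) = E_{\text{data}}(\mathbf{u})
 + \alpha \int_\Omega \big( \nabla u^{\top} D(\nabla I)\, \nabla u + \nabla v^{\top} D(\nabla I)\, \nabla v \big)\, d\mathbf{x}
 + \beta\, E_{\text{temporal}}(\mathbf{u}),
\qquad
D(\nabla I) = \frac{1}{|\nabla I|^2 + 2\lambda^2} \left( \nabla I^{\perp} \nabla I^{\perp\top} + \lambda^2\, \mathrm{Id} \right)
\]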

Relevance: 30.00%

Abstract:

[EN] In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
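
The continuous L1 penalty referred to here is commonly the regularized absolute value Ψ(s²) = √(s² + ε²) with a small ε, applied to data and smoothness terms alike; on the flow gradients it reads

\[
\Psi(s^2) = \sqrt{s^2 + \epsilon^2}, \qquad
E_{\text{smooth}}(u, v) = \int_\Omega \Psi\!\big(|\nabla u|^2 + |\nabla v|^2\big)\, d\mathbf{x}
\]

which for small ε approximates the total variation of the flow field, hence the TV regularization mentioned above.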

Relevance: 30.00%

Abstract:

We analyse the influence of colour information in optical flow methods. Typically, most of these techniques compute their solutions using grayscale intensities, due to their simplicity and faster processing, ignoring the colour features. However, current processing systems have reduced this computational cost and, on the other hand, it is reasonable to assume that a colour image offers more details of the scene, which should make it easier to find better flow fields. The aim of this work is to determine whether a multi-channel approach provides enough of an improvement to justify its use. In order to address this evaluation, we use a multi-channel implementation of a well-known TV-L1 method. Furthermore, we review the state of the art in colour optical flow methods. In the experiments, we study various solutions using grayscale and RGB images from recent evaluation datasets to verify the colour benefits in motion estimation.
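
A common way to make a TV-L1 data term multi-channel (one plausible coupling; channels can also be penalized separately) is to sum the constancy errors over the RGB channels inside the robust penalty:

\[
E_{\text{data}}(\mathbf{u}) = \int_\Omega \Psi\!\Big( \sum_{c \in \{R,G,B\}} \big| I_2^{c}(\mathbf{x} + \mathbf{u}) - I_1^{c}(\mathbf{x}) \big|^2 \Big)\, d\mathbf{x}
\]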

Relevance: 30.00%

Abstract:

[EN] We analyze the discontinuity-preserving problem in TV-L1 optical flow methods. This type of method typically creates rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions for mitigating the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flows. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes emerge to solve the problem. One of these consists in separating the pure TV process from the mitigating strategy; this has been used in another work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison between the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can be easily achieved by using strong regularizations and high penalizations at image contours.
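
The image-driven weighting in question typically multiplies the TV term by a decreasing function of the image gradient; a common choice (a sketch, with λ and κ as tunable parameters) is

\[
E_{\text{smooth}}(u, v) = \int_\Omega g\big(|\nabla I|\big)\, \big( |\nabla u| + |\nabla v| \big)\, d\mathbf{x},
\qquad
g\big(|\nabla I|\big) = e^{-\lambda |\nabla I|^{\kappa}}
\]

so that diffusion is inhibited where |∇I| is large, i.e. at image contours.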

Relevance: 30.00%

Abstract:

[EN] The aim of this work is to study several strategies for the preservation of flow discontinuities in variational optical flow methods. We analyze the combination of robust functionals and diffusion tensors in the smoothness assumption. Our study includes the use of tensors based on decreasing functions, which have been shown to provide good results. However, this approach presents several limitations and usually does not perform better than other basic approaches. It typically introduces instabilities in the computed motion fields, in the form of independent "blobs" of vectors with large magnitude...

Relevance: 30.00%

Abstract:

Aerosol particles and water vapour are two important constituents of the atmosphere. Their interaction, i.e. the condensation of water vapour on particles, brings about the formation of cloud, fog and raindrops, driving the water cycle on Earth and playing a role in climate change. Understanding the roles of water vapour and aerosol particles in this interaction has become an essential part of understanding the atmosphere. In this work, the heterogeneous nucleation on pre-existing aerosol particles by the condensation of water vapour in the flow of a capillary nozzle was investigated, including theoretical and numerical modelling as well as experiments on this condensation process. Based on reasonable results from the theoretical and numerical modelling, the idea of designing a new nozzle condensation nucleus counter (Nozzle-CNC), i.e. of utilising the capillary nozzle to create an expanding water-saturated air flow, was then put forward, and various experiments were carried out with this Nozzle-CNC under different experimental conditions. Firstly, the air stream in the long capillary nozzle with an inner diameter of 1.0 mm was modelled as a steady, compressible and heat-conducting turbulent flow with the CFX-FLOW3D computational program. An adiabatic and isentropic cooling in the nozzle was found. A supersaturation can be created in the nozzle if the inlet flow is water-saturated, and its value depends principally on the flow velocity or flow rate through the nozzle. Secondly, a particle condensational growth model in the air stream was developed. An extended Mason's diffusion growth equation was given, with a size correction for particles beyond the continuum regime and a correction for a certain particle Reynolds number in an accelerating state. The modelling results show the rapid condensational growth of aerosol particles, especially fine particles, in the nozzle stream. On the one hand, this may induce evident 'over-sizing' and 'over-numbering' effects in aerosol measurements, as nozzle designs are widely employed for producing accelerating and focused aerosol beams in instruments such as the optical particle counter (OPC) and the aerodynamic particle sizer (APS). On the other hand, it can be applied in constructing the Nozzle-CNC. Thirdly, based on the optimisation of the theoretical and numerical results, the new Nozzle-CNC was built, and experiments with it were carried out under various conditions such as flow rate, ambient temperature, and the fraction of aerosol in the total flow. An interesting exponential relation was found between the saturation in the nozzle and the number concentration of atmospheric nuclei, including hygroscopic nuclei (HN), cloud condensation nuclei (CCN), and traditionally measured atmospheric condensation nuclei (CN). This relation differs from the relation for the number concentration of CCN obtained by other researchers. The minimum detectable size of this Nozzle-CNC is 0.04 µm. Although further improvements are still needed, this Nozzle-CNC, in comparison with other CNCs, has several advantages: no condensation delay, as particles larger than the critical size grow simultaneously; low diffusion losses of particles; little water condensation on the inner wall of the instrument; and an adjustable saturation, and therefore a wide counting region, as well as no need for the calibration required with non-water condensing substances.
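
For context, the classical Mason diffusional growth equation that the above extends is usually written (a standard textbook form; the work's size- and Reynolds-corrected version is not reproduced here) as

\[
r \frac{dr}{dt} = \frac{S - 1}{F_k + F_d}, \qquad
F_k = \left( \frac{L}{R_v T} - 1 \right) \frac{L \rho_w}{K T}, \qquad
F_d = \frac{\rho_w R_v T}{D\, e_s(T)}
\]

where r is the droplet radius, S the saturation ratio, L the latent heat of vaporization, R_v the gas constant of water vapour, K the thermal conductivity of air, D the diffusivity of water vapour in air, e_s(T) the saturation vapour pressure and ρ_w the density of water.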

Relevance: 30.00%

Abstract:

The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. During the last years, several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the related high hazard level. This research focuses on the depositional phase of debris flows, through the application of a numerical model (DFlowz), and on hazard evaluation related to the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error in order to understand how the empirical uncertainties influence the predictions; on the other side, the research concerns the possibility of performing hazard analysis starting from the identification of catchments susceptible to debris flow and the definition of their activity level. 25 well-documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of the empirical relations between event volume and the planimetric and cross-section inundated areas, the code delineates the areas affected by an event by taking into account information about volume, preferential flow path and the digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves the calibration of the model based on factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of the deposited volume. In addition, in order to test the prediction capabilities of physically based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), takes a one-phase approach based on Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input file combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. As for the hazard analysis related to watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport and deposition, their seasonal patterns of occurrence, and the role played by bedrock geology in sediment transfer.
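
The semi-empirical scaling relations underlying DFlowz are of the mobility-relation form linking event volume V to the inundated cross-sectional area A and planimetric area B (a schematic form; the calibrated coefficients, and the uncertainty factors attached to them, are what the calibration described above addresses):

\[
A = \alpha V^{2/3}, \qquad B = \beta V^{2/3}
\]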

Relevance: 30.00%

Abstract:

Biodiesel represents a possible substitute for fossil fuels; for this reason, a good comprehension of the kinetics involved is important. Due to the complexity of the biodiesel mixture, a common practice is the use of surrogate molecules to study its reactivity. This work presents the experimental and computational results obtained for the oxidation and pyrolysis of methane and methyl formate conducted in a plug flow reactor. The work was divided into two parts: the first was the assembly of the setup, while the second was a comparison between experimental and model results, the latter obtained using models available in the literature. We started by studying methane since a validated model was available, which made it possible to verify the reliability of the experimental results. Attention was then focused on the investigation of methyl formate. All the analyses were conducted at different temperatures, pressures and, for the oxidation, different equivalence ratios. The results show that a good comprehension of the kinetics has been reached, but further effort is needed to better evaluate kinetic parameters such as the activation energy. The results also show that the setup is suitable for studying oxidation and pyrolysis, and it will therefore be employed to study longer-chain esters with the aim of better understanding the kinetics of the molecules that make up the biodiesel mixture.
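
The kinetic parameters in question enter through the rate constants, which such mechanisms typically express in modified Arrhenius form, with the activation energy E_a among the most sensitive parameters:

\[
k(T) = A\, T^{b}\, e^{-E_a / (R T)}
\]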

Relevance: 30.00%

Abstract:

A study of the pyrolysis and oxidation (φ = 0.5, 1, 2) of methane, and of methyl formate (φ = 0.5), in a laboratory flow reactor (length = 50 cm, inner diameter = 2.5 cm) has been carried out at 1-4 atm over the 300-1300 K temperature range. Analysis of the exhaust gaseous species was performed with a gas chromatographic system (Varian CP-4900 PRO Micro-GC) with a TCD detector, using helium as carrier gas for a Molecular Sieve 5Å column and nitrogen for a COX column, whose temperatures and pressures were 65 °C and 150 kPa, respectively. Model simulations using the NTUA [1], Fisher et al. [12], Grana [13] and Dooley [14] kinetic mechanisms have been performed with CHEMKIN. The work provides a basis for further development and optimization of existing detailed chemical kinetic schemes.
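
For reference, the equivalence ratio φ quoted above is the actual fuel-to-oxidizer ratio normalized by its stoichiometric value, so φ < 1 is lean, φ = 1 stoichiometric, and φ > 1 rich:

\[
\phi = \frac{(m_{\text{fuel}}/m_{\text{ox}})_{\text{actual}}}{(m_{\text{fuel}}/m_{\text{ox}})_{\text{stoich}}}
\]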