907 results for Non-uniform flow
Abstract:
During the last decade, large and costly instruments have increasingly been replaced by systems based on microfluidic devices. Microfluidic devices hold the promise of combining a small analytical laboratory onto a chip-sized substrate to identify, immobilize, separate, and purify cells, bio-molecules, toxins, and other chemical and biological materials. Compared to conventional instruments, microfluidic devices would perform these tasks faster, with higher sensitivity and efficiency, and with greater affordability. Dielectrophoresis is one of the enabling technologies for these devices. It exploits differences in particle dielectric properties to allow manipulation and characterization of particles suspended in a fluidic medium. Particles can be trapped or moved between regions of high and low electric field strength owing to polarization effects in non-uniform electric fields. By varying the frequency of the applied electric field, the magnitude and direction of the dielectrophoretic force on the particle can be controlled. Dielectrophoresis has been successfully demonstrated in the separation, transportation, trapping, and sorting of various biological particles.
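For reference, the time-averaged dielectrophoretic force on a spherical particle of radius r suspended in a medium of permittivity ε_m is commonly written (standard textbook form, not taken from this abstract) as

\[ \langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3 \,\mathrm{Re}\!\left[K(\omega)\right]\, \nabla |\mathbf{E}_{\mathrm{rms}}|^2, \qquad K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\varepsilon_m^{*}}, \]

where K(ω) is the Clausius-Mossotti factor and ε_p^*, ε_m^* are the complex permittivities of the particle and the medium. The sign of Re[K(ω)], which varies with the applied frequency ω, determines whether the particle is attracted to (positive DEP) or repelled from (negative DEP) high-field regions.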
Abstract:
This paper describes the improvements achieved in our mosaicking system to assist unmanned underwater vehicle navigation. A major advance has been attained in the processing of images of the ocean floor when light-absorption effects are evident. Owing to the absorption of natural light, underwater vehicles often require artificial light sources attached to them to provide adequate illumination for processing underwater images. Unfortunately, these flashlights tend to illuminate the scene in a non-uniform fashion. In this paper a technique to correct non-uniform lighting is proposed. The acquired frames are compensated through a point-by-point division of the image by an estimate of the illumination field. The grey levels of the resulting image are then remapped to enhance image contrast. Experiments with real images are presented.
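A minimal sketch of the compensation step described above, assuming the illumination field can be approximated by heavy Gaussian smoothing of the frame (the actual estimator used in the paper may differ); function and parameter names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(frame, sigma=50.0, eps=1e-6):
    """Compensate non-uniform lighting in a single-channel frame by dividing
    it by an estimate of the illumination field, then remapping grey levels."""
    img = frame.astype(np.float64)
    illumination = gaussian_filter(img, sigma=sigma)   # smooth estimate of the lighting field
    compensated = img / (illumination + eps)           # point-by-point division
    # Remap grey levels to the full 8-bit range to enhance contrast.
    lo, hi = compensated.min(), compensated.max()
    remapped = (compensated - lo) / (hi - lo + eps) * 255.0
    return remapped.astype(np.uint8)
```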
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level stage, where the algorithms deal with a great number of data. In a motion-estimation algorithm, correspondences between two images have to be solved at this low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Owing to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach exploiting the parallel organisation of every processor in the architecture is proposed.
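A minimal sketch of the zero-mean normalised correlation criterion on which such a correspondence search is based (a sequential reference version; the paper's contribution is its parallel implementation):

```python
import numpy as np

def normalised_correlation(patch_a, patch_b, eps=1e-12):
    """Zero-mean normalised correlation between two equally sized patches.
    Values close to 1 indicate a good correspondence even under non-uniform
    illumination, since the local mean and contrast are removed."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```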
Abstract:
The work developed in this thesis explores in depth, and contributes innovative solutions to, the problem of correspondence in underwater images. In these environments, what really complicates the processing tasks is the lack of well-defined contours caused by blurred images, a fact that is mainly due to poor illumination or to the non-uniformity of artificial lighting systems. The objectives achieved in this thesis can be highlighted along two main directions. To improve the motion-estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between image pairs. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to achieve real-time results, a novel VLSI architecture is proposed for the implementation of some of the computationally expensive parts of the motion-estimation algorithm.
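A minimal sketch of the idea of rejecting unreliable correspondences with a texture measure; the specific texture parameters used in the thesis are not given in this abstract, so local standard deviation is used here purely as an illustrative stand-in, and all names are hypothetical:

```python
import numpy as np

def has_enough_texture(patch, std_threshold=5.0):
    """Reject candidate matches taken from nearly uniform (low-texture)
    regions, where correlation scores are unreliable."""
    return float(np.std(patch.astype(np.float64))) >= std_threshold

def filter_matches(image_a, image_b, matches, win=7, std_threshold=5.0):
    """Keep only correspondences whose patches in both images are textured.
    `matches` is a list of ((ya, xa), (yb, xb)) pixel pairs."""
    half = win // 2
    kept = []
    for (ya, xa), (yb, xb) in matches:
        pa = image_a[ya - half:ya + half + 1, xa - half:xa + half + 1]
        pb = image_b[yb - half:yb + half + 1, xb - half:xb + half + 1]
        if pa.size and pb.size and has_enough_texture(pa, std_threshold) \
                and has_enough_texture(pb, std_threshold):
            kept.append(((ya, xa), (yb, xb)))
    return kept
```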
Abstract:
A wind-tunnel study was conducted to investigate the ventilation of scalars from urban-like geometries at neighbourhood scale by exploring two different geometries, a uniform-height roughness and a non-uniform-height roughness, both with equal plan and frontal densities of λ_p = λ_f = 25%. In both configurations a sub-unit of the idealized urban surface was coated with a thin layer of naphthalene to represent area sources. The naphthalene sublimation method was used to measure directly the total area-averaged transport of scalars out of the complex geometries. At the same time, naphthalene vapour concentrations controlled by the turbulent fluxes were detected using a fast Flame Ionisation Detection (FID) technique. This paper describes the novel use of a naphthalene-coated surface as an area source in dispersion studies. Particular emphasis was also given to testing whether the concentration measurements were independent of Reynolds number. For low wind speeds, transfer from the naphthalene surface is determined by a combination of forced and natural convection. Compared with a propane point-source release, a 25% higher free-stream velocity was needed for the naphthalene area source to yield Reynolds-number-independent concentration fields. Ventilation transfer coefficients w_T/U derived from the naphthalene sublimation method showed that, whilst there was enhanced vertical momentum exchange due to obstacle-height variability, advection was reduced and dispersion from the source area was not enhanced. Thus, the height variability of a canopy is an important parameter when generalising urban dispersion. Fine-resolution concentration measurements in the canopy showed the effect of height variability on dispersion at street scale. Rapid vertical transport in the wake of individual high-rise obstacles was found to generate elevated point-like sources. A Gaussian plume model was used to analyse differences in the downstream plumes. Intensified lateral and vertical plume spread and plume dilution with height were found for the non-uniform-height roughness.
Abstract:
Root characteristics of seedlings of five different barley genotypes were analysed in 2D using gel chambers, and in 3D using soil sacs that were destructively harvested and pots of soil that were assessed non-invasively using X-ray microtomography. After 5 days, Chime produced the greatest number of root axes (approximately 6) and Mehola significantly fewer (approximately 4) in all growing methods. Total root length was longest in GSH01915 and shortest in Mehola for all methods, but both total length and average root diameter were significantly larger for plants grown in gel chambers than for those grown in soil. The ranking of particular growth traits (root number, root angular spread) of plants grown in gel plates, soil sacs and X-ray pots was similar, but plants grown in the gel chambers had a different ranking for root length from the soil-grown plants. Analysis of angles in soil-grown plants showed that Tadmore had the most even spread of individual roots and Chime had a propensity for non-uniform distribution and root clumping. The roots of Mehola were less well spread than those of the barley cultivars, supporting the suggestion that wild and landrace barleys tend to have a narrower angular spread than modern cultivars. The three-dimensional analysis of root systems carried out in this study provides insights into the limitations of screening methods for root traits and useful data for modelling root architecture.
Abstract:
There are several advantages of using metabolic labeling in quantitative proteomics. The early pooling of samples, compared to post-labeling methods, eliminates errors from differences in sample processing, protein extraction and enzymatic digestion. Metabolic labeling is also highly efficient and relatively inexpensive compared to commercial labeling reagents. However, methods for multiplexed quantitation in the MS domain (or ‘non-isobaric’ methods) suffer from signal dilution at higher degrees of multiplexing, as the MS/MS signal for peptide identification is lower for the same amount of peptide loaded onto the column or injected into the mass spectrometer. This may partly be overcome by mixing the samples at non-uniform ratios, for instance by increasing the fraction of unlabeled proteins. We have developed an algorithm for arbitrary degrees of non-isobaric multiplexing for relative protein abundance measurements. We have used metabolic labeling with different levels of 15N, but the algorithm is in principle applicable to any isotope or combination of isotopes. Ion trap mass spectrometers are fast and suitable for LC-MS/MS and peptide identification. However, they cannot resolve overlapping isotopic envelopes from different peptides, which makes them less suitable for MS-based quantitation. Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry is less suitable for LC-MS/MS, but provides the resolving power required to resolve overlapping isotopic envelopes. We therefore combined ion trap LC-MS/MS for peptide identification with FTICR LC-MS for quantitation, using chromatographic alignment. We applied the method in a heat-shock study in a plant model system (A. thaliana) and compared the results with gene-expression data from similar experiments in the literature.
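The abstract does not spell out the quantitation algorithm, but the core step of recovering relative abundances from overlapping isotopic envelopes can be sketched as a non-negative least-squares fit; the envelope values and channel names below are illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def relative_abundances(measured_envelope, channel_envelopes):
    """Fit a measured isotopic envelope as a non-negative combination of the
    theoretical envelopes of each labelling channel (e.g. different 15N
    enrichment levels) and return normalised relative abundances."""
    A = np.column_stack(channel_envelopes)       # one column per channel
    coeffs, _residual = nnls(A, measured_envelope)
    total = coeffs.sum()
    return coeffs / total if total > 0 else coeffs

# Illustrative use with two hypothetical channels (unlabelled and 15N-enriched):
unlabelled = np.array([1.00, 0.45, 0.12, 0.02, 0.00, 0.00])
enriched   = np.array([0.00, 0.03, 0.15, 0.55, 1.00, 0.40])
measured = 0.7 * unlabelled + 0.3 * enriched
print(relative_abundances(measured, [unlabelled, enriched]))  # approx. [0.7, 0.3]
```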
Abstract:
Sea-level rise is an important aspect of climate change because of its impact on society and ecosystems. Here we present an intercomparison of results from ten coupled atmosphere-ocean general circulation models (AOGCMs) for sea-level changes simulated for the twentieth century and projected to occur during the twenty-first century in experiments following scenario IS92a for greenhouse gases and sulphate aerosols. The model results suggest that the rate of sea-level rise due to thermal expansion of sea water has increased during the twentieth century, but the small set of tide gauges with long records might not be adequate to detect this acceleration. The rate of sea-level rise due to thermal expansion continues to increase throughout the twenty-first century, and the projected total is consequently larger than in the twentieth century; for 1990-2090 it amounts to 0.20-0.37 m. This wide range results from systematic uncertainty in the modelling of climate change and of heat uptake by the ocean. The AOGCMs agree that sea-level rise is expected to be geographically non-uniform, with some regions experiencing as much as twice the global average and others practically zero, but they do not agree about the geographical pattern. The lack of agreement indicates that we cannot currently have confidence in projections of local sea-level changes, and reveals a need for detailed analysis and intercomparison in order to understand and reduce the disagreements.
Abstract:
In this paper, we study a model economy that examines the optimal intraday interest rate. Freeman (1996) shows that the efficient allocation can be implemented by adopting a policy in which the intraday rate is zero. We modify the production set and show that such a model economy can account for the non-uniform distribution of settlements within a day. In addition, by modifying both the consumption set and the production set, we show that the central bank may be able to implement the planner’s allocation with a positive intraday interest rate.
Abstract:
A system identification algorithm is introduced for Hammerstein systems that are modelled using a non-uniform rational B-spline (NURB) neural network. The proposed algorithm consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by singular value decomposition (SVD). Numerical examples are used to demonstrate the efficacy of the proposed approach.
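The abstract does not detail the second (SVD) stage, but one common way such a stage is organised is sketched below: the bilinear products of linear-block and nonlinearity parameters are estimated by least squares and then recovered from a rank-one SVD. A simple hinge basis stands in for the NURB network (whose shaping parameters the PSO stage would supply), and all names are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def hinge_basis(u, knots):
    """Simple hinge-function basis standing in for the NURB network:
    phi_0(u) = 1, phi_k(u) = max(u - knot_k, 0)."""
    cols = [np.ones_like(u)] + [np.maximum(u - k, 0.0) for k in knots]
    return np.column_stack(cols)                 # shape (N, n_basis)

def estimate_hammerstein(u, y, knots, nb=3):
    """Given the basis shaping parameters (here `knots`), estimate the
    linear-block coefficients b and the static-nonlinearity weights w
    via least squares on the over-parameterised model plus a rank-one SVD."""
    Phi = hinge_basis(u, knots)                   # static nonlinearity basis, (N, ni)
    N, ni = Phi.shape
    # Regressor: column (j, i) holds phi_i(u(t - j)) for the linear block memory.
    rows = [np.concatenate([Phi[t - j] for j in range(nb)]) for t in range(nb - 1, N)]
    X = np.asarray(rows)                          # (N - nb + 1, nb * ni)
    theta, *_ = np.linalg.lstsq(X, y[nb - 1:], rcond=None)
    Theta = theta.reshape(nb, ni)                 # estimates of the products b_j * w_i
    # Rank-one factorisation recovers b and w (up to an unidentifiable scaling).
    U, s, Vt = np.linalg.svd(Theta)
    b = U[:, 0] * np.sqrt(s[0])
    w = Vt[0] * np.sqrt(s[0])
    return b, w
```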
Abstract:
We propose a new algorithm for summarizing properties of large-scale time-evolving networks. This type of data, recording connections that come and go over time, is being generated in many modern applications, including telecommunications and on-line human social behavior. The algorithm computes a dynamic measure of how well pairs of nodes can communicate by taking account of routes through the network that respect the arrow of time. We take the conventional approach of downweighting for length (messages become corrupted as they are passed along) and add the novel feature of downweighting for age (messages go out of date). This allows us to generalize widely used Katz-style centrality measures that have proved popular in network science to the case of dynamic networks sampled at non-uniform points in time. We illustrate the new approach on synthetic and real data.
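A minimal sketch of a Katz-style running communicability over a sequence of time-stamped network snapshots, downweighted both for walk length (alpha) and for age (beta); the exact update used in the paper may differ, and all names are illustrative:

```python
import numpy as np

def dynamic_communicability(adjacencies, times, alpha=0.1, beta=0.5):
    """Accumulate a Katz-style communicability matrix over snapshots.
    Walks are downweighted by alpha per edge (length penalty) and the
    running matrix decays by exp(-beta * dt) between snapshots (age penalty).
    alpha must be below 1 / spectral radius of each snapshot for the
    resolvent to be well defined.  Row sums rank nodes as broadcasters,
    column sums as receivers."""
    n = adjacencies[0].shape[0]
    I = np.eye(n)
    Q = I.copy()
    prev_t = times[0]
    for A, t in zip(adjacencies, times):
        Q = np.exp(-beta * (t - prev_t)) * Q      # messages go out of date
        Q = Q @ np.linalg.inv(I - alpha * A)      # walks respect the arrow of time
        prev_t = t
    broadcast = Q.sum(axis=1)   # how well each node can send
    receive = Q.sum(axis=0)     # how well each node can receive
    return Q, broadcast, receive
```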
Abstract:
We address the problem of automatically identifying and restoring damaged and contaminated images. We suggest a novel approach based on a semi-parametric model. This has two components, a parametric component describing known physical characteristics and a more flexible non-parametric component. The latter avoids the need for a detailed model for the sensor, which is often costly to produce and lacking in robustness. We assess our approach using an analysis of electroencephalographic images contaminated by eye-blink artefacts and highly damaged photographs contaminated by non-uniform lighting. These experiments show that our approach provides an effective solution to problems of this type.
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise-constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles, but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
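A minimal sketch of the CRPS of a raw ensemble, computed for the piecewise-constant empirical CDF via the standard kernel identity CRPS(F, y) = E|X - y| - 0.5 E|X - X'|; the decomposition into quantile scores discussed in the article is not reproduced here:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of the piecewise-constant CDF defined by an ensemble, evaluated
    against a single observation `obs`, using the identity
    CRPS(F, y) = E|X - y| - 0.5 * E|X - X'| for X, X' drawn independently from F."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Illustrative use with a 5-member ensemble and one verifying observation:
print(crps_ensemble([0.8, 1.1, 1.4, 1.9, 2.3], obs=1.0))
```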
Abstract:
A morphological instability of a mushy layer due to a forced flow in the melt is analysed. The instability is caused by flow induced in the mushy layer by Bernoulli suction at the crests of a sinusoidally perturbed mush–melt interface. The flow in the mushy layer advects heat away from crests which promotes solidification. Two linear stability analyses are presented: the fundamental mechanism for instability is elucidated by considering the case of uniform flow of an inviscid melt; a more complete analysis is then presented for the case of a parallel shear flow of a viscous melt. The novel instability mechanism we analyse here is contrasted with that investigated by Gilpin et al. (1980) and is found to be more potent for the case of newly forming sea ice.
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data-mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can either be found in real-world distributed applications or be induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
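The dynamic group communication protocol itself is not specified in the abstract, but the quantity it must combine in parallel k-means is easy to show: each process reduces its local partition to per-cluster coordinate sums and counts, and only these small arrays need to be exchanged. The sketch below simulates that reduction sequentially; process layout and function names are illustrative assumptions:

```python
import numpy as np

def local_kmeans_stats(points, centroids):
    """Per-process step of parallel k-means: assign local points to the
    nearest centroid and return per-cluster coordinate sums and counts,
    which are the only data that must be communicated."""
    k, d = centroids.shape
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    sums = np.zeros((k, d))
    counts = np.zeros(k, dtype=int)
    for j in range(k):
        mask = labels == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def combine_and_update(stats, old_centroids):
    """Combine the partial statistics from all processes (the role played by
    the group communication protocol) and compute the new centroids."""
    total_sums = sum(s for s, _ in stats)
    total_counts = sum(c for _, c in stats)
    new_centroids = old_centroids.copy()
    nonempty = total_counts > 0
    new_centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return new_centroids
```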