37 results for Image recognition and processing


Relevance: 100.00%

Abstract:

We investigate the influence of strong directional, or bonding, interactions on the phase diagram of complex fluids, and in particular on the liquid-vapour critical point. To this end we revisit a simple model and theory for associating fluids consisting of spherical particles with a hard-core repulsion, complemented by three short-ranged attractive sites on the surface (sticky spots). Two of the spots are of type A and one is of type B; the interactions between each pair of spots have strengths ε_AA, ε_AB and ε_BB. The theory is applied over the whole range of bonding strengths and the results are interpreted in terms of the equilibrium cluster structures of the coexisting phases. In systems where unlike sites do not interact (i.e. where ε_AB = 0), the critical point exists all the way down to ε_BB = 0. By contrast, when ε_BB = 0, there is no critical point below a certain finite value of ε_AB. These somewhat surprising results are rationalised in terms of the different network structures of the two systems: two long AA chains are linked by one BB bond (an X-junction) in the former case, and by one AB bond (a Y-junction) in the latter. The vapour-liquid transition may then be viewed as the condensation of these junctions, and we find that X-junctions condense for any attractive ε_BB (i.e. for any fraction of BB bonds), whereas condensation of the Y-junctions requires that ε_AB be above a finite threshold (i.e. there must be a finite fraction of AB bonds).

Relevance: 100.00%

Abstract:

Objectives: Children are at greater risk from radiation, per unit dose, due to their increased radiosensitivity and longer life expectancy, so it is of paramount importance to reduce the radiation dose they receive. This research concerns chest CT examinations of paediatric patients. The purpose of this study was to compare the image quality and radiation dose of images reconstructed with filtered back projection (FBP) and with five strengths of Sinogram-Affirmed Iterative Reconstruction (SAFIRE). Methods: Using a multi-slice CT scanner, six series of images were taken of a paediatric phantom. Two kVp values (80 and 110), three mAs values (25, 50 and 100) and two slice thicknesses (1 mm and 3 mm) were used. All images were reconstructed with FBP and with the five SAFIRE strengths. Ten observers evaluated visual image quality; dose was estimated using CT-Expo. Results: FBP required a higher dose than all SAFIRE strengths to reach the same image quality for sharpness and noise. For sharpness and contrast image quality ratings of 4, FBP required doses of 6.4 and 6.8 mSv respectively, whereas SAFIRE 5 required 3.4 and 4.3 mSv. The clinical acceptance rate was higher at 110 kV than at 80 kV for all images, the lower voltage requiring a higher dose to reach acceptable image quality. 3 mm images were typically of better quality than 1 mm images. Conclusion: SAFIRE 5 was optimal for dose reduction and image quality.

Relevance: 100.00%

Abstract:

The acquisition of a myocardial perfusion image (MPI) is of great importance for the diagnosis of coronary artery disease, since it allows evaluation of which areas of the heart are not being properly perfused, at rest and under stress. This examination is strongly affected by photon attenuation, which creates image artifacts and degrades quantification. Acquiring a computed tomography (CT) image makes it possible to obtain an anatomical image that can be used to perform a high-quality attenuation correction of the radiopharmaceutical distribution in the MPI image. Studies show that using hybrid imaging to diagnose coronary artery disease increases the specificity when evaluating the perfusion of the right coronary artery (RCA) territory. Using an iterative reconstruction algorithm with resolution-recovery software, which balances image quality, administered activity and scanning time, we aim to evaluate the influence of attenuation correction on the MPI image and its outcome in perfusion quantification and image quality.
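For background, the attenuation being corrected follows the Beer-Lambert law: along each projection ray L through tissue with linear attenuation coefficient μ(x) (obtained from the CT image, rescaled from Hounsfield units to the emission photon energy), the detected intensity is

```latex
I = I_0 \exp\!\left(-\int_L \mu(x)\,\mathrm{d}x\right)
```

so the CT-derived attenuation map supplies, ray by ray, the factor by which the measured counts must be compensated.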

Relevance: 100.00%

Abstract:

In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
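As a concrete illustration of this pipeline, the sketch below ranks already-discretized features by a histogram-based relevance score and greedily drops redundant ones. It uses plain mutual information for both roles, i.e. one of the baseline criteria mentioned above rather than the criteria proposed in the paper; the function names and the redundancy threshold are illustrative.

```python
import numpy as np

def mutual_information(x, y, n_bins):
    """MI (in nats) between two discrete variables given as integer bin indices."""
    joint = np.zeros((n_bins, n_bins))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of x
    py = joint.sum(axis=0, keepdims=True)   # marginal of y
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def select_features(X_disc, y, n_bins, k, redundancy_thresh=0.5):
    """Greedy filter: rank features by relevance MI(feature; class), then
    skip any feature that is too redundant (high MI) with one already kept."""
    n_features = X_disc.shape[1]
    relevance = [mutual_information(X_disc[:, j], y, n_bins)
                 for j in range(n_features)]
    order = np.argsort(relevance)[::-1]     # most relevant first
    selected = []
    for j in order:
        if all(mutual_information(X_disc[:, j], X_disc[:, s], n_bins)
               < redundancy_thresh for s in selected):
            selected.append(j)
        if len(selected) == k:
            break
    return selected
```

With a perfectly class-predictive feature, a partially redundant copy of it, and a class-independent feature, the filter keeps the predictive and the independent one and drops the redundant copy.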

Relevance: 100.00%

Abstract:

Hyperspectral imaging sensors provide image data containing both spectral and spatial information from the Earth's surface. The huge data volumes produced by these sensors put stringent requirements on communications, storage, and processing. This paper presents a method, termed hyperspectral signal subspace identification by minimum error (HySime), that infers the signal subspace and determines its dimensionality without any prior knowledge. Identifying this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance, complexity, and data storage. The HySime method is unsupervised and fully automatic, i.e., it does not depend on any tuning parameters. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.

Relevance: 100.00%

Abstract:

Hyperspectral instruments have been incorporated into satellite missions, providing large amounts of data of high spectral resolution of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, usually done at a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA fabric, and has been tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
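For reference, a software sketch of the projection-and-argmax step at the core of VCA is shown below (the FPGA architecture maps operations of this kind to hardware). It is a simplified variant that assumes pure pixels are present and orthogonalizes against the endmembers already found, omitting VCA's affine projection step.

```python
import numpy as np

def vca_endmembers(Y, p, seed=0):
    """Simplified VCA-style extraction from a bands x pixels matrix:
    repeatedly project the data onto a random direction orthogonal to the
    endmembers found so far and take the pixel with the largest projection."""
    rng = np.random.default_rng(seed)
    n_bands, _ = Y.shape
    E = np.zeros((n_bands, p))             # extracted endmember signatures
    idx = []                               # pixel indices of the endmembers
    for i in range(p):
        w = rng.standard_normal(n_bands)
        if i > 0:
            # remove components along the endmembers already found
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        proj = np.abs(w @ Y)               # |projection| of every pixel
        j = int(np.argmax(proj))           # extreme pixel = a hull vertex
        idx.append(j)
        E[:, i] = Y[:, j]
    return E, idx
```

Because a linear functional over a convex set is maximized at a vertex, each argmax lands on a pure pixel, and the orthogonalization step zeroes out vertices already found.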

Relevance: 100.00%

Abstract:

Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering, mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler, in the sense that they require no text pre-processing or feature engineering.
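A minimal sketch of this idea, assuming the normalized compression distance (NCD) with zlib as the universal dissimilarity measure (the paper's exact measures may differ), combined with a 1-nearest-neighbor classifier:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: an information-theoretic
    dissimilarity computable with any off-the-shelf compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def nearest_neighbor(query: str, labeled: list[tuple[str, str]]) -> str:
    """1-NN classification in the dissimilarity space: assign the label
    of the training text closest to the query under NCD."""
    q = query.encode()
    best = min(labeled, key=lambda tl: ncd(q, tl[0].encode()))
    return best[1]
```

No tokenization, stemming, or feature design is involved: the compressor itself measures how much a query text shares with each training text.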