899 results for Image recognition and processing
Abstract:
Chapter in book proceedings with peer review: First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Dissertation submitted for the degree of Master in Electrical Engineering, branch of Automation and Industrial Electronics.
Abstract:
We investigate the influence of strong directional, or bonding, interactions on the phase diagram of complex fluids, and in particular on the liquid-vapour critical point. To this end we revisit a simple model and theory for associating fluids which consist of spherical particles having a hard-core repulsion, complemented by three short-ranged attractive sites on the surface (sticky spots). Two of the spots are of type A and one is of type B; the interactions between each pair of spots have strengths ε_AA, ε_AB and ε_BB. The theory is applied over the whole range of bonding strengths and the results are interpreted in terms of the equilibrium cluster structures of the coexisting phases. In systems where unlike sites do not interact (i.e. where ε_AB = 0), the critical point exists all the way down to ε_BB → 0. By contrast, when ε_BB = 0, there is no critical point below a certain finite value of ε_AB. These somewhat surprising results are rationalised in terms of the different network structures of the two systems: two long AA chains are linked by one BB bond (X-junction) in the former case, and by one AB bond (Y-junction) in the latter. The vapour-liquid transition may then be viewed as the condensation of these junctions, and we find that X-junctions condense for any attractive ε_BB (i.e. for any fraction of BB bonds), whereas condensation of the Y-junctions requires that ε_AB be above a finite threshold (i.e. there must be a finite fraction of AB bonds).
Abstract:
Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time, not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about an individual's learning and new ways of exploiting it. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities, ranging from the use of social software and widgets to computer gaming and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.
Abstract:
Psychosocial interventions have proven to be effective in treating social cognition in people with psychotic disorders. The current study aimed to determine the effects of a metacognitive and social cognition training (MSCT) program, designed both to remediate deficits and to correct biases in social cognition. Thirty-five clinically stable outpatients were recruited and assigned either to the MSCT program (n = 19) for 10 weeks (18 sessions) or to the treatment-as-usual (TAU) group (n = 16), and they all completed pre- and post-treatment assessments of social cognition, cognitive biases, functioning and symptoms. The MSCT group demonstrated a significant improvement in theory of mind, social perception, emotion recognition and social functioning. Additionally, the tendency to jump to conclusions was significantly reduced in the MSCT group after training. There were no differential benefits regarding clinical symptoms except for a trend-level group effect on general psychopathology. The results support the efficacy of the MSCT format, but further development of the training program is required to increase the benefits related to attributional style.
Abstract:
The following article argues that recognition structures differ significantly between the sphere of paid work and unpaid work in private spheres. Following Axel Honneth's systematic approach to recognition, three different levels of recognition are identified: interpersonal recognition, organisational recognition and societal recognition. Based on this framework, it can be stated that recognition structures in the sphere of paid work and in private spheres differ considerably. Whereas recognition in private spheres depends largely on personal relations, and thus on the interpersonal level, recognition in employment relationships can moreover be built on organisational structures. Comparing recognition structures in both fields, it becomes apparent that recognition in the field of employment is much more concrete, comparable and measurable. It can therefore be concluded that these structural differences in recognition contribute to the high societal and individual importance of employment in contrast to unpaid work in private spheres.
Abstract:
Dissertation submitted for the degree of Doctor in Chemical Engineering, speciality Chemical Reaction Engineering, at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Objectives: Children are at greater risk from radiation, per unit dose, due to increased radiosensitivity and longer life expectancies, so it is of paramount importance to reduce the radiation dose they receive. This research concerns chest CT examinations of paediatric patients. The purpose of this study was to compare image quality and radiation dose for images reconstructed with filtered back projection (FBP) and with five strengths of Sinogram-Affirmed Iterative Reconstruction (SAFIRE). Methods: Using a multi-slice CT scanner, six series of images were taken of a paediatric phantom, combining two kVp values (80 and 110), three mAs values (25, 50 and 100) and two slice thicknesses (1 mm and 3 mm). All images were reconstructed with FBP and five strengths of SAFIRE. Ten observers evaluated visual image quality. Dose was measured using CT-Expo. Results: FBP required a higher dose than all SAFIRE strengths to obtain the same image quality for sharpness and noise. For sharpness and contrast image quality ratings of 4, FBP required doses of 6.4 and 6.8 mSv respectively, whereas SAFIRE 5 required doses of 3.4 and 4.3 mSv respectively. The clinical acceptance rate was improved by the higher voltage (110 kV) for all images in comparison to 80 kV, which required a higher dose for acceptable image quality. The 3 mm images were typically of better quality than the 1 mm images. Conclusion: SAFIRE 5 was optimal in terms of dose reduction and image quality.
Abstract:
Pine forests constitute some of the most important renewable resources, supplying the timber, paper and chemical industries, among other functions. Characterization of the volatiles emitted by different Pinus species has proven to be an important tool to decode the process of host tree selection by herbivore insects, some of which cause serious economic damage to pines. Variations in the relative composition of the bouquet of semiochemicals are responsible for the outcome of different biological processes, such as mate finding, egg-laying site recognition and host selection. The volatiles present in phloem samples of four pine species, P. halepensis, P. sylvestris, P. pinaster and P. pinea, were identified and characterized with the aim of finding possible host-plant attractants for native pests, such as the bark beetle Tomicus piniperda. The volatile compounds emitted by the phloem samples were extracted by headspace solid-phase microextraction, using a 2 cm 50/30 μm divinylbenzene/carboxen/polydimethylsiloxane StableFlex solid-phase microextraction fiber, and analyzed by high-resolution gas chromatography with flame ionization detection, using non-polar and chiral column phases. The components of the volatile fraction emitted by the phloem samples were identified by mass spectrometry using time-of-flight and quadrupole mass analyzers. The estimated relative composition was used to discriminate among pine species by means of cluster and principal component analyses. It can be concluded that it is possible to discriminate pine species based on the monoterpene emissions of phloem samples.
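The discrimination step described above can be sketched with a principal component projection of the relative-composition table. This is an illustrative sketch only; the function name, data layout and synthetic values below are assumptions, not the study's actual data or code:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project a samples x compounds matrix of relative compositions
    onto its leading principal components via the SVD."""
    Xc = X - X.mean(axis=0)                  # center each compound column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores on the leading components
```

Samples from species with distinct monoterpene profiles then separate along the first components, after which standard cluster analysis can be applied to the scores.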
Abstract:
The acquisition of a myocardial perfusion image (MPI) is of great importance for the diagnosis of coronary artery disease, since it allows the evaluation of which areas of the heart are not being properly perfused, at rest and under stress. This exam is greatly influenced by photon attenuation, which creates image artifacts and affects quantification. The acquisition of a computerized tomography (CT) image makes it possible to obtain anatomical images which can be used to perform high-quality attenuation correction of the radiopharmaceutical distribution in the MPI image. Studies show that by using hybrid imaging to diagnose coronary artery disease, there is an increase in specificity when evaluating the perfusion of the right coronary artery (RCA). Using an iterative reconstruction algorithm with resolution recovery software, which balances image quality, administered activity and scanning time, we aim to evaluate the influence of attenuation correction on the MPI image and its outcome in terms of perfusion quantification and image quality.
Abstract:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium- and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
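As an illustrative sketch of this family of criteria (this shows the mutual-information baseline the paper compares against, not the paper's own proposal), the relevance of a discretized feature can be scored by the mutual information between its bins and the class labels, computed directly from the bin-class histogram. The function name and interface are assumptions:

```python
import numpy as np

def mutual_information(feature_bins, labels):
    """Mutual information (in bits) between a discretized feature and
    the class labels, computed from the bin-class histogram."""
    bins, classes = np.unique(feature_bins), np.unique(labels)
    joint = np.zeros((bins.size, classes.size))
    for i, b in enumerate(bins):
        for j, c in enumerate(classes):
            joint[i, j] = np.sum((feature_bins == b) & (labels == c))
    p = joint / joint.sum()                 # joint bin-class distribution
    px = p.sum(axis=1, keepdims=True)       # marginal over bins
    py = p.sum(axis=0, keepdims=True)       # marginal over classes
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

Features can then be ranked by this score and the top ones kept, with a separate redundancy check between pairs of selected features.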
Abstract:
Hyperspectral imaging sensors provide image data containing both spectral and spatial information from the Earth's surface. The huge data volumes produced by these sensors put stringent requirements on communications, storage, and processing. This paper presents a method, termed hyperspectral signal subspace identification by minimum error (HySime), that infers the signal subspace and determines its dimensionality without any prior knowledge. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. The HySime method is unsupervised and fully automatic, i.e., it does not depend on any tuning parameters. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
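The subspace identification idea can be sketched as follows. This is a deliberate simplification, not the exact HySime minimum-mean-squared-error criterion: here an eigen-direction of the estimated signal correlation matrix is kept whenever the signal power along it exceeds the noise power, and the noise estimate is assumed to be given (in practice it must itself be estimated, e.g. by regression):

```python
import numpy as np

def subspace_dim(Y, N):
    """Estimate the signal-subspace dimension of noisy hyperspectral data
    Y (bands x pixels) given a noise estimate N of the same shape."""
    X = Y - N                               # signal estimate
    n = Y.shape[1]
    Rx = X @ X.T / n                        # signal correlation matrix
    Rn = N @ N.T / n                        # noise correlation matrix
    w, E = np.linalg.eigh(Rx)               # eigenvalues = signal power per direction
    noise_power = np.einsum('ij,jk,ki->i', E.T, Rn, E)  # e_i^T Rn e_i
    return int(np.sum(w > noise_power))     # keep directions dominated by signal
```

On data generated as a low-rank mixture plus noise, the count recovers the number of underlying spectral signatures without any tuning parameter.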
Abstract:
Master's degree in Electrical and Computer Engineering - Autonomous Systems branch
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of data of high spectral resolution of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
Abstract:
Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering, by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler, in the sense that they require no text pre-processing or feature engineering.
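A minimal sketch of one such dissimilarity-based approach, using the compression-based normalized compression distance (NCD) as the universal measure and a 1-nearest-neighbour classifier; the exact measures and classifiers used in the paper may differ, and the function names are assumptions:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, approximated with zlib:
    texts that share structure compress better together than apart."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def nn_classify(text, labelled):
    """1-nearest-neighbour classification over (text, label) training
    pairs, using NCD as the dissimilarity."""
    return min(labelled, key=lambda tl: ncd(text, tl[0]))[1]
```

No tokenization, stemming, or feature engineering is involved: the compressor itself supplies the notion of similarity.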