912 results for 280200 Artificial Intelligence and Signal and Image Processing
Abstract:
The use of mobile robots is attractive in activities where the action of a human specialist is difficult or dangerous. Mobile robots are often used for exploration of areas of difficult access, such as rescue operations and space missions, to avoid exposing human experts to risky situations. Mobile robots are also used in agriculture for planting tasks, as well as for keeping the application of pesticides to minimal amounts to mitigate environmental pollution. In this paper we present the development of a system to control the navigation of an autonomous mobile robot through tracks in plantations. Track images are used to control robot direction by pre-processing them to extract image features. Such features are then submitted to a support vector machine and an artificial neural network in order to determine the most appropriate route. A comparison of the two approaches was performed to ascertain which one presents the best outcome. The overall goal of the project to which this work is connected is to develop a real-time robot control system to be embedded into a hardware platform. In this paper we report the software implementation of a support vector machine and of an artificial neural network, which so far achieved around 93% and 90% accuracy, respectively, in predicting the appropriate route. (C) 2013 The Authors. Published by Elsevier B.V. Selection and peer review under responsibility of the organizers of the 2013 International Conference on Computational Science
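A minimal sketch of such a two-classifier comparison, with scikit-learn's SVC and MLPClassifier standing in for the paper's implementations and a synthetic dataset standing in for the extracted track-image features (all names and parameters here are illustrative assumptions, not the authors' code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for features extracted from track images; the three
# classes could represent steering decisions (left, straight, right).
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

acc_svm = accuracy_score(y_te, svm.predict(X_te))
acc_ann = accuracy_score(y_te, ann.predict(X_te))
print(f"SVM accuracy: {acc_svm:.2f}, ANN accuracy: {acc_ann:.2f}")
```

On real data the same `fit`/`predict` interface would take the pre-processed image feature vectors; the abstract's 93% vs. 90% figures refer to the authors' own implementation, not this sketch.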
Abstract:
This paper presents two diagnostic methods for the online detection of broken bars in induction motors with squirrel-cage rotors. The wavelet representation of a function is a relatively recent technique that can be seen as a refinement of the Fourier transform. The Fourier transform is a powerful tool for analyzing the components of a stationary signal, but it fails for non-stationary signals, whereas the wavelet transform allows the components of a non-stationary signal to be analyzed. In this paper, our main goal is to establish the advantages of the wavelet transform over the Fourier transform in rotor failure diagnosis of induction motors.
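The contrast the authors draw can be illustrated with a small NumPy experiment (an assumed toy signal, not motor-current data): a signal that switches from 5 Hz to 20 Hz has a Fourier spectrum showing both components but not when they occur, while a Morlet-wavelet response localizes each component in time:

```python
import numpy as np

fs = 200.0
t = np.arange(0, 2.0, 1 / fs)
# Non-stationary signal: 5 Hz in the first second, 20 Hz in the second.
sig = np.where(t < 1.0, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

# The Fourier spectrum shows both frequencies but not *when* they occur.
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

def morlet_response(sig, f, fs, n_cycles=5):
    """Magnitude of the convolution with a complex Morlet wavelet at frequency f."""
    sigma = n_cycles / (2 * np.pi * f)          # Gaussian width in seconds
    tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
    return np.abs(np.convolve(sig, wavelet, mode="same"))

# Unlike the FFT, the wavelet response localizes the 20 Hz component in time.
resp20 = morlet_response(sig, 20, fs)
early, late = resp20[: len(t) // 2].mean(), resp20[len(t) // 2 :].mean()
print(f"20 Hz wavelet energy, first half: {early:.2f}, second half: {late:.2f}")
```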
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning, and are well suited to understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It draws on insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
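As background to the comparison, the building blocks of a CNN layer (convolution, non-linearity, pooling) can be sketched in a few lines of NumPy; this is a generic illustration, not code from the thesis:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    H, W = x.shape
    H, W = H - H % size, W - W % size          # crop to a multiple of `size`
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

# A dark-to-bright edge kernel applied to a toy image: the feature map
# highlights the edge; pooling keeps the strongest responses, which gives
# some tolerance to small translations.
image = np.zeros((8, 8)); image[:, 4:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])
fmap = max_pool(relu(conv2d(image, edge_kernel)))
print(fmap.shape)
```

In a trained CNN, kernels like `edge_kernel` are not hand-crafted but learned from data, which is the "automatic feature extraction" referred to above.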
Abstract:
In the last decade, the aquatic eddy correlation (EC) technique has proven to be a powerful approach for non-invasive measurements of oxygen fluxes across the sediment-water interface. Fundamental to the EC approach is the correlation of turbulent velocity and oxygen concentration fluctuations measured at high frequency in the same sampling volume. Oxygen concentrations are commonly measured with fast-responding electrochemical microsensors. However, due to their own oxygen consumption, electrochemical microsensors are sensitive to changes of the diffusive boundary layer surrounding the probe and thus to changes in the ambient flow velocity. This so-called stirring sensitivity of microsensors constitutes an inherent correlation of flow velocity and oxygen sensing and thus an artificial flux which can confound the benthic flux determination. To assess the artificial flux we measured the correlation between the turbulent flow velocity and the signal of oxygen microsensors in a sealed annular flume without any oxygen sinks and sources. Experiments revealed significant correlations, even for sensors designed to have low stirring sensitivities of ~0.7%. The artificial fluxes depended on ambient flow conditions and, counterintuitively, increased at higher velocities because of the nonlinear contribution of turbulent velocity fluctuations. The measured artificial fluxes ranged from 2 to 70 mmol m**-2 d**-1 for weak and very strong turbulent flow, respectively. Further, the stirring sensitivity depended on the sensor orientation towards the flow. Optical microsensors (optodes), which should not exhibit a stirring sensitivity, were tested in parallel and did not show any significant correlation between O2 signals and turbulent flow. In conclusion, EC data obtained with electrochemical sensors can be affected by artificial flux, and we recommend using optical microsensors in future EC studies.
Flume experiments were conducted in February 2013 at the Institute for Environmental Sciences, University of Koblenz-Landau, Landau. Experiments were performed in a closed oval-shaped acrylic glass flume with a cross-sectional width of 4 cm, a height of 10 cm and a total length of 54 cm. The fluid flow was induced by a propeller driven by a motor, and mean flow velocities of up to 20 cm s**-1 were generated by applying voltages between 0 V and 4 V DC. The flume was completely sealed with an acrylic glass cover. Oxygen sensors were inserted through rubber seal fittings, which allowed positioning of the sensors at inclinations to the main flow direction of ~60°, ~95° and ~135°. A Clark-type electrochemical O2 microsensor with a low stirring sensitivity (0.7%) was tested, and a fast-responding needle-type O2 optode (PyroScience GmbH, Germany) was used as reference, as optodes should not be stirring sensitive. Instantaneous three-dimensional flow velocities were measured at 7.4 Hz using stereoscopic particle image velocimetry (PIV), and the velocity at the sensor tip was extracted. The correlation of the fluctuating O2 sensor signals and the fluctuating velocities was quantified with a cross-correlation analysis; a significant cross-correlation is equivalent to a significant artificial flux. For a total of 18 experiments the flow velocity was adjusted between 1.7 and 19.2 cm s**-1, and 3 different orientations of the electrochemical sensor were tested, with inclination angles of ~60°, ~95° and ~135° with respect to the main flow direction. In experiments 16-18, wavelike flow was induced, whereas in all other experiments the motor was driven by constant voltages. In 7 experiments, O2 was additionally measured by optodes.
Although performed simultaneously with the electrochemical sensor, optode measurements are listed as separate experiments (denoted by the attached 'op' in the filename), because the velocity time series was extracted at the optode tip, located at a different position in the flume.
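The core eddy-covariance computation and the stirring artifact it can pick up are easy to illustrate with synthetic series (the coupling coefficient and noise levels below are invented for illustration; only the 7.4 Hz sampling rate echoes the setup above):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 7.4, 600.0                       # sampling rate (Hz) and duration (s)
n = int(fs * T)

# Synthetic turbulent vertical velocity fluctuations (zero mean).
w = 0.02 * rng.standard_normal(n)        # m/s

# Stirring-sensitive sensor: its O2 reading partly tracks the velocity
# fluctuations even though there is no real flux in this "sealed flume".
o2_true = 250.0 + 0.5 * rng.standard_normal(n)   # true concentration, no flux
o2_sensor = o2_true + 50.0 * w                   # artificial stirring coupling

def ec_flux(w, c):
    """Eddy-covariance flux: covariance of velocity and concentration fluctuations."""
    return np.mean((w - w.mean()) * (c - c.mean()))

flux_optode = ec_flux(w, o2_true)        # no stirring sensitivity -> ~0
flux_electrode = ec_flux(w, o2_sensor)   # spurious flux from stirring
print(f"optode flux: {flux_optode:.4f}, electrode flux: {flux_electrode:.4f}")
```

The sensor that correlates with velocity yields a nonzero covariance, i.e. an artificial flux, exactly the confounding effect the experiments above quantify.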
Abstract:
This work presents a method to detect microcalcifications in regions of interest from digitized mammograms. The method is based mainly on the combination of Image Processing, Pattern Recognition and Artificial Intelligence. The Top-Hat transform is a technique based on mathematical morphology operations that, in this work, is used to perform contrast enhancement of the microcalcifications in the region of interest. In order to find more or less homogeneous regions in the image, we apply a novel image sub-segmentation technique based on the Possibilistic Fuzzy c-Means clustering algorithm. From the original region of interest we extract two window-based features, Mean and Standard Deviation, which are used in a classifier based on an Artificial Neural Network to identify microcalcifications. Our results show that the proposed method is a good alternative for the microcalcification detection stage, which is an important part of early breast cancer detection.
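The contrast-enhancement step can be illustrated with SciPy's white top-hat transform (image minus its morphological opening) on a toy region of interest; the spot positions, sizes and threshold below are invented for illustration:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Toy "region of interest": a smooth bright background gradient with two
# small high-intensity spots standing in for microcalcifications.
x = np.linspace(0, 1, 64)
roi = np.add.outer(x, x) * 100.0                 # slowly varying background
roi[20, 20] += 40.0
roi[45, 50] += 40.0
roi += rng.normal(0, 1.0, roi.shape)             # acquisition noise

# White top-hat removes structures larger than the structuring element,
# flattening the background while preserving the small bright spots.
enhanced = ndimage.white_tophat(roi, size=7)
peaks = enhanced > 25.0
print(int(peaks.sum()))
```

After the transform, a simple threshold recovers both spots regardless of the background level, which is why top-hat enhancement helps before feature extraction and classification.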
Abstract:
Government agencies responsible for riparian environments are assessing the utility of remote sensing for mapping and monitoring environmental health indicators. The objective of this work was to evaluate IKONOS and Landsat-7 ETM+ imagery for mapping riparian vegetation health indicators in tropical savannas for a section of Keelbottom Creek, Queensland, Australia. Vegetation indices and image texture from IKONOS data were used for estimating percentage canopy cover (r2=0.86). Pan-sharpened IKONOS data were used to map riparian species composition (overall accuracy=55%) and riparian zone width (accuracy within 4 m). Tree crowns could not be automatically delineated due to the lack of contrast between canopies and adjacent grass cover. The ETM+ imagery was suited for mapping the extent of riparian zones. Results presented demonstrate the capabilities of high and moderate spatial resolution imagery for mapping properties of riparian zones, which may be used as riparian environmental health indicators
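Vegetation indices of the kind used here are simple band ratios. As a generic illustration (the reflectance values below are invented, and the abstract does not specify which index beyond standard ones), the widely used NDVI is:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); dense riparian canopy reflects strongly
# in the near-infrared, while sparse cover, soil and water do not.
# Toy 2x2 scene of surface reflectances (illustrative values only).
nir = np.array([[0.50, 0.45], [0.30, 0.05]])
red = np.array([[0.08, 0.10], [0.20, 0.04]])

ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

Values near 1 indicate dense canopy and values near 0 indicate bare ground, which is how an index map becomes a percentage-canopy-cover estimate after regression against field data.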
Abstract:
Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The papers include early work carried out at Cambridge University, descriptions of groundbreaking work on Machine Translation and Preference Semantics as well as more recent works on belief modeling and computational semantics. The selected papers reflect Yorick’s contribution to both practical and theoretical aspects of automatic language processing.
Abstract:
Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas of these fields and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks from the perspective of his peers. It consists of original chapters, each of which analyses an aspect of his work and links it to current thinking in that area. His work has spanned over four decades but is shown to be pertinent to recent developments in language processing such as the Semantic Web. This volume forms a two-part set together with Words and Intelligence I: Selected Works by Yorick Wilks, by the same editors.
Abstract:
In this paper, a prior knowledge representation for Artificial General Intelligence is proposed, based on fuzzy rules using linguistic variables. These linguistic variables may be produced by a neural network. Rules may be used for the generation of basic emotions, positive and negative, which influence the planning and execution of behavior. The representation of the Three Laws of Robotics as such prior knowledge is suggested as the highest level of motivation in AGI.
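A minimal sketch of such fuzzy rules (the linguistic variables, membership shapes and rule set below are invented for illustration; the paper's actual rules are not given in the abstract):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate(danger, goal_progress):
    # Linguistic terms over inputs scaled to [0, 1] (illustrative choices).
    danger_high = tri(danger, 0.4, 1.0, 1.6)
    danger_low = tri(danger, -0.6, 0.0, 0.6)
    progress_good = tri(goal_progress, 0.4, 1.0, 1.6)
    # Fuzzy inference: AND = min; each rule activates one basic emotion.
    negative = danger_high                      # IF danger is high THEN emotion is negative
    positive = min(danger_low, progress_good)   # IF danger is low AND progress is good THEN positive
    return positive, negative

pos, neg = evaluate(danger=0.8, goal_progress=0.9)
print(round(pos, 3), round(neg, 3))
```

In the proposed architecture the inputs to such rules would come from a neural network's outputs, and the resulting emotion activations would bias planning and behavior execution.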
Abstract:
The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes the application of signal processing methods to physiological signals to extract features that can be processed by pattern recognition learning systems to provide cues about a person's affective state. In particular, combining physiological information sensed non-invasively from a user's left hand with pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study, an integrated hardware-software setup was developed to achieve automatic assessment of the affective status of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals: the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST) and the Pupil Diameter (PD), were monitored and analyzed to differentiate affective states in the user. Several signal processing techniques were applied to the collected signals to extract their most relevant features. These features were analyzed with learning classification systems to accomplish the affective state identification. Three learning algorithms: Naïve Bayes, Decision Tree and Support Vector Machine, were applied to this identification process and their levels of classification accuracy were compared. The results achieved indicate that the physiological signals monitored do, in fact, have a strong correlation with the changes in the emotional states of the experimental subjects. These results also revealed that the inclusion of pupil diameter information significantly improved the performance of the emotion recognition system.
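A hedged sketch of the pipeline's final stage: windowed statistical features from a synthetic GSR-like trace, classified with the three algorithm families named above via scikit-learn (the feature choice, signal model and parameters are illustrative assumptions, not the dissertation's setup):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def window_features(signal, label, width=50):
    """Mean and standard deviation per non-overlapping window."""
    chunks = np.split(signal, len(signal) // width)
    feats = np.array([(w.mean(), w.std()) for w in chunks])
    return feats, np.full(len(feats), label)

# Synthetic GSR-like traces: "stress" raises both the level and the variability.
relaxed = 2.0 + 0.1 * rng.standard_normal(5000)
stressed = 2.6 + 0.3 * rng.standard_normal(5000)

Xr, yr = window_features(relaxed, 0)
Xs, ys = window_features(stressed, 1)
X, y = np.vstack([Xr, Xs]), np.concatenate([yr, ys])

scores = {type(clf).__name__: cross_val_score(clf, X, y, cv=5).mean()
          for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0), SVC())}
print(scores)
```

In the actual study, features from all four signals (GSR, BVP, ST, PD) would be concatenated per window before classification; the reported accuracy figures come from the authors' experiments, not this toy.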
Abstract:
Information processing in the human brain has always been considered a source of inspiration in Artificial Intelligence; in particular, it has led researchers to develop tools such as artificial neural networks. Recent findings in Neurophysiology provide evidence that not only neurons but also isolated astrocytes and networks of astrocytes are responsible for processing information in the human brain. Artificial neural networks (ANNs) model neuron-neuron communications. Artificial neuron-glia networks (ANGNs), in addition to neuron-neuron communications, model neuron-astrocyte connections. In continuation of the research on ANGNs, we first propose and evaluate a model of adaptive neuro-fuzzy inference systems augmented with artificial astrocytes. Then, we propose a model of ANGNs that captures the communications of astrocytes in the brain; in this model, a network of artificial astrocytes is implemented on top of a typical neural network. The results of the implementation of both networks show that for certain combinations of parameter values specifying the astrocytes and their connections, the new networks outperform typical neural networks. This research opens a range of possibilities for future work on designing more powerful architectures of artificial neural networks that are based on more realistic models of the human brain.
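One way to picture a neuron-glia coupling (a deliberately simplified toy, not the thesis model): an astrocyte state per neuron integrates recent firing, and persistently active neurons have their output transiently amplified:

```python
import numpy as np

class NeuronGliaLayer:
    """Toy layer: per-neuron astrocyte counters integrate recent firing and
    transiently amplify neurons that stay active (illustrative dynamics)."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1, (n_out, n_in))
        self.activity = np.zeros(n_out)            # astrocyte state per neuron
        self.decay, self.boost, self.thresh = 0.8, 1.5, 1.5

    def forward(self, x):
        out = np.maximum(self.W @ x, 0.0)          # ReLU neurons
        # Astrocytes accumulate (leaky) evidence of neuron firing over time.
        self.activity = self.decay * self.activity + (out > 0)
        gain = np.where(self.activity > self.thresh, self.boost, 1.0)
        return out * gain                          # astrocyte modulation

layer = NeuronGliaLayer(4, 3)
x = np.ones(4)
outputs = [layer.forward(x) for _ in range(5)]    # repeated identical stimulus
print(np.round(outputs[-1], 3))
```

With a repeated stimulus, the astrocyte counters of the continuously firing neurons cross the threshold and their outputs are boosted relative to the first presentation, a crude stand-in for the activity-dependent neuron-astrocyte connections described above.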
Abstract:
In the era of precision medicine and big medical data sharing, it is necessary to manage the workflow of digital radiological big data in a productive and effective way. In particular, it is now possible to extract information “hidden” in digital images in order to create diagnostic algorithms that help clinicians set up more personalized therapies, which are a particular target of modern oncological medicine. Digital images generated by the patient have a “texture” structure that is not visible but encoded; it is “hidden” because it cannot be recognized by sight alone. Thanks to artificial intelligence, pre- and post-processing software and the generation of mathematical calculation algorithms, we could perform a classification based on non-visible data contained in radiological images. Being able to calculate the volume of tissue body composition could lead to creating clustered classes of patients inserted in standard morphological reference tables, based on human anatomy distinguished by gender and age, and perhaps in the future also by race. Furthermore, the branch of “morpho-radiology” is a useful modality for solving problems regarding personalized therapies, which are particularly needed in the oncological field. Currently, oncological therapies are no longer based on generic drugs but on targeted, personalized therapy. The lack of gender- and age-specific therapy tables could be filled thanks to the application of morpho-radiology data analysis.
Abstract:
The use of Optical Character Recognition (OCR) systems is a widespread technology in the worlds of Computer Vision and Machine Learning. It is a topic that interests many fields; in the automotive sector, for example, it becomes a specialized task known as License Plate Recognition, useful for many applications from toll-road automation to intelligent payments. However, OCR systems need to be very accurate and generalizable in order to be able to extract the text of license plates under highly variable conditions, from the type of camera used for acquisition to light changes. Such variables compromise the quality of digitized real scenes, causing the presence of noise and degradation of various types, which can be minimized with the application of modern approaches for image super-resolution and noise reduction. One class of them is known as Generative Neural Networks, which are a very strong ally for the solution of this popular problem.
Abstract:
In this paper, a framework for the detection of human skin in digital images is proposed. The framework is composed of a training phase and a detection phase. A skin class model is learned during the training phase by processing several training images in a hybrid and incremental fuzzy learning scheme. This scheme combines unsupervised and supervised learning: unsupervised, by fuzzy clustering, to obtain clusters of color groups from training images; and supervised, to select the groups that represent skin color. At the end of the training phase, aggregation operators are used to combine the selected groups into a skin model. In the detection phase, the learned skin model is used to detect human skin in an efficient way. Experimental results show robust and accurate human skin detection performed by the proposed framework.
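The unsupervised step can be illustrated with the standard fuzzy c-means update equations (the paper's scheme is hybrid and incremental, and details beyond "fuzzy clustering" are not given in the abstract; the color data below are synthetic):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Standard FCM: alternate weighted-centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m - 1))
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return U, centers

# Toy "pixel colors": two tight RGB clusters (skin-like and background-like).
rng = np.random.default_rng(1)
skin = rng.normal([0.85, 0.60, 0.50], 0.02, (100, 3))
bg = rng.normal([0.20, 0.50, 0.20], 0.02, (100, 3))
X = np.vstack([skin, bg])

U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=0)
print(np.round(U.sum(axis=0)[:3], 6))        # memberships of each pixel sum to 1
```

In the framework above, a supervisor would then label which of the discovered color groups correspond to skin, and aggregation operators would combine those groups into the final skin model.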
Abstract:
The classical approach for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
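The separability idea behind such fast transforms can be checked numerically with the standard Kronecker identity (A ⊗ B) vec(X) = vec(A X Bᵀ) for a row-major vec; this generic linear-algebra identity, not the paper's exact transform, is what lets a separable array avoid ever forming the large matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))      # factor along one array axis (toy sizes)
B = rng.standard_normal((7, 4))      # factor along the other axis
X = rng.standard_normal((5, 4))      # hypothetical image on a 5x4 grid

# Naive route: explicitly build the (42 x 20) Kronecker product and multiply.
slow = np.kron(A, B) @ X.ravel()

# Separable route: two small matrix products, never forming kron(A, B).
fast = (A @ X @ B.T).ravel()

print(np.allclose(slow, fast))
```

For realistic grid sizes the explicit Kronecker matrix becomes enormous while the two-small-products route stays cheap, which is the source of the orders-of-magnitude speedups claimed above.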