7 results for Visual Object Identification Task
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Application of dataset fusion techniques to an object detection task, using deep learning models such as convolutional neural networks, in order to create a single RCNN architecture able to run inference with good performance on two distinct datasets from different domains.
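A minimal sketch of the dataset-fusion idea, assuming PyTorch-style detection datasets (the class names, target layout and label maps are illustrative, not the thesis' actual code):

    # Illustrative sketch: remap two detection datasets with different label
    # spaces into one shared space, so a single RCNN can be trained on both.
    from torch.utils.data import Dataset, ConcatDataset

    class RemappedDetectionDataset(Dataset):
        """Wraps a detection dataset and remaps its class ids into the fused space."""
        def __init__(self, base, label_map):
            self.base = base            # yields (image, {"boxes": ..., "labels": ...})
            self.label_map = label_map  # dataset-specific class id -> fused class id

        def __len__(self):
            return len(self.base)

        def __getitem__(self, idx):
            image, target = self.base[idx]
            target["labels"] = [self.label_map[c] for c in target["labels"]]
            return image, target

    # A single detector then trains on the concatenation of both domains:
    # fused = ConcatDataset([RemappedDetectionDataset(dataset_a, map_a),
    #                        RemappedDetectionDataset(dataset_b, map_b)])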
Abstract:
Generic object recognition is an important function of the human visual system, and everybody finds it highly useful in everyday life. For an artificial vision system it is a very hard, complex and challenging task, because instances of the same object category can generate very different images depending on variables such as illumination conditions, the pose of the object, the viewpoint of the camera, partial occlusions, and unrelated background clutter. The purpose of this thesis is to develop a system able to classify objects in 2D images based on context and to identify the category to which each object belongs. Given an image, the system classifies it and decides the correct category of the object. A further objective of this thesis is to test the performance and precision of different supervised Machine Learning algorithms on this specific task of object image categorization. Across different experiments, the implemented application shows good categorization performance despite the difficulty of the problem. The project remains open to future improvement: it is possible to implement new algorithms, including ones yet to be invented, or to use other feature-extraction techniques to make the system more reliable. The application can be installed inside an embedded system and, after training (performed outside the system), becomes able to classify objects in real time. The information provided by a 3D stereo camera, developed within the Department of Computer Engineering of the University of Bologna, can be used to improve the accuracy of the classification task: the idea is to segment a single object in a scene using the depth provided by the stereo camera, making the classification more accurate.
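A minimal sketch of the algorithm comparison the abstract describes, assuming features have already been extracted into a matrix X with labels y, and using scikit-learn (the specific classifiers and parameters are illustrative):

    # Illustrative sketch: compare supervised classifiers on precomputed
    # image features using cross-validated accuracy.
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    def compare_classifiers(X, y):
        # X: (n_images, n_features) feature matrix; y: category labels.
        models = {
            "svm": SVC(kernel="rbf"),
            "random_forest": RandomForestClassifier(n_estimators=100),
            "knn": KNeighborsClassifier(n_neighbors=5),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: mean accuracy {scores.mean():.3f}")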
Abstract:
Alpha oscillatory activity has long been associated with perceptual and cognitive processes related to attention control. The aim of this study is to explore the task-dependent role of alpha frequency in a lateralized visuo-spatial detection task. Specifically, the thesis focuses on consolidating the scientific literature's knowledge about the role of alpha frequency in perceptual accuracy, and on deepening the understanding of what determines trial-by-trial fluctuations of alpha parameters and how these fluctuations influence overall task performance. The hypotheses, confirmed empirically, were that different implicit strategies are put in place depending on the task context, in order to maximize performance through an optimal distribution of resources (namely alpha frequency, positively associated with performance): "lateralization" of attentive resources towards one hemifield should be associated with a higher alpha frequency difference between the contralateral and ipsilateral hemispheres, while "distribution" of attentive resources across hemifields should be associated with a lower alpha frequency difference between hemispheres. These strategies, adopted by participants according to their brain capabilities, proved adaptive or maladaptive depending on the task: "distribution" of attentive resources appeared to be the best strategy when the distribution probability between hemifields was balanced (the neutral condition task), whereas "lateralization" appeared more effective when the distribution probability was biased towards one hemifield (the biased condition task).
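A minimal sketch of the central quantity the hypotheses are phrased in terms of, the per-trial alpha frequency difference between hemispheres, assuming single-channel NumPy arrays and SciPy (band limits and spectral parameters are illustrative):

    # Illustrative sketch: estimate the alpha peak frequency per hemisphere
    # and take their difference (positive -> higher contralateral frequency).
    import numpy as np
    from scipy.signal import welch

    def alpha_peak(signal, fs, band=(8.0, 13.0)):
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return freqs[mask][np.argmax(psd[mask])]

    def alpha_frequency_difference(contra, ipsi, fs):
        return alpha_peak(contra, fs) - alpha_peak(ipsi, fs)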
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing and space industries. A robot, therefore, is a system containing sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object but also the effect they want to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian Network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing the system to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning. Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, because as more tasks and object features are introduced in the future, a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
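A minimal sketch of score-based structure learning on the kind of tabular sensor/task data the abstract mentions, using pgmpy's hill-climbing with a BIC score as one common choice (an assumption; the abstract does not name the algorithm or library):

    # Illustrative sketch: learn a Bayesian network structure from data and
    # return its edges, to be compared with the expert-designed network.
    import pandas as pd
    from pgmpy.estimators import HillClimbSearch, BicScore

    def learn_structure(data: pd.DataFrame):
        # One column per variable (sensor feature or symbolic task label).
        search = HillClimbSearch(data)
        model = search.estimate(scoring_method=BicScore(data))
        return sorted(model.edges())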
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
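A minimal sketch of the experimental protocol implied by the final claim: train each model family on shrinking subsets of the data and compare the resulting accuracy curves (all function names are placeholders, not the dissertation's code):

    # Illustrative sketch: measure accuracy as a function of training-set
    # size; run once per model family (CNN, HTM) and compare the curves.
    import random

    def accuracy_vs_training_size(train_fn, eval_fn, dataset, sizes, seed=0):
        # train_fn(subset) -> model; eval_fn(model) -> accuracy on a fixed test set.
        rng = random.Random(seed)
        return {n: eval_fn(train_fn(rng.sample(dataset, n))) for n in sizes}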
Abstract:
In this Bachelor Thesis I want to provide readers with tools and scripts for the control of a 7-DOF manipulator, backed up by some theory of Robotics and Computer Science in order to better contextualize the work done. In practice, we will see the software and development environments most commonly used to cope with this task: these include ROS, along with visual simulation through VREP and RVIZ, and an almost "stand-alone" ROS extension called MoveIt!, a very complete programming interface for trajectory planning and obstacle avoidance. As the introductory chapter will make clear, the capability of detecting collision objects through a camera sensor and re-planning to the desired end-effector pose is not enough. In fact, this work is part of a more complex system in which the recognition of particular objects is needed. Using a ROS package and customized scripts, a detailed procedure will be provided on how to recognize a particular object, retrieve its reference frame with respect to a known one, and then navigate to that target. Together with the technical details, the aim is also to report working scripts and a specific appendix (A) readers can refer to if they wish to put things together.
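A minimal sketch of the register-then-plan step using MoveIt!'s Python interface (ROS 1 moveit_commander; the group name "manipulator" and all poses are assumptions, not the thesis' actual configuration):

    # Illustrative sketch: add the detected object to the planning scene as a
    # collision box, then plan and execute a motion to a target pose.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import PoseStamped

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("reach_detected_object")

    scene = moveit_commander.PlanningSceneInterface()
    group = moveit_commander.MoveGroupCommander("manipulator")

    # Register the detected object so the planner avoids it.
    obstacle = PoseStamped()
    obstacle.header.frame_id = group.get_planning_frame()
    obstacle.pose.position.x = 0.5
    obstacle.pose.orientation.w = 1.0
    scene.add_box("detected_object", obstacle, size=(0.05, 0.05, 0.05))

    # Plan and execute a motion to the desired end-effector pose.
    target = PoseStamped()
    target.header.frame_id = group.get_planning_frame()
    target.pose.position.x = 0.4
    target.pose.position.z = 0.3
    target.pose.orientation.w = 1.0
    group.set_pose_target(target)
    group.go(wait=True)
    group.stop()
    group.clear_pose_targets()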
Abstract:
The thesis consists of a study of the state of the art in visual relationship detection. Starting from a brief introduction covering object detection and the first works in which relationships were used, the main methods used to solve the problem are then discussed.