894 results for Standardization in robotics
Abstract:
Tactile sensors play an important role in robotic manipulation, enabling dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback from tactile and visual sensors. The framework allows new visual controllers to be defined that track the object's motion path while taking into account both the dynamics model of the robot hand and the fingertip grasping forces under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function that determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors present experimental verification of several controllers derived from the framework on a real robotic manipulation system.
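As an illustration of how a cost function can shape the distribution of control effort over the joints, the sketch below resolves a desired fingertip velocity into joint velocities with a weighted, lightly damped pseudo-inverse; the Jacobian, weight matrix, and all parameters are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the paper's controller): joint-space resolution of a desired
# fingertip velocity, where a diagonal weight matrix W makes some joints "expensive" to move.
import numpy as np

def weighted_joint_velocities(J, xdot, W, damping=1e-3):
    """Approximately solve: min 0.5 * qdot^T W qdot  s.t.  J qdot ~= xdot
    via the weighted pseudo-inverse, with a small damping term for robustness."""
    W_inv = np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T + damping * np.eye(J.shape[0])
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)

# Hypothetical 3-DoF finger: 3x3 Jacobian, desired Cartesian fingertip velocity,
# and a weighting that penalizes motion of the proximal joint ten times more.
J = np.array([[0.10, 0.07, 0.03],
              [0.00, 0.05, 0.04],
              [0.02, 0.01, 0.05]])
xdot = np.array([0.01, 0.00, -0.005])
W = np.diag([10.0, 1.0, 1.0])
print(weighted_joint_velocities(J, xdot, W))
```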
Abstract:
The use of RGB-D sensors for mapping and recognition tasks in robotics and, more generally, for virtual reconstruction has increased in recent years. The key aspect of these sensors is that they provide both depth and color information from the same device. In this paper, we present a comparative analysis of the most important methods in the literature for registering subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Detailed experiments are then carried out to determine how the different methods behave depending on the application. For both applications, we used standard datasets as well as a new one built for object reconstruction.
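None of the compared registration methods is reproduced here, but as a minimal illustration of frame-to-frame registration, the sketch below aligns two consecutive RGB-D point clouds with point-to-point ICP using the Open3D library; the file names and distance threshold are hypothetical.

```python
# Minimal sketch: register two consecutive RGB-D frames (already converted to point
# clouds) with point-to-point ICP in Open3D. File names and threshold are hypothetical.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("frame_000.pcd")
target = o3d.io.read_point_cloud("frame_001.pcd")

result = o3d.pipelines.registration.registration_icp(
    source, target,
    0.05,                # maximum correspondence distance (5 cm)
    np.eye(4),           # initial guess: identity
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)   # 4x4 rigid transform mapping source onto target
```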
Abstract:
3D sensors provide valuable information for mobile robotic tasks such as scene classification or object recognition, but they often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques directly. Noise removal and downsampling have therefore become essential steps in 3D data processing. In this work, we propose a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and represents the 3D space by an induced Delaunay triangulation of the input. Experiments show how state-of-the-art keypoint detectors improve their performance when the GNG output representation is used as input data. Descriptors extracted at the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
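As a rough illustration of the idea, the following is a compact Growing Neural Gas sketch that downsamples a 3D point set into a small graph of representative nodes; all parameters are illustrative defaults and refinements such as pruning isolated nodes are omitted.

```python
# Compact Growing Neural Gas sketch for downsampling a 3D point cloud into a node graph.
# Parameters are illustrative; isolated-node pruning and other refinements are omitted.
import numpy as np

def growing_neural_gas(points, max_nodes=200, n_iter=20000, eps_w=0.05,
                       eps_n=0.006, age_max=50, lam=100, alpha=0.5, decay=0.995, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [points[rng.integers(len(points))].astype(float) for _ in range(2)]
    errors = [0.0, 0.0]
    edges = {}                                    # (i, j) with i < j -> edge age
    key = lambda a, b: (a, b) if a < b else (b, a)

    for t in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]
        d2 = [float(np.sum((n - x) ** 2)) for n in nodes]
        s1, s2 = (int(i) for i in np.argsort(d2)[:2])   # two nearest nodes
        for k in edges:                           # age edges emanating from the winner
            if s1 in k:
                edges[k] += 1
        edges[key(s1, s2)] = 0                    # (re)create the winner-pair edge
        errors[s1] += d2[s1]
        nodes[s1] += eps_w * (x - nodes[s1])      # move winner towards the sample
        for k in edges:                           # move topological neighbours slightly
            if s1 in k:
                nb = k[0] if k[1] == s1 else k[1]
                nodes[nb] += eps_n * (x - nodes[nb])
        edges = {k: a for k, a in edges.items() if a <= age_max}   # drop stale edges
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))            # node with largest accumulated error
            nbs = [k[0] if k[1] == q else k[1] for k in edges if q in k]
            if nbs:
                f = max(nbs, key=lambda i: errors[i])
                nodes.append(0.5 * (nodes[q] + nodes[f]))   # insert node between q and f
                errors[q] *= alpha
                errors[f] *= alpha
                errors.append(errors[q])
                r = len(nodes) - 1
                edges.pop(key(q, f), None)
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0
        errors = [e * decay for e in errors]      # global error decay
    return np.array(nodes)
```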
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that have been pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance by reducing the number of points or the number of iterations, or by lowering the cost of the most expensive phase: the closest-neighbor search. Although they decrease the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behavior of the algorithm in different topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy, and cost of the method and to determine which metric offers the best results. Given that distance computation represents a significant part of the algorithm's total workload, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios, and initial configurations.
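The abstract does not name the specific reduced-cost metrics, but a minimal way to experiment with the idea is to swap the Minkowski order used in the nearest-neighbor search of a basic point-to-point ICP, as in the sketch below; the metric choice, iteration count, and data handling are assumptions for illustration only.

```python
# Minimal point-to-point ICP sketch where the correspondence search can use cheaper
# Minkowski metrics (p=1 Manhattan, p=inf Chebyshev) instead of Euclidean (p=2).
# The rigid transform is still estimated in the least-squares (Kabsch/SVD) sense.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, metric_p=1, n_iter=30):
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src, p=metric_p)      # nearest neighbours under chosen metric
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```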
Abstract:
The EU has become a loose kind of social federation, a fact that has not been adequately taken into account due to the peculiarities of the Maastricht strategy for monetary integration. Yet, a new approach to the economic theory of federalism is required if one wants to analyze the most pressing issues of EU social policy. The social insurance view of redistribution and stabilization provides for such an approach. This view supports laboratory federalism in which it is the role of the EU Commission to contain systems competition in order to preserve "stability in diversity." The role of the EU level would be to promote horizontal and vertical learning processes and to make sure that stability concerns of the EU are taken seriously by member countries' governments. The minimum requirements framework for social policy that the EU Commission has adopted must be taken as a point of departure, even though it is a less than satisfactory approach from this point of view. Laboratory standardization, in contrast, would not set specific minimum requirements but meta-standards that protect systems functions and safeguard against systems failures.
Abstract:
Since the beginning of the growing interest in robotics, autonomous navigation has stood out as a problem of complex resolution and has therefore attracted broad interest in the scientific community. Moreover, the capabilities of autonomous navigation, combined with robotics, enable the development of a wide range of applications. The goal of autonomous navigation is to give a motorized device the ability to make decisions about its own locomotion. To that end, sensors such as IMUs, GPS receivers, and encoders are used to provide the data essential for navigation. The difficulty lies in correctly processing these signals, since they are susceptible to sources of noise. This work presents an autonomous navigation system applied to the control of a robot. For this purpose, an application was developed that hosts the entire localization, navigation, and control system, together with a graphical interface that allows the robot's autonomous movement to be visualized on a map. The Kalman filter is used as a probabilistic position estimation method, in which the signals from the various sensors are combined and filtered. Several tests were carried out to evaluate the robot's ability to reach the defined waypoints and its autonomy in following the intended trajectory.
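As a minimal illustration of the probabilistic estimation step described above, the sketch below fuses noisy 2D position measurements (e.g. GPS fixes) with a constant-velocity model using a linear Kalman filter; the matrices, time step, and noise levels are illustrative assumptions, not the thesis' actual tuning.

```python
# Minimal linear Kalman filter sketch: fuse noisy position fixes with a
# constant-velocity motion model. All matrices and noise levels are illustrative.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # we only observe position
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise
R = 1.0 * np.eye(2)               # measurement noise (e.g. GPS)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z = [x_meas, y_meas]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([1.2, 0.8]))   # one predict/update cycle
```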
Abstract:
Nos. 1-56, July 26, 1913-Aug. 15, 1914, were issued weekly in the form of leaflets; no. 57-92, Jan. 1915-Dec. 1917, monthly, in the form of pamphlets, containing studies in government; no. 93-95, irregularly issued.
Abstract:
Mode of access: Internet.
Abstract:
This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art regarding the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, the state of the art applied to ground environments (both indoor and outdoor) has reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption to enable a reasonable mission duration. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field for the duration of the mission within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be operated remotely for long periods, but they need support from an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and used as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations such as object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as in the case of special-purpose, dedicated hardware. This flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering that varies with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation, and network capabilities have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
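The thesis' actual stereo pipeline is not reproduced here; as a minimal sketch of the kind of perception such a platform supports, the following computes a dense disparity map from a rectified stereo pair with OpenCV's semi-global matcher (file names and parameters are hypothetical).

```python
# Minimal sketch: dense disparity from a rectified stereo pair using OpenCV's
# semi-global block matching. File names and matcher parameters are hypothetical.
import cv2

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0   # SGBM returns fixed-point values

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```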
Abstract:
The main objective of food safety policy is to guarantee consumer health through specific safety rules and protocols. In order to meet the requirements of food safety and quality standardization, in 2002 the European Parliament and the Council of the EU (Regulation (EC) 178/2002 (EC, 2002)) sought to harmonize concepts, principles, and procedures so as to provide a common basis for the regulation of food and feed originating from Member States at the Community level. The formalization of rules and standardization protocols should, however, proceed through a more detailed and accurate understanding and harmonization of the global (macroscopic), pseudo-local (mesoscopic), and, where applicable, local (microscopic) properties of food products. The main goal of this doctoral thesis is to illustrate how computational techniques can provide valuable support for such analyses, through (i) the application of established protocols and (ii) the improvement of widely applied techniques. A direct demonstration of the potential already offered by computational approaches is given in the first study, in which docking-based virtual screening was applied to assess the preliminary xeno-androgenicity of several food contaminants. The second and third studies concern the development and validation of new physico-chemical descriptors in a 3D-QSAR context. Named HyPhar (Hydrophobic Pharmacophore), the new methodology was used to explore the issue of selectivity between structurally related molecular targets, and it thus proved to possess the applicability and adaptability required in a food context. Overall, the results allow us to be confident in the potential impact that in silico techniques can have on the identification and clarification of molecular events involved in the toxicological and nutritional aspects of food.
Abstract:
The present thesis investigates mode-related aspects of biology lecture discourse and attempts to identify the position of this variety along the continuum from spontaneous spoken to planned written language. Nine lectures (totalling 43,000 words), consisting of three sets of three lectures each, given by three lecturers at Aston University, make up the corpus. The indeterminacy of the results obtained from the investigation of grammatical complexity, as measured by subordination, motivates the need to take the analysis beyond sentence level to the study of mode-related aspects in the use of sentence-initial connectives, sub-topic shifting, and paraphrase. It is found that biology lecture discourse combines features typical of speech and writing at both sentence and discourse level: thus, subordination is used more than coordination, but sentences of only one degree of complexity are favoured; some sentence-initial connectives are found only in uses typical of spoken language, but sub-topic shift signalling (generally introduced by a connective), typical of planned written language, is a major feature of the lectures; syntactic and lexical revision, repetition, and interrupted structures are found in the sub-topic shift signalling utterance and in paraphrase, but the text is also amenable to analysis into sentence-like units. On the other hand, it is also found that: (1) while there are some differences in the use of a given feature, inter-speaker variation is on the whole not significant; (2) mode-related aspects are often motivated by the didactic function of the variety; and (3) the structuring of the text follows a sequencing whose boundaries are marked by sub-topic shifting and the summary paraphrase. This study enables us to draw four theoretical conclusions: (1) mode-related aspects cannot be approached as a simple dichotomy, since a combination of aspects of both speech and writing is found in a given feature, and it is necessary to go to the level of textual features to identify mode-related aspects; (2) homogeneity is dominant in this sample of lectures, which suggests a high level of standardization in this variety; (3) the didactic function of the variety is manifested in some mode-related aspects; (4) the features studied play a role in the structuring of the text.
Abstract:
Productivity measurement poses a challenge for service organizations. Conventional management wisdom holds that this challenge is rooted in the difficulty of accurately quantifying service inputs and outputs. Few service firms have adequate service productivity measurement (SPM) systems in place and implementing such systems may involve organizational transformation. Combining field interviews and literature-based insights, the authors develop a conceptual model of antecedents of SPM in service firms and test it using data from 276 service firms. Results indicate that one out of five antecedents affects the choice to use SPM, namely, the degree of service standardization. In addition, all five hypothesized antecedents and one additional antecedent (perceived appropriateness of the current SPM) predict the degree of SPM usage. In particular, the degree of SPM is positively influenced by the degree of service standardization, service customization, investments in service productivity gains, and the appropriateness of current service productivity measures. In turn, customer integration and the perceived difficulty of measuring service productivity negatively affect SPM. The fact that customer integration impedes actual measurement of service productivity is a surprising finding, given that customer integration is widely seen as a means to increase service productivity. The authors conclude with implications for service organizations and directions for research.
Abstract:
Epidemiological surveys are important for obtaining information on the prevalence and etiology of mouth diseases, since the data collected permit health actions to be planned, performed, and assessed. Methodological uniformity is necessary, however, to maintain reproducibility, validity, and reliability, and to allow national and international comparisons. The initiative of the World Health Organization (WHO) as an advisor in ongoing surveys has been extremely useful, stimulating standardization in all countries. In 1991, a Portuguese version of the 1987 third edition of Oral Health Surveys - basic methods, an instruction manual for performing epidemiological surveys, was published and became a reference for many parts of Brazil and the world. The present analysis found conflicting points in relation to sample size, calibration of the examiners, and criteria for evaluating oral health and treatment needs. In conclusion, due to the dynamic nature of scientific knowledge and considering the regional differences in the development of oral diseases, we recommend that proposals for standardizing surveys be reviewed periodically. Other important issues may not have been detected in this analysis, which calls for a thorough discussion within the dentistry community as a whole.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
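For reference, the standard Bingham density that the mirrored normal-Bingham distribution generalizes on the unit hypersphere can be written as below; the exact form of the mirrored normal-Bingham distribution itself is not reproduced here.

```latex
% Antipodally symmetric Bingham density on the unit hypersphere S^{d-1},
% with orthogonal matrix M, diagonal concentration matrix Z, and normalizer F(Z):
p(\mathbf{x};\, M, Z) \;=\; \frac{1}{F(Z)}\,
  \exp\!\left(\mathbf{x}^{\top} M Z M^{\top} \mathbf{x}\right),
\qquad \mathbf{x} \in S^{d-1}.
```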
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
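For context, the image-level inverse perspective mapping that the thesis moves to feature space is typically implemented as a ground-plane homography warp; the sketch below shows that traditional image warp with OpenCV (the point correspondences and image size are hypothetical, and this is not the feature-level method described above).

```python
# Sketch of traditional image-level inverse perspective mapping: warp a road-plane
# trapezoid to a bird's-eye rectangle. Point correspondences and sizes are hypothetical.
import cv2
import numpy as np

img = cv2.imread("frame.png")
src = np.float32([[420, 380], [860, 380], [1180, 700], [100, 700]])   # trapezoid on the ground plane
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])        # bird's-eye rectangle
H = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(img, H, (1280, 720))
cv2.imwrite("birds_eye.png", birds_eye)
```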
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.