962 results for HARDWARE
Abstract:
Previous research in force control has focused on the choice of appropriate servo implementation without corresponding regard for the choice of mechanical hardware. This report analyzes the effect of mechanical properties such as contact compliance, actuator-to-joint compliance, torque ripple, and highly nonlinear dry friction in the transmission mechanisms of a manipulator. A set of requirements for high performance then guides the development of mechanical-design and servo strategies for improved performance. A single-degree-of-freedom transmission testbed was constructed that confirms the predicted effect of Coulomb friction on robustness; the design and construction of a cable-driven, four-degree-of-freedom, "whole-arm" manipulator illustrates the recommended design strategies.
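The report's central prediction, that Coulomb friction in the transmission limits force-control robustness, can be previewed with a minimal stick-slip simulation. The sketch below is a generic one-degree-of-freedom model with assumed parameters, not the report's testbed: a proportional force loop presses a mass against a stiff surface through a transmission with Coulomb friction, and stiction leaves a residual force error.

```python
import math

# Assumed toy parameters, not the report's hardware.
m, k_env, f_c = 1.0, 1e4, 2.0         # mass [kg], contact stiffness [N/m], Coulomb level [N]
kp, dt, f_des = 0.5, 1e-4, 10.0       # force-loop gain, time step [s], desired force [N]
V_EPS = 1e-3                          # velocity below which the mass can stick [m/s]

x, v = 0.0, 0.0
for _ in range(int(2.0 / dt)):
    f_contact = k_env * max(x, 0.0)            # the surface pushes back only in contact
    u = f_des + kp * (f_des - f_contact)       # feedforward plus proportional feedback
    drive = u - f_contact                      # net non-friction force on the mass
    if abs(v) < V_EPS and abs(drive) <= f_c:
        v, a = 0.0, 0.0                        # stiction: friction cancels the drive
    else:
        ref = v if abs(v) >= V_EPS else drive  # friction opposes motion (or breakaway)
        a = (drive - math.copysign(f_c, ref)) / m
    v += a * dt
    x += v * dt

# Any error with |(1 + kp) * error| <= f_c can persist: a friction dead band.
print(f"residual force error: {f_des - k_env * max(x, 0.0):+.3f} N")
```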
Abstract:
Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system's dynamic response. There is a great deal of literature pertaining to modeling and control but little dealing with the shaping of system inputs. Chapter 2 examines two input-shaping techniques based on frequency-domain analysis. The first involves the use of the first derivative of a Gaussian exponential as a driving-function template. The second, acausal filtering, involves removal of energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution for the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives, and several extensions are presented. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, Chapter 6 compares the new approach to eleven standard filters. The new technique is shown to result in less residual vibration, better robustness to uncertainty in system parameters, and less computation than other currently used shaping techniques.
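Input shaping of the kind Chapters 2 through 4 build on admits a compact illustration. The sketch below implements the standard two-impulse zero-vibration (ZV) shaper, a common baseline for such techniques rather than the thesis's new class of driving functions; the mode frequency and damping are assumed values.

```python
import numpy as np

def zv_shaper(omega_n, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one second-order mode."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    t2 = np.pi / (omega_n * np.sqrt(1.0 - zeta ** 2))   # half the damped period
    return np.array([1.0, K]) / (1.0 + K), np.array([0.0, t2])

def shape(command, dt, amps, times):
    """Convolve a sampled command with the shaper's impulse sequence."""
    out = np.zeros(len(command) + int(round(times[-1] / dt)))
    for a, t in zip(amps, times):
        k = int(round(t / dt))
        out[k:k + len(command)] += a * command
    return out

# Example: shape a unit step for an assumed 1 Hz mode with 5% damping.
dt = 0.001
amps, times = zv_shaper(omega_n=2 * np.pi * 1.0, zeta=0.05)
shaped = shape(np.ones(2000), dt, amps, times)   # a two-stage "staircase" step
```

Driving the mode with the shaped step instead of the raw step cancels the residual vibration exactly when the assumed frequency and damping are correct, at the cost of a half-period delay.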
Abstract:
We wish to design a diagnostic for a device from knowledge of its structure and function. The diagnostic should achieve coverage of the faults that can occur in the device, and should strive for specificity in its diagnosis when it detects a fault. A system is described that uses a simple model of hardware structure and function, representing the device in terms of its internal primitive functions and connections. The system designs a diagnostic in three steps. First, an extension of path sensitization is used to design a test for each of the connections in the device. Next, the resulting tests are improved by increasing their specificity. Finally, the tests are ordered so that each relies on the fewest possible connections. We describe an implementation of this system and show examples of the results for some simple devices.
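Full path sensitization (as in the D-algorithm) is intricate, but the notion of "a test for a connection" can be shown in miniature: find an input vector on which the correct device and a copy with that connection stuck at a fixed value disagree. The circuit, net names, and fault model below are hypothetical.

```python
from itertools import product

def simulate(inputs, stuck=None):
    """Evaluate a toy device; `stuck` optionally pins one connection to a value."""
    net = dict(inputs)
    def drive(name, value):                     # every internal connection is named
        net[name] = stuck[1] if stuck and stuck[0] == name else value
    drive("n1", net["a"] and net["b"])          # AND gate
    drive("n2", not net["c"])                   # inverter
    drive("out", net["n1"] or net["n2"])        # OR gate
    return net["out"]

def find_test(connection, value):
    """An input vector tests `connection` stuck-at-`value`
    if the good and faulty devices disagree on it."""
    for a, b, c in product((False, True), repeat=3):
        vec = {"a": a, "b": b, "c": c}
        if simulate(vec) != simulate(vec, stuck=(connection, value)):
            return vec
    return None   # the fault is undetectable at the output

print(find_test("n1", False))   # a vector that detects n1 stuck-at-0
```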
Abstract:
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy, and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly for low signal-to-noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
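Contribution (1), approximating an optimal Bayesian estimator by Monte Carlo, can be sketched for the simplest case: a binary image with an Ising-type Gibbsian prior and a channel that flips each pixel independently. The sampler below approximates the MPM (maximizer of posterior marginals) estimate by a per-pixel majority vote over Gibbs sweeps; all parameters are illustrative, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_mpm(y, beta=1.0, flip_prob=0.2, sweeps=60, burn=10):
    """Approximate the MPM estimate of a +/-1 image under an Ising prior.

    y: +/-1 observations, assumed produced by flipping each true pixel
    independently with probability `flip_prob` (illustrative noise model).
    """
    h = 0.5 * np.log((1.0 - flip_prob) / flip_prob)   # data-term weight
    x = y.copy()
    votes = np.zeros(y.shape)
    n, m = y.shape
    for sweep in range(sweeps):
        for i in range(n):
            for j in range(m):
                nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < n and 0 <= b < m)
                field = beta * nb + h * y[i, j]       # half the conditional log-odds
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
                x[i, j] = 1 if rng.random() < p_plus else -1
        if sweep >= burn:
            votes += (x == 1)
    # The per-pixel posterior majority approximates the MPM estimator.
    return np.where(votes / (sweeps - burn) > 0.5, 1, -1)

# Toy usage: restore a noisy two-region image (synthetic data).
truth = np.ones((32, 32), dtype=int)
truth[16:, :] = -1
noisy = truth * np.where(rng.random(truth.shape) < 0.2, -1, 1)
restored = gibbs_mpm(noisy)
print("pixel agreement with truth:", (restored == truth).mean())
```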
Abstract:
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g., the Cray T3 series) and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications that use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
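The core SFA idea, a single global name whose per-node facets are allocated only on first touch, can be sketched in a few lines of software. The names and structure below are illustrative, not the thesis's hardware interface.

```python
class SparselyFacetedArray:
    """Toy software model of an SFA: one global name, per-node facets
    allocated only when a node first touches the array."""

    def __init__(self, name, facet_size):
        self.name = name
        self.facet_size = facet_size
        self.facets = {}        # node id -> locally allocated storage
        self.refcount = 0       # hook for the reference-counting GC strategy

    def facet(self, node):
        """Return `node`'s facet, lazily allocating it on first access."""
        if node not in self.facets:
            self.facets[node] = [None] * self.facet_size
        return self.facets[node]

# Only nodes that actually use the array pay for storage:
queues = SparselyFacetedArray("work-queues", facet_size=4)
queues.facet(3)[0] = "task"     # the facet on node 3 is allocated here
print(sorted(queues.facets))    # -> [3]; no other node allocated anything
```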
Abstract:
"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level, is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions of efficiently, not even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.
Abstract:
This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the swing phase) given the desired trajectory; i.e., the Inverse Dynamics problem. It investigates the high degree of parallelism inherent in the computations, and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose hardware or VLSI devices. In principle, the formulations should permit the calculations to run at a speed bounded only by I/O. The first is a parallel version of the recent linear Newton-Euler recursive algorithm. Its time cost is also linear in the number of joints, but the real-time coefficients are reduced by almost two orders of magnitude. The second formulation reports a new parallel algorithm which shows that it is possible to improve upon the linear time dependency: the real time required to perform the calculations increases only as the log2 of the number of joints. Either formulation lends itself to a systolic pipelined architecture in which complete sets of joint torques emerge at successive intervals of four floating-point operations. The hardware requirements necessary to support the algorithm are considered and found not to be excessive, and a VLSI implementation architecture is suggested. We indicate possible applications of incorporating dynamical considerations into trajectory planning; e.g., it may be possible to build an on-line trajectory optimizer.
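The log2 claim rests on a general fact: a recursion of the form x_i = a_i * x_{i-1} + b_i is a chain of affine maps, affine maps compose associatively, and an associative scan evaluates all n prefixes in O(log2 n) parallel rounds. The sketch below shows that skeleton on scalar affine maps; the actual algorithm scans the spatial-vector transforms of the Newton-Euler recursion, and the numbers here are made up.

```python
def compose(g, f):
    """Composition (g after f) of affine maps stored as (a, b): x -> a*x + b."""
    a2, b2 = g
    a1, b1 = f
    return (a2 * a1, a2 * b1 + b2)

def prefix_scan(maps):
    """Inclusive scan of affine maps in O(log2 n) rounds of composition."""
    res = list(maps)
    d = 1
    while d < len(res):
        # In hardware, every composition in one round happens simultaneously.
        res = [compose(res[i], res[i - d]) if i >= d else res[i]
               for i in range(len(res))]
        d *= 2
    return res

# Stand-in for the Newton-Euler joint recursion x_i = a_i * x_{i-1} + b_i:
maps = [(0.5, 1.0), (2.0, -1.0), (1.5, 0.5), (1.0, 2.0)]
x0 = 0.0
xs = [a * x0 + b for a, b in prefix_scan(maps)]   # every joint's value at once

# Agrees with the serial recursion:
serial, x = [], x0
for a, b in maps:
    x = a * x + b
    serial.append(x)
assert all(abs(p - q) < 1e-9 for p, q in zip(xs, serial))
```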
Abstract:
Introduction. Architecture of the BDCana system. Hardware and software requirements. Installation and configuration of the software. Installation of Apache. Installation of PHP. Configuration of Apache. Configuration of PHP. Installation of MySQL. Installation of phpMyAdmin. Installation of the BDCana system.
Abstract:
Agricultural research databases. Textual indexing with Lucene. Lucene. Lucene in the agricultural research database system. Performance comparison: hardware and software infrastructure. Test query sets and results. Considerations. The new BDPA.
Abstract:
Hardware and software requirements for access to the network's website. System structure. Composition of the website of the Mantiqueira-Mogiana Regional Agroecology Network. Home page. Calendar. News. Forum. Partners. Articles. Photo album. Links of interest. Questions and answers.
Abstract:
Several methods exist for evaluating vegetation growth and the rate of soil cover. Precise and rapid measurements can be obtained by digitally processing images generated by photographic or video cameras. Several image processors are available on the market with similar basic functions, but with particular features that may benefit the user to different degrees depending on the application. SPRING, developed by INPE, is in the public domain and goes beyond an image processor, including geoprocessing functions. ENVI was developed for the analysis of multispectral and hyperspectral images, and can also be used to process images obtained from video cameras, for example. The KS-300 is a hardware and software package intended for processing and quantifying microscopic images, allowing direct capture of images generated by magnifying lenses, electron microscopes, or video cameras. SIARCS was developed by Embrapa Instrumentação Agropecuária to streamline the capture of data from a system. This work presents the basic theoretical foundations of the image-analysis technique, the main characteristics of the software packages mentioned above, and their application in quantifying the rate of growth and of soil cover by plant species.
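The measurement at the heart of the paper, the fraction of soil covered by vegetation in a digital image, reduces to classifying pixels and averaging. The sketch below uses a simple excess-green (ExG = 2G - R - B) threshold as a generic stand-in for what packages such as SIARCS automate; the threshold and test image are assumed.

```python
import numpy as np

def cover_fraction(rgb, threshold=20):
    """Estimate vegetation cover from an RGB image (H x W x 3, values 0-255).

    A pixel counts as vegetation when its excess-green index
    ExG = 2G - R - B exceeds `threshold` (an assumed, scene-dependent value).
    """
    rgb = rgb.astype(np.int32)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return float(np.mean(exg > threshold))

# Toy usage with a synthetic image: left half "plant", right half "soil".
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (40, 160, 40)    # greenish pixels
img[:, 50:] = (120, 90, 70)    # brownish pixels
print(f"cover: {cover_fraction(img):.0%}")   # -> 50%
```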
Abstract:
Q. Meng and M. H. Lee, Learning and Control in Assistive Robotics for the Elderly, IEEE Conference on Robotics, Automation and Mechatronics (RAM), Singapore, 2004.
Abstract:
Whelan, K. E. and King, R. D. (2004) Intelligent software for laboratory automation. Trends in Biotechnology 22 (9): 440-445
Abstract:
Sauze, C. and Neal, M. 'An Autonomous Sailing Robot for Ocean Observation', in proceedings of TAROS 2006, Guildford, UK, Sept 4-6th 2006, pages 190-197.
Abstract:
A self-balancing robot is a device that manages to maintain its balance even though its center of mass lies above its axis of rotation. It is based on, and approximates, the inverted pendulum problem. This project covers the development and implementation of a self-balancing robot based on the Arduino platform. An Arduino board is used, and a shield (PCB) is designed and fabricated to carry the hardware elements deemed necessary. The project encompasses the study and assembly of the chassis and of the sensing, digital control, power supply, and motor systems.
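The control problem the project poses can be previewed with a toy linearized model: a discrete PID loop holding an inverted pendulum upright. All constants below are made up, and a real build would run equivalent C++ on the Arduino against IMU readings rather than a simulated plant.

```python
# Toy linearized inverted pendulum: theta'' = (g / l) * theta + u (assumed model).
g, l, dt = 9.81, 0.3, 0.01
kp, ki, kd = 80.0, 30.0, 8.0          # hand-picked illustrative PID gains

theta, omega = 0.1, 0.0               # initial tilt [rad] and tilt rate [rad/s]
integral, prev_err = 0.0, 0.0
for _ in range(500):                  # 5 seconds of simulated time
    err = -theta                      # upright setpoint: theta = 0
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = kp * err + ki * integral + kd * deriv   # control acceleration
    omega += ((g / l) * theta + u) * dt         # plant falls over unless u fights back
    theta += omega * dt

print(f"tilt after 5 s: {theta:+.5f} rad")      # ~0 once the loop has stabilized
```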