465 results for Automação com CLP
Resumo:
Navigation based on visual feedback for robots working in a closed environment can be achieved by mounting a camera on each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one camera or a small number of cameras covering the entire workspace can be shared by the whole team of robots, saving the cost of the many cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team's motion control, based on the visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of the robots. A major problem was labeling, in real time, the color of each colored point of the image under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method ensures that similar pixels are clustered into a single color class. The experimental results show that the position and orientation of each robot can be obtained with a precision of a few millimeters. The position and orientation are updated in real time, analyzing 30 frames per second.
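A minimal sketch of the kind of K-means color calibration described above, assuming RGB frames and the scikit-learn KMeans implementation; function names and parameters are illustrative, not the thesis code:

    import numpy as np
    from sklearn.cluster import KMeans

    def calibrate_color_classes(frame, n_classes=8, sample_step=10):
        # Sample pixels from a frame and cluster them into color classes,
        # so that each class centroid adapts to the current illumination.
        pixels = frame.reshape(-1, 3)[::sample_step].astype(float)
        return KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(pixels)

    def classify_pixels(model, frame):
        # Assign every pixel of a new frame to the nearest color-class centroid.
        labels = model.predict(frame.reshape(-1, 3).astype(float))
        return labels.reshape(frame.shape[:2])

Recalibrating periodically (rather than per frame) keeps the per-frame cost down to a nearest-centroid lookup, which is compatible with a 30 frames-per-second budget.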
Resumo:
There has been an increasing tendency toward the use of selective image compression, since many applications make use of digital images and, in some cases, loss of information in certain regions is not acceptable. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed in a lossless manner. A possible solution to this problem is the automatic selection of these regions, which is very difficult in the general case. Nevertheless, it is possible to use intelligent techniques to detect these regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed in a lossless manner. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. There are two options for automatic detection in addition to the manual one: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, providing the region-of-interest map as input.
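A minimal sketch of the wavelet decorrelation step only, assuming the PyWavelets library; the competitive-network quantization, mathematical morphology and adaptive Huffman stages of the proposed method are not reproduced here:

    import numpy as np
    import pywt

    def decorrelate(image, wavelet="haar", levels=2):
        # Multilevel 2-D wavelet decomposition: concentrates energy in the
        # approximation band, so later quantization/entropy coding is cheaper.
        return pywt.wavedec2(image.astype(float), wavelet, level=levels)

    def reconstruct(coeffs, wavelet="haar"):
        # Inverse transform: rebuilds the image from the kept coefficients.
        return pywt.waverec2(coeffs, wavelet)

In a selective scheme, coefficients covering the region-of-interest map would be kept intact while the remaining coefficients are quantized before entropy coding.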
Resumo:
The hardness test is widely used in materials research and evaluation for quality control. However, the test results are subject to uncertainties introduced by the operator when measuring the diagonals of the impression made by the indenter in the sample. With this in mind, an automated hardness measurement apparatus was developed. The hardness value was obtained from the measurement of the plastic deformation undergone by the material under a known load. The deformation was calculated by measuring the difference between the advance and the retreat of a diamond indenter on the sample. The manual measurement of the diagonals was therefore unnecessary, reducing the source of error introduced by the operator. Stress versus strain curves could be analyzed from the acquired data, and a complete observation of the whole process became possible. The hardness results calculated with the experimental apparatus were then compared with those obtained with a commercial microhardness machine in order to assess its performance. In summary, material hardness could be measured through an automated method, which minimized operator-induced errors and increased the reliability of the analysis.
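As a hedged illustration (a standard depth-based relation from instrumented indentation, not necessarily the calibration adopted in this work), the hardness H can be estimated from the applied load F and a projected contact area inferred from the residual penetration depth h_p of an ideal Vickers-type indenter:

    H = \frac{F}{A_c(h_p)}, \qquad A_c(h_p) \approx 24.5\, h_p^{2}

where h_p is taken as the difference between the indenter's advance and retreat, so no optical measurement of the impression diagonals is required.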
Resumo:
Due to advances in the manufacturing process of orthopedic prostheses, the need for better shape-reading techniques (i.e., with less uncertainty) for the residual limb of amputees has become a challenge. Overcoming these problems means being able to obtain accurate geometric information about the limb and, consequently, better manufacturing processes for both transfemoral and transtibial prosthetic sockets. The key point is to customize these readings so that they are as faithful as possible to the real profile of each patient. Within this context, two prototype versions (α and β) of a 3D mechanical scanner for reading residual limb shape, based on reverse engineering techniques, were first designed. Prototype β is an improved version of prototype α, although it still works in analog mode. Both prototypes are capable of producing a CAD representation of the limb via appropriate graphical sheets and were conceived to work purely by mechanical means. The first results were encouraging, achieving a large decrease in measurement uncertainty compared with traditional methods, which are very inaccurate and outdated; for instance, it is not unusual to see such archaic methods rely on ordinary household measuring tapes to explore the limb's shape. Although prototype β improved the readings, it still required someone to input the plotted points (i.e., those marked on disk-shaped graphical sheets) into an academic CAD package called OrtoCAD. This task was performed by manual typing, which is time consuming and of very limited reliability. Furthermore, the number of coordinates obtained from the purely mechanical system is limited to the subdivisions of the graphical sheet (it records a point every 10 degrees with a resolution of one millimeter). These drawbacks were overcome in the second release of prototype β, for which an electronic variation of the reading table components was developed, now capable of performing an automatic reading (i.e., digital mode with no human intervention). Interface software (i.e., a driver) was built to facilitate data transfer. Much better results were obtained, with a lower degree of uncertainty (a point every 2 degrees with a resolution of 1/10 mm). Additionally, an algorithm was proposed to convert the CAD geometry used by OrtoCAD into a format suitable for rapid prototyping equipment, aiming at future automation of the manufacturing process of prosthetic sockets.
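A minimal sketch, under an assumed data layout, of converting such cylindrical readings (one radius per angular step at each cross-section height) into Cartesian points suitable for CAD or rapid-prototyping export; the names are illustrative and this is not the OrtoCAD conversion algorithm:

    import math

    def scan_to_points(sections, angle_step_deg=2.0):
        # 'sections' is assumed to be a list of (z_height_mm, radii_mm) pairs,
        # where radii_mm holds one radius per angular step around the limb.
        points = []
        for z, radii in sections:
            for i, r in enumerate(radii):
                theta = math.radians(i * angle_step_deg)
                points.append((r * math.cos(theta), r * math.sin(theta), z))
        return points

With a point every 2 degrees, each cross-section yields 180 points, which can then be triangulated into a surface mesh for rapid prototyping.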
Resumo:
Interest in the systematic analysis of astronomical time series, together with developments in astronomical instrumentation and automation over the past two decades, has raised several questions about how to analyze and synthesize the growing amount of data. These data have led to many discoveries in areas of modern astronomy such as asteroseismology, exoplanets and stellar evolution. However, data treatment and analysis methods have failed to keep pace with the development of the instruments themselves, although much effort has been made. In the present thesis we propose new methods of data analysis and two catalogs of variable stars that allowed the study of rotational modulation and stellar variability. The photometric databases of two distinct missions were analyzed: CoRoT (Convection Rotation and planetary Transits) and WFCAM (Wide Field Camera). Furthermore, the present work describes several methods for the analysis of photometric data and proposes and refines data selection techniques based on variability indices. Preliminary results show that these variability indices are more efficient than the indices most often used in the literature. An efficient selection of variable stars is essential to improve the efficiency of all subsequent steps. From these analyses two catalogs were obtained. First, from the WFCAM database we produced a catalog of 319 variable stars observed in the YZJHK photometric bands. These stars show periods ranging from ∼0.2 to ∼560 days, with variability signatures of RR Lyrae, Cepheids, LPVs and cataclysmic variables, among many others. Second, from the CoRoT database we selected 4,206 stars with typical signatures of rotational modulation, using a supervised process. These stars show periods ranging from ∼0.33 to ∼92 days, variability amplitudes from ∼0.001 to ∼0.5 mag, color index (J - H) from ∼0.0 to ∼1.4 mag, and CoRoT spectral types FGKM. The WFCAM variable star catalog is being used to compose a database of light curves to serve as templates for an automatic classifier of variable stars observed by the VVV project (Visible and Infrared Survey Telescope for Astronomy); moreover, it is a fundamental starting point for studying different scientific cases, for example a set of 12 young stars located in a star-forming region and the study of RR Lyrae stars, whose properties are not well established in the infrared. Based on the CoRoT results we were able to show, for the first time, the evolution of rotational modulation for a wide, homogeneous sample of field stars. The results are in agreement with those expected from stellar evolution theory. Furthermore, we identified 4 solar-type stars (with color indices, spectral type, luminosity class and rotation period close to the Sun's), as well as 400 M-giant stars of special interest for forthcoming studies. From the solar-type stars we can describe the future and past of the Sun, while the properties of M stars are not well known. Our results allow us to conclude that the color-period diagram depends strongly on reddening, which increases the uncertainties of the age-period relations derived in previous works using CoRoT data. This thesis provides a large data set for different scientific studies, such as magnetic activity, cataclysmic variables, brown dwarfs, RR Lyrae, solar analogs and giant stars, among others. For instance, these data will allow us to study the relationship of magnetic activity with stellar evolution. Besides these aspects, this thesis presents an improved classification for a significant number of stars in the CoRoT database and introduces a new set of tools that can be used to improve the entire process of photometric database analysis.
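As a hedged, generic illustration of the period-search step behind such light-curve catalogs (a plain Lomb-Scargle search with Astropy; the variability indices and supervised selection developed in the thesis are not reproduced):

    import numpy as np
    from astropy.timeseries import LombScargle

    def dominant_period(time, flux):
        # Generic period search for an unevenly sampled light curve: the
        # frequency with the highest Lomb-Scargle power is taken as the
        # candidate rotation/variability period.
        frequency, power = LombScargle(time, flux).autopower()
        return 1.0 / frequency[np.argmax(power)]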
Resumo:
This master's dissertation deals with motivation and the meaning of work among bank employees, from a cognitive perspective. Work is understood here in a social and subjective sense, since it involves the attribution of meaning. Motivation is the process that governs the choice among different possibilities of individual behavior, according to Expectancy Theory. This study aims to analyze the implications of productive restructuring, as related to technological innovation, organizational change and management, for motivation and the meaning of work. Thus, the objective of the research is to verify differences in motivation and the meaning of work among bank employees at two distinct moments of the productive restructuring of banking in Natal-RN. The research is divided into two parts. In the first, changes that occurred in banks between 1999 and 2005 were identified by means of interviews with 7 bank managers. The aspects analyzed were the intensification of training, the emphasis on quality of customer service, the use of automation/technology, staff stabilization, changes in staff profile, work intensification, etc. In the second study, the Inventory of Motivation and Work Meaning was applied, together with questions related to work focus and sociodemographic data, to 187 bank employees. The collected data were compared with data from previous studies. It was observed that productive restructuring is reflected in the meaning of work, increasing self-expression, economic reward and responsibility as attributes of working conditions, while the levels of dehumanization and exhaustion remain as characteristics of the real work environment. On the other hand, bank employees value justice and self-expression less and the survival perspective more, implying instrumental values attributed to work. As for motivation, it has increased among bank employees, who have greater expectations that their work will produce results, since they believe in their own influence on those results.
Resumo:
Recently, the focus given to Web Services and Semantic Web technologies has fostered the development of several research projects addressing the Web service composition issue in different ways. Meanwhile, the challenge of creating an environment in which an abstract business process can be specified and then automatically implemented by a composite service in a dynamic way is still considered an open problem. WSDL and BPEL, provided by industry, support only manual service composition because they lack the semantics needed for Web services to be discovered, selected and combined by software agents. Service ontologies provided by the Semantic Web enrich the syntactic descriptions of Web services to facilitate the automation of tasks such as discovery and composition. This work presents WebFlowAH, an environment for the specification and ad hoc execution of Web service-based business processes. WebFlowAH employs a common domain ontology to describe both Web services and business processes. It allows processes to be specified in terms of user goals or desires expressed with the concepts of this common domain ontology. This approach allows processes to be specified at an abstract, high level, relieving the user of the underlying details needed to effectively run the process workflow.
Resumo:
Some programs have their input data specified by formal context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of their testing process. Within this category of programs, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's input. Sentence generation takes into account the syntactic aspects defined by the input specification, i.e., the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation so that it produces sentences satisfying a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To simplify the use of the tool, EBNF notation was adopted for the input specification. Its parser was specified in the Meta-Environment tool for rapid prototyping.
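A minimal sketch of grammar-driven sentence generation, written in Python with a toy grammar rather than the Lua coroutine implementation described; the depth bound stands in for the coverage criteria that steer generation in the actual approach:

    import random

    # Toy context-free grammar: each nonterminal maps to a list of
    # alternatives, each alternative being a sequence of symbols.
    GRAMMAR = {
        "expr": [["term", "+", "expr"], ["term"]],
        "term": [["(", "expr", ")"], ["num"]],
        "num": [["0"], ["1"], ["2"]],
    }

    def derive(symbol, depth=0, max_depth=6):
        # Expand a symbol into a sentence; beyond max_depth always pick the
        # last (non-recursive) alternative so that derivation terminates.
        if symbol not in GRAMMAR:
            return [symbol]
        alternatives = GRAMMAR[symbol]
        alt = alternatives[-1] if depth >= max_depth else random.choice(alternatives)
        return [tok for s in alt for tok in derive(s, depth + 1, max_depth)]

    print(" ".join(derive("expr")))

A coverage-driven generator would replace the random choice with one that tracks which productions (or production pairs) have already been exercised.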
Resumo:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and unambiguous. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study applying the method to an example of a B specification from industry. Based on this case study we obtained input to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
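As a hedged illustration of equivalence class partitioning and boundary value analysis over a hypothetical numeric precondition (not the B-specific generation performed by the tool):

    def partition_tests(lower, upper):
        # For a hypothetical precondition of the form lower <= x <= upper:
        # positive cases cover the valid class and its boundaries,
        # negative cases fall just outside the valid class.
        positive = [lower, lower + 1, (lower + upper) // 2, upper - 1, upper]
        negative = [lower - 1, upper + 1]
        return positive, negative

    # Example: precondition 0 <= x <= 100
    valid, invalid = partition_tests(0, 100)

In the B setting, the invariant and the operation's precondition define the valid classes, and their negations define the negative cases.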
Resumo:
Automation is an important activity of the testing process and can significantly reduce development time and cost. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, most of them have important limitations, such as the need for manual valuation of test cases, refactoring of the generated code, and strong dependence on the structure of the HTML pages. In this work, we present a test specification language and a tool designed to minimize the impact of these limitations. The proposed language supports equivalence class criteria, and the tool, developed as a plug-in for the Eclipse platform, allows the generation of test cases through different combination strategies. To evaluate the approach, we used one of the modules of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Systems analysts and an IT technician who work as developers of that system participated in the evaluation.
Resumo:
Automation has become increasingly necessary in the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, many of them have important limitations, such as strong dependence on the structure of the HTML pages and the need for manual valuation of the test cases. In this work, we present a language for specifying acceptance test scenarios for Web applications, called IFL4TCG, and a tool that allows the generation of test cases from these scenarios. The proposed language supports the Equivalence Class Partitioning criterion, and the tool allows the generation of test cases that meet different combination strategies (i.e., Each-Choice, Base-Choice and All Combinations). In order to evaluate the effectiveness of the proposed solution, we used the language and the associated tool to design and execute acceptance tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can actually help to detect defects in Web applications.
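A hedged sketch of two of the combination strategies named above (Each-Choice and All Combinations) over hypothetical equivalence classes, in Python rather than the IFL4TCG tool itself:

    from itertools import product

    def all_combinations(classes):
        # All Combinations: one test case per element of the Cartesian
        # product of the equivalence classes of every parameter.
        return [dict(zip(classes, values)) for values in product(*classes.values())]

    def each_choice(classes):
        # Each-Choice: every equivalence class of every parameter appears in
        # at least one test case; shorter lists are padded with their first value.
        width = max(len(v) for v in classes.values())
        return [{p: v[i] if i < len(v) else v[0] for p, v in classes.items()}
                for i in range(width)]

    # Example: equivalence classes for a hypothetical login form
    classes = {"username": ["valid", "empty"],
               "password": ["valid", "empty", "too_long"]}

Here All Combinations yields 6 cases while Each-Choice yields 3, which illustrates the cost/coverage trade-off between the strategies.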
Resumo:
Typically, Web services contain only syntactic information that describes their interfaces. Due to this lack of semantic descriptions, service composition becomes a difficult task. To solve this problem, Web services can exploit ontologies for the semantic definition of their interfaces, thus facilitating the automation of service discovery, publication, mediation, invocation and composition. However, ontology languages such as OWL-S have constructs that are not easy to understand, even for Web developers, and the existing tools that support their use contain many details that make them difficult to manipulate. This work presents an MDD tool called AutoWebS (Automatic Generation of Semantic Web Services) for developing OWL-S semantic Web services. AutoWebS uses an approach based on UML profiles and model transformations for the automatic generation of Web services and their semantic descriptions. AutoWebS offers an environment that provides many of the features required to model, implement, compile and deploy semantic Web services.
Resumo:
Component-based development revolutionized the software development process, facilitating maintenance and providing more reliability and reuse. Nevertheless, even with all the advantages of component development, their composition remains an important concern. Verification through informal tests is not enough to achieve a safe composition, because such tests are not based on formal semantic models with which we can precisely describe a system's behaviour. In this context, formal methods provide ways to specify systems accurately through mathematical notation, providing, among other benefits, more safety. The formal method CSP enables the specification of concurrent systems and the verification of properties intrinsic to them, as well as refinement between different models. Some approaches apply constraints using CSP to check the behavior of component compositions, assisting in their verification in advance. Hence, aiming to assist this process, and considering that the software market increasingly demands automation to reduce work and provide business agility, this work presents a tool that automates the verification of component compositions, keeping all the complexity of the formal language hidden from users. Thus, through a simple interface, the BST (BRIC-Tool-Suport) tool helps to create and compose components, predicting undesirable behaviors in the system, such as deadlocks, in advance.
Resumo:
Control and automation of residential environments, or domotics, is an emerging area of computing application. The development of computational systems for domotics is complex, due to the diversity of potential users and because it is immersed in a context of emotional relationships and family construction. Currently, the development of this kind of system focuses mainly on physical and technological aspects. For this reason, in the present research gestural interaction is investigated from the viewpoint of Human-Computer Interaction (HCI). First, we approach the subject by constructing a conceptual framework for discussing the challenges of the area, integrated with the dimensions of people, interaction mode and domotics. A further analysis of the domain is carried out using the theoretical-methodological framework of Organizational Semiotics. Then we define recommendations for diversity that ground and inspire inclusive design, guided by physical, perceptual and cognitive abilities, aiming to better represent the diversity concerned. Although developers are supported by gesture recognition technologies that speed up development, they face another difficulty when they do not restrict the application's gestural commands to the standard gestures provided by development frameworks. Therefore, an abstraction of the gestural interaction was conceived through a formalization, described syntactically by building blocks that give rise to a grammar of gestural interaction and, semantically, approached from the viewpoint of the residential system. We then define a set of metrics grounded in the recommendations and described with information from the pre-established grammar, and we design and implement in Java, on the foundation of this grammar, a residential system based on gestural interaction for use with Microsoft Kinect. Lastly, we carry out an experiment with potential end users of the system in order to better analyze the research results.
Resumo:
The Hotelling T2 control chart has been the main statistical device used in monitoring multivariate processes. Currently, the technological development of control systems and automation enables information about production systems to be collected at a high rate, in very short time intervals, causing dependence between successive observations. This phenomenon, known as autocorrelation, causes a high rate of false alarms in the statistical control of multivariate processes, degrading chart performance. It entails a violation of the assumptions of independence and normality. In this thesis we considered not only the correlation between two variables, but also the dependence between observations of the same variable, that is, autocorrelation. The bivariate case and the effect of autocorrelation on the performance of the Hotelling T2 chart were studied by simulation.
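For reference, the standard form of the statistic for an individual p-variate observation x_t, with sample mean vector x-bar and sample covariance matrix S (the thesis's specific simulation design is not reproduced here):

    T^2_t = (\mathbf{x}_t - \bar{\mathbf{x}})^{\prime}\, \mathbf{S}^{-1}\, (\mathbf{x}_t - \bar{\mathbf{x}})

A signal is raised when T^2_t exceeds the upper control limit (for known parameters, UCL = \chi^2_{\alpha,p}). Autocorrelation between successive observations violates the independence assumption behind this limit, which is what inflates the false alarm rate studied here.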