976 results for Cost Environment
Abstract:
In the past few years, gadgets such as cellular phones, personal digital assistants and digital cameras have become more widespread, even among less technologically aware users. However, for several reasons, the factory floor itself seems to be hermetic to these changes ... After the fieldbus revolution, the factory floor has seen an increased use of ever more powerful programmable logic controllers and user interfaces, but the way they are used remains almost the same. We believe that new user-computer interaction techniques, including multimedia and augmented reality, combined with now affordable technologies such as wearable computers and wireless networks, can change the way factory personnel work together with the machines and the information system on the factory floor. This new age is already starting with innovative uses of communication networks on the factory floor, either using "standard" networks or enhancing industrial networks with multimedia and wireless capabilities.
Abstract:
This paper summarises the most important solutions that have emerged from the work carried out by our team within the framework of the EU (IST-1999-11316) project RFieldbus - High Performance Wireless Fieldbus in Industrial Multimedia-Related Environment. Within this project, Profibus was chosen as the fieldbus platform. Essentially, extensions to the current Profibus standard are being developed in order to provide Profibus with wireless, mobility and industrial multimedia capabilities. In fact, providing these extensions means fulfilling strong requirements, namely to encompass the communication between wired (currently available) and wireless/mobile devices and to support real-time control traffic and multimedia traffic in the same network.
Abstract:
Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) represent an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices and accommodating them on a single many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform, considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model that uses message passing as its communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. The analysis ensures that the derived solutions guarantee the fulfilment of the posed time constraints regarding worst-case communication latencies, while at the same time providing an environment in which load balancing can be performed for, e.g., thermal, energy, fault-tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently address the issues of pessimism and/or intractability in the analysis.
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
RadiaLE: A framework for designing and assessing link quality estimators in wireless sensor networks
Abstract:
Stringent cost and energy constraints impose the use of low-cost and low-power radio transceivers in large-scale wireless sensor networks (WSNs). This fact, together with the harsh characteristics of the physical environment, requires a rigorous WSN design. Mechanisms for WSN deployment and topology control, MAC and routing, and resource and mobility management greatly depend on reliable link quality estimators (LQEs). This paper describes the RadiaLE framework, which enables the experimental assessment, design and optimization of LQEs. RadiaLE comprises (i) the hardware components of the WSN testbed and (ii) a software tool for setting up and controlling the experiments, automating the gathering of link measurements through packet-statistics collection, and analyzing the collected data, allowing for the evaluation of LQEs. We also propose a methodology that allows (i) properly setting up different types of links and different types of traffic, (ii) collecting rich link measurements, and (iii) validating LQEs using a holistic and unified approach. To demonstrate the validity and usefulness of RadiaLE, we present two case studies: the characterization of low-power links and a comparison between six representative LQEs. We also extend the second study to evaluate the accuracy of the TOSSIM 2 channel model.
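As an illustration of the kind of estimators a framework such as RadiaLE is designed to evaluate, the sketch below computes two widely used LQEs, PRR (packet reception ratio) over a measurement window and ETX, from per-window packet counts. The window size and names are illustrative assumptions and are not part of the RadiaLE tool itself.

```python
# Illustrative only: window-based PRR and ETX link quality estimators,
# two common LQEs of the kind RadiaLE is meant to evaluate.
# Window size and names are assumptions, not part of RadiaLE itself.

def prr(received: int, sent: int) -> float:
    """Packet Reception Ratio over one measurement window."""
    return received / sent if sent else 0.0

def etx(prr_forward: float, prr_backward: float) -> float:
    """Expected Transmission Count from forward/backward reception ratios."""
    if prr_forward == 0.0 or prr_backward == 0.0:
        return float("inf")
    return 1.0 / (prr_forward * prr_backward)

# Example: 87 of 100 probes received in the forward direction, 92 of 100 backward.
pf, pb = prr(87, 100), prr(92, 100)
print(f"PRR(fwd)={pf:.2f}  PRR(bwd)={pb:.2f}  ETX={etx(pf, pb):.2f}")
```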
Abstract:
The present work aims to study the feasibility of deploying a farm of sea current turbines for electricity generation in Portugal. An overview of tides is given: what they are, how they are formed and how they can be predicted. A study of the energy of sea currents is also conducted, and existing ocean current technology is presented. A model of tidal height and current velocity is also developed. The energy produced by a hypothetical park built in Sines (Portugal) is calculated and, afterwards, an economic assessment is performed for two possible scenarios, together with a sensitivity analysis of the NPV (Net Present Value) and the LCOE (Levelized Cost of Energy). The conclusions about the feasibility of the projects are also presented. Despite being attractive due to its predictability, this energy source is not yet economically viable, as it is at an initial stage of development. To push investment in this technology, a feed-in tariff of at least €200/MWh should be considered.
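As a reminder of the two economic metrics used in the assessment, the sketch below computes NPV and LCOE from a stream of costs and annual energy production. The discount rate, lifetime and cash-flow figures are placeholder assumptions for illustration, not values from the study; only the €200/MWh tariff is taken from the abstract.

```python
# Illustrative only: NPV and LCOE for a hypothetical tidal-current farm.
# Discount rate, lifetime, costs and energy are placeholder assumptions,
# not figures from the study.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net Present Value; cash_flows[0] is the year-0 investment (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def lcoe(rate: float, costs: list[float], energy_mwh: list[float]) -> float:
    """Levelized Cost of Energy: discounted costs / discounted energy."""
    disc_costs = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy_mwh))
    return disc_costs / disc_energy

rate = 0.08                      # assumed discount rate
capex = 10_000_000.0             # assumed investment (EUR)
years = 20                       # assumed project lifetime
opex = [300_000.0] * years       # assumed annual O&M cost (EUR)
energy = [8_000.0] * years       # assumed annual production (MWh)
tariff = 200.0                   # EUR/MWh, feed-in tariff mentioned in the abstract

revenue = [e * tariff for e in energy]
cash = [-capex] + [r - o for r, o in zip(revenue, opex)]
print(f"NPV  = {npv(rate, cash):,.0f} EUR")
print(f"LCOE = {lcoe(rate, [capex] + opex, [0.0] + energy):.1f} EUR/MWh")
```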
Abstract:
This work focuses on highly dynamic distributed systems with Quality of Service (QoS) constraints (most importantly real-time constraints). For that purpose, real-time applications may benefit from code offloading techniques, so that parts of an application can be offloaded and executed, as services, by neighbour nodes that are willing to cooperate in such computations. These applications explicitly state their QoS requirements, which are translated into resource requirements in order to evaluate the feasibility of accepting other applications into the system.
Abstract:
In this paper, we focus on large-scale and dense Cyber-Physical Systems, and discuss methods that tightly integrate communication and computing with the underlying physical environment. We present the Physical Dynamic Priority Dominance ((PD)2) protocol, which exemplifies a key mechanism for devising low time-complexity communication protocols for large-scale networked sensor systems. We show that, using this mechanism, one can compute aggregate quantities such as the maximum or minimum of sensor readings with a time-complexity that is equivalent to essentially one message exchange. We also illustrate the use of this mechanism in the more complex task of computing the interpolation of smooth as well as non-smooth sensor data in very low time-complexity.
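To give a feel for why a dominance mechanism can yield an aggregate in essentially one message exchange, the sketch below simulates a bitwise dominance arbitration (in the style of CAN-like dominant/recessive bits) in which every node transmits its reading as a priority; after one arbitration round, all nodes know the network-wide maximum. This is a simplified illustration of the general idea only, not the (PD)2 protocol itself.

```python
# Illustrative only: bitwise dominance arbitration over a shared medium,
# a simplified sketch of the mechanism behind dominance-based protocols.
# It is NOT the (PD)2 protocol; readings and bit widths are assumptions.

def dominance_max(readings: list[int], bits: int = 16) -> int:
    """All nodes 'transmit' their reading MSB-first; a node withdraws as soon
    as it sends a recessive bit (0) while the medium carries a dominant bit (1).
    After one such round, the surviving value is the network-wide maximum."""
    active = list(readings)
    result = 0
    for b in reversed(range(bits)):
        # The medium behaves like a wired-OR: dominant (1) wins over recessive (0).
        medium_bit = max((r >> b) & 1 for r in active)
        result = (result << 1) | medium_bit
        # Nodes whose bit was recessive while the medium was dominant back off.
        active = [r for r in active if medium_bit == 0 or ((r >> b) & 1) == 1]
    return result

sensor_readings = [417, 362, 501, 288, 499]   # assumed raw sensor values
print(dominance_max(sensor_readings))          # -> 501, after one arbitration round
```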
Abstract:
Due to the growing complexity and adaptability requirements of real-time embedded systems, which often exhibit unrestricted inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the level of service of a dynamic task set within a useful and bounded time. This is even more difficult when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand. This paper proposes an iterative refinement approach for a service's QoS configuration that takes into account services' inter-dependencies and quality constraints, trading off the quality of the achieved solution against the cost of computation. Extensive simulations demonstrate that the proposed anytime algorithm is able to quickly find a good initial solution and effectively optimises the rate at which the quality of the current solution improves as the algorithm is given more time to run. The added benefits of the proposed approach clearly surpass its reduced overhead.
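The following sketch illustrates the general shape of an anytime refinement loop of the kind described: it quickly produces a feasible initial configuration and then keeps improving it, always holding a valid best-so-far answer that can be returned whenever the time budget expires. The utility table and neighbour-generation function are placeholders for illustration, not the paper's actual QoS optimisation algorithm.

```python
# Illustrative only: generic anytime iterative-refinement loop.
# The quality function and neighbour generation are placeholders;
# this is not the paper's actual QoS configuration algorithm.
import random
import time

def anytime_refine(initial, quality, neighbours, budget_s: float):
    """Return the best configuration found within the time budget.
    A valid answer is available at any moment after the first evaluation."""
    best, best_q = initial, quality(initial)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        cand = random.choice(neighbours(best))
        q = quality(cand)
        if q > best_q:                      # keep only improvements
            best, best_q = cand, q
    return best, best_q

# Toy example: pick per-service quality levels (0..4) maximising total utility.
utilities = [[0, 2, 3, 4, 5], [0, 1, 4, 6, 7], [0, 3, 5, 6, 8]]
quality = lambda cfg: sum(utilities[s][lvl] for s, lvl in enumerate(cfg))
neighbours = lambda cfg: [
    cfg[:s] + (lvl,) + cfg[s + 1:]
    for s in range(len(cfg)) for lvl in range(5) if lvl != cfg[s]
]
print(anytime_refine((0, 0, 0), quality, neighbours, budget_s=0.05))
```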
Abstract:
Historical buildings are important fingerprints of the history and culture of a region and its communities. Climatic and environmental conditions are often very severe for construction materials, namely in the presence of high humidity or in direct contact with water and salts. Nevertheless, some historical buildings are nowadays in very good condition, probably due to careful construction and/or accurate materials selection and to a specific technology. Knowledge of the composition of old mortars plays a fundamental role in the preservation of cultural heritage, providing information about the materials used and their performance in their specific environment, and leading to adequate and compatible materials for conservation purposes. This article presents two case studies of historical buildings with important defence functions on the Lisbon coast, in which ancient lime mortars were used under severe seaside environmental actions. Mortar samples from these two case studies are characterized, and the relationship between their composition and the good performance and high durability observed is discussed.
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, it is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. The kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) location estimation algorithms were used to determine the accuracy of the proposed technique. This work relies on analytical and measurement tools to determine which path loss propagation models are better suited for location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values for the models' parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, taking into account the thickness of walls, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allow obtaining propagation loss estimates for several environments. Results obtained from two testing scenarios showed the reliability of the adjustments, yielding smaller errors between measured and predicted values.
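To make the model-based fingerprinting idea concrete, the sketch below implements a simple Motley-Keenan-style multi-wall path loss prediction and a WkNN position estimate from RSSI fingerprints. All numeric parameters (reference loss, path loss exponent, wall attenuations, transmit power) are generic textbook-style assumptions, not the values calibrated in the paper.

```python
# Illustrative only: Motley-Keenan-style multi-wall path loss and a WkNN
# fingerprint position estimate. All parameter values are generic
# assumptions, not the calibrated values from the paper.
import math

def motley_keenan(d_m: float, walls: list[float],
                  l0_db: float = 40.0, n: float = 2.0) -> float:
    """Path loss (dB) at distance d_m, adding one attenuation term per wall."""
    return l0_db + 10.0 * n * math.log10(max(d_m, 1.0)) + sum(walls)

def wknn(fingerprints: list[tuple[tuple[float, float], list[float]]],
         observed_rssi: list[float], k: int = 3) -> tuple[float, float]:
    """Weighted k-nearest-neighbour position estimate in RSSI space."""
    dists = [(math.dist(rssi, observed_rssi), pos) for pos, rssi in fingerprints]
    dists.sort(key=lambda t: t[0])
    weights = [1.0 / (d + 1e-6) for d, _ in dists[:k]]
    wsum = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, dists[:k])) / wsum
    y = sum(w * p[1] for w, (_, p) in zip(weights, dists[:k])) / wsum
    return x, y

# Fingerprint map generated from the model instead of a site survey
# (two access points, RSSI = assumed 20 dBm transmit power minus path loss,
# one wall of 5 dB attenuation assumed on every link).
aps = ((0.0, 0.0), (10.0, 0.0))
grid = [((gx, gy), [20.0 - motley_keenan(math.dist((gx, gy), ap), [5.0])
                    for ap in aps])
        for gx in range(0, 11, 2) for gy in range(0, 11, 2)]
print(wknn(grid, observed_rssi=[-45.0, -60.0]))
```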
Abstract:
This paper discusses the technology of smart floors as an enabler of smart cities. The discussion is based on technology embedded into the environment that enables location and navigation, but also wireless power transmission for powering up elements sitting on it, typically mobile devices. One such example is the smart floor, whose implementation follows two paths. In the first, the floor is passive: passive RFIDs are normally embedded into the floor and used to provide intelligence to the surrounding space; this is usually complemented by a battery-powered mobile unit that scans the floor for the sensors and communicates the information to a database, which locates the mobile device in the environment. In the second path for the smart city enabler, the floor is active and delivers energy to the objects standing on top of it. In this paper these two approaches are presented and the technology behind them is discussed. © 2014 IEEE.
Abstract:
OBJECTIVE: To analyze the cost-effectiveness of treatment regimens with cyclosporine or tacrolimus, five years after renal transplantation. METHODS: This cost-effectiveness analysis was based on historical cohort data obtained between 2000 and 2004 and involved 2,022 patients treated with cyclosporine or tacrolimus, matched 1:1 for gender, age, and type and year of transplantation. Graft survival and the direct costs of medical care obtained from the National Health System (SUS) databases were used as outcome measures. RESULTS: Most of the patients were women, with a mean age of 36.6 years. The most frequent diagnosis of chronic renal failure was glomerulonephritis/nephritis (27.7%). In five years, the tacrolimus group had an average life expectancy gain of 3.96 years at an annual cost of R$78,360.57, compared with the cyclosporine group with a gain of 4.05 years and an annual cost of R$61,350.44. CONCLUSIONS: After matching, the study indicated better survival of patients treated with regimens using tacrolimus. However, regimens containing cyclosporine were more cost-effective.
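For readers unfamiliar with the metric, the sketch below computes a simple cost per life-year gained for the two regimens using the figures quoted in the abstract. Treating "annual cost × 5 years" as the total cost is an assumption made here purely for illustration, not the study's actual costing method.

```python
# Illustrative only: simple cost-effectiveness ratios from the figures
# quoted in the abstract. Using annual cost x 5 years as the total cost
# is an assumption for illustration, not the study's method.
regimens = {
    "tacrolimus":   {"life_years_gained": 3.96, "annual_cost_brl": 78_360.57},
    "cyclosporine": {"life_years_gained": 4.05, "annual_cost_brl": 61_350.44},
}

for name, r in regimens.items():
    total_cost = r["annual_cost_brl"] * 5          # assumed 5-year horizon
    ratio = total_cost / r["life_years_gained"]    # cost per life-year gained
    print(f"{name:12s}: R${ratio:,.2f} per life-year gained")
```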
Abstract:
The interest in the development of climbing robots has grown rapidly in recent years. Climbing robots are useful devices that can be adopted in a variety of applications, such as maintenance and inspection in the process and construction industries. These systems are mainly adopted in places where direct access by a human operator is very expensive, because of the need for scaffolding, or very dangerous, due to the presence of a hostile environment. The main motivations are to increase operational efficiency, by eliminating the costly assembly of scaffolding, and to protect human health and safety in hazardous tasks. Several climbing robots have already been developed, and others are under development, for applications ranging from cleaning to the inspection of difficult-to-reach constructions.

A wall-climbing robot should not only be light, but also have a large payload, so that it may reduce excessive adhesion forces and carry instrumentation during navigation. These machines should be capable of travelling over different types of surfaces, with different inclinations, such as floors, walls or ceilings, and of walking between such surfaces (Elliot et al. (2006); Sattar et al. (2002)). Furthermore, they should be capable of adapting and reconfiguring for various environmental conditions and should be self-contained.

Up to now, considerable research has been devoted to these machines and various types of experimental models have been proposed (according to Chen et al. (2006), over 200 prototypes aimed at such applications had been developed worldwide by the year 2006). However, the application of climbing robots is still limited. Apart from a couple of successful industrialized products, most are only prototypes and few of them can be found in common use, due to unsatisfactory performance in on-site tests (regarding aspects such as speed, cost and reliability). Chen et al. (2006) present the main design problems affecting the system performance of climbing robots and also suggest solutions to these problems.

The two major issues in the design of wall-climbing robots are their locomotion and adhesion methods. With respect to locomotion, four types are often considered: crawler, wheeled, legged and propulsion robots. Although the crawler type is able to move relatively fast, it is not adequate for rough environments. On the other hand, the legged type easily copes with obstacles found in the environment, but generally its speed is lower and it requires complex control systems.

Regarding adhesion to the surface, the robots should be able to produce a secure gripping force using a lightweight mechanism. The adhesion method is generally classified into four groups: suction force, magnetic, gripping to the surface and thrust force. Recently, however, new methods for ensuring adhesion, based on biological findings, have been proposed. The vacuum type principle is light and easy to control, though it presents the problem of supplying compressed air; an alternative, with costs in terms of weight, is the adoption of a vacuum pump. The magnetic type principle implies heavy actuators and is used only for ferromagnetic surfaces. The thrust force type robots make use of the forces developed by thrusters to adhere to surfaces, but are used in very restricted and specific applications.
Bearing these facts in mind, this chapter presents a survey of the different applications and technologies adopted for the implementation of climbing robot locomotion and adhesion to surfaces, focusing on the new technologies that have recently been developed to fulfil these objectives. The chapter is organized as follows. Section two presents several applications of climbing robots. Sections three and four present the main locomotion principles and the main "conventional" technologies for adhering to surfaces, respectively. Section five describes recent biologically inspired technologies for robot adhesion to surfaces. Section six introduces several new architectures for climbing robots. Finally, section seven outlines the main conclusions.
Abstract:
In practice, robotic manipulators present some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics.

Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be considered in order to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue for obtaining suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented.

Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and leads to the evaluation of existing applications.

Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logic sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. The recent development of micro electro-mechanical sensors (MEMS) with wireless communication capabilities allows sensor networks with interesting capacity. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, the paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, section 4 draws the main conclusions and points out future work.
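As a sketch of what a frequency-spectrum and statistics based comparison between sensor signals might look like, the snippet below extracts a power-spectrum feature plus a few statistical metrics per signal and compares two sensors by the correlation of their feature vectors, as a rough indicator of redundancy. The specific features and the synthetic signals are assumptions for illustration, not the scheme actually used in the paper.

```python
# Illustrative only: frequency-spectrum and statistical features for
# comparing sensor signals. The chosen features and the synthetic signals
# are assumptions for illustration, not the paper's actual scheme.
import numpy as np

def features(signal: np.ndarray, fs: float) -> np.ndarray:
    """Per-signal feature vector: spectral centroid plus basic statistics."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return np.array([centroid, signal.mean(), signal.std(),
                     np.abs(signal).max()])

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation between two feature vectors: high values suggest that the
    two sensors carry largely redundant information."""
    return float(np.corrcoef(a, b)[0, 1])

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 35 * t) + 0.1 * np.random.randn(t.size)      # synthetic
force = np.sin(2 * np.pi * 35 * t + 0.3) + 0.1 * np.random.randn(t.size)
print(similarity(features(accel, fs), features(force, fs)))
```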