57 results for Reliability in automation
Abstract:
Significant research efforts are being devoted to Body Area Networks (BAN) due to their potential for revolutionizing healthcare practices. Energy efficiency and communication reliability are critically important for these networks. In an experimental study with three different mote platforms, we show that changes in human body shadowing, as well as changes in the relative distance and orientation of nodes caused by common human body movements, can result in significant fluctuations in the received signal strength within a BAN. Furthermore, regular movements, such as walking, typically manifest as approximately periodic variations in signal strength. We present an algorithm that predicts the signal strength peaks and evaluate it on real-world data. We also present the design of an opportunistic MAC protocol, named BANMAC, that takes advantage of the periodic fluctuations of the signal strength to achieve high reliability even with low transmission power.
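The peak-prediction idea can be pictured with a short sketch. This is a minimal illustration, not the algorithm from the paper: it assumes RSSI samples from a roughly periodic movement such as walking, estimates the dominant period by autocorrelation, and extrapolates the next peak; all names and parameters are hypothetical.

```python
import numpy as np

def predict_next_peak(rssi, sample_rate_hz, min_period_s=0.5, max_period_s=2.0):
    """Estimate the time (s) of the next RSSI peak from a periodic trace."""
    x = np.asarray(rssi, dtype=float) - np.mean(rssi)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(min_period_s * sample_rate_hz), int(max_period_s * sample_rate_hz)
    period = (lo + np.argmax(ac[lo:hi])) / sample_rate_hz   # dominant gait period
    win = int(period * sample_rate_hz)                      # one period of samples
    t_last = (len(x) - win + np.argmax(x[-win:])) / sample_rate_hz  # last peak time
    return t_last + period                                  # next peak, one period on

# Example: 10 s of a 1 Hz walking oscillation sampled at 50 Hz, plus noise
t = np.arange(0, 10, 1 / 50)
rssi = -70 + 6 * np.sin(2 * np.pi * t) + np.random.normal(0, 1, t.size)
print(predict_next_peak(rssi, sample_rate_hz=50))
```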
Abstract:
Smartphones and other internet-enabled devices are now common in our everyday life, so, unsurprisingly, a current trend is to adapt desktop PC applications to execute on them. However, since most of these applications have quality-of-service (QoS) requirements, their execution on resource-constrained mobile devices presents several challenges. One solution to support more stringent applications is to offload some of the applications' services to surrogate devices nearby. Therefore, in this paper, we propose an adaptable offloading mechanism which takes into account the QoS requirements of the application being executed (particularly its real-time requirements), whilst allowing services to be offloaded to several surrogate nodes. We also present how the proposed computing model can be implemented in an Android environment.
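A QoS-aware offloading decision of this kind can be sketched in a few lines. The sketch below is hypothetical (the surrogate model, speedup estimates and deadline margin are invented, not taken from the paper): a service stays local unless some surrogate can beat the local response time while still meeting the real-time deadline.

```python
from dataclasses import dataclass

@dataclass
class Surrogate:
    name: str
    rtt_ms: float    # measured round-trip network latency to the surrogate
    speedup: float   # estimated execution speedup vs. the mobile device

def choose_target(local_exec_ms, deadline_ms, surrogates, margin=0.8):
    """Pick the placement with the best response time that meets the deadline."""
    best, best_ms = "local", local_exec_ms
    for s in surrogates:
        remote_ms = s.rtt_ms + local_exec_ms / s.speedup
        if remote_ms < best_ms:
            best, best_ms = s.name, remote_ms
    # Reject any placement that cannot meet the (safety-margin-scaled) deadline.
    return (best, best_ms) if best_ms <= margin * deadline_ms else (None, best_ms)

print(choose_target(120.0, 100.0,
                    [Surrogate("tablet", 15.0, 4.0), Surrogate("laptop", 25.0, 8.0)]))
# -> ('laptop', 40.0): offloading wins despite the network round trip
```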
Abstract:
The usage of COTS-based multicores is becoming widespread in the field of embedded systems. Providing real-time guarantees at design time is a prerequisite for deploying real-time systems on these multicores. This necessitates considering the impact of the contention for shared low-level hardware resources on the Worst-Case Execution Time (WCET) of the tasks. As a step towards this aim, this paper first identifies the different factors that make WCET analysis a challenging problem in a typical COTS-based multicore system. Then, we propose, and prove correct, a method to determine tight upper bounds on the WCET of the tasks when they are co-scheduled on different cores.
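To make the contention issue concrete, here is a deliberately coarse, textbook-style bound, not the tighter method the paper proves correct: under fair bus arbitration, every shared-bus access of a task can be stalled by at most one pending access from each of the other cores. All figures are hypothetical. Tight analyses improve on this by bounding how many interfering accesses can actually overlap the task's execution window.

```python
def wcet_bound(wcet_isolation, bus_accesses, n_cores, bus_slot):
    """Coarse WCET upper bound under round-robin-like bus arbitration.

    wcet_isolation -- WCET analysed with the task running alone
    bus_accesses   -- upper bound on the task's shared-bus accesses
    n_cores        -- number of cores contending for the bus
    bus_slot       -- worst-case duration of one bus transaction
    """
    contention = bus_accesses * (n_cores - 1) * bus_slot
    return wcet_isolation + contention

# Example: 5 ms in isolation, 10_000 accesses, 4 cores, 20 ns per transaction
print(wcet_bound(5e-3, 10_000, 4, 20e-9))  # -> 0.0056 s
```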
Abstract:
Wireless sensor networks (WSNs) emerge as underlying infrastructures for new classes of large-scale networked embedded systems. However, WSN system designers must fulfill the quality-of-service (QoS) requirements imposed by the applications (and users). Very harsh and dynamic physical environments and extremely limited energy/computing/memory/communication node resources are major obstacles to satisfying QoS metrics such as reliability, timeliness, and system lifetime. The limited communication range of WSN nodes, link asymmetry, and the characteristics of the physical environment lead to a major source of QoS degradation in WSNs: the "hidden-node problem." In wireless contention-based medium access control (MAC) protocols, when two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision, called a hidden-node or blind collision. This problem greatly impacts network throughput, energy efficiency and message transfer delays, and it dramatically worsens with the number of nodes. This paper proposes H-NAMe, a very simple yet extremely efficient hidden-node avoidance mechanism for WSNs. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes, and it scales to multiple clusters via a cluster grouping strategy that guarantees no interference between overlapping clusters. Importantly, H-NAMe is instantiated in IEEE 802.15.4/ZigBee, currently the most widespread communication technologies for WSNs, with only minor add-ons and ensuring backward compatibility with the protocol standards. H-NAMe was implemented and exhaustively tested using an experimental test-bed based on off-the-shelf technology, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. H-NAMe's effectiveness was also demonstrated in a target tracking application with mobile robots over a WSN deployment.
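The core of the grouping idea can be illustrated as follows. This is a sketch only, assuming the node-visibility relation is given as input; H-NAMe itself builds the groups through a message-based group-join procedure. The greedy rule places each node into the first group whose members it can all hear, so no two nodes in a group are hidden from each other.

```python
def split_into_groups(nodes, hears):
    """nodes: iterable of node ids; hears(a, b): True if a and b hear each other."""
    groups = []
    for n in nodes:
        for g in groups:
            if all(hears(n, m) for m in g):   # n is visible to the whole group
                g.add(n)
                break
        else:
            groups.append({n})                # no compatible group: start a new one
    return groups

# Example: nodes 1 and 3 cannot hear each other, so they end up in
# different groups and never transmit in the same group's access window.
visible = {(1, 2), (2, 3), (2, 4), (1, 4), (3, 4)}
hears = lambda a, b: (a, b) in visible or (b, a) in visible
print(split_into_groups([1, 2, 3, 4], hears))  # -> [{1, 2, 4}, {3}]
```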
Abstract:
Variations in manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built and the point nearest to the origin is taken as the best solution of the bi-objective optimisation problem.
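The selection step at the end of the abstract admits a compact sketch. It is a minimal illustration, assuming two minimisation objectives already evaluated for a set of non-dominated solutions; the normalisation and the invented numbers are not from the paper.

```python
import math

def knee_point(front):
    """front: list of (f1, f2) minimisation objectives on the Pareto front."""
    f1s, f2s = zip(*front)
    span1 = (max(f1s) - min(f1s)) or 1.0
    span2 = (max(f2s) - min(f2s)) or 1.0
    def norm_dist(p):
        # Normalise each objective to [0, 1], then take the distance to the origin.
        return math.hypot((p[0] - min(f1s)) / span1, (p[1] - min(f2s)) / span2)
    return min(front, key=norm_dist)

# Example trade-off between performance loss and response variability
print(knee_point([(0.10, 0.90), (0.30, 0.40), (0.50, 0.35), (0.90, 0.10)]))
# -> (0.3, 0.4), the point of the normed front nearest the origin
```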
Abstract:
Wireless sensor networks (WSNs) are one of today's most prominent instantiations of the ubiquitous computing paradigm. In order to achieve high levels of integration, WSNs need to be conceived considering requirements beyond mere system functionality. While Quality-of-Service (QoS) is traditionally associated with bit/data rate, network throughput, message delay and bit/packet error rate, we believe that this concept is too restrictive, in the sense that these properties alone do not reflect the overall quality of service provided to the user/application. Other non-functional properties such as scalability, security or energy sustainability must also be considered in the system design. This paper identifies the most important non-functional properties that affect the overall quality of the service provided to the users, outlining their relevance, state of the art and future research directions.
Abstract:
A hexagonal wireless sensor network refers to a network topology where a subset of nodes have six peer neighbors. These nodes form a backbone for multi-hop communications. In previous work, we proposed the use of the hexagonal topology in wireless sensor networks and discussed its properties in relation to real-time (bounded latency) multi-hop communications in large-scale deployments. In that work, we did not consider the problem of forming the hexagonal topology in practice, which is the subject of this research. In this paper, we present a decentralized algorithm that forms the hexagonal topology backbone in an arbitrary but sufficiently dense network deployment. We implemented a prototype of our algorithm in nesC for TinyOS-based platforms. We present data from field tests of our implementation, collected using a deployment of fifty wireless sensor nodes.
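For intuition only, here is one way a node could choose hexagonal backbone neighbors if it knew its neighbors' relative positions. The paper's algorithm is decentralized and message-based, so this centralized, geometry-based rule is purely a hypothetical illustration.

```python
import math

def pick_hex_neighbors(neighbors):
    """neighbors: dict of id -> (dx, dy) relative position; returns up to six peers."""
    chosen = {}
    for node, (dx, dy) in neighbors.items():
        bearing = math.degrees(math.atan2(dy, dx)) % 360
        sector = round(bearing / 60) % 6                        # nearest ideal direction
        error = abs((bearing - sector * 60 + 180) % 360 - 180)  # angular deviation
        if sector not in chosen or error < chosen[sector][1]:
            chosen[sector] = (node, error)                      # best fit per direction
    return {sector: node for sector, (node, _) in chosen.items()}

print(pick_hex_neighbors({"a": (1.0, 0.1), "b": (0.4, 0.9), "c": (-1.0, 0.0)}))
# -> {0: 'a', 1: 'b', 3: 'c'}: one peer per occupied 60-degree sector
```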
Abstract:
Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool to the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs using simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in the Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions impacts the reliability, the energy consumption and the end-to-end delay in such a way that improving one may degrade the others.
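The trade-off the study measures can be approximated with back-of-the-envelope formulas. This assumes independent frame losses and invented numbers; it is not the Opnet model, but it shows why raising the retransmission limit improves reliability while inflating expected energy and worst-case delay.

```python
def retransmission_tradeoff(p_loss, max_retx, e_tx=1.0, t_slot=1.0):
    """Per-hop metrics for one frame with up to max_retx retransmissions."""
    reliability = 1 - p_loss ** (max_retx + 1)       # at least one attempt succeeds
    # Expected number of attempts of a truncated geometric distribution
    expected_attempts = (1 - p_loss ** (max_retx + 1)) / (1 - p_loss)
    return {"reliability": round(reliability, 4),
            "expected_energy": round(expected_attempts * e_tx, 4),
            "worst_case_delay": (max_retx + 1) * t_slot}

for n in (0, 1, 3):
    print(n, retransmission_tradeoff(p_loss=0.2, max_retx=n))
```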
Abstract:
The International Electrotechnical Commission (IEC) 61499 architecture incorporates several function blocks with which distributed control applications may be developed, and defines how these are interpreted and executed. However, due to the distributed nature of the control applications, many issues also need to be taken into account. Most of these stem from the new error model and failure modes of the distributed hardware on which the distributed application is executed, and also from the incomplete definition of the execution models in the standard. IEC 61499 frameworks do not clarify how to handle the replication of software and hardware components. In this paper we propose a replication model for IEC 61499 applications and identify which mechanisms and protocols may be used to support it.
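One of the simplest mechanisms that could support such a model is output voting across replicas. The sketch below is a generic majority voter, offered as an assumption about what replica support might look like rather than the paper's proposal: it masks a minority of faulty replicated function block instances.

```python
from collections import Counter

def vote(replica_outputs):
    """Return the strict-majority output, or None when no majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None

# Example: one of three replicas of a control function block misbehaves.
print(vote([42, 42, 17]))   # -> 42 (fault masked)
print(vote([42, 17, 99]))   # -> None (no majority; escalate or retry)
```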
Abstract:
This paper proposes a novel agent-based approach to meta-heuristics self-configuration. Meta-heuristics are algorithms with parameters that need to be set as efficiently as possible in order to ensure their performance. A learning module for the self-parameterization of meta-heuristics (MH) in a Multi-Agent System (MAS) for the resolution of scheduling problems is proposed in this work. The learning module is based on Case-based Reasoning (CBR), and two different integration approaches are proposed. A computational study is made to compare the two CBR integration perspectives. Finally, some conclusions are reached and future work is outlined.
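The CBR retrieval step can be sketched briefly. Everything here is hypothetical (the case features, the stored parameters and the distance metric are invented): a new scheduling problem is matched to the most similar past case, whose meta-heuristic parameters are then reused.

```python
import math

# Each case: (problem features, parameters that worked well for that problem)
case_base = [
    ({"jobs": 20,  "machines": 5},  {"pop_size": 50,  "mutation": 0.10}),
    ({"jobs": 100, "machines": 10}, {"pop_size": 200, "mutation": 0.05}),
]

def retrieve(problem):
    """Return the parameters of the nearest case (Euclidean over shared features)."""
    def dist(features):
        return math.hypot(*(problem[k] - features[k] for k in problem))
    _, params = min(case_base, key=lambda case: dist(case[0]))
    return params

print(retrieve({"jobs": 90, "machines": 12}))
# -> {'pop_size': 200, 'mutation': 0.05}: the large-instance case is reused
```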
Abstract:
The characteristics of carbon fibre reinforced laminates have widened their use from aerospace to domestic appliances, and new possibilities for their usage emerge almost daily. In many of the possible applications, the laminates need to be drilled for assembly purposes. It is known that a drilling process that reduces the drill thrust force can decrease the risk of delamination. In this work, damage assessment methods based on data extracted from radiographic images are compared and correlated with mechanical test results—bearing test and delamination onset test—and analytical models. The results demonstrate the importance of an adequate selection of drilling tools and machining parameters to extend the life cycle of these laminates as a consequence of enhanced reliability.
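As a pointer to the kind of measure such image-based assessment can produce, the delamination factor is a common metric in the drilling literature (whether it is the one used in this work is an assumption): the ratio of the maximum delaminated diameter around the hole to the nominal hole diameter.

```python
def delamination_factor(d_max_mm, d_nominal_mm):
    """Fd = Dmax / Dnom; Fd >= 1.0, larger values mean more delamination damage."""
    return d_max_mm / d_nominal_mm

# Example: a 6 mm hole with a 7.2 mm delaminated zone measured on a radiograph
print(delamination_factor(7.2, 6.0))  # -> 1.2
```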
Abstract:
The intensive use of distributed generation based on renewable resources increases the complexity of power systems management, particularly of short-term scheduling. Demand response, storage units, and electric and plug-in hybrid vehicles also pose new challenges to short-term scheduling. However, these distributed energy resources can contribute significantly to making short-term scheduling more efficient and effective, improving power system reliability. This paper proposes a short-term scheduling methodology based on two distinct time horizons: hour-ahead scheduling and real-time scheduling, considered from the point of view of one aggregator agent. In each scheduling process, it is necessary to update the generation and consumption operation and the status of the storage units and electric vehicles. Besides the new operating conditions, more accurate forecast values of wind generation and consumption become available, resulting from short-term and very short-term forecasting methods. In this paper, the aggregator has the main goal of maximizing his profits while fulfilling the established contracts with the aggregated and external players.
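The two-horizon idea can be caricatured in a few lines. The numbers and the storage-first correction rule are invented, not the paper's methodology: an hour-ahead plan is revised in real time when a more accurate wind value becomes available.

```python
def realtime_correction(contracted_mw, wind_forecast_mw, wind_actual_mw,
                        storage_available_mw):
    planned_storage = max(0.0, contracted_mw - wind_forecast_mw)  # hour-ahead plan
    deficit = max(0.0, contracted_mw - wind_actual_mw)            # real-time gap
    from_storage = min(deficit, storage_available_mw)             # storage covers first
    return {"hour_ahead_storage": planned_storage,
            "realtime_storage": from_storage,
            "contract_shortfall": deficit - from_storage}

print(realtime_correction(10.0, 9.0, 7.5, 2.0))
# -> storage covers 2.0 MW of the 2.5 MW real-time deficit; 0.5 MW shortfall
```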
Abstract:
This paper proposes a PSO-based approach to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The statistical failure and repair data of the distribution components is the main basis of the proposed methodology, which uses fuzzy-probabilistic modelling of the components' outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A Modified Discrete PSO optimization model is developed in order to identify the adequate investments in distribution energy system components which allow increasing the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
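For readers unfamiliar with discrete PSO, here is a skeletal binary variant (a generic textbook formulation, not the paper's Modified Discrete PSO): each particle encodes which candidate investments are made, and the placeholder fitness stands in for the real cost-versus-reliability evaluation.

```python
import math, random

def fitness(bits):  # placeholder: cheap proxy for investment cost vs. benefit
    return sum(b * (i + 1) for i, b in enumerate(bits)) - 4.0 * sum(bits)

def binary_pso(n_bits=6, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # Sigmoid of the velocity gives the probability that the bit is 1.
                pos[i][d] = int(random.random() < 1 / (1 + math.exp(-vel[i][d])))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest, fitness(gbest)

print(binary_pso())  # -> a low-cost selection such as ([1, 1, 1, 0, 0, 0], -6.0)
```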
Abstract:
This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed-integer non-linear programming, is developed in order to identify the adequate investments in distribution network components which allow increasing the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the cost of non-supplied energy. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 33-bus distribution network.
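The fuzzy modelling of outage parameters that this methodology relies on can be shown with a triangular membership function; the triangular shape and the numbers are assumptions for illustration, since the papers derive their membership functions from statistical records.

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy number (a, b, c): membership 0 at a and c, 1 at the mode b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Example: a failure rate believed to lie in [0.1, 0.5] failures/year, mode 0.2
print(triangular_membership(0.30, 0.1, 0.2, 0.5))  # -> ~0.667
```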
Abstract:
The provision of reserves in power systems is of great importance for keeping an adequate and acceptable level of security and reliability. This need for reserves, and the way they are defined and dispatched, gains increasing importance in the present and future context of smart grids and electricity markets due to their inherently competitive environment. This paper concerns a methodology proposed by the authors which aims to jointly and optimally dispatch both generation and demand response resources to provide the amounts of reserve required for system operation. Virtual Power Players are especially important for the aggregation of small-size demand response and generation resources. The proposed methodology has been implemented in MASCEM, a multi-agent system for the simulation of electricity markets, also developed at the authors' research center.
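A merit-order toy model captures the joint-dispatch idea. The offers, prices and quantities below are invented; the actual methodology runs inside MASCEM's market simulation, so this is only a stylized sketch of filling a reserve requirement from the cheapest mix of generation and demand response offers.

```python
def dispatch_reserve(offers, required_mw):
    """offers: list of (resource, kind, mw, price); returns the accepted slices."""
    accepted, remaining = [], required_mw
    for name, kind, mw, price in sorted(offers, key=lambda o: o[3]):  # cheapest first
        if remaining <= 0:
            break
        take = min(mw, remaining)
        accepted.append((name, kind, take, price))
        remaining -= take
    if remaining > 0:
        raise ValueError(f"reserve shortfall of {remaining} MW")
    return accepted

offers = [("G1", "generation", 8.0, 30.0), ("DR1", "demand_response", 5.0, 22.0),
          ("G2", "generation", 6.0, 45.0), ("DR2", "demand_response", 4.0, 35.0)]
print(dispatch_reserve(offers, required_mw=12.0))
# -> DR1 (5 MW) then G1 (7 MW): demand response clears first when it is cheaper
```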