880 results for Time-Delayed Systems


Relevance: 30.00%

Abstract:

The IEEE 802.15.4 Medium Access Control (MAC) protocol is an enabling technology for time-sensitive wireless sensor networks thanks to its Guaranteed Time Slot (GTS) mechanism in the beacon-enabled mode. However, the protocol only supports explicit GTS allocation, i.e. a node allocates a number of time slots in each superframe for exclusive use. The limitation of this explicit GTS allocation is that GTS resources may quickly disappear, since a maximum of seven GTSs can be allocated in each superframe, preventing other nodes from benefiting from guaranteed service. Moreover, the GTSs may be only partially used, resulting in wasted bandwidth. To overcome these limitations, this paper proposes i-GAME, an implicit GTS Allocation Mechanism for beacon-enabled IEEE 802.15.4 networks. The allocation is based on implicit GTS allocation requests, taking into account the traffic specifications and the delay requirements of the flows. The i-GAME approach enables a GTS to be shared by multiple nodes while all their (delay, bandwidth) requirements are still satisfied. For that purpose, we propose an admission control algorithm that decides whether to accept a new GTS allocation request, based not only on the remaining time slots but also on the traffic specifications of the flows, their delay requirements and the available bandwidth resources. We show that our proposal improves bandwidth utilization compared with the explicit allocation used in the IEEE 802.15.4 protocol standard. We also present some practical considerations for the implementation of i-GAME, ensuring backward compatibility with the IEEE 802.15.4 standard with only minor add-ons.
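
As a rough illustration of the admission test the abstract describes, the sketch below checks whether a new flow can share a GTS with already admitted flows under a simplified bandwidth and delay bound. The Flow fields, the guaranteed GTS rate, the one-superframe latency term and the admit() helper are all illustrative assumptions, not the actual i-GAME analysis.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    burst_bits: float      # maximum burst size (bits) -- illustrative
    rate_bps: float        # sustained arrival rate (bits/s)
    delay_req_s: float     # delay requirement (s)

def admit(flows, new_flow, gts_rate_bps, superframe_s):
    """Simplified admission test for a shared GTS: (i) the aggregate
    sustained rate must fit within the rate guaranteed by the GTS, and
    (ii) a coarse delay bound (burst served at the guaranteed rate plus
    one superframe of access latency) must respect every requirement."""
    candidate = flows + [new_flow]

    # (i) bandwidth test
    if sum(f.rate_bps for f in candidate) > gts_rate_bps:
        return False

    # (ii) per-flow delay test
    for f in candidate:
        delay_bound = f.burst_bits / gts_rate_bps + superframe_s
        if delay_bound > f.delay_req_s:
            return False
    return True

# Toy example: two admitted flows and one candidate sharing a GTS
admitted = [Flow(2_000, 5_000, 0.5), Flow(1_000, 3_000, 0.8)]
print(admit(admitted, Flow(1_500, 4_000, 0.6),
            gts_rate_bps=20_000, superframe_s=0.123))
```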

Relevance: 30.00%

Abstract:

The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of globally distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach that is able to trade off deliberation time for the quality of the solution. Extensive simulations demonstrate that the proposed anytime algorithms quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
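
A minimal sketch of the anytime idea described above: an iterative improvement loop that can be cut short at any moment and still returns the best solution found so far. The anytime_optimise() helper, the neighbourhood function and the toy objective are hypothetical and unrelated to the paper's cooperative service configuration algorithms.

```python
import time, random

def anytime_optimise(initial, neighbours, quality, budget_s):
    """Improve a solution until the time budget expires; the loop can be
    interrupted at any point and still returns the best solution so far,
    trading deliberation time for solution quality."""
    best, best_q = initial, quality(initial)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = random.choice(neighbours(best))
        q = quality(candidate)
        if q > best_q:          # keep only improving moves
            best, best_q = candidate, q
    return best, best_q

# Toy usage: maximise -(x - 3)^2 by random local perturbation
sol, q = anytime_optimise(
    0.0,
    neighbours=lambda x: [x + random.uniform(-0.5, 0.5)],
    quality=lambda x: -(x - 3.0) ** 2,
    budget_s=0.05)
print(round(sol, 2), round(q, 3))
```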

Relevance: 30.00%

Abstract:

Common embedded systems are typically designed under tight resource constraints. Static designs are often chosen to address very specific use cases. In contrast, a dynamic design must be used if the system must supply a real-time service whose input may contain factors of indeterminism. Thus, adding new functionality to these systems usually comes at the cost of longer development time, more testing and higher costs, since new functionality pushes the system's complexity and dynamics to a higher level. Usually, these systems have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making them available to external applications, usually through a run-time monitoring mechanism. However, such a mechanism can impose a burden on the system itself if not used wisely. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.
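
A minimal sketch of the kind of low-overhead run-time monitoring hook the abstract motivates, assuming a simple publish/subscribe design; the RuntimeMonitor class and the event names are hypothetical and are not the framework proposed in the paper.

```python
from collections import defaultdict

class RuntimeMonitor:
    """Publish/subscribe monitor: the monitored system emits scheduling
    and resource-usage events, and external tools subscribe only to the
    events they need, keeping the probing overhead low and the monitoring
    code separate from the application logic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, callback):
        self._subscribers[event].append(callback)

    def emit(self, event, **data):
        # With no subscribers the cost of an emit is close to zero.
        for cb in self._subscribers.get(event, ()):
            cb(data)

monitor = RuntimeMonitor()
monitor.subscribe("task_switch",
                  lambda d: print("switch:", d["prev"], "->", d["next"]))

# A (hypothetical) scheduler would call this at each context switch.
monitor.emit("task_switch", prev="taskA", next="taskB")
```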

Relevance: 30.00%

Abstract:

Different heating systems have been used in pultrusion, where the most widely used heaters are planar resistances. The primary objective of this study was to develop an improved heating system and compare its performance with that of a system with planar resistances. In this study, thermography was used to better understand the temperature profile along the die. Finite element analysis was performed to determine the amount of energy consumed by the heating systems. Improvements were made to the die to test the new heating system, and it was found that the new system reduced the setup time and energy consumption by approximately 57%.

Relevance: 30.00%

Abstract:

The main aims of the present study are to simultaneously relate the brazing parameters with: (i) the corresponding interfacial microstructure, (ii) the resultant mechanical properties and (iii) the electrochemical degradation behaviour of AISI 316 stainless steel/alumina brazed joints. Filler metals such as Ag–26.5Cu–3Ti and Ag–34.5Cu–1.5Ti were used to produce the joints. Three different brazing temperatures (850, 900 and 950 °C) were tested, keeping a constant holding time of 20 min. The objective was to understand the influence of the brazing temperature on the final microstructure and properties of the joints. The mechanical properties of the metal/ceramic (M/C) joints were assessed from bond strength tests carried out under a shear loading scheme. The fracture surfaces were studied both morphologically and structurally using scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and X-ray diffraction analysis (XRD). The degradation behaviour of the M/C joints was assessed by means of electrochemical techniques. It was found that using a Ag–26.5Cu–3Ti brazing alloy and a brazing temperature of 850 °C produces the best results in terms of bond strength, 234 ± 18 MPa. The mechanical properties obtained could be explained on the basis of the different compounds identified on the fracture surfaces by XRD. On the other hand, the use of the Ag–34.5Cu–1.5Ti brazing alloy and a brazing temperature of 850 °C produces the best results in terms of corrosion rates (lower corrosion current density), 0.76 ± 0.21 μA cm⁻². Nevertheless, the joints produced at 850 °C using a Ag–26.5Cu–3Ti brazing alloy present the best compromise between mechanical properties and degradation behaviour, 234 ± 18 MPa and 1.26 ± 0.58 μA cm⁻², respectively. The role of Ti diffusion is fundamental for the final value achieved for the M/C bond strength. In contrast, the Ag and Cu distribution along the brazed interface seems to play the most relevant role in the electrochemical performance of the metal/ceramic joints.

Relevance: 30.00%

Abstract:

Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed, real-time fault injection in memory elements.
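
The snippet below sketches, under stated assumptions, the basic bit-flip injection step that an OCD-based campaign performs: read the target word through the debug port, toggle one bit and write it back. The OnChipDebugStub class is a stand-in defined here purely for illustration; it is not the OCD-FI infrastructure or any real debugger API.

```python
import random

class OnChipDebugStub:
    """Stand-in for an on-chip debug (OCD) port. In a real campaign the
    read/write calls would go through the processor's debug interface
    instead of this in-memory dictionary."""

    def __init__(self, memory):
        self.memory = memory

    def read_word(self, addr):
        return self.memory[addr]

    def write_word(self, addr, value):
        self.memory[addr] = value

def inject_bit_flip(ocd, addr, bit):
    """Single bit-flip injection: read the target word through the OCD
    port, toggle one bit and write it back."""
    word = ocd.read_word(addr)
    ocd.write_word(addr, word ^ (1 << bit))

# Toy campaign over a fake 4-word memory image
mem = {0x0: 0xDEADBEEF, 0x4: 0x00000000, 0x8: 0xFFFFFFFF, 0xC: 0x12345678}
ocd = OnChipDebugStub(mem)
for _ in range(3):
    addr = random.choice(list(mem))
    inject_bit_flip(ocd, addr, random.randrange(32))
print({hex(a): hex(v) for a, v in mem.items()})
```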

Relevance: 30.00%

Abstract:

Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.

Relevance: 30.00%

Abstract:

As electronic devices get smaller and more complex, dependability assurance is becoming fundamental for many mission-critical computer-based systems. This paper presents a case study on the possibility of using the on-chip debug infrastructures present in most current microprocessors to execute real-time fault injection campaigns. The proposed methodology is based on a debugger customized for fault injection and designed for maximum flexibility, and consists of injecting bit-flip faults into memory elements without modifying or halting the target application. The debugger design is easily portable and applicable to different architectures, providing a flexible and efficient mechanism for verifying and validating fault-tolerant components.

Relevance: 30.00%

Abstract:

When considering time series data of variables describing agent interactions in social neurobiological systems, measures of regularity can provide a global understanding of such system behaviors. Approximate entropy (ApEn) was introduced as a nonlinear measure to assess the complexity of a system's behavior by quantifying the regularity of the generated time series. However, ApEn is not reliable when assessing and comparing the regularity of data series with short or inconsistent lengths, which often occur in studies of social neurobiological systems, particularly in dyadic human movement systems. Here, the authors present two normalized, non-modified measures of regularity derived from the original ApEn, which are less dependent on time series length. The validity of the suggested measures was tested on well-established series (random and sine) prior to their empirical application describing the dyadic behavior of athletes in team games. The authors use one of the normalized ApEn measures to generate 95th-percentile envelopes that can be used to test whether a particular social neurobiological system is highly complex (i.e., generates highly unpredictable time series). Results demonstrated that the suggested measures may be considered valid instruments for measuring and comparing complexity in systems that produce time series with inconsistent lengths.
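
For reference, a compact implementation of the classical ApEn(m, r) measure that the normalized variants build on; the parameter defaults (m = 2, r = 0.2·SD) are common conventions, and the paper's normalized measures themselves are not reproduced here.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Classical ApEn(m, r) of a 1-D series: the difference between the
    average log-frequency of matching templates of length m and m+1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()          # common default tolerance

    def phi(m):
        # All length-m templates and their pairwise Chebyshev distances
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                      axis=2)
        c = (dist <= r).mean(axis=1)   # fraction of matching templates
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
print(approximate_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))  # low (regular)
print(approximate_entropy(rng.standard_normal(500)))                 # higher (irregular)
```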

Relevance: 30.00%

Abstract:

This paper provides a review of antennas applied to indoor positioning and localization systems. The desired requirements of those antennas when integrated into anchor nodes (reference nodes) are discussed, according to the different localization techniques and their performance. The described antennas are subdivided into the following sections according to the nature of the measurements: received signal strength (RSS), time of flight (ToF), and direction of arrival (DoA). This paper intends to provide a useful guide for antenna designers interested in developing suitable antennas for indoor localization systems.
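
As background for the RSS category, the sketch below shows the standard log-distance path-loss model often used to turn an RSS sample into a range estimate; the reference power, reference distance and path-loss exponent are illustrative values, and the model is general background rather than content of the reviewed paper.

```python
def rss_to_distance(rss_dbm, rss_d0_dbm=-40.0, d0_m=1.0, path_loss_exp=2.5):
    """Log-distance path-loss model: estimate the range from a received
    signal strength sample, given the reference power at distance d0 and
    the path-loss exponent (both must be calibrated per environment)."""
    return d0_m * 10 ** ((rss_d0_dbm - rss_dbm) / (10 * path_loss_exp))

for rss in (-40, -55, -70):
    print(rss, "dBm ->", round(rss_to_distance(rss), 2), "m")
```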

Relevance: 30.00%

Abstract:

The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, the environment or expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic Components Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, up to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared concerning performance gain and silicon overhead.

Relevance: 30.00%

Abstract:

This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. In order to acquire and study the signals, an experimental setup is implemented. The signals are processed using signal processing tools such as the fast Fourier transform (FFT) and the short-time Fourier transform (STFT). The results show that the Fourier spectrum of several signals exhibits a non-integer behavior. The experimental study provides valuable results that can assist in the design of a control system to deal with the unwanted effects of vibrations.
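
A small sketch of the two transforms mentioned above, applied to a synthetic vibration-plus-impact signal: the FFT summarises the global spectrum, while the STFT localises the impact in time. The sampling rate, signal and window length are illustrative, not the paper's experimental data.

```python
import numpy as np
from scipy import signal

fs = 2000                       # sampling frequency (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "vibration + impact": a 50 Hz component plus a short burst
x = np.sin(2 * np.pi * 50 * t)
x[400:420] += 5 * np.random.default_rng(0).standard_normal(20)

# Fast Fourier transform: global spectral content
freqs = np.fft.rfftfreq(len(x), 1 / fs)
spectrum = np.abs(np.fft.rfft(x))
print("dominant frequency:", freqs[np.argmax(spectrum)], "Hz")

# Short-time Fourier transform: localises the impact in time
f, tt, Zxx = signal.stft(x, fs=fs, nperseg=128)
print("STFT grid:", Zxx.shape)   # (frequencies, time frames)
```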

Relevance: 30.00%

Abstract:

The study of transient dynamical phenomena near bifurcation thresholds has attracted the interest of many researchers due to the relevance of bifurcations in different physical or biological systems. In the context of saddle-node bifurcations, where two or more fixed points collide and annihilate each other, it is known that the dynamics can undergo the so-called delayed transition. This phenomenon emerges when the system spends a long time before reaching the remaining stable equilibrium, found after the bifurcation, because of the presence of a saddle remnant in phase space. Some works have analytically tackled this phenomenon, especially in time-continuous dynamical systems, showing that the time delay τ scales according to an inverse square-root power law, τ ~ (μ − μ_c)^(−1/2), as the bifurcation parameter μ is driven further away from its critical value μ_c. In this work, we first characterize this scaling law analytically using complex variable techniques for a family of one-dimensional maps, called the normal form of the saddle-node bifurcation. We then apply our general analytic results to a single-species ecological model with harvesting given by a unimodal map, characterizing the delayed transition and the scaling law arising due to the harvesting constant. For both analyzed systems, we show that the numerical results are in perfect agreement with the analytical solutions we provide. The procedure presented in this work can be used to characterize the scaling laws of one-dimensional discrete dynamical systems with saddle-node bifurcations.
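
A minimal numerical sketch of the scaling law, assuming the standard discrete normal form x → x + x² + μ with μ_c = 0: the passage time through the bottleneck left by the ghost of the annihilated fixed points is measured for several μ and the exponent is estimated from a log-log fit. The start and exit thresholds and the parameter values are illustrative, not the paper's analytic treatment.

```python
import numpy as np

def passage_time(mu, x0=-0.5, x_exit=0.5, max_iter=10**7):
    """Iterations spent crossing the bottleneck left by the saddle-node
    ghost of the normal-form map x -> x + x**2 + mu (mu_c = 0)."""
    x, n = x0, 0
    while x < x_exit and n < max_iter:
        x = x + x * x + mu
        n += 1
    return n

mus = np.array([1e-3, 1e-4, 1e-5, 1e-6])
taus = np.array([passage_time(mu) for mu in mus])

# Slope of log(tau) vs log(mu) should approach -1/2
slope = np.polyfit(np.log(mus), np.log(taus), 1)[0]
print(taus, "estimated exponent:", round(slope, 3))
```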

Relevance: 30.00%

Abstract:

Thesis for the Degree of Master of Science in Biotechnology, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance: 30.00%

Abstract:

Fault injection is frequently used for the verification and validation of the fault-tolerant features of microprocessors. This paper proposes the modification of a common on-chip debugging (OCD) infrastructure to add fault injection capabilities and improve performance. The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, being applicable to different target system architectures.