951 results for COMPLEX NETWORKS
Abstract:
It has been shown that, in reality, at least two general scenarios of data structuring are possible: (a) a self-similar (SS) scenario, in which the measured data form an SS structure, and (b) a quasi-periodic (QP) scenario, in which the repeated (strongly correlated) data form random sequences that are almost periodic with respect to each other. In the second case it becomes possible to describe their behavior and to express part of their randomness quantitatively in terms of the deterministic amplitude-frequency response belonging to the generalized Prony spectrum. This possibility allows us to re-examine the conventional concept of measurements and opens a new way to describe a wide set of different data. In particular, it concerns complex systems for which no 'best-fit' model of the measured data is available, yet a description of these data in terms of a reduced number of quantitative parameters is still needed. The capabilities of the proposed approach and of the detection algorithm for QP processes were demonstrated on actual data: spectroscopic data recorded for pure water and acoustic data for a test hole. The suggested methodology makes it possible to revise the accepted classification of different incommensurable and self-affine spatial structures and to give an accurate interpretation of generalized Prony spectroscopy, which includes Fourier spectroscopy as a particular case.
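As a loose illustration of the underlying idea, the Python sketch below implements a classical Prony decomposition (fitting a uniformly sampled signal by a sum of damped complex exponentials via linear prediction). The generalized Prony spectrum described above involves further steps not shown here, and the model order p is an assumption of this example.

    import numpy as np

    def prony_decompose(x, p):
        """Classical Prony fit: approximate x[n] by a sum of p damped
        complex exponentials; returns the poles and complex amplitudes."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        # 1. Linear prediction: x[n] ~ a1*x[n-1] + ... + ap*x[n-p],
        #    solved in the least-squares sense.
        A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        b = x[p:N]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        # 2. Poles are the roots of the prediction polynomial
        #    z^p - a1*z^(p-1) - ... - ap.
        poles = np.roots(np.concatenate(([1.0], -a)))
        # 3. Amplitudes from the Vandermonde system x[n] = sum_i h_i * poles_i^n.
        V = np.vander(poles, N, increasing=True).T
        amps, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
        return poles, amps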
Abstract:
This paper presents a novel method for the analysis of nonlinear financial and economic systems. The modeling approach integrates the classical concepts of state space representation and time series regression. The analytical and numerical scheme leads to a parameter space representation that constitutes a valid alternative for representing the dynamical behavior. The results show that business cycles can be clearly identified, while the noise effects common in financial indices can be elegantly filtered out.
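The abstract does not detail the analytical scheme; as a rough illustration of combining time series regression with a parameter space view, the Python sketch below fits an autoregression over sliding windows so the fitted coefficients trace a trajectory in parameter space. The AR order and window length are arbitrary choices of this example, not values from the paper.

    import numpy as np

    def parameter_space_trajectory(series, order=2, window=250):
        """Fit an AR(order) regression over sliding windows and return the
        sequence of coefficient vectors (points in parameter space)."""
        series = np.asarray(series, dtype=float)
        points = []
        for start in range(0, len(series) - window):
            w = series[start:start + window]
            # Regression matrix: predict w[n] from the previous `order` samples.
            X = np.column_stack([w[order - k - 1:-k - 1] for k in range(order)])
            y = w[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            points.append(coeffs)
        return np.array(points)  # shape: (num_windows, order)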
Abstract:
This paper studies a discrete dynamical system of particles that evolve by interacting with one another. The computational model is an abstraction of the natural world, and real systems can range from the huge cosmological scale down to the scale of the biological cell, or even molecules. Different conditions for the system evolution are tested. The emerging patterns are analysed by means of fractal dimension and entropy measures. It is observed that the population of particles evolves towards geometrical objects with a fractal nature. Moreover, the time signature of the entropy can be interpreted in the light of complex dynamical systems.
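Since the emerging patterns are analysed with fractal dimension and entropy measures, the following Python sketch shows a standard box-counting estimate of the fractal dimension and a grid-occupancy entropy for a 2-D point cloud; the grid sizes are illustrative and the paper's exact estimators may differ.

    import numpy as np

    def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the box-counting dimension of a 2-D point set
        normalised to the unit square."""
        pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)
        counts = []
        for n in sizes:
            # Number of n x n grid cells containing at least one point.
            cells = set(map(tuple, np.floor(pts * (n - 1e-9)).astype(int)))
            counts.append(len(cells))
        # N(n) ~ n^D, so the slope of log(count) vs log(n) estimates D.
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return slope

    def occupancy_entropy(points, n=32):
        """Shannon entropy (in nats) of the grid-occupancy histogram."""
        pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)
        idx = np.floor(pts * (n - 1e-9)).astype(int)
        hist = np.zeros((n, n))
        np.add.at(hist, (idx[:, 0], idx[:, 1]), 1)
        p = hist[hist > 0] / hist.sum()
        return float(-(p * np.log(p)).sum())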
Abstract:
This poster abstract presents smart-HOP, a reliable handoff mechanism for mobility support in Wireless Sensor Networks (WSNs). This technique relies on a fuzzy logic approach applied at two levels: the link quality estimation level and the access point selection level. We present the conceptual design of smart-HOP and then we discuss implementation requirements and challenges.
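The fuzzy rules and membership functions of smart-HOP are not given in this abstract; the Python sketch below only illustrates the two-level idea with made-up triangular membership functions for RSSI and packet reception ratio, and a hysteresis-based access point selection rule. All breakpoints and the margin are hypothetical.

    def triangular(x, a, b, c):
        """Triangular fuzzy membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def link_quality(rssi_dbm, prr):
        """Fuzzy link-quality score in [0, 1] combining RSSI and packet
        reception ratio; the membership breakpoints are illustrative only."""
        good_rssi = triangular(rssi_dbm, -90.0, -60.0, -30.0)
        good_prr = triangular(prr, 0.5, 1.0, 1.5)   # saturates near PRR = 1
        return min(good_rssi, good_prr)             # fuzzy AND (minimum)

    def should_handoff(current_score, candidate_score, margin=0.1):
        """Hand off only if the best candidate access point beats the
        current one by a hysteresis margin."""
        return candidate_score > current_score + margin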
Abstract:
This paper studies the optimization of complex-order algorithms for the discrete-time control of linear and nonlinear systems. The fundamentals of fractional systems and genetic algorithms are introduced. Based on these concepts, complex-order control schemes and their implementation are evaluated from the perspective of evolutionary optimization. The results demonstrate not only that complex-order derivatives constitute a valuable alternative for deriving control algorithms, but also the feasibility of the adopted optimization strategy.
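One standard way to discretize a derivative of complex order is a Grunwald-Letnikov-type approximation, sketched below in Python; the genetic-algorithm tuning and the exact control scheme used in the paper are not reproduced, and the step size h and input signal are illustrative assumptions.

    import numpy as np

    def gl_complex_derivative(x, alpha, h=1.0):
        """Grunwald-Letnikov approximation of a derivative of complex order
        alpha applied to a sampled signal x with step h (a sketch only)."""
        x = np.asarray(x)
        N = len(x)
        # Recursive binomial weights (-1)^k * C(alpha, k); the recursion is
        # valid for complex alpha because only complex arithmetic is used.
        c = np.empty(N, dtype=complex)
        c[0] = 1.0
        for k in range(1, N):
            c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
        y = np.empty(N, dtype=complex)
        for n in range(N):
            # D^alpha x[n] ~ h^(-alpha) * sum_k c[k] * x[n - k]
            y[n] = np.dot(c[:n + 1], x[n::-1]) / h ** alpha
        return y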
Abstract:
We present an algorithm for bandwidth allocation for delay-sensitive traffic in multi-hop wireless sensor networks. Our solution handles both periodic and aperiodic real-time traffic in a unified manner. We also present a distributed MAC protocol that conforms to the bandwidth allocation and thus satisfies the latency requirements of real-time traffic. Additionally, the protocol provides best-effort service to non-real-time traffic. We derive the utilization bounds of our MAC protocol.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Chromian spinels are common in the Late Cretaceous alkali basalts of the Lisbon Volcanic Complex in Portugal. They occur as unzoned inclusions in magnesian olivines of all basalt types and as large, spectacularly zoned grains in the groundmass of porphyritic basalts. Microprobe analyses indicate complex cationic exchange in the groundmass zoned spinels due to simple peritectic reactions and in response to the changing composition of the basalt liquid. The variation of cationic distribution in zoned chromian spinels reflects very accurately the changing chemistry of the cooling silicate melt and the paragenetic relations of oxide and silicate minerals. Crystallization of the initial chromian spinels occurred at T ~ 1200°C and fO2 ~ 10^-8.5 atm, earlier than or contemporaneously with magnesian olivine. The titanomagnetite mantles of zoned chromian spinels crystallized at T ~ 1200°C and much lower fO2.
Abstract:
Most current-generation Wireless Sensor Network (WSN) nodes are equipped with multiple sensors of various types, and therefore support for multi-tasking and multiple concurrent applications is becoming increasingly common. This trend has been fostering the design of WSNs allowing several concurrent users to deploy applications with dissimilar requirements. In this paper, we extend the advantages of a holistic programming scheme by designing a novel compiler-assisted scheduling approach (called REIS) able to identify and eliminate redundancies across applications. To achieve this useful high-level optimization, we model each user application as a linear sequence of executable instructions. We show how well-known string-matching algorithms such as the Longest Common Subsequence (LCS) and the Shortest Common Super-sequence (SCS) can be used to produce an optimal merged monolithic sequence of the deployed applications that takes into account embedded scheduling information. We show that our approach can help in achieving about 60% average energy savings in processor usage compared to the normal execution of concurrent applications.
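The merging step can be illustrated with the textbook shortest common supersequence dynamic program; the Python sketch below merges two symbolic instruction sequences so that shared instructions are emitted only once. The REIS compiler additionally works on executable instructions and embeds scheduling information, which is not modelled here; the instruction names in the example are made up.

    def shortest_common_supersequence(a, b):
        """Merge two instruction sequences into a shortest sequence that
        contains both as subsequences (classic DP, O(len(a)*len(b)))."""
        m, n = len(a), len(b)
        # dp[i][j] = length of the SCS of the suffixes a[i:] and b[j:]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m, -1, -1):
            for j in range(n, -1, -1):
                if i == m:
                    dp[i][j] = n - j
                elif j == n:
                    dp[i][j] = m - i
                elif a[i] == b[j]:
                    dp[i][j] = 1 + dp[i + 1][j + 1]
                else:
                    dp[i][j] = 1 + min(dp[i + 1][j], dp[i][j + 1])
        # Reconstruct one optimal merge.
        out, i, j = [], 0, 0
        while i < m and j < n:
            if a[i] == b[j]:
                out.append(a[i])
                i += 1
                j += 1
            elif dp[i + 1][j] <= dp[i][j + 1]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        out.extend(a[i:])
        out.extend(b[j:])
        return out

    # Example: the shared instructions ("read_adc", "radio_tx") appear once.
    print(shortest_common_supersequence(
        ["read_adc", "filter", "radio_tx"],
        ["read_adc", "log", "radio_tx"]))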
Abstract:
Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Multiple concurrent applications cause significant energy-usage overhead on sensor nodes that cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions together by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
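As a rough illustration of the batching idea, the Python sketch below rounds each application's transmit period down to a power-of-two multiple of the shortest period so that releases align and packets from independent applications can be concatenated into one radio transmission. This is only one possible harmonization rule and may differ from the Rate-Harmonized Scheduling variant used in the paper; the example periods are invented.

    import math

    def harmonized_periods(periods):
        """Round each application's transmit period down to the nearest
        power-of-two multiple of the shortest period, so all releases align
        and packets can be batched."""
        base = min(periods)
        result = []
        for p in periods:
            k = 2 ** int(math.floor(math.log2(p / base)))
            result.append(base * k)
        return result

    # Example: periods in seconds for three applications.
    print(harmonized_periods([10, 24, 75]))   # -> [10, 20, 40]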
Abstract:
The National Cancer Institute (NCI) method allows the distributions of usual intake of nutrients and foods to be estimated. This method can be used in complex surveys. However, the user must perform additional calculations, such as balanced repeated replication (BRR), in order to obtain standard errors and confidence intervals for the percentiles and the mean of the usual intake distribution. The objective is to highlight adaptations of the NCI method using data from the National Dietary Survey. The application of the NCI method was exemplified by analyzing total energy (kcal) and fruit (g) intake, comparing estimates of the mean and standard deviation based on the complex design of the Brazilian survey with those assuming a simple random sample. Although the point estimates of the means were similar, estimates of the standard error under the complex design increased by up to 60% compared to the simple random sample. Thus, to obtain valid estimates of food and energy intake for the population, all of the sampling characteristics of the surveys should be taken into account; when these characteristics are neglected, statistical analysis may produce underestimated standard errors that would compromise the results and the conclusions of the survey.
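For reference, the BRR standard error mentioned above combines replicate estimates as sqrt((1/R) * sum_r (theta_r - theta)^2), where theta is the full-sample estimate and theta_r comes from re-running the model with the r-th set of replicate weights. A minimal Python sketch follows; the numbers in the example are purely illustrative.

    import numpy as np

    def brr_standard_error(full_estimate, replicate_estimates):
        """Balanced repeated replication (BRR) standard error, given the
        full-sample estimate and the estimates from each half-sample
        replicate (no Fay adjustment)."""
        reps = np.asarray(replicate_estimates, dtype=float)
        variance = np.mean((reps - full_estimate) ** 2)
        return float(np.sqrt(variance))

    # Example: mean usual energy intake (kcal) from the full sample and
    # from four illustrative BRR replicates.
    print(brr_standard_error(1950.0, [1930.0, 1975.0, 1942.0, 1968.0]))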
The utilization bound of non-preemptive rate-monotonic scheduling in controller area networks is 25%
Abstract:
Consider a distributed computer system comprising many computer nodes interconnected by a controller area network (CAN) bus. We prove that if priorities are assigned to message streams using rate-monotonic (RM) assignment and the requested capacity of the CAN bus does not exceed 25%, then all deadlines are met.
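The sufficient schedulability test implied by this result can be written directly; the Python sketch below checks whether the requested capacity of a message set stays within the 25% bound. The frame sizes, periods and bus rate in the example are illustrative, not taken from the paper.

    def rm_can_schedulable(messages, bus_bitrate_bps):
        """Sufficient test from the 25% utilization bound: if message streams
        have rate-monotonic priorities and the total requested capacity does
        not exceed 25% of the bus, all deadlines are met.
        messages: list of (frame_bits, period_seconds)."""
        utilization = sum(bits / bus_bitrate_bps / period
                          for bits, period in messages)
        return utilization <= 0.25

    # Example: three streams on a 500 kbit/s CAN bus (frame sizes in bits).
    streams = [(135, 0.010), (135, 0.020), (108, 0.050)]
    print(rm_can_schedulable(streams, 500_000))   # True: utilization ~ 4.5%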
Abstract:
Simulators are indispensable tools to support the development and testing of cooperating objects such as wireless sensor networks (WSNs). However, it is often not possible to compare the results of different simulation tools. Thus, the goal of this paper is the specification of a generic simulation platform for cooperating objects. We propose a platform that consists of a set of simulators that together fulfill the desired simulator properties. We show that, to achieve comparable results, the use of a common specification language for the software-under-test is not feasible. Instead, we argue that using common input formats for the simulated environment and common output formats for the results is useful. This motivates a simulation tool consisting of a set of existing simulators that are able to use common scenario inputs and produce common outputs, which will bring us a step closer to the vision of achieving comparable simulation results.
Abstract:
Wireless sensor networks (WSNs) emerge as underlying infrastructures for new classes of large-scale networked embedded systems. However, WSN system designers must fulfill the quality-of-service (QoS) requirements imposed by the applications (and users). Very harsh and dynamic physical environments and extremely limited energy/computing/memory/communication node resources are major obstacles to satisfying QoS metrics such as reliability, timeliness, and system lifetime. The limited communication range of WSN nodes, link asymmetry, and the characteristics of the physical environment lead to a major source of QoS degradation in WSNs: the "hidden node problem." In wireless contention-based medium access control (MAC) protocols, when two nodes that are not visible to each other transmit to a third node that is visible to both of them, a collision occurs, called a hidden-node or blind collision. This problem greatly impacts network throughput, energy efficiency and message transfer delays, and its impact dramatically increases with the number of nodes. This paper proposes H-NAMe, a very simple yet extremely efficient hidden-node avoidance mechanism for WSNs. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes and scales to multiple clusters via a cluster grouping strategy that guarantees no interference between overlapping clusters. Importantly, H-NAMe is instantiated in IEEE 802.15.4/ZigBee, currently the most widespread communication technologies for WSNs, with only minor add-ons and ensuring backward compatibility with the protocol standards. H-NAMe was implemented and exhaustively tested using an experimental test-bed based on off-the-shelf technology, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. H-NAMe effectiveness was also demonstrated in a target tracking application with mobile robots over a WSN deployment.
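The grouping idea can be sketched centrally as follows (Python): nodes are greedily assigned to the first group in which they can hear every current member, so no group contains a hidden pair. The real H-NAMe mechanism negotiates groups in a distributed way within the IEEE 802.15.4/ZigBee superframe, which is not modelled here; the visibility relation in the example is made up.

    def group_non_hidden(nodes, hears):
        """Greedy split of a cluster into groups in which every pair of
        nodes is within carrier-sense range of each other (no hidden nodes
        inside a group); hears(a, b) -> bool is assumed symmetric."""
        groups = []
        for node in nodes:
            for group in groups:
                if all(hears(node, member) for member in group):
                    group.append(node)
                    break
            else:
                groups.append([node])
        return groups

    # Example with a toy visibility relation (nodes on a line, range 2).
    positions = {"A": 0, "B": 1, "C": 2, "D": 4}
    hears = lambda a, b: abs(positions[a] - positions[b]) <= 2
    print(group_non_hidden(list(positions), hears))  # [['A', 'B', 'C'], ['D']]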
Abstract:
Modeling the fundamental performance limits of Wireless Sensor Networks (WSNs) is of paramount importance to understand their behavior under worst-case conditions and to make the appropriate design choices. This is particularly relevant for time-sensitive WSN applications, where the timing behavior of the network protocols (message transmissions must respect deadlines) impacts the correct operation of these applications. In that direction, this paper contributes a methodology based on Network Calculus, which enables quick and efficient worst-case dimensioning of static or even dynamically changing cluster-tree WSNs in which the data sink can be either static or mobile. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering and bandwidth requirements across any source-destination path in a cluster-tree WSN. We show how to apply our methodology to the case of IEEE 802.15.4/ZigBee cluster-tree WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study using commercially available technology, namely TelosB motes running TinyOS.
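For a single hop, the kind of bound this methodology builds on is the classical Network Calculus result that a flow with a (burst, rate) arrival curve served by a rate-latency curve beta(t) = R * max(t - T, 0) experiences a delay of at most T + burst/R, provided rate <= R. A small Python sketch with illustrative numbers follows; the paper's recurrent closed-form expressions for whole cluster-tree paths are beyond this sketch.

    def hop_delay_bound(burst_bits, rate_bps, service_rate_bps, latency_s):
        """Network Calculus delay bound for a token-bucket (burst, rate)
        arrival curve served by a rate-latency service curve with rate
        service_rate_bps and latency latency_s."""
        assert rate_bps <= service_rate_bps, "flow must not exceed service rate"
        return latency_s + burst_bits / service_rate_bps

    # Example: 2 kbit burst into a hop guaranteeing 250 kbit/s after 4 ms.
    print(hop_delay_bound(2_000, 50_000, 250_000, 0.004))  # 0.012 s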