RadiaLE: A framework for designing and assessing link quality estimators in wireless sensor networks
Abstract:
Stringent cost and energy constraints impose the use of low-cost and low-power radio transceivers in large-scale wireless sensor networks (WSNs). This fact, together with the harsh characteristics of the physical environment, requires a rigorous WSN design. Mechanisms for WSN deployment and topology control, MAC and routing, and resource and mobility management greatly depend on reliable link quality estimators (LQEs). This paper describes the RadiaLE framework, which enables the experimental assessment, design and optimization of LQEs. RadiaLE comprises (i) the hardware components of the WSN testbed and (ii) a software tool for setting up and controlling the experiments, automating the gathering of link measurements through packet-statistics collection, and analyzing the collected data, allowing for LQE evaluation. We also propose a methodology that allows (i) properly setting up different types of links and different types of traffic, (ii) collecting rich link measurements, and (iii) validating LQEs using a holistic and unified approach. To demonstrate the validity and usefulness of RadiaLE, we present two case studies: the characterization of low-power links and a comparison of six representative LQEs. We also extend the second study to evaluate the accuracy of the TOSSIM 2 channel model.
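As a simple illustration of what a link quality estimator computes (this is not code from RadiaLE; the choice of estimator, the field names and the window size are assumptions for the example), the sketch below derives the packet reception ratio (PRR), a common window-based LQE, from the kind of per-link packet statistics such a testbed collects:

```python
def prr(received_seqs, first_seq, window):
    """Fraction of the `window` probe packets with sequence numbers in
    [first_seq, first_seq + window) that were received at least once."""
    got = {s for s in received_seqs if first_seq <= s < first_seq + window}
    return len(got) / window

# Example: probes 0..99 were sent and 8 of them were lost -> PRR = 0.92
received = [s for s in range(100) if s not in {3, 17, 25, 40, 41, 66, 71, 90}]
print(prr(received, first_seq=0, window=100))  # 0.92
```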
Abstract:
As CMOS technology miniaturization progresses, leakage power consumption is starting to dominate dynamic power consumption. Recent technology trends have equipped modern embedded processors with several sleep states and have reduced the energy and time overhead of sleep transitions. The potential of dynamic voltage and frequency scaling (DVFS) to save energy is diminishing due to efficient (low-overhead) sleep states and increased static (leakage) power consumption. State-of-the-art research on system-level static power reduction is based on assumptions that cannot easily be integrated into practical systems. We propose a novel enhanced race-to-halt approach (ERTH) to reduce overall system energy consumption. Exhaustive simulations demonstrate the effectiveness of our approach, showing an improvement of up to 8% over existing work.
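The intuition behind race-to-halt can be shown with a back-of-the-envelope comparison (this is not the ERTH algorithm; the cubic power model and every parameter value are assumptions for illustration): run the workload at full speed and sleep for the remainder of the interval, versus stretching it with DVFS.

```python
# Back-of-the-envelope comparison (not the ERTH algorithm): power model
# P(f) = p_static + k * f**3, with a separate sleep power p_sleep. All values assumed.

def energy_race_to_halt(cycles, interval, f_max, p_static, k, p_sleep):
    """Run at f_max, then sleep for the rest of the interval."""
    busy = cycles / f_max
    return (p_static + k * f_max**3) * busy + p_sleep * (interval - busy)

def energy_dvfs(cycles, interval, p_static, k):
    """Stretch the job over the whole interval at the slowest sufficient frequency."""
    f = cycles / interval
    return (p_static + k * f**3) * interval

# 1e9 cycles of work in a 2 s interval; with leakage this high (2 W static vs.
# 1 W dynamic at f_max), racing to halt and sleeping beats stretching the job.
work = dict(cycles=1e9, interval=2.0)
print(energy_race_to_halt(**work, f_max=1e9, p_static=2.0, k=1e-27, p_sleep=0.01))  # ~3.01 J
print(energy_dvfs(**work, p_static=2.0, k=1e-27))                                   # ~4.25 J
```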
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim to fill this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was instrumental in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). This paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in the opposite direction. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
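To make the remark about flows spanning several periods concrete, the sketch below (not the TDCS solver; cluster names, slot times and the period are invented for the example) estimates the end-to-end delay of a flow that must be forwarded during each traversed cluster's active slot:

```python
# Illustrative sketch: given the start time of each cluster's active slot inside
# one TDCS period, a flow travelling "against" the slot order waits for the next
# period at some hops, so its end-to-end delay spans several periods.

def end_to_end_delay(route, slot_start, period, release=0.0):
    """Time from `release` until the last cluster on `route` has forwarded the data."""
    t = release
    for cluster in route:
        s = slot_start[cluster]
        k = max(0, -(-(t - s) // period))  # ceil((t - s) / period), never negative
        t = s + k * period                 # next activation of this cluster at or after t
    return t - release

slots = {"C0": 0.0, "C1": 1.0, "C2": 2.0}  # active-slot start times within one period
print(end_to_end_delay(["C0", "C1", "C2"], slots, period=3.0))  # 2.0 (within one period)
print(end_to_end_delay(["C2", "C1", "C0"], slots, period=3.0))  # 6.0 (spans several periods)
```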
Abstract:
Electricity is regarded as one of the indispensable means for the growth of any country’s economy. This source of power is the heartbeat of everything from large metropolises, industries, worldwide computer networks and global communication systems down to our homes. Electrical energy is the lifeline of any economic and societal development of a region or country. It is central to developed countries for maintaining acquired lifestyles and essential to developing countries for industrialisation and escaping poverty.
Abstract:
Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool to the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs using simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in the Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions impacts the reliability, the energy consumption and the end-to-end delay, such that improving one may degrade the others.
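The trade-off the study examines can be sketched with a simple analytical model (this is not the Opnet simulation model; independent attempts with a fixed success probability and the timing/energy constants are assumptions): allowing more retransmissions raises reliability but also the expected energy and delay.

```python
# Reliability vs. energy vs. delay as a function of the retransmission limit,
# assuming i.i.d. attempts that each succeed with probability p.

def stats(p, max_retx, t_attempt=0.01, e_attempt=1.0):
    n = max_retx + 1                                  # total attempts allowed
    delivery = 1 - (1 - p) ** n                       # reliability
    exp_attempts = sum((1 - p) ** k for k in range(n))  # attempts made, stopping on success
    return delivery, exp_attempts * e_attempt, exp_attempts * t_attempt

for r in (0, 1, 3, 7):
    rel, energy, delay = stats(p=0.8, max_retx=r)
    print(f"retx={r}: reliability={rel:.4f} energy={energy:.2f} delay={delay:.4f}")
```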
Abstract:
Existing work on energy management for real-time systems often ignores the substantial time and energy cost of making DVFS and sleep-state decisions, and/or assumes very simple models. In this paper we explore the parameter space for such decisions and the constraints they face.
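One example of such a decision cost (a minimal sketch under assumed parameter values, not the paper's model) is the break-even idle time below which entering a sleep state wastes more energy than it saves:

```python
def break_even_idle_time(p_idle, p_sleep, e_transition, t_transition):
    """Shortest idle interval T for which sleeping pays off.
    Sleeping over T costs e_transition + p_sleep * (T - t_transition);
    staying idle costs p_idle * T. Solve the two for equality."""
    return (e_transition - p_sleep * t_transition) / (p_idle - p_sleep)

# Example: 100 mW idle, 1 mW asleep, 5 mJ and 3 ms to enter and exit the sleep state
print(break_even_idle_time(0.100, 0.001, 0.005, 0.003))  # ~0.050 s
```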
Abstract:
The IEEE 802.15.4 protocol proposes a flexible communication solution for Low-Rate Wireless Personal Area Networks (LR-WPANs), including wireless sensor networks (WSNs). It has the advantage of fitting different requirements of potential applications by adequately setting its parameters. When in beacon-enabled mode, the protocol can provide timeliness guarantees by using its Guaranteed Time Slot (GTS) mechanism. However, power efficiency and timeliness guarantees are often two antagonistic requirements in wireless sensor networks. The purpose of this paper is to analyze and propose a methodology for setting the relevant parameters of IEEE 802.15.4-compliant WSNs that takes into account a proper trade-off between power efficiency and delay bound guarantees. First, we propose two accurate models of service curves for a GTS allocation as a function of the IEEE 802.15.4 parameters, using the Network Calculus formalism. We then evaluate the delay bound guaranteed by a GTS allocation and express it as a function of the duty cycle. Based on the relation between the delay requirement and the duty cycle, we propose a power-efficient superframe selection method that simultaneously reduces power consumption and meets the delay requirements of real-time flows that allocate GTSs. The results of this work may pave the way for a power-efficient management of the GTS mechanism in an IEEE 802.15.4 cluster.
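For context, the generic Network Calculus delay bound has the following shape (a sketch only; the paper derives service curves specific to GTS allocations, whereas the rate, burst and latency values here are assumptions): a flow constrained by a token bucket alpha(t) = b + r*t and served by a rate-latency curve beta(t) = R*(t - T)+ experiences a delay of at most T + b/R.

```python
def delay_bound(b, r, R, T):
    """Worst-case delay for (b, r)-constrained traffic over a rate-latency server."""
    assert r <= R, "the bound only holds if the arrival rate does not exceed R"
    return T + b / R

# Example: burst b = 800 bits, rate r = 2 kbit/s, GTS-like guaranteed rate
# R = 5 kbit/s and latency T = 0.4 s (roughly one superframe of waiting).
print(delay_bound(b=800, r=2000, R=5000, T=0.4))  # 0.56 s
```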
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. Grid nudging and the integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes under evaluation were those regarding the boundary layer. The influence of local terrain complexity and simulation domain resolution on the model results was also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Overall, the model reproduced the local wind regime well, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that using grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and the parameterization set composed of the MM5, Yonsei University and Noah physical schemes is the most suitable for this site. Results were poorer in sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance. Results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
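The evaluation metrics named above are standard; a minimal sketch with invented observed and simulated wind speeds (the data, and treating the Standard Deviation Error as the standard deviation of the model error, are assumptions) is:

```python
import math

def bias(obs, sim):
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def stdev_error(obs, sim):
    """Standard deviation of the model error, i.e. RMSE with the mean bias removed."""
    b = bias(obs, sim)
    return math.sqrt(sum((s - o - b) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [3.2, 5.1, 7.8, 6.0, 4.4]   # observed wind speed (m/s), made up
sim = [2.9, 4.5, 7.0, 5.6, 4.1]   # simulated wind speed, systematically low
print(bias(obs, sim), rmse(obs, sim), stdev_error(obs, sim))
```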
Abstract:
Over the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely: joint travelled distance, joint velocity, end-effector/Cartesian distance, end-effector/Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector travelled distance, trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the optimized trajectory considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, Section 8 draws the main conclusions.
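The notion of a non-dominated (Pareto-optimal) set that these methods target can be shown with a minimal sketch (not the chapter's planner; the candidate solutions and their criterion values are invented):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every criterion and strictly better in at least one
    (all criteria are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example criteria: (joint travelled distance, energy)
candidates = [(3.0, 9.0), (2.5, 10.0), (4.0, 4.0), (3.5, 9.5), (5.0, 4.5)]
print(pareto_front(candidates))   # [(3.0, 9.0), (2.5, 10.0), (4.0, 4.0)]
```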
Abstract:
Remote experimentation laboratories are systems based on real equipment that allow students to perform practical work through a computer connected to the internet. In engineering fields, lab activities play a fundamental role; distance learning has not shown good results in these fields because traditional lab activities cannot be covered by that paradigm. These activities can be set up for one student or for a group of students working from different locations. All these configurations call for a flexible model that covers every possibility (individual or group). An inter-continental network of remote laboratories supported by both European and Latin American institutions of higher education has been formed. In this network context, a collaborative learning model for students working from different locations has been defined. The first considerations are presented.
Abstract:
Master's in Chemical Engineering, branch of energy optimization in the chemical industry.
Abstract:
Remote Experimentation is an educational resource that allows teachers to strengthen the practical contents of science & engineering courses. However, building the interfaces to remote experiments is not a trivial task. Although teachers normally master the practical contents addressed by a particular remote experiment, they usually lack the programming skills required to quickly build the corresponding web interface. This paper describes the automatic generation of experiment interfaces through a web-accessible Java application. The application displays a list of existing modules and, once the requested modules have been selected, it generates the code that enables the browser to display the experiment interface. The tool's main advantage is enabling non-technical teachers to create their own remote experiments.
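Purely as a schematic illustration of template-based interface generation (the actual tool is a web-accessible Java application; the module names and HTML snippets below are invented for the example):

```python
SNIPPETS = {
    "oscilloscope": '<canvas id="scope"></canvas>',
    "signal_generator": '<input type="range" id="freq" min="1" max="1000">',
    "webcam": '<img src="/camera/stream" alt="live view">',
}

def generate_interface(selected_modules, title="Remote experiment"):
    """Concatenate the snippets of the selected modules into a minimal HTML page."""
    body = "\n".join(f"  {SNIPPETS[m]}" for m in selected_modules)
    return f"<html><head><title>{title}</title></head>\n<body>\n{body}\n</body></html>"

print(generate_interface(["signal_generator", "oscilloscope"]))
```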
Abstract:
Basel III will have a significant impact on the European banking sector. In September 2010, supervisors of various countries adopted the new prudential rules proposed by the Basel Committee on Banking Supervision, to be applied to the business of credit institutions (hereinafter called CIs) in a phased manner from 2013, with full implementation expected by 2019. The purpose of this new regulation is to limit the excessive risk that these institutions took on in the period preceding the global financial crisis of 2008. This new regulation is colloquially known as Basel III. Given the requirement of Basel II that banks and their supervisors assess the soundness and adequacy of internal credit risk measurement and management systems, the development of methodologies for the validation of internal and external rating systems is clearly an important issue. More specifically, there is a need to develop tools to validate the systems used to generate the parameters (such as PD, LGD, EAD and perceived-risk ratings) that serve as starting points for the IRB approach to credit risk. In this context, this work comprises a number of approaches and tools used to evaluate the robustness of these elements of IRB systems.
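As a minimal illustration of how the named IRB parameters fit together (not a Basel-compliant validation tool; all figures are invented), PD, LGD and EAD combine into an expected loss, and a crude backtest compares a grade's predicted PD with its observed default rate:

```python
def expected_loss(pd_, lgd, ead):
    """Expected loss of one exposure: EL = PD * LGD * EAD."""
    return pd_ * lgd * ead

portfolio = [  # (rating grade, PD, LGD, EAD in EUR)
    ("A", 0.002, 0.45, 1_000_000),
    ("B", 0.010, 0.40, 500_000),
    ("C", 0.050, 0.60, 250_000),
]
print(sum(expected_loss(pd_, lgd, ead) for _, pd_, lgd, ead in portfolio))  # 10400.0

def backtest(predicted_pd, defaults, obligors):
    """Compare a grade's predicted PD with its observed default rate."""
    observed = defaults / obligors
    return predicted_pd, observed, observed <= predicted_pd

print(backtest(0.010, defaults=7, obligors=600))  # (0.01, ~0.0117, False)
```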
Abstract:
This paper describes the application of Design State Exploration techniques in the development of a remote lab for projectile motion experiments. The application was enabled by the existence of two independent teams: one composed of a succession of interns, which started first, and another with two grantees, which started a few months later. The paper presents evidence of how this approach benefited the development process conducted by the second team, which drew on the design state exploration studies performed by the first team. This particular aspect is highlighted in relation to the work already presented at the 10th Remote Engineering and Virtual Instrumentation (REV) conference.