32 results for key scheduling algorithm
Abstract:
Conference - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC) - Jun 24-27, 2013
Abstract:
Master's final project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Objective of the study: to compare the performance of the Pencil Beam Convolution (PBC) and Analytical Anisotropic Algorithm (AAA) algorithms in 3D conformal radiotherapy treatment planning for breast tumours.
Abstract:
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the acquired data to perform efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. This paper proposes extracting and coding binary descriptors so as to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the efficiency of the image retrieval task. © 2014 EURASIP.
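Not part of the record above, a minimal Python sketch of the two coding modes the abstract names: Intra predicts each bit from the previous bit of the same descriptor, Inter predicts a descriptor from another descriptor of the same image. The XOR residuals and the popcount-based mode decision are illustrative assumptions, not the paper's actual coder.

```python
import numpy as np

def intra_residual(desc: np.ndarray) -> np.ndarray:
    """Intra mode: exploit correlation between elements of one descriptor
    by XOR-ing each bit with the previous bit (first bit kept as-is)."""
    res = desc.copy()
    res[1:] ^= desc[:-1]
    return res

def inter_residual(desc: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Inter mode: exploit correlation between descriptors of the same
    image by XOR-ing against a reference descriptor."""
    return desc ^ ref

def mode_decision(desc, ref):
    """Pick the mode whose residual has fewer set bits (cheaper to
    entropy-code); a stand-in for the paper's actual rate criterion."""
    intra, inter = intra_residual(desc), inter_residual(desc, ref)
    return ("intra", intra) if intra.sum() <= inter.sum() else ("inter", inter)

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, 256, dtype=np.uint8)  # e.g. a 256-bit binary descriptor
desc = ref.copy()
desc[rng.choice(256, 20, replace=False)] ^= 1  # a correlated descriptor
print(mode_decision(desc, ref)[0])             # likely "inter" here
```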
Abstract:
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible because customers lose unmeasured amounts of performance and are only compensated later, in subsequent charging cycles. We propose scheduling virtual machines (VMs) driven by range-based, non-linear reductions of utility that differ across classes of users and across ranges of resource allocation: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (public, private, and community clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters with different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers in terms of revenue and resource utilization, allowing for more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved by incorporating an SLA-based redistribution of their VMs' computational power.
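As a rough illustration of the idea (not the paper's model), the sketch below encodes a partial-utility table per client class and greedily degrades, on an overcommitted host, the VM whose utility (and hence revenue) drops least; all class names, ranges and numbers are invented.

```python
# Illustrative numbers only: the paper's cost model maps ranges of resource
# degradation to the partial utility each client class still assigns to its
# VM; everything below is an assumed toy instance.
PARTIAL_UTILITY = {
    "gold":   [(0.95, 1.0), (0.8, 0.6), (0.5, 0.2)],  # steep utility loss
    "bronze": [(0.95, 1.0), (0.5, 0.8), (0.2, 0.4)],  # tolerates degradation
}

def utility(cls: str, ratio: float) -> float:
    """Partial utility of granting a VM `ratio` of its requested capacity."""
    for threshold, u in PARTIAL_UTILITY[cls]:
        if ratio >= threshold:
            return u
    return 0.0

def degrade(vms, capacity, step=0.1):
    """Greedy sketch of utility-driven overcommitment: while demand exceeds
    capacity, shave the VM whose partial utility (hence revenue) drops the
    least for the next slice of degradation."""
    alloc = {vm: req for vm, (cls, req) in vms.items()}
    while sum(alloc.values()) > capacity + 1e-9:
        def loss(vm):
            cls, req = vms[vm]
            r = alloc[vm] / req
            return utility(cls, r) - utility(cls, r - step)
        victim = min(alloc, key=loss)
        alloc[victim] -= step * vms[victim][1]
    return alloc

vms = {"vm1": ("gold", 4.0), "vm2": ("bronze", 4.0)}   # requested CPUs
print(degrade(vms, capacity=6.0))  # the bronze VM absorbs the degradation
```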
Abstract:
This work discusses the importance of renewable production forecasting in an island environment. A probabilistic forecast based on kernel density estimators is proposed. The aggregation of these forecasts allows determining the amount of thermal generation needed to schedule and operate the power grid of an island with high penetration of renewable generation. A case study based on the electric system of S. Miguel Island is presented. The results show that the forecast techniques are an imperative tool to help grid management.
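A minimal stand-in for the kind of kernel-density forecast described, assuming Gaussian kernels and invented sample data; the quantile read off the estimated CDF indicates how much thermal generation must be scheduled.

```python
import numpy as np

def kde_cdf(samples: np.ndarray, grid: np.ndarray, bandwidth: float) -> np.ndarray:
    """Gaussian kernel density estimate of the CDF on `grid` (a minimal
    stand-in for the paper's kernel density forecaster)."""
    from math import erf
    z = (grid[:, None] - samples[None, :]) / (bandwidth * np.sqrt(2.0))
    return np.vectorize(erf)(z).mean(axis=1) * 0.5 + 0.5

# Hypothetical hourly wind production samples for one look-ahead hour (MW).
hist = np.array([8.0, 9.5, 7.2, 11.0, 6.8, 10.1, 9.0, 7.9])
grid = np.linspace(0.0, 15.0, 301)
cdf = kde_cdf(hist, grid, bandwidth=1.0)

# 10% quantile of renewable output: with 90% confidence at least this much
# wind arrives, so thermal units must cover (demand - q10) of the load.
q10 = grid[np.searchsorted(cdf, 0.10)]
demand = 20.0                         # assumed island load, MW
print(f"schedule >= {demand - q10:.1f} MW of thermal generation")
```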
Abstract:
This paper addresses the self-scheduling of a power producer taking part in day-ahead joint energy and spinning reserve markets, aiming at a short-term coordination of wind power plants with concentrated solar power plants having thermal energy storage. The short-term coordination is formulated as a mixed-integer linear programming problem, given as the maximization of profit subject to technical operating constraints, including those related to a transmission line. Probability density functions are used to model the variability of the hourly wind speed and the solar irradiation, taking their negative correlation into account. Case studies based on Iberian Peninsula wind and concentrated solar power plants are presented, providing the optimal energy and spinning reserve for the short-term self-scheduling, in order to unveil the coordination benefits and synergies between wind and solar resources. Results and a sensitivity analysis favour the coordination, showing an increase in profit, allowing for spinning reserve, reducing the need for curtailment, and increasing the transmission line capacity factor. (C) 2014 Elsevier Ltd. All rights reserved.
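The abstract's stochastic model is not reproduced here; the sketch below is a toy deterministic LP in the same spirit, a wind plant and a CSP plant with thermal storage sharing one transmission line, written with the open-source PuLP modeller (all prices and capacities invented).

```python
import pulp

T = range(4)                         # four hours
price   = [40.0, 55.0, 70.0, 50.0]   # day-ahead energy price, EUR/MWh
wind_av = [80.0, 60.0, 30.0, 50.0]   # available wind power, MW
csp_av  = [0.0, 40.0, 90.0, 60.0]    # available CSP (solar field) power, MW
line    = 100.0                      # shared transmission line limit, MW
storage = 50.0                       # CSP thermal storage energy, MWh

prob = pulp.LpProblem("wind_csp_coordination", pulp.LpMaximize)
w = pulp.LpVariable.dicts("wind", T, lowBound=0)
s = pulp.LpVariable.dicts("csp", T, lowBound=0)
chg = pulp.LpVariable.dicts("charge", T, lowBound=0)     # solar heat stored
dis = pulp.LpVariable.dicts("discharge", T, lowBound=0)

prob += pulp.lpSum(price[t] * (w[t] + s[t]) for t in T)  # profit, no var. cost
for t in T:
    prob += w[t] <= wind_av[t]                   # wind availability
    prob += s[t] + chg[t] <= csp_av[t] + dis[t]  # CSP power balance
    prob += w[t] + s[t] <= line                  # joint transmission limit
# toy energy conservation: storage chronology is ignored here
prob += pulp.lpSum(dis[t] for t in T) <= pulp.lpSum(chg[t] for t in T)
prob += pulp.lpSum(chg[t] for t in T) <= storage

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(pulp.value(w[t]), pulp.value(s[t])) for t in T])
```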
Abstract:
Dissertation for obtaining the degree of Master in Electrotechnical Engineering, branch of Automation and Industrial Electronics
Abstract:
This paper addresses the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, having bilateral contracts and being emission-constrained. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and proficiency of the proposed approach in supporting bidding strategies. (C) 2014 Elsevier Ltd. All rights reserved.
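Again as a hedged toy, not the paper's formulation: a small stochastic self-scheduling instance in PuLP with one thermal unit, two equiprobable price scenarios, start-up costs, ramp limits and an emission-allowance cap; forbidden zones, minimum up/down times and scenario reduction are omitted.

```python
import pulp

T, S = range(6), range(2)
price = {0: [30, 35, 60, 65, 40, 30],   # scenario 0 prices, EUR/MWh
         1: [25, 30, 45, 50, 35, 25]}   # scenario 1 prices
prob_s = {0: 0.5, 1: 0.5}
pmin, pmax, ramp = 50.0, 200.0, 80.0    # MW, MW, MW/h
cvar, cstart = 28.0, 500.0              # variable and start-up costs
emis_rate, emis_cap = 0.9, 700.0        # tCO2/MWh, tCO2 allowance

m = pulp.LpProblem("thermal_self_scheduling", pulp.LpMaximize)
u = pulp.LpVariable.dicts("on", T, cat="Binary")     # here-and-now commitment
y = pulp.LpVariable.dicts("start", T, cat="Binary")
p = pulp.LpVariable.dicts("p", [(s, t) for s in S for t in T], lowBound=0)

# expected market profit minus start-up costs
m += pulp.lpSum(prob_s[s] * (price[s][t] - cvar) * p[s, t]
                for s in S for t in T) - pulp.lpSum(cstart * y[t] for t in T)
for s in S:
    for t in T:
        m += p[s, t] >= pmin * u[t]
        m += p[s, t] <= pmax * u[t]
        if t > 0:
            m += p[s, t] - p[s, t - 1] <= ramp
            m += p[s, t - 1] - p[s, t] <= ramp
    # stochastic emission-allowance constraint, one per scenario
    m += pulp.lpSum(emis_rate * p[s, t] for t in T) <= emis_cap
for t in T:
    m += y[t] >= u[t] - (u[t - 1] if t > 0 else 0)   # start-up logic

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("commitment:", [int(pulp.value(u[t])) for t in T])
```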
Abstract:
Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of doing the design space exploration of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms, considering the main architectural aspects, and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
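The abstract does not reproduce its cycle-count equation, so the sketch below gives the standard roofline-style analysis such an equation typically takes for blocked matrix multiplication, with all parameter values invented.

```python
# For an n x n double-precision multiply tiled into b x b blocks (b set by
# each core's local memory), external traffic scales as ~2*n^3/b words, so
# execution is bound by max(compute time, memory time).
def estimated_cycles(n, cores, local_mem_words, bw_words_per_cycle,
                     flops_per_core_per_cycle=2.0):
    """Roofline-style cycle estimate for a dense n x n x n matmul."""
    b = int((local_mem_words / 3) ** 0.5)   # three b x b tiles resident
    flops = 2.0 * n ** 3
    compute_cycles = flops / (cores * flops_per_core_per_cycle)
    traffic_words = 2.0 * n ** 3 / b        # A and B tiles streamed in
    memory_cycles = traffic_words / bw_words_per_cycle
    return max(compute_cycles, memory_cycles)

# Invented parameters loosely echoing the abstract's setting:
# 256 cores, 8K-word local stores, 16 GB/s at 1 GHz = 2 doubles/cycle.
cyc = estimated_cycles(n=4096, cores=256, local_mem_words=8192,
                       bw_words_per_cycle=2.0)
print(f"~{cyc:.3g} cycles, {2.0 * 4096**3 / cyc:.1f} flops/cycle sustained")
```

With these numbers the estimate is memory-bound, which echoes the abstract's remark that area efficiency is limited by the memory bandwidth considered.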
Abstract:
An adaptive antenna array combines the signal of each element, using some constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and beamforming algorithms. A comparison of the performance of the algorithms used is made in terms of runtime and accuracy. These characteristics depend on the SNR of the incoming signal.
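A self-contained numpy sketch of the two stages described, here using a conventional (Bartlett) DOA scan and delay-and-sum beamforming on a 4 x 4 planar array; the paper compares several algorithms, of which these are only the simplest representatives.

```python
import numpy as np

def steering(az, el, nx=4, ny=4, d=0.5):
    """Steering vector of an nx x ny planar array (element spacing d in
    wavelengths) toward azimuth az and elevation el (radians)."""
    kx = 2 * np.pi * d * np.sin(el) * np.cos(az)
    ky = 2 * np.pi * d * np.sin(el) * np.sin(az)
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    return np.exp(1j * (kx * ix + ky * iy)).ravel()

# Simulate snapshots of one source at (60 deg, 30 deg) in white noise.
rng = np.random.default_rng(1)
az0, el0 = np.deg2rad(60), np.deg2rad(30)
a0 = steering(az0, el0)
snaps = (np.outer(a0, rng.standard_normal(200))
         + 0.1 * (rng.standard_normal((16, 200))
                  + 1j * rng.standard_normal((16, 200))))
R = snaps @ snaps.conj().T / 200          # sample covariance

# Conventional (Bartlett) DOA spectrum over an azimuth scan at el = 30 deg,
# then delay-and-sum beamforming weights pointed at the estimate.
scan = np.deg2rad(np.arange(0, 181))
spec = [np.real(steering(az, el0).conj() @ R @ steering(az, el0))
        for az in scan]
az_hat = scan[int(np.argmax(spec))]
w = steering(az_hat, el0) / 16            # beamformer weights
print(f"estimated azimuth: {np.rad2deg(az_hat):.0f} deg")
print(f"beamformer gain toward source: {np.abs(w.conj() @ a0):.2f}")
```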
Abstract:
A new family of eight ruthenium(II)-cyclopentadienyl bipyridine derivatives, bearing nitrogen, sulfur, phosphorus and carbonyl sigma-bonded coligands, has been synthesized. Compounds bearing nitrogen-bonded coligands were found to be unstable in aqueous solution, while the others presented stabilities appropriate for the biological assays and were pursued for determination of IC50 values in ovarian (A2780) and breast (MCF7 and MDAMB231) human cancer cell lines. These studies were also carried out for the [5:HSA] and [6:HSA] adducts (HSA = human serum albumin), and a better performance was found in the first case. Spectroscopic and electrochemical studies by cyclic voltammetry and density functional theory calculations allowed us to gain some understanding of the electronic flow directions within the molecules and to find a possible clue concerning the structural features of coligands that can activate bipyridyl ligands toward an increased cytotoxic effect. X-ray structure analysis of compound [Ru(η⁵-C5H5)(bipy)(PPh3)][PF6] (7; bipy = bipyridine) showed crystallization in the C2/c space group with two enantiomers of the [Ru(η⁵-C5H5)(bipy)(PPh3)]⁺ cation complex in the racemic crystal packing. (C) 2015 Elsevier Inc. All rights reserved.
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
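HYCA itself is not reproduced here; the snippet below only illustrates, in plain numpy, the combination the abstract describes (compressive measurements plus a low-dimensional mixing model with few endmembers), without any of the GPU-specific shared-memory or coalescing machinery.

```python
import numpy as np

rng = np.random.default_rng(2)
L, p, m = 100, 4, 12                  # bands, endmembers, measurements/pixel
E = rng.random((L, p))                # endmember signatures (assumed known)
a = rng.dirichlet(np.ones(p))         # true abundances of one pixel
x = E @ a                             # true spectrum
K = rng.standard_normal((m, L)) / np.sqrt(m)   # random measurement matrix
y = K @ x                             # compressive measurement (m << L)

# Because x lies in the p-dimensional span of E, m >= p measurements
# suffice: solve y = (K E) a for the abundances, then rebuild x.
a_hat, *_ = np.linalg.lstsq(K @ E, y, rcond=None)
x_hat = E @ a_hat
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```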
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are the vertices of a simplex. The algorithm complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
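The geometric fact the abstract states translates almost directly into code. Below is a didactic simplification of VCA in numpy (it omits the SNR-dependent projection step of the published algorithm): repeatedly draw a direction orthogonal to the endmembers found so far and keep the pixel that is extreme along it.

```python
import numpy as np

def vca(Y: np.ndarray, p: int, seed: int = 0):
    """Simplified vertex component analysis: endmembers are the vertices of
    the data simplex, found by projecting the data onto a direction
    orthogonal to the subspace spanned by the endmembers already selected
    and keeping the extreme pixel. A didactic sketch, not the authors'
    full algorithm."""
    rng = np.random.default_rng(seed)
    L, n = Y.shape                       # bands x pixels
    E = np.zeros((L, p))
    idx = []
    for k in range(p):
        f = rng.standard_normal(L)       # random direction ...
        if k > 0:                        # ... orthogonalized against span(E)
            Ek = E[:, :k]
            f -= Ek @ np.linalg.lstsq(Ek, f, rcond=None)[0]
        v = np.abs(f @ Y)
        idx.append(int(np.argmax(v)))    # most extreme pixel = a vertex
        E[:, k] = Y[:, idx[-1]]
    return E, idx

# Toy linear mixtures of three endmembers, with the pure pixels included.
rng = np.random.default_rng(3)
M = rng.random((50, 3))                  # 50 bands, 3 endmember signatures
A = rng.dirichlet(np.ones(3), size=500).T  # abundances on the simplex
Y = np.hstack([M, M @ A])                # columns 0..2 are the pure pixels
E, idx = vca(Y, 3)
print(sorted(idx))                       # expect [0, 1, 2]: vertices found
```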