977 results for Short Loadlength, Fast Algorithms
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 2713–2716, Seattle, USA
Abstract:
In the past few years we have witnessed the fast development of distance-learning tools such as Open Educational Resources (OER) and Massive Open Online Courses (MOOCs). This paper presents the “Mathematics without STRESS” MOOC Project, a cooperation between four schools of the Polytechnic Institute of Oporto (IPP). The concept of the MOOC and its quickly growing popularity are presented and complemented by a discussion of some MOOC definitions. The development process of the project is described, focusing on the MOOC structure used as well as on the several types of course materials produced. Finally, a short discussion of the problems and challenges encountered throughout the project is presented. It is also our goal to contribute to a change in the way the teaching and learning of Mathematics is seen and practiced nowadays.
Abstract:
The computations performed by the brain ultimately rely on the functional connectivity between neurons embedded in complex networks. It is well known that neuronal connections, the synapses, are plastic, i.e., the contribution of each presynaptic neuron to the firing of a postsynaptic neuron can be independently adjusted. The modulation of effective synaptic strength can occur on time scales ranging from tens or hundreds of milliseconds, to tens of minutes or hours, to days, and may involve pre- and/or postsynaptic modifications. These mechanisms are generally believed to underlie learning and memory and, hence, it is fundamental to understand their consequences for the behavior of neurons. (...)
Abstract:
In the current context of serious climate change, where the increased frequency of some extreme events can raise the rate of periods prone to high-intensity forest fires, the National Forest Authority often implements, in several Portuguese forest areas, a regular set of measures to control the amount of available fuel mass (PNDFCI, 2008). In the present work we present a preliminary analysis of the consequences of prescribed-fire measures for fuel-mass control on soil recovery, in particular in terms of water retention capacity, organic matter content, pH and iron content. This work is part of a larger study (Meira-Castro, 2009(a); Meira-Castro, 2009(b)). Given the established practice of data collection, embodied in multidimensional matrices of n columns (variables under analysis) by p rows (areas sampled at different depths), and considering the quantitative nature of the data in this study, we chose a methodological approach based on multivariate statistical analysis, in particular Principal Component Analysis (PCA) (Góis, 2004). The experiments were carried out on the soil cover of a natural site of andalusitic schist in Gramelas, Caminha, NW Portugal, which had remained untouched by prescribed burning for four years and was subjected to prescribed fire in March 2008. The soil samples were collected from five different plots at six different time periods. The methodological option adopted allowed us to identify the most relevant relational structures within the n variables, within the p samples, and in both sets at the same time (Garcia-Pereira, 1990). Consequently, in addition to the traditional outputs produced by PCA, we analyzed the influence of both sampling depth and geomorphological environment on the behavior of all variables involved.
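To make the methodological option concrete, here is a minimal sketch of a correlation-matrix PCA applied to the kind of n-variable by p-sample matrix described above; the dimensions and data are invented for illustration and are not the study's measurements.

```python
# Minimal PCA sketch for a matrix of p rows (sampled plots/depths) by
# n columns (water retention, organic matter, pH, iron, ...).
# Hypothetical data; not the study's dataset.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))          # p = 30 samples, n = 4 soil variables

# Standardize: PCA on quantitative variables with different units is
# usually done on the correlation matrix, i.e. on z-scored columns.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Eigendecomposition of the correlation matrix gives the principal axes.
C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]     # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                  # sample coordinates on the components
explained = eigvals / eigvals.sum()   # variance explained per component
print(explained)
```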
Abstract:
Reconfigurable computing has experienced considerable expansion in the last few years, due in part to the fast run-time partial reconfiguration features offered by recent SRAM-based Field Programmable Gate Arrays (FPGAs), which allow the real-time implementation of dynamic resource allocation strategies, with multiple independent functions from different applications sharing the same logic resources in the spatial and temporal domains. However, when the sequence of reconfigurations to be performed is not predictable, the efficient management of the available logic space becomes the greatest challenge posed to these systems. Resource allocation decisions have to be made concurrently with system operation, taking into account function priorities and optimizing the currently available space. As a consequence of the unpredictability of this allocation procedure, the logic space becomes fragmented, with many small areas of free resources failing to satisfy most requests and thus remaining unused. A rearrangement of the currently running functions is therefore necessary to obtain enough contiguous space for incoming functions, avoiding the spreading of their components and the resulting degradation of system performance. A novel active relocation procedure for Configurable Logic Blocks (CLBs) is presented herein, able to carry out online rearrangements, defragmenting the available FPGA resources without disturbing currently running functions.
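The fragmentation problem lends itself to a toy illustration. The sketch below models the logic space as a 1-D strip of CLB columns and compacts running functions to coalesce free space; it is a generic compaction pass under that simplified model, not the paper's active relocation procedure.

```python
# Toy model of the fragmentation problem: functions occupy contiguous
# column ranges on a 1-D strip of CLB columns, and compaction relocates
# them to coalesce free space. Generic illustration only.
from dataclasses import dataclass

@dataclass
class Function:
    name: str
    start: int   # first occupied column
    width: int   # number of CLB columns occupied

def compact(functions: list[Function], total_cols: int) -> int:
    """Slide every running function as far left as possible (in order of
    current position) and return the size of the contiguous free region."""
    cursor = 0
    for f in sorted(functions, key=lambda f: f.start):
        f.start = cursor          # relocate: in hardware this would rewrite
        cursor += f.width         # the corresponding bitstream frames
    return total_cols - cursor    # contiguous free columns after compaction

running = [Function("filter", 2, 3), Function("codec", 8, 4), Function("crc", 15, 2)]
print(compact(running, total_cols=20))   # -> 11 contiguous free columns
```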
Abstract:
With the emergence of low-power wireless hardware, new ways of communication were needed. In order to standardize communication between these low-powered devices, the Internet Engineering Task Force (IETF) released the 6LoWPAN standard, which acts as an adaptation layer making IPv6 suitable for low-power and lossy networks. Likewise, the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) has been proposed by the IETF Routing Over Low power and Lossy networks (ROLL) Working Group as a standard routing protocol for IPv6 routing in low-power wireless sensor networks. The research performed in this thesis uses these technologies to implement a mobility process. Mobility management is a fundamental yet challenging area in low-power wireless networks. There are applications that require mobile nodes to exchange data with a fixed infrastructure with quality-of-service guarantees. A prime example of such applications is the monitoring of patients in real time. In these scenarios, broadcasting data to all access points (APs) within range may not be a valid option due to the energy consumption, data storage and complexity requirements. An alternative and efficient option is to allow mobile nodes to perform hand-offs. Hand-off mechanisms have been well studied in cellular and ad-hoc networks. However, low-power wireless networks pose a new set of challenges. On one hand, simpler radios and constrained resources call for simpler hand-off schemes. On the other hand, the shorter coverage and higher variability of low-power links require a careful tuning of the hand-off parameters. In this work, we tackle the problem of integrating smart-HOP within a standard protocol, specifically RPL. The simulation results in Cooja indicate that the proposed scheme minimizes the hand-off delay and the total network overhead. The standard RPL protocol is simply unable to provide reliable mobility support similar to other COTS technologies; it merely supports the joining and leaving of nodes, with very low responsiveness in the presence of physical mobility.
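For context, here is a minimal sketch of the routing decision RPL nodes repeatedly make, and which node mobility stresses: selecting the preferred parent with the lowest resulting rank. It loosely follows the shape of MRHOF-with-ETX rank computation; the constants and candidate values are illustrative assumptions.

```python
# Hedged sketch of RPL preferred-parent selection: pick the candidate
# whose advertised rank plus link cost yields the lowest resulting rank.
# Modeled loosely on MRHOF with ETX; values are illustrative assumptions.
from dataclasses import dataclass

MIN_HOP_RANK_INCREASE = 256  # RPL's default rank scaling unit

@dataclass
class Candidate:
    node_id: str
    advertised_rank: int   # rank from the candidate's DIO message
    etx: float             # measured expected transmission count

def rank_through(c: Candidate) -> float:
    # Link cost scaled to rank units, added to the parent's own rank.
    return c.advertised_rank + c.etx * MIN_HOP_RANK_INCREASE

def preferred_parent(candidates: list[Candidate]) -> Candidate:
    return min(candidates, key=rank_through)

dio_cache = [Candidate("ap1", 256, 1.2), Candidate("ap2", 512, 1.0)]
print(preferred_parent(dio_cache).node_id)   # -> "ap1"
```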
Abstract:
Computerized scheduling methods and computerized scheduling systems according to exemplary embodiments. A computerized scheduling method may be stored in a memory and executed on one or more processors. The method may include defining a main multi-machine scheduling problem as a plurality of single machine scheduling problems; independently solving the plurality of single machine scheduling problems thereby calculating a plurality of near optimal single machine scheduling problem solutions; integrating the plurality of near optimal single machine scheduling problem solutions into a main multi-machine scheduling problem solution; and outputting the main multi-machine scheduling problem solution.
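A hedged sketch of the claimed decompose-solve-integrate flow, with a toy shortest-processing-time rule standing in for the near-optimal single-machine solver; the solver choice and data layout are assumptions, not the patented method.

```python
# Toy illustration of the flow: split a multi-machine problem into
# per-machine subproblems, solve each independently, then integrate the
# partial schedules. The SPT heuristic is an assumption for the sketch.
from collections import defaultdict

def solve_single_machine(jobs):
    """Near-optimal single-machine schedule: shortest processing time
    first (minimizes mean flow time on a single machine)."""
    return sorted(jobs, key=lambda j: j["proc_time"])

def schedule(all_jobs):
    # 1. Define the main problem as a plurality of per-machine subproblems.
    per_machine = defaultdict(list)
    for job in all_jobs:
        per_machine[job["machine"]].append(job)
    # 2. Solve each subproblem independently (could run in parallel).
    partial = {m: solve_single_machine(js) for m, js in per_machine.items()}
    # 3. Integrate the partial solutions into one multi-machine schedule.
    return {m: [j["id"] for j in seq] for m, seq in partial.items()}

jobs = [{"id": "a", "machine": 1, "proc_time": 5},
        {"id": "b", "machine": 1, "proc_time": 2},
        {"id": "c", "machine": 2, "proc_time": 4}]
print(schedule(jobs))   # -> {1: ['b', 'a'], 2: ['c']}
```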
Abstract:
FCM: Biochemistry I course unit (UC) - PhD Thesis
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors—such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one; for this case, we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P with that of a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
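To make the parameter α and the two speedup bounds concrete, here is a small sketch under an invented task set; the utilizations are assumptions for illustration only.

```python
# alpha is defined in the abstract as the maximum task utilization among
# those no greater than 1. The task utilizations below are illustrative.
def alpha(utilizations):
    eligible = [u for u in utilizations if u <= 1.0]
    return max(eligible)

utils = [0.3, 0.75, 0.9, 1.4]      # hypothetical implicit-deadline tasks
a = alpha(utils)                   # -> 0.9
print("SA speedup bound:  ", 1 + a / 2)   # 1.45 (intra-migrative)
print("SA-P speedup bound:", 1 + a)       # 1.9  (non-migrative)
```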
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors—such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, where the same restriction on task migration applies (intra-migrative), but given a platform in which only one processor of each type is 1 + α·(t−1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state-of-the-art.
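Spelled out as display equations (the reconstruction of the garbled speedup factor is ours), the bounds read as follows; as a consistency check, setting t = 2 reduces the intra-migrative factor to 1 + α/2, the SA bound from the previous abstract.

```latex
\[
  s_{\text{intra}}(t) \;=\; 1 + \alpha\,\frac{t-1}{t},
  \qquad
  s_{\text{non-mig}} \;=\; 1 + \alpha,
  \qquad
  \alpha \;=\; \max\{\,u_i : u_i \le 1\,\}.
\]
% For t = 2: s_intra = 1 + alpha/2, matching SA's bound above.
```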
Abstract:
Hand-off (or hand-over), the process by which mobile nodes select the best access point available for transferring data, has been well studied in wireless networks. The performance of a hand-off process depends on the specific characteristics of the wireless links. In the case of low-power wireless networks, hand-off decisions must be taken carefully, considering the unique properties of inexpensive low-power radios. This paper addresses the design, implementation and evaluation of smart-HOP, a hand-off mechanism tailored for low-power wireless networks. This work has three main contributions. First, it formulates the hard hand-off process for low-power networks (such as typical wireless sensor networks—WSNs) with a probabilistic model, to investigate the impact of the most relevant channel parameters through an analytical approach. Second, it confirms the probabilistic model through simulation and further elaborates on the impact of several hand-off parameters. Third, it fine-tunes the most relevant hand-off parameters via an extended set of experiments, in a realistic experimental scenario. The evaluation shows that smart-HOP performs well in the transitional region, achieving a relative delivery ratio above 98 percent and hand-off delays on the order of a few tens of milliseconds.
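A hedged sketch of the threshold-with-hysteresis rule at the heart of hard hand-off schemes of this kind: monitor the serving access point and, only when its signal degrades, switch to a candidate that is better by a safety margin. The dBm values are illustrative assumptions, not smart-HOP's tuned parameters.

```python
# Generic hard hand-off rule with hysteresis, in the spirit of (but not
# identical to) smart-HOP: stay on the serving AP while its RSSI is good;
# below a threshold, switch to the best candidate exceeding it by a margin.
RSSI_LOW = -90.0       # start looking for a new AP below this (dBm)
HYSTERESIS = 5.0       # candidate must beat the serving AP by this margin

def handoff_decision(serving_rssi: float, candidates: dict[str, float]):
    """Return the AP id to switch to, or None to stay put."""
    if serving_rssi >= RSSI_LOW:
        return None                          # link still reliable: no scan
    best_ap, best_rssi = max(candidates.items(), key=lambda kv: kv[1])
    if best_rssi >= serving_rssi + HYSTERESIS:
        return best_ap                       # clearly better: hand off
    return None                              # avoid ping-pong hand-offs

print(handoff_decision(-93.0, {"ap1": -85.0, "ap2": -91.0}))  # -> "ap1"
```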
Abstract:
Fast Field-Cycling Nuclear Magnetic Resonance (FFC-NMR) is a technique used to study the molecular dynamics of different types of materials. The main elements of this equipment are a magnet and its power supply. The magnet used as reference in this work is basically a ferromagnetic core with two sets of coils and an air gap where the material sample is placed. The power supply feeds the magnet, with the magnet current controlled so as to perform the field cycles. One of the technical issues of this type of solution is the compensation of the non-linearities associated with the magnetic characteristic of the magnet and with parasitic magnetic fields. To overcome this problem, this paper describes and discusses a solution for the FFC-NMR power supply based on a four-quadrant DC/DC converter.
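As a hedged illustration of the current loop such a supply must implement (not the paper's converter design), the sketch below simulates a discrete PI controller with anti-windup driving a simple R-L magnet model through a field cycle; all component values and gains are assumptions.

```python
# Discrete PI control of the magnet current through an R-L load model: a
# toy stand-in for the four-quadrant converter's current loop described
# above. R, L, gains and the cycle profile are illustrative assumptions.
R, L = 0.5, 0.05          # ohms, henries (toy magnet model)
dt = 1e-4                 # control period in seconds
kp, ki = 20.0, 2000.0     # PI gains (untuned assumptions)
V_MAX = 40.0              # four-quadrant output limit: +/- V_MAX

def run_cycle(setpoints):
    current, integ, trace = 0.0, 0.0, []
    for i_ref in setpoints:
        err = i_ref - current
        u = kp * err + ki * integ
        v = max(-V_MAX, min(V_MAX, u))     # clamp converter output voltage
        if v == u:                         # anti-windup: integrate only
            integ += err * dt              # when the output is not clamped
        current += dt * (v - R * current) / L  # Euler step of L di/dt = v - R*i
        trace.append(current)
    return trace

# Field cycle: high-field plateau, then a fast switch to a low-field level
# (in the air gap, current is the proxy for the magnetic field).
cycle = [10.0] * 2000 + [2.0] * 2000
trace = run_cycle(cycle)
print(round(trace[1999], 2), round(trace[-1], 2))   # ~10.0 then ~2.0
```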
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are the vertices of a simplex. The algorithm complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
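A simplified sketch of the geometric idea: repeatedly project the data onto a direction orthogonal to the span of the endmembers found so far and take the extreme projection as the next vertex. This follows the general shape of VCA but omits its SNR-dependent dimensionality reduction; the simulated mixing matrix and abundances are assumptions.

```python
# Simplified endmember extraction built on the geometric fact the abstract
# cites (endmembers are vertices of a simplex). Not the full VCA algorithm.
import numpy as np

def extract_endmembers(X: np.ndarray, p: int) -> np.ndarray:
    """X: (bands, n) matrix of pixel spectra; returns p endmember columns."""
    rng = np.random.default_rng(1)
    E = np.zeros((X.shape[0], p))
    E[:, 0] = X[:, np.argmax(np.linalg.norm(X, axis=0))]  # initial vertex
    for k in range(1, p):
        A = E[:, :k]
        # Direction orthogonal to span(A): project a random vector out.
        P = np.eye(X.shape[0]) - A @ np.linalg.pinv(A)
        d = P @ rng.normal(size=X.shape[0])
        scores = d @ X                               # project every pixel
        E[:, k] = X[:, np.argmax(np.abs(scores))]    # extreme point = vertex
    return E

# Simulated data: 3 endmembers mixed with random abundances on the simplex.
M = np.array([[1.0, 0.0, 0.2], [0.1, 1.0, 0.3], [0.0, 0.2, 1.0]])
a = np.random.default_rng(2).dirichlet(np.ones(3), size=500).T  # abundances
print(extract_endmembers(M @ a, p=3).round(2))
```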