939 results for Circuit analysis computing
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probabilistic distribution functions, rather than by minimum inter-arrival time (MIT) values.
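The abstract does not detail the computation itself, but the basic ingredient of working with discrete inter-arrival distributions can be illustrated with a minimal Python sketch that convolves discrete probability mass functions; the function names and the example distribution below are hypothetical placeholders, not the paper's algorithm.

```python
from collections import defaultdict

def convolve(pmf_a, pmf_b):
    """Convolve two discrete PMFs given as {value: probability} dicts."""
    out = defaultdict(float)
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] += pa * pb
    return dict(out)

# Hypothetical discrete inter-arrival time distribution (time units: ticks).
inter_arrival = {10: 0.7, 15: 0.2, 25: 0.1}

# Distribution of the release time of the 3rd job after the first one.
release_3rd = convolve(convolve(inter_arrival, inter_arrival), inter_arrival)
print(sorted(release_3rd.items()))
```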
Abstract:
Physical computing has sparked a true global revolution in the way the digital world interfaces with the real world. From bicycle jackets with turn-signal lights to Twitter-controlled Christmas trees, the Do-it-Yourself (DiY) hardware movement has been driving endless innovations and stimulating an age of creative engineering. This ongoing (r)evolution has been led by popular electronics platforms such as the Arduino, the LilyPad, or the Raspberry Pi; however, these are not designed with the specific requirements of biosignal acquisition in mind. To date, the physiological computing community has lacked a parallel to what is found in the DiY electronics realm, especially regarding suitable hardware frameworks. In this paper, we build on previous work developed within our group, focusing on an all-in-one, low-cost, and modular biosignal acquisition hardware platform that makes it quicker and easier to build biomedical devices. We describe the main design considerations, the experimental evaluation and circuit characterization results, together with the results of a usability study performed with volunteers from multiple target user groups, namely health sciences and electrical, biomedical, and computer engineering. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
Abstract:
Floating-point computing with more than one TFLOPS of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms from different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs become competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, and for combinational logic and parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
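As a back-of-the-envelope illustration of the kind of peak-versus-sustained comparison discussed above, the short sketch below computes theoretical peak throughput and an efficiency ratio; the device figures are made-up placeholders, not data from the paper.

```python
def peak_gflops(units, clock_ghz, flops_per_cycle_per_unit):
    """Theoretical peak = processing units x clock x FLOPs issued per cycle per unit."""
    return units * clock_ghz * flops_per_cycle_per_unit

# Hypothetical devices, for illustration only.
fpga_peak = peak_gflops(units=1500, clock_ghz=0.25, flops_per_cycle_per_unit=2)  # DSP blocks
gpu_peak = peak_gflops(units=2048, clock_ghz=1.0, flops_per_cycle_per_unit=2)    # FMA cores

sustained = 900.0  # hypothetical measured GFLOPS for some kernel on the GPU
print(f"FPGA peak: {fpga_peak:.0f} GFLOPS, GPU peak: {gpu_peak:.0f} GFLOPS")
print(f"GPU efficiency on this kernel: {sustained / gpu_peak:.1%}")
```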
Abstract:
The three-dimensional (3D) exact solutions developed in the early 1970s by Pagano for simply supported multilayered orthotropic composite plates, and later extended in the 1990s to piezoelectric plates by Heyliger, have been extremely useful in the assessment and development of advanced laminated plate theories and related finite element models. In fact, the well-known test cases provided by Pagano and by Heyliger in those earlier works are still used today as benchmark solutions. However, the limited number of test cases whose 3D exact solutions have been published has somewhat restricted the assessment of recent advanced models to the same few test cases. This work aims to provide additional test cases to serve as benchmark exact solutions for the static analysis of multilayered piezoelectric composite plates. The method introduced by Heyliger to derive the 3D exact solutions has been successfully implemented using symbolic computing, and a number of new test cases are presented here in detail. Specifically, two multilayered plates using PVDF piezoelectric material are selected as test cases under two different loading conditions and considering three plate aspect ratios for thick, moderately thick and thin plates, in a total of 12 distinct test cases. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
The increasing complexity of VLSI circuits and the reduced accessibility of modern packaging and mounting technologies restrict the usefulness of conventional in-circuit debugging tools, such as in-circuit emulators for microprocessors and microcontrollers. However, this same trend enables the development of more complex products, which in turn require more powerful debugging tools. These conflicting demands could be met if the standard scan test infrastructures now common in most complex components were able to match the debugging requirements of design verification and prototype validation. This paper analyses the main debug requirements in the design of microprocessor-based applications and the feasibility of their implementation using the mandatory, optional and additional operating modes of the standard IEEE 1149.1 test infrastructure.
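As a loose illustration of how debug software might drive the IEEE 1149.1 test access port mentioned above, the sketch below bit-bangs the TAP state machine to shift an instruction into the instruction register. The `set_pins`/`read_tdo` callables are hypothetical stand-ins for whatever GPIO or adapter interface is actually used, and the instruction width and opcode are device-specific; this is a generic sketch, not the paper's method.

```python
# Hedged sketch: loading an instruction through an IEEE 1149.1 (JTAG) TAP.
# set_pins(tms=..., tdi=...) applies pin values and pulses TCK; read_tdo() samples TDO.

def clock_tms(set_pins, bits):
    """Advance the TAP state machine by clocking a sequence of TMS values."""
    for tms in bits:
        set_pins(tms=tms, tdi=0)

def shift_ir(set_pins, read_tdo, opcode_bits):
    """Shift an instruction (LSB first) into the instruction register."""
    clock_tms(set_pins, [1, 1, 1, 1, 1])   # force Test-Logic-Reset
    clock_tms(set_pins, [0, 1, 1, 0, 0])   # Run-Test/Idle -> Select-DR -> Select-IR -> Capture-IR -> Shift-IR
    captured = []
    for i, bit in enumerate(opcode_bits):
        last = (i == len(opcode_bits) - 1)
        set_pins(tms=1 if last else 0, tdi=bit)  # TMS=1 on the last bit exits to Exit1-IR
        captured.append(read_tdo())
    clock_tms(set_pins, [1, 0])            # Update-IR -> Run-Test/Idle
    return captured

# Example with dummy stubs (no real hardware attached); all-ones selects BYPASS on a 4-bit IR.
trace = []
shift_ir(lambda tms, tdi: trace.append((tms, tdi)), lambda: 0, [1, 1, 1, 1])
```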
Abstract:
One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring the correctness of transaction recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to transaction recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from faults, as correct compositions of compensable processes. Moreover, we develop an automated verification approach and apply it to an illustrative case study.
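The paper works at the level of a process calculus, but the underlying idea of compensation-based recovery can be sketched operationally as follows: each completed activity installs a compensation, and on a fault the installed compensations run in reverse order. This is a hedged, generic sketch (sometimes called a saga-style pattern), not the calculus or verification approach of the paper; all names are illustrative.

```python
class CompensableRun:
    """Run a sequence of (activity, compensation) pairs as one long-running transaction."""

    def __init__(self):
        self.installed = []  # compensations of the activities completed so far

    def run(self, steps):
        try:
            for activity, compensation in steps:
                activity()
                self.installed.append(compensation)
        except Exception:
            # Fault detected: recover by running installed compensations in reverse order.
            for compensation in reversed(self.installed):
                compensation()
            raise

def fail_payment():
    raise RuntimeError("payment failed")

# Hypothetical usage: book a flight and a hotel; if payment fails, undo both.
steps = [
    (lambda: print("book flight"), lambda: print("cancel flight")),
    (lambda: print("book hotel"), lambda: print("cancel hotel")),
    (fail_payment, lambda: None),
]
try:
    CompensableRun().run(steps)
except RuntimeError as exc:
    print("transaction aborted:", exc)
```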
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to compute the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
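Neither BP nor BPC is reproduced in the abstract; as a very rough illustration of what a contention-based traversal-time bound looks like, the sketch below charges, for each higher-priority flow that shares a link with the packet's route, one extra blocking transmission. This is a deliberately simplified model under assumptions that are not taken from the paper; all parameter names and values are hypothetical.

```python
def wctt_upper_bound(flow, flows, link_latency):
    """Crude upper bound: own pipelined transmission plus direct contention on shared links.

    flow / flows: dicts with 'route' (list of link ids), 'size' (flits), 'priority'
    (larger number = higher priority); link_latency: cycles to forward one flit over one link.
    """
    bound = (len(flow["route"]) + flow["size"]) * link_latency  # wormhole-style pipelining
    for other in flows:
        if other is flow or other["priority"] <= flow["priority"]:
            continue
        if set(flow["route"]) & set(other["route"]):
            bound += other["size"] * link_latency  # one blocking transmission per contender
    return bound

# Hypothetical flows on a small NoC (link ids are arbitrary labels).
flows = [
    {"name": "f0", "route": ["a", "b", "c"], "size": 8, "priority": 2},
    {"name": "f1", "route": ["b", "c", "d"], "size": 4, "priority": 3},
]
print(wctt_upper_bound(flows[0], flows, link_latency=1))
```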
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
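The linear mixing model and the Dirichlet abundance prior described above can be illustrated with a small simulation; the endmember signatures, dimensions and noise level below are made up for illustration, and the snippet does not implement the DECA inference algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember signatures: 3 materials observed in 50 spectral bands.
n_endmembers, n_bands, n_pixels = 3, 50, 1000
M = rng.uniform(0.1, 0.9, size=(n_endmembers, n_bands))  # endmember signatures

# Abundance fractions drawn from a Dirichlet density: non-negative, summing to one per pixel.
A = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=n_pixels)   # shape (n_pixels, n_endmembers)

# Linear mixture per pixel, plus sensor noise.
X = A @ M + rng.normal(scale=0.01, size=(n_pixels, n_bands))

print(X.shape, A.sum(axis=1)[:5])  # abundances sum to ~1 for every pixel
```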
Abstract:
Lunacloud is a cloud service provider with offices in Portugal, Spain, France and the UK that focuses on delivering reliable, elastic and low-cost cloud Infrastructure as a Service (IaaS) solutions. The company currently relies on a proprietary IaaS platform - the Parallels Automation for Cloud Infrastructure (PACI) - and wishes to expand and integrate other IaaS solutions seamlessly, namely open source solutions. This is the challenge addressed in this thesis. This proposal, which was fostered by the Eurocloud Portugal Association, contributes to the promotion of interoperability and standardisation in Cloud Computing. The goal is to investigate, propose and develop an interoperable open source solution with standard interfaces for the integrated management of IaaS Cloud Computing resources, based on new as well as existing abstraction libraries or frameworks. The solution should provide both Web and application programming interfaces. The research conducted consisted of two surveys covering existing open source IaaS platforms and PACI (features and API) and open source IaaS abstraction solutions. The first study focussed on the characteristics of the most popular open source IaaS platforms, namely OpenNebula, OpenStack, CloudStack and Eucalyptus, as well as PACI, and included a thorough inventory of the provided Application Programming Interfaces (API), i.e., offered operations, followed by a comparison of these platforms in order to establish their similarities and dissimilarities. The second study, on existing open source interoperability solutions, included the analysis and comparison of existing abstraction libraries and frameworks. The approach proposed and adopted, which was supported by the conclusions of the surveys carried out, reuses an existing open source abstraction solution - the Apache Deltacloud framework. Deltacloud relies on the development of software driver modules to interface with different IaaS platforms, officially provides and supports drivers for sixteen IaaS platforms, including OpenNebula and OpenStack, and allows the development of new provider drivers. The latter functionality was used to develop a new Deltacloud driver for PACI. Furthermore, Deltacloud provides a Web dashboard and REpresentational State Transfer (REST) API interfaces. To evaluate the adopted solution, a test bed integrating OpenNebula, OpenStack and PACI nodes was assembled and deployed. The tests conducted involved elapsed time and data payload measurements via the Deltacloud framework as well as via the pre-existing IaaS platform APIs. The Deltacloud framework behaved as expected, i.e., it introduced additional delays, but no substantial overheads. Both the Web and the REST interfaces were tested and showed identical measurements. The developed interoperable solution for the seamless integration and provision of IaaS resources from the PACI, OpenNebula and OpenStack IaaS platforms fulfils the specified requirements, i.e., it provides Lunacloud with the ability to expand the range of adopted IaaS platforms and offers a Web dashboard and REST API for their integrated management. The contributions of this work include the surveys and comparisons made, the selection of the abstraction framework and, last but not least, the PACI driver developed.
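As an illustration of the kind of uniform REST interface mentioned above, the sketch below lists instances through a Deltacloud server assumed to be running locally; the endpoint, port, credentials and JSON field names are assumptions for illustration only, not details taken from the thesis.

```python
import requests  # pip install requests

DELTACLOUD_API = "http://localhost:3001/api"  # assumed local Deltacloud endpoint

resp = requests.get(
    f"{DELTACLOUD_API}/instances",
    headers={"Accept": "application/json"},
    auth=("user", "password"),  # placeholder back-end credentials
)
resp.raise_for_status()
for inst in resp.json().get("instances", []):
    print(inst.get("id"), inst.get("state"))
```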
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Harnessing the idle CPU cycles, storage space and other resources of networked computers for collaborative work is the main focus of all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and it is easy to observe that these machines lie idle most of the time, wasting their computing power. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users who can afford such data analysis tasks. Instead of using single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same work. The BOINC and Condor projects have been successfully used to support real scientific research around the world at low cost. The main goal of this work is to explore both distributed computing frameworks, Condor and BOINC, and to use their potential to harness idle PC resources for academic researchers to use in their work. In this thesis, data mining tasks were performed by implementing several machine learning algorithms on the distributed computing environment.
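As an illustration of how such data mining jobs could be farmed out to idle lab machines with Condor (HTCondor), the sketch below generates a minimal submit description and hands it to the standard `condor_submit` tool; the executable name, input-file naming and job count are hypothetical placeholders, and a working HTCondor pool is assumed.

```python
import subprocess

# Hedged sketch: a minimal HTCondor submit description generated from Python.
submit_description = """universe   = vanilla
executable = run_datamining.sh
arguments  = part_$(Process).csv
output     = out.$(Process)
error      = err.$(Process)
log        = jobs.log
queue 10
"""

with open("datamining.sub", "w") as f:
    f.write(submit_description)

# condor_submit is HTCondor's standard submission tool; this assumes a configured pool.
subprocess.run(["condor_submit", "datamining.sub"], check=True)
```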
Abstract:
Paper presented at CAPSI 2011 - 11th Conference of the Associação Portuguesa de Sistemas de Informação (Information Management in the Cloud Computing Era), Lisbon, ISEG/IUL-ISCTE, 19-21 October 2011.
Abstract:
6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.
Abstract:
Dissertation submitted to obtain the degree of Doctor in Chemistry.