Abstract:
The new generations of SRAM-based FPGA (field-programmable gate array) devices are the preferred choice for the implementation of reconfigurable computing platforms intended to accelerate processing in real-time systems. However, the vulnerability of FPGAs to hard and soft errors is a major weakness for robust reconfigurable system design. In this paper, a novel built-in self-healing (BISH) methodology, based on run-time self-reconfiguration, is proposed. A soft microprocessor core implemented in the FPGA is responsible for the management and execution of all the BISH procedures. Fault detection and diagnosis is followed by repair actions, taking advantage of the dynamic reconfiguration features offered by new FPGA families. Meanwhile, modular redundancy ensures that the system keeps working correctly.
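Since the abstract gives no implementation details, the following is only a minimal sketch of the kind of supervisory self-healing loop a soft processor core could run: detect a fault, diagnose it, repair it through partial reconfiguration and verify the result, while redundancy keeps the outputs correct in the meantime. The Module type and the detect/diagnose/reconfigure/verify callables are hypothetical placeholders, not the paper's API.

```python
# Illustrative sketch only (not the paper's implementation) of a supervisory
# self-healing loop of the kind a soft processor could run. Fault detection,
# diagnosis and repair are modelled as plain callables so the sketch is
# self-contained; on a real device they would wrap readback/scrubbing and
# partial-reconfiguration primitives.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Module:
    name: str
    region: int            # reconfigurable region currently hosting the module

def self_healing_cycle(
    modules: list[Module],
    detect: Callable[[Module], Optional[str]],   # returns a fault description or None
    diagnose: Callable[[Module, str], int],      # returns the faulty configuration frame
    reconfigure: Callable[[int], None],          # rewrites one frame via partial reconfiguration
    verify: Callable[[Module], bool],
) -> None:
    for module in modules:
        fault = detect(module)
        if fault is None:
            continue
        frame = diagnose(module, fault)
        reconfigure(frame)        # repair action; redundant copies mask the fault meanwhile
        if not verify(module):
            module.region += 1    # crude fallback: relocate the module to a spare region
            reconfigure(module.region)
```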
Abstract:
Master's degree in Auditing
Abstract:
Final Master's project submitted for the degree of Master in Maintenance Engineering
Abstract:
The social, cultural, technological, or purely virtual changes we are witnessing are indisputable and perhaps irreversible. The pace of technological evolution allows no pauses. The growing density of networks and the speed of information flows increase the complexity of analysis, while response times keep shrinking. The transition from information to knowledge shows an ever stronger dynamic, accelerating the inputs that influence or dominate the social, political and symbolic practices of life. This article analyses concepts such as "information" and "knowledge" and other, more geographical ones, such as "place" and "space", and asks how their dynamics can influence the territory, which ceases to be (only) real and becomes virtual as well. The Knowledge Society emerges from the Information Society, in the evolutionary context of the "data-information-knowledge-wisdom" value chain, where the high technological potential goes beyond the traditional notions of Geography. To help understand the changes observed in the territory, explaining their causes and consequences, the Geography of the Knowledge Society has emerged: a branch of Geography devoted to analysing the socio-economic development of modern society.
Abstract:
The accuracy of the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) measurements is insufficient for many outdoor navigation tasks. As a result, in the late nineties, a new methodology – the Differential GPS (DGPS) – was developed. The differential approach is based on the calculation and dissemination of the range errors of the received GPS satellites. GPS/DGPS receivers correlate the broadcast GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that establishes Internet links with remote DGPS sources and performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate (sub-metric) outdoor navigation tasks.
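As the abstract does not specify transports or interfaces, the sketch below only illustrates the middle tier of such a relay under assumed conditions: a campus-side provider pulls the raw DGPS correction stream from a remote data server over plain TCP and re-serves it to local clients. The host name, port numbers and transport are made up for the example.

```python
# Minimal sketch of the three-tier relay idea, not the system described in the
# paper: pull DGPS correction bytes from a remote data server and re-serve them
# to local (Intranet/wireless) clients. Host, ports and plain TCP are assumptions.

import socket
import threading

REMOTE_DGPS_SOURCE = ("dgps.example.org", 2101)   # hypothetical remote data server
CAMPUS_BIND = ("0.0.0.0", 5000)                   # campus-wide provider endpoint

clients: list[socket.socket] = []
clients_lock = threading.Lock()

def accept_clients(server: socket.socket) -> None:
    while True:
        conn, _addr = server.accept()
        with clients_lock:
            clients.append(conn)

def relay() -> None:
    server = socket.create_server(CAMPUS_BIND)
    threading.Thread(target=accept_clients, args=(server,), daemon=True).start()
    with socket.create_connection(REMOTE_DGPS_SOURCE) as upstream:
        while True:
            corrections = upstream.recv(4096)     # raw DGPS correction stream
            if not corrections:
                break
            with clients_lock:
                for c in list(clients):
                    try:
                        c.sendall(corrections)    # campus-wide dissemination
                    except OSError:
                        clients.remove(c)

if __name__ == "__main__":
    relay()
```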
Abstract:
In a real-world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques as the support for conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher-hierarchy agent, the Specialist agent, which makes use of meta-knowledge (on how the conflicting beliefs have been produced by the other agents) in order to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems. We find this approach generally applicable in several domains.
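Purely as an illustration of the coordination pattern (not the CERN system itself), the sketch below shows a Specialist-style agent that inspects contradictory beliefs together with meta-knowledge about how strongly each was supported and asks the agent with the weakest support to retract. The Belief structure and the credibility score are assumptions made for this example.

```python
# Illustrative sketch of a Specialist agent resolving conflicting beliefs using
# meta-knowledge about their derivation. The credibility model is hypothetical.

from dataclasses import dataclass

@dataclass
class Belief:
    agent: str
    proposition: str
    holds: bool          # truth value assigned by the agent
    support: float       # meta-knowledge: credibility of the derivation, in [0, 1]

def resolve_conflicts(beliefs: list[Belief]) -> list[tuple[str, str]]:
    """Return (agent, proposition) pairs the Specialist asks to be retracted."""
    retractions = []
    by_prop: dict[str, list[Belief]] = {}
    for b in beliefs:
        by_prop.setdefault(b.proposition, []).append(b)
    for prop, group in by_prop.items():
        if len({b.holds for b in group}) > 1:          # contradictory assignments
            weakest = min(group, key=lambda b: b.support)
            retractions.append((weakest.agent, prop))  # instruct this agent to revise
    return retractions
```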
Abstract:
Internship report submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the Master's degree in Theatre, specialisation in Theatre and Community.
Abstract:
Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the guarantee that if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4×(1 + MAXP×⌈(|P|×MAXP) / min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case that each task requests at most one resource, the bound of LP-EE-vpr collapses to 4×(1 + ⌈|R| / min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
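To make the quoted speedup factors concrete, the short computation below evaluates both bounds for assumed parameters; the values chosen for m_1, m_2, MAXP, |P| and |R| are illustrative only and not taken from the paper.

```python
# Illustrative evaluation of the quoted speedup bounds for assumed parameters
# (t = 2 processor types with m1 = 4 and m2 = 8 processors); the values of
# MAXP, |P| and |R| below are made up for the example, not taken from the paper.

from math import ceil

m = [4, 8]                 # processors of each type
MAXP, P_size, R_size = 2, 3, 3

general = 4 * (1 + MAXP * ceil((P_size * MAXP) / min(m)))   # general bound
single  = 4 * (1 + ceil(R_size / min(m)))                   # one resource per task

print(general)   # 4 * (1 + 2 * ceil(6/4)) = 4 * 5 = 20
print(single)    # 4 * (1 + ceil(3/4))     = 4 * 2 = 8
```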
Abstract:
Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make a low processor utilization available to the system. Moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1) ≈ 0.83. Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and to an exact but exponential running time test.
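The simplest check suggested by this guarantee (and deliberately not one of the paper's seven tests) is to compare the total utilization of the urgent routines and EDF tasks against the quoted bound, as in the sketch below; the task parameters in the example are made up.

```python
# Simplest possible sufficient check implied by the quoted guarantee (not one of
# the paper's seven tests): if the total utilization of the urgent routines plus
# the EDF-scheduled tasks does not exceed 2*(sqrt(2)-1), report schedulable.

from math import sqrt

GUARANTEED_BOUND = 2 * (sqrt(2) - 1)   # ≈ 0.828, RM's bound for two tasks

def naive_utilization_test(wcets_and_periods: list[tuple[float, float]]) -> bool:
    """Each entry is (worst-case execution time, period) of a task or routine."""
    total_utilization = sum(c / t for c, t in wcets_and_periods)
    return total_utilization <= GUARANTEED_BOUND

# Example: utilizations 0.4 + 0.4 pass the check; adding another 0.1 fails it.
print(naive_utilization_test([(2, 5), (4, 10)]))           # True  (U = 0.8)
print(naive_utilization_test([(2, 5), (4, 10), (1, 10)]))  # False (U = 0.9)
```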
Abstract:
The interest of the study on the implementation of expanded agglomerated cork as an exterior wall covering derives from two critical factors in a perspective of sustainable development: the use of a product consisting of a renewable natural material, cork, and the concern to contribute to greater sustainability in construction. The study aims to assess the feasibility of its use by analyzing the corresponding behaviour under different conditions. Since this application is relatively recent, only about ten years old, there is still much to learn about the reliability of its long-term properties. In this context, this study aims to deepen and approach aspects, some of them poorly studied or even unknown, related to the characteristics that make the agglomerate a good choice for exterior wall covering. The analysis of these and other characteristics is being performed by testing both under actual exposure conditions, in an experimental cell at LNEC, and in the laboratory. In this paper the main laboratory tests are presented and the obtained results are compared with the outcome of the field study.
Abstract:
This text presents the fundamental concepts of real functions of more than two variables. These slides complement the Matemática II classes on this topic, for students of the undergraduate degree in Management.
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of data of high spectral resolution of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually done on a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA fabric, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
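As a software-level illustration of the VCA idea only (successive orthogonal projections selecting extreme pixels), and not of the FPGA architecture proposed in the paper, a much-simplified sketch follows; the synthetic data and parameter choices are assumptions for the example.

```python
# Much-simplified sketch of the VCA idea (successive orthogonal projections):
# at each step, project all pixels onto a direction orthogonal to the endmembers
# found so far and keep the pixel with the largest absolute projection.

import numpy as np

def simplified_vca(pixels: np.ndarray, num_endmembers: int, seed: int = 0) -> np.ndarray:
    """pixels: (num_pixels, num_bands) hyperspectral data; returns endmember spectra."""
    rng = np.random.default_rng(seed)
    num_bands = pixels.shape[1]
    endmembers = np.zeros((num_endmembers, num_bands))
    for i in range(num_endmembers):
        # Random direction, made orthogonal to the subspace of the current endmembers.
        w = rng.standard_normal(num_bands)
        if i > 0:
            basis, _ = np.linalg.qr(endmembers[:i].T)   # orthonormal basis of that subspace
            w -= basis @ (basis.T @ w)
        w /= np.linalg.norm(w)
        projections = pixels @ w                        # score every pixel
        endmembers[i] = pixels[np.argmax(np.abs(projections))]
    return endmembers

# Usage on synthetic data: 1000 pixels, 50 bands, extract 5 endmembers.
data = np.abs(np.random.default_rng(1).standard_normal((1000, 50)))
print(simplified_vca(data, 5).shape)   # (5, 50)
```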
Abstract:
Proceedings of the 17th Congress of the International Association for the History of Glass
Abstract:
Dissertation submitted to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the Master's degree in Audiovisual and Multimedia.