927 results for "timing constraint"
Abstract:
Prototype validation is a major concern in modern electronic product design and development. Simulation, structural test, and functional and timing debug all form part of the validation process, although they are very often addressed as dissociated tasks. In this paper we describe an integrated approach to board-level prototype validation, based on a set of mandatory/optional BST instructions and a built-in controller for debug and test, that addresses the aforementioned tasks as inherent parts of a whole process.
Abstract:
Remote laboratories are an emergent technological and pedagogical tool at all education levels, and their widespread use is an important part of their own improvement and evolution. This paper describes several issues encountered in laboratory classes of higher education courses when using remote laboratories based on PXI systems, either the VISIR system or an alternative in-house solution. Three main issues are presented and explained, all reported by teachers who supported students' use of remote laboratories. The first issue deals with the need to allow students to select the actual place where an ammeter is to be inserted in an electric circuit, even incorrectly, thereby emulating real-world difficulties. The second deals with timing problems when several measurements are required at short intervals, as in the discharge cycle of a capacitor. The last issue deals with the use of a multimeter in DC mode to read AC values, a use that collides with the lab settings. All scenarios are presented and discussed, including the solution found for each case. The conclusion drawn from the described work is that remote laboratories are an expanding field in which practical use leads to the improvement and evolution of the available solutions, requiring close cooperation and information sharing between all actors, i.e. developers, teachers and students.
Abstract:
Thesis submitted for assessment with a view to obtaining the degree of Doctor of Political and Social Sciences of the European University Institute.
Abstract:
Dissertation presented to obtain the degree of Doctor in Biochemistry, Structural Biochemistry, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
In a context where the provision of physical therapy and rehabilitation care is identified as showing a regional mismatch and inequality of supply greater than that of other health care, as well as a lack of adequacy of the prices charged given the current conditions of supply and demand, the present work aims to investigate, in the field of performance, the influence of financing on the shaping of this care provision. The general assumption underlying the analysis is that the strategic decisions and productive restructuring of health care organizations are conditioned by the price system. The current financing/payment system is considered to constrain the quality of the response of this care at two levels: a first level, by placing its payment under the Supplementary Means of Diagnosis and Therapy (MCDTs) contracted by the National Health Service, thereby determining the organizational configuration of the system; and a second level of constraint acting on the structures of the provider organizations, through the modelling it induces, namely at the level of their production. Since it was not possible to address both dimensions of the problem, owing to the lack of performance indicators for this sector, the study analysed, with respect to the second level of constraint, the physical therapy production of three organizations that would potentially have the same supply profile, as they respond to a similar demand profile. The results reflect the general assumption underlying the work and open space for a future research hypothesis on the cause(s) that may underlie the discrepancy found in the average number of treatments per session (a factor of two and a half) between the production of the two organizations that could be compared.
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Performing Arts (Acting).
Abstract:
The accuracy of the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) measurements is insufficient for many outdoor navigation tasks. As a result, in the late nineties, a new methodology – the Differential GPS (DGPS) – was developed. The differential approach is based on the calculation and dissemination of the range errors of the GPS satellites received. GPS/DGPS receivers correlate the broadcasted GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that establishes Internet links with remote DGPS sources and performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate (submetric) outdoor navigation tasks.
Abstract:
Although the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) is, de facto, the standard positioning system used in outdoor navigation, it does not provide, per se, all the features required to perform many outdoor navigational tasks. The accuracy of the GPS measurements is the most critical issue. The quest for higher position-reading accuracy led to the development, in the late nineties, of the Differential Global Positioning System (DGPS). The differential GPS method detects the range errors of the GPS satellites received and broadcasts them. DGPS/GPS receivers correlate the DGPS data with the GPS satellite data they are receiving, granting users increased accuracy. DGPS data is broadcast using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to have access, within the ISEP campus, to DGPS correction data. To achieve this objective we designed and implemented a distributed system composed of two interconnected modules: a distributed application responsible for establishing the data link over the Internet between the remote DGPS stations and the campus, and the campus-wide DGPS data server application. The DGPS data Internet link is provided by a two-tier client/server distributed application in which the server side is connected to the DGPS station and the client side is located at the campus. The second module, the campus DGPS data server application, diffuses the DGPS data received at the campus via the Intranet and via a wireless data link. The wireless broadcast is intended for DGPS/GPS portable receivers equipped with an air interface, and the Intranet link is provided for DGPS/GPS receivers with just an RS232 DGPS data interface. While the DGPS data Internet link servers receive the DGPS data from the DGPS base stations and forward it to the DGPS data Internet link client, the client outputs the received DGPS data to the campus DGPS data server application. The distributed system is expected to provide adequate support for accurate (sub-metric) outdoor campus navigation tasks. This paper describes the overall distributed application in detail.
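The two abstracts above describe a client/server relay that takes DGPS correction data from a remote Internet source and redistributes it across the campus. As an illustration only, the following minimal Python sketch assumes the remote source streams raw correction frames over a plain TCP socket and simply forwards every frame to all connected campus receivers; the hostnames, ports and framing are hypothetical and not taken from the papers.

    # Minimal sketch of a DGPS correction relay, assuming the remote DGPS
    # source streams raw correction frames over a plain TCP socket.
    # Hostnames, ports and framing are illustrative, not from the papers.
    import socket
    import threading

    REMOTE_DGPS = ("dgps.example.org", 2101)   # hypothetical remote DGPS source
    CAMPUS_BIND = ("0.0.0.0", 5000)            # campus-side listening endpoint

    clients = []                               # connected campus receivers
    clients_lock = threading.Lock()

    def serve_campus_clients():
        """Accept campus GPS/DGPS receivers and keep their sockets for forwarding."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(CAMPUS_BIND)
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with clients_lock:
                clients.append(conn)

    def relay_corrections():
        """Read correction data from the remote source and forward it to every client."""
        with socket.create_connection(REMOTE_DGPS) as src:
            while True:
                chunk = src.recv(4096)
                if not chunk:
                    break                       # remote link closed
                with clients_lock:
                    for c in clients[:]:
                        try:
                            c.sendall(chunk)
                        except OSError:
                            clients.remove(c)   # drop dead campus connections

    if __name__ == "__main__":
        threading.Thread(target=serve_campus_clients, daemon=True).start()
        relay_corrections()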
Abstract:
This paper addresses the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, having bilateral contracts and being emission-constrained. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp-up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies.
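The abstract above describes a stochastic MILP with price scenarios, unit-commitment-style technical constraints and an emission-allowance constraint. As a much-simplified illustration of the kind of model involved (one unit, one deterministic price scenario, no start-up costs, ramps or forbidden zones), the sketch below uses the PuLP package; all numbers and parameter names are invented for the example.

    # Much-simplified, deterministic sketch of the self-scheduling idea:
    # one thermal unit, a single price scenario, and an emission cap.
    # All numbers and parameter names are illustrative, not from the paper.
    # Requires the PuLP package (pip install pulp).
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    T = range(6)                                   # hours in the horizon
    price = [42.0, 55.0, 61.0, 48.0, 70.0, 39.0]   # electricity price (EUR/MWh)
    pmin, pmax = 50.0, 200.0                       # technical limits (MW)
    var_cost = 36.0                                # variable cost (EUR/MWh)
    emis_rate = 0.8                                # tCO2 per MWh
    emis_cap = 700.0                               # emission allowances (tCO2)

    prob = LpProblem("self_scheduling", LpMaximize)
    u = {t: LpVariable(f"u_{t}", cat=LpBinary) for t in T}   # on/off status
    p = {t: LpVariable(f"p_{t}", lowBound=0.0) for t in T}   # output (MW)

    # Profit = energy revenue minus variable generation cost.
    prob += lpSum((price[t] - var_cost) * p[t] for t in T)

    for t in T:
        prob += p[t] <= pmax * u[t]                # output only when committed
        prob += p[t] >= pmin * u[t]                # minimum stable generation

    # Emission-constrained operation over the whole horizon.
    prob += lpSum(emis_rate * p[t] for t in T) <= emis_cap

    prob.solve()
    print([(t, u[t].value(), p[t].value()) for t in T])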
Abstract:
Nowadays, many real-time operating systems discretize time based on a system time unit. To take this behaviour into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (RTSS 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at the job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), incurring much lower scheduling overhead. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its number of preemptions, migrations and time spent taking scheduling decisions increase linearly as the time resolution of the system improves.
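Boundary-fair schedulers make scheduling decisions only at "boundaries": job arrival times and task deadlines. The sketch below only illustrates that boundary computation for a sporadic task set with assumed (run-time observed) release times; it is not the BF² allocation algorithm itself, and the task parameters are invented.

    # Minimal sketch of the "boundary" idea underlying BF-style schedulers:
    # scheduling decisions are taken only at job arrival times and task deadlines.
    # Task parameters and the observed release times are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class SporadicTask:
        wcet: int       # worst-case execution time, in system time units
        deadline: int   # relative deadline, in system time units
        min_inter: int  # minimal inter-arrival time

    def boundaries(releases, tasks, horizon):
        """Collect the instants where a boundary-fair scheduler re-evaluates
        its allocation: every observed job release and every absolute deadline."""
        points = {0, horizon}
        for task_id, times in releases.items():
            for r in times:
                points.add(r)
                points.add(r + tasks[task_id].deadline)
        return sorted(t for t in points if t <= horizon)

    tasks = {
        "t1": SporadicTask(wcet=2, deadline=5, min_inter=5),
        "t2": SporadicTask(wcet=3, deadline=8, min_inter=10),
    }
    # For sporadic tasks, release times only become known at run time.
    releases = {"t1": [0, 6, 12], "t2": [1, 11]}
    print(boundaries(releases, tasks, horizon=16))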
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, it is necessary to guarantee their timing properties (such as meeting deadlines). The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. For multiprocessor platforms, the interface should also convey information about the degree of parallelism. Several interface proposals have recently been put forward in various research works; however, those interfaces are either too complex to handle or too pessimistic. In this paper we propose the generalized multiprocessor periodic resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We then derive a method to compute the interface from the application specification. This method has been implemented in Matlab routines that are publicly available.
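As an informal illustration of what such a multiprocessor interface could look like, the sketch below assumes it can be summarised by a period, one cumulative budget per allowed parallelism level, and a couple of basic consistency checks. The field names and checks are assumptions made for the example; they are not the formal GMPR definition or the schedulability analysis from the paper.

    # Illustrative sketch of a GMPR-flavoured resource interface: a period and
    # one cumulative budget per parallelism level. Field names and checks are
    # assumptions for illustration, not the paper's formal model.
    from dataclasses import dataclass

    @dataclass
    class GMPRInterface:
        period: float            # replenishment period of the virtual platform
        budgets: list[float]     # budgets[k-1]: time guaranteed with parallelism <= k

        def max_parallelism(self) -> int:
            return len(self.budgets)

        def is_consistent(self) -> bool:
            """Basic sanity checks: budgets grow with the allowed parallelism and
            never exceed what k processors can supply over one period."""
            prev = 0.0
            for k, theta in enumerate(self.budgets, start=1):
                if theta < prev or theta > k * self.period:
                    return False
                prev = theta
            return True

    iface = GMPRInterface(period=10.0, budgets=[6.0, 10.0, 12.0])
    print(iface.max_parallelism(), iface.is_consistent())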
Abstract:
"Many-core" systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfil specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called "Branch and Prune" (BP). Our proposed method provides tighter, yet safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely "Branch, Prune and Collapse" (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP represent two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyse this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
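For intuition only, the sketch below shows the naive recursive-calculus style of bound that the abstract refers to: a packet's worst-case traversal time is bounded by its contention-free latency plus the full bounds of the packets it can directly contend with. The flow set, latencies and contention relation are invented, and this is deliberately the pessimistic baseline, not the proposed BP or BPC algorithms.

    # Deliberately naive sketch of the recursive-calculus intuition: a packet's
    # worst-case traversal time is bounded by its contention-free latency plus
    # the bounds of the packets it can directly contend with. The flow set and
    # latencies are illustrative assumptions; this is not the BP/BPC algorithm.
    from functools import lru_cache

    base_latency = {"f1": 4, "f2": 6, "f3": 3}   # contention-free latencies
    direct_contention = {                        # which flows can directly block which
        "f1": ["f2"],
        "f2": ["f3"],
        "f3": [],
    }

    @lru_cache(maxsize=None)
    def wctt_bound(flow: str) -> int:
        """Pessimistic upper bound: every directly contending packet is assumed
        to delay this one by its own full worst-case traversal time."""
        return base_latency[flow] + sum(wctt_bound(f) for f in direct_contention[flow])

    print({f: wctt_bound(f) for f in base_latency})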
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering.