843 results for Predictable routing
Abstract:
Stroke (cerebrovascular accident) is the leading cause of death and disability in Portugal. This research work therefore aims to identify and quantify the factors that contribute to the occurrence of a stroke (by type and with sequelae), to the length of hospital stay, and to potential therapeutic referral options for the patient after a stroke occurs. By identifying and quantifying these factors it becomes possible to draw the profile of an at-risk patient and to act on it through preventive measures, so as to minimise the personal, social and financial impact of this problem. To achieve this objective, a sample of individuals admitted in 2012 to the Stroke Unit of the Centro Hospitalar do Tâmega e Sousa was analysed. Of the cases analysed, 87.8% were ischaemic strokes and 12.2% haemorrhagic strokes; 58.9% of all cases presented sequelae. Hypertension, diabetes mellitus and high cholesterol stand out as clinical antecedents carrying a high risk. Smoking, together with alcoholism, plays a major role in amplifying the factors analysed above. It is concluded that the prevention of stroke and other cardiovascular diseases is important from school age onwards, with particular attention to the period before 36 years of age, when a sharp rise in occurrences begins to be observed. Investment in prevention and in the medical surveillance of citizens is a crucial factor in this period and can greatly reduce the associated medium- to long-term costs for all parties involved.
Abstract:
Dynamically reconfigurable systems have benefited from a new class of FPGAs recently introduced into the market, which allow partial and dynamic reconfiguration at run-time, enabling multiple independent functions from different applications to share the same device, swapping resources as needed. When the sequence of tasks to be performed is not predictable, resource allocation decisions have to be made on-line, fragmenting the FPGA logic space. A rearrangement may be necessary to get enough contiguous space to efficiently implement incoming functions, to avoid spreading their components and, as a result, degrading their performance. This paper presents a novel active replication mechanism for configurable logic blocks (CLBs), able to implement on-line rearrangements, defragmenting the available FPGA resources without disturbing those functions that are currently running.
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Reconfigurable computing experienced a considerable expansion in the last few years, due in part to the fast run-time partial reconfiguration features offered by recent SRAM-based Field Programmable Gate Arrays (FPGAs), which allowed the implementation in real time of dynamic resource allocation strategies, with multiple independent functions from different applications sharing the same logic resources in the spatial and temporal domains. However, when the sequence of reconfigurations to be performed is not predictable, the efficient management of the available logic space becomes the greatest challenge posed to these systems. Resource allocation decisions have to be made concurrently with system operation, taking into account function priorities and optimizing the space currently available. As a consequence of the unpredictability of this allocation procedure, the logic space becomes fragmented, with many small areas of free resources failing to satisfy most requests and so remaining unused. A rearrangement of the currently running functions is therefore necessary to obtain enough contiguous space to implement incoming functions, avoiding the spreading of their components and the resulting degradation of system performance. A novel active relocation procedure for Configurable Logic Blocks (CLBs) is herein presented, able to carry out online rearrangements, defragmenting the available FPGA resources without disturbing the functions currently running.
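To make the fragmentation problem concrete, the toy Python sketch below models the FPGA logic space as a one-dimensional row of CLB columns and shows how a rearrangement turns scattered free columns into one contiguous run. The left-compaction policy and all names here are illustrative assumptions, not the active relocation procedure proposed in the abstract, which keeps functions running while they move.

```python
# Toy model of the defragmentation problem: functions occupy contiguous
# runs of CLB columns; None marks a free column. A naive left-compaction
# stands in for the paper's more sophisticated on-line relocation.

def compact(layout):
    """Relocate all functions to the left, coalescing free columns."""
    occupied = [fn for fn in layout if fn is not None]
    return occupied + [None] * layout.count(None)

def largest_free_run(layout):
    """Length of the longest contiguous run of free columns."""
    best = run = 0
    for fn in layout:
        run = run + 1 if fn is None else 0
        best = max(best, run)
    return best

# Fragmented layout: 4 free columns in total, but no contiguous run of 3,
# so an incoming function needing 3 adjacent columns cannot be placed...
layout = ["A", None, "B", "B", None, "C", None, None, "D"]
print(largest_free_run(layout))            # 2
# ...until the running functions are relocated to one side.
print(largest_free_run(compact(layout)))   # 4
```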
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in randomly packed columns operating with counter-current flow between the phases. This work proposes a new column design methodology, valid for any type of packing and contaminant, which avoids the need for an arbitrarily chosen diameter. It also avoids the usual graphical Eckert correlations for pressure drop; the hydraulic features are instead fixed beforehand as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed a correction of the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control experimentally. To better describe the column behaviour under stationary and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). When the flows are steady, however, the system becomes linear, although it still lacks an evident analytical solution. In steady state the resulting ODE can be solved by analytical methods; in the dynamic state, discretization of the PDEs by finite differences overcomes this difficulty. A numerical algorithm was used to estimate the contaminant concentrations in both phases along the column. The large number of resulting algebraic equations and the impossibility of generating a recursive procedure prevented the construction of a generalized program, but an iterative procedure developed in a spreadsheet allowed the simulation. The solution is stable only for similar discretization values; if different values are used for the time and space discretization parameters, the solution easily becomes unstable. The dynamic behaviour of the system was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results do not exhibit strange or unpredictable behaviours.
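As a rough illustration of the finite-difference approach mentioned above, the following Python sketch marches a pair of coupled linear transport equations for a counter-current column (liquid flowing down, gas flowing up, linear interphase transfer) with first-order upwind differences. All parameter values are assumptions, not the work's fitted coefficients, and the time step is tied to the spatial step precisely because mismatched discretizations destabilise the solution, as the abstract notes.

```python
import numpy as np

# Minimal upwind finite-difference sketch of a counter-current stripping
# column: liquid flows down (+z), gas flows up (-z), with a linear mass
# transfer term between phases. All parameters are illustrative assumptions.
L = 1.0                        # column height, m (assumed)
N = 100                        # spatial cells
dz = L / N
uL, uG = 0.01, 0.1             # phase velocities, m/s (assumed)
K, H = 0.05, 0.2               # transfer coeff. (1/s) and Henry constant (assumed)
dt = 0.4 * dz / max(uL, uG)    # time step respecting the CFL stability limit

cL = np.zeros(N)               # liquid-phase contaminant profile (top = index 0)
cG = np.zeros(N)               # gas-phase contaminant profile

def step(cL, cG, cL_in=1.0, cG_in=0.0):
    """Advance one time step; cL_in is a step perturbation fed at the top."""
    cLn, cGn = cL.copy(), cG.copy()
    drive = K * (cL - cG / H)          # interphase transfer driving force
    # liquid moves downward: upwind difference uses the cell above
    cLn[1:] -= dt * (uL * (cL[1:] - cL[:-1]) / dz + drive[1:])
    cLn[0] = cL_in                     # contaminated liquid fed at the top
    # gas moves upward: upwind difference uses the cell below
    cGn[:-1] -= dt * (uG * (cG[:-1] - cG[1:]) / dz - drive[:-1])
    cGn[-1] = cG_in                    # clean air fed at the bottom
    return cLn, cGn

for _ in range(5000):                  # march toward steady state
    cL, cG = step(cL, cG)
print("outlet liquid concentration:", cL[-1])
```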
Abstract:
With the emergence of low-power wireless hardware, new ways of communicating were needed. In order to standardize communication between these low-power devices, the Internet Engineering Task Force (IETF) released the 6LoWPAN standard, which acts as an adaptation layer making IPv6 suitable for low-power and lossy networks. Likewise, the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) has been proposed by the IETF Routing Over Low power and Lossy networks (ROLL) Working Group as a standard routing protocol for IPv6 routing in low-power wireless sensor networks. The research performed in this thesis uses these technologies to implement a mobility process. Mobility management is a fundamental yet challenging area in low-power wireless networks. There are applications that require mobile nodes to exchange data with a fixed infrastructure with quality-of-service guarantees; a prime example is the monitoring of patients in real time. In these scenarios, broadcasting data to all access points (APs) within range may not be a valid option due to the energy consumption, data storage and complexity requirements. An alternative and efficient option is to allow mobile nodes to perform hand-offs. Hand-off mechanisms have been well studied in cellular and ad-hoc networks. However, low-power wireless networks pose a new set of challenges. On one hand, simpler radios and constrained resources call for simpler hand-off schemes. On the other hand, the shorter coverage and higher variability of low-power links require a careful tuning of the hand-off parameters. In this work, we tackle the problem of integrating smart-HOP within a standard protocol, specifically RPL. The simulation results in Cooja indicate that the proposed scheme minimizes the hand-off delay and the total network overhead. The standard RPL protocol is simply unable to provide reliable mobility support similar to other COTS technologies; it merely supports the joining and leaving of nodes, with very low responsiveness in the presence of physical mobility.
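For illustration, here is a minimal RSSI-threshold hand-off decision in the spirit of smart-HOP, sketched in Python: the mobile node averages RSSI over a window, enters a discovery phase when the serving link degrades below a lower threshold, and switches only to a candidate that beats the serving AP by a hysteresis margin. The window size, threshold values and function names are assumptions, not the thesis's tuned parameters.

```python
# Illustrative RSSI-threshold hand-off decision (smart-HOP-like).
# All constants below are assumed values for the sketch only.
WINDOW = 5          # RSSI samples averaged per link estimate
TL_DBM = -90.0      # below this average, start searching for a new AP
HM_DBM = 5.0        # hysteresis margin a candidate AP must exceed

def average_rssi(samples):
    recent = samples[-WINDOW:]
    return sum(recent) / len(recent)

def handoff_decision(serving_samples, candidates):
    """serving_samples: recent RSSI of the serving AP (dBm).
    candidates: dict mapping AP id -> recent RSSI samples.
    Returns the AP to attach to (possibly the current one)."""
    serving = average_rssi(serving_samples)
    if serving >= TL_DBM:
        return "serving"                 # link still good: no hand-off
    # Discovery phase: pick the strongest candidate that beats the
    # serving AP by the hysteresis margin, avoiding ping-pong hand-offs.
    best_ap, best_rssi = None, serving
    for ap, samples in candidates.items():
        rssi = average_rssi(samples)
        if rssi > serving + HM_DBM and rssi > best_rssi:
            best_ap, best_rssi = ap, rssi
    return best_ap or "serving"

# Example: degraded serving link, one clearly better candidate -> "ap1".
print(handoff_decision([-92, -93, -95, -94, -96],
                       {"ap1": [-85, -84, -86], "ap2": [-99, -98, -97]}))
```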
Abstract:
Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous-silicon-based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and its dependence on the scanning-beam parameters are presented and discussed. Preliminary results of line scans are also presented.
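A toy numerical sketch of this two-phase readout (exposure charges each pixel's capacitance; the laser scan then retrieves and resets the stored charge pixel by pixel) is given below. Array sizes and gains are purely illustrative assumptions, not device measurements.

```python
import numpy as np

# Toy model of the optically scanned PINPIN readout described above:
# X-ray exposure charges each sensing photodiode's internal capacitance,
# then a laser scan reads (and resets) the stored charge sequentially.
rng = np.random.default_rng(0)
rows, cols = 8, 8
scene = rng.uniform(0.0, 1.0, (rows, cols))   # incident X-ray intensity

# Exposure: scintillator photons (560 nm) charge the pixel capacitances.
gain = 0.9                                    # assumed conversion gain
stored_charge = gain * scene

# Readout: the 445 nm laser scans the switching diodes line by line,
# retrieving each pixel's charge in sequence to rebuild the image.
image = np.zeros_like(stored_charge)
for r in range(rows):
    for c in range(cols):
        image[r, c] = stored_charge[r, c]     # charge transferred out
        stored_charge[r, c] = 0.0             # pixel reset by the scan

assert np.allclose(image, gain * scene)       # image reconstructed
```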
Abstract:
Res-Systemica, Volume No. 5, Special Issue
Abstract:
This paper discusses the changes brought by the communication revolution to teaching and learning in the scope of LSP. Its aim is to provide insight into how teaching, once bi-dimensional, turned into a multidimensional system, gathering complementary resources that have transformed, in an incredibly short time, the ways we receive, share and store information, for instance as professionals, and keep in touch with our peers. The steady rise of electronic publications and the boom of social and professional networks, search engines, blogs, listservs, forums, e-mail blasts, Facebook pages, YouTube contents, Tweets and Apps have reshaped the way information is conveyed. Classes have ceased to be predictable and have been empowered by digital platforms and by numerous different data repositories (TILDE, IATE, LINGUEE, and many other terminological data banks) that have definitively transformed the academic world in general and tertiary education in particular. There is a bulk of information to be digested by students, who are no longer passive but instead active and responsible for their academic outcomes. The question is whether, given that overflow, they possess the tools to select only what is accurate and important for a certain subject or assignment. With the reduction of the number of course years in most degrees after the implementation of Bologna, and the shrinking of curricular contents, do students have the possibility of developing critical thinking? Both teaching and learning rely on digital resources to speed up the spreading of knowledge, but have those changes been effective in really promoting communication? Furthermore, with the increasing number of Apps that have already been developed, and will continue to appear, for learning foreign languages, for translation, among other purposes, will students still feel the need to learn these skills once they have those Apps? These are some of the questions we would like to discuss in our paper.
Abstract:
In the last decade, both the scientific community and the automotive industry have enabled communications among vehicles in different kinds of scenarios, proposing different vehicular architectures. Vehicular delay-tolerant networks (VDTNs) were proposed as a solution to overcome some of the issues found in other vehicular architectures, namely in dispersed regions and emergency scenarios. Most of these issues arise from the unique characteristics of vehicular networks. Contrary to delay-tolerant networks (DTNs), VDTNs place the bundle layer under the network layer in order to simplify the layered architecture and enable communications in sparse regions characterized by long propagation delays, high error rates, and short contact durations. Such characteristics, however, make contacts very important, since as much information as possible must be exchanged between nodes at every contact opportunity. One way to accomplish this goal is to enforce cooperation between network nodes. To promote cooperation among nodes, it is important that nodes share their own resources to deliver messages from others. This can be a very difficult task if selfish nodes affect the performance of cooperative nodes. This paper studies the performance of a cooperative reputation system that detects, identifies, and avoids communications with selfish nodes. Two scenarios were considered across all the experiments, employing three different routing protocols (First Contact, Spray and Wait, and GeoSpray). For both scenarios, it was shown that reputation mechanisms that aggressively punish selfish nodes contribute to increasing the overall network performance.
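As a sketch of the kind of reputation mechanism studied, the Python fragment below rewards observed relaying, punishes refusals more aggressively, and stops forwarding through nodes whose score falls below a threshold. The update rule and all constants are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal reputation scheme for flagging selfish nodes in a DTN-like
# setting; constants and the update rule are assumed, not the paper's.
from collections import defaultdict

REWARD, PENALTY = 1.0, 4.0     # aggressive punishment of refusals
THRESHOLD = 0.0                # below this, a node is treated as selfish

reputation = defaultdict(float)

def observe_contact(node_id, relayed):
    """Update a node's reputation after a contact opportunity."""
    if relayed:
        reputation[node_id] += REWARD
    else:
        reputation[node_id] -= PENALTY   # punish refusal to relay

def willing_to_forward_to(node_id):
    """Cooperative nodes avoid spending resources on selfish peers."""
    return reputation[node_id] >= THRESHOLD

# Example: one refusal after two good contacts still drops the node
# below the threshold (1 + 1 - 4 = -2), so its traffic is refused.
for relayed in (True, True, False):
    observe_contact("n7", relayed)
print(willing_to_forward_to("n7"))   # False
```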
Abstract:
Final Master's project for obtaining the degree of Master in Communication Networks and Multimedia Engineering
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to the two special cases of BPC in which the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
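For orientation, a generic recursive-calculus-style WCTT bound can be sketched as follows: a flow's bound is its base latency plus the (recursively bounded) blocking of every higher-priority flow it shares a link with. This is a textbook-style formulation for intuition only, not the paper's BP or BPC algorithm, and the flow set and numbers are invented.

```python
# Generic recursive-calculus-style WCTT upper bound for a wormhole NoC:
# a flow is delayed by each higher-priority flow sharing a link, and that
# flow's own worst-case traversal is bounded recursively. Illustrative
# only; not the paper's BP/BPC algorithm.
from functools import lru_cache

# flow id -> (base latency, links used, higher-priority interfering flows)
FLOWS = {
    "f1": (10, {"l1", "l2"}, set()),
    "f2": (12, {"l2", "l3"}, {"f1"}),
    "f3": (8,  {"l3"},       {"f1", "f2"}),
}

@lru_cache(maxsize=None)
def wctt(flow):
    base, links, higher = FLOWS[flow]
    bound = base
    for hp in higher:
        hp_links = FLOWS[hp][1]
        if links & hp_links:    # direct interference on a shared link
            bound += wctt(hp)   # pessimistic: full traversal blocks us
    return bound

for f in FLOWS:
    print(f, wctt(f))           # f1=10, f2=22, f3=30
```

The pessimism visible here (charging each interferer's full recursive bound) is exactly what tighter analyses like BP aim to prune away.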
Abstract:
The game of football demands new computational approaches to measure individual and collective performance. Understanding the phenomena involved in the game may foster the identification of strengths and weaknesses, not only of each player, but also of the whole team. The development of assertive quantitative methodologies constitutes a key element in sports training. In football, the predictability and stability inherent in the motion of a given player may be seen as one of the most important concepts for fully characterising the variability of the whole team. This paper characterises the predictability and stability levels of players during an official football match, adopting a Fractional Calculus (FC) approach to define a player's trajectory. By applying FC, one can benefit from newly considered modeling perspectives, such as the fractional coefficient, to estimate a player's predictability and stability. This paper also formulates the concept of attraction domain, related to the tactical region of each player and inspired by stability theory principles. To compare the variability inherent in the player's process variables (e.g., distance covered) and to assess his predictability and stability, entropy measures are considered. Experimental results suggest that the most predictable player is the goalkeeper while, conversely, the most unpredictable players are the midfielders. We also conclude that, despite his predictability, the goalkeeper is the most unstable player, while lateral defenders are the most stable during the match.
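To illustrate how an entropy measure can rank players by predictability, the Python sketch below computes sample entropy on two synthetic series: a regular (goalkeeper-like) one and an erratic (midfielder-like) one. Lower entropy means more regular, hence more predictable motion. The parameters m and r are the usual template length and tolerance; the data and the paper's actual entropy measure are assumptions for illustration.

```python
import numpy as np

# Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
# length-m templates within tolerance and A counts those that still
# match when extended to length m+1 (Chebyshev distance).
def sample_entropy(series, m=2, r=0.2):
    x = np.asarray(series, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1   # exclude self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
t = np.arange(500)
goalkeeper = np.sin(0.1 * t) + 0.05 * rng.standard_normal(500)  # regular
midfielder = rng.standard_normal(500)                           # erratic
print(sample_entropy(goalkeeper) < sample_entropy(midfielder))  # True
```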
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering