847 results for Pipelined calculus unit


Relevance: 20.00%

Abstract:

AIM: To analyze the demand for Emergency Care (EC) in the Western Health District of Ribeirão Preto (São Paulo), in order to identify the reasons why users turn to these services in situations that are not characterized as urgencies or emergencies. METHODS: A qualitative, descriptive study was undertaken. A guiding script was applied to 23 EC users, addressing questions related to health service accessibility and welcoming, problem solving, the reason for visiting the EC, and comprehensiveness of care. RESULTS: The subjects reported that, at the Primary Health Care services, receiving care and scheduling consultations took a long time, and that the opening hours of these services coincide with their work hours. At the EC service, access to technologies and medicines was easier. CONCLUSION: Primary health care services have been unable to serve as the entry point to the health system and are being replaced by emergency services, putting a significant strain on the latter's capacity.

Relevance: 20.00%

Abstract:

This study aimed to evaluate the parameters established in COFEN Resolution 293/04 concerning nursing staff dimensioning in adult intensive care units (AICUs). The research was conducted in six hospitals in São Paulo City. The average daily number of professionals needed for patient care was calculated according to the parameters established by COFEN, and the results were compared with the actual daily staffing of these units. The nurse proportions recommended by COFEN proved higher than those practiced in the hospitals studied, which represents a challenge for Brazilian nursing. The mean care-time values were found to be appropriate and represent important standards for dimensioning the minimum number of professionals in AICUs. This study contributed to the validation of the parameters indicated in Resolution 293/04 for nursing staff dimensioning in AICUs.

Relevance: 20.00%

Abstract:

OBJECTIVE: To assess cardiovascular risk, using the Framingham risk score, in a sample of hypertensive individuals from a public primary care unit. METHODS: The caseload comprised individuals classified as hypertensive according to the criteria established by the JNC 7 (2003), drawn from 1601 patients followed up in 1999 at the Cardiology and Arterial Hypertension Outpatient Clinic of the Teaching Primary Care Unit, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo. Patients were selected by draw; both genders were included, aged over 20 years, excluding pregnant women. It was a descriptive, cross-sectional, observational study. The Framingham risk score was used to stratify the cardiovascular risk of developing coronary artery disease (death or non-fatal acute myocardial infarction). RESULTS: Ages ranged from 27 to 79 years (mean = 63.2 ± 9.58). Of the 382 individuals studied, 270 (70.7%) were female, and 139 (36.4%) were classified directly as high cardiovascular risk because of diabetes mellitus or atherosclerosis documented by an event or procedure. Of the 243 patients stratified by the score, 127 (52.3%) had HDL-C < 50 mg/dL; 210 (86.4%) had systolic blood pressure > 120 mmHg; 46 (18.9%) were smokers; and 33 (13.6%) were at high cardiovascular risk. Added to the 139 enrolled directly as high risk, these totaled 172 (45%) at high cardiovascular risk, with 77 (20.2%) at medium and 133 (34.8%) at low risk. The highest percentage of high-risk individuals was aged over 70 years; medium-risk individuals were mostly aged over 60 years; and low-risk patients were aged 50 to 69 years. CONCLUSION: The significant number of individuals at high and medium cardiovascular risk indicates the need for close follow-up.

Relevance: 20.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:

• Portable mobile devices have modest sizes and weights, and therefore limited resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.

• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at that level there is no information about user application activity, and consequently about the impact of power management decisions on QoS. Although operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs): consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard, so there is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform.
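To make the allocation-and-scheduling problem concrete, the following is a minimal sketch (not taken from the thesis) of a greedy list scheduler for precedence-constrained tasks on identical cores — exactly the kind of incomplete heuristic whose unknown optimality gap the dissertation, as discussed below, sets out to eliminate. All task names and durations are invented for the example.

```python
# Toy greedy list scheduler for precedence-constrained tasks on identical
# cores -- an *incomplete* method of the kind whose unknown optimality gap
# the dissertation sets out to eliminate. Task names/durations are invented.

def list_schedule(durations, deps, num_procs):
    """durations: {task: time}; deps: {task: set of predecessors}."""
    finish = {}                    # task -> finish time
    proc_free = [0.0] * num_procs  # earliest free instant of each core
    schedule = {}                  # task -> (core, start time)
    remaining = set(durations)
    while remaining:
        # A task is ready once all of its predecessors have completed.
        ready = [t for t in remaining if deps.get(t, set()) <= finish.keys()]
        # Greedy policy: longest ready task first, on the earliest-free core.
        task = max(ready, key=lambda t: durations[t])
        core = min(range(num_procs), key=lambda c: proc_free[c])
        start = max(proc_free[core],
                    max((finish[d] for d in deps.get(task, ())), default=0.0))
        finish[task] = start + durations[task]
        proc_free[core] = finish[task]
        schedule[task] = (core, start)
        remaining.remove(task)
    return schedule, max(finish.values())  # mapping and makespan

# Example: a small split/join streaming task graph mapped on 2 cores.
durations = {"in": 2, "filter": 4, "fft": 3, "out": 1}
deps = {"filter": {"in"}, "fft": {"in"}, "out": {"filter", "fft"}}
print(list_schedule(durations, deps, num_procs=2))
```

The greedy rule finds a legal schedule quickly but gives no bound on its distance from the optimum, which is precisely the limitation discussed next.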
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor: despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power display technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others decrease the backlight level while masking the perceived quality degradation with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS.
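As a rough illustration of the backlight/compensation trade-off just described (a software sketch, not the thesis' hardware-assisted implementation, which runs in the SoC's image processing unit), the snippet below dims the backlight by a factor chosen from the image histogram and boosts pixel values by the inverse factor, so that perceived luminance is approximately preserved. The percentile rule, the clipping budget and the 10% floor are invented for the example.

```python
# Minimal sketch of backlight dimming with concurrent image compensation.
# Perceived luminance is roughly backlight_level * pixel_value, so dimming
# the backlight by a factor b can be hidden by boosting pixels by 1/b.
import numpy as np

def dim_and_compensate(frame, clip_fraction=0.01):
    """frame: uint8 grayscale image. Returns (backlight_scale, new_frame)."""
    # Pick b so that at most `clip_fraction` of the pixels saturate at 255
    # (saturation is the unavoidable QoS loss of this scheme).
    threshold = np.percentile(frame, 100 * (1 - clip_fraction))
    b = max(threshold / 255.0, 0.1)          # never dim below 10% backlight
    compensated = np.clip(frame.astype(np.float32) / b, 0, 255).astype(np.uint8)
    return b, compensated

# A mostly dark frame lets the backlight drop sharply with little clipping.
frame = np.clip(np.random.normal(60, 25, (480, 640)), 0, 255).astype(np.uint8)
scale, out = dim_and_compensate(frame)
print(f"backlight scaled to {scale:.2f} of nominal")
```

Doing this per pixel on the CPU is exactly the cost that the hardware-assisted approach avoids.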
The proposed approach improves on CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications. Thesis overview: the remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation (sketched below), and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, summarizing the main research contributions discussed throughout this dissertation.
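For readers unfamiliar with the decomposition scheme mentioned for Chapter 4, this toy sketch conveys the idea of no-good generation under heavily simplifying assumptions: a brute-force master stands in for a CP/ILP solver, a plain load/deadline check stands in for the real scheduling subproblem, and all numbers are invented. Whenever the subproblem rejects an allocation, a no-good cut is learned so the master never proposes that partial assignment again.

```python
# Toy illustration of decomposition + no-good generation (heavily
# simplified: a real implementation would use a CP/ILP master and a
# scheduling subproblem with precedence and messaging costs).
from itertools import product

durations = {"a": 4, "b": 3, "c": 3, "d": 2}
deadline, procs = 6, 2

def subproblem_feasible(alloc):
    """Scheduling stand-in: feasible iff no core is loaded past the deadline."""
    load = [0] * procs
    for task, p in alloc.items():
        load[p] += durations[task]
    return max(load) <= deadline

nogoods = []   # each no-good: a set of (task, core) pairs forbidden together

def violates_nogood(alloc):
    items = set(alloc.items())
    return any(ng <= items for ng in nogoods)

solution = None
for assignment in product(range(procs), repeat=len(durations)):  # master search
    alloc = dict(zip(durations, assignment))
    if violates_nogood(alloc):        # pruned without re-running the subproblem
        continue
    if subproblem_feasible(alloc):
        solution = alloc
        break
    # Learn a no-good: the assignment of the overloaded core (or any
    # extension of it) must never be proposed again.
    overloaded = max(range(procs), key=lambda p: sum(
        durations[t] for t, q in alloc.items() if q == p))
    nogoods.append({(t, p) for t, p in alloc.items() if p == overloaded})

print(solution, f"({len(nogoods)} no-goods learned)")
```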

Relevance: 20.00%

Abstract:

The application of concurrency theory to systems biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the way to the employment of concurrent languages for the modelling of biological systems, whose peculiar characteristics led to the design of many bio-inspired formalisms achieving higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus which, thanks to its expressive capabilities, represents a keystone in this respect. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to consistently model several phenomena, such as the formation of molecular complexes, hierarchical subdivision of the system into compartments, inter-compartment reactions, and dynamic reorganisation of the compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding several bio-inspired formalisms in a compositional way, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems into pi@ are formalised. The conciseness of these translations into pi@ allows an indirect comparison of the formalisms by means of their encodings, and provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity concern the expressive power of priority. Several impossibility results are described which clearly establish the superior expressiveness of prioritised languages and the problems arising in the attempt to implement them in parallel. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of a purely parallel implementation of priority by means of point-to-point or broadcast communication.
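As a loose, hypothetical illustration (not the formal semantics of pi@), the sketch below hints at how the two extensions interact: channel names are tuples, so synchronisation can additionally match a compartment tag (polyadic synchronisation), and each action carries a priority, so lower-priority interactions are blocked while a higher-priority one is enabled. All agent names, channel names and priority values are invented.

```python
# Toy reduction step hinting at pi@'s two extensions: tuple channels
# (polyadic synchronisation, here tagging a compartment) and priorities
# (only highest-priority enabled pairs may react). Higher number = higher
# priority in this sketch; the real calculus' conventions may differ.
import random

# Each agent offers actions: (kind, channel, priority).
agents = {
    "ligand":   [("send", ("cell1", "bind"), 1)],
    "receptor": [("recv", ("cell1", "bind"), 1)],
    "decay":    [("send", ("cell1", "degrade"), 0)],   # low priority
    "protease": [("recv", ("cell1", "degrade"), 0)],
}

def enabled_pairs(agents):
    pairs = []
    for a, acts_a in agents.items():
        for b, acts_b in agents.items():
            if a >= b:
                continue  # consider each unordered agent pair once
            for kind_a, chan_a, pri_a in acts_a:
                for kind_b, chan_b, pri_b in acts_b:
                    # Synchronisation matches the *whole* tuple, so the
                    # compartment tag must agree as well as the name.
                    if chan_a == chan_b and {kind_a, kind_b} == {"send", "recv"}:
                        pairs.append((min(pri_a, pri_b), a, b, chan_a))
    return pairs

def step(agents):
    pairs = enabled_pairs(agents)
    if not pairs:
        return None
    top = max(p for p, *_ in pairs)                  # priority gates reduction:
    candidates = [x for x in pairs if x[0] == top]   # low-priority pairs wait
    _, a, b, chan = random.choice(candidates)
    return a, b, chan

print(step(agents))  # always the priority-1 ("cell1", "bind") interaction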

Relevance: 20.00%

Abstract:

In this study, structural-geological, metamorphic and geochronological data are used to quantify the tectonic processes responsible for the exhumation of the Cycladic Blueschist Unit in the Aegean and western Turkey. The two tectonic processes are: (1) normal faulting and (2) vertical ductile thinning. A finite strain analysis of samples from the Cycladic Blueschist Unit allows an estimate of the contribution of vertical ductile thinning to the total exhumation. Calculations with a one-dimensional numerical model show that vertical ductile thinning accounts for only about 10% of the total exhumation. Kinematic, metamorphic and geochronological data explain the tectonic nature and evolution of an extensional fault system on the island of Ikaria in the eastern Aegean. Thermobarometric data indicate that the footwall of the fault system was exhumed from a depth of about 15 km. Apatite and zircon fission-track ages, as well as apatite (U-Th)/He ages, show that the extensional fault system was active between 11 and 3 Ma at a rate of about 7-8 km/Ma. Late Miocene normal faults thus contributed the last ~5-15 km of exhumation of the high-pressure rocks, so a large part of the exhumation of the Cycladic Blueschist Unit must have taken place before the Miocene. This is explained by an extrusion wedge that exhumed about 30-35 km of the Cycladic Blueschist Unit in western Turkey. 40Ar/39Ar and 87Rb/86Sr dating of mylonites from the upper normal-fault contact between the Selçuk nappe and the underlying Ampelos/Dilek nappe of the Cycladic Blueschist Unit, as well as from the lower thrust contact between the Ampelos/Dilek nappe and the underlying Menderes nappes, shows that both mylonitic zones formed at about 35 Ma, demonstrating the existence of a Late Eocene/Early Oligocene extrusion wedge.

Relevance: 20.00%

Abstract:

Single-processor microprocessors (CPUs) saw rapid performance growth and falling costs for about twenty years. These microprocessors brought computing power on the order of GFLOPS (giga floating-point operations per second) to desktop PCs, and hundreds of GFLOPS to server clusters. This rise enabled new program functionality, better user interfaces and many other benefits. However, this growth slowed abruptly in 2003 because of ever higher energy consumption and heat dissipation problems, which prevented further clock frequency increases: the physical limits of silicon were drawing ever closer. To work around the problem, CPU (Central Processing Unit) manufacturers started designing multicore microprocessors, a choice that had a notable impact on the developer community, accustomed to thinking of software as a series of sequential commands. Programs that had always enjoyed performance improvements with every new CPU generation no longer saw such gains: running on a single core, they could not exploit the full power of the CPU. To take full advantage of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry has conquered a considerable market share: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. To make their titles more attractive, software houses developing video games rely on ever more powerful, and often poorly optimized, graphics engines, which makes these titles extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially over the last decade, have engaged in a veritable performance race that has led to products with staggering computing capabilities. But unlike CPUs, which at the beginning of the 2000s took the multicore route in order to keep supporting sequential programs, GPUs have become manycore, with hundreds upon hundreds of small cores performing computations in parallel. Can this immense computing capacity be used in other application domains? The answer is yes, and the goal of this thesis is precisely to assess, at the current state of the art, how, and how efficiently, generic software can make use of the GPU instead of the CPU.
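To make the sequential-versus-manycore contrast concrete, here is a minimal sketch in which NumPy's vectorised elementwise operation stands in for the "one lightweight thread per element" execution model that GPU manycore hardware runs natively. A real port would use CUDA or OpenCL and would also have to amortise host-device memory transfers, which is central to the thesis' efficiency question; the kernel and array size here are invented.

```python
# Sequential per-element loop vs. a data-parallel elementwise kernel.
# NumPy's vectorised form is only a CPU-side stand-in for the GPU model,
# where each element would map to one lightweight hardware thread.
import time
import numpy as np

n = 1_000_000
x = np.random.rand(n).astype(np.float32)

# Sequential style: one core walks the data element by element.
t0 = time.perf_counter()
out_seq = [xi * xi + 1.0 for xi in x]
t_seq = time.perf_counter() - t0

# Data-parallel style: the same kernel applied to all elements "at once".
t0 = time.perf_counter()
out_par = x * x + 1.0
t_par = time.perf_counter() - t0

assert np.allclose(out_seq, out_par)
print(f"sequential: {t_seq:.3f}s  vectorised: {t_par:.3f}s")
```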

Relevance: 20.00%

Abstract:

"For years I believed I had grown up in a suburb of Buenos Aires, a suburb of adventurous streets and visible sunsets. In truth I grew up in a garden, behind a fence of iron lances, and in a library of countless English volumes. That Palermo of the knife and the guitar (they assure me) lurked on the street corners, but those who peopled my mornings and lent an agreeable horror to my nights were Stevenson's blind buccaneer, dying under the horses' hooves, and the traitor who abandoned his friend on the moon, and the time traveller who brought back from the future a withered flower, and the genie imprisoned for centuries in the Solomonic jar, and the veiled prophet of Khorasan, who behind the gems and the silk concealed leprosy. What was happening, meanwhile, beyond the lances of the fence? What vernacular and violent destinies were being fulfilled a few steps from me, in the squalid tavern or in the turbulent open ground? What was that Palermo really like, or how beautiful would it have been had it been so? This book seeks to answer those questions, more out of imagination than documentation." This is how Jorge Luis Borges opens Evaristo Carriego, a biographical essay that went almost unnoticed in the intellectual circles of the capital, circles to which the writer himself belonged. The book was devoted to a figure foreign to those circles: the "bohemian, consumptive and anarchist" poet Evaristo Carriego, who lived at the turn of the twentieth century in the Palermo neighbourhood, in those days a disreputable outskirt of Buenos Aires. Buenos Aires is the Borgesian city par excellence: devoid of precise typological features, fickle to the eye, mirror and metaphor of all the great cities of the world. But above all, Buenos Aires is a city "invented" by Borges, who gave it a literary image that relates in a complex way to the real one. In Borges's imagination it would mark the beginning and the end of his life, a sort of centre, the one patch of stability to which the Argentine's restless mind could return.

Relevance: 20.00%

Abstract:

Procedures for quantitative walking analysis include the assessment of body segment movements within defined gait cycles. Recently, methods to track human body motion using inertial measurement units have been suggested, but it is not known whether these techniques can be readily transferred to clinical measurement situations. This work investigates what is necessary for a single inertial measurement unit mounted on the lower back to track orientation and determine spatio-temporal features of gait outside the confines of a conventional gait laboratory. Apparent limitations of individual inertial sensors can be overcome by fusing their data using methods such as a Kalman filter; the benefits of optimizing such a filter for the type of motion, however, were unknown. 3D accelerations and 3D angular velocities were collected for 18 healthy subjects during treadmill walking. Optimization of the Kalman filter parameters improved pitch and roll angle estimates when compared to angles derived using stereophotogrammetry. A Weighted Fourier Linear Combiner method for estimating 3D orientation angles, which constructs an analytical representation of the angular velocities and thereby allows drift-free integration, is also presented; when tested, this method provided accurate estimates of 3D orientation compared to stereophotogrammetry. Methods to determine spatio-temporal features from lower trunk accelerations generally require knowledge of the sensor alignment. A method was therefore developed to estimate the instants of initial and final ground contact from accelerations measured by a waist-mounted inertial device without rigorous alignment: a continuous wavelet transform is used to filter and differentiate the signal and derive estimates of the initial and final contact times. The technique was tested on data recorded for both healthy and pathologic (hemiplegia and Parkinson's disease) subjects and validated using an instrumented mat. The results show that a single inertial measurement unit can assist whole-body gait assessment; however, further investigation is required to understand altered gait timing in some pathological subjects.
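As an illustration of the event-detection idea (a simplification of the wavelet approach described above, not the validated method itself), the sketch below integrates a synthetic vertical-acceleration signal and differentiates it with smooth Gaussian-derivative kernels, taking minima of the first differentiation as initial contacts and maxima of the second as final contacts. The sampling rate, kernel scale and synthetic signal are all assumptions for the example.

```python
# Illustrative wavelet-style gait event detection from a single trunk-worn
# accelerometer. A Gaussian-derivative kernel at one scale (scipy.ndimage)
# stands in for the continuous wavelet transform; the input is synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

fs = 100.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
acc_v = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)  # ~1.8 steps/s

# Integrate vertical acceleration, then differentiate with a smooth
# (Gaussian-derivative) kernel: minima of the result mark initial contacts.
vel = np.cumsum(acc_v - acc_v.mean()) / fs
d1 = gaussian_filter1d(vel, sigma=8, order=1)
initial_contacts, _ = find_peaks(-d1, distance=int(0.4 * fs))

# A second smoothed differentiation yields final contacts at its maxima.
d2 = gaussian_filter1d(d1, sigma=8, order=1)
final_contacts, _ = find_peaks(d2, distance=int(0.4 * fs))

print(f"{initial_contacts.size} initial / {final_contacts.size} final contacts "
      f"in {t[-1]:.0f} s of walking")
```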