898 results for Time and hardware redundancy
Abstract:
This document describes a fault-tolerance model for distributed real-time systems. The model is proposed as a reliable, flexible solution, adaptable to the needs of distributed real-time systems. Fault tolerance is an extremely important aspect of building real-time systems, and its application brings numerous benefits. A fault-tolerance-oriented design contributes to better system performance by improving key aspects such as system safety, reliability and availability. The work focuses on the prevention, detection and tolerance of logical (software) and physical (hardware) faults, and rests on a predominantly time-based architecture combined with redundancy techniques. The model is concerned with efficiency and execution costs. To this end, traditional fault-tolerance techniques such as redundancy and migration are also used, so as not to harm the service execution time, i.e., reducing the recovery time of replicas when failures occur. This work proposes low-complexity run-time heuristics to determine where to replicate the components that make up the real-time software, and to negotiate them in a bid-based coordination mechanism. The work adapts and extends algorithms that provide solutions even when interrupted; these algorithms are reported in related research and are used to form coalitions among supporting nodes. The proposed model masks failures through active replication techniques, both virtual and physical, with concurrent execution blocks, attempting to improve or maintain the quality produced while introducing practically no significant information overhead into the system.
The model ensures that the machines chosen, to which the agents will migrate, iteratively improve the quality-of-service levels provided to the components, according to the availability of the respective machines. If the new quality configuration is profitable for the overall quality of the service, an effort is made to accept new components at the expense of the quality of those already hosted locally. The nodes cooperating in the coalition maximize the number of parallel executions among the parallel components that make up the service, in order to reduce execution delays. The development of this thesis led to the proposed model and the presented results, and was thoroughly supported by bibliographic surveys of research and development work, literature and mathematical preliminaries. The work is also based on a list of bibliographic references.
Abstract:
Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as enabling platforms to meet the high performance demanded by modern applications (e.g. 4G, CDMA, etc.). Recently proposed CGRAs offer time-multiplexing and dynamic application parallelism to enhance device utilization and reduce energy consumption, at the cost of additional memory (up to 50% of the overall platform area). To reduce the memory overheads, novel CGRAs employ either statistical compression, intermediate compact representation, or multicasting. Each compaction technique has different properties (i.e. compression ratio, decompression time and decompression energy) and is best suited for a particular class of applications. However, existing research only deals with these methods separately. Moreover, it only analyzes the compaction ratio and does not evaluate the associated energy overheads. To tackle these issues, we propose a polymorphic compression architecture that interleaves these techniques in a single platform. The proposed architecture allows each application to take advantage of a separate compression/decompression hierarchy (consisting of various types and implementations of hardware/software decoders) tailored to its needs. Simulation results, using different applications (FFT, matrix multiplication, and WLAN), reveal that the choice of compression hierarchy has a significant impact on compression ratio (up to 52%), decompression energy (up to 4 orders of magnitude), and configuration time (from 33 ns to 1.5 s) for the tested applications. Synthesis results reveal that introducing adaptivity incurs negligible additional overhead (1%) compared to the overall platform area.
Abstract:
In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
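The pipeline described above (impulse series, Fourier amplitude spectrum, power-law fit) can be sketched in a few lines. This is a hedged illustration: the event days and magnitudes below are synthetic placeholders, not the U.S. tornado record.

```python
import numpy as np

# Sketch of the spectral analysis described above. The event days and
# magnitudes are synthetic placeholders (NOT the U.S. tornado record).
rng = np.random.default_rng(0)
n_days = 4096
signal = np.zeros(n_days)                    # Dirac-impulse series on a daily grid
event_days = rng.choice(n_days, size=200, replace=False)
signal[event_days] = 1.0 + rng.pareto(2.0, size=200)   # amplitude ~ event size

spectrum = np.abs(np.fft.rfft(signal))       # amplitude spectrum
freqs = np.fft.rfftfreq(n_days, d=1.0)       # cycles per day

# Approximate |F(w)| ~ c * w^q by a straight line in log-log coordinates;
# q is then read as a signature of the underlying dynamics.
mask = (freqs > 0) & (spectrum > 0)
q, log_c = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
print("power-law exponent q =", q)
```

For purely random impulses the spectrum is nearly flat (q close to 0); long range memory in the real series would show up as a non-trivial exponent.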
Abstract:
in RoboCup 2007: Robot Soccer World Cup XI
Abstract:
The main goal of this thesis is to research FPAA/dpASP devices and technologies applied to control systems. These devices provide an easy way to emulate analog circuits that can be reconfigured with programming tools from the manufacturers and, in the case of dpASPs, can be dynamically reconfigured on the fly. Different kinds of commercially available technologies are described, as well as academic projects from research groups. These technologies are very recent and are developing rapidly to achieve the level of flexibility and integration needed to penetrate the market more easily. As with CPLDs/FPGAs, FPAA/dpASP technologies aim to increase productivity, reduce development time and make future hardware reconfigurations easier, reducing costs. FPAAs/dpASPs still have some limitations compared with classic analog circuits, due to lower working frequencies and the difficulty of emulating complex circuits that require more components inside the integrated circuit. However, they have great advantages in sensor signal conditioning, filter circuits and control systems. This thesis focuses on practical implementations of these technologies in control systems, namely PID controllers. The results of the experiments confirm the efficacy of FPAAs/dpASPs in signal conditioning and control systems.
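As a rough illustration of the kind of PID control loop the thesis implements on FPAA/dpASP hardware, here is a minimal discrete-time software sketch. The gains and the first-order plant are illustrative assumptions, not values taken from the thesis.

```python
# Minimal discrete PID controller sketch (illustrative; gains and plant
# below are assumptions, not the thesis's FPAA implementation).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt            # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order plant dy/dt = -y + u toward the setpoint 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)
    y += 0.01 * (-y + u)                            # forward-Euler plant step
print("output after 20 s:", round(y, 3))
```

The integral term removes the steady-state error that a pure proportional controller would leave on this plant; on an FPAA the same structure is emulated with configurable analog blocks rather than software.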
Abstract:
Glaucoma is a multifactorial condition under the serious influence of many risk factors. The role of diabetes mellitus (DM) in glaucoma etiology or progression remains inconclusive. However, diabetic patients have a different healing mechanism compared with the general population, which may play a negative role in surgical outcomes. This review article attempts to analyze the association between the two diseases, glaucoma and DM, before and after surgery. Epidemiological studies, based mainly on population prevalence analyses, have shown conflicting outcomes over time, and even in the most recent articles the association remains inconclusive. By contrast, experimental models based on animal-induced chronic hyperglycemia have shown an important association between the two diseases, explained by common neurodegenerative mechanisms. Diabetic patients have a different wound-healing process in the eye vis-à-vis other organs. The healing process is more pronounced and results in lower surgical survival time and higher intraocular pressure (IOP) levels; therefore, these patients usually need more medication to lower the IOP. Both randomized and nonrandomized retrospective and experimental molecular studies have shown the association between DM and glaucoma. Further studies are needed to better explain the outcomes of more recent surgical procedures and of the increasing use of antifibrotics. How to cite this article: Costa L, Cunha JP, Amado D, Pinto LA, Ferreira J. Diabetes Mellitus as a Risk Factor in Glaucoma's Physiopathology and Surgical Survival Time: A Literature Review.
Abstract:
Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Due to the advantages of real-time PCR, such as fast processing and the closed system, this method should be indicated as the first choice for use in large-scale diagnosis, and nested PCR should be used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.
Abstract:
BACKGROUND: Different studies have shown circadian variation of ischemic burden among patients with ST-Elevation Myocardial Infarction (STEMI), but with controversial results. The aim of this study was to analyze circadian variation of myocardial infarction size and in-hospital mortality in a large multicenter registry. METHODS: This retrospective, registry-based study was based on data from AMIS Plus, a large multicenter Swiss registry of patients who suffered myocardial infarction between 1999 and 2013. Peak creatine kinase (CK) was used as a proxy measure for myocardial infarction size. Associations between peak CK, in-hospital mortality, and the time of day at symptom onset were modelled using polynomial-harmonic regression methods. RESULTS: 6,223 STEMI patients were admitted to 82 acute-care hospitals in Switzerland and treated with primary angioplasty within six hours of symptom onset. Only the 24-hour harmonic was significantly associated with peak CK (p = 0.0001). The maximum average peak CK value (2,315 U/L) was for patients with symptom onset at 23:00, whereas the minimum average (2,017 U/L) was for onset at 11:00. The amplitude of variation was 298 U/L. In addition, no correlation was observed between ischemic time and circadian peak CK variation. Of the 6,223 patients, 223 (3.58%) died during index hospitalization. Remarkably, only the 24-hour harmonic was significantly associated with in-hospital mortality. The risk of death from STEMI was highest for patients with symptom onset at 00:00 and lowest for those with onset at 12:00. DISCUSSION: As a part of this first large study of STEMI patients treated with primary angioplasty in Swiss hospitals, investigations confirmed a circadian pattern to both peak CK and in-hospital mortality which were independent of total ischemic time. Accordingly, this study proposes that symptom onset time be incorporated as a prognosis factor in patients with myocardial infarction.
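The 24-hour harmonic regression described above can be sketched with an ordinary least-squares fit of a single cosinor term. The data below are synthetic, generated only to mimic the reported peak/trough values (about 2,315 U/L at 23:00 and 2,017 U/L at 11:00); they are not AMIS Plus records.

```python
import numpy as np

# Hedged sketch of a one-harmonic (24 h) cosinor regression:
# peakCK ~ mesor + b*cos(2*pi*t/24) + c*sin(2*pi*t/24), t = onset hour.
rng = np.random.default_rng(1)
t = rng.uniform(0, 24, size=500)                       # symptom-onset hour
ck = 2166 + 149 * np.cos(2 * np.pi * (t - 23) / 24) + rng.normal(0, 50, 500)

X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
beta, *_ = np.linalg.lstsq(X, ck, rcond=None)

mesor = beta[0]                                        # rhythm-adjusted mean
amplitude = np.hypot(beta[1], beta[2])                 # half peak-to-trough swing
acrophase = (np.arctan2(beta[2], beta[1]) * 24 / (2 * np.pi)) % 24  # hour of peak
print(round(mesor), round(amplitude), round(acrophase))
```

Fitting higher-order harmonics (12 h, 8 h, ...) and testing each term's significance is what distinguishes a polynomial-harmonic model from this single-harmonic sketch.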
Abstract:
We analyze the impact of a minimum price variation (tick) and time priority on the dynamics of quotes and the trading costs when competition for the order flow is dynamic. We find that convergence to competitive outcomes can take time and that the speed of convergence is influenced by the tick size, the priority rule and the characteristics of the order arrival process. We also show that a zero minimum price variation is never optimal when competition for the order flow is dynamic. We compare the trading outcomes with and without time priority. Time priority is shown to guarantee that uncompetitive spreads cannot be sustained over time. However, it can sometimes result in higher trading costs. Empirical implications are proposed. In particular, we relate the size of the trading costs to the frequency of new offers and the dynamics of the inside spread to the state of the book.
Abstract:
Aim: The aim of this research is to assess the associations between subjective pubertal timing (SPT) and onset of health-compromising behaviours among girls reporting an on-time objective pubertal timing (OPT). Methods: Data were drawn from the Swiss SMASH 2002 survey, a self-administered questionnaire study conducted among a nationally representative sample of 7548 adolescents aged 16-20 years. From the 3658 girls in the initial sample, we selected only those (n = 1003) who provided information about SPT and who reported the average age at menarche, namely 13, considering this as an on-time OPT. Bivariate and logistic analyses were conducted to compare the early, on-time and late SPT groups in terms of onset of health-compromising behaviours. Results: A perception of pubertal precocity was associated with sexual intercourse before age 16 [adjusted odds ratio (AOR): 2.10 (1.30-3.37)] and early use of illegal drugs other than cannabis [AOR: 2.55 (1.30-5.02)]. Conversely, girls perceiving their puberty as late were less likely to report intercourse before age 16 [AOR: 0.30 (0.12-0.75)]. Conclusion: Faced with an adolescent girl perceiving her puberty as early, the practitioner should investigate the existence of health-compromising behaviours even if her puberty is or was objectively on-time.
A filtering method to correct time-lapse 3D ERT data and improve imaging of natural aquifer dynamics
Abstract:
We have developed a processing methodology that allows crosshole ERT (electrical resistivity tomography) monitoring data to be used to derive temporal fluctuations of groundwater electrical resistivity and thereby characterize the dynamics of groundwater in a gravel aquifer as it is infiltrated by river water. Temporal variations of the raw ERT apparent-resistivity data were mainly sensitive to the resistivity (salinity), temperature and height of the groundwater, with the relative contributions of these effects depending on the time and the electrode configuration. To resolve the changes in groundwater resistivity, we first expressed fluctuations of temperature-detrended apparent-resistivity data as linear superpositions of (i) time series of riverwater-resistivity variations convolved with suitable filter functions and (ii) linear and quadratic representations of river-water-height variations multiplied by appropriate sensitivity factors; river-water height was determined to be a reliable proxy for groundwater height. Individual filter functions and sensitivity factors were obtained for each electrode configuration via deconvolution using a one month calibration period and then the predicted contributions related to changes in water height were removed prior to inversion of the temperature-detrended apparent-resistivity data. Applications of the filter functions and sensitivity factors accurately predicted the apparent-resistivity variations (the correlation coefficient was 0.98). Furthermore, the filtered ERT monitoring data and resultant time-lapse resistivity models correlated closely with independently measured groundwater electrical resistivity monitoring data and only weakly with the groundwater-height fluctuations. The inversion results based on the filtered ERT data also showed significantly less inversion artefacts than the raw data inversions. 
We observed resistivity increases of up to 10% and the arrival time peaks in the time-lapse resistivity models matched those in the groundwater resistivity monitoring data.
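The calibration step described above (estimating, for one electrode configuration, a short filter for the river-resistivity series together with linear and quadratic water-height sensitivities by least squares) can be sketched as follows. The series lengths, filter length and noise level are synthetic assumptions, not the field data.

```python
import numpy as np

# Hedged sketch of the per-configuration calibration: fit a k-tap filter f
# and height sensitivities so that the detrended apparent-resistivity series
# d ~ (river resistivity r) convolved with f + s1*h + s2*h^2.
rng = np.random.default_rng(3)
n, k = 720, 12                                  # hourly samples, 12-tap filter
r = rng.normal(size=n)                          # river-water resistivity variation
h = rng.normal(size=n)                          # river-water height variation
f_true = np.exp(-np.arange(k) / 3.0)            # "true" filter for the synthetic data

# Causal convolution matrix: column i holds r delayed by i samples
C = np.column_stack([np.concatenate([np.zeros(i), r[: n - i]]) for i in range(k)])
d = C @ f_true + 0.4 * h + 0.1 * h**2 + rng.normal(0, 0.05, n)

A = np.column_stack([C, h, h**2])               # filter taps + height sensitivities
coef, *_ = np.linalg.lstsq(A, d, rcond=None)
f_hat, s_lin, s_quad = coef[:k], coef[k], coef[k + 1]
print("height sensitivities:", round(s_lin, 2), round(s_quad, 2))
```

Removing the fitted height contributions (`s_lin*h + s_quad*h**2`) from `d` before inversion mirrors the filtering step the abstract describes.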
Abstract:
BACKGROUND AND PURPOSE: This study aims to determine whether perfusion computed tomographic (PCT) thresholds for delineating the ischemic core and penumbra are time dependent or time independent in patients presenting with symptoms of acute stroke. METHODS: Two hundred seventeen patients were evaluated in a retrospective, multicenter study. Patients were divided into those with either persistent occlusion or recanalization. All patients received admission PCT and follow-up imaging to determine the final ischemic core, which was then retrospectively matched to the PCT images to identify optimal thresholds for the different PCT parameters. These thresholds were assessed for significant variation over time since symptom onset. RESULTS: In the persistent occlusion group, optimal PCT parameters that did not significantly change with time included absolute mean transit time, relative mean transit time, relative cerebral blood flow, and relative cerebral blood volume when time was restricted to 15 hours after symptom onset. Conversely, the recanalization group showed no significant time variation for any PCT parameter at any time interval. In the persistent occlusion group, the optimal threshold to delineate the total ischemic area was the relative mean transit time at a threshold of 180%. In patients with recanalization, the optimal parameter to predict the ischemic core was relative cerebral blood volume at a threshold of 66%. CONCLUSIONS: Time does not influence the optimal PCT thresholds to delineate the ischemic core and penumbra in the first 15 hours after symptom onset for relative mean transit time and relative cerebral blood volume, the optimal parameters to delineate ischemic core and penumbra.
Abstract:
A fundamental trait of the human self is its continuous experience of space and time. Perceptual aberrations of this spatial and temporal continuity are a major characteristic of schizophrenia spectrum disturbances--including schizophrenia, schizotypal personality disorder and schizotypy. We have previously found the classical Perceptual Aberration Scale (PAS) scores, related to body and space, to be positively correlated with both behavior and temporo-parietal activation in healthy participants performing a task involving self-projection in space. However, not much is known about the relationship between temporal perceptual aberration, behavior and brain activity. To this end, we composed a temporal Perceptual Aberration Scale (tPAS) similar to the traditional PAS. Testing on 170 participants suggested similar performance for PAS and tPAS. We then correlated tPAS and PAS scores with participants' performance and neural activity in a task of self-projection in time. tPAS scores correlated positively with reaction times across task conditions, as did PAS scores. Evoked potential mapping and electrical neuroimaging showed self-projection in time to recruit a network of brain regions in the left anterior temporal cortex, right temporo-parietal junction, and occipito-temporal cortex, and the duration of activation in this network positively correlated with tPAS and PAS scores. These data demonstrate that schizotypal perceptual aberrations of both time and space, as reflected by tPAS and PAS scores, are positively correlated with performance and brain activation during self-projection in time in healthy individuals along the schizophrenia spectrum.
Abstract:
Organizations across the globe are creating and distributing products that include open source software. To ensure compliance with the open source licenses, each company needs to evaluate exactly what open source licenses and copyrights are included - resulting in duplicated effort and redundancy. This talk will provide an overview of a new Software Package Data Exchange (SPDX) specification. This specification will provide a common format to share information about the open source licenses and copyrights that are included in any software package, with the goal of saving time and improving data accuracy. This talk will review the progress of the initiative; discuss the benefits to organizations using open source and share information on how you can contribute.
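For illustration, a minimal SPDX document in the tag-value format might look like the following. Note this is a hedged sketch: the field names follow the later SPDX 2.x specification rather than the draft the talk describes, and all values are hypothetical.

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-package-1.0
PackageName: example-package
PackageVersion: 1.0
PackageLicenseDeclared: Apache-2.0
PackageCopyrightText: <text>Copyright 2015 Example Authors</text>
```

Exchanging a file like this alongside a software package is what lets each downstream organization reuse, rather than redo, the license and copyright audit.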
Abstract:
We have investigated the phenomenon of deprivation in contemporary Switzerland through a multidimensional, dynamic approach. By applying Self-Organizing Maps (SOM) to a set of 33 non-monetary indicators from the 2009 wave of the Swiss Household Panel (SHP), we identified 13 prototypical forms (or clusters) of well-being, financial vulnerability, psycho-physiological fragility and deprivation within a topological dimensional space. New data from the previous waves (2003 to 2008) were then classified by the SOM model, making it possible to estimate the weight of the different clusters over time and reconstruct the dynamics of stability and mobility of individuals within the map. Looking at the transition probabilities between year t and year t+1, we observed that the paths of mobility which catalyze the largest number of observations are those connecting clusters that are adjacent on the topological space.
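The SOM step described above (training a grid of prototype vectors on indicator data, then assigning each observation to its best-matching unit) can be sketched in plain NumPy. The data, map size and training schedule below are synthetic assumptions, not the SHP indicators or the 13-cluster map.

```python
import numpy as np

# Minimal self-organizing map sketch (illustrative, NOT the SHP analysis).
rng = np.random.default_rng(2)
data = rng.normal(size=(300, 5))               # 300 respondents, 5 indicators

rows, cols = 3, 3                              # 3x3 map -> 9 prototype units
weights = rng.normal(size=(rows * cols, 5))    # prototype vectors
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                # decaying learning rate
    sigma = 1.5 * (1 - epoch / 20) + 0.3       # decaying neighbourhood radius
    for x in data:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)       # grid distance to BMU
        h = np.exp(-dist2 / (2 * sigma ** 2))               # neighbourhood weights
        weights += lr * h[:, None] * (x - weights)          # pull prototypes toward x

# Classify each observation by its best-matching unit (its "cluster")
clusters = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
print("cluster sizes:", np.bincount(clusters, minlength=rows * cols))
```

Classifying the earlier waves with the already-trained map, as the abstract describes, amounts to running only the final assignment step on the new data without updating `weights`.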