925 results for Timing errors
Abstract:
The first part of my thesis presents an overview of the different approaches used over the past two decades in the attempt to forecast epileptic seizures on the basis of intracranial and scalp EEG. Past research has revealed some value in linear and nonlinear algorithms for detecting EEG features that change across the different phases of the epileptic cycle. However, their exact value for seizure prediction, in terms of sensitivity and specificity, is still debated and remains to be evaluated. In particular, the monitored EEG features may fluctuate with the vigilance state and lead to false alarms. Recently, such a dependency on vigilance states has been reported for some seizure prediction methods, suggesting a reduced reliability. An additional factor limiting the application and validation of most seizure-prediction techniques is their computational load. For the first time, the reliability of permutation entropy (PE) for seizure prediction was verified on scalp EEG data, while simultaneously controlling for its dependency on different vigilance states. PE was recently introduced as an extremely fast and robust complexity measure for chaotic time series and is thus suitable for online application even in portable systems. The capability of PE to distinguish between preictal and interictal states was evaluated using Receiver Operating Characteristic (ROC) analysis. Correlation analysis was used to assess the dependency of PE on vigilance states. Scalp EEG data from two patients with right temporal lobe epilepsy (RTLE) and from one patient with right frontal lobe epilepsy were analysed. The last patient was included only in the correlation analysis, since no datasets including seizures were available for him. The ROC analysis showed a good separability of interictal and preictal phases for both RTLE patients, suggesting that PE could be sensitive to EEG modifications, not detectable by visual inspection, that may occur well in advance of the EEG and clinical onset of seizures. However, the simultaneous assessment of the changes in vigilance showed that: a) all seizures occurred in association with a transition of vigilance states; b) PE was sensitive in detecting different vigilance states, independently of seizure occurrence. Owing to the limitations of the datasets, these results cannot rule out the capability of PE to detect preictal states. However, the good separability between preictal and interictal phases might depend exclusively on the coincidence of epileptic seizure onset with a transition from a state of low vigilance to a state of increased vigilance. The dependency of PE on the vigilance state is an original finding, not previously reported in the literature, and suggests the possibility of classifying vigilance states by means of PE in an automatic and objective way. The second part of my thesis describes a novel behavioral task based on motor imagery skills, first introduced in Bruzzo et al. (2007), designed to study the mental simulation of biological and non-biological movements in paranoid schizophrenic patients (PS). Immediately after the presentation of a real movement, participants had to imagine or re-enact that very same movement. By key release and key press, respectively, participants indicated when they started and ended the mental simulation or the re-enactment, making it possible to measure the duration of the simulated or re-enacted movements.
The proportional error between the duration of the re-enacted/simulated movement and that of the template movement was compared across conditions, as well as between PS and healthy subjects. Results revealed a double dissociation between the mechanisms of mental simulation involved in biological and non-biological movements: PS made large errors when simulating biological movements, while being more accurate than healthy subjects when simulating non-biological movements; healthy subjects showed the opposite pattern, making errors when simulating non-biological movements but being most accurate when simulating biological movements. However, the good timing precision during re-enactment of the movements, in all conditions and in both groups of participants, suggests that perception, memory and attention, as well as motor control processes, were not affected. Based on a long history of literature reporting the existence of psychotic episodes in epileptic patients, a longitudinal study using a slightly modified behavioral paradigm was carried out with two RTLE patients, one patient with idiopathic generalized epilepsy and one patient with extratemporal lobe epilepsy. Results provide strong evidence that upcoming seizures in RTLE patients can be predicted behaviorally. In the last part of the thesis, a behavioural strategy based on neurobiofeedback training was validated as a means to voluntarily control seizures and reduce their frequency. Three epileptic patients were included in this study. The biofeedback was based on the monitoring of slow cortical potentials (SCPs) extracted online from the scalp EEG. Patients were trained to produce positive shifts of SCPs. After a training phase, patients were monitored for 6 months in order to validate the ability of the learned strategy to reduce seizure frequency. Two of the three refractory epileptic patients recruited for this study showed improvements in self-management and a reduction of ictal episodes, even six months after the last training session.
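For readers unfamiliar with the complexity measure used in the first part of this abstract, the following Python sketch computes normalized permutation entropy for a one-dimensional signal. It is a generic implementation of the Bandt-Pompe measure under assumed default parameters (embedding order 3, lag 1), not the code used in the thesis.

```python
import numpy as np
from math import factorial

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy (Bandt & Pompe) of a 1-D signal.

    Each window of `order` samples (spaced by `delay`) is mapped to the
    ordinal pattern of its ranks; PE is the Shannon entropy of the pattern
    distribution, normalized to [0, 1] by log2(order!).
    """
    x = np.asarray(signal, dtype=float)
    n_windows = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n_windows):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))          # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_windows
    entropy = -np.sum(p * np.log2(p))                # Shannon entropy of patterns
    return entropy / np.log2(factorial(order))

# Example: PE close to 1 for white noise, close to 0 for a predictable ramp.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(5000)))
print(permutation_entropy(np.arange(5000)))
```

The two example calls illustrate the behaviour that makes PE attractive as a fast complexity measure: high values for irregular signals and low values for regular ones.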
Abstract:
Satellite SAR (Synthetic Aperture Radar) interferometry is a valid technique for digital elevation model (DEM) generation, providing metric accuracy even without good-quality ancillary data. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that may have occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the former or a displacement map from the latter. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, can introduce large errors in the final product when it is interpreted as topography during the phase-to-height conversion step. Considering a glacier, and assuming that its flow velocity is linear, it is therefore necessary to differentiate at least two pairs in order to isolate the topographic residual alone. In the case of an ice shelf, the displacement component in the interferometric phase is determined not only by the flow of the glacier but also by the different tide heights at the two acquisitions. In fact, even if the two scenes of the interferometric pair are acquired at the same time of day, only the main tidal terms disappear in the interferogram, while the smaller ones do not cancel out completely and therefore appear as displacement fringes. Given the availability of tide gauges (or, alternatively, of an accurate tidal model), it is possible to compute a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction can be applied only if the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied to generate the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement has been determined both interferometrically and from the coregistration offsets of the two scenes. Particular attention has been devoted to investigating the role of some parameters, such as timing annotations and orbit reliability. Results have been validated in a GIS environment by comparison with GPS displacement vectors (displacement map and InSAR DEM) and ICESat GLAS points (InSAR DEM).
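As a compact reminder of the differential approach outlined above, the interferometric phase can be decomposed, in generic textbook InSAR notation (these symbols are not the thesis' own), into a sum of contributions; differencing two interferograms that share the same displacement contribution leaves only the topographic term:

```latex
\begin{align}
\phi_{\mathrm{int}} &= \phi_{\mathrm{flat}} + \phi_{\mathrm{topo}} + \phi_{\mathrm{disp}}
                      + \phi_{\mathrm{atm}} + \phi_{\mathrm{noise}},\\
\phi_{\mathrm{topo}} &= \frac{4\pi}{\lambda}\,\frac{B_{\perp}\,h}{R\sin\theta},
\qquad
\phi_{\mathrm{disp}} = \frac{4\pi}{\lambda}\,d_{\mathrm{LOS}},\\
\Delta\phi_{12} &= \phi_{1} - \phi_{2}
  \approx \frac{4\pi}{\lambda}\,\frac{\bigl(B_{\perp,1}-B_{\perp,2}\bigr)\,h}{R\sin\theta}
  \quad\text{if } d_{\mathrm{LOS},1} = d_{\mathrm{LOS},2}.
\end{align}
```

Here \(B_\perp\) is the perpendicular baseline, \(h\) the topographic height, \(R\) the slant range, \(\theta\) the incidence angle and \(d_{\mathrm{LOS}}\) the line-of-sight displacement; the condition \(d_{\mathrm{LOS},1}=d_{\mathrm{LOS},2}\) corresponds to the assumption of steady glacier flow sampled over equal temporal baselines.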
Abstract:
In light of what happened in 2010 and 2011, many European countries found themselves in a difficult position in which all the credit rating agencies were downgrading sovereign debt. Problems of solvency and of guarantees on government bonds were perceived as too risky for a monetary union such as Europe. The fear of contagion from Greece also threatened other countries such as Italy, Spain, Portugal and Ireland, while Germany and France asked for a separation between risky and riskless bonds in order to feel safer. Our paper draws inspiration from Roch and Uhlig (2011), refers to the Argentinian case examined by Arellano (2008), and examines possible interventions, such as monetization or bailouts, as proposed by Cole and Kehoe (2000). We propose a model in which a state defaults and cannot repay a fraction of its old bonds; but contrary to Roch and Uhlig, who considered a one-time cost of default, we treat default as an accumulation of losses, in the form of unpaid fractions of the old debts. Our contribution to the literature is that default immediately implies that the economy faces a bad period and that, by accumulating losses, the government will be worse off. We study a function for this accumulation of debt period by period, in order to gauge the magnitude of the waste of resources the economy faces when it experiences a default. Our thesis is that bailouts merely postpone the day of reckoning (Roch and Uhlig); it is therefore better to default before accumulating large debts. What Europe needs now is the introduction of new reforms within a controlled default, in which the Eurozone is preserved in its integrity and a state can fail with the future promise of a resurrection. As experience shows, governments are not interested in reducing debt as long as there are ECB interventions. This clearly creates a distortion between countries in the same monetary union, giving the states only an illusion about their future debtor position.
Abstract:
Timing of waiting-list entry for patients with cystic fibrosis in need of pulmonary transplant: the experience of a regional referral centre. Objective: Evaluation of parameters that can predict a rapid decline in the general condition of patients affected by Cystic Fibrosis (CF) who do not yet meet specific criteria for candidacy for pulmonary transplant. Material and methods: Fifteen patients with CF who died of complications and 8 who underwent lung transplantation in the 2000-2010 decade were enrolled. Clinical data from 2 years before the event (body mass index, FEV1%, number of intravenous antibiotic treatments per year, colonization with methicillin-resistant Staphylococcus aureus (MRSA), mucoid Pseudomonas aeruginosa, Burkholderia cepacia, allergic pulmonary aspergillosis) were compared between the 2 groups. Results: Mean FEV1% was significantly higher and the mean number of antibiotic treatments was lower in deceased than in transplanted patients (p<0.002 and p<0.001, respectively). Although patients who died did not meet the criteria for entering the transplant list 2 years before death, suggestive findings such as low BMI (17.3), a high incidence of hepatic pathology (33.3%), diabetes (50%), and infection with MRSA (25%), Pseudomonas aeruginosa (83.3%) and Burkholderia cepacia (8.3%) were found, with no statistical difference from transplanted patients, suggesting that these patients were at risk of a severe prognosis. Among the patients who died, females were twice as numerous as males. Conclusion: When evaluating patients with CF, negative prognostic factors such as the ones investigated in this study should be considered in order to select individuals at high mortality risk who need a stricter therapeutic approach and follow-up. Inclusion of these patients in the transplant waiting list should be taken into account.
Abstract:
This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main aim of the thesis is to study, devise and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been addressed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in OFDM systems an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol-timing and carrier-frequency synchronization algorithm has been presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and is characterized by low complexity. The performance of this approach has been evaluated both analytically and numerically. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver-design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the Broadband Satellite Return Channel has been assessed.
Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot-tone arrangement within the SC-FDMA frame, together with a novel Joint Multi-User (JMU) estimation method for SC-FDMA, has been proposed. As shown by numerical results, the proposed scheme manages to satisfy the strict synchronization requirements and to guarantee proper demodulation of the received signal.
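As a minimal sketch of the PAPR metric that the RISM technique aims to minimize, the Python snippet below computes the PAPR of a plain OFDM symbol obtained from randomly mapped QPSK subcarriers. Names and parameters are illustrative only and do not reproduce the thesis' algorithm.

```python
import numpy as np

def papr_db(time_signal):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    power = np.abs(time_signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Baseline OFDM symbol: N random QPSK symbols sent through an IFFT.
N = 256                                           # number of subcarriers (assumed)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)      # scaled to unit average power

print(f"PAPR of a plain OFDM symbol: {papr_db(ofdm_symbol):.1f} dB")
# Conventional OFDM typically lands around 10 dB here; RISM is reported to
# bring the envelope close to constant (PAPR around 1 dB).
```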
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployed on it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming progressively less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute first in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
Abstract:
Turfgrasses are ubiquitous in the urban landscape, and their role in the carbon (C) cycle is increasingly important, also because of the considerable footprint related to their management practices. It is crucial to understand the mechanisms driving the C assimilation potential of these terrestrial ecosystems. Several approaches have been proposed to assess C dynamics: micro-meteorological methods, small-chamber enclosure systems (SC), the chrono-sequence approach and various models. Natural and human-induced variables influence turfgrass C fluxes. Species composition, environmental conditions, site characteristics, former land use and agronomic management are the most important factors considered in the literature as driving C sequestration potential. At the same time, the different approaches seem to influence C budget estimates. In order to study the effect of different management intensities on turfgrass, we estimated net ecosystem exchange (NEE) through an SC approach on one hole of a golf course in the province of Verona (Italy) for one year. The SC approach offered several advantages but also limitations, related to the frequency, timing and duration of the measurements over time, and to the methodological errors of the measuring system. Daily CO2 fluxes changed according to the intensity of maintenance, likely because of the different inputs and disturbances affecting biogeochemical cycles, combined with differences in leaf area index (LAI). The annual cumulative NEE decreased as the intensity of management increased. NEE followed the seasonality of the turfgrass, tracking temperatures and physiological activity. Generally, during the growing season CO2 fluxes towards the atmosphere exceeded the C sequestered. The cumulative NEE showed a system close to a steady state for C dynamics. In the final part, greenhouse gas (GHG) emissions due to fossil fuel consumption for turfgrass upkeep were estimated, showing that turfgrass may turn out to be a considerable C source. The C potential of trees and shrubs needs to be considered to obtain a complete budget.
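As an illustrative sketch of how a closed-chamber (SC) measurement is typically converted into a CO2 flux, the snippet below fits the concentration change over time and scales it by chamber geometry and air density. The function name, chamber dimensions and readings are hypothetical, not values from this study.

```python
import numpy as np

def chamber_co2_flux(times_s, co2_ppm, volume_m3, area_m2,
                     pressure_pa=101325.0, temp_k=293.15):
    """CO2 flux (micromol m^-2 s^-1) from a closed-chamber concentration series.

    A linear fit gives dC/dt in ppm/s; the ideal gas law converts the chamber
    air volume to moles, and the result is normalized by the covered area.
    """
    R = 8.314                                    # J mol^-1 K^-1
    dCdt_ppm_s = np.polyfit(times_s, co2_ppm, 1)[0]
    mol_air = pressure_pa * volume_m3 / (R * temp_k)
    return dCdt_ppm_s * mol_air / area_m2        # ppm = micromol per mol of air

# Hypothetical 2-minute enclosure with readings every 10 s.
t = np.arange(0, 130, 10)
c = 410 + 0.05 * t                               # ppm rising at 0.05 ppm/s
print(chamber_co2_flux(t, c, volume_m3=0.03, area_m2=0.09))
```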
Abstract:
In this thesis I describe the Timing Attack against the RSA cryptosystem: how it works, the theory it is based on, its strengths and its weak points. This particular kind of attack was first presented by Paul C. Kocher in 1996 at the RSA Data Security and CRYPTO conferences. In his paper "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", the author reveals a new possible flaw in the RSA system, one that does not depend on purely mathematical weaknesses of the cryptosystem but on an aspect nobody had focused on before: the execution time of cryptographic operations. The idea is as simple as it is ingenious: every operation in a computer takes a certain amount of time. The variations in the time the computer needs to carry out its operations necessarily depend on the algorithm, and therefore on the private keys and on the particular input provided. By measuring these time variations and using only statistical tools, Kocher shows that it is possible to obtain information about the implementation of the cryptosystem and thus to break RSA and other security systems, without ever touching the mathematical side of the algorithm. Statistics therefore becomes central to this theory, because many variables can influence the computation time during decryption: the design of the cryptosystem; how long the CPU takes to execute the process; the algorithm used and the kind of implementation; the precision of the measurements; and so on. To have a better chance of successfully attacking the system, it is therefore necessary to run repeated trials with the same key and different inputs, performing statistical correlation analyses of the timing information, up to the point of fully recovering the private key. As Kocher states: "Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext", i.e., against vulnerable systems the attack is computationally cheap and often only requires knowing ciphertexts and the times needed for their decryption.
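A toy simulation, purely illustrative and not Kocher's actual key-recovery procedure, of how data-dependent running time can leak information about a secret exponent: in the model below, each 1-bit of the key costs an extra "multiply" step, so averaging many noisy timing measurements reveals the Hamming weight of the key. All names and timing constants are hypothetical.

```python
import random
import statistics

# Toy model of square-and-multiply: every key bit costs one squaring, and
# 1-bits cost an additional multiplication; measurements are noisy.
SQUARE_COST = 1.0      # arbitrary time units per squaring
MULTIPLY_COST = 0.4    # extra cost paid only when the key bit is 1
NOISE_SD = 2.0         # measurement noise (standard deviation)

def timed_decryption(secret_bits, rng):
    work = sum(SQUARE_COST + (MULTIPLY_COST if b else 0.0) for b in secret_bits)
    return work + rng.gauss(0.0, NOISE_SD)

rng = random.Random(42)
secret = [rng.randint(0, 1) for _ in range(64)]

# Many timed "decryptions": the mean leaks the number of 1-bits in the key.
samples = [timed_decryption(secret, rng) for _ in range(20000)]
estimated_ones = (statistics.mean(samples) - 64 * SQUARE_COST) / MULTIPLY_COST
print("true number of 1-bits:", sum(secret))
print("estimate from timings:", round(estimated_ones, 1))
```

The full attack goes further, correlating input-dependent timing predictions with measured times to recover the key bit by bit, but the sketch captures the core observation: execution time is a side channel.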
Abstract:
The aSPECT spectrometer was designed to measure the spectrum of the protons emitted in the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can then be determined with high accuracy. The goal of the experiment is to determine this coefficient with an absolute relative error below 0.3%, i.e. well below the current literature value of 5%. First measurements with the aSPECT spectrometer were carried out at the Heinz Maier-Leibnitz research neutron source in Munich. However, time-dependent instabilities of the measurement background prevented a new determination of "a". The present work is instead based on the latest measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements the background instabilities were already considerably reduced. Furthermore, several modifications were made in order to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and other systematic errors could be identified and reduced, and in some cases even eliminated, which will benefit future beam times at aSPECT. The main part of the present work deals with the analysis and improvement of the systematic errors caused by the electromagnetic field of aSPECT. This led to numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field could even be minimized to such an extent that an improvement of the current literature value of "a" is now possible. In addition, an NMR magnetometer tailored to the experiment was developed and refined in this work, so that the uncertainties in the characterization of the magnetic field have been reduced to a level at which they are negligible for a determination of "a" with an accuracy of at least 0.3%.
Abstract:
The aim of this dissertation is to show the power of contrastive analysis in successfully predicting the errors a language learner will make by means of a concrete case study. First, there is a description of what language transfer is and why it is important in the matter of second language acquisition. Second, a brief explanation of the history and development of contrastive analysis will be offered. Third, the focus of the thesis will move to an analysis of errors usually made by language learners. To conclude, the dissertation will focus on the concrete case study of a Russian learner of English: after an analysis of the errors the student is likely to make, a recorded conversation will be examined.
Abstract:
The aim of this dissertation is to provide a translation from English into Italian of an extract from the research report “The Nature of Errors Made by Drivers”. The research was conducted by the MUARC (the Monash University Accident Research Centre) and published in June 2011 by Austroads, the association of Australasian road transport and traffic agencies. The excerpt chosen for translation is the third chapter, which provides an overview of the on-road pilot study conducted to analyse why drivers make mistakes during their everyday drive, including the methodology employed and the results obtained. This work is divided into six sections. It opens with an introduction on the topic and the formal structure of the report, followed by the first chapter, which provides an overview of the main features of the languages for special purposes and the specialised texts, an analysis of the text type and a presentation of the extract chosen for translation. In the second chapter the linguistic and extralinguistic resources available to specialised translators are presented, focussing on the ones used to translate the text. The third chapter is dedicated to the source text and its translation, while the fourth one provides an analysis of the strategies chosen to translate the text and a comment on the solutions to problematic passages. Finally, the last section – the conclusion – provides a comment on the entire work and on the professional activity of translators. The work closes with an appendix, which contains a glossary of the terms extracted from the translated text.
Abstract:
This work presents a multi-level study of the essential elements of the asynchronous pacemaker, following the circuit implementation proposed by Wilson Greatbatch in 1960. The first level concerns the theoretical analysis of the circuit. The second level consists of an analysis carried out with LTSPICE. With the same program, the timing signal and the waveform across the load were analysed as the values of some key components of the circuit were varied. Finally, the circuit was built on a breadboard.
Abstract:
Antisaccade errors are attributed to a failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response that a participant has before the trial begins also contributes to the error rate. Participants performed antisaccades in five conditions. The traditional design had two goals, on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. The antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information and a trend for stimulus-goal confusability. A control experiment that increased the angular separation between targets without changing these types of prior response information showed no effect on latency or error rate. We conclude that factors other than prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation.