927 results for timing constraint
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material, and schedule costs. Factoring reusable functional logic out of the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on how software is deployed to it. We then focus on the role played by the real-time operating system. Initially we consider single-core processors and, becoming progressively less permissive about the admissible hardware features, devise solutions that restore a convincing degree of time composability. To demonstrate the approach in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
Abstract:
In this thesis I describe the Timing Attack on the RSA cryptosystem: how it works, the theory it rests on, its strengths, and its weaknesses. This particular kind of attack was first presented by Paul C. Kocher in 1996 at the RSA Data Security and CRYPTO conferences. In his paper "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", the author reveals a possible new flaw in RSA that depends not on purely mathematical weaknesses of the cryptosystem, but on an aspect nobody had examined before: the execution time of cryptographic operations. The idea is as simple as it is ingenious: every operation in a computer takes a certain amount of time. Variations in the time the computer needs to carry out its operations necessarily depend on the algorithm, and therefore on the private keys and on the particular input supplied. By measuring these timing variations and using nothing but statistical tools, Kocher shows that it is possible to obtain information about the implementation of the cryptosystem and thus break RSA and other security systems, without ever touching the mathematical side of the algorithm. Statistics therefore becomes central to this theory, because many variables can affect the computation time of the decryption phase:
- the design of the cryptographic system;
- how long the CPU takes to execute the process;
- the algorithm used and the kind of implementation;
- the precision of the measurements;
- and so on.
To maximize the chances of a successful attack, one must therefore run repeated trials with the same key and different inputs, and perform statistical correlation analysis on the timing information, up to the point of completely recovering the private key. As Kocher puts it: "Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext." That is, against vulnerable systems the attack is computationally cheap and often requires only known ciphertexts and the times needed to decrypt them.
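
To make the statistical core of the attack concrete, the following is a minimal, self-contained Python sketch of the idea, not Kocher's actual procedure: the timing model, the constants, and all names are illustrative assumptions. Each set bit of a toy secret exponent adds an input-dependent multiply cost to the measured decryption time, and the attacker recovers the bits one at a time by choosing, at each position, the guess whose predicted cost leaves the timing residuals with the smaller variance across many timed decryptions under the same key.

```python
import random
import statistics

SECRET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # toy private-exponent bits
noise = random.Random(42)                        # measurement-noise source

def multiply_cost(message, i):
    """Input-dependent cost of the conditional multiply at bit i (toy model)."""
    return random.Random(message * 10007 + i).uniform(5.0, 15.0)

def measured_time(message):
    """Simulated decryption time: one multiply per set key bit, plus noise."""
    total = sum(multiply_cost(message, i) for i, bit in enumerate(SECRET) if bit)
    return total + noise.gauss(0.0, 2.0)

messages = range(3000)                           # repeated trials, same key
timings = {m: measured_time(m) for m in messages}

# Recover the exponent bit by bit: the correct guess explains part of each
# measured time, so it leaves residuals with the smaller variance.
recovered = []
for i in range(len(SECRET)):
    variance = {}
    for guess in (0, 1):
        trial = recovered + [guess]
        residuals = [timings[m]
                     - sum(multiply_cost(m, j) for j, b in enumerate(trial) if b)
                     for m in messages]
        variance[guess] = statistics.pvariance(residuals)
    recovered.append(min(variance, key=variance.get))

print("secret bits   :", SECRET)
print("recovered bits:", recovered)
```

On this toy model a few thousand timings collected under the same key suffice to recover every bit, mirroring the repeated-trials-plus-correlation procedure described above.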
Abstract:
Recent research has shown that a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of (possibly on-average slower) algorithms. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis, and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for building COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that also proved competitive in the MiniZinc Challenge, the reference competition for CP solvers.
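
As a rough illustration of the per-instance selection that a portfolio solver performs, here is a minimal Python sketch. It is a plain k-nearest-neighbour selector on made-up features, solver names, and runtimes; it is not the SUNNY algorithm underlying sunny-cp, which is also k-NN-based but schedules several solvers rather than picking a single one.

```python
import math

# Toy training set: per-instance feature vectors and the runtime (seconds)
# each constituent solver achieved on that instance. All values are invented.
TRAIN = [
    ((0.2, 0.9, 120.0),  {"gecode": 3.1,  "chuffed": 0.4,  "ortools": 0.9}),
    ((0.8, 0.1, 4500.0), {"gecode": 55.0, "chuffed": 12.0, "ortools": 2.5}),
    ((0.7, 0.2, 3900.0), {"gecode": 60.0, "chuffed": 20.0, "ortools": 3.0}),
    ((0.1, 0.8, 90.0),   {"gecode": 2.0,  "chuffed": 0.5,  "ortools": 1.5}),
]

def select_solver(features, k=3):
    """Pick the solver with the lowest mean runtime on the k training
    instances nearest (in Euclidean distance) to the new instance."""
    nearest = sorted(TRAIN, key=lambda row: math.dist(features, row[0]))[:k]
    solvers = nearest[0][1].keys()
    return min(solvers, key=lambda s: sum(row[1][s] for row in nearest) / k)

print(select_solver((0.75, 0.15, 4200.0)))   # -> "ortools" on this toy data
```

In a real portfolio the features would be extracted from the instance itself (e.g. by a feature extractor such as mzn2feat) and the runtimes taken from benchmark runs of the constituent solvers.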
Abstract:
This work presents a study, at several levels, of the essential elements of the asynchronous pacemaker as realized in the circuit proposed by Wilson Greatbatch in 1960. The first level concerns the theoretical analysis of the circuit. The second level concerns an analysis carried out with LTSPICE; with the same program, the timing signal and the waveform across the load were examined while varying the values of some key components of the circuit. Finally, the circuit was built on a breadboard.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense, which in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models are proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system is treated as a linear second-order system with PD control, with the parameters of this second-order system extracted from real transient data. The model is shown to be the best choice among a list of appropriate candidates, such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. Emission predictions based on air-handling parameters predicted by the dynamic constraint model are shown not to differ significantly from corresponding predictions based on measured air-handling parameters.
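
As an illustration of what such a dynamic constraint model computes, here is a minimal Python sketch. The canonical second-order form, the forward-Euler integration, and the wn/zeta values below are illustrative assumptions standing in for the thesis's identified PD-controlled model: the point is that a commanded trajectory is filtered into the trajectory that is actually achievable in a transient sense.

```python
def achieved_trajectory(commanded, dt=0.01, wn=2.0, zeta=0.7):
    """Filter a commanded air-handling trajectory through a linear
    second-order response  y'' + 2*zeta*wn*y' + wn^2 * y = wn^2 * u,
    integrated with forward Euler; wn and zeta would be identified
    from measured transient data."""
    y, v = commanded[0], 0.0          # state: value and its rate of change
    achieved = []
    for u in commanded:
        a = wn * wn * (u - y) - 2.0 * zeta * wn * v
        v += a * dt
        y += v * dt
        achieved.append(y)
    return achieved

# Example: a commanded boost-pressure step from 1.2 to 1.8 bar; the achieved
# value lags the command, which is what constrains the transient optimizer.
step = [1.2] * 100 + [1.8] * 400
print(f"achieved 1 s after the step: {achieved_trajectory(step)[199]:.3f} bar")
```

The optimizer is then constrained to operate on the achieved values rather than on the instantaneous commands, which keeps the optimized air-handling parameters physically reachable at every instant.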
Abstract:
Detrital zircon and igneous zircon U-Pb ages are reported from Proterozoic metamorphic rocks in northern New Mexico. These data give new insight into the provenance and depositional age of a >3-km-thick metasedimentary succession and help resolve the timing of orogenesis within an area of overlapping accretionary orogens and thermal events related to the Proterozoic tectonic evolution of southwest Laurentia. Three samples from the Paleoproterozoic Vadito Group yield narrow, unimodal detrital zircon age spectra with peak ages near 1710 Ma. Igneous rocks that intrude the Vadito Group include the Cerro Alto metadacite, the Picuris Pueblo granite, and the Peñasco quartz monzonite, and yield crystallization ages of 1710 ± 10 Ma, 1699 ± 3 Ma, and 1450 ± 10 Ma, respectively. Within the overlying Hondo Group, a metamorphosed tuff layer from the Pilar Formation yields an age of 1488 ± 6 Ma and represents the first direct depositional age constraint on any part of the Proterozoic metasedimentary succession in northern New Mexico. Detrital zircons from the overlying Piedra Lumbre Formation yield a minimum age peak of 1475 Ma, and ~60 grains (~25%) yield ages between 1500 Ma and 1600 Ma, possibly representing non-Laurentian detritus originating from Australia and/or Antarctica. Detrital zircons from the basal metaconglomerate and the middle quartzite member of the Marquenas Formation yield minimum age peaks of 1472 Ma and 1471 Ma, consistent with earlier results. We interpret the onset of ca. 1490-1450 Ma deposition followed by tectonic burial, regional Al2SiO5 triple-point metamorphism, and ductile deformation at depths of 12-18 km to reflect a Mesoproterozoic contractional orogenic event, possibly related to the final suturing of the Mazatzal crustal province to the southern margin of Laurentia. We propose to call this event the Picuris orogeny.
Abstract:
Cerebrovascular accidents (CVA) are considered among the most serious adverse events after transcatheter aortic valve implantation (TAVI). The objective of the present study was to evaluate the frequency and timing of CVA after TAVI and to investigate their impact on clinical outcomes within 30 days of the procedure.