931 results for cryptographic pairing computation, elliptic curve cryptography
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, analyses were performed to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and on the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable, although the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison with manual selection made it possible to identify the best automatic algorithm for computing BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion map was analyzed. The ROI approach and the map approach provide correlated values of BFa, which indicates that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives the better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
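As a rough illustration of the first of the two models, here is a minimal sketch of the maximum slope method (assuming densely sampled, noise-free TACs; the synthetic curves and variable names are hypothetical, not the thesis's implementation): BFa is estimated as the maximum slope of the tissue TAC divided by the peak arterial enhancement.

```python
import numpy as np

def bfa_max_slope(t, tac_tissue, tac_aorta):
    """Arterial blood flow via the maximum slope method:
    max d(tissue TAC)/dt divided by the peak of the arterial TAC.
    Assumes densely sampled, already denoised curves."""
    slope = np.gradient(tac_tissue, t)      # d(TAC)/dt at each sample
    return slope.max() / tac_aorta.max()    # 1/s (rescale to mL/min/100 mL as needed)

# Synthetic example: gamma-variate-like arterial input, slower tissue response
t = np.linspace(0, 60, 601)                      # seconds
tac_aorta = 200 * (t / 10) * np.exp(1 - t / 10)  # peaks at 200 HU, t = 10 s
tac_tissue = 30 * (t / 25) * np.exp(1 - t / 25)  # lower-amplitude, delayed curve
print(f"BFa estimate: {bfa_max_slope(t, tac_tissue, tac_aorta):.4f} 1/s")
```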
Abstract:
This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points on each connected component of the intersection curve are determined. The required computations are realised efficiently, by solving polynomials of degree at most four, and exactly, by using exact arithmetic. The numeric part includes algorithms for tracing each component of the intersection curve, starting from the previously computed reference points. Using interval arithmetic, numerical failures such as jumping between branches or skipping parts of the curve are prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the true intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
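The tracing step can be illustrated, in a much simplified setting, by a predictor-corrector march along an implicit planar curve. The sketch below traces the intersection of a torus with a horizontal plane z = c, reduced to an implicit curve g(x, y) = 0; this is a generic illustration, not the thesis's certified interval-arithmetic algorithm.

```python
import numpy as np

R, r, c = 2.0, 0.5, 0.25      # major/minor torus radius, cutting plane z = c

def g(p):                      # torus cut by z = c, as an implicit curve g(x, y) = 0
    x, y = p
    u = x*x + y*y + c*c + R*R - r*r
    return u*u - 4*R*R*(x*x + y*y)

def grad_g(p):
    x, y = p
    u = x*x + y*y + c*c + R*R - r*r
    return np.array([4*x*(u - 2*R*R), 4*y*(u - 2*R*R)])

def trace(p0, h=0.02, steps=200):
    """Euler predictor along the tangent, Newton corrector along the gradient."""
    p, pts = np.array(p0, float), []
    for _ in range(steps):
        n = grad_g(p)
        t = np.array([-n[1], n[0]]) / np.linalg.norm(n)   # unit tangent
        p = p + h * t                                     # predictor
        for _ in range(5):                                # corrector iterations
            n = grad_g(p)
            p = p - g(p) * n / np.dot(n, n)               # Newton step toward the curve
        pts.append(p.copy())
    return np.array(pts)

# start on the outer branch: sqrt(x^2 + y^2) = R + sqrt(r^2 - c^2)
x0 = R + np.sqrt(r*r - c*c)
curve = trace([x0, 0.0])
print("max |g| along traced curve:", np.abs([g(p) for p in curve]).max())
```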
Abstract:
People are daily faced with intertemporal choices, i.e., choices whose consequences differ in their timing, and they frequently prefer smaller-sooner rewards over larger-delayed ones, reflecting temporal discounting of the value of future outcomes. This dissertation addresses two main goals. First, new evidence about the neural bases of intertemporal choice is provided. Following disruption of either the medial orbitofrontal cortex or the insula, the willingness to wait for larger-delayed outcomes is affected in opposite directions, suggesting the causal involvement of these areas in regulating the value computation of rewards available at different points in time. These findings are also supported by an imaging study reported here. Moreover, this dissertation provides new evidence about how temporal discounting can be modulated at the behavioural level through different manipulations, e.g., allowing individuals to think about the distant future, pairing rewards with aversive events, or changing their perceived spatial position. A relationship between intertemporal choice, moral judgements and aging is also discussed. All these findings link together to support a unitary neural model of temporal discounting, according to which signals coming from several cortical (i.e., medial orbitofrontal cortex, insula) and subcortical regions (i.e., amygdala, ventral striatum) are integrated to represent the subjective value of both earlier and later rewards, under the top-down regulation of the dorsolateral prefrontal cortex. The present findings also support the idea that the process of outcome evaluation is strictly related to the ability to pre-experience and envision future events through self-projection, to the anticipation of the visceral feelings associated with receiving rewards, and to the psychological distance from rewards. Furthermore, taking into account the emotions and the state of arousal at the time of decision seems necessary to understand the impulsivity associated with preferring smaller-sooner goods over larger-later ones.
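For reference, behavioural analyses in this literature typically quantify temporal discounting with the standard hyperbolic model (a common modelling choice, not a claim about the specific model fitted in the dissertation):

```latex
% Hyperbolic discounting: subjective value V of a reward of amount A
% delivered after delay D; k is an individual discount-rate parameter.
V(A, D) = \frac{A}{1 + kD}
```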
Abstract:
This work aims to provide a deeper study of the nature and properties of polynomials expressed in the Bernstein basis. Originally introduced at the beginning of the twentieth century to solve the problem of approximating a continuous function on a closed and bounded interval of the real line (Stone-Weierstrass theorem), they met with great success only from the 1960s onwards, when they were applied in computer graphics to construct the so-called Bézier curves. These curves, which inherit their geometric properties from the analytic properties of the Bernstein polynomials, are intuitive and easy to manipulate with interactive software, and they underlie all the most modern curve designs: from industrial design to CAD systems, from the SVG standard to the representation of character fonts.
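As a small illustration of how the Bernstein basis underlies Bézier curves, here is a minimal sketch (standard textbook material, not code from the thesis) evaluating a Bézier curve both via the Bernstein basis and via de Casteljau's algorithm:

```python
import numpy as np
from math import comb

def bezier_bernstein(ctrl, t):
    """Evaluate sum_i B_{i,n}(t) P_i with B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return basis @ ctrl

def bezier_de_casteljau(ctrl, t):
    """Repeated linear interpolation of the control points (numerically stable)."""
    pts = np.array(ctrl, float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # cubic control polygon
for t in (0.0, 0.25, 0.5, 1.0):
    assert np.allclose(bezier_bernstein(ctrl, t), bezier_de_casteljau(ctrl, t))
print(bezier_de_casteljau(ctrl, 0.5))  # point on the curve at t = 0.5
```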
Abstract:
Characterization of transformations and the Frenet-Serret frame
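For reference, the Frenet-Serret frame (T, N, B) of a unit-speed space curve evolves according to the classical formulas:

```latex
% Frenet-Serret formulas for a unit-speed curve,
% with curvature kappa and torsion tau
\frac{dT}{ds} = \kappa N, \qquad
\frac{dN}{ds} = -\kappa T + \tau B, \qquad
\frac{dB}{ds} = -\tau N
```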
Abstract:
A highly dangerous situation for the tractor driver is lateral rollover under operating conditions. Several accidents involving tractor rollover have indeed occurred, making the design of a robust Roll-Over Protective Structure (ROPS) necessary. The aim of the thesis was to evaluate tractor behaviour in the rollover phase so as to calculate the energy absorbed by the ROPS to ensure driver safety. A mathematical model representing the behaviour of a generic tractor during a lateral rollover is proposed, with the possibility of modifying the geometry and the inertia of the tractor as well as the environmental boundary conditions. The purpose is to define a method for predicting the elasto-plastic behaviour of the successive impacts occurring during the rollover. A tyre impact model capable of analysing the influence of the wheels on the energy to be absorbed by the ROPS has also been developed. Different tractor design parameters affecting the rollover behaviour, such as mass and dimensions, have been considered, which permitted the evaluation of their influence on the amount of energy to be absorbed by the ROPS. The mathematical model was designed and calibrated with respect to the results of actual lateral upset tests carried out on a narrow-track tractor. The dynamic behaviour of the tractor and the energy absorbed by the ROPS obtained from the actual tests were shown to match the results of the model developed. The proposed approach represents a valuable tool for understanding the dynamics (kinetic energy) and kinematics (position, velocity, angular velocity, etc.) of the tractor in the phases of lateral rollover and the factors mainly affecting the event. The amount of energy to be absorbed can be predicted with good accuracy for several accident scenarios; the model can thus help in designing protective structures or active safety devices.
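To give an order-of-magnitude idea of the quantity at stake (a generic rigid-body estimate, not the thesis's model), the rotational kinetic energy available at each impact, part of which the ROPS must absorb, is

```latex
% I_a: tractor moment of inertia about the instantaneous rollover axis
% omega: angular velocity at impact
E_k = \tfrac{1}{2}\, I_a\, \omega^2
```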
Abstract:
A study of the singular points of some celebrated curves
Abstract:
In the first part of the thesis, we propose an exactly solvable one-dimensional model for fermions with long-range p-wave pairing decaying with distance as a power law. We study the phase diagram by analyzing the critical lines, the decay of correlation functions and the scaling of the von Neumann entropy with the system size. We find two gapped regimes, in which correlation functions decay (i) exponentially at short range and algebraically at long range, or (ii) purely algebraically; in the latter, the entanglement entropy is found to diverge logarithmically. Most interestingly, along the critical lines long-range pairing also breaks conformal symmetry; this can be detected via the dynamics of entanglement following a quench. In the second part of the thesis, we study the time evolution of the entanglement entropy for the Ising model in a transverse field varying linearly in time at different velocities. We find different regimes: an adiabatic one (small velocities), in which the system evolves according to the instantaneous ground state; a sudden quench (large velocities), in which the system is essentially frozen in its initial state; and an intermediate one, in which the entropy starts growing linearly but then displays oscillations (also as a function of the velocity). Finally, we discuss the Kibble-Zurek mechanism for the transition between the paramagnetic and the ordered phase.
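A Hamiltonian of the type studied in the first part, following the standard form used in the long-range Kitaev-chain literature (the normalisation here is an assumption, not necessarily the thesis's convention), is

```latex
% Long-range p-wave pairing decaying as a power law with exponent alpha;
% d_l is the (possibly ring-regularised) distance between sites j and j+l.
H = -J \sum_{j} \left( c_j^{\dagger} c_{j+1} + \mathrm{h.c.} \right)
    - \mu \sum_{j} \left( n_j - \tfrac{1}{2} \right)
    + \frac{\Delta}{2} \sum_{j,\, l} \frac{1}{d_l^{\alpha}}
      \left( c_j c_{j+l} + c_{j+l}^{\dagger} c_j^{\dagger} \right)
```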
Abstract:
To predict the dose fields around radiological devices, Monte Carlo models (using MCNP5) are developed and validated by means of experimental measurements. The aim of this work is to evaluate the doses received by people operating inside the X-ray room while the tube is on. The tube used is a DI-1000/0.6-1.3 from the Swiss company COMET AG. First, the X-ray emission spectrum was obtained with the F5 tally by simulating the interaction of an electron beam with a tungsten anode. Then, with an F4MESH tally, the photon flux was computed in each cell of the three-dimensional mesh defined over the room; the conversion to equivalent dose was obtained by means of conversion factors taken from NIST. The FMESH tally results are compared with the dose values measured with a Radcal 1800 cc ionization chamber. The results were obtained for the following operating conditions: 40 kVp, 100 mA, 200 mAs, fine focus and an aluminium filter 0.8 mm thick. Comparison with the experimental measurements shows that the measured values differ from the simulated ones by about 10%. We can therefore predict the dose distribution with good approximation while the tube is on, making it possible to minimize the dose received by the operator.
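The flux-to-dose conversion step can be sketched as follows (the numbers below are placeholders for illustration; the actual fluence-to-dose coefficients are taken from NIST tables):

```python
import numpy as np

# Energy grid (keV) and photon fluence per mesh cell from the F4MESH tally
# (illustrative numbers, not actual tally output)
energies = np.array([10.0, 20.0, 30.0, 40.0])   # keV
fluence = np.array([1e3, 5e3, 2e3, 5e2])        # photons / cm^2 per energy bin

# Placeholder fluence-to-dose coefficients (pSv cm^2); real values from NIST
coeff_energy = np.array([10.0, 20.0, 30.0, 40.0])
coeff = np.array([0.05, 0.10, 0.20, 0.35])

# Interpolate coefficients onto the tally grid and fold with the spectrum
h = np.interp(energies, coeff_energy, coeff)    # pSv cm^2 at each bin
dose_pSv = np.sum(fluence * h)                  # equivalent dose in this cell
print(f"equivalent dose in cell: {dose_pSv:.1f} pSv")
```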
Abstract:
This work was carried out at the Medical Physics Service of the Azienda USL della Romagna, Ravenna Hospital, and consists of the validation of the dosimetric data displayed on two digital mammography units and of a comparison of the image quality of different acquisition curves as a function of dose and post-processing. A prerequisite for image acquisition was the validation of the dosimetric data displayed on the mammography units, through direct measurement of the entrance dose with suitable instrumentation, according to standard protocols and European guidelines. Subsequently, radiographic image acquisition tests were performed on two different phantoms containing inserts of varying contrast and resolution, acquired with three dosimetric curves and with the two post-processing levels applied to the raw images. Once the various steps had been verified, a qualitative and quantitative analysis of the images produced was carried out: the first evaluation was performed on a mammography reporting monitor, while the second was performed by computing the contrast in relation to the mean glandular dose. In particular, the behaviour of the contrast was studied while changing the modes of the Premium View software and the thickness interposed between the X-ray source and the phantom, so as to simulate breasts of different sizes.
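The quantitative comparison rests on standard image-quality measures; e.g., for an insert with mean signal S_i against a background with mean S_b and noise sigma_b, contrast and contrast-to-noise ratio are (standard definitions, assumed here rather than quoted from the thesis):

```latex
C = \frac{S_i - S_b}{S_b}, \qquad
\mathrm{CNR} = \frac{S_i - S_b}{\sigma_b}
```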
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are interested not in the matrix function itself but only in its product with a vector, the problem becomes simpler, and there is a chance of solving it even when computing the matrix mean itself would be infeasible. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few factors have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
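One of the identities in play can be checked on a small dense example: for SPD matrices, the weighted geometric mean satisfies A #_t B = A (A^{-1}B)^t, so (A #_t B) v reduces to a fractional matrix power acting on a vector. The sketch below is a dense reference computation, not the thesis's sparse algorithms:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, solve

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (small dense test case)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n, t = 6, 0.25
A, B, v = random_spd(n), random_spd(n), rng.standard_normal(n)

# (A #_t B) v = A (A^{-1} B)^t v: dense reference computation.
# For large sparse A, B one instead approximates f(A\B) v, f(x) = x^t,
# by quadrature formulae or Krylov subspace methods.
X = solve(A, B)                                   # X = A^{-1} B
w = A @ (fractional_matrix_power(X, t) @ v)

# Cross-check against the symmetric form A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2} v
Ah = fractional_matrix_power(A, 0.5)
Ahi = np.linalg.inv(Ah)
G = Ah @ fractional_matrix_power(Ahi @ B @ Ahi, t) @ Ah
print(np.linalg.norm(w - G @ v))                  # ~ 1e-12: the two forms agree
```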
Abstract:
A treatment of the ruled quadric surface in projective space, with remarks on quadrics in general in affine and projective space and on the uniqueness of the smooth quadric surface in complex projective space. The ruled quadric is described via the Segre map and via the projection from one of its points onto a plane; we study how to obtain such a quadric from a plane and describe the curves lying on it.
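For reference, the Segre map mentioned here embeds P^1 x P^1 into P^3, and its image is the smooth quadric:

```latex
% Segre embedding; its image is the smooth quadric z0 z3 - z1 z2 = 0,
% whose two rulings come from the two P^1 factors.
\sigma : \mathbb{P}^1 \times \mathbb{P}^1 \longrightarrow \mathbb{P}^3, \qquad
([x_0:x_1],[y_0:y_1]) \longmapsto [x_0y_0 : x_0y_1 : x_1y_0 : x_1y_1],
\qquad \operatorname{im}\sigma = \{\, z_0 z_3 = z_1 z_2 \,\}
```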
Abstract:
We are interested in classifying the cubics of the complex projective plane. In particular, plane cubics are classified by proving that every nonsingular cubic is projectively equivalent to a cubic with a known affine equation, and that there exist infinitely many projective equivalence classes of nonsingular plane cubics. It is also shown that irreducible singular plane cubics fall into two projective equivalence classes: the first class contains the cubics with a node, while the second contains the cubics with a cusp. Finally, we study the plane projections of the twisted cubic from one of its points, or from a point not on the cubic.
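The normal forms in question are classical (quoted here from standard references, not from the thesis itself): a nonsingular plane cubic can be put in Legendre form, while irreducible singular cubics reduce to the nodal or cuspidal model:

```latex
% Nonsingular: Legendre form, one class for each lambda != 0, 1 up to a finite
% ambiguity, giving infinitely many projective equivalence classes.
y^2 = x(x-1)(x-\lambda)
% Singular irreducible: nodal and cuspidal normal forms.
y^2 = x^2(x+1) \quad \text{(node)}, \qquad y^2 = x^3 \quad \text{(cusp)}
```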
Abstract:
This thesis presents a general method for the construction of generalized spline curves with local interpolation. These are constructed by blending generalized interpolating polynomials with generalized blending functions. Some of the properties of these curves are also verified experimentally.
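The blending construction can be illustrated in the classical polynomial case (the Overhauser construction, given here as a generic example of blending local interpolants, not as the thesis's generalized scheme): linearly blending two consecutive quadratic interpolants yields a local interpolating cubic.

```python
import numpy as np

def lagrange_quadratic(p0, p1, p2, t):
    """Quadratic through p0, p1, p2 at parameters t = -1, 0, 1."""
    return (0.5*t*(t - 1))*p0 + (1 - t*t)*p1 + (0.5*t*(t + 1))*p2

def overhauser_segment(pm1, p0, p1, p2, t):
    """Local interpolating cubic on [p0, p1], t in [0, 1]: a linear blend of
    the quadratic through (pm1, p0, p1) and the one through (p0, p1, p2)."""
    q0 = lagrange_quadratic(pm1, p0, p1, t)     # first quadratic on its [0, 1] part
    q1 = lagrange_quadratic(p0, p1, p2, t - 1)  # second quadratic on its [-1, 0] part
    return (1 - t)*q0 + t*q1                    # blending functions (1 - t), t

pts = [np.array(p, float) for p in [(0, 0), (1, 2), (3, 2), (4, 0)]]
for t in (0.0, 0.5, 1.0):
    print(t, overhauser_segment(*pts, t))       # hits pts[1] at t=0, pts[2] at t=1
```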
Abstract:
The thesis addresses the classification of compact surfaces without boundary. Subsequently, an application of the classification theorem to nonsingular irreducible complex projective algebraic curves is presented.
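The link between the two parts is that a nonsingular irreducible complex projective curve is, topologically, a compact orientable surface without boundary, hence classified by its genus; for a nonsingular plane curve of degree d, the genus is given by the classical degree-genus formula:

```latex
g = \frac{(d-1)(d-2)}{2}
```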