11 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The seismic behaviour of one-storey asymmetric structures has been studied since the 1970s by a number of research studies, which identified the coupled translational-torsional nature of the response of this class of systems, leading to severe displacement magnifications at the perimeter frames and therefore to a significant increase in the local peak seismic demand on the structural elements with respect to that of equivalent non-eccentric systems (Kan and Chopra 1987). These studies identified the fundamental parameters (such as the fundamental period TL, the normalized eccentricity e and the torsional-to-lateral frequency ratio Ωϑ) governing the torsional behaviour of in-plan asymmetric structures, together with the main trends of behaviour. It has been clearly recognized that asymmetric structures characterized by Ωϑ > 1, referred to as torsionally-stiff systems, behave quite differently from structures with Ωϑ < 1, referred to as torsionally-flexible systems. Previous research works by some of the authors proposed a simple closed-form estimation of the maximum torsional response of one-storey elastic systems (Trombetti et al. 2005, Palermo et al. 2010), leading to the so-called “Alpha Method” for the evaluation of the displacement magnification factors at the corner sides. The present paper provides an upgrade of the “Alpha Method” that removes the assumption of a linear elastic response of the system. The main objective is to evaluate how the excursion of the structural elements into the inelastic field (once the yield strength is reached) affects the displacement demand of one-storey in-plan asymmetric structures. The system proposed by Chopra and Goel in 2007, which is claimed to capture the main features of the non-linear response of in-plan asymmetric systems, is used to perform a large parametric analysis varying all the fundamental parameters of the system, including the inelastic demand, by varying the force reduction factor from 2 to 5. Magnification factors for the different force reduction factors are proposed, and comparisons with the results obtained from linear analysis are provided.
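
As an illustration of the kind of parametric study described above, the minimal sketch below integrates a linear elastic one-storey model with two degrees of freedom (lateral displacement and deck rotation) and compares the peak corner displacement of an eccentric system with that of the equivalent non-eccentric one. The mass, geometry, damping, ground-motion pulse and the exact normalization of e and Ωϑ are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-storey model: unit mass, 10 m square deck, TL = 1 s,
# ~5% damping; none of these values comes from the paper.
m, Lp = 1.0, 10.0                        # mass and plan dimension
omega = 2 * np.pi / 1.0                  # lateral frequency for TL = 1 s
k = m * omega**2                         # lateral stiffness
I = m * Lp**2 / 6.0                      # in-plan rotational inertia, square deck
Minv = np.diag([1.0 / m, 1.0 / I])
C = 2 * 0.05 * omega * np.diag([m, I])   # rough 5% damping in both DOFs

def peak_corner(e_norm, Omega_theta):
    """Peak corner displacement for eccentricity e and frequency ratio Omega_theta."""
    e = e_norm * Lp
    k_theta = I * (Omega_theta * omega)**2           # assumed normalization of Omega_theta
    K = np.array([[k, k * e],
                  [k * e, k * e**2 + k_theta]])      # coupled lateral-torsional stiffness
    a_g = lambda t: 0.3 * 9.81 * np.sin(2 * np.pi * 1.3 * t) * (t < 5.0)  # ground pulse
    def rhs(t, y):
        q, qd = y[:2], y[2:]
        qdd = Minv @ (-K @ q - C @ qd) - np.array([a_g(t), 0.0])
        return np.concatenate([qd, qdd])
    sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(4), max_step=0.005)
    u, theta = sol.y[0], sol.y[1]
    return np.max(np.abs(u + theta * Lp / 2))        # displacement at the corner side

mag = peak_corner(0.2, 1.2) / peak_corner(0.0, 1.2)  # eccentric vs non-eccentric system
print(f"corner displacement magnification: {mag:.2f}")
```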

Relevance:

100.00%

Publisher:

Abstract:

One of the main practical implications of quantum mechanical theory is quantum computing, and therefore the quantum computer. Quantum computing (for example, with Shor’s algorithm) challenges the computational hardness assumptions, such as the factoring problem and the discrete logarithm problem, that anchor the security of cryptosystems. The scientific community is therefore studying how to defend cryptography; there are two defence strategies: quantum cryptography (which involves the use of quantum cryptographic algorithms on quantum computers) and post-quantum cryptography (based on classical cryptographic algorithms, but resistant to quantum computers). For example, the National Institute of Standards and Technology (NIST) is collecting and standardizing post-quantum ciphers, as it established DES and AES as symmetric cipher standards in the past. In this thesis an introduction to quantum mechanics is given, in order to be able to discuss quantum computing and to analyze Shor’s algorithm. The differences between quantum and post-quantum cryptography are then analyzed. Subsequently the focus moves to the mathematical problems assumed to be resistant to quantum computers. To conclude, the post-quantum digital signature algorithms selected by NIST are studied and compared in view of their application in everyday life.
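
To make concrete why Shor’s algorithm threatens factoring-based schemes, here is a minimal sketch of the classical reduction it relies on: factoring N reduces to finding the multiplicative order r of a random a modulo N. The order is found by brute force below (the exponentially hard part that the quantum Fourier transform replaces); everything else is textbook number theory.

```python
from math import gcd

def shor_factor(N: int, a: int):
    """Try to split N using the order r of a modulo N."""
    if gcd(a, N) > 1:            # lucky draw: a already shares a factor with N
        return gcd(a, N)
    r, x = 1, a % N
    while x != 1:                # brute-force order finding (the exponential part
        x = (x * a) % N          # that the quantum computer replaces)
        r += 1
    if r % 2 == 1:
        return None              # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None              # trivial square root: retry with another a
    return gcd(y - 1, N)         # non-trivial factor of N

print(shor_factor(15, 7))        # -> 3, since 7 has order 4 mod 15
```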

Relevance:

100.00%

Publisher:

Abstract:

Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity, due to the corresponding demand for larger simulations requiring more complex tasks and higher accuracy. Moreover, with Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. The monitoring of power consumption, with a view to saving energy, is possible through processor interfaces such as Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear-system solver can significantly decrease the time spent on their resolution. One of the most widely used algorithms for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
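
As a hedged sketch of the kind of measurement involved, the snippet below reads the package-energy counter that RAPL exposes through the Linux powercap filesystem and wraps a function call with before/after readings. The sysfs path and read permissions are machine-dependent assumptions, and the dense NumPy solve is only a stand-in workload; the thesis itself accesses the counters through PAPI rather than directly.

```python
import time
from pathlib import Path

import numpy as np

RAPL = Path("/sys/class/powercap/intel-rapl:0")   # package 0; may require root

def read_uj() -> int:
    """Cumulative package energy in microjoules."""
    return int((RAPL / "energy_uj").read_text())

def measure(fn, *args):
    """Return (result, joules, seconds) for one call of fn."""
    e0, t0 = read_uj(), time.perf_counter()
    out = fn(*args)
    e1, t1 = read_uj(), time.perf_counter()
    wrap = int((RAPL / "max_energy_range_uj").read_text())
    joules = ((e1 - e0) % wrap) / 1e6             # handle counter wrap-around
    return out, joules, t1 - t0

# Example: energy of a dense solve standing in for one linear-system resolution.
A, b = np.random.rand(2000, 2000), np.random.rand(2000)
_, J, s = measure(np.linalg.solve, A, b)
print(f"{J:.2f} J in {s:.2f} s -> {J / s:.1f} W average power")
```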

Relevance:

100.00%

Publisher:

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
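
A small dense sketch of the reformulation mentioned above: for SPD matrices the weighted geometric mean satisfies A #_t B = A (A⁻¹B)^t, so its action on a vector is a real matrix power f(A\B) v with f(z) = z^t. The snippet checks this identity against the textbook definition on small random SPD matrices; in the thesis setting the dense fractional power would be replaced by quadrature or Krylov approximations.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, solve, sqrtm

rng = np.random.default_rng(0)
n, t = 50, 0.5
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD stand-in
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # SPD stand-in
v = rng.standard_normal(n)

# (A #_t B) v through the pencil form f(A\B) v with f(z) = z^t
w = np.real(A @ (fractional_matrix_power(solve(A, B), t) @ v))

# Cross-check against the definition A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2)
Ah = np.real(sqrtm(A)); Ahi = np.linalg.inv(Ah)
w_ref = np.real(Ah @ fractional_matrix_power(Ahi @ B @ Ahi, t) @ (Ah @ v))
print(np.linalg.norm(w - w_ref) / np.linalg.norm(w_ref))      # ~1e-12
```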

Relevance:

30.00%

Publisher:

Abstract:

The first part of my work consisted of sampling conducted at nine different localities of the Salento peninsula and Apulia (Italy): Costa Merlata (BR), Punta Penne (BR), Santa Cesarea Terme (LE), Santa Caterina (LE), Torre Inserraglio (LE), Torre Guaceto (BR), Porto Cesareo (LE), Otranto (LE), Isole Tremiti (FG). I collected percentage-cover data for species of the infralittoral rocky zone, using 50x50 cm sampling squares. We considered 3 sites per locality and 10 randomly placed replicates for each site. I then combined these with other data on the same places, collected over several years, in order to carry out a spatial analysis. I thus started from a data set of 1896 samples, but decided not to consider time as a factor, because there is reason to think that over this period the anthropogenic stressors and their effects (if present) did not change considerably. The response variable analysed is the cover percentage of 243 species (subsequently merged into 32 functional groups), including seaweeds, invertebrates, sediment and rock. After the sampling, I spent two months at the Hopkins Marine Station of Stanford University in Monterey (California, USA), in Fiorenza Micheli's laboratory, where I carried out statistical analyses on my data set using the software PRIMER 6. The explorative analysis starts with an nMDS on the original data matrix, without, for the moment, considering the effect of stressors. The result is a good separation between localities, confirming the ANOSIM analysis conducted on the same matrix. The separation does not follow a geographic pattern, so something else must drive the differences. At least three groups are clearly present: one composed of Porto Cesareo, Torre Guaceto and Isole Tremiti (the only marine protected areas considered in this work); another of Otranto; and the last of the remaining small, impacted localities. Within the localities that include MPAs (Marine Protected Areas), it is also possible to observe a grouping of protected versus control areas. The SIMPER analysis shows that most of the species driving the differences between populations are not rare species, e.g. Cystoseira spp., Mytilus sp. and ECR. Moreover, I assigned each site a discrete value (0, 1, 2) for every stressor, according to the intensity with which that anthropogenic factor affects the locality. I then tried to establish whether there were significant interactions between stressors: using Spearman rank correlation with its tables of significance, and taking into account 17 degrees of freedom, some significant interactions between stressors emerge. I then built an nMDS with the stressors as response variables; the result was positive, with localities well separated by stressors. Consequently, I related the 'localities x species' matrix to the 'localities x stressors' one: the combination of stressors explains, at a good significance level, the variability within my populations. I tried all the usual data transformations (none, square root, fourth root, log(X+1), presence/absence), and the fourth root proved the best, with the highest level of significance, meaning that rare species too can influence the result.
The challenge will be to better characterize which kinds of stressors (including natural ones) act on the ecosystem, to assign them more accurate quantitative values, and to understand how they interact (in an additive or non-additive way).
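
A hedged sketch of the core of that explorative pipeline, with scipy/scikit-learn standing in for PRIMER 6: fourth-root transform of the cover matrix, Bray-Curtis dissimilarities, and a non-metric MDS ordination. The cover matrix here is random, and its dimensions only mirror those in the text.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
cover = rng.random((30, 32)) * 100        # 30 samples x 32 functional groups (% cover)

transformed = cover ** 0.25               # fourth root: down-weights dominant taxa
D = squareform(pdist(transformed, metric="braycurtis"))  # Bray-Curtis dissimilarities

nmds = MDS(n_components=2, metric=False,  # non-metric MDS on the precomputed matrix
           dissimilarity="precomputed", n_init=10, random_state=0)
coords = nmds.fit_transform(D)            # 2-D ordination of the samples
print("stress:", round(nmds.stress_, 3))
```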

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) consist of a large number of sensor nodes, characterized by low power constraints, limited transmission range and limited computational capabilities [1][2]. The cost of these devices is constantly decreasing, making it possible to use a large number of sensor devices in a wide array of commercial, environmental, military, and healthcare fields. Some of these applications involve placing the sensors evenly spaced along a straight line, for example in roads, bridges, tunnels, water catchments and water pipelines, city drainages, oil and gas pipelines, etc., forming a special class of these networks which we define as Linear Wireless Networks (LWNs). In LWNs, data transmission happens hop by hop from the source to the destination, through a route composed of multiple relays. The peculiar topology of LWNs motivates the design of specialized protocols that take advantage of the linearity of such networks, in order to increase reliability, communication efficiency, energy savings and network lifetime, and to minimize the end-to-end delay [3]. In this thesis a novel contention-based Medium Access Control (MAC) protocol called L-CSMA, specifically devised for LWNs, is presented. The basic idea of L-CSMA is to assign different priorities to nodes based on their position along the line. The priority is assigned in terms of sensing duration, whereby nodes closer to the destination are assigned shorter sensing times, and hence higher priority, compared to the rest of the nodes. This mechanism speeds up the transmission of packets which are already in the path, making the transmission flow more efficient. Using the NS-3 simulator, the performance of L-CSMA in terms of packet success rate, that is, the percentage of packets that reach the destination, and throughput is compared with that of the IEEE 802.15.4 MAC protocol, the de facto standard for wireless sensor networks. In general, L-CSMA outperforms the IEEE 802.15.4 MAC protocol.
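
The priority rule at the heart of L-CSMA can be sketched in a few lines: the carrier-sensing time grows with the hop distance from the destination, so downstream nodes win contention. The base sensing time and per-hop increment below are illustrative placeholders, not values from the protocol.

```python
# Illustrative timing constants (assumed, not from the L-CSMA specification).
BASE_SENSING = 128e-6   # shortest carrier-sensing time (s)
SLOT = 32e-6            # extra sensing time per hop away from the destination (s)

def sensing_time(node_pos: int, dest_pos: int) -> float:
    """Nodes closer to the destination sense for less time -> higher priority."""
    hops_to_dest = abs(dest_pos - node_pos)
    return BASE_SENSING + hops_to_dest * SLOT

# Six nodes on a line, destination at node 5: node 4 wins contention first.
for n in range(6):
    print(f"node {n}: senses for {sensing_time(n, 5) * 1e6:.0f} us")
```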

Relevance:

30.00%

Publisher:

Abstract:

In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a volume of cylindrical shape, representing the female breast, by means of microwave imaging (MWI). One of the main tasks of this project is to properly extend the existing linear inversion algorithm from planar-slice to volume reconstruction; the results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is carried out via numerical simulations in a software-created environment: a single dipole antenna illuminates the virtual breast phantom from different positions and, for each position, the corresponding scattered-field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. In this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition at a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data with the MUltiple SIgnal Classification (MUSIC) method, are incoherently combined, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. The analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the resulting pseudospectrum and the performance analysis for the real model are reported.
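
A hedged numerical sketch of the pseudospectrum construction described above: for each frequency, a MUSIC pseudospectrum is built from the noise subspace of the simulated scattered-field covariance, and the single-frequency maps are summed incoherently. The ring of antennas, the point-source steering model and the noise level are illustrative assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 3e8
ants = np.array([[0.1 * np.cos(a), 0.1 * np.sin(a)]        # 8 antennas on a ring
                 for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
src = np.array([0.03, -0.02])                               # true scatterer position

def steering(freq, pts):
    """Point-source steering vectors from each antenna to each grid point."""
    k = 2 * np.pi * freq / c
    d = np.linalg.norm(ants[:, None, :] - pts[None, :, :], axis=2)
    return np.exp(-1j * k * d) / d

xs = np.linspace(-0.08, 0.08, 81)
grid = np.array([[x, y] for x in xs for y in xs])
pseudo = np.zeros(len(grid))

for f in np.linspace(1e9, 3e9, 5):                          # multi-frequency loop
    a_true = steering(f, src[None, :])[:, 0]
    snaps = np.outer(a_true, rng.standard_normal(64)) + 0.01 * (
        rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64)))
    R = snaps @ snaps.conj().T / 64                         # sample covariance
    _, V = np.linalg.eigh(R)
    En = V[:, :-1]                                          # noise subspace (1 source)
    A = steering(f, grid)
    pseudo += 1.0 / np.linalg.norm(En.conj().T @ A, axis=0)**2   # incoherent sum

print("estimated position:", grid[np.argmax(pseudo)])       # should be near src
```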

Relevance:

30.00%

Publisher:

Abstract:

In designing the trajectory for a multiple-flyby mission to asteroids, the choice of the targets is the most challenging problem. This dissertation faces this problem in the framework of the recently issued medium-size mission call (M5) from ESA: CASTAway. Starting from the preliminary work done in [6], this thesis develops a methodology for sequencing the potential targets of a multiple-flyby mission. In order to reduce the computational time, the complete database of known small bodies is first pruned on the basis of heuristic considerations. Under the assumption of small manoeuvres, a chief-orbit concept can be used: two heuristic thresholds are defined in order to exclude non-promising targets given a chief orbit. The sequencing process takes the chief orbit and the promising targets as inputs and returns a set of candidate sequences. The results of this process are analysed in the CASTAway framework, and the best feasible sequence is studied in detail.
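
As an illustration of the pruning step, the sketch below discards candidate targets whose orbits, compared with the chief orbit, violate two heuristic thresholds; both thresholds and the radial-overlap test are placeholders for the actual criteria of the thesis.

```python
from dataclasses import dataclass

@dataclass
class Orbit:
    a: float   # semi-major axis [AU]
    e: float   # eccentricity
    i: float   # inclination [deg]

# Placeholder thresholds (assumed, not the thesis values).
MAX_RADIAL_GAP = 0.1   # allowed gap between the radial ranges of the orbits [AU]
MAX_DI = 5.0           # max inclination difference [deg]

def promising(chief: Orbit, target: Orbit) -> bool:
    """Keep a target only if a small flyby manoeuvre looks plausible."""
    peri_c, apo_c = chief.a * (1 - chief.e), chief.a * (1 + chief.e)
    peri_t, apo_t = target.a * (1 - target.e), target.a * (1 + target.e)
    radial_overlap = (peri_t <= apo_c + MAX_RADIAL_GAP
                      and apo_t >= peri_c - MAX_RADIAL_GAP)
    return radial_overlap and abs(target.i - chief.i) <= MAX_DI

chief = Orbit(a=2.7, e=0.25, i=3.0)
targets = [Orbit(2.5, 0.10, 2.0), Orbit(3.2, 0.05, 12.0), Orbit(2.9, 0.30, 4.5)]
print([promising(chief, t) for t in targets])   # -> [True, False, True]
```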

Relevance:

30.00%

Publisher:

Abstract:

State-of-the-art NLP systems are generally based on the assumption that the underlying models are provided with vast datasets to train on. However, especially when working in multi-lingual contexts, datasets are often scarce, so more research should be carried out in this field. This thesis investigates the benefits of introducing an additional training step when fine-tuning NLP models, named Intermediate Training, which can be exploited to augment the data used for the training phase. The Intermediate Training step is applied by training models on NLP tasks that are not strictly related to the target task, aiming to verify whether the models are able to leverage the knowledge learned from such tasks. Furthermore, in order to better analyze the synergies between different categories of NLP tasks, the experiments have also been extended to Multi-Task Training, in which the model is trained on multiple tasks at the same time.
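
A minimal PyTorch sketch of the two-phase procedure: the same encoder is first fine-tuned on an intermediate task with its own head, then on the target task with a fresh head. Toy tensors stand in for the real NLP datasets, and the small linear encoder stands in for a pretrained transformer.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # stand-in for a transformer

def train_phase(head, X, y, epochs=5):
    """Fine-tune the shared encoder plus a task-specific head on one task."""
    model = nn.Sequential(encoder, head)                # encoder weights are shared
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
X_int, y_int = torch.randn(256, 64), torch.randint(0, 3, (256,))  # intermediate task
X_tgt, y_tgt = torch.randn(128, 64), torch.randint(0, 2, (128,))  # target task

train_phase(nn.Linear(128, 3), X_int, y_int)          # 1) Intermediate Training
print(train_phase(nn.Linear(128, 2), X_tgt, y_tgt))   # 2) target fine-tuning
```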

Relevance:

30.00%

Publisher:

Abstract:

This work analyses particle-filter tracking algorithms for point objects and extended targets in a radar application. Through simulations, the number of particles and the process and measurement noise of the particle filter have been optimized. Four different scenarios have been considered in this work: a point object with a linear trajectory, a point object with a non-linear trajectory, an extended object with a linear trajectory, and an extended object with a non-linear trajectory. The extended target has been modelled as an ellipse parametrized by the minor and major axes, the orientation angle, and the center coordinates (5 parameters overall).
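
For the simplest of the four scenarios (point object, linear trajectory), a bootstrap particle filter can be sketched as follows: predict with a constant-velocity model, weight each particle by the likelihood of a position-only measurement, and resample. The particle count and noise levels are illustrative stand-ins for the quantities optimized in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, dt = 1000, 50, 1.0
q, r = 0.05, 1.0                       # process / measurement noise std (tuning knobs)

F = np.array([[1, 0, dt, 0],           # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.]])
truth = np.array([0., 0., 1., 0.5])    # [x, y, vx, vy]
particles = truth + rng.standard_normal((N, 4))

for _ in range(T):
    truth = F @ truth
    z = truth[:2] + r * rng.standard_normal(2)                      # radar-like fix
    particles = particles @ F.T + q * rng.standard_normal((N, 4))   # predict
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / r**2)                              # Gaussian likelihood
    weights /= weights.sum()
    particles = particles[rng.choice(N, N, p=weights)]              # resample

print("final estimate:", particles.mean(axis=0)[:2], "truth:", truth[:2])
```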

Relevance:

30.00%

Publisher:

Abstract:

The mixing of gas-liquid systems in mechanically stirred vessels finds many industrial applications. Mechanically stirred vessels are the most convenient choice because they allow an optimal dispersion of the dispersed phase within the continuous phase, and efficient mass and heat transfer. The key parameters are the power consumption, the gas hold-up and the gas-impeller regime. The experimental phase was carried out on a pilot reactor operating with three Rushton turbines. The power consumption was evaluated, and the laboratory results were compared with the empirical correlation of Warmoeskerken (1986). The techniques employed in the past for the characterization of gas-liquid systems suffer from several drawbacks, such as being applicable only to transparent systems, intrusiveness and high costs. Currently, electrical resistance tomography (ERT) is one of the most widely used inspection techniques, thanks to its potential to provide both qualitative and quantitative information. One of the objectives of this thesis is to validate a methodology for the characterization of gas-liquid systems. ERT was used to evaluate the gas hold-up and to identify the regime established inside the equipment. The results were compared with visual assessments and with the correlations proposed in the literature. The ERT instrumentation comprises a sensor system, a data acquisition system (DAS) and a computer running the image reconstruction software, which is generally based on the linear back-projection algorithm. A further objective of this thesis is to investigate the advantage of adopting more sophisticated software for image reconstruction.
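
To make the reconstruction step concrete, here is a hedged sketch of linear back-projection: the image is obtained by applying the transpose of a sensitivity (Jacobian) matrix to the normalized voltage differences between the measured frame and a homogeneous reference frame. The sensitivity matrix and the measurements below are random placeholders for the real Jacobian and the DAS output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_pix = 104, 316               # e.g. a 16-electrode protocol, coarse mesh
S = rng.random((n_meas, n_pix))        # sensitivity of each measurement to each pixel

v_ref = rng.random(n_meas) + 1.0       # voltages for the homogeneous reference frame
v = v_ref * (1 + 0.05 * rng.standard_normal(n_meas))   # voltages with gas present

b = (v - v_ref) / v_ref                # normalized voltage differences
img = S.T @ b                          # linear back-projection onto the pixels
img /= S.sum(axis=0)                   # per-pixel sensitivity normalization
print("reconstructed pixels:", img.shape, "value range:", img.min(), img.max())
```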