5 results for k-Error linear complexity
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Shell structures are widely used in engineering. The purpose of this dissertation is to show the behavior of a thin shell under external load, in particular a long cylindrical shell under compressive load. I analyzed not only the linear elastic problem but also the buckling problem, and finite element analysis shows that an imperfection of the cylinder can affect its critical load, that is, its buckling capacity. For the linear elastic problem, I compared the theoretical results with those obtained from Straus7 and Abaqus, and they agree closely. For the buckling problem I did the same, comparing the theoretical and Abaqus results; the error is less than 1%. In reality, however, the theoretical buckling capacity cannot be reached because of imperfections in the cylinder, so I introduced different imperfection amplitudes for the cylinder in Abaqus and found that the buckling capacity decreases as the imperfection percentage increases: a 10% imperfection, for example, can decrease the buckling capacity by 40%. This outcome matches the buckling behavior observed in reality.
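For context (the abstract does not state it explicitly), the classical theoretical reference usually adopted for the critical stress of a thin cylinder under uniform axial compression is

```latex
\sigma_{cr} = \frac{E\,t}{R\sqrt{3\,(1-\nu^{2})}} \;\approx\; 0.605\,\frac{E\,t}{R} \qquad (\nu = 0.3),
```

where E is Young's modulus, t the wall thickness, R the mid-surface radius and ν Poisson's ratio; measured capacities fall well below this value precisely because of the imperfection sensitivity discussed in the abstract.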
Abstract:
The cerebral cortex exhibits self-similarity over a proper interval of spatial scales, a property typical of natural objects with fractal geometry. Its complexity can therefore be characterized by the value of its fractal dimension (FD). In the computation of this metric, a frequentist approach to probability has usually been employed, with point-estimator methods yielding only the optimal values of the FD. In our study, we aimed at retrieving a more complete evaluation of the FD by utilizing a Bayesian model for the linear regression analysis of the box-counting algorithm. We used T1-weighted MRI data of 86 healthy subjects (age 44.2 ± 17.1 years, mean ± standard deviation, 48% males) in order to gain insights into the confidence of our measure and to investigate the relationship between mean Bayesian FD and age. Our approach yielded a stronger and significant (P < .001) correlation between mean Bayesian FD and age compared to the previous implementation. Our results therefore suggest that the Bayesian FD is a more faithful estimate of the fractal dimension of the cerebral cortex than the frequentist FD.
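The abstract gives no implementation details; purely as an illustration, a minimal Python sketch of box counting on a binary volume followed by a Bayesian linear fit of log N(s) against log s (flat prior, Gaussian noise approximation) could look like the following. The function names and the simple posterior approximation are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def box_count(mask, sizes):
    """Count occupied boxes of each size in a 3D binary mask
    (e.g. a cortical ribbon segmentation). Returns N(s) for each box size s."""
    counts = []
    for s in sizes:
        # pad so the volume divides evenly into s-sized boxes
        pad = [(0, (-d) % s) for d in mask.shape]
        m = np.pad(mask, pad)
        nz, ny, nx = (d // s for d in m.shape)
        # group voxels into boxes and mark boxes containing any foreground voxel
        boxes = m.reshape(nz, s, ny, s, nx, s).any(axis=(1, 3, 5))
        counts.append(int(boxes.sum()))
    return np.array(counts)

def bayesian_fd(sizes, counts, n_samples=20000, seed=None):
    """Posterior over the fractal dimension as minus the slope of the
    log-log regression log N(s) = a - FD * log s (flat prior on coefficients)."""
    rng = np.random.default_rng(seed)
    x = np.log(np.asarray(sizes, float))
    y = np.log(np.asarray(counts, float))
    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)              # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)         # approximate posterior covariance
    draws = rng.multivariate_normal(beta_hat, cov, size=n_samples)
    fd_samples = -draws[:, 1]                 # FD = -slope
    return fd_samples.mean(), fd_samples.std()
```

A point-estimator (frequentist) FD would keep only `-beta_hat[1]`; the posterior samples additionally quantify the confidence of the measure, which is the point of the Bayesian treatment described above.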
Abstract:
In this work an Underactuated Cable-Driven Parallel Robot (UACDPR) operating in three-dimensional Euclidean space is considered. The end-effector has 6 degrees of freedom and is actuated by 4 cables; therefore, from a mechanical point of view, the robot is underconstrained. However, if only three pose variables are controlled, the degree of redundancy from a control standpoint can be considered equal to one. The aim of this thesis is to design a feedback controller for point-to-point motion that satisfies the transient requirements and is capable of reducing the oscillations that derive from the reduced number of constraints. Force control is chosen for positioning the end-effector, and the error with respect to the reference is computed from the measurements of several sensors (load cells, encoders and inclinometers), namely cable tensions, cable lengths and the orientation of the platform. In order to express the relation between pose and cable tensions, the inverse model is derived from the kinematic and dynamic models of the parallel robot. The intrinsically non-linear nature of UACDPR systems introduces an additional level of complexity in the development of the controller; as a result, the control law combines partial feedback linearization with damping injection to reduce orientation instability. The fourth cable makes it possible to satisfy an additional tension-distribution constraint, ensuring positive tensions at every instant of the motion. Simulations with different initial conditions are then presented in order to optimize the control parameters, and lastly an experimental validation of the model is carried out; the results are analysed and the limits of the presented approach are defined.
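As background only (the abstract does not give the specific control law), collocated partial feedback linearization for an underactuated mechanical system with actuated coordinates q_a and unactuated coordinates q_u typically takes the form

```latex
\begin{aligned}
M_{aa}\ddot q_a + M_{au}\ddot q_u + h_a &= u, \qquad
M_{ua}\ddot q_a + M_{uu}\ddot q_u + h_u = 0,\\
u &= \bar M(q)\,v + \bar h(q,\dot q), \qquad
\bar M = M_{aa} - M_{au}M_{uu}^{-1}M_{ua}, \quad
\bar h = h_a - M_{au}M_{uu}^{-1}h_u,\\
v &= \ddot q_a^{\mathrm{ref}} - K_p\,(q_a - q_a^{\mathrm{ref}}) - K_d\,\dot q_a ,
\end{aligned}
```

where the term with K_d is the damping injection (written here on the actuated coordinates for simplicity; in practice it may be designed to damp the unactuated orientation dynamics). The thesis presumably specializes this general structure to the 4-cable, 6-degree-of-freedom platform.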
Abstract:
Intermediate-complexity general circulation models are a fundamental tool to investigate the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess to what extent the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño-Southern Oscillation (ENSO). To a good approximation, the model reproduces the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold-tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual timescales. We then implemented an idealised transient mean-state change in the SPEEDY model. A warmer climate is simulated by altering the parametrized radiative fluxes in a way that corresponds to doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features a significant warming over the central-western Pacific and an El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model setup implemented here provides valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean-state changes.
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations, which require more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Power consumption can be monitored, in order to save energy, through processor interfaces such as the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear-system solver can significantly decrease the time spent on its resolution. One of the most widely used algorithms for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, however, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
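As an illustrative sketch only (the thesis pairs RAPL with PAPI and MPI; the snippet below is not taken from it), the same RAPL package counters can also be read directly from the Linux powercap sysfs interface around an arbitrary computation. The helper names and the NumPy stand-in solve are assumptions of this sketch.

```python
import glob
import os
import time

def read_rapl_energy_uj():
    """Read the cumulative energy counters (micro-joules) exposed by the Linux
    powercap driver for each Intel RAPL domain (package, core, dram, ...)."""
    readings = {}
    for domain in glob.glob("/sys/class/powercap/intel-rapl:*"):
        try:
            with open(os.path.join(domain, "energy_uj")) as f:
                readings[os.path.basename(domain)] = int(f.read())
        except OSError:
            pass  # some domains may not be readable without privileges
    return readings

def measure_energy(fn, *args, **kwargs):
    """Run fn and return (result, energy in joules per domain, elapsed seconds).
    Counter wrap-around (max_energy_range_uj) is ignored for brevity."""
    before = read_rapl_energy_uj()
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    after = read_rapl_energy_uj()
    energy_j = {k: (after[k] - before[k]) / 1e6 for k in before if k in after}
    return result, energy_j, elapsed

if __name__ == "__main__":
    # Toy stand-in for a linear-system resolution (the thesis uses ScaLAPACK runs)
    import numpy as np
    A = np.random.rand(2000, 2000)
    b = np.random.rand(2000)
    _, energy, dt = measure_energy(np.linalg.solve, A, b)
    print(energy, dt)
```

In the setting described in the abstract, per-rank measurements of this kind would be aggregated across nodes and sockets to compare the two solver algorithms.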