27 results for Acceleration



Abstract:

The aim of this PhD thesis is to study accurately and in depth the figure and the literary production of the intellectual Jacopo Aconcio. This minor author of the 16th century has long been considered a sort of “enigmatic character”, a profile resulting from the work of those who, for centuries, left his writings to their fate: a story of constant re-readings and equally incessant oversights. This is why it is necessary to re-read Aconcio’s production in its entirety and to devote a monographic study to it. Previous scholars’ interpretations will obviously be considered, but at the same time an effort will be made to go beyond them through the analysis of both published and manuscript sources, in the attempt to attain a deeper understanding of the figure of this man, who was a Christian, a military and hydraulic engineer, and a political philosopher. The title of the thesis was chosen to emphasise how, throughout the three years of the doctorate, my research concentrated in equal measure and with the same degree of importance on all the reflections and activities of Jacopo Aconcio. My object, in fact, was to establish how and to what extent the methodological thinking of the intellectual found application in, and at the same time guided, his theoretical and practical production. I did not mention the author’s religious thinking in the title, although it has always been considered the most original and interesting element of his production, because religion, from the Reformation onwards, was primarily a political question, and it was treated as such by almost all the authors involved in the Protestant movement, Aconcio first among them. Even his remarks concerning the private, intimate sphere of faith have therefore been analysed in this light: only by acknowledging the centrality of the “problem of politics” in Aconcio’s theories is it possible to interpret them correctly. This approach confirms the theoretical premise of my research, that is to say the unity and orderliness of the author’s thought: in every field of knowledge, Aconcio applies the rules of the methodus resolutiva as a means to achieve knowledge and to elaborate models of peaceful coexistence in society. Aconcio’s continuous references to method can make his writing pedantic and rather complex, but at the same time they allow for a consistent and valid analysis of different disciplines. I have not considered it a limitation that most of his reflections appear to our eyes strongly conditioned by the time in which he lived. To see in him, as some have done, the forerunner of Descartes’ methodological discourse or, conversely, to judge his religious theories as not very modern, is to force the thought of an author who was first and foremost a Christian man of his own time. Aconcio repeats this himself several times in his writings: he wants to provide individuals with the tools necessary to reach full-fledged scientific knowledge in the various fields, and also to enable them to seek truth incessantly in the religious domain, which is the duty of every human being. The will to find rules, instruments and effective solutions characterizes the whole of the author’s corpus: Aconcio feels he must look for truth in all the arts, aware as he is that anything can become science as long as it is analysed with method. Nevertheless, he remains a man of his own time, a Christian convinced of the existence of God, creator and governor of the world, to whom people must account for their own actions.
To neglect this fact in order to construct a “character”, a generic forerunner of, but not a participant in, some philosophical current, is a dangerous and misleading operation. In this study, I have highlighted how Aconcio’s arguments reveal their full meaning only when read in the context in which they were born, without depriving them of their originality but also without charging them with meanings they do not possess. Through a historical-doctrinal approach, I have tried to analyse the complex web of theories and events which constitute the substratum of Aconcio’s reflection, in order to trace the correct relations between texts and contexts. The thesis is therefore organised in six chapters, dedicated respectively to Aconcio’s biography, to the methodological question, to the author’s engineering activity, to his historical knowledge and to his religious thinking, followed by a final section concerning his fortune throughout the centuries. The above-mentioned complexity is determined by the special historical moment in which the author lived. On the one hand, thanks to the new union between science and technique, the 16th century produced discoveries and inventions which made available a previously unthinkable number of notions and led to a “revolution” in the way the different subjects were studied and taught; by producing a new kind of intellectual, involved in politics but also aware of scientific-technological issues, this revolution would contribute to the subsequent birth of modern science. On the other hand, the 16th century was ravaged by religious conflicts, which shattered the unity of the Christian world and generated theological-political disputes that would inform the history of European states for many decades. My aim is to show how Aconcio’s multifarious activity is the conscious fruit of this historical and religious situation, as well as an attempt to answer the demand for a new kind of engagement on the intellectual’s part. Immersed in the discussions around methodus, employed in the most important European courts, involved in the abrupt acceleration of technical-scientific activities, and especially concerned by the radical religious reformation brought on by the Protestant movement, Jacopo Aconcio reflects this complex conjunction in his writings, without lacking order and consistency, contrary to what many scholars have assumed. The object of this work, therefore, is to highlight the unity of the author’s thought, in which science, technique, faith and politics are woven into a combination which, although it may appear illogical and confused, is actually tidy and methodical, and therefore in agreement with Aconcio’s own intentions and with the specific character of European culture in the Renaissance. This theory is confirmed by the reading of the Ars muniendorum oppidorum, the only one of Aconcio’s works that had until now remained unavailable. I am persuaded that only a methodical reading of Aconcio’s works, neither neglecting nor glorifying any single one, respects the author’s will. From De methodo (1558) onwards, all his writings are summae, guides for the reader who wishes to approach the study of the various disciplines. Undoubtedly, Satan’s Stratagems (1565) is something more, not only because of its length, but because it deals with the author’s main interest: the celebration of doubt and debate as the bases on which to build religious tolerance, which is the best method for peaceful coexistence in society.
This, however, does not justify the total centrality which the Stratagems have enjoyed for centuries, at the expense of a proper understanding of the author’s will to offer examples of methodological rigour in all the sciences. Perhaps it is precisely because of the reforming power of Aconcio’s thought that, albeit often forgotten over the centuries, he has never ceased to reappear and continues to draw attention, both as a man and as an author. His ideas never stop stimulating the reader’s curiosity, and this may ultimately be the best demonstration of their worth, independently of the historical moment in which they resurface.


Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interaction between the physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local “understanding” of the observed phenomena. A particular class of sensor nodes is that of video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something “interesting” is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely a sophisticated classification algorithm and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally.
Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal Surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and possibly to enable continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving the long-term operation of standalone distributed cameras. We have used an adaptive controller, namely Model Predictive Control (MPC), to help the system improve its performance, outperforming naive power management policies.
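
As a minimal sketch of the energy-level-dependent trigger policy described above (all names and thresholds below are illustrative assumptions, not the thesis code), the gating logic can be reduced to a few lines of Python:

    # Minimal sketch of an energy-level-dependent PIR trigger policy.
    # All names and thresholds are illustrative assumptions, not the thesis code.

    WAKE_COST_J = 2.0      # assumed energy cost of one camera wake-up (joules)
    LOW_BATTERY_J = 50.0   # below this reserve, ignore all but repeated events

    def should_wake_camera(battery_j: float, pir_events: int) -> bool:
        """Wake the camera on PIR activity, gated by the remaining energy budget."""
        if battery_j < WAKE_COST_J:
            return False                  # cannot afford a wake-up at all
        if battery_j < LOW_BATTERY_J:
            return pir_events >= 2        # conserve energy: require repeated triggers
        return pir_events >= 1            # normal operation: any trigger wakes the camera

    # Usage example
    print(should_wake_camera(battery_j=120.0, pir_events=1))  # True
    print(should_wake_camera(battery_j=30.0, pir_events=1))   # False (conserving)

In a real node, a predictive power manager such as the MPC controller mentioned above would tune these thresholds against the expected solar energy intake rather than leaving them fixed.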


Abstract:

Sports biomechanics describes human movement from a performance enhancement and an injury reduction perspective. In this respect, the purpose of sports scientists is to support coaches and physicians with reliable information about athletes’ technique. The lack of methods allowing for in-field athlete evaluation, as well as for accurate joint force estimates, represents, to date, the main limitation to this purpose. The investigations illustrated in the present thesis aimed at contributing to the development of the above-mentioned methods. Two complementary approaches were adopted: a Low Resolution Approach, related to performance assessment, in which wearable inertial measurement units are exploited during different phases of sprint running; and a High Resolution Approach, related to joint kinetics estimation for injury prevention, in which subject-specific, non-rigid constraints for the knee joint kinematic modelling used in multi-body optimization techniques are defined. Results obtained using the Low Resolution Approach indicated that, owing to their portability and inexpensiveness, inertial measurement systems are a valid alternative to laboratory-based instrumentation for the in-field performance evaluation of sprint running. Using acceleration and angular velocity data, the following quantities were estimated: trunk inclination and angular velocity, instantaneous horizontal velocity and displacement of a point approximating the centre of mass, and stride and support phase durations. As concerns the High Resolution Approach, results indicated that the lengths of the anterior cruciate and lateral collateral ligaments decreased, while that of the deep bundle of the medial collateral ligament increased significantly during flexion. Variations of the lengths of the posterior cruciate ligament and of the superficial bundle of the medial collateral ligament were concealed by the experimental indeterminacy. A mathematical model was provided that allows the estimation of subject-specific ligament lengths as a function of knee flexion and that can be integrated into a multi-body optimization procedure.
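
For illustration, the core of estimating instantaneous horizontal velocity and displacement from trunk acceleration is a double numerical integration. The sketch below is a simplification that assumes the acceleration is already expressed in a gravity-free ground frame, which the actual processing pipeline would have to reconstruct from raw sensor orientation data, and it ignores integration drift:

    # Illustrative sketch (not the thesis pipeline): horizontal velocity and
    # displacement by trapezoidal integration of trunk acceleration.
    import numpy as np

    def integrate_horizontal(acc_x: np.ndarray, fs: float):
        """Cumulative trapezoidal integration: acceleration -> velocity -> displacement."""
        dt = 1.0 / fs
        vel = np.concatenate(([0.0], np.cumsum((acc_x[1:] + acc_x[:-1]) * 0.5 * dt)))
        disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * 0.5 * dt)))
        return vel, disp

    fs = 200.0                                    # assumed sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    acc = np.where(t < 1.0, 4.0, 0.0)             # toy sprint start: 4 m/s^2 for 1 s
    vel, disp = integrate_horizontal(acc, fs)
    print(round(vel[-1], 2), round(disp[-1], 2))  # ~4.0 m/s, ~6.0 m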


Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
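
To make the storage argument concrete, the classic packed layout for triangular matrices, sketched below in Python, stores only the n(n+1)/2 potentially non-zero entries. Note this is the textbook scheme, not the GPU-friendly scheme the thesis proposes:

    # Sketch of classic row-major packed storage for a lower-triangular matrix.
    # A dense n x n lower-triangular matrix needs only n*(n+1)/2 stored entries.
    import numpy as np

    def packed_index(i: int, j: int) -> int:
        """Index of element (i, j), with j <= i, in row-major packed storage."""
        return i * (i + 1) // 2 + j

    def pack_lower(a: np.ndarray) -> np.ndarray:
        n = a.shape[0]
        packed = np.empty(n * (n + 1) // 2, dtype=a.dtype)
        for i in range(n):
            for j in range(i + 1):
                packed[packed_index(i, j)] = a[i, j]
        return packed

    a = np.tril(np.arange(16, dtype=float).reshape(4, 4))
    p = pack_lower(a)
    assert p[packed_index(3, 1)] == a[3, 1]
    print(len(p))   # 10 entries instead of 16

A GPU-friendly variant must additionally worry about coalesced access patterns across threads, which is precisely what motivates a custom scheme over this naive one.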


Abstract:

One of the most important problems in inertial confinement fusion is how to mitigate the onset of the Rayleigh-Taylor instability which arises at the ablation front during compression. This thesis studies in detail the possibility of using for this purpose the well-known mechanism of dynamic stabilization, already applied to other dynamical systems such as the inverted pendulum. In this context, a periodic acceleration superposed on the background gravity generates a vertical vibration of the ablation front itself. The effects of different driving modulations (Dirac deltas and square waves) are analyzed from a theoretical point of view, with a focus on the stabilization of ion-beam-driven ablation fronts, and a comparison is made in order to look for an optimum.
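
For orientation, the textbook inverted-pendulum (Kapitza) analogue of this mechanism can be stated compactly; this is the classical pendulum result, not the ablation-front analysis itself. With the pivot vibrated vertically as $y_p = a\cos\omega t$, the linearized equation of motion near the inverted position is

    \ddot{\theta} = \frac{1}{\ell}\left(g + a\,\omega^{2}\cos\omega t\right)\theta ,

and averaging over the fast oscillation shows that the inverted position becomes stable when

    a^{2}\omega^{2} > 2\,g\,\ell ,

i.e. a sufficiently fast and strong vibration stabilizes an otherwise unstable equilibrium, the same mechanism invoked here for the ablation front.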


Abstract:

The needs of customers to improve machinery have in recent years driven tractor manufacturers to reduce product life and development costs. The most significant efforts have concentrated on the attempt to decrease the costs of the experimental testing sector. The validation of tractor prototypes is presently performed by replicating a particularly unfavourable condition a defined number of times. These laboratory tests do not always faithfully reproduce the real use of the tractor. Therefore, field tests are also carried out to evaluate the prototype during real use, but it is difficult to run such tests for a period long enough to reproduce a tractor’s lifetime usage. In this context, accelerated tests, which produce a given amount of damage to the structure in a reduced amount of time, have been introduced in the automotive sector. The goal of this paper is to define a methodology for carrying out accelerated structural tests on a tractor through the reproduction of real customer tractor usage. A market analysis was performed for an 80 kW tractor, and a series of measurements was then acquired to characterize the real use of the tractor. Subsequently, the rainflow matrices of the signals were extrapolated and used to estimate the tractor loadings over 10 years of tractor life. Finally, these loadings were reproduced on testing grounds with special road pavements. The results obtained highlight the possibility of reproducing field loadings while driving on proving grounds (PGs), but the use of two field operations is also necessary. The global acceleration factor obtained in this first step of the methodology is equal to three.
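
The damage bookkeeping behind such accelerated tests is conventionally done with rainflow counting combined with the Palmgren-Miner rule. The sketch below illustrates the principle with invented S-N parameters and toy rainflow matrices; none of these numbers come from the paper:

    # Illustrative Palmgren-Miner damage accumulation over rainflow matrices.
    # Damage D = sum(n_i / N_i), with N_i = C * S_i**(-m) from an assumed S-N curve.

    C, m = 1e12, 3.0          # hypothetical S-N curve constants

    def cycles_to_failure(stress_range: float) -> float:
        return C * stress_range ** (-m)

    def miner_damage(rainflow_counts: dict[float, float]) -> float:
        """rainflow_counts maps stress range (MPa) -> counted cycles."""
        return sum(n / cycles_to_failure(s) for s, n in rainflow_counts.items())

    field_10_years = {40.0: 2e6, 80.0: 2e5, 120.0: 1e4}   # toy 10-year field matrix
    proving_ground = {60.0: 5e5, 110.0: 4e4}              # toy PG test matrix

    d_field = miner_damage(field_10_years)
    d_pg = miner_damage(proving_ground)
    print(f"field damage {d_field:.3f}, PG damage {d_pg:.3f}")
    # The PG test duration is then scaled until its damage matches the field damage;
    # the ratio of durations gives the acceleration factor.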


Abstract:

Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry has developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of migrating to cache-equipped processors. Caches are perceived as an additional source of complexity with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
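
To see why memory layout alone can create or remove cache jitter, consider a toy model of a direct-mapped instruction cache. This is a deliberately simplified illustration of the problem the layout optimisation addresses, not the thesis’s method:

    # Toy model: in a direct-mapped instruction cache, two hot functions whose
    # addresses map to the same cache set evict each other on every call,
    # so moving one of them in memory removes the jitter entirely.

    CACHE_SETS = 256
    LINE_BYTES = 32

    def cache_set(addr: int) -> int:
        return (addr // LINE_BYTES) % CACHE_SETS

    def conflicts(layout: dict[str, int], hot_pairs: list[tuple[str, str]]) -> int:
        """Count hot caller/callee pairs placed in the same cache set."""
        return sum(cache_set(layout[a]) == cache_set(layout[b]) for a, b in hot_pairs)

    hot_pairs = [("isr", "filter"), ("filter", "log")]
    bad = {"isr": 0x0000, "filter": 0x2000, "log": 0x4000}   # all map to set 0
    good = {"isr": 0x0000, "filter": 0x2020, "log": 0x4040}  # staggered placement
    print(conflicts(bad, hot_pairs), conflicts(good, hot_pairs))  # 2 0

A layout optimiser generalizes this idea: given call-frequency information, it searches for a placement that minimizes such conflicts, and does so incrementally so that adding code does not reshuffle already-qualified functions.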


Abstract:

The present work consists of an investigation of the navigation of the Pioneer 10 and 11 probes and of what became known as the “Pioneer Anomaly”: the trajectories followed by the spacecraft did not match the ones retrieved with standard navigation software. The mismatch appeared as a linear drift in the Doppler data received from the spacecraft, which has been ascribed to a constant sunward acceleration of about 8.5×10⁻¹⁰ m/s². The study presented hereafter tries to find a convincing explanation for this discrepancy. The research is based on the analysis of Doppler tracking data through the ODP (Orbit Determination Program) developed by NASA/JPL. The method can be summarized as follows: search for any kind of physics affecting the dynamics of the spacecraft or the propagation of radiometric data that may not have been properly taken into account previously, and check whether or not it might rule out the anomaly. A major effort went into building a thermal model of the spacecraft for predicting the force due to anisotropic thermal radiation, since this model is not natively included in the ODP. Tracking data encompassing more than twenty years of the Pioneer 10 interplanetary cruise, plus twelve years of Pioneer 11, have been analyzed in light of the results of the thermal model. Different orbit determination strategies have been implemented, including single-arc, multi-arc and stochastic filters, and their performance compared. Orbital solutions have been obtained without the need for any acceleration other than the thermal recoil one, indicating it as responsible for the observed linear drift in the Doppler residuals. As further support, we checked that the inclusion of an additional constant acceleration does not improve the quality of the orbital solutions. All the tests performed lead to the conclusion that no anomalous acceleration is acting on the Pioneer spacecraft.
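
A back-of-the-envelope check, with round numbers chosen here purely for illustration rather than taken from the thesis, shows why thermal recoil is a plausible culprit. A radiated power $P$ emitted with a net fore-aft anisotropy fraction $\eta$ produces a recoil acceleration

    a = \frac{\eta P}{m c} .

With $P \approx 2.5\,\mathrm{kW}$ of RTG thermal power and $m \approx 250\,\mathrm{kg}$, one has $P/(mc) \approx 3.3\times10^{-8}\,\mathrm{m/s^{2}}$, so a net anisotropy of only a few percent ($\eta \approx 0.025$) already yields $\approx 8\times10^{-10}\,\mathrm{m/s^{2}}$, the observed magnitude.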


Abstract:

The aim of this study was to investigate the influence of diaphragm flexibility on the behaviour of out-of-plane walls in masonry buildings. Simplified models have been developed to perform kinematic and dynamic analyses in order to compare the response of walls with different restraint conditions. Non-linear kinematic analyses of assemblages of rigid blocks have been performed to obtain the acceleration-displacement curves for walls with different restraint conditions at the top. A simplified 2-DOF model has been developed to analyse the dynamic response of a wall with an elastic spring at the top, following Housner’s rigid-behaviour hypothesis. The dissipation of energy is concentrated at each impact at the base of the wall and is modelled through the introduction of a coefficient of restitution. The sets of equations for the possible configurations of the wall, depending on the different positions of the centre of rotation at the base and at the intermediate hinge, have been obtained. An algorithm for the numerical integration of the sets of equations of motion in the time domain has been developed. Dynamic analyses of a set of walls under Gaussian impulse and recorded accelerogram inputs have been performed in order to compare the response of the simply supported wall with that of the wall with an elastic spring at the top. The influence of the diaphragm stiffness Kd has been investigated by determining the variation of the maximum displacement demand with the value of Kd. A more regular trend was obtained for the Gaussian input than for the recorded accelerograms.
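
The time-domain integration scheme can be illustrated on the simplest case, a single rocking block in Housner’s model, where all energy loss occurs at impact through the restitution coefficient. The sketch below is a minimal illustration with made-up parameters, not the thesis algorithm:

    # Minimal sketch of Housner-type free rocking with dissipation concentrated
    # at impacts via a restitution coefficient (illustrative parameters only).
    # Linearized slender-block equation while rocking:
    #   th'' = p**2 * (th - alpha * sgn(th));  at each zero crossing: th' <- e * th'.
    import math

    p, alpha, e = 2.0, 0.15, 0.9   # frequency parameter (1/s), slenderness (rad), restitution
    dt, t_end = 1e-4, 5.0

    th, om = 0.10, 0.0             # initial tilt (rad), initial angular velocity
    impacts, t = 0, 0.0
    while t < t_end:
        om += p**2 * (th - alpha * math.copysign(1.0, th)) * dt
        th_new = th + om * dt
        if th * th_new < 0.0:      # impact: pole of rotation switches, velocity reduced
            om *= e
            impacts += 1
        th = th_new
        t += dt
    print(impacts, round(th, 4))   # decaying rocking: several impacts, shrinking amplitude

The 2-DOF wall-plus-spring model of the thesis follows the same event-based logic, with additional equation sets selected according to the active centres of rotation.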


Abstract:

During my PhD, starting from the original formulations proposed by Bertrand et al., 2000 and Emolo & Zollo, 2005, I developed inversion methods and applied them to different earthquakes. In particular, large efforts have been devoted to the study of model resolution and to the estimation of the model parameter errors. To study the kinematic source characteristics of the Christchurch earthquake, we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions 15×11 and 7×7 km², having different faulting styles. This two-fault model makes it possible to better reconstruct the complex shape of the surface deformation data. The total seismic moment resulting from the joint inversion is 3.0×10²⁵ dyne·cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30%. The 2009 L’Aquila earthquake was characterized by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that takes the apparent Source Time Functions (aSTFs) as data to a Mw 4.0 aftershock of the L’Aquila sequence. The aSTFs were estimated using the deconvolution method proposed by Vallée et al., 2004. The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture velocity distribution (mean value of 2.5 km/s), showing a rupture front acceleration between the two high-slip zones. Errors of about 20% characterize the final estimated parameters.
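
As a consistency check of the quoted values, the standard Hanks-Kanamori moment magnitude relation, with $M_0$ in dyne·cm,

    M_w = \frac{2}{3}\log_{10} M_0 - 10.7 = \frac{2}{3}\log_{10}\!\left(3.0\times10^{25}\right) - 10.7 \approx 6.3 ,

agrees, to rounding, with the Mw 6.2 reported above.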


Abstract:

The exposure of agricultural operators to whole-body vibration produces harmful health effects in both the short and the long term. The vibrations generated on agricultural tractors have high intensity and low frequency. The horizontal components, amplified by the elevated position of the driving seat above the roll axis, are more critical with respect to damping systems than the vertical components. These characteristics make it difficult to design systems dedicated to reducing the vibration level for this category of agricultural machines. Despite the installation of several damping systems, the vibration level to which the operator is subjected can exceed, in various conditions of use, the maximum levels imposed by law for the protection of health. The objective of this work is to evaluate the influence of the rigid-body motions of a tractor (pitch, roll and bounce) equipped with front axle suspension, cab suspension and seat suspension on the vibration level transmitted to the operator. A tractor was therefore instrumented with accelerometers and inclinometers installed on the chassis, cab and seat, and was used in different conditions of field work and road transport. The analysis of the tests shows that during road transport the longitudinal acceleration is predominant, owing to the strong influence of pitch. The suspension considerably reduces the rigid pitching motion, whereas the effect of the cab suspension is to increase, in every working condition, the acceleration level transmitted by the chassis of the machine.
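
A simplified sketch of the per-axis comparison underlying such measurements is given below. It computes unweighted RMS only; a real whole-body vibration assessment applies the ISO 2631-1 frequency weightings and axis multipliers, which are omitted here, and the signals are synthetic:

    # Simplified per-axis vibration severity comparison (unweighted RMS; the
    # ISO 2631-1 frequency weightings used in real assessments are omitted).
    import numpy as np

    def rms(signal: np.ndarray) -> float:
        return float(np.sqrt(np.mean(signal ** 2)))

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    ax = 0.8 * np.sin(2 * np.pi * 2.0 * t)    # toy longitudinal component (pitch-driven)
    az = 0.4 * np.sin(2 * np.pi * 4.0 * t)    # toy vertical component
    print(f"longitudinal RMS {rms(ax):.2f} m/s^2, vertical RMS {rms(az):.2f} m/s^2")
    # Here the longitudinal axis dominates, as observed during road transport.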


Abstract:

The purpose of this work is to present the studies carried out and the results obtained during the research activity on the Displacement-Based Assessment (DBA) of reinforced concrete frames. After some initial considerations on seismic vulnerability and on methods of analysis and verification, the method is described theoretically. Three case studies of plane frames were analysed, designed for vertical loads only and according to codes no longer in force, which did not require the application of capacity design. The frames considered, intended for residential use, differ in height, number of storeys and number of bays. The method was applied, the seismic vulnerability was evaluated against a displacement demand given by an elastic spectrum as per EC8, and the results were validated by means of non-linear static and dynamic analyses and through the application of the theorems of limit analysis of frames, proposed as an alternative procedure for determining the inelastic mechanism and the base shear capacity. Finally, the DBA procedure was applied to evaluate the seismic vulnerability of a school building, built between 1969 and 1975 on a site characterized by a peak horizontal acceleration of 0.24g and a 10% probability of exceedance in 75 years.
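
For reference, under the usual Poisson assumption the hazard level quoted above (10% probability of exceedance in 75 years) corresponds to a return period of

    T_R = -\frac{t}{\ln(1-p)} = -\frac{75}{\ln(0.90)} \approx 712\ \text{years} .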