9 results for real-scale battery

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 80.00%

Abstract:

For cast-in-place reinforced concrete constructions, the most commonly used structural systems are moment-resisting frames, load-bearing walls, or a combination of the two. Since the 1960s, numerous studies have addressed the seismic behaviour of RC frame structures, and the same can be said for buildings combining walls and frames. In particular, the seismic design of these building types has mainly concerned tall buildings, in which walls were evidently employed to limit the high deformability of the frame. The seismic behaviour of structures made entirely of RC load-bearing walls has been studied less over the years, even though buildings built with such structural systems have generally shown remarkable strength reserves against earthquakes, even of high intensity. Over the last 10 years, earthquake engineering has focused on investigating the resources of construction types that have long been widely used (typically in continental Europe, Latin America, the USA and also Italy), but for which adequate scientific knowledge of their behaviour in seismic zones was lacking. These types essentially consist of structural systems made entirely of RC load-bearing walls for low-rise buildings, usually employed in low-cost construction (residential and/or office buildings). The general objective of the research presented here is the study of the seismic behaviour of low-rise structures made entirely of RC load-bearing walls (low-cost construction).
In particular, the walls studied here are characterised by low geometric reinforcement ratios and are built with stay-in-place formwork technology. To the author's knowledge, no experimental or analytical studies have been carried out to date to determine the seismic behaviour of these structural systems, whereas their static behaviour is well known. In detail, this research work has the twofold aim of:
• obtaining a structural system with high seismic performance;
• developing practical design tools (consistent and compatible with current codes, and therefore immediately usable by practitioners) for the seismic design of the RC load-bearing panels studied here.
In order to study the seismic behaviour and identify practical design tools, the research was organised as follows:
• identification of the characteristics of the structures studied, through the development/specialisation of suitable analytical formulations;
• design, supervision and interpretation of an extensive campaign of experimental tests on full-scale RC load-bearing walls, in order to verify their effective behaviour under cyclic loading;
• development of simple design indications (rules) for the RC wall structures studied, in order to obtain the desired performance characteristics.
The experimental results were in agreement with the analytical predictions, confirming the validity of the tools for predicting the behaviour of these panels. The very high performance observed, in terms of both strength and ductility, showed that the structures studied, as developed here, exhibit a more than satisfactory seismic behaviour.

Relevance: 30.00%

Abstract:

Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of migrating to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability introduced by caches. Conversely, applying advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of the software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of these constituents into a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of system scale and complexity.

Relevance: 30.00%

Abstract:

Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is key to reducing run times in large-scale and high-detail applications. The two models were first applied to several numerical test cases, to assess the reliability and accuracy of the different model versions. The most effective versions were then applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed, due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
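For reference, the diffusive (zero-inertia) shallow-water formulation that such models build on can be written in a standard form as follows; this is the textbook version, and the exact discretized equations used by CA2D and IFD-GGA may differ:

```latex
% Mass conservation over the 2D flow domain (h = water depth, q = unit discharge):
\frac{\partial h}{\partial t}
  + \frac{\partial q_x}{\partial x}
  + \frac{\partial q_y}{\partial y} = 0
% Momentum reduced to a balance between friction slope and free-surface slope,
% closed with Manning's formula (n = roughness coefficient, H = z + h = free surface):
\mathbf{q} = -\,\frac{h^{5/3}}{n}\,
  \frac{\nabla H}{\left|\nabla H\right|^{1/2}}
```

Neglecting the inertial terms in this way is what makes the scheme simple enough for massive parallelization, at the cost of local errors where flow accelerations matter.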

Relevance: 30.00%

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter: the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger set of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.

Relevance: 30.00%

Abstract:

The energy released during a seismic crisis in volcanic areas is strictly related to the physical processes within the volcanic structure. In particular, Long Period (LP) seismicity, which seems to be related to the oscillation of a fluid-filled crack (Chouet, 1996; Chouet, 2003; McNutt, 2005), can precede or accompany an eruption. The present doctoral thesis is focused on the study of the LP seismicity recorded at the Campi Flegrei volcano (Campania, Italy) during the October 2006 crisis. Campi Flegrei is an active caldera; the combination of an active magmatic system and a densely populated area makes Campi Flegrei a critical volcano. The source dynamics of LP seismicity are thought to be very different from those of other kinds of seismicity (tectonic or volcano-tectonic): LP events are characterized by a time-sustained source and a low frequency content. These features imply that the duration magnitude, which is commonly used for VT events and sometimes for LPs as well, is unsuitable for LP magnitude evaluation. The main goal of this doctoral work was to develop a method for determining the magnitude of LP seismicity. It is based on comparing the energy of a VT event with that of an LP event, linking the energy to the VT moment magnitude: the magnitude of the LP event is thus defined as the moment magnitude of a VT event with the same energy as the LP. We applied this method to the LP data set recorded at the Campi Flegrei caldera in 2006, to an LP data set recorded at the Colima volcano in 2005-2006, and to an event recorded at the Etna volcano. Applying this method to a large number of waveforms recorded at different volcanoes, we verified its ease of application and, consequently, its usefulness in the routine and quasi-real-time work of a volcanological observatory.
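The energy-equivalence idea described above can be sketched in a few lines. The sketch below assumes the classical Gutenberg-Richter energy-magnitude relation log10(E) = 1.5*M + 4.8 (E in joules) as the link between radiated energy and magnitude; the thesis calibrates this link empirically on VT events, so the constants and the function name here are illustrative, not those of the actual work:

```python
import math

def energy_magnitude(energy_joules):
    """Energy-equivalent magnitude of an LP event: defined as the moment
    magnitude of a VT event radiating the same seismic energy.

    Uses the classical Gutenberg-Richter relation
        log10(E) = 1.5 * M + 4.8   (E in joules),
    inverted for M. Constants are the textbook values, for illustration only.
    """
    return (math.log10(energy_joules) - 4.8) / 1.5
```

For example, an event radiating about 2.0e12 J maps to a magnitude of roughly 5 under this relation; the key point is that the same energy measurement applies to both VT and LP waveforms, sidestepping the duration-magnitude scale.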

Relevance: 30.00%

Abstract:

The wide diffusion of cheap, small, and portable sensors, integrated in an unprecedentedly large variety of devices, and the availability of almost ubiquitous Internet connectivity make it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely fashion, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains, such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.

Relevance: 30.00%

Abstract:

The present doctoral thesis discusses ways to improve the performance of driving simulators, provides objective measures for a road-safety evaluation methodology based on driver behavior and response, and investigates drivers' adaptation to driving assistance systems. The activities are divided into two macro areas: driving simulation studies and on-road experiments. During the driving simulation experimentation, a classical motion cueing algorithm with a logarithmic scale was implemented in the 2-DOF motion cueing simulator, and the motion cues were found desirable by the participants. In addition, it was found that motion stimuli can change driver behaviour in terms of depth/distance perception. During the on-road experimentation, driver gaze behaviour was investigated to obtain objective measures of the visibility of road signs and of driver reaction times. Sensor fusion and the vehicle monitoring instruments were found useful for an objective assessment of the pavement condition and of driver performance. In the last chapter of the thesis, safety assessment during the use of level 1 automated driving (ACC) is discussed through both a simulator and an on-road experiment. Driver visual behaviour was investigated in both studies with an innovative classification method to identify episodes of driver distraction. The behavioural adaptation to ACC showed that drivers may divert their attention away from the driving task to engage in secondary, non-driving-related tasks.

Relevance: 30.00%

Abstract:

The idea behind the project is to develop a methodology for analyzing and developing techniques for the diagnosis and prediction of the state of charge and state of health of lithium-ion batteries for automotive applications. For lithium-ion batteries, residual functionality is measured in terms of state of health; this value, however, cannot be directly measured, so it must be estimated. The development of the algorithms is based on the identification of the causes of battery degradation, in order to model and predict its trend. Models have therefore been developed that are able to predict the electrical, thermal and aging behavior of the battery. In addition to the models, it was necessary to develop algorithms capable of monitoring the state of the battery, both online and offline. This was made possible by algorithms based on Kalman filters, which allow the estimation of the system state in real time. Machine learning algorithms, which allow offline analysis of battery deterioration using a statistical approach, make it possible to analyze information from the entire fleet of vehicles. The two systems work in synergy in order to achieve the best performance. Validation was performed with laboratory tests on different batteries and under different conditions. The development of the models made it possible to reduce the duration of the experimental tests: some specific phenomena were tested in the laboratory, while the other cases were generated artificially.
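To give a flavour of the Kalman-filter-based online monitoring mentioned above, here is a minimal sketch of state-of-charge (SOC) estimation with a scalar Kalman filter. It assumes a toy one-state model (coulomb counting as the process, a linearized open-circuit-voltage curve as the measurement); the function name and all parameter values are hypothetical, and the actual thesis models are far richer, covering electrical, thermal and aging behavior:

```python
# Toy scalar Kalman filter for battery state-of-charge estimation.
# Process model:      SOC_{k+1} = SOC_k - I_k * dt / Q_cap   (coulomb counting)
# Measurement model:  V_k = a * SOC_k + b + noise            (linearized OCV curve)
# All constants below are illustrative, not taken from any real cell.

def soc_kalman(currents, voltages, dt=1.0, Q_cap=3600.0,
               a=0.7, b=3.3, soc0=0.5, P0=0.1, q=1e-6, r=1e-3):
    soc, P = soc0, P0            # state estimate and its variance
    estimates = []
    for I, V in zip(currents, voltages):
        # Predict: integrate the measured current (coulomb counting).
        soc = soc - I * dt / Q_cap
        P = P + q                # process noise inflates the uncertainty
        # Update: correct using the voltage measurement.
        K = P * a / (a * a * P + r)          # Kalman gain
        soc = soc + K * (V - (a * soc + b))  # innovation = measured - predicted V
        P = (1.0 - K * a) * P
        estimates.append(soc)
    return estimates
```

Starting from a deliberately wrong initial guess, the voltage corrections pull the estimate toward the SOC implied by the measured terminal voltage; the same predict/update structure extends to the multi-state electro-thermal models used in practice.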