14 results for Phenomena and statements

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The cone penetration test (CPT), together with its recent variation (CPTU), has become the most widely used in-situ testing technique for soil profiling and geotechnical characterization. The knowledge gained over the last decades on interpretation procedures in sands and clays is certainly extensive, whereas very few contributions can be found regarding the analysis of CPT(u) data in intermediate soils. Indeed, it is widely accepted that at the standard rate of penetration (v = 20 mm/s), drained penetration occurs in sands while undrained penetration occurs in clays. However, a problem arises when the available interpretation approaches are applied to cone measurements in silts, sandy silts, silty or clayey sands, since such intermediate geomaterials are often characterized by permeability values within the range in which partial drainage is very likely to occur. Hence, the application of the available and well-established interpretation procedures, developed for ‘standard’ clays and sands, may result in invalid estimates of soil parameters. This study aims at providing a better understanding of the interpretation of CPTU data in natural sand and silt mixtures, by taking into account two main aspects: 1) investigating the effect of penetration rate on piezocone measurements, with the aim of identifying drainage conditions when cone penetration is performed at the standard rate. This part of the thesis has been carried out with reference to a specific CPTU database recently collected in a liquefaction-prone area (Emilia-Romagna Region, Italy). 2) Providing better insight into the interpretation of piezocone tests in the widely studied silty sediments of the Venetian lagoon (Italy). Research has focused on the calibration and verification of some site-specific correlations, with special reference to the estimation of compressibility parameters for the assessment of long-term settlements of the Venetian coastal defences.
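
A common way to make the drainage argument above quantitative (a standard tool in variable-rate CPTU studies, not this thesis's specific calibration) is the dimensionless penetration velocity V = v·d/c_v, where d is the cone diameter and c_v the coefficient of consolidation. A minimal sketch, with illustrative order-of-magnitude thresholds and c_v values:

```python
# Sketch: classifying drainage conditions during cone penetration via the
# normalized velocity V = v * d / c_v. The classification limits and the
# c_v values below are assumed, order-of-magnitude illustrations only.

def normalized_velocity(v_mm_s: float, d_mm: float, cv_mm2_s: float) -> float:
    """V = v * d / c_v, all quantities in consistent units (mm, s)."""
    return v_mm_s * d_mm / cv_mm2_s

def drainage_condition(V: float, drained_limit: float = 0.05,
                       undrained_limit: float = 30.0) -> str:
    """Rough classification; the limits are assumed indicative values."""
    if V < drained_limit:
        return "drained"
    if V > undrained_limit:
        return "undrained"
    return "partially drained"

d = 35.7   # cone diameter in mm for a standard 10 cm^2 cone
v = 20.0   # standard penetration rate, mm/s
for label, cv in [("sand", 1e6), ("silt", 1e2), ("clay", 1e-1)]:
    V = normalized_velocity(v, d, cv)
    print(label, drainage_condition(V))
```

With these illustrative c_v values the standard rate gives drained penetration in sand, undrained in clay, and partially drained conditions in silt, which is exactly the interpretation problem the thesis addresses.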

Relevance:

100.00%

Publisher:

Abstract:

This thesis analyzes the impact of heat extremes in urban and rural environments, considering processes related to severely high temperatures and unusual dryness. The first part deals with the influence of large-scale heatwave events on the local-scale urban heat island (UHI) effect. The temperatures recorded over a 20-year summer period by meteorological stations in 37 European cities are examined to evaluate the variations of the UHI during heatwaves with respect to non-heatwave days. A statistical analysis reveals a negligible impact of large-scale extreme temperatures on the local daytime urban climate, but a notable exacerbation of the UHI effect at night. A comparison with the UrbClim model outputs confirms the UHI strengthening during heatwave episodes, with an intensity independent of the climate zone. The investigation of the relationship between large-scale temperature anomalies and the UHI highlights a smooth and continuous dependence, though with strong variability. The lack of a threshold behavior in this relationship suggests that large-scale temperature variability can affect the local-scale UHI even in conditions other than extreme events. The second part examines the transition from meteorological to agricultural drought, the first stage of the drought propagation process. A multi-year reanalysis dataset covering numerous drought events over the Iberian Peninsula is considered. The behavior of different non-parametric standardized drought indices in drought detection is evaluated. A statistical approach based on run theory is employed to analyze the main characteristics of drought propagation. The propagation from meteorological to agricultural drought events is found to develop in about 1-2 months. The duration of agricultural drought appears shorter than that of meteorological drought, but its onset is delayed. The propagation probability increases with the severity of the originating meteorological drought. A new combined agricultural drought index is developed as a tool for balancing the characteristics of the other indices adopted.
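
As an illustration of the class of tools the second part evaluates, a non-parametric standardized index can be built by mapping empirical (Gringorten) probabilities to standard-normal quantiles, and drought events can then be extracted with run theory. This is a generic sketch of the approach, not the thesis's exact index:

```python
# Sketch of a non-parametric standardized drought index plus run-theory
# event extraction. The Gringorten plotting position and the -1.0 event
# threshold are common literature choices, assumed here for illustration.
from statistics import NormalDist

def standardized_index(values):
    """Map each value to a standard-normal quantile of its empirical rank."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                      # rank 1 = smallest value
    probs = [(r - 0.44) / (n + 0.12) for r in ranks]   # Gringorten
    nd = NormalDist()
    return [nd.inv_cdf(p) for p in probs]

def drought_events(z, threshold=-1.0):
    """Run theory: an event is a run of consecutive values below threshold.

    Returns a list of (onset index, duration) pairs."""
    events, start = [], None
    for t, v in enumerate(z):
        if v < threshold and start is None:
            start = t
        elif v >= threshold and start is not None:
            events.append((start, t - start))
            start = None
    if start is not None:
        events.append((start, len(z) - start))
    return events

precip = [42.0, 10.0, 75.0, 55.0, 5.0, 60.0, 33.0, 90.0]
z = standardized_index(precip)            # driest month -> most negative z
print(drought_events([-0.5, -1.2, -1.5, 0.3, -1.1]))  # → [(1, 2), (4, 1)]
```

Comparing onset indices and durations of meteorological and agricultural event lists extracted this way is the essence of the propagation analysis described above.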

Relevance:

90.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, as well as the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest towards power quality in electrical systems is illustrated, by reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged as a concern only in recent years, in particular as a consequence of the energy market liberalization. Usually, the concept of quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve the system reliability too. Given the vast scenario of power quality degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the monitored network's reliability, since knowledge of the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions.
In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the global measurement system is characterized by considering the non-ideal aspects of each device that contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. In the last chapter a device is described that was designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study aimed at providing an alternative to the transducer used, with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method's application much more feasible.
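
The core geometric idea behind locating a fault from transient arrival times at synchronized stations can be sketched in the simplest two-station case. The function names, units and propagation speed below are illustrative assumptions; the thesis's distributed, multi-station method is considerably more elaborate:

```python
# Minimal two-station sketch of traveling-wave fault location: the transient
# launched at the fault reaches the two line ends at different times, and the
# arrival-time difference yields the fault position. Names, units and the
# propagation speed are assumptions for illustration only.

def fault_location(L_km: float, t1_s: float, t2_s: float,
                   v_km_s: float = 2.0e5) -> float:
    """Distance of the fault from station 1.

    L_km     : line length between the two monitoring stations
    t1_s/t2_s: transient arrival times at stations 1 and 2, on a common
               clock (e.g. GPS-synchronized, as such systems require)
    v_km_s   : assumed propagation speed of the transient (~2/3 c here)

    Geometry: t1 = x/v, t2 = (L - x)/v  =>  x = (L + v*(t1 - t2)) / 2
    """
    return (L_km + v_km_s * (t1_s - t2_s)) / 2.0

# Fault 12 km from station 1 on a 30 km line:
t1 = 12.0 / 2.0e5
t2 = (30.0 - 12.0) / 2.0e5
print(fault_location(30.0, t1, t2))  # ≈ 12.0
```

The timing precision of the remote stations directly bounds the location accuracy (at 2e5 km/s, 1 µs of clock error is 0.2 km of position error), which is why the combined-uncertainty analysis of chapter 5 matters.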

Relevance:

90.00%

Publisher:

Abstract:

Abstract. This thesis presents a discussion of a few specific topics regarding the low velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations with a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions or the support conditions of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most of the contributions available on this subject, the results of compression after impact tests on thin laminates are described, in which global specimen buckling was not prevented.
Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. On the other hand, a compressive preload exhibits its most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of obtaining a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted.
Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results only as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and in distinguishing between what can be modelled without taking into account material degradation and what requires an appropriate simulation of damage.

Relevance:

90.00%

Publisher:

Abstract:

The object of the present study is the process of gas transport in nano-sized materials, i.e. systems having structural elements of the order of nanometers. The aim of this work is to advance the understanding of the gas transport mechanism in such materials, for which traditional models are often unsuitable, by providing a correct interpretation of the relationship between diffusive phenomena and structural features. This result would allow the development of new materials with permeation properties tailored to the specific application, especially in packaging systems. The methods used to achieve this goal were a detailed experimental characterization and different simulation methods. The experimental campaign concerned the determination of oxygen permeability and diffusivity in different sets of organic-inorganic hybrid coatings prepared via the sol-gel technique. The polymeric samples coated with these hybrid layers showed a remarkable enhancement of barrier properties, which was explained by the strong interconnection at the nano-scale between the organic moiety and silica domains. An analogous characterization was performed on microfibrillated cellulose films, which present a remarkable barrier effect toward oxygen when dry, while in the presence of water the performance drops significantly. The very low value of water diffusivity at low activities is also an interesting characteristic related to their structural properties. Two different simulation approaches were then considered: the diffusion of oxygen through polymer-layered silicates was modeled on a continuum scale with CFD software, while the properties of n-alkanethiolate self-assembled monolayers on gold were analyzed from a molecular point of view by means of a molecular dynamics algorithm.
Modeling transport properties in layered nanocomposites, resulting from the ordered dispersion of impermeable flakes in a 2-D matrix, allowed the calculation of the enhancement of the barrier effect in relation to the platelets' structural parameters, leading to a new expression. On this basis, randomly distributed systems were simulated and the results were analyzed to evaluate the different contributions to the overall effect. The study of more realistic three-dimensional geometries revealed a perfect correspondence with the 2-D approximation. A completely different approach was applied to simulate the effect of temperature on oxygen transport through self-assembled monolayers; the structural information obtained from equilibrium MD simulations showed that raising the temperature makes the monolayer less ordered and consequently less crystalline. This disorder produces a decrease in the barrier free energy and lowers the overall resistance to oxygen diffusion, making the monolayer more permeable to small molecules.
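
For context, the classical Nielsen tortuosity model is the standard closed-form baseline relating barrier improvement to platelet volume fraction and aspect ratio in aligned-flake composites; it is the kind of expression such simulation studies compare against, not the new expression derived in the thesis. A sketch of the baseline:

```python
# Classical Nielsen tortuosity model: relative permeability of a matrix
# filled with aligned impermeable platelets. Shown here only as the
# standard literature baseline, not as the thesis's own expression.

def nielsen_relative_permeability(phi: float, alpha: float) -> float:
    """P_composite / P_matrix for platelet volume fraction phi and aspect
    ratio alpha (platelet width / thickness), platelets aligned
    perpendicular to the diffusive flux."""
    return (1.0 - phi) / (1.0 + alpha * phi / 2.0)

# 5 vol% of high-aspect-ratio platelets (alpha = 100) cuts the oxygen
# permeability to roughly 27% of the neat matrix value:
print(round(nielsen_relative_permeability(0.05, 100.0), 3))  # → 0.271
```

The model captures the key qualitative point in the abstract: barrier enhancement grows with both loading and aspect ratio, because diffusing molecules must follow a longer, more tortuous path around the flakes.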

Relevance:

90.00%

Publisher:

Abstract:

The present work aims to provide a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, with the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is also provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and the interaction of such formalities contained in general corporate, trust and assignment law with those contemplated under specific securitization regulations. The next Chapter (III) then focuses on creditor protection aspects. As such, we provide some insights into the debate on the capital structure of the firm, and its inadequacy for assessing the financial soundness problems inherent to securitization. We then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applying in case of insolvency.
In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil piercing, substantive consolidation, or recharacterization theories. Once these elements have been covered, the next Chapters focus on the regulatory aspects involved in the transaction. Chapter IV deals with “market” regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on the “prudential” regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of the other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restriction of activities and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices and the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized asset from the originating company's balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties.
Chapter VIII addresses the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on the third parties entrusted with the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we place special emphasis on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflict-of-laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), and to liability issues; a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases, as well as in cases where a substance-over-form analysis is performed. Special attention is also devoted to the role of financial and securities regulations, as well as to their territorial limits and the extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits to the States' exercise of regulatory power set by the personal and “market” freedoms included in the US Constitution or the EU Treaties. Reference is also made to the (still insufficient) rules of the WTO Framework, and their significance for the States' recognition and regulation of securitization transactions.

Relevance:

90.00%

Publisher:

Abstract:

Semiconductor technologies are rapidly evolving, driven by the need for higher performance demanded by applications. Thanks to the numerous advantages it offers, gallium nitride (GaN) is quickly becoming the technology of reference in the field of power amplification at high frequency. The RF power density of AlGaN/GaN HEMTs (High Electron Mobility Transistors) is an order of magnitude higher than that of gallium arsenide (GaAs) transistors. The first demonstration of GaN devices dates back only to 1993. Although over the past few years some commercial products have started to become available, the development of a new technology is a long process. The AlGaN/GaN HEMT technology is not yet fully mature, and some issues related to dispersive phenomena and to reliability are still present. Dispersive phenomena, also referred to as long-term memory effects, have a detrimental impact on RF performance and are due both to the presence of traps in the device structure and to self-heating effects. A better understanding of these problems is needed to further improve the obtainable performance. Moreover, new device models that take these effects into consideration are necessary for accurate circuit design. New characterization techniques are thus needed both to gain insight into these problems and improve the technology, and to develop more accurate device models. This thesis presents the research conducted on the development of new characterization and modelling methodologies for GaN-based devices and on the use of this technology for high-frequency power amplifier applications.

Relevance:

90.00%

Publisher:

Abstract:

Tumours are characterized by a metabolic rewiring that helps transformed cells to survive in harsh conditions. The endogenous inhibitor of the ATP synthase, IF1, is overexpressed in several tumours and has been proposed to drive metabolic adaptation. In ischemic normal cells, IF1 acts by limiting the ATP consumption of the reverse activity of the ATP synthase, which is activated by ΔΨm collapse. Conversely, the role of IF1 in cancer cells is still unclear. It has been proposed that IF1 favours cancer survival by preventing energy dissipation under low oxygen availability, a frequent condition in solid tumours. Our previous data proved that in cancer cells hypoxia does not abolish ΔΨm, avoiding ATP synthase reversal and IF1 activation. In this study, we investigated the bioenergetics of cancer cells in conditions mimicking anoxia to evaluate the possible role of IF1. The data obtained indicate that in cancer cells, too, ΔΨm collapse induces ATP synthase reversal and its inhibition by IF1. Moreover, we demonstrated that under uncoupling conditions IF1 favours cancer cell growth by preserving ATP levels and energy charge. We also showed that in these conditions IF1 favours mitochondrial mass renewal, a mechanism we propose drives apoptosis resistance. Cancer adaptability is also associated with the onset of therapy resistance, the major challenge for melanoma treatment. Recent studies demonstrated that miRNA dysregulation drives melanoma progression and drug resistance by regulating tumour suppressors and oncogenes. In this context, we attempted to identify and characterize miRNAs driving resistance to vemurafenib in BRAFV600E-mutated patient-derived metastatic melanoma cells. Our results highlighted that several oncogenic pathways are altered in resistant cells, indicating the complexity of both drug-resistance phenomena and miRNA action. Profiling analysis identified a group of dysregulated miRNAs conserved in vemurafenib-resistant cells from distinct patients, suggesting that they ubiquitously drive drug resistance. Functional studies performed on a first miRNA confirmed its pivotal role in resistance to vemurafenib.

Relevance:

80.00%

Publisher:

Abstract:

The work of this thesis has focused on the characterisation of inorganic membranes for hydrogen purification from steam reforming gas. Composite membranes based on porous inorganic supports coated with palladium-silver alloys, as well as ceramic membranes, have been analysed. A brief summary of the theoretical laws governing the transport of gases through dense and porous inorganic membranes and an overview of different methods to prepare inorganic membranes are also reported. A description of the experimental apparatus used for the characterisation of gas permeability properties is given. The device used permits the evaluation of transport properties over a wide range of temperatures (up to 500 °C) and pressures (up to 15 bar). Data obtained from the experimental campaigns reveal good agreement with Sieverts' law for hydrogen transport through dense palladium-based membranes, while different transport mechanisms, such as Knudsen diffusion and Hagen-Poiseuille flow, have been observed for porous membranes and for palladium-silver alloy membranes with pinholes in the metal layer. Permeation experiments with mixtures also reveal concentration polarisation phenomena and a reduction of hydrogen permeability due to carbon monoxide adsorption on the metal surface.
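
The Sieverts-type behaviour mentioned above can be sketched as follows: for a dense Pd-based membrane, dissociative absorption of H2 makes the steady-state flux proportional to the difference of the square roots of the upstream and downstream partial pressures. All numerical values below are illustrative, not measured values from the thesis:

```python
# Sieverts' law sketch for hydrogen flux through a dense Pd-based membrane.
# The permeability, thickness and pressure values are illustrative only.
from math import sqrt

def hydrogen_flux(permeability: float, thickness_m: float,
                  p_up_pa: float, p_down_pa: float) -> float:
    """J = (Pe / d) * (sqrt(p_up) - sqrt(p_down)).

    The square-root pressure dependence reflects dissociation of H2 into
    atomic hydrogen at the metal surface (Sieverts-type transport)."""
    return permeability / thickness_m * (sqrt(p_up_pa) - sqrt(p_down_pa))

# Halving the metal layer thickness doubles the flux, all else equal:
j_thin = hydrogen_flux(1e-8, 5e-6, 4e5, 1e5)
j_thick = hydrogen_flux(1e-8, 1e-5, 4e5, 1e5)
print(j_thin / j_thick)  # ≈ 2.0
```

Deviations from this square-root pressure dependence (e.g. an exponent tending toward 1) are precisely what signal pinholes and the onset of Knudsen or Hagen-Poiseuille contributions in defective metal layers.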

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The aim of this work is to put forward a statistical mechanics theory of social interaction, generalizing econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalization of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method, and factorization of correlation functions is proved almost everywhere. The exact solution of the model in the thermodynamic limit is obtained by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result on the number of possible equilibrium states of a two-population system. The work stresses the importance of linking the regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters from micro-level data. These are applied to three case studies based on census-type data: though these studies prove ultimately inconclusive on the empirical level, considerations are drawn that encourage further refinement of the chosen modelling approach, to be pursued in future work.
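For a single population, the Curie-Weiss mean-field model reduces to the self-consistency equation m = tanh(β(Jm + h)), whose tanh form is exactly the binary-logit choice probability rescaled to [-1, 1]. A minimal numerical sketch of this single-population case follows (the thesis's multi-population model couples several such equations; the parameter values here are arbitrary):

```python
import math

def solve_curie_weiss(beta, J=1.0, h=0.0, m0=0.5, iters=1000):
    """Solve the mean-field self-consistency equation m = tanh(beta*(J*m + h))
    by fixed-point iteration. m is the average 'magnetization' (mean choice);
    beta plays the role of choice precision in the logit analogy."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (J * m + h))
    return m

# Below the critical point (beta*J < 1, h = 0) the only equilibrium is m = 0;
# above it (beta*J > 1) a nonzero aggregate consensus emerges spontaneously.
low = solve_curie_weiss(beta=0.5)   # subcritical: converges to 0
high = solve_curie_weiss(beta=2.0)  # supercritical: nonzero fixed point
```

The abrupt appearance of the nonzero solution as β crosses 1/J is the mechanism the model uses to describe sudden shifts in aggregate behaviour.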

Relevância:

80.00% 80.00%

Publicador:

Resumo:

In the last decade, interest in submarine instability has grown, driven by the increasing exploitation of natural resources (primarily hydrocarbons), the emplacement of bottom-lying structures (cables and pipelines) and the development of coastal areas, whose infrastructures increasingly protrude into the sea. The great interest in this topic promoted a number of international projects, such as: STEAM (Sediment Transport on European Atlantic Margins, 93-96), ENAM II (European North Atlantic Margin, 96-99), GITEC (Genesis and Impact of Tsunamis on the European Coast, 92-95), STRATAFORM (STRATA FORmation on Margins, 95-01), Seabed Slope Process in Deep Water Continental Margin (Northwest Gulf of Mexico, 96-04), COSTA (Continental slope Stability, 00-05), EUROMARGINS (Slope Stability on Europe's Passive Continental Margin), SPACOMA (04-07), EUROSTRATAFORM (European Margin Strata Formation), NGI's internal project SIP-8 (Offshore Geohazards), IGCP-511: Submarine Mass Movements and Their Consequences (05-09), as well as projects indirectly related to instability processes, such as TRANSFER (Tsunami Risk ANd Strategies For the European region, 06-09) and NEAREST (integrated observations from NEAR shore sourcES of Tsunamis: towards an early warning system, 06-09). In Italy, apart from a national project realized within the activities of the National Group of Volcanology in 2000-2003 ("Conoscenza delle parti sommerse dei vulcani italiani e valutazione del potenziale rischio vulcanico"), the study of submarine mass movements received little attention until the landslide-tsunami events that affected Stromboli on December 30, 2002. This event made Italian institutions and the scientific community more aware of the hazard related to submarine landslides, mainly in light of the growing anthropization of coastal sectors, which increases the vulnerability of these areas to the consequences of such processes.
In this regard, two important national projects have recently been funded to study coastal instabilities (PRIN 24, 06-08) and to map the main submarine hazard features on the continental shelves and upper slopes around most of the Italian coast (MaGIC Project). The study carried out in this thesis addresses the understanding of these processes, with particular reference to the submerged flanks of Stromboli. These flanks represent a natural laboratory in this regard, as several kinds of instability phenomena are present on them, affecting about 90% of the entire submerged area and often (strongly) influencing the morphological evolution of the subaerial slopes, as witnessed by the event of 30 December 2002. Furthermore, each phenomenon is characterized by different pre-failure, failure and post-failure mechanisms, ranging from rock falls to turbidity currents up to catastrophic sector collapses. The thesis is divided into three introductory chapters, comprising a brief review of submarine instability phenomena and the related hazard (Ch. 1), a bird's-eye view of the methodologies and available dataset (Ch. 2) and a short introduction to the evolution and morpho-structural setting of the Stromboli edifice (Ch. 3). The latter seems to play a major role in the development of large-scale sector collapses at Stromboli, as these occurred perpendicular to the orientation of the main volcanic rift axis (oriented in a NE-SW direction). The characterization of these events and their relationships with subsequent erosive-depositional processes is the main focus of Ch. 4 (Offshore evidence of large-scale lateral collapses on the eastern flank of Stromboli, Italy, due to structurally-controlled, bilateral flank instability) and Ch. 5 (Lateral collapses and active sedimentary processes on the North-western flank of Stromboli Volcano), which consist of articles accepted for publication in an international journal (Marine Geology).
Moreover, these studies highlight the hazard related to such catastrophic events; several calamities (with more than 40,000 casualties in the last two centuries alone) have in fact been the direct or indirect result of landslides affecting volcanic flanks, as observed at Oshima-Oshima (1741) and Unzen Volcano (1792) in Japan (Satake & Kato, 2001; Brantley & Scott, 1993), Krakatau (1883) in Indonesia (Self & Rampino, 1981), Ritter Island (1888) and Sissano in Papua New Guinea (Ward & Day, 2003; Johnson, 1987; Tappin et al., 2001) and Mt St. Augustine (1883) in Alaska (Beget & Kienle, 1992). Flank landslides are also recognized as the most important and efficient mass-wasting process on volcanoes, contributing to the development of the edifices by widening their base and building a volcaniclastic apron at the foot of the volcano; a number of small- and medium-scale erosive processes are also responsible for the carving of Stromboli's submarine flanks and the transport of debris towards the deeper areas. The characterization of features associated with these processes is the main focus of Ch. 6. It is also important to highlight that some small-scale events are capable of damaging coastal areas, as witnessed by the recent events at Gioia Tauro (1978), Nice (1979) and Stromboli (2002). The hazard potential related to these phenomena is in fact very high, as they commonly occur at a higher frequency than large-scale collapses and are therefore more significant on human timescales. In the last chapter (Ch. 7), a brief review and discussion of the instability processes identified on the submerged flanks of Stromboli is presented; they are also compared with analogous processes recognized in other submerged areas in order to shed light on the main factors involved in their development. Finally, some applications of multibeam data to the assessment of the hazard related to these phenomena are discussed.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity to perform higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence: computer vision can, for instance, be used to recognise persons and objects and to recognise behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and describe objects in their true 3D appearance. The real-time implementation of these approaches is a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its nodes is strictly application-dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a tight power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
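As a rough illustration of the SVM-based detection mentioned above, the sketch below trains a minimal linear SVM (hinge loss, subgradient descent) on synthetic stand-ins for image-window descriptors. The real nodes compute their features on-board from camera frames; the data, dimensions and hyperparameters here are invented for the example.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM trained with subgradient descent on the hinge loss.
    y must be in {-1, +1}. Returns weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:  # inside the margin: push it out
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                          # correctly classified: only shrink w
                w = (1 - lr * lam) * w
    return w, b

# Synthetic stand-in for descriptors of image windows:
# +1 = "person", -1 = "background".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 1.0, (200, 16)), rng.normal(-1.0, 1.0, (200, 16))])
y = np.array([1] * 200 + [-1] * 200)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

A linear SVM is a plausible fit for this class of hardware: once trained, classifying a window costs a single dot product, which suits a microcontroller- or FPGA-class power budget.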
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and may allow continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
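The PIR-triggered duty-cycling idea can be caricatured as follows; all power figures, thresholds and trigger rates below are invented for illustration and do not come from the thesis hardware.

```python
import random

# Toy duty-cycling policy: the camera and radio wake only when the PIR sensor
# fires and the battery level allows it; otherwise the node sleeps.
P_SLEEP, P_ACTIVE = 0.001, 0.5   # watts (illustrative values)
BATTERY_GUARD = 0.2              # below this fraction, ignore PIR triggers

def step(battery, pir_fired, capacity_j=1000.0, dt=1.0):
    """Advance the node one time step; return (new battery fraction, camera on?)."""
    camera_on = pir_fired and battery > BATTERY_GUARD
    power = P_ACTIVE if camera_on else P_SLEEP
    return battery - power * dt / capacity_j, camera_on

random.seed(0)
battery, frames = 1.0, 0
for _ in range(3600):                # one simulated hour, 1 s steps
    pir = random.random() < 0.02     # ~2% chance of motion per second
    battery, on = step(battery, pir)
    frames += on
```

Even this naive trigger keeps the node asleep for the vast majority of the time; an MPC-style controller would additionally predict harvested solar energy and adapt the guard threshold instead of using a fixed one.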

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The thesis focuses on the link between education and poverty. The first part of the work investigates these concepts through a multidisciplinary study (history, anthropology, sociology, psychology, pedagogy) to show the complexity of the phenomena, and analyses poverty as an educational challenge. The second part presents the outcomes of qualitative research on the role and power of education in the fight against poverty. Interviews and focus groups with teachers and educators who work with the poor in several Italian cities and abroad (Denver and Los Angeles, Israel and Palestine), together with observations of the educational work done in schools and services considered "best practices", highlight the importance of re-educating a society impoverished by the crisis of the welfare state and the weakness of social networks. The final chapter is dedicated to a reflection on social justice, solidarity and sobriety as pillars for social pedagogy in a society that cannot close its eyes to the inequalities it generates.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

This thesis investigates the potential, in terms of operational criteria for environmental integration and intervention procedures, that can be derived from the analysis of settlement phenomena with reference to the concepts of 'transition' and 'resilience'. The current period of crisis seems to stem from the imbalance between two factors: human needs and the environment. The contextualization of interventions, the gradual adaptation to local environmental resources and the valorisation of bottom-up processes appear to allow a reappropriation both of the identity value of places, answering the social problems highlighted, and of the eco-compatibility of transformations, the correct use of energy resources and the management of economic dynamics, in response to the environmental problems analysed. Envisaging practical applications of the investigated transformation model at the building scale, using parametric tables for the analysis of the built environment, planivolumetric overviews and elaborations of data and images, can enable administrations to manage programming and planning phases aimed at encouraging, rather than hindering, present and future transition phenomena. Particularly significant is the analysis of the different design research trends under way, with reference to contributions characterized by a phenomenological and typo-morphological approach, which demonstrates the topicality of the issues addressed. The doctoral research concludes with the application of the identified operational criteria for environmental integration and intervention procedures to a specific case study.