21 results for Scientific community
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The need for a convergence between semi-structured data management and Information Retrieval techniques is manifest to the scientific community. To meet this growing need, the W3C has recently proposed XQuery Full Text, an IR-oriented extension of XQuery. However, query optimization requires the study of important properties such as query equivalence and containment; to this aim, a formal representation of documents and queries is needed. The goal of this thesis is to establish such a formal background. We define a data model for XML documents and propose an algebra able to represent most XQuery Full-Text expressions. We show how an XQuery Full-Text expression can be translated into an algebraic expression and how an algebraic expression can be optimized.
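As a toy illustration of the kind of algebraic rewriting involved (a minimal sketch, not the algebra defined in the thesis), the following Python snippet represents query plans as nested tuples, applies a classic equivalence-preserving rule (pushing a selection below a union), and checks that the rewritten plan returns the same answers:

```python
# Minimal sketch (illustrative only): a toy query algebra where a
# selection can be pushed below a union, one of the classic rewrite
# rules used when optimizing algebraic query plans.

def push_selection(expr):
    """Rewrite Select(p, Union(a, b)) -> Union(Select(p, a), Select(p, b))."""
    op = expr[0]
    if op == "select" and expr[2][0] == "union":
        pred, (_, a, b) = expr[1], expr[2]
        return ("union", ("select", pred, a), ("select", pred, b))
    return expr

def evaluate(expr, docs):
    """Evaluate a toy expression over a list of 'documents' (strings)."""
    op = expr[0]
    if op == "scan":          # fetch documents from one collection
        return [d for d in docs if d.startswith(expr[1])]
    if op == "select":        # IR-style keyword-containment predicate
        return [d for d in evaluate(expr[2], docs) if expr[1] in d]
    if op == "union":
        return evaluate(expr[1], docs) + evaluate(expr[2], docs)
    raise ValueError(op)

docs = ["a: xml retrieval", "a: databases", "b: xml query"]
q = ("select", "xml", ("union", ("scan", "a"), ("scan", "b")))

# Equivalence: the original and the rewritten plans agree on all inputs.
assert evaluate(q, docs) == evaluate(push_selection(q), docs)
```

In a real optimizer the rewritten form is preferred when the selections can be evaluated close to the data, shrinking intermediate results before the union.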
Abstract:
This thesis presents a discussion of a few specific topics regarding the low-velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low-velocity impact and by a quasi-static indentation test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations with a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indenter) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of the delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions or on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most contributions available on this subject, the results of compression-after-impact tests on thin laminates are described, in which global specimen buckling was not prevented.
Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas its effect on the compressive strength depended on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. On the other hand, a compressive preload exhibits the most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of obtaining a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted.
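The contact-force histories discussed above are often reasoned about with simplified lumped models. The following Python sketch (an illustrative single-degree-of-freedom idealization with a Hertzian contact law, not the thesis model; all parameter values are assumptions) shows how the peak contact force of a low-velocity impact can be estimated:

```python
# Minimal sketch: impactor mass against a plate reduced to an effective
# mass on a linear bending spring, coupled through a Hertzian contact
# law F = k_c * delta^1.5. All numbers below are illustrative.

def simulate_impact(m_imp=1.0, m_pl=0.5, k_pl=1.0e6, k_c=1.0e8,
                    v0=2.0, dt=1.0e-6, t_end=0.02):
    """Return the peak contact force [N] of the impact event."""
    x_i, v_i = 0.0, v0          # impactor position / velocity
    x_p, v_p = 0.0, 0.0         # plate mid-point position / velocity
    f_max = 0.0
    t = 0.0
    while t < t_end:
        delta = x_i - x_p                     # indentation depth
        f_c = k_c * delta**1.5 if delta > 0 else 0.0
        f_max = max(f_max, f_c)
        a_i = -f_c / m_imp                    # impactor decelerates
        a_p = (f_c - k_pl * x_p) / m_pl       # plate bending restoring force
        v_i += a_i * dt; x_i += v_i * dt      # symplectic Euler step
        v_p += a_p * dt; x_p += v_p * dt
        t += dt
    return f_max
```

In such a model the peak force grows monotonically with impact velocity, which is why contact-force thresholds are a convenient damage indicator in both impact and indentation tests.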
Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis yields accurate results only as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and also in distinguishing between what can be modelled without taking material degradation into account and what requires an appropriate simulation of damage.
Abstract:
Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking.
Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental building block of natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. This dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
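The idea of a meta-classifier that fuses a changing number of unreliable node outputs can be sketched as follows (an illustrative confidence-weighted vote with hypothetical gesture labels, not the thesis architecture):

```python
# Illustrative sketch: fusing per-node gesture labels by
# confidence-weighted voting, so nodes can appear, drop out, or fail
# without retraining the fusion stage. Labels and scores are made up.

from collections import defaultdict

def fuse(node_outputs):
    """node_outputs: list of (label, confidence) pairs from the alive nodes."""
    votes = defaultdict(float)
    for label, conf in node_outputs:
        votes[label] += conf
    return max(votes, key=votes.get) if votes else None

# Three nodes alive, one of them noisy:
print(fuse([("wave", 0.9), ("wave", 0.6), ("circle", 0.4)]))  # wave
# One node died; fusion still works with the remaining two:
print(fuse([("circle", 0.8), ("wave", 0.3)]))                 # circle
```

Because the fusion stage only sees whichever outputs arrive, dropping low-confidence nodes to save energy degrades accuracy gracefully, which is the essence of the performance-power trade-off mentioned above.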
Abstract:
Magnetic resonance imaging (MRI) is today precluded for patients bearing active implantable medical devices (AIMDs). The great advantages of this diagnostic modality, together with the increasing number of people benefiting from implantable devices, in particular pacemakers (PMs) and cardioverter/defibrillators (ICDs), are prompting the scientific community to study the possibility of extending MRI to implanted patients as well. The MRI-induced specific absorption rate (SAR) and the consequent heating of biological tissues are among the major concerns that make patients bearing metallic structures contraindicated for MRI scans. To date, both in-vivo and in-vitro studies have demonstrated the potentially dangerous temperature increase caused by the radiofrequency (RF) field generated during MRI procedures in the tissues surrounding thin metallic implants. On the other hand, the technical evolution of MRI scanners and of AIMDs, together with published data on the lack of adverse events, has reopened interest in this field and suggests that, under given conditions, MRI can be safely performed in implanted patients as well. With a better understanding of the hazards of performing MRI scans on implanted patients, as well as the development of MRI-safe devices, we may soon enter an era in which this imaging modality may be more widely used to assist in the appropriate diagnosis of patients with devices. In this study both experimental measurements and numerical analyses were performed. The aim of the study is to systematically investigate the effects of the MRI RF field on implantable devices and to identify the elements that play a major role in the induced heating. Furthermore, we aimed at developing a realistic numerical model able to simulate the interactions between an RF coil for MRI and biological tissues implanted with a PM, and to predict the induced SAR as a function of the particular path of the PM lead.
The methods developed and validated during the PhD program led to the design of an experimental framework for the accurate measurement of PM lead heating induced by MRI systems. In addition, numerical models based on Finite-Difference Time-Domain (FDTD) simulations were validated to obtain a general tool for investigating the large number of parameters and factors involved in this complex phenomenon. The results obtained demonstrate that MRI-induced heating of metallic implants is a real risk and represents a contraindication to extending MRI scans to patients bearing a PM, an ICD, or other thin metallic objects. On the other hand, both experimental data and numerical results show that, under particular conditions, MRI procedures may be considered reasonably safe for an implanted patient. The complexity and the large number of variables involved make it difficult to define a unique set of such conditions: when the benefits of an MRI investigation cannot be obtained using other imaging techniques, the possibility of performing the scan should not be immediately excluded, but some considerations are always needed.
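The two standard relations behind this kind of assessment can be sketched as follows (illustrative values, not the thesis data): the local SAR from the electric field, and the SAR estimated from the initial slope of the temperature rise measured near a lead tip.

```python
# Minimal sketch of the standard SAR relations used to quantify
# RF-induced heating. All numeric inputs below are illustrative.

def sar_from_field(sigma, e_rms, rho):
    """Local SAR [W/kg] = sigma * |E_rms|^2 / rho
    (sigma: tissue conductivity [S/m], rho: density [kg/m^3])."""
    return sigma * e_rms**2 / rho

def sar_from_heating(c_heat, dT, dt):
    """SAR [W/kg] = c * dT/dt, valid for the initial (quasi-adiabatic)
    slope of the measured temperature rise."""
    return c_heat * dT / dt

# Tissue-like phantom: sigma = 0.6 S/m, rho = 1000 kg/m^3, E = 100 V/m:
print(sar_from_field(0.6, 100.0, 1000.0))             # 6.0 W/kg
# c = 4186 J/(kg K), 0.5 K rise in 30 s near an electrode:
print(round(sar_from_heating(4186.0, 0.5, 30.0), 1))  # 69.8 W/kg
```

The large gap between whole-body SAR and the local value near a lead tip is precisely why thin metallic implants are the critical case.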
Abstract:
The last decades have seen a large effort by the scientific community to study and understand the physics of sea ice. We currently have a wide, even though still not exhaustive, knowledge of sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry is instead largely unknown. Sea ice algal production may account for up to 25% of overall primary production in ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover and light availability, on sea ice primary production is poorly understood. There are only sparse localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling then becomes an auxiliary tool to help qualify and quantify the role of sea ice biogeochemistry in ocean dynamics. In this thesis, a novel approach is used for the modelling and coupling of sea ice biogeochemistry, and in particular its primary production, to sea ice physics. Previous attempts were based on the coupling of rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. The focus is moved here to a more biologically oriented point of view. A simple yet comprehensive physical model of sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, widely used and validated in the open ocean environment and in regional seas. The physical model has been developed with the biogeochemical properties of sea ice and the physical inputs required to model sea ice biogeochemistry in mind.
The central concept of the coupling is the modelling of the Biologically-Active-Layer (BAL), the time-varying fraction of sea ice that is continuously connected to the ocean via brine pockets and channels and acts as a rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brine volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical environment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundaries between the two systems. This is done by computing the entrapment of matter and gases when sea ice grows and their release to the ocean when sea ice melts, ensuring mass conservation. The model was run in different ice-covered regions of the world ocean to test the generality of the parameterizations. The focus was particularly on regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure in General Circulation Models will add a new component to the latter (and in general to Earth System Models), which will be able to provide adequate estimates of the role and importance of sea ice biogeochemistry in the global carbon cycle.
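One key quantity the physical model must hand to the biogeochemical model is the brine volume fraction. As a sketch (using the classic Frankenstein and Garner (1967) parameterization for illustration; the ESIM/BFM-SI coupling is richer than this single relation):

```python
# Illustrative sketch: brine volume fraction of sea ice from bulk
# salinity [ppt] and temperature [deg C], after Frankenstein & Garner
# (1967), valid roughly for -22.9 < T < -0.5 deg C.

def brine_volume_fraction(salinity_ppt, temp_c):
    """Return the brine volume fraction (dimensionless, 0-1)."""
    if not (-22.9 < temp_c < -0.5):
        raise ValueError("temperature outside validity range")
    per_mille = salinity_ppt * (49.185 / abs(temp_c) + 0.532)
    return per_mille / 1000.0

# Warm ice near the ice-ocean interface (the BAL) is far more porous,
# and hence more habitable, than the cold interior:
print(round(brine_volume_fraction(5.0, -2.0), 3))   # 0.126
print(round(brine_volume_fraction(5.0, -15.0), 3))  # 0.019
```

A porosity threshold on this fraction is one common way to delimit the biologically active part of the ice column.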
Abstract:
In the last decade interest in submarine instability has grown, driven by the increasing exploitation of natural resources (primarily hydrocarbons), the emplacement of bottom-lying structures (cables and pipelines) and the development of coastal areas, whose infrastructures increasingly protrude into the sea. The great interest in this topic has promoted a number of international projects, such as: STEAM (Sediment Transport on European Atlantic Margins, 93-96), ENAM II (European North Atlantic Margin, 96-99), GITEC (Genesis and Impact of Tsunamis on the European Coast, 92-95), STRATAFORM (STRATA FORmation on Margins, 95-01), Seabed Slope Process in Deep Water Continental Margin (Northwest Gulf of Mexico, 96-04), COSTA (Continental slope Stability, 00-05), EUROMARGINS (Slope Stability on Europe’s Passive Continental Margin), SPACOMA (04-07), EUROSTRATAFORM (European Margin Strata Formation), NGI's internal project SIP-8 (Offshore Geohazards), IGCP-511: Submarine Mass Movements and Their Consequences (05-09), and projects indirectly related to instability processes, such as TRANSFER (Tsunami Risk ANd Strategies For the European region, 06-09) or NEAREST (integrated observations from NEAR shore sourcES of Tsunamis: towards an early warning system, 06-09). In Italy, apart from a national project carried out within the activities of the National Group of Volcanology during the 2000-2003 framework, “Conoscenza delle parti sommerse dei vulcani italiani e valutazione del potenziale rischio vulcanico”, the study of submarine mass movements received little attention until the landslide-tsunami events that affected Stromboli on December 30, 2002. This event made the Italian institutions and the scientific community more aware of the hazard related to submarine landslides, mainly in light of the growing anthropization of coastal sectors, which increases the vulnerability of these areas to the consequences of such processes.
In this regard, two important national projects have recently been funded in order to study coastal instabilities (PRIN 24, 06-08) and to map the main submarine hazard features on continental shelves and upper slopes around most of the Italian coast (MaGIC Project). The study carried out in this Thesis is devoted to the understanding of these processes, with particular reference to the submerged flanks of Stromboli. The latter represent a natural laboratory in this regard, as several kinds of instability phenomena are present on the submerged flanks, affecting about 90% of the entire submerged area and often (strongly) influencing the morphological evolution of the subaerial slopes, as witnessed by the event that occurred on 30 December 2002. Furthermore, each phenomenon is characterized by different pre-failure, failure and post-failure mechanisms, ranging from rock falls to turbidity currents to catastrophic sector collapses. The Thesis is divided into three introductory chapters, comprising a brief review of submarine instability phenomena and the related hazard (Chapter 1), a bird's-eye view of the methodologies and available dataset (Chapter 2) and a short introduction to the evolution and morpho-structural setting of the Stromboli edifice (Chapter 3). The latter seems to play a major role in the development of large-scale sector collapses at Stromboli, as they occurred perpendicular to the orientation of the main volcanic rift axis (oriented in the NE-SW direction). The characterization of these events and their relationships with subsequent erosive-depositional processes is the main focus of Chapter 4 (Offshore evidence of large-scale lateral collapses on the eastern flank of Stromboli, Italy, due to structurally-controlled, bilateral flank instability) and Chapter 5 (Lateral collapses and active sedimentary processes on the North-western flank of Stromboli Volcano), consisting of articles accepted for publication in international journals (Marine Geology).
Moreover, these studies highlight the hazard related to these catastrophic events; several calamities (with more than 40,000 casualties in the last two centuries alone) have in fact been the direct or indirect result of landslides affecting volcanic flanks, as observed at Oshima-Oshima (1741) and Unzen Volcano (1792) in Japan (Satake & Kato, 2001; Brantley & Scott, 1993), Krakatau (1883) in Indonesia (Self & Rampino, 1981), Ritter Island (1888) and Sissano in Papua New Guinea (Ward & Day, 2003; Johnson, 1987; Tappin et al., 2001) and Mt St. Augustine (1883) in Alaska (Beget & Kienle, 1992). Flank landslides are also recognized as the most important and efficient mass-wasting process on volcanoes, contributing to the development of the edifices by widening their base and to the growth of a volcaniclastic apron at the foot of a volcano; a number of small- and medium-scale erosive processes are also responsible for the carving of Stromboli's submarine flanks and the transport of debris towards the deeper areas. The characterization of features associated with these processes is the main focus of Chapter 6; it is also important to highlight that some small-scale events are able to damage coastal areas, as witnessed by the recent events of Gioia Tauro (1978), Nice (1979) and Stromboli (2002). The hazard potential related to these phenomena is in fact very high, as they commonly occur at higher frequency with respect to large-scale collapses, therefore being more significant in terms of human timescales. In the last chapter (Chapter 7), a brief review and discussion of the instability processes identified on Stromboli's submerged flanks is presented; they are also compared with analogous processes recognized in other submerged areas in order to shed light on the main factors involved in their development. Finally, some applications of multibeam data to assess the hazard related to these phenomena are also discussed.
Abstract:
The ever-increasing spread of automation in industry puts the electrical engineer in a central role as a promoter of technological development in the use of electricity, which underlies all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution hinges on the implementation of digital control techniques while meeting the requirements of inexpensiveness and efficiency of the final product. The successful application of solutions using non-conventional static converters is attracting increasing interest in science and industry due to the promising opportunities. At the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course several themes were developed that, while attracting recent and growing interest from the scientific community, leave ample room for further research activity and for industrial applications. The first area of research concerns the control of three-phase induction motors with high dynamic performance and sensorless control in the high-speed range. The operation of induction machines without position or speed sensors is of great interest to industry, due to the increased reliability and robustness of this solution combined with a lower cost of production and purchase compared to the other technologies available on the market. In this dissertation, control techniques are proposed that are able to exploit the total dc-link voltage and at the same time capable of exploiting the maximum torque capability in the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators.
Furthermore, in order to validate the effectiveness of the presented solution, it is assessed in terms of performance and complexity and compared to two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area is connected to the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals for an effective algorithm. In this direction, a control scheme for induction motors is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. The main features of the proposed control scheme are the capability to exploit the maximum torque in the whole speed range, a weak dependence on the motor parameters, a good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multiphase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a higher number of phases, thus making possible the construction of electronic converters which would otherwise be very difficult to achieve due to the limits of present power electronics. Multiphase drives have an intrinsic reliability given by the possibility that a fault in one phase, caused by the possible failure of a component of the converter, can be handled without loss of machine operation or the application of a pulsating torque. The control of the spatial harmonics of the air-gap magnetic field with order higher than one makes it possible to reduce torque noise and to obtain high-torque-density motors and multi-motor applications.
In one of the following chapters, a control scheme able to increase the motor torque by adding a third harmonic component to the air-gap magnetic field will be presented. Above the base speed, the control system reduces the motor flux so as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. Multi-motor applications consist of a well-defined number of multiphase machines with series-connected stator windings; with a suitable permutation of the phases, these machines can be independently controlled by a single multiphase inverter. In this dissertation this solution is discussed, and an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter is presented. Finally, the modulation strategies for a multiphase inverter are illustrated. The problem of the space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up table solution are proposed. The inverter output voltage capability is investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions. All these aspects are considered in the following chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on three-phase inverters. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature. Furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated.
Chapter 5 describes the mathematical model of multiphase induction machines, whereas Chapter 6 analyzes the multiphase inverter and its modulation strategies. Chapter 7 discusses the minimization of power losses in IGBT multiphase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications, and in Chapter 10 different fault-tolerant control strategies are analyzed. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
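The idea of fully exploiting the dc-link voltage in carrier-based modulation can be sketched as follows (an illustrative min-max zero-sequence injection for a five-phase leg set, not the modulation strategies proposed in the thesis):

```python
# Illustrative sketch: carrier-based duty-cycle computation for a
# five-phase inverter, with min-max zero-sequence injection so the
# reference waveforms are centred within the carrier range and the
# dc-link voltage is better exploited.

import math

def duty_cycles(m, theta, n_phases=5):
    """Per-phase duty cycles in [0, 1] for modulation index m at angle theta."""
    refs = [m * math.cos(theta - 2 * math.pi * k / n_phases)
            for k in range(n_phases)]
    zs = -(max(refs) + min(refs)) / 2.0   # zero-sequence (common-mode) shift
    return [0.5 + 0.5 * (r + zs) for r in refs]

d = duty_cycles(1.0, 0.3)
# Even at m = 1.0 the centred references stay inside the carrier range:
assert all(0.0 <= x <= 1.0 for x in d)
print([round(x, 3) for x in d])
```

Without the injection, a pure sinusoidal reference saturates at m = 1; centring the references extends the linear range, analogously to third-harmonic injection in three-phase inverters.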
Resumo:
The main aim of this thesis is strongly interdisciplinary: it involves and presumes knowledge of Neurophysiology, to understand the mechanisms that underlie the studied phenomena; knowledge and experience of Electronics, necessary during the hardware experimental set-up to acquire neuronal data; and of Informatics and programming, to write the code needed to control the behaviour of the subjects during the experiments and the visual presentation of stimuli. Finally, neuronal and statistical models should be well understood to help in interpreting the data. The project started with a careful bibliographic search: to date, the mechanisms of perception of heading (or direction of motion) are still poorly understood. The main interest is to understand how visual information relative to our motion is integrated with eye position information. To investigate the cortical response to visual motion stimuli and its integration with eye position, we decided to study an animal model, using optic flow expansion and contraction as visual stimuli. In the first chapter of the thesis, the basic aims of the research project are presented, together with the reasons why it is interesting and important to study the perception of motion. Moreover, this chapter describes the methods my research group considered most adequate to contribute to the scientific community and underlines my personal contribution to the project. The second chapter presents an overview of the background needed to follow the main part of the thesis: it starts with a brief introduction to the central nervous system and to cortical functions, then presents in more depth the association areas, which are the main target of our study. Furthermore, it explains why studies on animal models are necessary to understand mechanisms at the cellular level that could not be addressed in any other way.
In the second part of the chapter, the basics of electrophysiology and cellular communication are presented, together with traditional methods of neuronal data analysis. The third chapter is intended to be a helpful resource for future work in the laboratory: it presents the hardware used for the experimental sessions, explains how to control animal behaviour during the experiments by means of C routines and dedicated software, and how to present visual stimuli on a screen. The fourth chapter is the core of the research project and of the thesis. In the methods, the experimental paradigms, visual stimuli and data analysis are presented. In the results, the cellular responses of area PEc to visual motion stimuli combined with different eye positions are shown. In brief, this study led to the identification of different cellular behaviours in relation to the focus of expansion (the direction of motion given by the optic flow pattern) and eye position. The originality and importance of the results are pointed out in the conclusions: this is the first study aimed at investigating the perception of motion in this particular cortical area. In the last paragraph, a neural network model is presented, whose aim is to simulate the pre-saccadic and post-saccadic responses of neurons in area PEc during eye movement tasks. The same data presented in chapter four are further analysed in chapter five. The analysis started from the observation of the neuronal responses during a 1 s time period in which the visual stimulation was the same. It was clear that cell activities showed oscillations in time that had been neglected by the previous analysis based on mean firing frequency. The results distinguished two cellular behaviours by their response characteristics: some neurons showed oscillations that changed depending on eye and optic flow position, while others kept the same oscillation characteristics independently of the stimulus.
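The contrast drawn here between mean firing frequency and time-resolved activity can be illustrated with a minimal sketch (function names are hypothetical; this is not the analysis code actually used in the thesis): the mean rate collapses each trial to a single number, whereas a binned rate over the same window exposes oscillations that the mean hides.

```python
import numpy as np

def mean_rate(spike_times, duration_s):
    """Mean firing frequency (Hz): spike count over trial duration."""
    return len(spike_times) / duration_s

def binned_rate(spike_times, duration_s, bin_ms=100):
    """Time-resolved firing rate (Hz per bin): reveals temporal
    structure that the single mean-rate number discards."""
    n_bins = int(duration_s * 1000 / bin_ms)
    counts, _ = np.histogram(spike_times, bins=n_bins,
                             range=(0.0, duration_s))
    return counts / (bin_ms / 1000.0)
```

Two spike trains with identical mean rates can thus yield very different binned profiles, which is the kind of difference the fifth chapter's analysis exploits to separate the two cellular behaviours.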
The last chapter discusses the results of the research project, comments on the originality and interdisciplinarity of the study and proposes some future developments.
Resumo:
The focus of this research is to develop and apply an analytical framework for evaluating the effectiveness and practicability of sustainability certification schemes for biofuels, especially from a developing country's perspective. The main question that drives the analysis is: "What are the main elements of, and how should one develop, sustainability certification schemes that would be effective and practicable in certifying the contribution of biofuels to meeting the goals that Governments and other stakeholders have set?". Biofuels have been identified as a promising tool for reaching a variety of goals: climate change mitigation, energy security, agricultural development and, especially in developing countries, economic development. Once the goals were identified, and ambitious mandatory targets for biofuel use agreed at the national level, concerns were raised by the scientific community about the negative externalities that biofuel production and use can have at the environmental, social and economic levels. Certification schemes have therefore been recognized as necessary processes for measuring these externalities, and examples of such schemes are in effect, or under negotiation, at both the mandatory and voluntary levels. The research focus emerged from the concern that the ongoing examples are very demanding in terms of compliance, both for those subject to certification and for those who have to certify, in the quantity and quality of the information to be reported. A certification system, for reasons linked to costs, lack of expertise, inadequate infrastructure, or the absence of administrative and legislative support, can represent a heavy burden and act as a serious impediment to the industrial and agricultural development of developing countries, going against the principles of equity and a level playing field.
While this research recognizes the importance of comprehensiveness and ambition in designing such an important tool for measuring the sustainability effects of biofuel production and use, it stresses the need to focus on the effectiveness and practicability of this tool in measuring compliance with the goals. This research, which falls under the rationale of the Sustainability Science Program housed at the Harvard Kennedy School, has as its main objective to close the gap between the worlds of research and policy-making in the field of sustainability certification schemes for biofuels.
Resumo:
The emergence of infection by the highly pathogenic avian influenza (HPAI) virus subtype H5N1 has focused the attention of the world scientific community, requiring the prompt provision of effective control systems for the early detection of circulating low pathogenic H5 influenza viruses (LPAI) in wild bird populations, in order to prevent outbreaks of HPAI in domestic bird populations with possible transmission to humans. The project stems from the aim of providing, through a preliminary analysis of the data obtained from surveillance in Italy and Europe, a preliminary study of virus detection rates and the development of mathematical models, an objective assessment of the effectiveness of avian influenza surveillance systems in wild bird populations, and of identifying guidelines to support the planning of sampling activities. The results obtained from the statistical processing quantify the sampling effort in terms of the time and sample size required and, by simulating different epidemiological scenarios, identify active surveillance as the most suitable tool for monitoring endemic LPAI infection in wild waterfowl, and passive surveillance as the only really effective tool for the early detection of HPAI H5N1 circulation in wild populations. Given the lack of relevant information on H5N1 epidemiology, and the actual financial and logistic constraints, an approach that uses statistical tools to evaluate and predict the effectiveness of monitoring activities proves to be of primary importance to guide decision-making and make the best use of available resources.
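One standard statistical tool of the kind this abstract invokes is the freedom-from-disease sample-size formula, n = ln(1 − confidence) / ln(1 − prevalence), giving the minimum number of birds to sample in order to detect at least one infected individual with a given confidence. The sketch below is illustrative only (the function name is hypothetical and the thesis's actual models are not reproduced here) and assumes a large population and a perfect diagnostic test.

```python
import math

def detection_sample_size(prevalence, confidence=0.95):
    """Minimum sample size needed to detect at least one positive
    individual with the given confidence, assuming a large population
    and a perfect test: n = ln(1 - confidence) / ln(1 - prevalence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))
```

For example, detecting a 1% LPAI prevalence with 95% confidence under these assumptions requires sampling 299 birds, which illustrates how quickly the required effort grows as the design prevalence falls.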
La Pace Calda. La nascita del movimento antinucleare negli Stati Uniti e in Gran Bretagna, 1957-1963
Resumo:
The aim of this work is to offer an alternative perspective on the study of the Cold War, since insufficient attention is usually paid to the organizations that mobilized against the development and proliferation of nuclear weapons. The antinuclear movement began to mobilize between the 1950s and the 1960s, when it finally gained the attention of public opinion and helped to build a sort of global conscience about nuclear bombs. This was due to the activism of a significant part of the international scientific community, which offered powerful intellectual and political legitimization to the struggle, and to the combined actions of scientific and organized protests. This antinuclear conscience is something we usually tend to consider a fait accompli in the contemporary world, but the point is to show its roots and the way it influenced statesmen and political choices during the period of nuclear confrontation of the early Cold War. To understand what this conscience could be and how it should be defined, we have to look at the very meaning of nuclear weapons, which deeply modified the sense of war. Nuclear weapons seemed able to destroy human beings everywhere, with no realistic form of control over the damage they could set off, and they represented the last resource in the wide range of means of mass destruction. Even if we now consider this idea fully rational and incontrovertible, it was not immediately born with the birth of nuclear weapons themselves. Or rather, not everyone in the world immediately shared it. Owing to the particular climate of Cold War confrontation, deeply influenced by the persistence of realist paradigms in international relations, the British and U.S. governments looked at nuclear weapons simply as «a bullet». From the Trinity Test to the signature of the Limited Test Ban Treaty in 1963, many things happened that helped to shift this view of nuclear weapons.
First of all, more than ten years of scientific protest provided a deeper knowledge of the consequences of nuclear tests and of the use of nuclear weapons. Many scientists devoted their social activities to informing public opinion and policy-makers about the real significance of the power of the atom and the related danger for human beings. Secondly, some public figures, such as physicists, philosophers, biologists and chemists, appealed directly to the human community to «leave the folly and face reality», publicly sponsoring the antinuclear conscience. Then, several organizations led by political, religious or radical individuals gave these protests a formal structure. The Campaign for Nuclear Disarmament in Great Britain, as well as the National Committee for a Sane Nuclear Policy in the U.S., represented the voice of the masses against the attempts of governments to present nuclear arsenals as a fundamental part of the international equilibrium. The antinuclear conscience could therefore be defined as a feeling of opposition to the development and use of nuclear weapons, able to create a political issue aimed at influencing military and foreign policies. Only by taking into consideration the strength of this pressure does it seem possible to understand not only the beginning of nuclear negotiations, but also the reasons that permitted the Cold War to remain cold.
Resumo:
The use of the shallow geothermal reservoir for heating and cooling purposes is a well-established technique that exploits, through dedicated "ground heat exchangers", an energy source that is ubiquitous and inexhaustible, at a low cost in terms of climate-altering emissions. The full exploitation of this resource is therefore in line with the goals of the Kyoto Protocol and is covered by European Directive 2009/28/EC (commonly known as the Renewables Directive). Given its considerable potential and its sustainable installation and operating costs, shallow geothermal energy has been exploited since the mid-twentieth century in a variety of geographical, geological and climatic contexts and for a variety of applications (residential, commercial, industrial, infrastructural). Nevertheless, only since the 2000s have the scientific community and the market taken a real interest in the subject, following newly favourable economic and technical conditions. A simple and immediate demonstration of this is the fact that, as of 2012, there is still no clear, internationally shared technical reference for the design, installation or testing of the various applications of shallow geothermal energy, despite the multitude of scientific papers published, plants built and trade associations involved during the first decade of the twenty-first century. This research work fits within this framework. In particular, it presents the progress of the research carried out within the Department of Civil, Environmental and Materials Engineering in the fields of design and testing of geothermal systems, and describes some innovative types of ground heat exchangers studied, analysed and tested during the research period.
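One widely used testing method in this field is the thermal response test, commonly interpreted with the infinite line source model: at late times the mean fluid temperature grows linearly with ln(t), with slope q/(4πλ), where q is the heat injection rate per metre of borehole and λ the ground thermal conductivity. The sketch below is illustrative only (function names are hypothetical, and this is not the department's actual testing procedure); it recovers λ from that slope.

```python
import numpy as np

def conductivity_from_trt(times_s, temps_c, heat_rate_w, borehole_len_m):
    """Estimate ground thermal conductivity (W/m/K) from thermal
    response test data via the infinite line source model:
    T(t) ~ (q / (4*pi*lambda)) * ln(t) + const, with q = Q / H."""
    # Linear fit of temperature against ln(time) gives the slope
    slope, _intercept = np.polyfit(np.log(times_s), temps_c, 1)
    q = heat_rate_w / borehole_len_m  # heat rate per metre of borehole
    return q / (4.0 * np.pi * slope)
```

In practice the early-time data, dominated by the borehole filling, are discarded before the fit; the sketch omits that step for brevity.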
Resumo:
This work is structured in four parts, analysing and comparing publications from the relevant Italian, English-language and German scientific literature. The first chapter of the thesis offers a reflection on the words that revolve around the theme of specific learning disorders (DSA) and disability. The second chapter presents, starting from the relevant scientific literature, the risk indicators that signal possible specific learning disorders and the learning characteristics of students with DSA, highlighting their often intrinsic potential and talents. The third chapter examines the relevant legislation, in particular the recent Law 170/2010 and the related Guidelines. The fourth chapter, starting from the spread of information and communication technologies (henceforth ICT) in the school world, discusses at length the main compensatory tools (speech synthesis, digital books, concept maps, interactive whiteboards) and the dispensatory measures that can be adopted. The fifth chapter analyses in all its parts the Personalised Learning Plan (Piano Didattico Personalizzato, henceforth PDP) and proposes a possible PDP model published on the website of the Ufficio per l'Ambito Territoriale di Bologna. The sixth chapter of the thesis presents the ProDSA Regional Project. The Project, addressed to students with a DSA diagnosis attending lower secondary schools and the first two years of upper secondary schools in Emilia-Romagna, provided, thanks to regional funding, compensatory technologies on free loan to the pupils who joined it. The empirical section of this work investigates the actual use made of the tools provided on loan and the reasons behind the choice not to use them in class.
The seventh chapter proposes tools designed to respond concretely to the critical issues that emerged from the data analysis and to raise awareness in the school world of the characteristics of DSA.
Resumo:
The main objective of this research is to improve our understanding of the processes controlling the formation of caves and karst-like morphologies in quartz-rich lithologies (more than 90% quartz), such as quartz-sandstones and metamorphic quartzites. In the scientific community, the processes currently considered most likely to be responsible for these formations are described by the "Arenisation Theory". This implies a slow but pervasive dissolution of the quartz grain/mineral boundaries, increasing the overall porosity until the rock becomes incohesive and can be easily eroded by running water. The loose sands produced by the weathering processes are then evacuated to the surface through piping processes caused by the infiltration of water from the fracture network or the bedding planes. To address these problems we adopted a multidisciplinary approach, exploring and studying several cave systems in different tepuis. The first step was to build a theoretical model of the arenisation process, considering the most recent knowledge about the dissolution kinetics of quartz, the intergranular/grain-boundary diffusion processes and the primary diffusion porosity, in the simplified conditions of an open fracture crossed by a continuous flow of undersaturated water. The results of the model were then compared with the world's largest dataset (more than 150 analyses) of water geochemistry collected to date on the tepuis, in both surface and cave settings. All these studies made it possible to verify the importance and effectiveness of the arenisation process, which is confirmed to be the main process responsible for the primary formation of these caves and of the karst-like surface morphologies. The numerical modelling and the field observations allowed a possible age of the cave systems to be estimated at around 20-30 million years.
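The dissolution kinetics invoked above are commonly expressed with a linear transition-state rate law, r = k₊(1 − Q/K), where Q/K measures how close the water is to quartz saturation. The sketch below is illustrative only: the rate constant and equilibrium solubility are order-of-magnitude values for ambient temperature, not figures taken from the thesis's model, and the function name is hypothetical.

```python
def quartz_dissolution_rate(c_silica_mol_l,
                            k_diss=4.0e-14,   # mol m^-2 s^-1, illustrative
                            k_eq=1.0e-4):     # mol/L quartz solubility, illustrative
    """Quartz dissolution rate (mol m^-2 s^-1) from the linear
    TST rate law r = k * (1 - Q/K): fastest in pure (undersaturated)
    water, zero at equilibrium."""
    return k_diss * (1.0 - c_silica_mol_l / k_eq)
```

The very small rate constant is what makes arenisation so slow yet, over tens of millions of years of continuous flow of nearly silica-free water, still sufficient to loosen the grain framework.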
Resumo:
This study proposes an exploration of the links between migration processes and experiences of health and illness, starting from an investigation of migration from Latin America to Emilia-Romagna. At the same time, it examines the terms of the debate on the spread of Chagas disease, a "forgotten tropical infection" endemic in Central and South America which, owing to the growth of transnational migration flows, is today being reconfigured as 'emerging' in some immigration contexts. Through the theoretical and methodological paradigms of medical anthropology, global health and migration studies, the study investigates the nature of the relationship between "forgetting" and "emergence" in the policies that characterise the European, and specifically the Italian, migration context. It analyses issues tied to the legitimacy of the actors involved in the public redefinition of the phenomenon; to the visions that inform the health strategies for managing the infection; and to the possible repercussions of those visions on care practices. Part of the research was carried out in the hospital ward where the first diagnosis and treatment service for the infection in Emilia-Romagna was implemented. An ethnography was therefore conducted outside and inside the service, involving the main subjects of the field of inquiry (Latin American immigrants and health workers), with the aim of grasping visions, logics and practices, starting from an analysis of the legislation regulating access to the public health service in Italy.
Through the collection of biographical narratives, the study has helped shed light on particular migratory and life trajectories in the local context; it has allowed reflection on the validity of categories such as "Latin American", used by the scientific community in close association with Chagas; and it has reframed the meaning of an approach attentive to cultural connotations within a broader rethinking of the forms of inclusion and participation aimed at accommodating the most strongly felt health needs and the subjective experiences of illness.