20 results for "Uncertainty and disturbance"

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

90.00%

Abstract:

This thesis gives an overview of the history of gold per se and of gold as an investment good, and offers some institutional details about gold and other precious metal markets. The goal of this study is to investigate the role of gold as a store of value and hedge against negative market movements in turbulent times. I investigate gold's ability to act as a safe haven during periods of financial stress by employing instrumental variable techniques that allow for time-varying conditional covariance. I find broad evidence supporting the view that gold acts as an anchor of stability during market downturns. During periods of high uncertainty and low stock market returns, gold tends to have higher-than-average excess returns. The effectiveness of gold as a safe haven is enhanced during periods of extreme crisis: the largest peaks are observed during the global financial crisis of 2007-2009 and, in particular, during the Lehman default (October 2008). A further goal of this thesis is to investigate whether gold provides protection from tail risk. I address the issue of asymmetric precious metal behavior conditional on stock market performance and provide empirical evidence about the contribution of gold to a portfolio's systematic skewness and kurtosis. I find that gold has positive coskewness with the market portfolio when the market is skewed to the left. Moreover, gold shows low cokurtosis with market returns during volatile periods. I therefore show that gold is a desirable investment good for risk-averse investors, since it tends to decrease both the probability of experiencing extremely bad outcomes and the magnitude of losses should such events occur. Gold thus has very important and under-researched characteristics as an asset class in its own right, which this thesis contributes to address and unveil.
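
The coskewness and cokurtosis measures discussed above have standard definitions; the sketch below computes them on synthetic return series (the series, parameter values and the negative gold-market relationship are illustrative assumptions, not data or estimators from the thesis).

```python
import numpy as np

def coskewness(r_a, r_m):
    """Standardised coskewness E[(ra - mu_a)(rm - mu_m)^2] / (sd_a * sd_m^2)."""
    da, dm = r_a - r_a.mean(), r_m - r_m.mean()
    return np.mean(da * dm ** 2) / (r_a.std() * r_m.std() ** 2)

def cokurtosis(r_a, r_m):
    """Standardised cokurtosis E[(ra - mu_a)(rm - mu_m)^3] / (sd_a * sd_m^3)."""
    da, dm = r_a - r_a.mean(), r_m - r_m.mean()
    return np.mean(da * dm ** 3) / (r_a.std() * r_m.std() ** 3)

# Synthetic illustration: a "gold-like" series built to move against the market
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.02, 5000)
gold = -0.3 * market + rng.normal(0.0, 0.01, 5000)
```

An asset with negative cokurtosis with the market, as the synthetic "gold" series here, dampens the portfolio's exposure to extreme market swings, which is the property the abstract highlights.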

Relevance:

90.00%

Abstract:

This thesis collects the outcomes of a Ph.D. course in Telecommunications engineering and is focused on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization techniques. These two aspects are critical for modern navigation and communication systems and constitute the common denominator of the work. The thesis is organized in two parts: the first deals with interference management. We have proposed a novel technique for enhancing the sensitivity of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers that exploits their repetitive characteristics and strongly reduces the interference level at the receiver. The second part deals with code synchronization. In more detail, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and estimates the code epoch, the carrier frequency and the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. The study is particularly interesting since it enables a reduction of the code acquisition time without increasing the computational complexity.
Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme that exploits the code epoch information shared between neighboring nodes, according to the Peer-to-Peer paradigm.
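
Code acquisition of the kind discussed above is, at its core, a correlation search over candidate code phases. The sketch below shows a minimal serial-search acquisition over a random ±1 spreading code; the code, noise level and epoch value are illustrative stand-ins, and real GNSS acquisition also searches over Doppler frequency bins, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical +/-1 spreading code, a stand-in for a real GNSS PRN sequence
code = rng.choice([-1.0, 1.0], size=1023)

true_epoch = 317                 # unknown code phase the receiver must estimate
received = np.roll(code, true_epoch) + rng.normal(0.0, 0.5, code.size)

# Serial-search acquisition: correlate the received samples with every
# circular shift of the local replica and pick the correlation peak.
correlations = np.array([np.dot(received, np.roll(code, shift))
                         for shift in range(code.size)])
estimated_epoch = int(np.argmax(correlations))
```

The cooperative scheme mentioned in the abstract would let neighboring receivers share an estimate like `estimated_epoch`, shrinking the search window each node must sweep.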

Relevance:

90.00%

Abstract:

Phasmatodea Leach, 1815 (Hexapoda; Insecta) is a polyneopteran order comprising approximately 3,000 described species, often known for their remarkable forms of mimicry. In this thesis, I provide a comprehensive systematic framework which includes over 180 species never before considered in a phylogenetic framework; this framework can facilitate a better understanding of the processes underlying phasmid evolutionary history. The clade is in fact a remarkable testing ground for studying trait evolution, and its striking disparity of reproductive strategies and wing morphologies has been of great interest to the evolutionary biology community. Phasmid wings represent one of the first and most notable rejections of Dollo's law, and they played a central role in initiating a long-standing debate on the irreversibility of the loss of complex traits. Macroevolutionary analyses presented here confirm that wing evolution in phasmids is a reversible process even when possible biases - such as systematic uncertainty and trait-dependent diversification rates - are considered. These findings show how complex traits can evolve in a dynamic, reversible manner and imply that their molecular ground plan can be preserved despite its phenotypic absence. This concept has been further tested with phylogenetic and transcriptomic approaches in two parthenogenetic phasmid lineages and a bisexual congeneric species of the European Bacillus species complex. Leveraging a gene co-expression network approach, male-gonad-associated genes were retrieved in the bisexual species and their modifications in the parthenogens were then characterized. Pleiotropy appears to constrain gene modifications associated with male reproductive structures after their loss in parthenogens, so that the molecular ground plan of the lost trait can be largely preserved in both transcription patterns and sequence evolution.
Overall, the results presented in this thesis contribute to shaping our understanding of the interplay between the phenotypic and molecular levels in trait evolution.
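
The reversibility question above is typically framed as a rate comparison in a continuous-time Markov model of trait presence/absence. The sketch below contrasts a reversible model with a Dollo-style irreversible one; the rates and time span are arbitrary illustrative values, and the thesis's macroevolutionary analyses are far richer than this two-state toy.

```python
import numpy as np
from scipy.linalg import expm

def transition_probs(q_gain, q_loss, t):
    """P(t) = expm(Q*t) for a two-state (0 = wingless, 1 = winged) Markov model."""
    Q = np.array([[-q_gain, q_gain],
                  [q_loss, -q_loss]])
    return expm(Q * t)

# Reversible model: wings can be regained after loss (gain rate > 0)
P_rev = transition_probs(q_gain=0.1, q_loss=0.3, t=10.0)

# Dollo-style model: once lost, wings can never re-evolve (gain rate = 0)
P_dollo = transition_probs(q_gain=0.0, q_loss=0.3, t=10.0)
```

Under the Dollo-style matrix the wingless state is absorbing (`P_dollo[0, 1]` is zero), whereas the reversible matrix assigns positive probability to regaining wings, which is the hypothesis the abstract's analyses support.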

Relevance:

90.00%

Abstract:

The nature of concepts is a matter of intense debate in the cognitive sciences. While traditional views claim that conceptual knowledge is represented in a unitary symbolic system, recent Embodied and Grounded Cognition (EGC) theories propose that the conceptual system is grounded in our body and influenced by the environment (Barsalou, 2008). Abstract concepts (ACs), such as fantasy, constitute one of the major challenges for EGC. Recently, some EGC proposals addressed this criticism, arguing that ACs comprise multifaceted exemplars that rely on grounding sources beyond the sensorimotor one, including interoception, emotions, language, and sociality (Borghi et al., 2018). However, little is known about how the representation of ACs varies as a function of life experiences and their use in communication. The theoretical arguments and empirical studies comprised in this dissertation aim to provide evidence on the multiple grounding of ACs, taking into account their variety and flexibility. Study I analyzed multiple ratings on a large sample of ACs and identified four distinct subclusters. Study II validated this classification with an interference paradigm involving motor/manual, interoceptive, and linguistic systems during a difficulty rating task. Results confirm that different grounding sources are activated depending on the kind of AC. Studies III and IV investigated the variability of institutional concepts, showing that the higher the level of law expertise, the stronger the concrete/emotional determinants in their representation. Study V introduced a novel interactive task in which abstract and concrete sentences serve as cues to simulate conversation. Analysis of language production revealed that uncertainty and interactive exchanges increase with abstractness, leading to more questions/requests for clarification with abstract than with concrete sentences.
Overall, results confirm that ACs are multidimensional, heterogeneous, and flexible constructs and that social and linguistic interactions are crucial to shaping their meanings. Investigating ACs in real-time dialogues may be a promising direction for future research.

Relevance:

80.00%

Abstract:

While imperfect information games are an excellent model of real-world problems and tasks, they are often difficult for computer programs to play at a high level of proficiency, especially if they involve major uncertainty and a very large state space. Kriegspiel, a chess variant that resembles a wargame, is a perfect example: while the game was studied for decades from a game-theoretical viewpoint, it was only very recently that the first practical algorithms for playing it began to appear. This thesis presents, documents and tests a multi-pronged effort towards making a strong Kriegspiel player, using heuristic search, retrograde analysis and Monte Carlo tree search algorithms to achieve increasingly higher levels of play. The resulting program is currently the strongest computer player in the world and plays at an above-average human level.
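
Monte Carlo tree search, one of the algorithm families mentioned above, typically selects which move to explore using the UCB1 rule. A minimal sketch follows; the move names and statistics are invented for illustration, and the thesis's Kriegspiel player necessarily handles imperfect information on top of this basic machinery.

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2.0)):
    """Upper confidence bound used to pick which child move to explore next."""
    if visits == 0:
        return float("inf")          # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_move(stats, parent_visits):
    """stats maps a move to its (wins, visits) pair; returns the UCB1-best move."""
    return max(stats, key=lambda m: ucb1(*stats[m], parent_visits))

stats = {"e4": (12, 20), "d4": (9, 20), "Nf3": (0, 0)}
best = select_move(stats, parent_visits=40)   # "Nf3": unvisited, so explored first
```

The exploration constant `c` trades off exploiting moves with good win rates against exploring rarely visited ones; once every move has been visited, the win-rate term starts to dominate.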

Relevance:

80.00%

Abstract:

Over recent decades the Common Agricultural Policy (CAP) has undergone several revisions, more or less planned, which have modified its operational objectives and the instruments used to pursue them. In the agricultural economics literature, several studies have carried out ex-ante analyses of the possible impacts of policy reforms, in particular decoupling, on the allocation of land to different crops and on the adoption of more efficient cultivation techniques. Despite its great importance, however, this topic has so far received less attention than other issues in the agricultural sector. The main gaps are the scarcity of ex-ante analyses and of models that include farmers' preferences and expectations. This study evaluates a farm's land investment choices in the face of possible post-2013 CAP scenarios, under uncertainty about the specific conditions in which each scenario would materialise. The objective is to obtain useful insights into the farmer's investment choices in the presence of uncertainty about the future. The most innovative element of the research is the application of a real options approach and the interaction between the different scenarios for the post-2013 agricultural sector and the uncertainty component that weighs on them. The methodology adopted in this work is based on a farm-level model that simulates the farm's behaviour in response to CAP reforms and to variations in product prices under uncertainty. A real options model is used to evaluate the optimal timing of investment in land purchase (an investment characterised by uncertainty and irreversibility).
The results show that, under uncertainty, the farmer is better off postponing the decision until after 2013 and, on the basis of the additional information then available, carrying out the investment only under favourable conditions. Product price variations influence the choices more than the uncertainty about CAP payments. The real options approach appears to explain the farmer's behaviour better than the classical Net Present Value approach.

Relevance:

80.00%

Abstract:

This doctoral thesis studies blood flow using a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (in a concentric or off-centre position with respect to the axis of symmetry) or stenoses of various shapes and extents. The arteries are modelled as rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, in both steady and transient regimes, and blood is treated as a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three- and two-dimensional domains; in the latter case the fluid-structure interaction is analysed. In the three-dimensional cases the arteries are infinitely rigid (fluid-dynamic simulations): once the pressure field is obtained, a structural analysis determines the variations in cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries a pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle in order to capture recirculation. Two appendices to the thesis study, with CFD codes, heat transfer in microchannels and the evaporation of water droplets in unconfined systems. The fluid dynamics in microchannels is analogous to haemodynamics in capillaries, and the Eulerian-Lagrangian method (used in the evaporation simulations) reflects the mixed nature of blood. The microchannel part analyses the transient following the application of a time-varying heat flux, varying the inlet velocity and the microchannel dimensions.
The investigation of droplet evaporation is a 3D parametric analysis that examines the weight of each parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) in order to identify the one that most influences the phenomenon.

Relevance:

80.00%

Abstract:

Over the last three decades, international agricultural trade has grown significantly. Technological advances in transportation logistics and storage have created opportunities to ship anything almost anywhere. Bilateral and multilateral trade agreements have also opened new pathways to an increasingly global marketplace. Yet international agricultural trade is often constrained by differences in regulatory regimes. The impact of "regulatory asymmetry" is particularly acute for small and medium sized enterprises (SMEs) that lack the resources and expertise to operate successfully in markets with substantially different regulatory structures. As governments seek to encourage the development of SMEs, policy makers often confront the critical question of what ultimately motivates SME export behavior. Specifically, there is considerable interest in understanding how SMEs confront the challenges of regulatory asymmetry. Neoclassical models of the firm generally emphasize expected profit maximization under uncertainty; however, these approaches do not adequately explain the entrepreneurial decision under regulatory asymmetry. Behavioral theories of the firm offer a far richer understanding of decision making by taking into account aspirations and adaptive performance in risky environments. This paper develops an analytical framework for the decision making of a single agent. Considering risk, uncertainty and opportunity cost, the analysis focuses on the export behavior response of an SME in a situation of regulatory asymmetry. Drawing on the experience of a fruit processor in Muzaffarpur, India, who must consider different regulatory environments when shipping fruit treated with sulfur dioxide, the study dissects the firm-level decision using @Risk, a Monte Carlo computational tool.
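
The study dissects the decision with @Risk, a commercial Monte Carlo tool; the same kind of computation can be sketched in plain Python. All parameter values below (rejection probability, prices, losses) are invented for illustration and are not figures from the case study.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_export_profit(n_trials=10_000, p_rejection=0.15,
                           price=12.0, cost=9.0, rejection_loss=9.0):
    """Crude Monte Carlo of a single shipment decision: the consignment either
    clears the importer's SO2 limit (profit = price - cost) or is rejected at
    the border (the shipment cost is lost). All parameters are hypothetical."""
    total = 0.0
    for _ in range(n_trials):
        if random.random() < p_rejection:
            total -= rejection_loss       # shipment rejected under the stricter regime
        else:
            total += price - cost         # shipment clears and sells
    return total / n_trials

expected_profit = simulate_export_profit()
```

Sweeping `p_rejection` across the values implied by the different regulatory regimes is a simple way to see at what point regulatory asymmetry turns the expected profit of exporting negative.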

Relevance:

80.00%

Abstract:

This work provides several policy proposals capable of strengthening the private enforcement of EU competition law in arbitration. It focuses on the procedural law aspects that are permeated by legal uncertainty and that have not yet fallen under the scrutiny of the law and economics debate. The policy proposals described herein are based on the functional approach to law and economics and aim to promote a more qualified decision making process by adjudicators, private parties and lawmakers. The resulting framework of procedural rules would be a cost-effective policy tool that could sustain the European Commission's effort to guarantee a workable level of competition in the EU internal market. This project aims to answer the following broad research question: which procedural rules can improve the efficiency of antitrust arbitration by decreasing litigation costs for private parties on the one hand, and by increasing private parties' compliance with competition law on the other hand? Throughout this research project, this broad question has been developed into sub-questions revolving around several key legal issues. The chosen sub-questions result from a vacuum in the European enforcement system that leaves several key legal issues in antitrust arbitration unresolved. The legal framework proposed in this research project could prevent such a blurry scenario from impairing the EU private enforcement of competition law in arbitration. Therefore, our attention was drawn to those legal issues whose proposed solutions lead to relevant uncertainties and that are most suitable for a law and economics analysis.

Relevance:

80.00%

Abstract:

Studies have shown that unused patents make up a high proportion of all patents in North America, Europe and Japan. In particular, studies have identified a considerable share of strategic patents which are left unused for purely strategic reasons. While such patents might generate strategic rents for their owner, they may have harmful consequences for society if, by blocking the alternative solutions that other inventions provide, they hamper the possibility of better solutions. Accordingly, the importance of the issue of non-use is highlighted in the literature on strategic patenting, IPR policy and innovation economics. Moreover, the current literature has emphasized the role of patent pools in dealing with potential issues such as the excessive transaction costs caused by patent thickets and blocking patents. In fact, patent pools have emerged as policy tools facilitating technology commercialization and alleviating patent litigation among rivals holding overlapping IPRs. In this dissertation I provide a critical literature review on strategic patenting, identify present gaps and discuss some future research paths. Moreover, I investigate the drivers of the strategic non-use of patents, with a particular focus on unused strategic play patents. Finally, I examine whether the intensity of pool members' participation in patent pools explains their willingness to use their non-pooled patents. I also investigate which characteristics of patent pools are associated with the willingness to use non-pooled patents through pool participation. I show that technological uncertainty and technological complexity are two technology environment factors that drive unused play patents. I also show that pool members participating more intensively in patent pools are more likely to be willing to use their non-pooled patents through pool participation.
I further show that pool licensors are more likely to be willing to use their non-pooled patents by participating in pools with a higher level of technological complementarity to their own technology.

Relevance:

80.00%

Abstract:

Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performance and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performance in different biomes and environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types representing a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure are the main topic of Chapter 3: several MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, which focuses on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT), to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation.
Overall, the Bayesian technique proved to be an excellent and versatile tool for calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
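
Bayesian calibration of the kind applied above combines a prior, a likelihood of the observations and an MCMC sampler. The sketch below calibrates a toy saturating light-response "model" with a random-walk Metropolis sampler; the model form, noise level, priors and step sizes are illustrative assumptions, not Prelued or HYDRALL.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a forest model: GPP as a saturating response to light,
# gpp = a * light / (light + b). This is NOT Prelued or HYDRALL.
def toy_model(params, light):
    a, b = params
    return a * light / (light + b)

light = np.linspace(50.0, 800.0, 40)
observed = toy_model((10.0, 200.0), light) + rng.normal(0.0, 0.3, light.size)

def log_posterior(params):
    a, b = params
    if not (0.0 < a < 50.0 and 0.0 < b < 2000.0):   # flat prior on a box
        return -np.inf
    resid = observed - toy_model(params, light)
    return -0.5 * np.sum((resid / 0.3) ** 2)        # Gaussian likelihood

# Random-walk Metropolis sampler
chain = [(5.0, 500.0)]
lp = log_posterior(chain[-1])
for _ in range(5000):
    a, b = chain[-1]
    prop = (a + rng.normal(0.0, 0.3), b + rng.normal(0.0, 15.0))
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
        chain.append(prop)
        lp = lp_prop
    else:
        chain.append(chain[-1])

posterior = np.array(chain[2000:])            # discard burn-in
a_hat, b_hat = posterior.mean(axis=0)
```

Beyond the point estimates, the retained `posterior` sample directly quantifies parameter uncertainty (e.g. via its spread), which is the advantage of the Bayesian procedure that the abstract emphasises.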

Relevance:

80.00%

Abstract:

This thesis consists of three essays on information economics. I explore how information is strategically communicated or designed by senders who aim to influence the decisions of a receiver. In the first chapter, I study a cheap talk game between two imperfectly informed experts and a decision maker. The experts receive noisy signals about the state and sequentially communicate the relevant information to the decision maker. I refine the self-serving belief system under uncertainty and characterise the most informative equilibrium that might arise in such environments. In the second chapter, I consider the case where a decision maker seeks advice from a biased expert who also cares about establishing a reputation for competence. The expert has an incentive to misreport her information but faces a trade-off between the gain from misrepresentation and the potential reputation loss. I show that the equilibrium is fully revealing if the expert is not too biased and not too highly reputable. If there is competition between two experts, information transmission is always improved. However, when there are more than two experts the result is ambiguous, and it depends on the players' prior belief over states. In the last chapter, I consider a model of strategic communication where a privately and imperfectly informed sender can persuade a receiver. The sender may receive favorable or unfavorable private information about her preferred state. I describe two mechanisms that are adopted in real-life situations and that theoretically improve equilibrium informativeness given the sender's private information. The first is a policy that imposes symmetry constraints on the choice of experiments. The second is an approval strategy characterised by a low precision threshold at which the receiver accepts the sender with positive probability, and a higher one at which the sender is accepted with certainty.
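
In persuasion models of the kind described above, the receiver updates beliefs by Bayes' rule given the precision of the sender's experiment. A minimal sketch, with arbitrary prior and precision values chosen for illustration:

```python
def posterior_good(prior, precision):
    """Receiver's posterior that the state is good after a favourable signal
    from a binary experiment that reports the true state with probability
    `precision` (plain Bayes' rule)."""
    favourable_and_good = prior * precision
    favourable_and_bad = (1.0 - prior) * (1.0 - precision)
    return favourable_and_good / (favourable_and_good + favourable_and_bad)

# A more precise experiment moves the receiver further from the prior
weak = posterior_good(prior=0.3, precision=0.6)
strong = posterior_good(prior=0.3, precision=0.9)
```

A precision threshold of the kind the approval strategy uses amounts to requiring that the posterior after a favourable signal clear the receiver's acceptance bar, which is why raising the required precision raises the credibility of an approval.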

Relevance:

40.00%

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged as an issue only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be observed before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and built during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use, with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
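
Travelling-wave fault location of the kind described above reduces, in its simplest two-station form, to x = (L + v·(t1 − t2))/2, with the combined standard uncertainty propagated through the GUM's first-order formula. The line length, wave speed and uncertainty figures below are illustrative assumptions, not values from the thesis.

```python
import math

def fault_location(L, v, dt):
    """Distance [m] of the fault from station 1, given line length L [m],
    travelling-wave speed v [m/s] and arrival-time difference dt = t1 - t2 [s]."""
    return 0.5 * (L + v * dt)

def combined_uncertainty(v, dt, u_dt, u_v):
    """First-order GUM propagation for x = (L + v*dt)/2, with L assumed exact:
    u(x)^2 = (v/2)^2 * u(dt)^2 + (dt/2)^2 * u(v)^2."""
    return math.sqrt((0.5 * v * u_dt) ** 2 + (0.5 * dt * u_v) ** 2)

# Illustrative 10 km feeder with a propagation speed of ~2e8 m/s;
# the transient reaches station 1 20 us before station 2.
x = fault_location(10_000.0, 2.0e8, -20e-6)                     # ~3000 m
u_x = combined_uncertainty(2.0e8, -20e-6, u_dt=1e-7, u_v=2.0e6)
```

The first term shows why timing accuracy dominates the location uncertainty: at v ≈ 2×10⁸ m/s, every 100 ns of timing uncertainty alone contributes about 10 m to u(x).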

Relevance:

40.00%

Abstract:

Marine soft bottom systems show a high variability across multiple spatial and temporal scales. Both natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. The description of such spatial variability is required to understand the ecological processes behind them. However, in order to have a better estimate of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to give a significant contribution both in improving the methodological approaches to the study of biological variability in soft bottom habitats and in increasing the knowledge of the effect that different process (both natural and anthropogenic) could have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker’s index has become the most widely used measure of beta-diversity. However, application of the Whittaker index to soft bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of the heterogeneity, thus it becomes difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusual large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index but behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages. 
Although the new index makes an important contribution to the study of biodiversity in sedimentary environment, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna highly depends on both spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problem often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from been resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish-trawling) sources of disturbance. Although few studies in this area aimed at understanding which of these processes mainly affect macrobenthos, these have been conducted at a small spatial scale, as they were designated to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort performed at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis. I have described, in detail, spatial patterns both in terms of sedimentary characteristics and macrobenthic organisms and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular I have examined the effect of off shore gas platforms on benthic diversity and tested their effect over a background of natural spatial variability. 
The results obtained suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that may have important consequences for benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both abundance patterns and beta diversity). Nonetheless, more local effects, probably related to organic enrichment or pollution from the Po river input, have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a stronger effect on macrofauna distribution than any effect related to the presence of gas platforms. The effect of gas platforms is restricted mainly to small spatial scales and is related to a change in habitat complexity caused by natural dislodgement, or removal during structure cleaning, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects the composition of the benthic infauna. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects in the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful for understanding whether any anthropogenic source of disturbance is having a negative effect on the marine environment and, if so, for planning sustainable strategies for the proper management of the affected area.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, owing to the severe human and economic losses that flooding, and water in general, may cause. Floods are natural, often catastrophic, phenomena that cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what this thesis discusses and analyzes. In operational problems, the ultimate aim of a forecasting system is not to reproduce the river's behaviour: that is only a means of reducing the uncertainty about what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to define clearly what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting from a key question: should the choice of the intervention strategy be based on evaluating the model prediction by its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast?
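The distinction drawn by that key question can be illustrated with a toy calculation: instead of scoring how well the model reproduces reality, one estimates the distribution of what will actually happen *conditional on* the forecast, here via a simple linear-Gaussian relation fitted on past forecast/observation pairs. The numbers, and the linear-Gaussian assumption itself, are placeholders, not the thesis's method.

```python
# Hedged sketch: predictive uncertainty of the actual water level given a
# model forecast, from a least-squares fit on past pairs (invented data).
import statistics

past_forecasts = [2.1, 3.4, 1.8, 4.0, 2.9]   # model forecasts (invented)
past_observed  = [2.4, 3.1, 2.0, 4.3, 3.0]   # what actually happened (invented)

# least-squares fit: observed ~ a * forecast + b
mx = statistics.fmean(past_forecasts)
my = statistics.fmean(past_observed)
sxy = sum((x - mx) * (y - my) for x, y in zip(past_forecasts, past_observed))
sxx = sum((x - mx) ** 2 for x in past_forecasts)
a = sxy / sxx
b = my - a * mx
# residual spread: how uncertain reality remains once the forecast is known
resid_sd = statistics.stdev(
    y - (a * x + b) for x, y in zip(past_forecasts, past_observed)
)

def predictive(forecast):
    """Mean and spread of the *actual* level, conditional on the forecast."""
    return a * forecast + b, resid_sd

print(predictive(3.0))
```

The decision-relevant quantity is `predictive(forecast)`, the distribution of the future level given the forecast, not any score of the model against past reality; that is the reading of uncertainty the thesis argues for.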
Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making it possible to carry out objective and realistic risk evaluations. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time available to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and also to assess the probability distribution of the flooding time.
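The last two requirements can be sketched with an ensemble of forecast hydrographs: the flooding probability within a horizon is the fraction of members exceeding the threshold in time, and the first-exceedance times give an empirical distribution of the flooding time. The ensemble values, threshold and horizon below are all invented placeholders, not the thesis's tool.

```python
# Hedged sketch: flooding probability within a time horizon, and the
# empirical distribution of the flooding time, from an ensemble (invented).

THRESHOLD = 4.0   # flooding level (assumed)
HORIZON = 6       # hours available to implement the intervention (assumed)

# each member: forecast water levels at hours 1..8 (invented numbers)
ensemble = [
    [2.0, 2.5, 3.1, 3.8, 4.2, 4.6, 4.4, 4.0],
    [2.1, 2.4, 2.9, 3.3, 3.6, 3.9, 3.8, 3.5],
    [2.2, 2.8, 3.5, 4.1, 4.7, 5.0, 4.8, 4.5],
    [1.9, 2.2, 2.6, 3.0, 3.2, 3.4, 3.3, 3.1],
]

def first_exceedance(member, threshold):
    """Hour (1-based) at which the threshold is first exceeded, else None."""
    for hour, level in enumerate(member, start=1):
        if level > threshold:
            return hour
    return None

times = [first_exceedance(m, THRESHOLD) for m in ensemble]
# flooding probability within the horizon: members that flood in time
p_flood = sum(t is not None and t <= HORIZON for t in times) / len(ensemble)
print(p_flood)                               # → 0.5 (2 of 4 members)
print([t for t in times if t is not None])   # empirical flooding times
```

Tying `HORIZON` to the time needed to deploy the intervention is exactly the link between flooding probability and time of occurrence that the abstract calls for: a member that floods only after the horizon is too late to matter for this decision.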