5 results for Contiguous
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Anthropogenic activities and climatic processes heavily influence surface water resources, causing their progressive depletion and thereby affecting both societies and the environment. There is therefore an urgent need to understand the contribution of human and climatic dynamics to the variation of surface water availability. Here, this investigation is performed on the contiguous United States (CONUS) using remotely sensed data. Three anthropogenic factors (urban area, population, and irrigation) and two climatic factors (precipitation and temperature) were selected as potential drivers of changes in surface water extent, and the overlap between the increase or decrease in these drivers and the variation of surface water was examined. Most river basins experienced a surface water gain, driven by increased precipitation in the eastern CONUS and by a reduction of irrigated land in the western CONUS. River basins of the arid southwestern region and some basins of the northeastern area experienced a surface water loss, essentially induced by population growth together with a precipitation deficit and a general expansion of irrigated land. To further inspect the role of population growth and urbanization in surface water loss, the spatial interaction between human settlements and surface water depletion was examined by evaluating the frequency of surface water loss as a function of distance from urban areas. The decline of the observed frequency was successfully reproduced with an exponential distance-decay model, showing that surface water losses are concentrated in the proximity of cities. Climatic conditions influenced this pattern, with losses more widely distributed in arid regions than in temperate and continental areas. The results presented in this Thesis provide an improved understanding of the effects of anthropogenic and climatic dynamics on surface water availability, which could be integrated into the definition of sustainable strategies for urbanization, water management, and surface water restoration.
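To make the distance-decay analysis concrete, here is a minimal sketch of fitting an exponential model f(d) = f0 * exp(-k * d) to loss-frequency observations; the data, function names, and starting values are illustrative assumptions, not values from the Thesis.

```python
# Minimal sketch: fitting an exponential distance-decay model to the
# frequency of surface water loss vs. distance from urban areas.
# All data and parameter values below are illustrative, not from the Thesis.
import numpy as np
from scipy.optimize import curve_fit

def distance_decay(d, f0, k):
    """Loss frequency f(d) = f0 * exp(-k * d)."""
    return f0 * np.exp(-k * d)

# Hypothetical binned observations: distance from the urban boundary (km)
# and observed frequency of surface water loss in each bin.
distance_km = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
loss_freq = np.array([0.21, 0.17, 0.11, 0.06, 0.03, 0.015, 0.008])

(f0, k), _ = curve_fit(distance_decay, distance_km, loss_freq, p0=(0.2, 0.1))
print(f"f0 = {f0:.3f}, decay rate k = {k:.3f} per km")
```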
Abstract:
This research proposes an analysis of textual and cultural forms in the American horror film (1968-1998) by defining its so-called postmodern characters. The term "postmodern" does not denote a period in the history of cinema, but a series of forms and strategies recognizable in many American films. From a bipolar re-mediation and cognitive point of view, the postmodern phenomenon is considered a formal and epistemological re-configuration of the "modern" cultural system. The first section of the work examines theoretical problems around the "postmodern phenomenon" by defining its cultural and formal constants in different areas (epistemology, economy, mass media): convergence, fragmentation, manipulation, and immersion constitute these constants, while "excess" is the morphology of the change, producing the "fluctuation" of the previously consolidated system. The second section classifies the textual and cultural forms of American postmodern film, generally non-horror. The "classic narrative" structure – a coherent and consequent chain of causal cues leading toward a conclusion – is scattered by the postmodern constant of "fragmentation". New textual models arise, fragmenting the narrative ones into aggregations of data without causal-temporal logic. Considering the processes of "transcoding" and "remediation" between media, and the principle of "convergence" in the phenomenon, the essay defines these structures in postmodern film as "database forms" and "navigable space forms". The third section applies this classification to the American horror film (1968-1998). The formal constant of "excess" in the horror genre works on the paradigm of "vision": if postmodern film shows a crisis of "truth" in vision, in horror movies the excess of vision becomes "hyper-vision" – a "multiplication" of visions of death/blood/torture – and "intra-vision", which shows the impossibility of distinguishing the "real" vision from the virtual/imaginary one. In this perspective, the textual and cultural forms and strategies of postmodern horror film are predominantly: the "database-accumulation" forms, where the events result from a very simple "remote cause" serving as a pretext (as in Night of the Living Dead); and the "database-catalogue" forms, where the events follow one another around a "central" character or theme. In the latter case, the catalogue syntagms are either connected by "consecutive" elements, building stories linked by the actions of a single character (usually the killer), or connected as non-consecutive episodes around a general theme: examples of the first kind follow the model of The Wizard of Gore; of the second, films such as Mario Bava's I tre volti della paura. The "navigable space" forms are defined as: hyperlink a, where a single universe fluctuates between reality and dream, as in Rosemary's Baby; hyperlink b, where two non-hierarchical universes converge, one real and the other fictional, as in the Nightmare series; hyperlink c, where several worlds are separate but become contiguous in the final sequence, as in Targets; and the last form, navigable-loop, which includes a textual line that suddenly stops and starts again, reflecting the pattern of a "loop" (as in Lost Highway). The essay analyses in detail the organization of "visual space" in the postmodern horror film by tracing representative patterns. It concludes by examining the "convergence" of the technologies and cognitive structures of cinema and new media.
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from observed disease counts and expected disease counts computed from reference-population disease rates, in each area an SMR is derived as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because the population underlying the area is small or because the disease under study is rare. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without leaving the preliminary-study perspective that an analysis of SMR indicators is required to take. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than by means of the single observation alone. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
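As a minimal sketch of such a selection rule (assuming the posterior null probabilities b_i are already available, e.g. from MCMC output; the data below are illustrative, not from the thesis):

```python
# Sketch of an estimated-FDR-based selection rule: declare high risk in as
# many areas as possible while the estimated FDR, i.e. the average of the
# selected posterior null probabilities b_i, stays at or below fdr_max.
import numpy as np

def select_high_risk(b, fdr_max=0.05):
    order = np.argsort(b)  # most convincing high-risk candidates first
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    k = int(np.sum(running_fdr <= fdr_max))  # largest admissible set size
    return order[:k], (running_fdr[k - 1] if k > 0 else 0.0)

b = np.array([0.01, 0.02, 0.04, 0.30, 0.60, 0.90])  # hypothetical b_i's
areas, fdr_hat = select_high_risk(b, fdr_max=0.05)
print(areas, fdr_hat)  # selects areas [0 1 2], estimated FDR ~ 0.023
```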
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all the b_i's below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels, and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low, but the specificity is high. In these scenarios a selection rule based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, values up to 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of the relative risk values and the control of the FDR, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
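A minimal sketch of the threshold sweep used in the simulation assessment, assuming the ground-truth null indicator is known (all names and data are illustrative, not from the thesis):

```python
# Sketch: compare the estimated FDR (mean of the selected b_i's) with the
# true FDR (known in simulation) over a grid of thresholds t on the b_i's.
import numpy as np

def fdr_curves(b, is_null, thresholds):
    est, true = [], []
    for t in thresholds:
        sel = b < t                       # reject H0 where b_i is below t
        if not sel.any():
            est.append(0.0); true.append(0.0)
            continue
        est.append(b[sel].mean())         # estimated FDR
        true.append(is_null[sel].mean())  # true FDR: share of false rejections
    return np.array(est), np.array(true)

rng = np.random.default_rng(1)
is_null = np.arange(100) < 80                      # 80 null, 20 high-risk areas
b = np.where(is_null, rng.uniform(0.3, 1.0, 100),  # b_i tends high under H0
             rng.uniform(0.0, 0.2, 100))           # and low in high-risk areas
est, true = fdr_curves(b, is_null, np.linspace(0.01, 0.5, 50))
```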
Abstract:
After the undeniable political and legal developments tending towards uniformity, it is inevitable to maintain that the market for the management of infrastructures and of ground-based air transport also constitutes a determining factor of air transport, with a stronger need for uniformity of the regulatory framework. Airport management and the related services fall within air law. For "air transport" (the basic dynamic notion that characterizes air transport law) to take shape, there must be an agreement between two countries, a designated flight permit, a take-off and landing slot, and the regulation of the related connected activities, so that they are carried out under conditions of safety, the conditio sine qua non of all aviation activities. However, the best doctrine feels the need for a separate treatment of air law in the strict sense and of the discipline of airports, although the two fields are contiguous. This is legitimized by the opposing needs of the operators of the two sectors. As a final consideration, we can maintain that legislative developments, in both aviation law and maritime law, lead to the embrace of a transport law inclusive of every form in which the transport phenomenon is realized, detaching itself from the mere phenomenon of nautical operation as the characterizing element of the discipline. What legislative future lies ahead for the management of the airport as an asset? What will its legal dimension be on important issues for which European legislation exists, such as the allocation of slots, airport charges and ground handling, or on those of a predominantly national character? And finally, what would be the path to follow in regulating the new airport market, which has moved from the idea of competition for the market to exploring competition in the market, with airports behaving as operators competing with one another?
Abstract:
Since the first subdivisions of the brain into macro-regions, it has been assumed a priori that, given the heterogeneity of neurons, different areas host specific functions and process unique information in order to generate behaviour. Moreover, the various sensory inputs coming from different sources (eye, skin, proprioception) flow from one macro-region to another, being constantly computed and updated. Therefore, especially for non-contiguous cortical areas, the same information is not expected to be found. From this point of view, it would seem inconceivable that the motor and parietal cortices, diversified by the information they encode and by their anatomical position in the brain, could show very similar neural dynamics. With the present thesis, by analyzing the population activity of parietal areas V6A and PEc with machine learning methods, we argue that such a simplified view of brain organization does not reflect the actual neural processes. We reliably detected a number of neural states that were tightly linked to distinct periods of the task sequence, i.e. the planning and execution of movement and the holding of the target, as already observed in motor cortices. The states before and after the movement could be further segmented into two states related to different stages of movement planning and arm posture processing. Rather unexpectedly, we found that activity during the movement could be parsed into two states of equal duration temporally linked to the acceleration and deceleration phases of the arm. Our findings suggest that, at least during arm reaching in 3D space, the posterior parietal cortex (PPC) shows low-level population neural dynamics remarkably similar to those found in the motor cortices. In addition, the present findings suggest that computational processes in PPC could be better understood if studied with a dynamical systems approach rather than as a mosaic of single units.
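The abstract does not name the exact method used to detect these states; one common approach to recovering discrete population states is to fit a hidden Markov model to binned firing rates. The sketch below uses the third-party hmmlearn package on purely synthetic data; the shapes, state count, and names are illustrative assumptions, not the thesis's pipeline.

```python
# Minimal sketch: segmenting population activity into discrete neural states
# with a Gaussian HMM (one common choice; not necessarily the thesis method).
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes hmmlearn is installed

rng = np.random.default_rng(0)
# Hypothetical population activity: 600 time bins x 40 neurons (e.g., binned
# firing rates recorded from V6A/PEc during a reaching task).
rates = rng.poisson(5.0, size=(600, 40)).astype(float)

# Fit an HMM with a handful of states (e.g., planning, acceleration,
# deceleration, holding) and decode the most likely state per time bin.
hmm = GaussianHMM(n_components=5, covariance_type="diag",
                  n_iter=100, random_state=0)
hmm.fit(rates)
states = hmm.predict(rates)  # one state label per time bin
print(np.unique(states, return_counts=True))
```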