15 results for Objective assumptions
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The research performed during the PhD and presented in this thesis made it possible to assess the pushover analysis method and its application in evaluating the correct structural seismic response. In this sense, the extensive critical review of existing pushover procedures (illustrated in chapter 1) outlined their major issues, related to the assumptions and hypotheses made in the application of the method. Therefore, with the purpose of evaluating the effectiveness of pushover procedures, a wide numerical investigation has been performed. In particular, attention has been focused on structural irregularity in elevation, on the choice of the load vector and on its updating criteria. In the study, eight pushover procedures have been considered, of which four are of conventional type, one is multi-modal, and three are adaptive. The evaluation of their effectiveness in identifying the correct dynamic structural response has been carried out by performing several dynamic and static non-linear analyses on eight RC frames, characterized by different properties in terms of regularity in elevation. The comparison of static and dynamic results has then made it possible to evaluate the examined pushover procedures and to identify the margin of error to be expected when using each of them. Both for base shear-top displacement curves and for the considered storey parameters, the best agreement with the dynamic response was observed for the Multi-Modal Pushover procedure. Attention has therefore been focused on the Displacement-based Adaptive Pushover, for which an improvement strategy has been defined, and on modal combination rules, advancing an innovative method based on a quadratic combination of the modal shapes (QMC). The latter has been implemented in a conventional pushover procedure, whose results have been compared with those obtained by other multi-modal procedures.
The development of research on pushover analysis is very important because the objective is to arrive at the definition of a simple, effective and reliable analysis method, an indispensable tool in the seismic evaluation of new or existing structures.
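As a rough illustration of how a multi-modal load pattern of the kind discussed above can be assembled, the sketch below combines participation-weighted mode shapes storey by storey using an SRSS rule as a generic stand-in; the quadratic modal combination (QMC) proposed in the thesis is not reproduced, and the mode shapes, participation factors and masses are invented for a hypothetical four-storey frame.

```python
# Hypothetical 4-storey frame: storey masses [t], normalised mode shapes
# and participation factors are all invented for illustration.
masses = [40.0, 40.0, 40.0, 30.0]
modes = [[0.35, 0.65, 0.88, 1.00],    # mode 1
         [-0.80, -0.55, 0.30, 1.00]]  # mode 2
gammas = [1.3, 0.45]                  # modal participation factors

def srss_profile(modes, gammas):
    """Combine participation-weighted modal ordinates storey by storey
    with the SRSS rule (a generic stand-in for the thesis' QMC rule)."""
    n = len(modes[0])
    return [sum((g * phi[j]) ** 2 for g, phi in zip(gammas, modes)) ** 0.5
            for j in range(n)]

profile = srss_profile(modes, gammas)
loads = [m * p for m, p in zip(masses, profile)]  # storey force pattern
total = sum(loads)
loads = [f / total for f in loads]                # normalised to unit base shear
```

The normalised vector `loads` would then scale the lateral force pattern applied incrementally in a static pushover run.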
Abstract:
The general objective of this research is to explore theories and methodologies of sustainability indicators, environmental management and decision making disciplines with the operational purpose of producing scientific, robust and relevant information for supporting system understanding and decision making in real case studies. Several tools have been applied in order to increase the understanding of socio-ecological systems as well as to provide relevant information on the choice between alternatives. These tools have always been applied keeping in mind the complexity of the issues and the uncertainty tied to the partial knowledge of the systems under study. Two case studies with specific application to performance measurement (environmental performance in the case of the K8 approach and sustainable development performance in the case of the EU Sustainable Development Strategy) and a case study about the selection of sustainable development indicators amongst Municipalities in Scotland are discussed in the first part of the work. In the second part of the work, the common denominator among subjects is the application of spatial indices and indicators to address operational problems in land use management within the territory of the Ravenna province (Italy). The main conclusion of the thesis is that a ‘perfect’ methodological approach which always produces the best results in assessing sustainability performance does not exist. Rather, there is a pool of correct approaches answering different evaluation questions, to be used when methodologies fit the purpose of the analysis. For this reason, methodological limits and conceptual assumptions, as well as the consistency and transparency of the assessment, become the key factors for assessing the quality of the analysis.
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. In recent years several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the related high hazard level. This research focuses on the depositional phase of debris flows, through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error in order to understand how the empirical uncertainties influence the predictions; on the other hand, the research addresses the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of the empirical relations between event volume and planimetric and cross-section inundated areas, the code allows the delineation of areas affected by an event by taking into account information about volume, preferential flow path and the digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves the calibration of the model based on factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of the deposited volume.
In addition, in order to test the prediction capabilities of physically based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), adopts a one-phase approach based on the Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input file combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. Regarding the hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport and deposition, as well as seasonal patterns of occurrence, and to examine the role played by bedrock geology in sediment transfer.
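The semi-empirical mobility relations on which DFlowz-type codes rely link the event volume V to the planimetric and cross-section inundated areas through a V^(2/3) power law. The sketch below illustrates that scaling with placeholder coefficients; these are not the values calibrated by Berti and Simoni (2007), and the factor-of-two uncertainty band is likewise illustrative.

```python
def inundated_areas(volume_m3, k_b=20.0, k_a=0.1):
    """Return (planimetric_area, cross_section_area) in m^2 for a
    debris-flow volume in m^3 using the generic V^(2/3) scaling.
    Coefficients k_b, k_a are illustrative placeholders, not the
    calibrated values of Berti and Simoni (2007)."""
    b = k_b * volume_m3 ** (2.0 / 3.0)   # planimetric inundated area
    a = k_a * volume_m3 ** (2.0 / 3.0)   # cross-sectional inundated area
    return b, a

def with_uncertainty(area, factor=2.0):
    """Bracket a prediction by the scatter of the empirical relation
    (a hypothetical factor-of-two band)."""
    return area / factor, area * factor

b, a = inundated_areas(1000.0)           # a 1000 m^3 event
lo, hi = with_uncertainty(b)
```

The width of the `with_uncertainty` band is exactly the kind of empirical scatter that, as noted above, dominates the predictive uncertainty together with errors in the estimated deposited volume.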
Abstract:
The objective of this work is the evaluation of the potential of navigation satellite signals to retrieve basic atmospheric parameters. A detailed study has been performed on the assumptions more or less explicitly contained in the common processing steps of navigation signals. A probabilistic procedure has been designed for measuring vertical discretised profiles of pressure, temperature and water vapour, together with their associated errors. Numerical experiments on a synthetic dataset have been performed with the main objective of quantifying the information that could be gained from such an approach, using entropy and relative entropy as testing parameters. A simulator of the phase delay and bending of a GNSS signal travelling across the atmosphere has been developed to this aim.
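A minimal sketch of the information-content test mentioned above: the entropy of a discretised prior versus posterior over one atmospheric parameter, and the relative entropy (Kullback-Leibler divergence) quantifying the information gained from the observation. The distributions and the four-bin grid are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def relative_entropy(post, prior):
    """KL divergence D(post || prior): information gained by the measurement."""
    return sum(q * math.log(q / p) for q, p in zip(post, prior) if q > 0.0)

prior = [0.25, 0.25, 0.25, 0.25]       # flat prior over 4 pressure bins
posterior = [0.05, 0.70, 0.20, 0.05]   # sharpened by the GNSS observation

info_gain = entropy(prior) - entropy(posterior)  # entropy reduction (nats)
kl = relative_entropy(posterior, prior)
```

A larger `info_gain` (or `kl`) indicates that the synthetic GNSS observation constrains the retrieved profile more strongly relative to the prior.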
Abstract:
The objective of the current thesis is to investigate the temporal dynamics (i.e., time courses) of the Simon effect, both from a theoretical and an experimental point of view, for a better understanding of a) whether one or more processes are responsible for the Simon effect and b) how this mechanism or these mechanisms differently influence performance. In the first, theoretical part (i.e., “Theoretical Overview”), I examined in detail the process and justification for analyzing the temporal dynamics of the Simon effect and the assumptions that underlie the interpretation of the results obtained in the existing literature so far. In the second part (“Experimental Investigations”), I experimentally investigated several issues which the existing literature left unsolved, in order to gather further evidence in favor of, or in contrast with, the mainstream models currently used to account for the different Simon effect time courses. Some points about the experiments are worth mentioning. First, all the experiments were conducted in the laboratory, presenting participants with stimuli on a PC screen and then recording their responses. Both stimulus presentation and response collection were controlled by the E-Prime software. The dependent variables of interest were always behavioral measures of performance, such as speed and accuracy. Second, most of my experiments were conducted at the Communication Sciences Department (University of Bologna), under Prof. Nicoletti’s supervision. The remaining experiments were conducted at the Psychological Sciences Department of Purdue University (West Lafayette, Indiana, USA), where I collaborated for one year as a visiting student with Prof. Proctor and his team. Third, my experimental pool was entirely composed of healthy young students, since the cognitive functioning of elderly people was not the target of my research.
Abstract:
Geometric nonlinearities of flexure hinges introduced by large deflections often complicate the analysis of compliant mechanisms containing such members; therefore, Pseudo-Rigid-Body Models (PRBMs) were proposed and developed by Howell [1994] to analyze the characteristics of slender beams under large deflection. These models, however, fail to approximate the characteristics of deep (short) beams or of other flexure hinges. Lobontiu's work [2001] contributed to the analysis of diverse flexure hinges building on the assumption of small deflection, which limits the application range of these flexure hinges and cannot capture their stiffness and stress characteristics under large deflection. Therefore, the objective of this thesis is to analyze flexure hinges considering both the effects of large deflection and shear force, in order to guide the design of flexure-based compliant mechanisms. The main work conducted in the thesis is outlined as follows. 1. Three popular types of flexure hinges (circular, elliptical and corner-filleted flexure hinges) are chosen for analysis. 2. A Finite Element Analysis (FEA) method based on commercial software (Comsol) is then used to correct the errors produced by the equations proposed by Lobontiu when the chosen flexure hinges undergo large deformation. 3. Three sets of generic design equations for the three types of flexure hinges are then proposed on the basis of the stiffness and stress characteristics from the FEA results. 4. A flexure-based four-bar compliant mechanism is finally studied and modeled using the proposed generic design equations. The load-displacement relationships are verified by a numerical example. The results show that the maximum error in the moment-rotation relationship is less than 3.4% for a single flexure hinge, and lower than 5% for the four-bar compliant mechanism, compared with the FEA results.
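For context, the sketch below gives the small-deflection rotational stiffness of the simplest member of the family discussed above, a prismatic (leaf-type) flexure; the closed-form expressions for circular, elliptical or corner-filleted profiles derived by Lobontiu [2001] are considerably more involved and are not reproduced here. The material and dimensions are assumed values.

```python
def rotational_stiffness(E, b, t, L):
    """Small-deflection bending stiffness K = E*I/L [N*m/rad] of a leaf
    flexure of length L, out-of-plane width b and in-plane thickness t.
    Linear Euler-Bernoulli theory only: no shear or large-deflection
    effects, which are precisely what the thesis sets out to capture."""
    I = b * t ** 3 / 12.0   # second moment of area of the rectangle
    return E * I / L

# Assumed aluminium hinge: E = 70 GPa, 10 mm wide, 0.5 mm thick, 8 mm long
K = rotational_stiffness(E=70e9, b=0.01, t=0.0005, L=0.008)
theta = 0.05            # imposed rotation [rad]
M = K * theta           # moment required, valid for small theta only
```

For rotations large enough that `K` itself changes with the load, this linear estimate breaks down, which is where FEA-corrected design equations of the kind proposed in the thesis become necessary.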
Abstract:
Starting from approaches that understand mediation as a mere factual relationship, the work will advance towards its consideration as a contract. To that end, the categories or institutions deemed unsuited to the nature of mediation will be set aside, delving into its most basic structures, until its contractual characterization is reached. From the contract, elements such as synallagmaticity, principality, onerosity and consensuality, among other essential or natural circumstances, will be addressed, before tackling what is perhaps the circumstance deserving the most special treatment: the aleatory risk of mediation. Moreover, since these are under-regulated contracts, their economic and practical dimension not only invites reflection but drives criticism and, while it cannot by itself provide answers, it can help to discard less suitable options in each case. From all of the above, a single, clear and precise contractual category will be concluded, also subjected to an economic analysis of each of its elements, as well as to a comparison with the proposals of European contract law on the same questions. Thus, at each step, the aim is to verify that the partial conclusions on which subsequent stages of the research rest are not only systematic but also apt to achieve the purpose of the contract and, through it, that of the parties. The resulting contract will be categorized, but not thereby regulated. Hence the final need to place it within a broader contractual category able to inform those characteristics with sufficient regulation, from which to establish real normativity.
Abstract:
DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system, focused on the simultaneous reduction of soot and NOx while maintaining reasonable fuel economy. In recent years, genetic algorithms (GAs) and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. In order to make the optimization process less time-consuming, CFD simulations can be more conveniently used to generate a training set for the learning process of an artificial neural network which, once correctly trained, can be used to forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
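The virtual optimization loop described above can be sketched as follows; the surrogate here is a trivial stand-in for a trained neural network, the objective is an invented analytic function rather than actual engine outputs, and the GA is deliberately minimal (averaging crossover, Gaussian mutation, elitist survival).

```python
import random

random.seed(0)  # reproducible toy run

def surrogate(x):
    """Stand-in for the trained ANN: maps two normalised design
    parameters to invented soot- and NOx-like outputs. All names and
    values are hypothetical."""
    injection, egr = x
    soot = (injection - 0.3) ** 2 + 0.1 * egr
    nox = (egr - 0.7) ** 2 + 0.1 * injection
    return soot, nox

def fitness(x):
    # Scalarised multi-objective for brevity; a real virtual
    # optimization would use Pareto ranking instead of a simple sum.
    soot, nox = surrogate(x)
    return soot + nox

def ga(pop_size=30, generations=40, sigma=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)  # crossover by averaging
            child = [(a[i] + b[i]) / 2 + random.gauss(0.0, sigma)
                     for i in range(2)]
            children.append([min(1.0, max(0.0, g)) for g in child])
        pop = parents + children
    return min(pop, key=fitness)

best = ga()  # thousands of cheap surrogate calls replace costly CFD runs
```

The point of the structure is visible in the last line: every `fitness` call hits the surrogate, so the GA can afford the many evaluations it needs to converge.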
Abstract:
The recent European Union Directive 31/2010 requires member states to reorganize their national legislative frameworks on the energy performance of buildings, so that from 1 January 2021 all new buildings exhibit a near-zero energy balance; for public buildings the deadline is brought forward to 1 January 2019. The concept of nearly zero-energy buildings (nZEB) starts from the premise of an envelope of passive-standard energy performance, and then offsets the modest annual consumption through the production, preferably on site, of energy from renewable sources. In this light, reconsidering the potential of solar architecture identifies concrete tools and sound methodologies to support the design of increasingly high-performing envelopes that fully exploit an inexhaustible, widespread and universally accessible resource such as solar energy. All this also in view of the no longer deferrable need to reduce the energy load attributable to buildings, which are known to be responsible for more than 40% of world energy consumption and 24% of greenhouse gas emissions. On these premises, the research places at its core the integration into the architectural envelope of solar thermal-gain systems, so-called passive, and solar energy-production systems, so-called active. The analytical and operational path followed aims to provide methodological and practical tools for architectural design, which needs a new integrated approach aimed at achieving energy-saving objectives.
Through a general survey of the concept of solar architecture and of the theoretical and terminological premises underlying it, the research has outlined three types of final outcome: a codification of the recurrent morphologies in solar buildings, a comparative analysis of solar yield in the main building typological aggregations, and an important design-verification part in which the assumptions of the previous categories were applied.
Abstract:
Gait analysis allows the characterization of motor function, highlighting deviations from normal motor behavior related to an underlying pathology. The widespread use of wearable inertial sensors has opened the way to the evaluation of gait in ecological conditions, and a variety of methodological approaches and algorithms have been proposed for the characterization of gait from inertial measures (e.g. temporal parameters, motor stability and variability, specific pathological alterations). However, no comparative analysis of their performance (i.e. accuracy, repeatability) was yet available, in particular of how this performance is affected by extrinsic factors (i.e. sensor location, computational approach, analysed variable, environmental testing constraints) and intrinsic factors (i.e. functional alterations resulting from pathology). The aim of the present project was to comparatively analyze the influence of intrinsic and extrinsic factors on the performance of the numerous algorithms proposed in the literature for the quantification of specific characteristics (i.e. timing, variability/stability) and alterations (i.e. freezing) of gait. Considering extrinsic factors, the influence of sensor location, analyzed variable, and computational approach on the performance of a selection of gait segmentation algorithms drawn from a literature review was analysed in different environmental conditions (e.g. solid ground, sand, in water). Moreover, the influence of altered environmental conditions (i.e. in water) was analyzed with reference to the minimum number of strides necessary to obtain reliable estimates of gait variability and stability metrics, integrating what is already available in the literature for overground gait in healthy subjects. Considering intrinsic factors, the influence of specific pathological conditions (i.e. Parkinson’s Disease) was analyzed as it affects the performance of segmentation algorithms, with and without freezing.
Finally, the analysis of the performance of algorithms for the detection of gait freezing showed how the results depend on the implementation domain and on IMU position.
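A toy sketch of the kind of gait segmentation algorithm compared in the project: gait events are detected as local maxima of an angular-velocity signal from a (hypothetical) shank-mounted sensor, and stride times and their variability are derived from consecutive events. The synthetic signal, sampling rate and threshold below are invented for illustration.

```python
import math

fs = 100.0                                  # sampling frequency [Hz]
t = [i / fs for i in range(1000)]           # 10 s of data
stride_freq = 1.0                           # one stride per second
signal = [math.sin(2 * math.pi * stride_freq * ti) for ti in t]

def detect_events(sig, threshold=0.8):
    """Indices of local maxima above threshold (candidate gait events)."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > threshold and sig[i - 1] < sig[i] >= sig[i + 1]]

events = detect_events(signal)
stride_times = [(events[i + 1] - events[i]) / fs
                for i in range(len(events) - 1)]
mean_stride = sum(stride_times) / len(stride_times)
# Coefficient of variation: a simple stride-time variability metric
cv = (sum((s - mean_stride) ** 2 for s in stride_times) /
      len(stride_times)) ** 0.5 / mean_stride
```

On this noise-free sinusoid the detector finds one event per cycle and the variability is zero; the project's comparisons concern exactly how such detectors degrade when sensor location, environment (e.g. in water) or pathology (e.g. freezing) distort the real signal.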
Abstract:
This thesis explores methods based on the free energy principle and active inference for modelling cognition. Active inference is an emerging framework for designing intelligent agents in which psychological processes are cast in terms of Bayesian inference. Here, I appeal to it to test the design of a set of cognitive architectures via simulation. These architectures are defined in terms of generative models where an agent executes a task under the assumption that all cognitive processes aspire to the same objective: the minimization of variational free energy. Chapter 1 introduces the free energy principle and its assumptions about self-organizing systems. Chapter 2 describes how a minimal form of cognition able to achieve autopoiesis can emerge from the mechanics of self-organization. In chapter 3 I present the method by which I formalize generative models for action and perception. The proposed architectures provide a more biologically plausible account of complex cognitive processing that entails deep temporal features. I then present three simulation studies that aim to show different aspects of cognition, their associated behavior and the underlying neural dynamics. In chapter 4, the first study proposes an architecture that represents the visuomotor system for the encoding of actions during action observation, understanding and imitation. In chapter 5, the generative model is extended and lesioned to simulate brain damage and the neuropsychological patterns observed in apraxic patients. In chapter 6, the third study proposes an architecture for cognitive control and the modulation of attention for action selection. Finally, I argue how active inference can provide a formal account of information processing in the brain, and how the adaptive capabilities of the simulated agents are a mere consequence of the architecture of the generative models. Cognitive processing, then, becomes an emergent property of the minimization of variational free energy.
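The quantity these architectures minimize can be illustrated numerically for the smallest possible generative model, with one binary hidden state and one observation; the probabilities below are invented. At the exact posterior, the variational free energy reduces to the negative log evidence, -ln p(o).

```python
import math

prior = [0.5, 0.5]          # p(s): flat prior over the binary hidden state
likelihood = [0.9, 0.2]     # p(o=1 | s) for s in {0, 1}
obs = 1                     # the observed outcome

def free_energy(q):
    """Variational free energy (nats) of approximate posterior q(s):
    F = sum_s q(s) [ln q(s) - ln p(o|s) - ln p(s)]."""
    p_o = [likelihood[s] if obs == 1 else 1 - likelihood[s] for s in (0, 1)]
    return sum(q[s] * (math.log(q[s]) - math.log(p_o[s]) - math.log(prior[s]))
               for s in (0, 1) if q[s] > 0)

# The exact posterior minimises F, where F equals -ln p(o)
post_unnorm = [prior[s] * (likelihood[s] if obs == 1 else 1 - likelihood[s])
               for s in (0, 1)]
Z = sum(post_unnorm)                       # evidence p(o) = 0.55 here
posterior = [w / Z for w in post_unnorm]
```

Any `q` other than `posterior` yields a strictly larger `free_energy`, which is the sense in which perception-as-inference can be recast as free energy minimization.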
Abstract:
Pain is a highly complex phenomenon involving intricate neural systems, whose interactions with other physiological mechanisms are not fully understood. Standard pain assessment methods, relying on verbal communication, often fail to provide reliable and accurate information, which poses a critical challenge in the clinical context. In the era of ubiquitous and inexpensive physiological monitoring, coupled with the advancement of artificial intelligence, these new tools appear as the natural candidates for addressing such a challenge. This thesis aims to conduct experimental research to develop digital biomarkers for pain assessment. After providing an overview of the state of the art regarding pain neurophysiology and assessment tools, methods for appropriately conditioning physiological signals and controlling confounding factors are presented. The thesis focuses on three different pain conditions: cancer pain, chronic low back pain, and pain experienced by patients undergoing neurorehabilitation. The approach presented in this thesis has shown promise, but further studies are needed to confirm and strengthen these results. Prior to developing any models, a preliminary signal quality check is essential, along with the inclusion of personal and health information in the models to limit their confounding effects. A multimodal approach is preferred for better performance, although unimodal analysis has revealed interesting aspects of the pain experience. This approach can enrich routine clinical pain assessment by enabling pain to be monitored when and where it is actually experienced, without requiring explicit communication. This would improve the characterization of the pain experience, aid in the personalization of antalgic therapy, and bring timely relief, with the ultimate goal of improving the quality of life of patients suffering from pain.
Abstract:
Drawing on ethnographic data collected in Italian courts and prosecution offices, this dissertation offers new perspectives on legal decision-making by highlighting the importance of emotions in constructing and evaluating legal narratives. Focusing on criminal cases, it describes and dissects how judges and prosecutors use emotions in reflection and action tied to lay narratives and legal constraints. The analysis shows that legal professionals engage in different types of emotional dynamics when dealing with stories: first, they develop gut feelings, which are either endorsed or kept at a distance by means of emotional reflexivity, to comply with legal ideals of objectivity and impartiality. Second, empathy emerges as a crucial tool to direct interaction with lay people and to interpret legal prerequisites, such as credibility and intent. Finally, the dissertation shows that lay stories lead legal professionals to become passionate about and committed to the correct application of the law, the restoration of the moral order, and the achievement of justice. In light of the empirical findings, this thesis strives to develop a theoretical understanding of legal decision-making as narrative work that includes emotional dynamics consistent with rational, objective action.