26 results for Interconnected microgrids
Abstract:
What is the relationship between executive pay regulation and corporate social responsibility (CSR)? Currently, CSR is neither sufficiently included in economic research on executive pay, nor is pay regulation considered as a potential instrument in the growing body of CSR legislation. The successful proliferation of CSR in business practice and the attention policymakers and legislators now pay to it, however, have raised the importance of answering these questions. Thus, this blind spot in corporate governance (the relationship between compensation, CSR, and law) is the topic of this thesis. The dissertation approaches these issues in two subsequent research questions: first, the role of executive pay regulation as an institutional determinant of CSR engagement is identified. From these results, the second research question arises: should legislators promote CSR engagement and, if so, how? Lastly, a case study is conducted to map how the influence of index funds, an important driver of CSR in corporate governance, should be accommodated in the design of CSR legislation. The research project shows that pay regulation is part of the institutional determinants of CSR and, depending on its design, can incentivise or discourage different forms of CSR engagement. As a form of private self-regulation, CSR is closely interconnected with legal rules and is the result of complex underlying drivers inside and outside the firm. The study develops a differentiation of CSR activities to accommodate this complexity, which is applied in an analysis of pay regulation. Together, these inquiries form a comprehensive picture of the ways in which pay regulation sets incentives for CSR engagement. Finally, the thesis shows how CSR-oriented pay regulation is consistent with the conventional goals of corporate governance and ultimately provides a prospect for the integration of CSR and corporate law in general.
Abstract:
The modern world suffers from an intense water crisis, and emerging contaminants represent one of its most concerning elements. Substances, molecules, ions, and microorganisms take part in this vast and variegated class of pollutants, whose main characteristic is high resistance to traditional water purification technologies. An intense international research effort is under way to find new and innovative solutions to this problem, and graphene-based materials are among the most promising options. Graphene oxide (GO) is a nanostructured material in which domains populated by oxygenated groups alternate with interconnected areas of sp2-hybridized carbon atoms on the surface of one-atom-thick nanosheets. GO can adsorb a great number of molecules and ions on its surface, thanks to the variety of interactions it can express, such as hydrogen bonding, π-π stacking, and electrostatic and hydrophobic interactions. These characteristics, together with its high surface area, make it an optimal material for the development of innovative materials for drinking water remediation. The main concern in the use of GO in this field is avoiding secondary contamination (i.e., GO itself must not become a pollutant). This issue can be addressed through the immobilization of GO onto polymeric substrates, thus developing composite materials. The use of micro/ultrafiltration polymeric hollow fibers as substrates allows the design of adsorptive membranes, i.e., devices that perform filtration and adsorption simultaneously. In this thesis, two strategies for the development of adsorptive membranes were investigated: a core-shell strategy, where hollow fibers are coated with GO, and a coextrusion strategy, where GO is embedded in the polymeric matrix of the fibers. The resulting devices were exploited both for fundamental studies (i.e., molecular and ionic behaviour between GO nanosheets) and for real applications (the coextruded material is now at TRL 9).
Abstract:
Over the last century, mathematical optimization has become a prominent tool for decision making. Its systematic application in practical fields such as economics, logistics, or defense has led to the development of algorithmic methods of ever increasing efficiency. Indeed, for a variety of real-world problems, finding an optimal decision among a set of (implicitly or explicitly) predefined alternatives has become conceivable in reasonable time. In recent decades, however, the research community has paid increasing attention to the role of uncertainty in the optimization process. In particular, one may question the notion of optimality, and even feasibility, when studying decision problems with unknown or imprecise input parameters. This concern is even more critical in a world becoming more and more complex, by which we mean interconnected, where each individual variation inside a system inevitably causes other variations in the system itself. In this dissertation, we study a class of optimization problems that suffer from imprecise input data and feature a two-stage decision process, i.e., where decisions are made in a sequential order (called stages) and where unknown parameters are revealed throughout the stages. Such problems have numerous practical applications, e.g., facility location with uncertain demands, transportation with uncertain costs, or scheduling under uncertain processing times. The uncertainty is handled from a robust optimization (RO) viewpoint (also known as the "worst-case perspective"), and we present original contributions to the RO literature on both the theoretical and the practical side.
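The two-stage structure described above can be sketched with a deliberately tiny robust toy problem: all data (facilities, scenarios, costs) are invented, and both stages are solved by brute-force enumeration rather than the decomposition methods a real RO study would use.

```python
from itertools import product

# Hypothetical data: two candidate facilities (stage 1: open or not) and two
# demand scenarios (the uncertainty set); stage 2 serves the revealed demand.
OPEN_COST = [10.0, 14.0]             # cost of opening facility 0 / 1
SERVE_COST = [[2.0, 5.0],            # SERVE_COST[f][s]: unit cost of facility f
              [4.0, 1.0]]            # under scenario s
DEMAND = [3.0, 6.0]                  # demand in scenario 0 / 1

def recourse_cost(open_f, s):
    """Stage-2 cost: serve scenario s's demand from the cheapest open facility."""
    open_units = [SERVE_COST[f][s] for f in range(2) if open_f[f]]
    return min(open_units) * DEMAND[s] if open_units else float("inf")

def solve_robust():
    """min over stage-1 decisions of (opening cost + worst-case recourse cost)."""
    best = None
    for open_f in product([0, 1], repeat=2):
        stage1 = sum(c for f, c in zip(open_f, OPEN_COST) if f)
        worst = max(recourse_cost(open_f, s) for s in range(2))
        total = stage1 + worst
        if best is None or total < best[0]:
            best = (total, open_f)
    return best

print(solve_robust())  # → (26.0, (0, 1))
```

Note how the robust solution opens only the more expensive facility 1, because it hedges better against the worst scenario; this min-max-min pattern is exactly the two-stage RO viewpoint the abstract describes.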
Abstract:
This doctoral dissertation represents a cluster of research activities carried out at the DICAM Department of the University of Bologna during a three-year Ph.D. course. The goal of this research is to show how the development of an interconnected infrastructure network, aimed at promoting accessibility and sustainability of places, is fundamental in a framework of deep urban regeneration. Sustainable urban mobility plays an important role in improving the quality of life of citizens. From an environmental point of view, a sustainable mobility system means reducing fuel consumption and energy waste and, in general, promoting low carbon emissions. At the same time, a socially and economically sustainable mobility system should be accessible to everybody and create more job opportunities through better connectivity and mobility. Environmentally friendly means of transport such as non-motorized transport, electric vehicles, and hybrid vehicles play an important role in achieving sustainability but require a planned approach at the local policy level. The aim of this study is to demonstrate that, through a targeted reconnection of road and cycle-pedestrian routes, the quality of life of an urban area subject to degradation can be significantly improved simply by increasing its accessibility and sustainability. Starting from a detailed study of European policies and from a comparison with similar real cases, the case study of the Canal Port of Rimini (Italy) has been analysed within the European project FRAMESPORT. The analysis allowed the elaboration of a multicriteria methodology leading to the definition of a project proposal and a priority scale of interventions. The applied methodology is a valuable tool that may be used in the future in similar urban contexts. Finally, the whole project was represented using virtual reality to visually show the difference between before and after the regeneration intervention.
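The priority scale of interventions mentioned above can be illustrated with a minimal weighted-sum multicriteria sketch; the criteria names, weights, and scores below are purely hypothetical placeholders, not values from the Rimini case study.

```python
# Invented criteria weights (must sum to 1) and normalized scores in [0, 1].
WEIGHTS = {"accessibility": 0.4, "sustainability": 0.35, "cost": 0.25}

interventions = {
    "cycle-pedestrian link": {"accessibility": 0.9, "sustainability": 0.8, "cost": 0.6},
    "road reconnection":     {"accessibility": 0.7, "sustainability": 0.5, "cost": 0.4},
    "waterfront promenade":  {"accessibility": 0.6, "sustainability": 0.7, "cost": 0.7},
}

def priority_scale(options, weights):
    """Rank options by weighted sum of criterion scores (highest priority first)."""
    score = lambda crit: sum(weights[c] * crit[c] for c in weights)
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

print(priority_scale(interventions, WEIGHTS))
```

A real multicriteria analysis would also normalize heterogeneous units and justify the weights with stakeholder input; the sketch only shows the ranking mechanics.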
Assessing brain connectivity through electroencephalographic signal processing and modeling analysis
Abstract:
Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. 
Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
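As a rough illustration of the time-domain Granger causality the abstract refers to, the following minimal sketch compares residual variances of autoregressive models with and without the candidate driver signal; it is standard textbook GC on synthetic data, not the thesis pipeline or its NMM simulations.

```python
import numpy as np

def granger_xy(x, y, p=2):
    """Log-ratio of residual variances for an AR(p) model of y:
    does adding x's lags improve the prediction of y? (>0 suggests x -> y)."""
    n = len(y)
    Y = y[p:]
    # lagged design matrices: y's own past, then y's and x's past together
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    res_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(np.var(res_own) / np.var(res_full))

# Synthetic test case: y is driven by lagged x, so we expect x -> y but not y -> x.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_xy(x, y), granger_xy(y, x))
```

On this data the first value is large and the second is near zero, matching the known coupling direction; real EEG analyses add significance testing and spectral decompositions on top of this core idea.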
Abstract:
The aim of this research is to improve the understanding of the factors that control the formation of karst porosity in hypogene settings and its associated patterns of void-conduit networks. Subsurface voids created by hypogene dissolution may span from a few microns to decametric tubes, providing interconnected conduit systems and forming highly anisotropic permeability domains in many reservoirs. Characterizing the spatial-morphological organization of hypogene karst is a challenging task with major implications for industry, given that only partial data can be acquired from the subsurface by indirect techniques. Therefore, two outcropping cave analogues are examined: the Cavallone-Bove Cave in the Majella Massif (Italy), and the karst systems of the Salitre Formation (Brazil). In the latter, a peculiar example of hypogene speleogenesis associated with silicification has been studied, providing an analogue of many karstified reservoirs hosted in cherts or cherty carbonates within mixed sedimentary sequences. The first part of the thesis focuses on the relationships between fracture patterns and flow pathways in deformed units in: 1) a fold-and-thrust setting (Majella Massif); 2) a cratonic block (Brazil). These settings are potential sites for the migration and accumulation of geofluids, where hypogene conduits may affect flow pathways, fluid storage, and reservoir properties. The results indicate that localized deformation producing cross-formational fracture zones associated with anticline hinges or fault damage zones is critical for hypogene fluid migration and karstification. The second part of the thesis deals with the multidisciplinary study of hydrothermal silicification and hypogene dissolution in Calixto Cave (Brazil). Petrophysical analyses and a geochemical characterization of silica deposits are used to unravel the spatial-morphological organization of the conduit system and its speleogenesis.
The novel results obtained from this cave shed new light on the relationship between hydrothermal silicification, hypogene dissolution and the development of multistorey cave systems in layered carbonate-siliciclastic sequences.
Abstract:
Values are beliefs or principles that are deemed significant or desirable within a specific society or culture, serving as the fundamental underpinnings for ethical and socio-behavioral norms. The objective of this research is to explore the domain encompassing moral, cultural, and individual values. To achieve this, we employ an ontological approach to formally represent the semantic relations within the value domain. The theoretical framework employed adopts Fillmore’s frame semantics, treating values as semantic frames. A value situation is thus characterized by the co-occurrence of specific semantic roles fulfilled within a given event or circumstance. Given the intricate semantics of values as abstract entities with high social capital, our investigation extends to two interconnected domains. The first domain is embodied cognition, specifically image schemas, which are cognitive patterns derived from sensorimotor experiences that shape our conceptualization of entities in the world. The second domain pertains to emotions, which are inherently intertwined with the realm of values. Consequently, our approach endeavors to formalize the semantics of values within an embodied cognition framework, recognizing values as emotion-laden semantic frames. The primary ontologies proposed in this work are: (i) ValueNet, an ontology network dedicated to the domain of values; (ii) ISAAC, the Image Schema Abstraction And Cognition ontology; and (iii) EmoNet, an ontology for theories of emotions. The knowledge formalization adheres to established modeling practices, including the reuse of semantic web resources such as WordNet, VerbNet, FrameNet, DBpedia, and alignment to foundational ontologies like DOLCE, as well as the utilization of Ontology Design Patterns.
These ontological resources are operationalized through the development of a fully explainable frame-based detector capable of identifying values, emotions, and image schemas, generating knowledge graphs from natural language, leveraging the semantic dependencies of a sentence, and allowing non-trivial higher-layer knowledge inferences.
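A toy version of such a fully explainable frame-based detector might look as follows; the trigger lexicon and frame names are invented for illustration and are not drawn from ValueNet, ISAAC, or EmoNet, and a real detector would use parsed semantic dependencies rather than bare token matching.

```python
# Invented lexicon: a trigger word evokes a (domain, frame) pair.
LEXICON = {
    "honesty": ("Value", "ValueSituation"),
    "fear":    ("Emotion", "NegativeEmotion"),
    "rise":    ("ImageSchema", "Verticality"),
}

def detect(sentence):
    """Return (token, relation, frame) triples for every trigger word found,
    in a knowledge-graph-friendly shape; every match is traceable to a rule."""
    triples = []
    for token in sentence.lower().strip(".").split():
        if token in LEXICON:
            domain, frame = LEXICON[token]
            triples.append((token, "evokes", f"{domain}:{frame}"))
    return triples

print(detect("Her honesty helped her rise above fear."))
```

Because each triple comes from an explicit lexicon entry, the detection is explainable by construction, which is the property the abstract emphasizes.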
Abstract:
The purpose of this research study is to discuss privacy and data protection-related regulatory and compliance challenges posed by digital transformation in healthcare in the wake of the COVID-19 pandemic. The public health crisis accelerated the development of patient-centred remote/hybrid healthcare delivery models that make increased use of telehealth services and related digital solutions. The large-scale uptake of IoT-enabled medical devices and wellness applications, and the offering of healthcare services via healthcare platforms (online doctor marketplaces) have catalysed these developments. However, the use of new enabling technologies (IoT, AI) and the platformisation of healthcare pose complex challenges to the protection of patients’ privacy and personal data. This happens at a time when the EU is drawing up a new regulatory landscape for the use of data and digital technologies. Against this background, the study presents an interdisciplinary (normative and technology-oriented) critical assessment of how the new regulatory framework may affect privacy and data protection requirements regarding the deployment and use of Internet of Health Things (hardware) devices and interconnected software (AI systems). The study also assesses key privacy and data protection challenges that affect healthcare platforms (online doctor marketplaces) in their offering of video API-enabled teleconsultation services and their (anticipated) integration into the European Health Data Space. The overall conclusion of the study is that regulatory deficiencies may create integrity risks for the protection of privacy and personal data in telehealth due to uncertainties about the proper interplay, legal effects and effectiveness of (existing and proposed) EU legislation.
The proliferation of normative measures may increase compliance costs, hinder innovation and, ultimately, deprive European patients of state-of-the-art digital health technologies, which is, paradoxically, the opposite of what the EU plans to achieve.
Abstract:
The world currently faces a paradox in terms of accessibility for people with disabilities. While digital technologies hold immense potential to improve their quality of life, the majority of web content still exhibits critical accessibility issues. This PhD thesis addresses this challenge through two interconnected research branches. The first introduces a groundbreaking approach to improving web accessibility by rethinking how it is approached, making it more accessible itself. It involves the development of: 1. AX, a declarative framework of web components that enforces the generation of accessible markup by means of static analysis. 2. An innovative accessibility testing and evaluation methodology, which communicates test results by exploiting concepts that developers are already familiar with (visual rendering and mouse operability) to convey the accessibility of a page. This methodology is implemented through the SAHARIAN browser extension. 3. A11A, a categorized and structured collection of curated accessibility resources aimed at helping their intended audiences discover and use them. The second branch focuses on unleashing the full potential of digital technologies to improve accessibility in the physical world. The thesis proposes the SCAMP methodology to make scientific artifacts accessible to blind and visually impaired individuals, as well as the general public. It enhances the natural characteristics of objects, making them more accessible through interactive, multimodal, and multisensory experiences. Additionally, the prototype of A11yVT, a system supporting accessible virtual tours, is presented. It provides blind and visually impaired individuals with the features necessary to explore unfamiliar indoor environments, while maintaining universal design principles that make it suitable for use by the general public.
The thesis extensively discusses the theoretical foundations, design, development, and unique characteristics of these innovative tools. Usability tests with the intended target audiences demonstrate the effectiveness of the proposed artifacts, suggesting their potential to significantly improve the current state of accessibility.
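The kind of static accessibility analysis described for AX can be hinted at with a minimal markup checker; this is an illustrative sketch, not the AX framework or the SAHARIAN extension, and the two rules it enforces are chosen only as simple examples of machine-checkable accessibility requirements.

```python
from html.parser import HTMLParser

class A11yChecker(HTMLParser):
    """Collect accessibility issues while statically scanning HTML markup:
    images must carry alt text, and inputs need some hook for a label."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt text")
        if tag == "input" and not ({"aria-label", "aria-labelledby", "id"} & attrs.keys()):
            self.issues.append("input has no label hook")

checker = A11yChecker()
checker.feed('<img src="a.png"><input type="text"><img src="b.png" alt="logo">')
print(checker.issues)  # → ['img missing alt text', 'input has no label hook']
```

The point of checking at authoring time, as the abstract describes for AX, is that inaccessible markup is rejected before it ever reaches users, instead of being detected after deployment.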
Abstract:
The aim of this thesis is to present exact and heuristic algorithms for the integrated planning of multi-energy systems. The idea is to disaggregate the energy system, starting with its core, the Central Energy System, and then proceeding towards the decentralized part. Therefore, a mathematical model for the generation expansion operations to optimize the performance of a Central Energy System is first proposed. To ensure that the proposed generation operations are compatible with the network, some extensions of the existing network are considered as well. All these decisions are evaluated both from an economic viewpoint and from an environmental perspective, as specific constraints related to greenhouse gas emissions are imposed in the formulation. Then, the thesis presents an optimization model for a solar organic Rankine cycle in the context of transactive energy trading. In this study, the impact that this technology can have on peer-to-peer trading in renewable-based community microgrids is inspected. Here the consumer becomes a prosumer and engages actively in virtual trading with other prosumers at the distribution system level. Moreover, there is an investigation of how different technological parameters of the solar organic Rankine cycle may affect the final solution. Finally, the thesis introduces a tactical optimization model for the maintenance scheduling phase of a Combined Heat and Power plant. Specifically, two types of cleaning operations are considered, i.e., online cleaning and offline cleaning. Furthermore, a piecewise linear representation of the electric efficiency variation curve is included. Given the challenge of solving the tactical management model, a heuristic algorithm is proposed. The heuristic works by solving the daily operational production scheduling problem, based on the final consumer’s demand and on the electricity prices.
The aggregate information from the operational problem is used to derive maintenance decisions at a tactical level.
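The piecewise linear electric-efficiency curve mentioned above can be sketched as linear interpolation between load breakpoints; the breakpoints and efficiency values below are invented for illustration and do not come from the thesis's CHP model.

```python
import bisect

# Hypothetical breakpoints: plant load fraction and electric efficiency there.
LOADS = [0.4, 0.6, 0.8, 1.0]
EFFS  = [0.28, 0.33, 0.36, 0.37]

def electric_efficiency(load):
    """Evaluate the piecewise linear curve at a load within the modelled range."""
    if not LOADS[0] <= load <= LOADS[-1]:
        raise ValueError("load outside modelled range")
    i = bisect.bisect_right(LOADS, load)
    if i == len(LOADS):               # load equals the last breakpoint
        return EFFS[-1]
    x0, x1 = LOADS[i - 1], LOADS[i]   # bracketing breakpoints
    y0, y1 = EFFS[i - 1], EFFS[i]
    return y0 + (y1 - y0) * (load - x0) / (x1 - x0)

print(electric_efficiency(0.7))       # halfway between the 0.6 and 0.8 breakpoints
```

In a MILP formulation this same curve would be encoded with SOS2 or binary variables rather than evaluated directly, but the interpolation logic is identical.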
Abstract:
The present dissertation shows how recent statistical analysis tools and open datasets can be exploited to improve modelling accuracy in two distinct yet interconnected domains of flood hazard (FH) assessment. In the first Part, unsupervised artificial neural networks are employed as regional models for sub-daily rainfall extremes. The models aim to learn a robust relation for locally estimating the parameters of Gumbel distributions of extreme rainfall depths for any sub-daily duration (1-24h). The predictions depend on twenty morphoclimatic descriptors. A large study area in north-central Italy is adopted, where 2238 annual maximum series are available. Validation is performed over an independent set of 100 gauges. Our results show that multivariate ANNs may remarkably improve the estimation of percentiles relative to the benchmark approach from the literature, where Gumbel parameters depend on mean annual precipitation. Finally, we show that the very nature of the proposed ANN models makes them suitable for interpolating predicted sub-daily rainfall quantiles across space and time-aggregation intervals. In the second Part, decision trees are used to combine a selected blend of input geomorphic descriptors for predicting FH. Relative to existing DEM-based approaches, this method is innovative, as it relies on the combination of three characteristics: (1) simple multivariate models, (2) a set of exclusively DEM-based descriptors as input, and (3) an existing FH map as reference information. First, the methods are applied to northern Italy, represented with the MERIT DEM (∼90m resolution), and second, to the whole of Italy, represented with the EU-DEM (25m resolution).
The results show that multivariate approaches may (a) significantly enhance the delineation of flood-prone areas relative to a selected univariate approach, (b) provide accurate predictions of expected inundation depths, (c) produce encouraging results in extrapolation, (d) complete the information of imperfect reference maps, and (e) conveniently convert binary maps into a continuous representation of FH.
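The local Gumbel fitting that the first Part improves upon can be sketched as a method-of-moments fit to an annual-maximum rainfall series, followed by reading off a return-period quantile; the sample series below is invented, and the thesis's ANN models replace exactly this kind of at-site/benchmark estimate.

```python
import math

# Invented annual-maximum rainfall depths (mm) for a single gauge.
annual_maxima = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0, 58.0, 49.0]

def gumbel_moments(sample):
    """Gumbel location (mu) and scale (beta) from sample mean and variance."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - 0.5772 * beta          # 0.5772 ~ Euler-Mascheroni constant
    return mu, beta

def gumbel_quantile(mu, beta, T):
    """Rainfall depth with return period T years: non-exceedance F = 1 - 1/T."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

mu, beta = gumbel_moments(annual_maxima)
print(round(gumbel_quantile(mu, beta, 100), 1))   # 100-year quantile → 81.9
```

A regional model, like the ANNs of the first Part, instead predicts mu and beta from morphoclimatic descriptors, so that quantiles can be estimated at ungauged sites where no annual-maximum series exists.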