19 results for Stack Overflow

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Publisher:

Abstract:

In order to protect river water quality, which in urban areas is strongly affected by both continuous and intermittent discharges, it is necessary to adopt measures to intercept and treat these polluted flows. During rain events in particular, river water quality is affected by the activation of combined sewer overflows (CSOs). Built to protect the sewer system and the wastewater treatment plant (WWTP) from the increased flows caused by heavy rains, CSOs divert excess flows to the receiving water body. On the basis of several scientific papers, as well as of direct evidence, demonstrating the detrimental effect of CSO discharges, the legislative framework has also moved towards a stream-standard point of view. The Water Framework Directive (WFD, 2000/60/EC) sets new goals for the quality of receiving waters, including groundwater, through an integrated immission/emission philosophy, in which emission limits are associated with effluent standards based on the receiving water characteristics and their specific use. For surface waters the objective is a “good” ecological and chemical quality status. A surface water is defined as being of good ecological quality if there is only a slight departure from the biological community that would be expected in conditions of minimal anthropogenic impact. Each Member State authority is responsible for preparing and implementing a River Basin Management Plan to achieve good ecological quality and comply with WFD requirements. To meet the WFD targets, and thus to improve urban receiving water quality, a CSO control strategy needs to be implemented. Temporarily storing the overflow (or at least part of it) in tanks and treating it in the WWTP after the end of the storm has shown good results in reducing the total pollutant mass spilled into the receiving river. The Italian State, in order to comply with the WFD, sets the general framework, and each Region has to adopt a Water Remediation Plan (PTA, Piano Tutela Acque) setting goals, methods and terms to improve river water quality. The Emilia-Romagna PTA sets a 25% reduction by 2008 and a 50% reduction by 2015 of the total pollutant masses delivered by CSO spills. In order to plan remediation actions, a deep insight into spill dynamics is thus of great importance. The present thesis investigates spill dynamics through both a numerical and an experimental approach. A four-month monitoring and sampling campaign was set up on the Bologna sewer network and on the Navile Channel, which is the WWTP receiving water and receives flows from up to 28 CSOs during rain events. In parallel, a full model of the sewer network was built with the commercial software InfoWorks CS and calibrated with the data from the monitoring and sampling campaign. Further model simulations were then used to look for interdependencies among spilled masses, rain characteristics and basin characteristics. The thesis can be seen as a basis for further insights and for planning remediation actions.
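As a back-of-the-envelope illustration of the storage-tank strategy described above, the sketch below (a hypothetical mass balance, not taken from the thesis; all flows, volumes and concentrations are made up) computes how much pollutant mass a CSO still spills once part of the excess flow is captured in a tank and later sent to the WWTP.

```python
# Illustrative sketch (not from the thesis): a simple mass balance showing how
# a storage tank reduces the pollutant mass spilled by a CSO during a storm.
# All names and values are hypothetical.

def cso_spill(inflow, q_interceptor, tank_volume, concentration, dt=60.0):
    """Return (spilled_mass, stored_volume) for one storm event.

    inflow        -- list of inflow rates to the CSO structure [m^3/s]
    q_interceptor -- continuation flow conveyed to the WWTP [m^3/s]
    tank_volume   -- storage tank capacity [m^3]
    concentration -- mean pollutant concentration in the overflow [kg/m^3]
    dt            -- time step [s]
    """
    stored = 0.0
    spilled_volume = 0.0
    for q in inflow:
        excess = max(0.0, q - q_interceptor) * dt  # volume not conveyed to the WWTP
        to_tank = min(excess, tank_volume - stored)  # fill the tank first
        stored += to_tank
        spilled_volume += excess - to_tank           # remainder goes to the river
    return spilled_volume * concentration, stored

# Example: triangular hydrograph peaking at 1.2 m^3/s, 0.3 m^3/s interceptor,
# a 500 m^3 tank, and a 0.2 kg/m^3 mean TSS concentration.
hydrograph = [1.2 * min(t, 60 - t) / 30 for t in range(61)]
mass, stored = cso_spill(hydrograph, 0.3, 500.0, 0.2)
print(f"spilled mass: {mass:.1f} kg, stored volume: {stored:.0f} m^3")
```

Running the same event with tank_volume set to 0 gives the uncontrolled spill, so the two runs quantify the mass reduction achieved by the tank.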

Relevance:

10.00%

Publisher:

Abstract:

(U-Th)/He and fission-track analyses of apatite along deep-seated tunnels crossing high-relief mountain ranges offer the opportunity to investigate climatic and tectonic forcing on topographic evolution. In this study, the thermochronologic analysis of a large set of samples collected in the Simplon railway tunnel (western-central Alps; Italy and Switzerland) and along its surface trace, coupled with kinematic and structural analysis of the major fault zones intersecting the tunnel, constrains the phenomena that controlled the topographic and structural evolution during the latest stage of exhumation of the Simplon Massif, and the timing over which they operated. The study area is located at the western margin of the Lepontine metamorphic dome, where a complex nappe-stack pertaining to the Penninic and Ultrahelvetic domains experienced fast exhumation from the latest Oligocene onward. The exhumation was mainly accommodated by a west-dipping low-angle detachment (the Simplon Fault Zone) located just 8 km to the west of the tunnel. However, along the section itself several faults related to two principal phases, both with important dip-slip kinematics, have been detected. Cooling rates derived from our thermochronological data vary from about 10 °C/Ma at about 10 Ma to about 35 °C/Ma in the last 5 Ma. This increase in cooling rate corresponds to the most important climatic change recorded in the northern hemisphere in the last 10 Ma, i.e. the shift to wetter conditions at the end of the Messinian salinity crisis and the inception of glacial cycles in the northern hemisphere. In addition, the (U-Th)/He and fission-track age patterns lack any significant correlation with the topography, suggesting that the present-day relief morphology is the result of recent erosional dynamics. In more detail, the (U-Th)/He tunnel ages show a striking uniformity around 2 Ma, whereas cooling rates calculated at 1 Ma increase towards the two major valleys. This indicates a focusing of erosive processes in the valleys, which led to the shaping of the present-day topography. Structural analysis documents the presence of two phases of brittle deformation postdating the metamorphic phases in the area. The first one is directly related to the last phase of activity along the Simplon Fault Zone and is characterized by extension towards the SW and vertical shortening. The younger one is characterized by extension towards the NW and horizontal shortening along the NE-SW direction. Structures related to the first phase of brittle deformation generate important variations in the older ages of the dataset, down to 3 Ma, suggesting that tectonics controlled rock exhumation up to that time. Structures related to the second phase also generate some variations in the younger age dataset, highlighting the activity of faults bordering the massif and suggesting continuous activity even after 2 Ma. However, most of the (U-Th)/He tunnel ages, varying only slightly around 2 Ma, document that the Simplon area has experienced primarily erosional exhumation in this time span. In conclusion, all our data suggest that in the central Italian Alps the climatic signal gradually overrode the tectonic effects after about 5 Ma, as a consequence of the climatic instability that started at the end of the Messinian salinity crisis and was enhanced by the onset of glaciations in the northern hemisphere.
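As a first-order illustration of how such cooling rates are derived, the sketch below converts a pair of ages from the same sample into a mean cooling rate. The closure temperatures are indicative textbook values (roughly 110 °C for apatite fission-track, 70 °C for apatite (U-Th)/He), and the example ages are hypothetical, not the calibrations or data used in the thesis.

```python
# Illustrative sketch: first-order cooling rate from a pair of thermochronometers
# in the same sample, using nominal closure temperatures. Closure temperatures
# and ages below are indicative examples, not data from the thesis.

AFT_TC = 110.0   # apatite fission-track closure temperature [deg C], approximate
AHE_TC = 70.0    # apatite (U-Th)/He closure temperature [deg C], approximate

def cooling_rate(aft_age_ma, ahe_age_ma):
    """Mean cooling rate [deg C/Ma] between AFT and AHe closure."""
    return (AFT_TC - AHE_TC) / (aft_age_ma - ahe_age_ma)

# Example: AFT age of 5 Ma and AHe age of 2 Ma -> (110 - 70) / (5 - 2) deg C/Ma
print(f"{cooling_rate(5.0, 2.0):.1f} deg C/Ma")
```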

Relevance:

10.00%

Publisher:

Abstract:

Supramolecular self-assembly represents a key technology for the spontaneous construction of nanoarchitectures and for the fabrication of materials with enhanced physical and chemical properties. In addition, a significant asset of supramolecular self-assemblies rests on their reversible formation, thanks to the kinetic lability of their non-covalent interactions. This dynamic nature can be exploited for the development of “self-healing” and “smart” materials, allowing their functional properties to be tuned by various external factors. One particularly intriguing objective in the field is to reach a high level of control over the shape and size of the supramolecular architectures, in order to produce well-defined functional nanostructures by rational design. In this direction, many investigations have been pursued toward the construction of self-assembled objects from numerous low-molecular-weight scaffolds, for instance by exploiting multiple directional hydrogen-bonding interactions. In particular, nucleobases have been used as supramolecular synthons as a result of their efficiency in coding for non-covalent interaction motifs. Among nucleobases, guanine represents the most versatile one, because of its different H-bond donor and acceptor sites, which display self-complementary patterns of interactions. Interestingly, and depending on the environmental conditions, guanosine derivatives can form various types of structures. Most of the supramolecular architectures reported in this Thesis from guanosine derivatives require the presence of a cation which stabilizes, via dipole-ion interactions, the macrocyclic G-quartet that can, in turn, stack in columnar G-quadruplex arrangements. In addition, in the absence of cations, guanosine can polymerize via hydrogen bonding to give a variety of supramolecular networks including linear ribbons. This complex supramolecular behavior makes guanine-guanine interactions the most interesting among all the homonucleobases studied. They have been the subject of intense investigation in various areas ranging from structural biology and medicinal chemistry – guanine-rich sequences are abundant in telomeric ends of chromosomes and in promoter regions of DNA, and are capable of forming G-quartet based structures – to material science and nanotechnology. This Thesis, organized into five Chapters, mainly describes some recent advances in the form and function provided by the self-assembly of guanine-based systems. More generally, Chapter 4 will focus on the construction of supramolecular self-assemblies whose self-assembling process and self-assembled architectures can be controlled by light as an external stimulus. Chapter 1 will describe some of the many recent studies of G-quartets in the general area of nanoscience. Natural G-quadruplexes can be useful motifs to build new structures and biomaterials such as self-assembled nanomachines, biosensors, therapeutic aptamers and catalysts. Chapters 2-4 present the core concept of this PhD Thesis, i.e. the supramolecular organization of lipophilic guanosine derivatives with photo- or chemical addressability. Chapter 2 will mainly focus on the use of cation-templated guanosine derivatives as potential scaffolds for designing functional materials with tailored physical properties, showing a new way to control the bottom-up realization of well-defined nanoarchitectures. In section 2.6.7, the self-assembly properties of compound 28a may be considered an example of open-shell moieties ordered by a supramolecular guanosine architecture and showing a new (magnetic) property. Chapter 3 will report on ribbon-like structures, supramolecular architectures formed by guanosine derivatives that may be of interest for the fabrication of molecular nanowires within the framework of future molecular electronics applications. In section 3.4 we investigate the supramolecular polymerization of derivatives dG 1 and G 30 by light scattering techniques and TEM experiments. The data obtained reveal the presence of several levels of organization due to the hierarchical self-assembly of the guanosine units in ribbons that in turn aggregate into fibrillar or lamellar soft structures. The elucidation of these structures furnishes an explanation for the physical behaviour of guanosine units, which display organogelator properties. Chapter 4 will describe photoresponsive self-assembling systems. Numerous research examples have demonstrated that the use of photochromic molecules in supramolecular self-assemblies is the most reasonable method to noninvasively manipulate their degree of aggregation and their supramolecular architectures. In section 4.4 we report on the photocontrolled self-assembly of the modified guanosine nucleobase E-42: by introducing a photoactive moiety at C8 it is possible to exert photocontrol over the self-assembly of the molecule, where the existence of G-quartets can be alternately switched on and off. In section 4.5 we focus on the use of cyclodextrins as photoresponsive host-guest assemblies: αCD–azobenzene conjugates 47-48 (section 4.5.3) are synthesized in order to obtain a photoresponsive system exhibiting a finely photocontrollable degree of aggregation and self-assembled architecture. Finally, Chapter 5 contains the experimental protocols used for the research described in Chapters 2-4.

Relevance:

10.00%

Publisher:

Abstract:

The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from the collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity achievable with a physical layer interleaver of reasonable size, a constraint imposed to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), provide intrinsically longer protection against long fading events. Furthermore, being implemented at upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be foreseen. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder), and we analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule which allows performance and latency to be traded off.
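To make the notion of an ideal packet-layer code concrete, the sketch below (an assumed textbook model, not the thesis's analytical framework) computes the decoding failure probability of an (n, k) Maximum Distance Separable code over a memoryless packet erasure channel: decoding succeeds whenever at least k of the n transmitted packets arrive.

```python
# Illustrative sketch (assumed model, not from the thesis): an ideal (n, k) MDS
# packet erasure code recovers the k source packets whenever at least k of the
# n sent packets arrive. For i.i.d. packet erasures with probability p, the
# decoding failure probability is a binomial tail.

from math import comb

def mds_failure_probability(n, k, p):
    """P(fewer than k of n packets received) with i.i.d. erasure probability p."""
    return sum(comb(n, r) * (1 - p) ** r * p ** (n - r) for r in range(k))

# Example: k = 100 source packets, 20% redundancy (n = 120), 10% packet loss.
print(f"{mds_failure_probability(120, 100, 0.10):.2e}")
```

Real Land Mobile Satellite channels exhibit correlated erasures, which is precisely why the longer codeword duration of packet layer codes matters; the i.i.d. assumption here only serves to illustrate the mechanism.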

Relevance:

10.00%

Publisher:

Abstract:

To continuously improve the performance of metal-oxide-semiconductor field-effect transistors (MOSFETs), innovative device architectures, gate stack engineering and mobility enhancement techniques are under investigation. In this framework, new physics-based models for Technology Computer-Aided Design (TCAD) simulation tools are needed to accurately predict the performance of upcoming nanoscale devices and to provide guidelines for their optimization. In this thesis, advanced physically-based mobility models for ultrathin body (UTB) devices with either planar or vertical architectures, such as single-gate silicon-on-insulator (SOI) field-effect transistors (FETs), double-gate FETs, FinFETs and silicon nanowire FETs, integrating strain technology and high-κ gate stacks, are presented. The effective mobility of the two-dimensional electron/hole gas in a UTB FET channel is calculated taking into account its tensorial nature and quantization effects. All the scattering mechanisms relevant for thin silicon films and for high-κ dielectrics and metal gates have been addressed and modeled for UTB FETs on differently oriented substrates. The effects of mechanical stress on (100) and (110) silicon band structures have been modeled for a generic stress configuration. Performance will also derive from heterogeneity, that is, from the increasing diversity of functions integrated on complementary metal-oxide-semiconductor (CMOS) platforms. For example, new architectural concepts are of interest not only to extend the FET scaling process, but also to develop innovative sensor applications. Benefiting from properties like a large surface-to-volume ratio and extreme sensitivity to surface modifications, silicon-nanowire-based sensors are gaining special attention in research. In this thesis, a comprehensive analysis of the physical effects playing a role in the detection of gas molecules is carried out by TCAD simulations combined with interface characterization techniques. The complex interaction of charge transport in silicon nanowires of different dimensions with interface trap states and remote charges is addressed to correctly reproduce the experimental results of recently fabricated gas nanosensors.
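The thesis computes mobility rigorously from the quantized two-dimensional carrier gas; as a far simpler first-order illustration of how independent scattering mechanisms are commonly combined, the sketch below applies Matthiessen's rule with placeholder mobility values (not calibrated data from the thesis).

```python
# Illustrative sketch (assumed, not the thesis's model): combining the main
# scattering-limited mobility components of a thin-film channel via
# Matthiessen's rule, 1/mu_eff = sum_i 1/mu_i. Component values are placeholders.

def effective_mobility(*components_cm2_per_vs):
    """Matthiessen's rule for independent scattering mechanisms."""
    return 1.0 / sum(1.0 / mu for mu in components_cm2_per_vs)

mu_phonon = 400.0     # phonon-limited mobility [cm^2/Vs]
mu_roughness = 250.0  # surface-roughness-limited mobility [cm^2/Vs]
mu_coulomb = 600.0    # Coulomb / remote-charge-limited mobility [cm^2/Vs]

print(f"mu_eff = {effective_mobility(mu_phonon, mu_roughness, mu_coulomb):.0f} cm^2/Vs")
```

Matthiessen's rule is only an approximation (the mechanisms are not truly independent in a quantized channel), which is one reason full physics-based models of the kind developed in the thesis are needed.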

Relevance:

10.00%

Publisher:

Abstract:

As land is developed, the impervious surfaces that are created increase the amount of runoff during rainfall events, disrupting the natural hydrologic cycle and increasing both runoff volumes and pollutant loadings. Pollutants deposited on, or derived from activities on, the land surface (such as nutrients, sediment, heavy metals, hydrocarbons, gasoline additives, pathogens, deicers, herbicides and pesticides) will likely end up in stormwater runoff in some concentration. Several of these pollutants are particulate-bound, so sediment removal can clearly provide significant water-quality improvements, and knowledge of the ability of stormwater treatment devices to retain particulate matter is therefore important. For this reason, three different sediment-removal units have been tested in the laboratory. First, a roadside gully pot was tested under steady hydraulic conditions, varying the characteristics of the influent solids (diameter, particle size distribution and specific gravity). The efficiency in terms of particles retained was evaluated as a function of influent flow rate and particle characteristics; the results were compared to the efficiency predicted by an overflow rate model. Furthermore, the role of particle settling velocity in determining efficiency was investigated. After the experimental runs on the gully pot, a standard full-scale model of a hydrodynamic separator (HS) was tested under unsteady influent flow rate conditions and constant influent solids concentration. The results presented in this study illustrate that the particle separation efficiency of the unit is predominantly influenced by the operating flow rate, which strongly affects the particle and hydraulic residence times of the system. The efficiency data were compared to the results obtained from a modified overflow rate model; moreover, the residence time distribution was experimentally determined through tracer analyses at several steady flow rates. Finally, three experiments were performed for two different configurations of a full-scale model of a clarifier (linear and crenulated) under unsteady influent flow rate conditions and constant influent solids concentration. The results illustrate that the particle separation efficiency of this unit is predominantly influenced by the configuration of the unit itself. Turbidity measurements were compared with suspended sediment concentrations in order to find a correlation between the two values, which would allow the sediment concentration to be estimated simply by installing a turbidity probe.
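For reference, the sketch below shows the classical overflow rate model in its simplest form (the thesis uses modified and calibrated variants; the particle size distribution here is hypothetical): a particle class is removed in proportion to its settling velocity relative to the surface loading rate Q/A.

```python
# Illustrative sketch (classical ideal-settling form, not the thesis's
# calibrated model): removal of a particle class from its settling velocity
# v_s and the surface loading rate Q/A of the treatment unit.

def overflow_rate_efficiency(v_s, flow_rate, surface_area):
    """Ideal removal efficiency for one particle class (0..1)."""
    return min(1.0, v_s / (flow_rate / surface_area))

def overall_efficiency(psd, flow_rate, surface_area):
    """Weight each particle class by its mass fraction.

    psd -- list of (mass_fraction, settling_velocity [m/s]) pairs
    """
    return sum(f * overflow_rate_efficiency(v, flow_rate, surface_area)
               for f, v in psd)

# Hypothetical particle size distribution: fine, medium and coarse fractions.
psd = [(0.4, 1e-4), (0.4, 1e-3), (0.2, 1e-2)]
print(f"{overall_efficiency(psd, flow_rate=0.01, surface_area=2.0):.0%}")
```

The model makes explicit why efficiency drops as the operating flow rate rises: the surface loading rate Q/A grows while the settling velocities stay fixed.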

Relevance:

10.00%

Publisher:

Abstract:

Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act by considering input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data collection systems to socially-aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing works on Pervasive Sensing have focused only on limited portions of the whole stack of available devices and of the data they can use, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, one able to enhance frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis analyzes pervasive sensing systems to extract design guidelines as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios that present peculiar and distinct requirements, and different hardware and sensors. The ease of mapping from the proposed logical model to real implementations and the positive results of the performance campaigns prove the quality of the proposed approach and offer a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.

Relevance:

10.00%

Publisher:

Abstract:

A permutation is said to avoid a pattern if it does not contain any subsequence which is order-isomorphic to it. Donald Knuth, in the first volume of his celebrated book "The Art of Computer Programming", observed that the permutations that can be computed (or, equivalently, sorted) by certain data structures can be characterized in terms of pattern avoidance. In more recent years the topic has been reopened several times, though often in terms of sortable permutations rather than computable ones. The idea of sorting permutations by using one of Knuth's devices suggests looking for a deterministic procedure that decides, in linear time, whether there exists a sequence of operations able to convert a given permutation into the identity. In this thesis we show that, for the stack and the restricted deques, there exists a unique way to implement such a procedure. Moreover, we use these sorting procedures to create new sorting algorithms, and we prove some unexpected commutation properties between these procedures and the base step of bubblesort. We also show that the permutations that can be sorted by a combination of the base steps of bubblesort and its dual can be expressed, once again, in terms of pattern avoidance. In the final chapter we give an alternative proof of some enumerative results, in particular for the classes of permutations that can be sorted by the two restricted deques. It is well known that the permutations that can be sorted through a restricted deque are counted by the Schröder numbers. In the thesis, we show how the deterministic sorting procedures yield a bijection between sortable permutations and Schröder paths.
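For the stack, the deterministic linear-time procedure mentioned above is the classical greedy one; the sketch below (a standard textbook construction, not code from the thesis) decides stack-sortability, which by Knuth's result is equivalent to avoiding the pattern 231.

```python
# Illustrative sketch (standard construction, not code from the thesis): the
# greedy single-pass procedure deciding in linear time whether a permutation
# can be sorted with one stack. Knuth showed the sortable permutations are
# exactly those avoiding the pattern 231.

def stack_sortable(perm):
    """Return True iff perm (a permutation of 1..n) is sortable by one stack."""
    stack = []
    next_out = 1                      # smallest value not yet output
    for x in perm:
        # pop while the stack top is exactly the value the output needs next
        while stack and stack[-1] == next_out:
            stack.pop()
            next_out += 1
        if stack and stack[-1] < x:   # x would bury a smaller value: a 231 pattern
            return False
        stack.append(x)
    # drain the stack; it must come out in increasing order
    while stack and stack[-1] == next_out:
        stack.pop()
        next_out += 1
    return not stack

print(stack_sortable([3, 1, 2]))      # True  (312 avoids 231)
print(stack_sortable([2, 3, 1]))      # False (231 itself is not sortable)
```

The procedure is deterministic because the greedy choice is forced at every step: pop whenever the top is the next value needed, otherwise push.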

Relevance:

10.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, by using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four different models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain concerned by case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research relies on the previous efforts of the community in the field of legal knowledge representation and rule interchange for applications in the legal domain, in order to apply the theory to a set of real legal documents, stressing the OWL axiom definitions as much as possible so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially if combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge, and the argumentation produced, as expressed by the judicial text.
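To give a flavor of what such models look like in practice, the sketch below (hypothetical IRIs, class and property names; not the thesis's actual ontologies) uses rdflib to annotate a fragment of a judgement with document metadata and a link to an abstract legal concept.

```python
# Illustrative sketch (hypothetical IRIs and vocabulary, not the thesis's
# ontologies): annotating a fragment of a judgement with document metadata and
# a legal-concept link, in the spirit of the text-to-knowledge morphism
# described above. Uses rdflib.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/case-law#")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Document metadata: a judgement and one of its structural parts.
g.add((EX.judgement42, RDF.type, EX.Judgement))
g.add((EX.judgement42, EX.hasPart, EX.ruling42))
g.add((EX.ruling42, RDF.type, EX.Ruling))
g.add((EX.ruling42, EX.text, Literal("The appeal is dismissed.")))

# Bridge from the text to an abstract concept in the legal core ontology.
g.add((EX.ruling42, EX.expressesConcept, EX.Dismissal))
g.add((EX.Dismissal, RDFS.subClassOf, EX.JudicialDecision))

print(g.serialize(format="turtle"))
```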

Relevance:

10.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It concentrates on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. The second part of the thesis focuses on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, together with a template for a scalable architecture that integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
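As a small illustration of the data partitioning problem mentioned above, the sketch below (generic, not the runtime's actual code) reproduces the arithmetic of an OpenMP-style static schedule, a computation that must be cheap enough to run at every work-sharing construct on an embedded many-core.

```python
# Illustrative sketch (assumed, generic): how an OpenMP-style static schedule
# partitions loop iterations among workers. Chunk sizes differ by at most one,
# and the computation is O(workers) with integer arithmetic only.

def static_chunks(n_iterations, n_workers):
    """Return [start, end) iteration ranges, one per worker."""
    base, extra = divmod(n_iterations, n_workers)
    chunks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # first 'extra' workers get one more
        chunks.append((start, start + size))
        start += size
    return chunks

# Example: 10 iterations over 4 workers -> [(0, 3), (3, 6), (6, 8), (8, 10)]
print(static_chunks(10, 4))
```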

Relevance:

10.00%

Publisher:

Abstract:

The quench characteristics of second generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with the increasing knowledge of their manufacturing, which is reducing their cost, is pushing the use of this innovative material in numerous system applications, from high-field magnets for research to motors and generators, as well as cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads, and signal feedthroughs. Moreover, the data acquisition system and the software for critical current and quench measurements were developed. A 2D model was developed using the finite element code COMSOL Multiphysics. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the heat and electrical balance equations by introducing a material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against literature measurements, and against critical current and quench measurements made in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. This procedure allows the definition of a multi-scale hierarchical model, able to simulate the windings with different degrees of detail.
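The sketch below illustrates one self-consistent way the thickness-expansion trick could be implemented (a plausible compensation scheme under stated assumptions, not necessarily the exact one adopted in the thesis): the thickness is multiplied by a factor F, and the material properties are rescaled anisotropically so that per-unit-length resistance, heat capacity, thermal conductances and Joule heating are all preserved.

```python
# Illustrative sketch (a plausible compensation scheme, not necessarily the one
# adopted in the thesis): expand the tape thickness by F to ease meshing, then
# rescale properties anisotropically to preserve per-unit-length behavior.
# Material values below are indicative (copper-stabilizer-like), not thesis data.

def rescale_properties(props, F):
    """props: dict of real-tape properties; returns the scaled dict."""
    return {
        "thickness":      props["thickness"] * F,       # geometric expansion
        "sigma_parallel": props["sigma_parallel"] / F,  # keeps R per unit length
        "k_parallel":     props["k_parallel"] / F,      # keeps longitudinal conductance
        "k_transverse":   props["k_transverse"] * F,    # keeps transverse conductance
        "rho_c":          props["rho_c"] / F,           # keeps heat capacity per length
        "Jc":             props["Jc"] / F,              # keeps critical current Ic
    }

tape = {"thickness": 1e-6, "sigma_parallel": 5.8e7, "k_parallel": 400.0,
        "k_transverse": 400.0, "rho_c": 3.4e6, "Jc": 2.5e10}
print(rescale_properties(tape, F=100))
```

With these scalings the transverse diffusion time d²·(ρc)/k stays unchanged, since d² grows by F² while (ρc)/k shrinks by F², which is the point of the anisotropic compensation.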

Relevance:

10.00%

Publisher:

Abstract:

The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployment to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies in the avionics industry.

Relevance:

10.00%

Publisher:

Abstract:

Internet of Things systems are pervasive systems that have evolved from cyber-physical systems into large-scale systems. Due to the number of technologies involved, software development faces several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, which thus raise interoperability issues. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed over several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment on distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each one, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport and application protocol level, the second manages different data formats, while the third obtains optimal deployments. To realize the tools, we adopted a linguistic approach, i.e. we provide specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases showing that the tools can be used in practice and that they help achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.

Relevance:

10.00%

Publisher:

Abstract:

One of the main features of nineteenth-century fiction is the quasi-total disappearance of the epistolary novel that had had its heyday in the previous century. For this reason, some scholars have declared the “death” of the letter in literature after the transitional Romantic period. However, Victorian novels overflow with letters that are embedded, quoted in part, or described and commented on by narrators or characters. Even when its content is not revealed to the reader, the letter becomes a signifier loaded with meanings, also, and particularly so, when it is burnt, torn, hidden, found or buried. The Postal Reform of 1839-40 caused the number of letters sent every year in Britain to grow from 75 to 410 million in only 14 years, and the media campaign that supported it drew the attention of the population to the material aspects of this means of communication. Newspapers became more affordable too, and they promoted a taste for sensationalism that often involved the “spectacularization” of private correspondence. Starting from an excursus on the history of the letter aimed at identifying the key aspects of the genre, this work examines some real love correspondences of people belonging to different classes in the period from 1840 to the 1870s, and then analyses their fictional and pictorial counterparts. The general picture that emerges from this analysis is that of a Victorian society where letters were able to break down the boundaries between high and low forms of cultural expression and where, more than ever, letters were present in people's everyday lives as well as in the art and literature they enjoyed.

Relevance:

10.00%

Publisher:

Abstract:

Global warming and climate change have been among the most debated topics since the industrial revolution. The main contributor to global warming is carbon dioxide (CO2), which increases the temperature by trapping heat in the atmosphere. The atmospheric CO2 concentration had remained around 280 ppm for a long period before the industrial era, but it has increased dramatically since the industrial revolution, up to approximately 420 ppm. According to the Paris Agreement, the temperature increase must be kept below 2 °C, and preferably below 1.5 °C, to avoid reaching the tipping points of climate change. Keeping the temperature increase below this range requires solutions that reduce CO2 emissions, such as low-carbon systems and the transition from fossil fuels to renewable energy sources (RES). This thesis is devoted to the assessment of low-carbon systems and to the reduction of CO2 through the use of RES instead of fossil fuels. One of the most important aspects in defining the location and capacity of low-carbon systems is CO2 mass estimation. As mentioned, high-emission systems can be substituted by low-carbon systems. An example of a high-emission activity is dredging: its global CO2 emissions are relatively high and are growing with the growth of marine transport. Thus, an ejector system as an alternative to dredging is investigated in Chapter 2. The transition from fossil fuels to RES also requires solutions for the RES storage problem. One solution could be zero-emission fuels such as hydrogen. However, the production of hydrogen requires electricity, and electricity production emits a large amount of CO2. Therefore, the last three chapters are devoted to hydrogen generation via electrolysis, both under current conditions and under RES scenarios with varying cell characteristics and stack materials, as well as to hydrogen delivery.
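As a back-of-the-envelope illustration of why the carbon intensity of the electricity matters for electrolytic hydrogen, the sketch below uses typical figures (roughly 52 kWh of electricity per kg of H2 for a practical electrolyser; the grid intensities are indicative, not thesis data).

```python
# Illustrative back-of-the-envelope sketch (typical figures, not thesis data):
# the CO2 footprint of electrolytic hydrogen is dominated by the carbon
# intensity of the electricity that powers the electrolyser.

SPECIFIC_CONSUMPTION = 52.0   # kWh of electricity per kg of H2, typical electrolyser

def h2_carbon_footprint(grid_intensity_kg_per_kwh):
    """kg of CO2 emitted per kg of H2 produced."""
    return SPECIFIC_CONSUMPTION * grid_intensity_kg_per_kwh

for label, intensity in [("coal power", 0.90),
                         ("grid mix (indicative)", 0.25),
                         ("dedicated RES", 0.0)]:
    print(f"{label:24s} -> {h2_carbon_footprint(intensity):5.1f} kg CO2 / kg H2")
```

The spread of these results, from tens of kilograms of CO2 per kilogram of hydrogen down to essentially zero, is what motivates coupling electrolysis to RES rather than to the fossil-dominated grid.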