898 results for "Sistemi di rigenerazione pompe di calore alta efficienza" (high-efficiency heat-pump regeneration systems)


Relevance: 100.00%

Abstract:

Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload that often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes involving many actors (humans and machines), leading to relevant socio-economic benefits. Starting from these two premises, architectural and technological solutions enabling "environment-related cooperative digital services" are explored here. The proposed analysis starts from the consideration that our environment is a physical space in which diversity is a major value. On the other hand, diversity is detrimental to common technological solutions and is an obstacle to mutual understanding: an appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to supporting environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain, and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied beyond the representation of application-domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g., with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
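
For illustration only (not code from the thesis), the following minimal sketch shows the smart-space idea in its simplest form: a shared store of RDF-like triples that heterogeneous agents insert into, query, and subscribe to by pattern. All names and structure are hypothetical.

```python
# Minimal in-memory "smart space": agents insert (subject, predicate, object)
# triples and subscribe to patterns, where None acts as a wildcard.
# Hypothetical sketch; not the platform developed in the thesis.

from typing import Callable, Optional, Tuple

Triple = Tuple[str, str, str]
Pattern = Tuple[Optional[str], Optional[str], Optional[str]]

class SmartSpace:
    def __init__(self) -> None:
        self.triples: set[Triple] = set()
        self.subscriptions: list[Tuple[Pattern, Callable[[Triple], None]]] = []

    def insert(self, triple: Triple) -> None:
        self.triples.add(triple)
        for pattern, callback in self.subscriptions:
            if self._matches(pattern, triple):
                callback(triple)          # notify interested agents

    def query(self, pattern: Pattern) -> list[Triple]:
        return [t for t in self.triples if self._matches(pattern, t)]

    def subscribe(self, pattern: Pattern, callback: Callable[[Triple], None]) -> None:
        self.subscriptions.append((pattern, callback))

    @staticmethod
    def _matches(pattern: Pattern, triple: Triple) -> bool:
        return all(p is None or p == t for p, t in zip(pattern, triple))

space = SmartSpace()
space.subscribe(("room1", "hasTemperature", None),
                lambda t: print("heating agent notified:", t))
space.insert(("room1", "hasTemperature", "18"))   # triggers the subscription
print(space.query((None, "hasTemperature", None)))
```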

Relevance: 100.00%

Abstract:

Extrusion is a process used to form long products of constant cross-section, in a wide variety of shapes, from simple billets. Aluminum alloys are the materials most commonly processed in the extrusion industry thanks to their deformability and their wide field of applications, ranging from buildings to aerospace and from design to the automotive industry. These diverse applications imply different requirements that can be met by the wide range of alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic appearance. Whether one or the other is the critical aspect, both depend directly on the microstructure. The extrusion process is, moreover, marked by large deformations and complex strain gradients, making the control of microstructure evolution difficult; at present this control is not yet fully achieved. Nevertheless, Finite Element modeling has reached a maturity such that it can begin to be used as a tool for investigating and predicting microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core of the work was the development of specific tests to investigate microstructure evolution and to validate the model implemented in a commercial FE code. Alongside this, two essential activities were carried out for a correct calibration of the model, going beyond a simple search for fitting parameters, thus leading to the understanding and control of both the code and the process. In this direction, work was also devoted to building critical know-how in the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that analysing microstructure evolution in isolation, regardless of its relevance to the technological aspects of the process, would be of little use to industry as well as ineffective for the interpretation of the results.
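
The abstract does not state which microstructural model was implemented; purely as an illustration of the kind of relation such models use, a quantity commonly employed in hot-deformation microstructure modeling of aluminum alloys is the temperature-compensated strain rate (Zener-Hollomon parameter), often correlated empirically with the recrystallized grain size:

```latex
% Illustrative only, not necessarily the model adopted in the thesis:
% \dot{\varepsilon} = strain rate, Q = activation energy, R = gas constant, T = temperature.
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right),
\qquad d_{\mathrm{rex}} \propto Z^{-m}
```

where d_rex is the recrystallized grain size and m an empirical exponent fitted to experimental data.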

Relevance: 100.00%

Abstract:

The debate on the development of Bronze Age cultures in the territory of Emilia-Romagna is bringing renewed attention to the Romagna area. The investigations focused on the area between the Panaro river and the Adriatic Sea, corresponding to present-day Romagna and part of the lower Emilian plain. This was a strategic territory, a true socio-economic crossroads between the Terramare culture and the central-Italic Grotta Nuova cultures. This doctoral research has led to the reconstruction of the systems of management and exploitation of animal resources in Emilia-Romagna during the Bronze Age, with particular attention to defining the environmental carrying capacity of the different territories investigated and the ways they were exploited in relation to the rationalization of husbandry practices. The processing chains of primary and secondary animal products were studied in detail, thus defining the characteristics of the local palaeoeconomies within the evolution of Romagna during the Bronze Age. The research was based on a complete archaeozoological study of 13 recently investigated sites, distributed across the provinces of Bologna, Ferrara, Ravenna, Forlì/Cesena and Rimini, and on a complete revision of the archaeozoological evidence produced by previous studies. The analyses were not limited to species identification, but aimed at identifying and evaluating complex parameters in order to reconstruct the slaughtering strategies and the techniques of exploitation and butchery of the different animal groups. It was therefore possible to assess the ecological weight of herds and flocks on the territory and the economic and ecological impact of an increasingly systematic and rational husbandry, both from the point of view of the territorial organization of settlements and with regard to its repercussions on the management of agricultural and environmental resources in general.

Relevance: 100.00%

Abstract:

Although a large part of the world still lives at subsistence levels, the data at our disposal indicate that human activities are depleting the planet's environmental resources. The cause of this over-exploitation of resources lies in the unsustainable production and consumption patterns of developed countries. Concern about the consequences for the environment and the fight against climate change have placed environmental policies at the centre of international attention. The Kyoto Protocol and the European Commission have set greenhouse gas emission reduction targets of 12% by 2012 and 20% by 2020, respectively. Under the Kyoto Protocol, Italy's target is to reduce national greenhouse gas emissions by 6.5% with respect to 1990. Policies aimed at reducing greenhouse gas emissions generally target energy plants and transport. Little attention is paid to the agri-food chain, even though agriculture has a strong impact on the environment and recent studies estimate that about 50% of the food produced is lost or thrown away between production and consumption. In light of these data, the aim of my thesis work was to quantify agri-food waste and losses in Europe and in Italy and to estimate the associated environmental impact. The data collected in this thesis highlight the importance of improving the efficiency of the agri-food chain in order to reduce the national environmental impact and comply with international agreements on the fight against climate change.

Relevance: 100.00%

Abstract:

Modern version control systems such as "git" or "svn" are based on various algorithms for analysing the differences (known as diffing algorithms) between documents (known as versions). One of the most successful algorithms in this respect is the well-known Unix "diff". This program is able to detect the changes that must be applied to one document in order to obtain another, in terms of added or removed lines of text. The set of such changes is called a "delta". The growing demand for and use of semi-structured documents (in particular XML documents) by the computing community, especially on the web, has motivated research into more refined diffing algorithms that work better on this type of document. Several successful solutions have been discussed: high-performance algorithms capable of detecting differences more subtle than the mere addition or removal of text, such as the movement of whole nodes, their reordering, and even their encapsulation, and so on. However, these algorithms lack versatility. The encapsulation of a node may be considered too general (or too granular) a difference in certain contexts. In everyday practice, every sector, public or commercial, interested in detecting differences between documents is interested in identifying only a very specific subset of them: consider the Italian parliament, interested in the comparative analysis of legislative documents, as opposed to a hospital interested in the diagnostics related to a patient's clinical history. This thesis demonstrates how it is possible to develop an algorithm capable of detecting the differences between two semi-structured documents (in terms of the shortest sequence of changes needed to transform one into the other) that is parameterized with respect to the transformation functions operating on such documents. The essential definitions and the main results underlying the theory of differences are discussed, and it is shown how weaker assumptions lead to the non-computability of the diffing algorithm in question.
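
As a point of reference for the "delta" concept discussed above, here is a minimal line-based diff sketch in the spirit of Unix "diff", built on a longest-common-subsequence table. It is an illustration only; the thesis generalizes this idea to parameterizable transformation functions over semi-structured (XML) documents.

```python
# Minimal line-based diffing: compute a delta (added/removed lines) via an LCS table.
# Illustrative sketch, not the parameterized algorithm developed in the thesis.

def diff(a: list[str], b: list[str]) -> list[tuple[str, str]]:
    n, m = len(a), len(b)
    # lcs[i][j] = length of the longest common subsequence of a[i:] and b[j:]
    lcs = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            lcs[i][j] = (lcs[i + 1][j + 1] + 1 if a[i] == b[j]
                         else max(lcs[i + 1][j], lcs[i][j + 1]))
    delta, i, j = [], 0, 0
    while i < n and j < m:
        if a[i] == b[j]:
            i, j = i + 1, j + 1                  # unchanged line, not part of the delta
        elif lcs[i + 1][j] >= lcs[i][j + 1]:
            delta.append(("-", a[i])); i += 1    # line removed from the old version
        else:
            delta.append(("+", b[j])); j += 1    # line added in the new version
    delta.extend(("-", line) for line in a[i:])
    delta.extend(("+", line) for line in b[j:])
    return delta

old = ["art. 1", "art. 2", "art. 3"]
new = ["art. 1", "art. 2-bis", "art. 3"]
print(diff(old, new))   # [('-', 'art. 2'), ('+', 'art. 2-bis')]
```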

Relevance: 100.00%

Abstract:

The thesis aims at analysing the role of collective action as a viable alternative to the traditional forms of intervention in agriculture in order to encourage the provision of agri-environmental public goods. What are the main benefits of collective action, in terms of effectiveness and efficiency, compared to traditional market or public intervention policies? What are the drivers that encourage farmers to participate in collective action? To what extent is it possible to incorporate collective aspects into policies aimed at providing agri-environmental public goods? To address these research questions, the thesis is articulated on two levels: a theoretical analysis of the role of collective action in the provision of public goods, and a specific investigation of two local initiatives where a collective approach to the management of agri-environmental resources was successfully implemented. The first case study concerns a project named "Custodians of the Territory", developed by the local agency in Tuscany "Comunità Montana Media Valle del Serchio", which established an agreement with local farmers for the collective provision of environmental services related to the hydro-geological management of the district. The second case study relates to the territorial agri-environmental agreement trialled in Valdaso (Marche), where local farmers collectively adopted integrated pest management practices with the aim of reducing the environmental impact of their farming. The analysis of these initiatives, carried out through participatory methods (Rapid Rural Appraisal), allowed the development of a theoretical discussion on the role of innovative tools (such as co-production and co-management) in the provision of agri-environmental public goods. The case studies also provided some recommendations on the government intervention and policies needed to promote successful collective action for the provision of agri-environmental public goods.

Relevance: 100.00%

Abstract:

In the last few years, a new generation of Business Intelligence (BI) tools called BI 2.0 has emerged to meet the new and ambitious requirements of business users. BI 2.0 not only introduces brand new topics, but in some cases re-examines past challenges from new perspectives, following market changes and needs. In this context, the term pervasive BI has gained increasing interest as an innovative and forward-looking perspective. This thesis investigates three different aspects of pervasive BI: personalization, timeliness, and integration. Personalization refers to the capacity of BI tools to customize query results according to the user who consumes them, facilitating the fruition of BI information by different types of users (e.g., front-line employees, suppliers, customers, or business partners). In this direction, the thesis proposes a model for On-Line Analytical Processing (OLAP) query personalization that reduces the query result to the information most relevant to the specific user. Timeliness refers to the timely provision of business information for decision-making. In this direction, the thesis defines a new Data Warehouse (DW) methodology, Four-Wheel-Drive (4WD), that combines traditional development approaches with agile methods; the aim is to accelerate project development and reduce software costs, so as to decrease the number of DW project failures and favour the penetration of BI tools even in small and medium-sized companies. Integration refers to the ability of BI tools to allow users to access information wherever it can be found, using the device they prefer. To this end, the thesis proposes the Business Intelligence Network (BIN), a peer-to-peer data warehousing architecture in which a user can formulate an OLAP query on their own system and retrieve relevant information from both the local system and the DWs of the network, while preserving autonomy and independence.
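
To make the personalization idea concrete, here is a toy sketch of ranking and pruning an OLAP query result according to a user profile. The data structures and scoring rule are hypothetical placeholders, not the personalization model proposed in the thesis.

```python
# Toy OLAP result personalization: rank the cells of a query result by how well
# their coordinates match the user's profile, then keep only the top-k cells.
# Hypothetical structure for illustration only.

def personalize(result_cells, user_profile, k=3):
    """result_cells: list of dicts like {"region": "EU", "product": "bikes", "sales": 120}
    user_profile: preferred attribute values, e.g. {"region": "EU"}."""
    def relevance(cell):
        return sum(1 for attr, preferred in user_profile.items()
                   if cell.get(attr) == preferred)
    ranked = sorted(result_cells, key=relevance, reverse=True)
    return ranked[:k]   # reduce the result to the cells most relevant to this user

cells = [
    {"region": "EU", "product": "bikes", "sales": 120},
    {"region": "US", "product": "bikes", "sales": 300},
    {"region": "EU", "product": "cars",  "sales": 80},
]
print(personalize(cells, {"region": "EU"}, k=2))
```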

Relevance: 100.00%

Abstract:

The realization of non-classical states of the electromagnetic field and of spin systems has been a driving force for research, both theoretical and experimental, for at least thirty years. The study of cold atoms in dipole traps brings this goal closer, while also offering the possibility of performing experiments on Bose-Einstein condensates of interest for atom interferometry. Protecting the coherence of a macroscopic spin system by means of feedback is, in turn, a goal that could lead to major developments in metrology and quantum information. An introduction is given to two types of measurement not covered in standard university curricula: quantum non-demolition (QND) measurement and weak measurement. Both are exploited in the context of radiation-matter interaction with few photons or few atoms (cavity QED and atom boxes). A treatment of dipole traps for neutral atoms and of the common cooling methods is needed to introduce the BIARO experiment (a French acronym for Bose-Einstein condensate for Atomic Interferometry in a high-finesse Optical Resonator), which pursues metrology through the use of Bose-Einstein condensates and feedback systems. The design, realization and characterization of a servo controller for stabilizing the optical power of a laser is described. The device is needed to compensate the differential light shift induced by a 1550 nm laser beam used to create a dipole trap for rubidium atoms. This compensation plays an essential role in improving the QND measurements needed, in a feedback scheme recently realized, to maintain coherence in collective spin systems.
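
For illustration of the control principle behind a power-stabilization servo (not the actual analog controller built in the experiment), the following sketch runs a discrete-time PI loop against a trivially simulated laser whose output drifts. Gains, names and the plant model are hypothetical.

```python
# Generic discrete-time PI loop for stabilizing an optical power level measured by a
# photodiode and corrected through an actuator drive. Illustrative sketch only; gains
# and the plant are placeholders, not the servo controller realized in the thesis.

def pi_controller(setpoint, read_power, apply_drive, kp=0.5, ki=0.1, dt=1e-5, steps=1000):
    integral = 0.0
    drive = 0.0
    for _ in range(steps):
        error = setpoint - read_power()        # deviation from the target power
        integral += error * dt                 # accumulates slow drifts (e.g. thermal)
        drive += kp * error + ki * integral    # incremental PI correction to the drive
        apply_drive(drive)
    return drive

class FakeLaser:
    """Tiny simulated plant: output power = drive level + a constant drift."""
    def __init__(self):
        self.drive, self.drift = 0.0, 0.2
    def read_power(self):
        return self.drive + self.drift
    def apply_drive(self, d):
        self.drive = d

laser = FakeLaser()
pi_controller(1.0, laser.read_power, laser.apply_drive)
print(round(laser.read_power(), 3))   # converges toward the 1.0 setpoint
```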

Relevance: 100.00%

Abstract:

This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities to be indefinitely repeated, subject to precedence and resource-capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-Flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for computing a maximum-throughput mapping of applications specified as SDFGs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
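
To fix ideas about the modular precedence relation mentioned above, here is a minimal feasibility check written as plain code. It illustrates only the constraint's semantics (start times compared modulo the period, with an iteration-offset variable), not the filtering algorithm developed in the thesis.

```python
# Modular precedence check for cyclic scheduling: activity j must start at least d_i
# time units after activity i, allowing j's occurrence to belong to a later iteration.
# k_ij counts how many periods separate the two occurrences. Illustration only.

def modular_precedence_ok(start_i, dur_i, start_j, k_ij, period):
    """True if start_j + k_ij * period >= start_i + dur_i, i.e. the precedence
    i -> j holds once the schedule is repeated every `period` time units."""
    return start_j + k_ij * period >= start_i + dur_i

# Example: period 10, activity i starts at 8 and lasts 4; activity j starts at time 2
# of the *next* iteration (k_ij = 1), i.e. at absolute time 12 >= 8 + 4.
print(modular_precedence_ok(start_i=8, dur_i=4, start_j=2, k_ij=1, period=10))  # True
```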

Relevance: 100.00%

Abstract:

The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit the power and cost limit. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among these new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast of phase-change materials, offers higher density than DRAM and can help increase the main memory capacity of future systems while remaining within cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by a negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy, controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
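
Purely as an illustration of the "two contributions" structure described above (and not the calibrated model of the thesis), the sketch below sums a field-activated, sinh-like hopping term and a roughly ohmic band-electron term; all functional forms and parameter values are generic placeholders.

```python
# Illustrative two-contribution conduction sketch for amorphous GST:
# total current = hopping of trapped electrons (field-activated, sinh-like)
#               + drift of band electrons in extended states (roughly ohmic).
# Forms and parameters are placeholders, not the thesis's calibrated model.

import math

def current(voltage, thickness=50e-9, i_hop0=1e-9, dz=5e-9, kT=0.0259, i_band0=1e-12):
    field_term = voltage * dz / (2 * kT * thickness)   # barrier lowering along one hop
    i_hop = i_hop0 * math.sinh(field_term)             # trapped-electron hopping term
    i_band = i_band0 * voltage                          # band (extended-state) electrons
    return i_hop + i_band

for v in (0.1, 0.5, 1.0):
    print(f"V = {v:.1f} V  ->  I = {current(v):.3e} A")   # strongly super-linear I(V)
```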

Relevance: 100.00%

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
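
As a point of reference for the role such a structure plays in tick-less timekeeping, here is a plain array-backed binary min-heap keyed by expiration time: the next timer to fire is always at the root. This toy version only pushes and pops; the ABH described in the thesis is a pointer-based, addressable variant (supporting efficient removal of arbitrary timers), which this sketch does not reproduce.

```python
# Array-backed binary min-heap of (expiration_time, timer_id) pairs, illustrating
# the data structure behind tick-less high-resolution timer management.

class TimerHeap:
    def __init__(self):
        self.heap = []                      # list of (expiration_time, timer_id)

    def push(self, expiration, timer_id):
        self.heap.append((expiration, timer_id))
        i = len(self.heap) - 1
        while i > 0 and self.heap[i] < self.heap[(i - 1) // 2]:       # sift up
            parent = (i - 1) // 2
            self.heap[i], self.heap[parent] = self.heap[parent], self.heap[i]
            i = parent

    def pop_next(self):
        root, last = self.heap[0], self.heap.pop()
        if self.heap:
            self.heap[0] = last
            i = 0
            while True:                                               # sift down
                child = 2 * i + 1
                if child >= len(self.heap):
                    break
                if child + 1 < len(self.heap) and self.heap[child + 1] < self.heap[child]:
                    child += 1
                if self.heap[i] <= self.heap[child]:
                    break
                self.heap[i], self.heap[child] = self.heap[child], self.heap[i]
                i = child
        return root                         # the earliest-expiring timer

timers = TimerHeap()
for t, name in [(120, "t1"), (40, "t2"), (75, "t3")]:
    timers.push(t, name)
print(timers.pop_next())   # (40, 't2') fires first
```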

Relevance: 100.00%

Abstract:

The purpose of this work is to analyse the current state of energy storage as applied to wind energy. After an introduction presenting why energy storage systems are needed and what their benefits are, a case study of a typical wind farm is proposed: starting from the production potential of the plant and from the function the storage system must perform, the most suitable system is selected and sized.
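
To illustrate the kind of sizing logic such a case study involves (the figures below are purely hypothetical and are not taken from the thesis), a back-of-the-envelope calculation might look like this:

```python
# Back-of-the-envelope sizing of a storage system coupled to a wind plant: the storage
# must absorb the surplus between average production and what the grid can accept,
# for a given number of hours per day. All numbers are hypothetical placeholders.

rated_power_mw = 20.0        # wind plant rated power
capacity_factor = 0.30       # average fraction of rated power actually produced
grid_limit_mw = 4.0          # power the grid can absorb during curtailment hours
curtailment_hours = 6.0      # daily hours during which the surplus must be stored
round_trip_eff = 0.75        # round-trip efficiency of the chosen storage technology

avg_production_mw = rated_power_mw * capacity_factor
surplus_mw = max(avg_production_mw - grid_limit_mw, 0.0)
energy_to_shift_mwh = surplus_mw * curtailment_hours
storage_capacity_mwh = energy_to_shift_mwh / round_trip_eff   # oversize for losses

print(f"surplus power: {surplus_mw:.1f} MW")
print(f"energy to shift per day: {energy_to_shift_mwh:.1f} MWh")
print(f"required storage capacity: {storage_capacity_mwh:.1f} MWh")
```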

Relevance: 100.00%

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home, and in every pocket. As a consequence, in recent years software has been undergoing an epochal shift toward concurrency, distribution, and interaction, driven by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level, general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development. Using the expertise matured during the background-construction phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
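
To give a flavour of the abstraction level that agent-oriented programming adds on top of plain actors (goals, beliefs, and deliberation rather than bare message handlers), here is a toy sense-deliberate-act loop in Python. It is purely illustrative and is not simpAL code, which has its own language constructs.

```python
# Toy sense-deliberate-act loop illustrating the agent abstraction: a goal the agent
# is committed to, beliefs updated from percepts, and actions chosen accordingly.
# Illustrative Python only, not simpAL.

class ThermostatAgent:
    def __init__(self, target_temp):
        self.goal = target_temp           # the task the agent is committed to
        self.beliefs = {"temp": None}     # what the agent currently believes

    def perceive(self, percept):
        self.beliefs["temp"] = percept    # update beliefs from the environment

    def deliberate(self):
        temp = self.beliefs["temp"]
        if temp is None:
            return "wait"
        if temp < self.goal - 0.5:
            return "heat_on"
        if temp > self.goal + 0.5:
            return "heat_off"
        return "idle"

agent = ThermostatAgent(target_temp=20.0)
for reading in (17.0, 19.9, 21.2):
    agent.perceive(reading)
    print(reading, "->", agent.deliberate())
```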

Relevance: 100.00%

Abstract:

A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model was modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests were performed: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure was developed to allow the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
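
The stiffness problem described above can be illustrated with a minimal chamber-pressure model, dp/dt = (β/V)(Q_in − Q_out), integrated by explicit Euler: a large bulk modulus β makes the dynamics so fast that a real-time-sized step blows up, which is why the hydraulic model had to be simplified. All parameter values below are placeholders, not taken from the thesis model.

```python
# Minimal sketch of stiff chamber-pressure dynamics: dp/dt = (beta / V) * (Q_in - Q_out),
# with Q_out proportional to pressure. Placeholders only; illustrates why a large bulk
# modulus conflicts with a real-time-sized integration step.

beta = 1.4e9          # oil bulk modulus [Pa] (large -> very fast pressure dynamics)
volume = 1.0e-6       # chamber volume [m^3]
q_in = 1.0e-5         # inflow from the valve [m^3/s]
k_leak = 5.0e-12      # outflow coefficient: Q_out = k_leak * p [m^3/(s*Pa)]

def simulate(dt, t_end=0.02):
    p, t = 0.0, 0.0
    while t < t_end:
        dp = (beta / volume) * (q_in - k_leak * p)   # pressure build-up rate
        p += dp * dt                                  # explicit Euler step
        t += dt
    return p

# Steady state: q_in = k_leak * p  ->  p = 2.0e6 Pa (20 bar).
print(f"{simulate(dt=1e-6):.3e} Pa")   # small step: stable, settles near 2.0e6 Pa
print(f"{simulate(dt=1e-3):.3e} Pa")   # real-time-sized step: numerically unstable
```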

Relevance: 100.00%

Abstract:

The world energy situation, heavily dependent on fossil fuels, calls for a significant change of course towards energy sources that are more environmentally sustainable and that, at the same time, make it possible to avoid negotiations in competitive markets such as the oil market. An effective way to overcome the limits associated with the use of renewable sources is the use of energy storage systems; thanks to these systems, electrical energy can be converted into other forms of energy (mechanical, chemical, etc.) so that it can be stored and preserved until the moment of use. This work presents the main technologies for the storage of energy, describing the plants and the main advantages and drawbacks associated with each of them. The fundamental parameters for the analysis of the technologies are then established in order to determine, according to the functions to be performed, which ones are best suited to the purpose.