16 results for one-to-many mapping
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Research in art conservation has developed since the early 1950s, making a significant contribution to the conservation and restoration of cultural heritage artefacts. Indeed, only through profound knowledge of the nature and condition of the constituent materials can suitable conservation and restoration measures be adopted and preservation practices enhanced. The study of ancient artworks is particularly challenging, as they can be considered heterogeneous, multilayered systems in which numerous interactions between the different components, as well as degradation and ageing phenomena, take place. The difficulty of physically separating the different layers, owing to their thickness (1-200 µm), can result in the inaccurate attribution of the identified compounds to a specific layer. Such details can therefore only be analysed when the sample preparation method leaves the layer structure intact, for example by embedding cross sections in synthetic resins. Hence, spatially resolved analytical techniques are required not only to characterise the nature of the compounds exactly, but also to obtain precise chemical and physical information about ongoing changes. This thesis focuses on the application of FTIR microspectroscopic techniques to cultural heritage materials. The first section introduces the use of FTIR microscopy in conservation science, with particular attention to sampling criteria and sample preparation methods. The second section evaluates and validates different FTIR microscopic analytical methods applied to a range of art conservation issues that may be encountered with cultural heritage artefacts: the characterisation of the artistic execution technique (chapter II-1), studies of degradation phenomena (chapter II-2) and the evaluation of protective treatments (chapter II-3).
The third and last section is divided into three chapters that highlight recent developments in FTIR spectroscopy for the characterisation of paint cross sections, and in particular of thin organic layers: a newly developed preparation method based on embedding in infrared-transparent salts (chapter III-1), the new opportunities offered by macro-ATR imaging spectroscopy (chapter III-2) and the possibilities achieved with the different FTIR microspectroscopic techniques available today (chapter III-3). In chapter II-1, FTIR microspectroscopy as a molecular analysis is presented within an integrated approach alongside other analytical techniques. The proposed analytical sequence is optimised for the limited quantity of sample available, and this methodology makes it possible to identify the painting materials and to characterise the execution technique adopted and the state of conservation. Chapter II-2 describes the characterisation of degradation products by FTIR microscopy, since the investigation of the ageing processes encountered in old artefacts represents one of the most important issues in conservation research. Metal carboxylates resulting from the interaction between pigments and binding media are characterised using synthesised metal palmitates, and their formation is detected on copper-, zinc-, manganese- and lead-based pigments (the latter associated with lead carbonate) dispersed in either oil or egg tempera. Moreover, significant effects appear to occur with iron and cobalt (acceleration of triglyceride hydrolysis). Manganese carboxylates are also observed, for the first time, on sienna and umber paints. Finally, in chapter II-3, FTIR microscopy is combined with further elemental analyses to characterise and assess the performance and stability of newly developed protective treatments, which should better address conservation and restoration problems.
In the second part, chapter III-1 reports an innovative embedding system based on potassium bromide, focusing on the characterisation and localisation of organic substances in cross sections. Not only the identification but also the distribution of proteinaceous, lipidic or resinaceous materials is demonstrated directly on different paint cross sections, especially in thin layers on the order of 10 µm. Chapter III-2 describes the use of a conventional diamond ATR accessory coupled with a focal plane array detector to obtain chemical images of multilayered paint cross sections. Rapid and simple identification of the different compounds is achieved without the use of any infrared microscope objectives. Finally, the latest available FTIR techniques are compared in chapter III-3 for the characterisation of paint cross sections. Results are presented in terms of spatial resolution, data quality and chemical information obtained; in particular, a new FTIR microscope equipped with a linear array detector, which reduces the spatial resolution limit to approximately 5 µm, provides very promising results and may represent a good alternative to either mapping or imaging systems.
Abstract:
In this work, we discuss some theoretical topics related to many-body physics in ultracold atomic and molecular gases. First, we present a comparison between experimental data and theoretical predictions in the context of quantum emulators of quantum field theories, finding good agreement that supports the efficiency of such simulators. In the second and third parts, we investigate several many-body properties of atomic and molecular gases confined in one dimension.
Abstract:
This dissertation deals with the problems and the opportunities of a semiotic approach to perception. Is perception, seen as the ability to detect and articulate a coherent picture of the surrounding environment, describable in semiotic terms? Is it possible, for a discipline wary of any attempt to reduce semiotic meaning to a psychological, naturalised issue, to come to terms with the cognitive, automatic and genetically hard-wired specifics of our perceptive systems? In order to deal with perceptive signs, is it necessary to modify basic assumptions of semiotics, or can we simply extend the range of our conceptual instruments and definitions? And what if perception is a wholly different semiotic machinery, to be considered sui generis, but nonetheless interesting for a general theory of semiotics? By setting out the major ideas put forward by the main thinkers in the semiotic field, Mattia de Bernardis gives a comprehensive picture of the theoretical situation, adding to the classical dichotomy between structuralist and interpretative semiotics another distinction, that between homogeneist and heterogeneist theories of perception. Homogeneist semioticians see perception as one of many semiotic means of sign production, entirely similar to the others, while heterogeneist semioticians consider perceptive meaning as essentially different from ordinary semiotic meaning, so much so that it requires new methods and ideas to be analysed. The main example of the heterogeneist approach to perception in the semiotic literature, Umberto Eco's "primary semiosis", is then presented, critically examined and eventually rejected, and the homogeneist stance is affirmed as the most promising path towards a semiotic theory of perception.
Abstract:
If the historian's task is to understand the past as it was understood by the people who lived it, then it is perhaps not far-fetched to think that it is also necessary to communicate research results with the tools of one's own era, tools that shape the mentality of those who live in it. Emerging technologies, especially in the area of multimedia such as virtual reality, allow historians to communicate the experience of the past in multiple senses. How does history collaborate with information technology, in particular on the possibility of making virtual historical reconstructions, with relevant examples and reviews? What most concerns historians is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of knowing history that can be considered valid. In other words, is the emotion that navigating a 3D reality can arouse a means capable of transmitting knowledge? Or is the very idea we have of the past and of its study subtly changed the moment it is disseminated through 3D graphics? For some time, however, the discipline has been coming to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularisation of the past and by partial, unscientific popularisation of it. In a post-literate world we must begin to recognise that the visual culture in which we are immersed is changing our relationship with the past: this does not make the knowledge accumulated so far false, but it is necessary to acknowledge that there is more than one historical truth, sometimes written, sometimes visual. The computer has become a ubiquitous platform for the representation and dissemination of information, and the methods of interaction and representation are constantly evolving. It is along these two tracks that information technology offers its services to history.
The aim of this thesis is precisely to explore, through the use and testing of various tools and information technologies, how the past can be effectively narrated through three-dimensional objects and virtual environments, and how these, as characteristic elements of communication, can collaborate, in this particular case, with the discipline of history. This research reconstructs some lines of the history of the main factories active in Turin during the Second World War. Recalling the close relationship between structures and individuals, and in this city in particular between factory and workers' movement, it is inevitable to delve into the history of the Turin workers' movement, which during the Liberation struggle was a political and social actor of prime importance in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories were the main targets of the bombings, and it was in the factories that a war of liberation was fought between the working class and the factory and civic authorities. The factory becomes the place of the "usurpation of power" of which Weber speaks, the stage on which the various episodes of the war unfold: strikes, deportations, occupations... The model of the city presented here is not a simple visualisation but an information system in which the modelled reality is represented by objects that serve as the theatre for events with a precise chronological placement; within it, it is possible to select static renders (images), pre-computed films (animations) and interactively navigable scenarios, as well as to search bibliographic sources and scholars' commentary specifically linked to the event in question.
The objective of this work is to bring the historical disciplines and computer science into interaction, through several projects, across the various technological opportunities the latter offers. The reconstruction possibilities offered by 3D are thus placed at the service of research, offering an integral vision capable of bringing us closer to the reality of the period under consideration and channelling all the results into a single presentation platform. Dissemination: the "Mappa Informativa Multimediale Torino 1945" project. On a practical level, the project provides a navigable interface (Flash technology) representing the map of the city at the time, through which it is possible to see the places and times in which the Liberation took shape, both conceptually and practically. This interweaving of coordinates in space and time not only improves understanding of the phenomena but also creates greater interest in the subject through highly effective (and appealing) popularising tools, without losing sight of the need to validate the historical theses by serving as a teaching platform. Such a context requires an in-depth study of the historical events in order to reconstruct clearly a map of the city that is accurate both topographically and at the level of multimedia navigation. The preparation of the map had to follow the standards of the time, so the software solutions used were Adobe Illustrator for producing the topography and Macromedia Flash for creating the navigation interface. The descriptive database is of course consultable, being contained on the media support and fully annotated in the bibliography.
It is the continual evolution of information technology and the massive spread of computer use that is bringing about a substantial change in the study and learning of history; academic institutions and economic operators have taken on board the demand from users (teachers, students, cultural heritage professionals) for a wider dissemination of historical knowledge through its computerised representation. On the teaching front, reconstructing a historical reality with computer tools also allows non-historians to experience first-hand the problems of research, such as missing sources, gaps in the chronology and the assessment of the veracity of facts through evidence. Information technology permits a complete, unified and exhaustive vision of the past, channelling all the information onto a single platform and allowing even non-specialists to understand immediately what is being discussed. Even the best history book, by its nature, cannot do this, since it divides and organises information differently. In this way students are given the opportunity to learn through a representation different from those they are used to. The central premise of the project is that student learning outcomes can be improved if a concept or content is communicated through several channels of expression, in our case through text, images and a multimedia object. Teaching: the Conceria Fiorio is one of the symbolic places of the Turin Resistance. The project is a virtual-reality reconstruction of the Conceria Fiorio in Turin. The reconstruction serves to enrich historical culture both for those who produce it, through careful research of the sources, and for those who can then benefit from it, especially young people, who, attracted by the playful aspect of the reconstruction, learn more easily.
Building a 3D artefact gives students the basis for recognising and expressing the correct relationship between the model and the historical object. The stages of work through which the 3D reconstruction of the Conceria was achieved were: in-depth historical research based on sources, which may be archival documents or archaeological excavations, iconographic sources, cartographic sources, etc.; the modelling of the buildings on the basis of the historical research, to provide the polygonal geometric structure that allows three-dimensional navigation; and the realisation, through computer graphics tools, of the 3D navigation. Unreal Technology is the name of the graphics engine used in numerous commercial video games. One of the fundamental features of this product is a tool called the Unreal editor, with which it is possible to build virtual worlds, and it is this that was used for the project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine; the free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and view specific information about certain points of interest. Navigation is in first person; a process of "spectacularisation" of the environments visited, through period-appropriate furnishings, gives the user greater immersion, making the environment more credible and immediately readable. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without any programming being necessary. This engine is therefore particularly suited to the rapid production of prototypes of reasonable quality, although the presence of a certain number of bugs makes it partly unreliable.
Using a video-game editor for this reconstruction points to the possibility of its use in teaching: what 3D simulations allow in this specific case is to let students experience the work of historical reconstruction, with all the problems the historian faces in recreating the past. For historians, this work is intended as a step towards building a broader expressive repertoire that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes those engaged in teaching sceptical, but the experience of projects developed elsewhere, above all abroad, shows that it is a good investment. The fact that a software house that creates a hugely popular video game includes in its product a set of tools enabling users to create their own worlds to play in is symptomatic of the fact that the computer literacy of average users is growing ever more rapidly, and that using an editor such as the Unreal Engine's will in future be an activity within the reach of an ever wider public. This puts us in a position to design more immersive teaching modules, in which the experience of researching and reconstructing the past is interwoven with the more traditional study of the events of a given era. Interactive virtual worlds are often described as the key cultural form of the twenty-first century, as cinema was for the twentieth. The aim of this work has been to suggest that there are great opportunities for historians in using 3D objects and environments, and that they must seize them. Consider the fact that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take at the moment they are to be disseminated.
A historical analysis carried out superficially or on mistaken premises can nevertheless circulate and gain credit in many circles if it is disseminated through attractive, modern means. That is why it does not pay to bury a good piece of work in some library, waiting for someone to discover it; that is why historians must not ignore 3D. Our ability, as scholars and students, to perceive important ideas and trends often depends on the methods we use to represent data and evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. A historical reconstruction can be very useful from an educational point of view not only for those who visit it but also for those who build it: the research phase necessary for the reconstruction cannot but enrich the cultural background of the developer. Conclusions: the most important outcome has been the opportunity to gain experience in using media of this kind to narrate and make known the past. Reversing the cognitive paradigm I had learned in my humanities studies, I sought to derive what we might call "universal laws" from the objective data that emerged from these experiments. From an epistemological point of view, computing, with its capacity to manage impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has moved in this direction, seeking to learn and use current tools that will have an ever greater presence in communication (including scientific communication) in the future, and that are the communication media of choice for certain age groups (adolescents).
Pushing the terms to the extreme, we might say that the challenge that visual culture today poses to the traditional methods of doing history is the same one that Herodotus and Thucydides posed to the tellers of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city. In a post-literate world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our ability as scholars and students to perceive important ideas and trends often depends on the methods we use to represent data and evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. The experience gathered in the preceding pages leads us to think that in the not too distant future a tool such as the computer will be the primary means of transmitting knowledge, and from a teaching point of view its interactivity engages students as no other modern communication medium can.
Abstract:
The experience of the void, essential to the production of forms and to their use, can be considered the basis of the activities involved in formative processes. Void and matter constitute the basic substances of architecture; their role in the definition of form, in its symbolic value and in its constructive methods determines the quality of space. This work investigates the character of space in the architecture of Moneo, interpreting the meaning of the void in Basque culture through a reading of the formal matrices in the work of Jorge Oteiza and Eduardo Chillida. The tie with Basque culture provides an interpretative key, making it possible to relate some of the theoretical principles expressed by Moneo on the relationship between place and time within a unique and specific vision of space. In analysing the process that determines the genesis of Moneo's architecture, a trajectory emerges whose direction is built on two pivots: on the one hand, architecture as an instrument of appropriation of the place, springing from a process of knowledge that relies on reading the relations that define the place and the resonances through which it can be measured; on the other, an architecture whose character is able to represent and extend the time in which it is conceived, through the autonomy conferred on it by its values. Following the trace identified by this hypothesis, which rests on the theories elaborated by Moneo, the investigation deepens the reading of the principles underlying the sculptural work of Oteiza and Chillida, shaped by a search around the theme of the void and its expression through form. This is instrumental to defining a specific field that makes it possible to interpret the character of the space underlying a vision of place and time akin to Moneo's sensibility and in some way not foreign to his cultural formation.
The years of his academic formation, during which Moneo came into contact with Basque artistic culture, seem to be an important period in the birth of the knowledge that would lead him to formulate theories tied to the relationship between time, place and architecture. The values expressed through the experimental work of Oteiza and Chillida during the 1950s are a valid basis for understanding such relationships. In tracing a profile of the figures of Oteiza and Chillida, without any claim that it is exhaustive for reading the complex historical period in which they worked, but with the need to place their work in context, I wish to highlight the important role played by the two artists within the Basque cultural sphere in which Moneo took his first steps. The tie that draws Moneo to Basque culture, following the personal trajectory of his formative experience, is interwoven with that of important figures of Spanish art and architecture. One of the most significant relationships was born precisely during the years of his academic formation, from 1958 to 1961, when he worked as a student in the professional office of the architect Francisco Sáenz de Oiza, who was then teaching architectural design at the ETSAM. In those years many Basque artists passed through Oiza's office, which enjoyed the important support of the industrialist and patron Juan Huarte Beaumont, introduced to him by Oteiza. The tie between Huarte and Oteiza was solid and continuous over the years and took concrete form in contributions to many of the initiatives that made Oteiza a promoter of Basque culture. In the four years of collaboration with Oiza, Moneo had the opportunity to remain in contact with an atmosphere permeated by constant research in the field of plastic art, and with figures directly connected to that atmosphere.
It was a period of great intensity both in the production and in the promotion of Basque art. The group exhibition "Blanco y Negro", held in 1959 at the Galería Darro in Madrid, is only one of the many occasions on which the work of Oteiza and Chillida was exhibited. The end of the fifties was a period of international recognition both for Chillida and for Oteiza. The decade of the fifties consecrated the hypothesis of a mythical past of the Basque people through the spread of studies carried out in the preceding years. Archaeological discoveries, joining a context already rich in signs of the prehistoric era, consolidated the awareness of a strong cultural identity. Oteiza, like Chillida and other contemporary artists, believed in a cosmogonic conception belonging to the Basques, connected to their matriarchal mythological past. The void in its meaning of absence, in Basque culture as in various archaic and oriental religions, is equivalent to spiritual fullness as the essential condition for the revelation of essence. Retracing the archaic origins of Basque culture, the deep meaning emerges that the void assumes as a key element in the religious interpretation of the passage from life to death. The symbolism is enriched with significant characters deriving from the fact that it is a chthonic cult: a representation of the earth as the place in which the divine manifests itself but also as a connection between the divine and the human, so that the manipulation of the matter of which the earth is composed is the tangible projection of man's continuous search for God. The search for equilibrium between empty and full, which also characterises the development of form in architecture, thus assumes in Basque culture a peculiar value that recurs as a constant in most of its plastic expressions, which in this context seem privileged with respect to other expressive forms.
Oteiza and Chillida developed two original points of view on the representation of the void through form. Both used rigorous systems of rules sensitive to physical principles and to the characters of matter. The ultimate aim of Oteiza's construction is the void as the limit of knowledge, as the border between the known and the unknown. This does not mean reducing the sculptural object to a merely allusive dimension, because the void as physical and spiritual power is an active void, possessing the capacity to reveal being through the trace of non-being. The void in its transcendental manifestation acts at once as universal and as particular, as in the atomic structure of matter, in which on the one hand it constitutes the inner structure of every atom and on the other it is the necessary condition for the interaction between all atoms. The void can therefore be seen as the field of action that enables the relations between forms, but it is also the necessary condition for the very existence of form. In Chillida's construction, the void represents the counterpart that structures matter, innate in it, the element in whose absence there would be neither variations nor the distinctive characters that define the phenomenal variety of the world. Physical laws become the subject of sculptural representation; the void is the instrument through which equilibrium is reached. Chillida dedicated himself to experiencing space through the senses, to perceiving its qualities, to telling the physical laws that forge matter into form, while form arranges places. From the artistic experience of the two sculptors, the matrices on which they built their original lyrical expressions, in which the void is the absolute protagonist, can be transposed to the architectural work of Moneo.
A field is thus defined within which the formal matrices drawn from the work of Oteiza and Chillida can be traced in the process of the birth and construction of Moneo's architecture, but also in the relation that the architecture establishes with place and with time. The void becomes an instrument for reading constructed space through the relationships that determine its proportions, rhythms and relations. In this way the void makes it possible to interpret architectural space and to read its value, the quality of the spaces that construct it, because it acts as an instrument of composition whose role is to maintain the separation between the elements while bringing out the field of relations. The void is the instrument that serves to characterise the elements within the composition, related to each other yet distinct. The meaning of the void therefore pushes the interpretation of architectural composition towards the play of relations between elements that, independent and distinct, are strengthened in their identity. If on the one hand the void, as a measurable reality, enables all dimensional variations, quantifying the relationships between the parts, on the other its dialectical connotation enables the search for the equilibrium that regulates such variations. An equilibrium that therefore does not represent a state obtained by applying criteria set up by arbitrary rules, but one that depends on the intimate nature of matter and on its embodiment in form. The production of a form, or of a formal system that may be directed to the construction of a building, is indissolubly tied to a technique based on knowledge of the formal vocation of matter, and what it can also represent and signify is expressed in the characterisation of the site. For Moneo, in fact, the space defined by architecture is above all a site, because the essence of the site is founded on construction.
When Moneo speaks of the "birth of the idea of the project" as the essential moment in the constructive process of architecture, he refers to a process whose complexity can arise only from a deep knowledge of the site, one that leads to an understanding of its specificity. That specificity arises from an infinite sum of relations, which for Moneo is the story of the uniqueness of a site: of its history, of its cultural identity and of the dimensional characters tied to it, beyond its physical characteristics. This vision rests on a solid physical structure of perceptions, distances, orientations and references, which ensure that the process is first of all knowledge, appropriation. An appropriation that does not happen as a direct consequence, however, because there is no cause-and-effect relationship between place and architecture, just as there is no single, exclusive way of arriving at the representation of an idea. It is an approach that, through the construction of the place where the architecture acquires its being, seeks an expression of its sense of truth. The proposal of a distinction into areas such as space, matter, spirit and time, responding to the issues that mark out the themes of Moneo's design research, allows a more immediate reading of the systems underlying his compositional principles, through which the recurrent architectural elements of his design vocabulary are related. From the dialectic between opposites expressed in the duality of form, through the definition of a complex element able to mediate between inside and outside as a genuine system of exchange, Moneo explores the formal development of the building by deepening the relations that the volume establishes in the site.
The invention, from time to time, of a system used to answer the needs of the programme and to resolve the dual character of the construction in a single gesture requires a deep knowledge of professional practice. The technical aspect is the essential support to which the construction of the system is indissolubly tied. What arouses interest, therefore, is the search for the criteria and the ways of building that can reveal essential aspects of the being of things. The constructive process demands, in fact, knowledge of the formative properties of matter, properties from which spring reflections on the relations that can arise around architecture through the resonance produced by forms. The void, in fact, through form is able to construct the site by establishing a relation of reciprocity, a reciprocity determined in the play between empty and full and of the forms among themselves, with respect to their surroundings but also with respect to subjective experience. The construction of a background that amplifies what is arranged upon it and clearly shows the relations between the parts, while at the same time binding itself to its surroundings and opening the space of vision, is a device that in Moneo's architecture finds one of its most effective applications in the use of the platform as an architectural element. The spiritual force of this architectural gesture lies in its ability to define a place whose design intention is perceived and shared by those who experience it, almost an instrument for contacting cosmic forces in a delicate process that leads to equilibrium with them, yet in an entirely physical way. The principles underlying the construction of form, drawn from the study of the void and of the relations it permits, lead to the expression of human values in the construction of the site. The validity of these principles, however, is tested by time.
Time is what Moneo considers the filter to which every architecture is subjected: the survival of an architecture, or of any of its formal characters, reveals the validity of the principles that determined it. Thus, in the tie between the spatial and the spiritual dimension, between the material and the worldly dimension, there emerges the state of necessity that leads, in the construction of architecture, to establishing a contact with the forces of the universe and with the intimate world, through a process that translates that necessity into the elaboration of a formal system.
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of them. As a consequence, a huge number of papers is continuously produced and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure in the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau based on lattice-free triangles, together with some preliminary computational results. The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), where a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one which has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the use of general purpose cutting planes) can improve on the branch-and-cut methods proposed in the literature.
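The destroy-and-repair paradigm described above can be sketched in a few lines. The following is a minimal toy illustration on a single closed route: a greedy cheapest-insertion repair stands in for the ILP/MIP-solver repair step used in the thesis, and the function names and instance are invented for the example.

```python
import random

def route_cost(route, dist):
    # total length of a closed route starting and ending at depot 0
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def destroy(route, k, rng):
    # destroy step: remove k randomly chosen customers
    removed = rng.sample(route, k)
    kept = [c for c in route if c not in removed]
    return kept, removed

def repair(route, removed, dist):
    # greedy stand-in for the exponential-neighborhood ILP repair:
    # reinsert each customer at the position of minimum cost increase
    for c in removed:
        best = min(range(len(route) + 1),
                   key=lambda i: route_cost(route[:i] + [c] + route[i:], dist))
        route = route[:best] + [c] + route[best:]
    return route

def destroy_and_repair(route, dist, iters=200, k=2, seed=0):
    rng = random.Random(seed)
    best = route[:]
    for _ in range(iters):
        kept, removed = destroy(best, k, rng)
        cand = repair(kept, removed, dist)
        if route_cost(cand, dist) < route_cost(best, dist):
            best = cand  # accept only improving solutions
    return best
```

In the thesis the repair step is solved (heuristically) as an integer program by a general purpose MIP solver, which is what makes very large neighborhoods explorable; the loop structure is the same.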
Dall'involucro all'invaso. Lo spazio a pianta centrale nell'opera architettonica di Adalberto Libera
Abstract:
An archetype selected over the centuries. Adalberto Libera wrote little, preferring to use the project itself as his only means of verification. This study surveys his projects from a purely compositional standpoint, in relation to the motif that recurs most continuously and consistently throughout his work. "The fruit of a type selected over centuries", in Libera's own words, the central plan is one of the most widely used and repeated spatial archetypes in the history of architecture. Defined by a few consolidated elements and by characters of geometric precision and absoluteness, the central space has lent itself, over the course of the evolution of architecture, in its constructive as well as its symbolic aspects, to various uses: from the historical periods in which it coincided with sacred space par excellence, to others in which it served many different, more "secular" expressive possibilities. The central space was born of constructive considerations, and the same rationale determined its structural changes over the centuries, demanding from the technology of each age the maximum possible span, in applications which almost always coincided with the theme of monumental space. But it is in the Roman world that the motif of the central space is defined, through a series of achievements that fixed its character in perpetuity. The Pantheon was at once its highest result and its indispensable archetype, to the point that it becomes difficult to sustain any discussion of the central space that excludes it. Yet in ancient Rome the central-plan motif also shaped, in equally exemplary fashion, monuments, public spaces and buildings with very different implications.
The same is true of the Renaissance, for which Wittkower established once and for all the nature and interpretation of the centrally planned sacred space, and thus the symbolic significance underlying the interpretations tied to Humanism; the fifteenth- and sixteenth-century masters fixed the theme by drawing it, through study and direct observation, from the ruins that the first great excavations of ancient Rome, in those years of renewed interest in the classical world, were bringing to light to everyone's great surprise. It is no accident, then, that this study chooses to investigate Libera's architectural work through the motif of the central space. Examining his projects and built works, it turns out that this motif is particularly evident from the earliest to the latest work, crossing the war period which for many authors marks, in different ways, the distinction between one phase and another. Theme and occasion, always distinct for Libera, are precisely the key through which to investigate his work, leading to the discovery that the former, in this case the central plan, is the constant underlying all his work, while the latter returns each time different and yet always the same. Libera, formed on the great works surviving from antiquity and on their constructive method, consciously understood that the characters of architectural works, if valid, outlast time and survive contingent use and function. As for the devices by which they are formalised, these are themselves purely contingent, and therefore available to be transferred from one work to another, from one project to another, even as borrowings. Through these same two words, theme and occasion, it becomes clear that the subject of this study is Libera's method, and its occasion the study of the central space in his work.
There is, however, one aspect of the centrally planned space that evolves as Libera's work on the archetype progresses, and it is the motif underlying the whole path: the centre itself, precisely because this is a space built entirely on a centric rationale. It is the "centre" of the space that ultimately reveals the real progression and the awareness that matured and changed in Libera over the years. In the first phase, heavily laden with symbolic superstructure, even if handled with detachment by Libera, always ill-disposed to sacrifice the idea of architecture to a phantom, the centre of the space is simply the figure that identifies it: the icon represents the space itself, and the cross, the flame or the statue are different representations of the same idea of a centre built around an icon. In the second phase of his work the central space, changed in scale by the demands of patronage, grows and expands; the centric image takes on a celebratory character and becomes, in a different way, itself the symbol. One sees, above all, how the project for the "Civiltà Italiana" or the symbolic arch exemplify this different attitude. From the same point of view, the two projects formulated for the reuse of the Mausoleum of Augustus are the key to his passage from the first to the second phase: in the second project the Ara Pacis, making itself the centre of the composition, "breaks" the pattern of the symbolic figure at the centre, because it is itself a work of architecture. In doing so, the transition takes place whereby the building itself, the central space, becomes the centre of the space that it creates and determines, extending the potential and expressiveness of the enclosure (or cover) that defines the centred volume.
In this second series of projects, which reaches its apex and its point of "crisis" in the Palazzo dei Congressi at the E42, the symbol no longer resides in the very geometry of the space; rather, the space itself and the "action" taking place within it determine it: an action embodying movement, as in the Arco Simbolico and the "Civiltà Italiana", or, more often, celebration, as in the great Sala dei Ricevimenti at the E42 which, in the first project proposal, is depicted as a large hall populated by people in formal dress, at a reception, in fact. In other words, in this second phase the architecture is no longer a mere container but represents the shape of the space, representing that which it "contains". The next step, determined by the awareness matured in the transition to the post-war years, radically changes the way Libera conceives centric space, even though formally and compositionally he continues to work on the same elements, combined and related in a different way. In this last phase Libera places man at the centre: human beings, who in the two previous phases were already, latently, at the centre of the composition, though relegated to the role of spectators in the first period and of supporting actors in the second, now become the heart of the space. And it is, as we shall see, the very form of being together, the form of the "assembly" in its different shades (up to the sacred), that determines the shape of the space and the way the parts that compose it relate to one another. The reconstruction of the birth, evolution and development of the central-plan space in Libera, born of the study of the monuments of ancient Rome, intersecting fifty years of recent history, honed by the constancy of a method and a lifetime of practice, thus itself becomes a project, employing the same mechanisms adopted by Libera: decomposition and recomposition, the search for synthesis and unity of form, are in fact the structure of this research work.
The road taken by Libera is above all a lesson in clarity and rationality, and this work aims to uncover at least a fragment of it.
Abstract:
The main aims of my PhD research have been the investigation of the redox, photophysical and electronic properties of carbon nanotubes (CNTs) and of their possible uses as functional substrates for the (electro)catalytic production of oxygen and as molecular connectors for quantum-dot molecular automata. While many diverse applications of CNTs have long been proposed in electronics, in the field of sensors and biosensors, and as structural reinforcement in composite materials, the study of their properties as individual species has long been a challenging task. CNTs are in fact virtually insoluble in any solvent and, for years, most studies have been carried out on bulk samples (bundles). Chapter 2 gives an appropriate description of carbon nanotubes, of their production methods and of the functionalization strategies for their solubilization. Chapter 3 reports an extensive voltammetric and vis-NIR spectroelectrochemical investigation of true solutions of unfunctionalized individual single-wall CNTs (SWNTs), which made it possible to determine for the first time the standard electrochemical reduction and oxidation potentials as a function of tube diameter for a large number of semiconducting SWNTs. We also established the Fermi energy and the exciton binding energy for individual tubes in solution and, from the linear correlation found between the potentials and the optical transition energies, we were able to calculate the redox potentials of SWNTs that are insufficiently abundant in, or absent from, the samples. In Chapter 4 we report on very efficient and stable nano-structured oxygen-evolving anodes (OEAs), obtained by assembling an oxygen-evolving polyoxometalate cluster (a totally inorganic ruthenium catalyst) with a conducting bed of multiwalled carbon nanotubes (MWCNTs).
Here, MWCNTs were effectively used as carriers of the polyoxometalate for the electrocatalytic production of oxygen and turned out to greatly increase both the efficiency and the stability of the device, preventing the release of the catalyst. Our bioinspired electrode addresses the major challenge of artificial photosynthesis, i.e. efficient water oxidation, taking us closer to the day when we might power the planet with carbon-free fuels. Chapter 5 reports a study on surface-active chiral bis-ferrocenes conveniently designed to act as prototypical units for molecular computing devices. Preliminary electrochemical studies in a liquid environment demonstrated the capability of such molecules to access three distinct oxidation states. The introduction of side chains allowed them to be organized into self-assembled monolayers (SAMs) on a surface and their molecular and redox properties to be studied on solid substrates. Electrochemical studies on SAMs of these molecules confirmed their ability to undergo fast (Nernstian) electron transfer processes, generating, in the positive potential region, either the fully oxidized Fc+-Fc+ or the partly oxidized Fc+-Fc species. Finally, in Chapter 6 we report a preliminary electrochemical study of graphene solutions prepared according to an original procedure recently described in the literature. Graphene is the newest member of the carbon nanomaterial family and is certainly bound to be among the most promising materials for the next generation of nanoelectronics.
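The extrapolation of redox potentials from a linear correlation with optical transition energies, as mentioned in this abstract, amounts to a simple least-squares line fit. A minimal sketch follows; all numbers are invented for the illustration and are not data from the thesis.

```python
import numpy as np

# hypothetical example: optical transition energies (eV) and measured
# oxidation potentials (V) for a few semiconducting SWNTs
e_opt = np.array([0.90, 1.00, 1.10, 1.20, 1.30])
e_ox = np.array([0.42, 0.47, 0.52, 0.57, 0.62])

# least-squares linear fit: E_ox = a * E_opt + b
a, b = np.polyfit(e_opt, e_ox, 1)

def predict_ox(e11):
    """Extrapolate the oxidation potential of a tube whose optical
    transition energy is known but whose voltammetry is not measurable."""
    return a * e11 + b
```

Once the correlation is calibrated on the tubes abundant enough to be measured, `predict_ox` gives an estimate for species that are too scarce in the sample to be observed directly, which is the use the abstract describes.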
Abstract:
Over time, networks of permanent GNSS (Global Navigation Satellite System) stations have become an increasingly valuable support for satellite surveying techniques. They are at once an effective materialisation of the reference system and a useful aid to topographic surveying and to deformation-monitoring applications. Alongside the now classical static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases the determination of precise coordinates for the permanent stations is very important, so much so that it was decided to carry it out with different software environments. Bernese and Gamit (which share the differenced approach) were compared with Gipsy (which uses the undifferenced approach). The use of three software packages made it essential to define a common processing strategy able to guarantee that the ancillary data and the physical parameters adopted would not become a source of divergence between the solutions obtained. The analysis of national-scale networks, or of local networks over long time spans, involves processing thousands or even tens of thousands of files; moreover, sometimes because of trivial errors, or in order to run scientific tests, it is often necessary to repeat the processing. Considerable resources were therefore invested in developing automatic procedures aimed, on the one hand, at preparing the data archives and, on the other, at analysing the results and comparing them whenever more than one solution is available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna).
It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes performing this task. As regards the comparison between the different software packages, it was found that: • the solutions obtained with Bernese and Gamit (the two differenced codes) are always in perfect agreement; • the Gipsy solutions (undifferenced method) are almost always slightly more scattered than those of the other codes and sometimes show appreciable numerical differences with respect to the other solutions, especially in the East coordinate; the differences are however within a few millimetres, and the lines describing the trends remain practically parallel to those of the other two codes; • the aforementioned East bias between Gipsy and the differenced solutions is more evident with certain antenna/radome combinations and seems related to the way the different codes use absolute calibrations. It must also be considered that Gipsy is appreciably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, with clearly greater flexibility of management: if an instrumental error is found at a single station, or if one decides to add or remove a station from the network, recomputing the whole network is not necessary. Together with the other networks it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that produced its first definition, but also over four further 28-day intervals, spaced six months apart and thus covering an overall time span of two years.
It was thus verified that the RDN can be used to frame any Italian regional network into ITRF05 (International Terrestrial Reference Frame), despite the still limited time span. On the one hand, the (purely indicative and unofficial) ITRF velocities of the RDN stations were estimated; on the other, a test framing of a regional network into ITRF through the RDN was carried out, and it was verified that there are no appreciable differences with respect to framing into ITRF through a suitable number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
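Framing a regional network into a global frame such as ITRF is commonly carried out with a similarity (Helmert) transformation estimated on the common stations. The sketch below shows a small-angle, linearized 7-parameter Helmert fit by least squares; this is an illustration of the standard technique, not necessarily the exact procedure used in the thesis.

```python
import numpy as np

def helmert_7p(src, dst):
    """Least-squares estimate of a small-angle 7-parameter Helmert
    transformation (3 translations, 1 scale, 3 rotations) mapping src
    coordinates onto dst. src, dst: (n, 3) arrays of common stations."""
    rows, rhs = [], []
    for (x, y, z), d in zip(np.asarray(src, float), np.asarray(dst, float)):
        # linearized model: dst - X = T + s*X + r x X, r = (rx, ry, rz)
        rows += [[1, 0, 0, x, 0.0, z, -y],
                 [0, 1, 0, y, -z, 0.0, x],
                 [0, 0, 1, z, y, -x, 0.0]]
        rhs += list(d - np.array([x, y, z]))
    p, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return p  # tx, ty, tz, s, rx, ry, rz

def apply_helmert(p, pts):
    """Apply the linearized transformation to an (n, 3) array of points."""
    t, s, r = p[:3], p[3], p[4:]
    pts = np.asarray(pts, float)
    return pts + t + s * pts + np.cross(np.broadcast_to(r, pts.shape), pts)
```

In practice the parameters are estimated on the fiducial stations (e.g. the RDN or IGS/EUREF sites) and then applied to the whole regional network.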
Abstract:
“Cartographic heritage” is different from “cartographic history”. The latter refers to the study of the development, through time, of the surveying and drawing techniques related to maps, i.e. through the different types of cultural environment that formed the background to their creation. The former concerns the whole body of ancient maps, together with those cultural environments, which history has handed down to us and which we perceive as cultural values to be preserved and made available to many users (the public, institutions, experts). Unfortunately, ancient maps often suffer preservation problems of their analog support, mostly due to aging. Today, metric recovery in digital form and digital processing of historical cartography make it possible to preserve this map heritage. Moreover, modern geomatic techniques offer new ways of using the historical information, which would be unachievable on analog supports. This PhD thesis reports the whole digital workflow for the recovery and elaboration of ancient cartography, with special emphasis on the use of digital tools in the preservation and elaboration of cartographic heritage.
It is possible to divide the workflow into three main steps, which reflect the chapter structure of the thesis itself: • map acquisition: conversion of the ancient map support from analog to digital, by means of high-resolution scanning or 3D surveying (digital photogrammetry or laser scanning techniques); this process must be performed carefully, with special instruments, in order to reduce deformation as much as possible; • map georeferencing: reproducing in the digital image the native metric content of the map, or even improving it, by selecting a large number of still-existing ground control points; in this way it is possible to understand the projection features of the historical map, as well as to evaluate and represent the degree of deformation induced by the old type of cartographic transformation (which may be unknown to us), by surveying errors or by support deformation, usually all errors far too large by current standards; • data elaboration and management in a digital environment, by means of modern software tools: vectorization, giving the map a new and more attractive graphic appearance (for instance, by creating a 3D model), superimposing it on current base maps, comparing it with other maps, and finally inserting it into a GIS or WebGIS environment as a specific layer. The study is supported by several case histories, each of them interesting from the point of view of at least one step of the digital cartographic workflow.
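The georeferencing step above, estimating a transformation from ground control points and measuring the residual deformation the model cannot absorb, can be sketched with a simple affine model, one common choice among several; the control-point coordinates below are invented for the illustration.

```python
import numpy as np

def fit_affine(pixel, world):
    """Least-squares affine georeferencing: map scanned-map pixel
    coordinates onto ground coordinates via control points.
    pixel, world: (n, 2) arrays with n >= 3."""
    pixel = np.asarray(pixel, float)
    A = np.column_stack([pixel, np.ones(len(pixel))])  # rows [x, y, 1]
    # one least-squares solve handles both output coordinates at once
    coeff, *_ = np.linalg.lstsq(A, np.asarray(world, float), rcond=None)
    return coeff  # shape (3, 2)

def georeference(coeff, pts):
    """Transform pixel points into ground coordinates."""
    pts = np.asarray(pts, float)
    return np.column_stack([pts, np.ones(len(pts))]) @ coeff

def rms_residual(coeff, pixel, world):
    # RMS residual over the control points: a rough measure of the
    # deformation (survey error, support distortion) the affine
    # model cannot represent
    r = georeference(coeff, pixel) - np.asarray(world, float)
    return float(np.sqrt((r ** 2).mean()))
```

With many control points the residual field, not just its RMS, can be mapped to visualise where the ancient map is most deformed, which is the kind of analysis the abstract describes.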
The ancient maps considered are the following: • three maps of the Po river delta, made at the end of the 16th century by a famous land surveyor, Ottavio Fabri (the sole author of the first map, co-author with Gerolamo Pontara of the second and with Bonajuto Lorini and others of the third), who wrote a methodological textbook explaining a new topographical instrument, the squadra mobile (mobile square), invented and used by himself; today all three maps are preserved in the State Archive of Venice; • the Ichnoscenografia of Bologna by Filippo de’ Gnudi, made in 1702 and today preserved in the Archiginnasio Library of Bologna; it is a scenographic bird’s-eye view of the city, but one with icnographic value as well, as the author himself declares; • the map of Bologna by the periti Gregorio Monari and Antonio Laghi, the first map of the city derived from a systematic survey, even though it was made only ten years later (1711–1712) than the map by de’ Gnudi; in this map the scenographic view was abandoned in favor of a more correct representation by means of orthogonal projection; today the map is preserved in the State Archive of Bologna; • the Gregorian Cadastre of Bologna, made in 1831 and updated until 1927, now preserved in the State Archive of Bologna; it is composed of 140 maps and 12 brogliardi (register volumes). In particular, the three maps of the Po river delta and the Cadastre were studied with respect to their acquisition procedure. Moreover, the Po delta maps were analyzed from the georeferencing point of view, and the Cadastre with respect to a possible GIS insertion. Finally, the Ichnoscenografia was used to illustrate a possible application of digital elaboration, namely 3D modeling.
Last but not least, we must not forget that the study of an ancient map should start, whenever possible, from the consultation of the precious original analog document; analysis by means of current digital techniques then opens new research opportunities in a rich and modern multidisciplinary context.
Abstract:
The main objective of this thesis is the development of a short-term empirical forecasting model able to deliver accurate and reliable hourly forecasts of electricity consumption for the Italian market. The model condenses the knowledge and experience gained during my current work at Romagna Energia S.C.p.A., one of the main Italian players in the energy market. Over the last two decades the structure of electricity markets has changed drastically all over the world. In most industrialised countries the electricity sector has moved from its original monopoly configuration to a liberalised competitive market, where consumers are free to choose their supplier. Modelling and forecasting the electricity consumption time series have therefore taken on a very important role in the market, both for policy makers and for operators. Building on the existing literature, on knowledge acquired "in the field" and on some intuitions, a triangular modelling framework, entirely novel in this area of research, was analysed and developed, suggested precisely by the physical mechanism through which electricity is produced and consumed over the 24 hours of the day. This triangular scheme can be seen as a particular VARMA model and is doubly useful, for interpreting the phenomenon on the one hand and for forecasting on the other. New leading indicators linked to meteorological factors are also introduced, with the aim of improving its forecasting performance.
Using the Italian electricity consumption time series from 1 March 2010 to 30 March 2012, the parameters of the proposed forecasting scheme were estimated, and its forecasts for the period from 1 April 2012 to 30 April 2012 were evaluated against those provided by official sources.
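A triangular hourly scheme of the kind described in this abstract, where the equation for hour h may depend on earlier hours of the same day, so that the 24 equations can be estimated and solved in order, can be illustrated with a toy two-regressor version. This is a stand-in sketch under invented dynamics, not the thesis model.

```python
import numpy as np

def fit_triangular(load):
    """Toy triangular scheme: regress the load of hour h on the previous
    day's load at hour h and on the same day's load at hour h-1 (hour 0
    uses the previous day's hour 23), so the system is lower-triangular
    across hours. load: (days, 24) array. Returns (24, 3) coefficients
    (intercept, b_prev_day, b_prev_hour)."""
    days, H = load.shape
    coeffs = []
    for h in range(H):
        y = load[1:, h]
        prev_day = load[:-1, h]
        prev_hour = load[1:, h - 1] if h > 0 else load[:-1, H - 1]
        A = np.column_stack([np.ones_like(y), prev_day, prev_hour])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        coeffs.append(c)
    return np.array(coeffs)

def forecast_day(coeffs, last_day):
    """Chain the 24 regressions: each hour's forecast feeds the next."""
    pred = np.empty(24)
    prev_hour = last_day[23]
    for h in range(24):
        a, b, c = coeffs[h]
        pred[h] = a + b * last_day[h] + c * prev_hour
        prev_hour = pred[h]
    return pred
```

The triangular structure is what allows the day-ahead forecast to be produced recursively, hour by hour; the thesis version is richer (VARMA form, meteorological leading indicators), but the mechanics are of this kind.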
Abstract:
The Augustan regio VI of Rome is one of the urban sectors most affected by the radical changes wrought by man in the urbanisation of the city, changes that profoundly altered its elevations and original layout. These remarkable transformations began in antiquity, but intensified above all during the Renaissance when, starting with Pius IV and especially under Sixtus V, who was active in many other areas of the city, numerous works of urban renewal were carried out that deeply affected the appearance and character of the area in question. From the Renaissance up to the great excavations of the late 1800s, the whole district gradually filled up with numerous large buildings that completely encroached upon the remains of the ancient period: the construction of the Quirinal Palace and of the various noble palaces and, above all, of the many ministries and of the first Termini station at the end of the nineteenth century entailed extensive demolitions without the production of adequate excavation documentation. This research aims to reconstruct, in a diachronic perspective, the topography of one of the central districts of ancient Rome by analysing the main phenomena that mark the evolution of the urban fabric, with regard to public structures and especially to private ones.
Indeed, the main finding of this research is that, already from the late Republican period, this regio took shape as a predominantly residential district, inhabited above all by the high aristocracy holding the highest offices of the Roman state; besides domus and insulae, some of the oldest temples of the city were built on the Quirinal throughout the Republican age, and with their bulk they occupied part of the hill until late antiquity, thus representing a conspicuous and constant presence within the built-up space.
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, thorough processing, inversion, post-processing, data integration, and data calibration constitute the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity, and deliver high-resolution results. We further use the final, most reliable resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can further be used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow model predictions.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
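The voxel-based volume estimate mentioned above reduces to counting cells tagged as aquifer material and multiplying by the cell volume. A minimal sketch of that computation follows; the grid shape, cell size, and lithological label scheme are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def aquifer_volume(voxels, aquifer_label, cell_size_m=(100.0, 100.0, 5.0)):
    """Total volume (m^3) of voxels carrying the given lithological label.

    cell_size_m: (dx, dy, dz) edge lengths of one voxel, in metres
    (values here are assumed, not taken from the survey).
    """
    dx, dy, dz = cell_size_m
    n_cells = int(np.count_nonzero(voxels == aquifer_label))
    return n_cells * dx * dy * dz

# Toy lithology grid: 0 = till (aquitard), 1 = sand/gravel (aquifer)
model = np.zeros((4, 4, 4), dtype=int)
model[1:3, 1:3, :2] = 1          # a small buried sand body (8 cells)
print(aquifer_volume(model, 1))  # 8 cells * 50000 m^3 = 400000.0
```

In practice the label grid would come from classifying the inverted resistivity model against borehole lithology, but the volume step itself stays this simple.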
Abstract:
Joaquín Camaño was a Jesuit of the Province of Paraguay who spent most of his life, from 1767 to 1820, in Italian exile. His work and reputation may be considered of minor importance compared with those of many other Jesuits exiled to Emilia-Romagna by order of Charles III at the end of the eighteenth century. My research examines the role of J. Camaño as a minor figure who entered the lives of the other expelled Jesuits through a dynamic relational network of which he was one of the principal architects. My aim has been to study the impact that the exiled American Jesuits had, through the life of Joaquín Camaño, on the Italian, European, and American intellectual worlds after the expulsion of 1767. With his studies, he took part in the renewed and lively rhetoric of the "New World", which in those years acquired great momentum. Born in the modest town of La Rioja, Argentina, he emerged as a brilliant cartographer, ethnographer, and linguist in the context of the European Enlightenment, thanks to his distinctive life as a missionary. After the expulsion, Joaquín Camaño, together with numerous other American confrères, arrived in Faenza, in the Papal States, devoting himself to the study of cartography, ethnography, and the American languages. His research falls at a pivotal moment in the history of linguistic and anthropological thought, when direct observation and theoretical reflection on phenomena were being measured against the great human variety by then encountered throughout the world.
Abstract:
The study defines a new farm classification and identifies arable land management practices. These aspects, together with several indicators, are taken into account to estimate the sustainability level of farms under organic and conventional regimes. The data source is the Italian Farm Accountancy Data Network (RICA) for the years 2007-2011, which samples structural and economic information. Environmental data have been added to better describe the farm context. The new farm classification describes each holding by general information and farm structure. The general information comprises the adopted regime and the farm location in terms of administrative region, slope, and phyto-climatic zone. The farm structure describes the presence of the main productive processes and land covers recorded in the FADN database. The farms, grouped by homogeneous farm structure or farm typology, are evaluated in terms of sustainability. The MAD farm model has been used to estimate a list of indicators, which mainly describe the environmental and economic dimensions of sustainability. Finally, arable lands are examined to identify arable land management practices and crop rotations. Each arable land has been classified by crop pattern, and crop rotation management has then been analysed through spatial and temporal approaches. The analysis reports high variability within regimes. Farm structure influences indicator levels more than the regime does, and it is not always possible to compare the two regimes. However, some differences between organic and conventional agriculture have been found: organic farm structures show different frequencies and geographical locations than conventional ones, and different associations between arable lands and farm structures have been identified.
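The temporal approach to crop rotation described above amounts to labelling each plot's year-by-year crop sequence with a rotation pattern. The sketch below illustrates the idea with invented crop names and a deliberately simplified three-class scheme; it is not the RICA/FADN coding or the thesis's actual classifier.

```python
def rotation_pattern(sequence):
    """Label a year-by-year crop sequence observed on one arable plot.

    The three pattern classes used here are assumptions for illustration.
    """
    crops = set(sequence)
    if len(crops) == 1:
        return "monoculture"
    # strict period-2 alternation, e.g. maize/wheat/maize/wheat
    if len(crops) == 2 and all(a != b for a, b in zip(sequence, sequence[1:])):
        return "two-crop rotation"
    return "mixed rotation"

print(rotation_pattern(["maize", "maize", "maize"]))          # monoculture
print(rotation_pattern(["maize", "wheat", "maize", "wheat"])) # two-crop rotation
print(rotation_pattern(["maize", "wheat", "alfalfa"]))        # mixed rotation
```

Applying such a labelling to every plot, then tabulating patterns by regime and region, would reproduce the kind of spatial/temporal comparison the abstract describes.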