17 results for Collapsed objects and Supernovae
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax and is able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals, in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
Abstract:
In this work I address the study of language comprehension in an "embodied" framework. First, I present behavioral evidence supporting the idea that language modulates the motor system in a specific way, both at a proximal level (sensitivity to the effector) and at a distal level (sensitivity to the goal of the action in which the single motor acts are embedded). I present two studies in which the method is basically the same: we manipulated the linguistic stimuli (the kind of sentence: hand action vs. foot action vs. mouth action) and the effector by which participants had to respond (hand vs. foot vs. mouth; dominant hand vs. non-dominant hand). Response time analyses showed a specific modulation depending on the kind of sentence: participants were facilitated in the task execution (sentence sensibility judgment) when the effector they had to use to respond was the same one to which the sentences referred. That is, during language comprehension a pre-activation of the motor system seems to take place. This activation is analogous (even if less intense) to the one detectable when we actually execute the action described by the sentence. Beyond this effector-specific modulation, we also found an effect of the goal suggested by the sentence: the hand effector was pre-activated not only by hand-action-related sentences, but also by sentences describing mouth actions, consistent with the fact that to execute an action on an object with the mouth we first have to bring it to the mouth with the hand. After reviewing the evidence on simulation specificity directly referring to the body (for instance, the kind of effector activated by the language), I focus on specific properties of the objects to which the words refer, particularly their weight. In this case the hypothesis to test was whether both lifting-movement perception and lifting-movement execution are modulated by language comprehension. We used behavioral and kinematic methods, and we manipulated the linguistic stimuli (the kind of sentence: the lifting of heavy objects vs. the lifting of light objects). To study movement perception we measured the correlations between the weight of the objects lifted by an actor (heavy vs. light) and the estimates provided by the participants. To study movement execution we measured the variance of kinematic parameters (velocity, acceleration, time to the first peak of velocity) during the actual lifting of objects (heavy vs. light). Both kinds of measures revealed that language had a specific effect on the motor system, at both the perceptual and the motor level. Finally, I address the issue of abstract words. Different studies in the "embodied" framework have tried to explain the meaning of abstract words. The limit of these works is that they account only for subsets of phenomena, so the results are difficult to generalize. We tried to circumvent this problem by contrasting transitive verbs (abstract and concrete) and nouns (abstract and concrete) in different combinations. The behavioral study was conducted with both German and Italian participants, as the two languages are syntactically different. We found that response times were faster for the compatible pairs (concrete verb + concrete noun; abstract verb + abstract noun) than for the mixed ones. Interestingly, for the mixed combinations the analyses showed a modulation due to the specific language (German vs. Italian): when the concrete word preceded the abstract one, responses were faster, regardless of the word's grammatical class. Results are discussed in the framework of current views on abstract words. They highlight the important role of developmental and social aspects of language use, and confirm theories assigning a crucial role to both sensorimotor and linguistic experience for abstract words.
Abstract:
FIR spectroscopy is an alternative way of collecting spectra of many inorganic pigments and corrosion products found on art objects, which are not normally observable in the MIR region. Most FIR spectra are traditionally collected in transmission mode, but as a real novelty it is now also possible to record FIR spectra in ATR (Attenuated Total Reflectance) mode. For FIR transmission we employ polyethylene (PE) for the preparation of pellets, embedding the sample in PE. Unfortunately, the preparation requires heating of the PE in order to produce a transparent pellet. This affects compounds with low melting points, especially those with structurally incorporated water. Another option in FIR transmission is the use of thin films. We test the use of polyethylene thin film (PETF), both commercial and laboratory-made. ATR collection is possible in both the MIR and FIR regions on solid, powdery or liquid samples. Changing from the MIR to the FIR region is easy, as it simply requires a change of detector and beamsplitter (which can be performed within a few minutes). No preparation of the sample is necessary, which is a huge advantage over the PE transmission method. The most obvious difference, when comparing transmission with ATR, is the distortion of band shape (which appears asymmetrical in the lower-wavenumber region) and intensity differences. However, the biggest difference can be the shift of strongly absorbing bands towards lower wavenumbers in ATR mode. This sometimes huge band shift necessitates the collection of standard library spectra in both FIR transmission and ATR modes, if these two collection methods are to be employed for the analysis of unknown samples. Standard samples of 150 pigment and corrosion compounds are thus collected in both FIR transmission and ATR mode, in order to build up a digital library of spectra for comparison with unknown samples. XRD, XRF and Raman spectroscopy assist us in confirming the purity or impurity of our standard samples. 24 didactic test tablets, with known pigment and binder painted on the surface of a limestone tablet, are used for testing the established library and different ways of collecting in ATR and transmission mode. In ATR, micro-samples are scratched from the surface and examined in both the MIR and FIR regions. Additionally, direct surface contact of the didactic tablets with the ATR crystal is tested, together with water-enhanced surface contact. In FIR transmission we compare the powder from our test tablets on the laboratory PETF and embedded in PE. We also compare the PE pellets collected using a 4x beam condenser, focusing the IR beam area from 8 mm to 2 mm. A few samples collected from a mural painting in a Nepalese temple, corrosion products collected from archaeological Chinese bronze objects, and samples from mural paintings in an Italian abbey are examined by ATR or transmission spectroscopy.
Abstract:
This PhD thesis was entirely developed at the Telescopio Nazionale Galileo (TNG, Roque de los Muchachos, La Palma, Canary Islands) with the aim of designing, developing and implementing a new Graphical User Interface (GUI) for the Near Infrared Camera Spectrometer (NICS) installed at the Nasmyth A focus of the telescope. The idea of a new GUI for NICS arose from the need to optimize the astronomers' work through a set of powerful tools not present in the existing GUI, such as the possibility to automatically move an object onto the slit, or to perform a very preliminary image analysis and spectrum extraction. The new GUI also provides a wide and versatile image display, an automatic procedure to detect astronomical objects, and a facility for automatic image crosstalk correction. In order to test the overall correct functioning of the new GUI for NICS, and to provide some information on the atmospheric extinction at the TNG site, two telluric standard stars, namely Hip031303 and Hip031567, were observed spectroscopically during engineering time. The NICS set-up used is as follows: Large Field (0.25''/pixel) mode, 0.5'' slit, and spectral dispersion through the AMICI prism (R~100) and the higher-resolution (R~1000) JH and HK grisms.
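For context, atmospheric extinction derived from standard-star observations is conventionally modelled with the standard airmass relation below (a textbook form, given here as background on how such measurements are usually reduced, not as the thesis's own derivation):

\[ m_{\mathrm{obs}}(\lambda) = m_0(\lambda) + \kappa(\lambda)\, X \]

where \(m_{\mathrm{obs}}\) is the observed magnitude, \(m_0\) the magnitude above the atmosphere, \(\kappa(\lambda)\) the site extinction coefficient, and \(X\) the airmass of the observation.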
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for the delivery of services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the birth of the smart environment. This work will present some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions will be introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, focusing on system-level design. They will be accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks will be introduced and implemented, with the goal of maximizing the tradeoff between performance and energy efficiency. Experimental results will provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
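As a hedged illustration of the kind of lightweight on-board processing involved (a generic sketch, not the specific algorithms developed in the thesis), a complementary filter is a common low-cost way to estimate orientation on energy-constrained inertial nodes:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a single-axis complementary filter.

    Blends the integrated gyroscope rate (accurate short-term, drifts
    long-term) with the accelerometer-derived angle (noisy short-term,
    stable long-term). alpha sets the crossover between the two sources.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# e.g. a 100 Hz update with a 0.5 rad/s gyro reading:
angle = complementary_filter(angle=0.10, gyro_rate=0.5,
                             accel_angle=0.12, dt=0.01)
```

Filters of this family need only a handful of multiply-accumulate operations per sample, which is why they suit microcontroller-class nodes where energy efficiency matters.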
Abstract:
This thesis investigated affordances and verbal language to demonstrate the flexibility of embodied simulation processes. Starting from the assumption that both object/action understanding and language comprehension are tied to the context in which they take place, six studies clarified the factors that modulate simulation. The studies in chapters 4 and 5 investigated affordance activation in complex scenes, revealing the strong influence of the visual context, which included both objects and actions, on compatibility effects. The study in chapter 6 compared the simulation triggered by visual objects and object names, showing differences depending on the kind of materials processed. The study in chapter 7 tested the predictions of the WAT theory, confirming that the different contexts in which words are acquired lead to the difference typically observed in the literature between concrete and abstract words. The study in chapter 8, on the grounding of abstract concepts, tested the mapping of temporal contents onto the spatial frame of reference of the mental timeline, showing that metaphoric congruency effects are not automatic, but flexibly mediated by the context determined by the goals of different tasks. The study in chapter 9 investigated the role of iconicity in verbal language, showing sound-to-shape correspondences with everyday-object figures, a result that validates the reality of sound symbolism in ecological contexts. On the whole, this evidence favors embodied views of cognition and supports the hypothesis of a high flexibility of simulation processes. The reported conceptual effects confirm that context plays a crucial role in the emergence of affordances, the activation of metaphoric mappings and the grounding of language. In conclusion, this thesis highlights that in an embodied perspective cognition is necessarily situated and anchored to a specific context, as it is sustained by the existence of a specific body immersed in a specific environment.
Abstract:
Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH), and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked together. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constrain theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, remain not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ~ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of the AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the preselection of optical QSO candidates. If not properly taken into account, this failure to select particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any preselection in the VVDS gives us a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample shows in fact redder colors than expected by comparison, for example, with the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to a significant contamination from the host galaxy. We tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, also the UV GALEX and IR Spitzer bands, and fitting it with a combination of AGN and galaxy emission, also allowing for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M1450 < -22.15). Our estimated integral surface density at I_AB < 24.0 is 500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the luminosity function in the B-band for 1.0 < z < 3.6 using the 1/Vmax estimator. Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted with the pure luminosity evolution (PLE) models derived from previous optically selected samples. Qualitatively, this appears to be due to the fact that our data suggest the presence of an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower-luminosity objects. The position of this peak moves from z ~ 2.0 for MB < -26.0 to z ~ 0.65 for -22 < MB < -20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity ones, which reaches its maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies, and it is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later, by merging of less massive halos.
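For reference, the 1/Vmax estimator used above (Schmidt 1968) is conventionally written in binned form as

\[ \Phi(M, z)\,\Delta M \;=\; \sum_{i=1}^{N} \frac{1}{V_{\max,i}} \, , \]

where \(V_{\max,i}\) is the maximum comoving volume within which source \(i\) would still satisfy the survey selection limits; the exact incompleteness weighting adopted in the thesis may differ from this textbook form.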
Abstract:
This study aims at analysing Brian O'Nolan's literary production in the light of a reconsideration of the role played by his two most famous pseudonyms, Flann O'Brien and Myles na Gopaleen, behind which he was active both as a novelist and as a journalist. We tried to establish a new kind of relationship between them and their empirical author, following recent cultural and scientific surveys in the fields of Humour Studies, Psychology and Sociology: taking as a starting point the appreciation of the comic attitude in nature and in cultural history, we progressed through a short history of laughter and derision, followed by an overview of humour theories. After having established such a frame, we considered an integration of scientific studies in the field of laughter and humour as a base for our study scheme, in order to come to a definition of the comic author as a recognised, powerful and authoritative social figure who acts as a critic of conventions. The history of laughter and the comic we briefly summarized, based on the one related by the French scholar Georges Minois in his work (Minois 2004), has been taken into account in the view that the humorous attitude is one of man's characteristic traits, always present and witnessed throughout the ages, though subject in most cases to repression by culturally and politically conservative power. This sort of Super-Ego notwithstanding, or perhaps because of it, the comic impulse proved irreducible precisely in its influence on current cultural debates. Drawing mainly on Robert R. Provine's (Provine 2001), Fabio Ceccarelli's (Ceccarelli 1988), Arthur Koestler's (Koestler 1975) and Peter L. Berger's (Berger 1995) scientific essays on the actual occurrence of laughter and smiling in complex social situations, we underlined the abundant evidence for how the use of the comic, humour and wit (in a Freudian sense) can best be comprehended if seen as a common mental process designed for the improvement of knowledge, in which we traced a strict relation with the play-element the Dutch historian Huizinga highlighted in his famous essay, Homo Ludens (Huizinga 1955). We considered the comic and humour/wit as different sides of the same coin, and showed how the demonstrations scientists have provided on this particular subject are not conclusive, given that the mental processes cannot yet be irrefutably shown to be separate as regards gradations in comic expression and reception: in fact, different outputs in expression might lead back to one and the same production process, following the general 'Economy Rule' of evolution; man is the only animal who lies, meaning by this that a feeling is not necessarily associated one-to-one with one and the same outward display, so human expressions are not validating proofs of feelings. Considering societies, we found that in nature they are all organized in more or less the same way, that is, in élites who govern over a community which, in turn, recognizes them as legitimate delegates for that task; we inferred from this the epistemological possibility of the existence of an additional ruling figure alongside the political and religious ones: this figure being the comic, who is the person in charge of expressing true feelings towards given subjects of contention.
Every community has one, and his very peculiar status is validated by the fact that his place is within the community, living in it and speaking to it, but at the same time outside it, in the sense that his action focuses mainly on shedding light on ideas and objects placed outside the boundaries of social convention: taboos, fears, sacred objects and finally culture itself are the favourite targets of the comic person's arrows. This is the reason for the word a(rche)typical as applied to the comic figure in society: atypical in one sense, because unconventional and disrespectful of traditions, critical and never at ease with unblinkered respect for canons; archetypical, because the 'village fool', buffoon, jester, or anyone in any kind of society who plays such roles, is an archetype in the Jungian sense, i.e. a personification of an irreducible side of human nature that everybody instinctively knows: the beginner of a tradition, the perfect type, what is most conventional of all and therefore the exact opposite of the atypical. There is an intrinsic necessity, we think, for such figures in societies, just like politicians and priests, who should play an elitist role in order to guide and rule not for their own benefit but for the good of the community. We are not naïve and do know that the actual holders of power always tend to keep it indefinitely: the 'social comic' as a role of power has nonetheless the distinctive feature of being the only position whose tension is not towards stability. It has in itself the rewarding permission of contradiction, for the very reason we exposed before: the comic must cast an eye both inside and outside society, and his vision may perforce be inconsistent; this is offset by the popularity it earns him amongst readers and audiences. Finally, the difference between governors, priests and comic figures is the seriousness of the first two (fundamentally monologic) and the merry contradiction of the third (essentially dialogic). MPs, mayors, bishops and pastors should always console, comfort and soothe the popular mood in respect of public convention; the comic has the opposite task of provoking, urging and irritating, accomplishing at the same time a sort of check on the soothing powers of society, the keepers of righteousness. In this view, the comic person assumes paramount importance in counterbalancing the administration of power, whether in the form of acting in public places or in written pieces circulated for private reading. At this point our Irish writer Brian O'Nolan (1911-1966) comes into question: the real name that stood behind the more famous masks of Flann O'Brien, novelist, author of At Swim-Two-Birds (1939), The Hard Life (1961), The Dalkey Archive (1964) and, posthumously, The Third Policeman (1967); and of Myles na Gopaleen, journalist, keeper for more than 25 years of the Cruiskeen Lawn column in The Irish Times (1940-1966), and author of the famous book-parody in Irish An Béal Bocht (1941), later translated into English as The Poor Mouth (1973). Brian O'Nolan, a professional senior civil servant of the Republic, never saw his authorship recognized in literary studies, since all of them concentrated on his alter egos Flann, Myles and some others he used for minor contributions. So far as we are aware, this is the first study to place the real name in the title, thereby acknowledging in him a unity of intent that no one did before.
And this choice of title is not a mere mark of distinction for its own sake, but also a wilful sign of how his opus should now be reconsidered. In effect, the aim of this study is exactly to demonstrate how the empirical author Brian O'Nolan was the real Deus in machina, the master of puppets who skilfully directed all of his identities in planned directions, so as to completely fulfil the role of the comic figure we explained before. Flann O'Brien and Myles na Gopaleen were personae, not persons, but the impression one gets from the critical studies on them is the exact opposite. Literary consideration, which came only after O'Nolan's death, began with Anne Clissmann's work, Flann O'Brien: A Critical Introduction to His Writings (Clissmann 1975), while the most recent book is Keith Donohue's The Irish Anatomist: A Study of Flann O'Brien (Donohue 2002), passing through M. Keith Booker's Flann O'Brien, Bakhtin and Menippean Satire (Booker 1995), Keith Hopper's Flann O'Brien: A Portrait of the Artist as a Young Post-Modernist (Hopper 1995) and Monique Gallagher's Flann O'Brien, Myles et les autres (Gallagher 1998). There have also been a couple of biographies, which incidentally try somehow to explain critical points of his literary production, while many critical studies do the same from the opposite side, trying to ground critical points of view in the author's restless life and habits. At this stage, we attempted to merge into O'Nolan's corpus the journalistic articles he wrote, more than 4,200 of them, amounting to roughly two million words over the column's 26-year run. To justify this, we appealed to several considerations about the figure O'Nolan used as a writer: Myles na Gopaleen (later simplified to na Gopaleen), who was the equivalent of the street artist or storyteller, speaking to his imaginary public and trying to involve it in his stories, quarrels and debates of all kinds. First of all, he relied much on language for the reactions he would obtain, playing on, and with, words so as to ironically unmask untrue relationships between words and things. Secondly, he pushed to the limit the convention of addressing spectators and listeners usually employed in live performance, stretching its role in written discourse to achieve a greater effect of reader involvement. Lastly, he profited much from what we have labelled his 'specific weight', i.e. the potential influence in society given by his recognised authority in particular matters, a position from which he could launch deeper attacks on conventional beliefs, thus complying with the duty of the comic we hypothesised before: that of criticising society even at the risk of losing the benefits the post guarantees. That seemingly masochistic tendency has its rationale. Every representative has many privileges on the assumption that he, or she, has great responsibilities in administrating. The higher those responsibilities are, the higher the reward, but also the more severe the punishment for misdeeds done while in charge. But we all know that not everybody accepts the rules, and many try to use their power for personal benefit without wanting to undergo the law's penalties. The comic, showing in this case more civic sense than others, helped very much in this by his lack of access to the use of public force, finds in the role of the scapegoat the right accomplishment of his task, accepting punishment when his breaking of conventions is too stark to be forgiven.
As Ceccarelli demonstrated, the role of the object of laughter (comic, ridicule) has its very own positive side: there is freedom of expression for the person, and at the same time integration in society, even if at low levels. So the banishment of a 'social' comic can never amount to total extirpation from society, revealing how the scope of the comic lies on an entirely fictional layer, bearing no relation to facts, nor real consequences in terms of physical harm. Myles na Gopaleen, mastering these three characteristics we postulated to the highest degree, can be considered an author worth noting; and the oeuvre he wrote, the whole collection of Cruiskeen Lawn articles, is rightfully a novel, because it respects the canons of the genre, especially as regards the authorial figure and his relationship with the readers. In addition, his work can be studied even if we cannot conduct our research on the whole of it, a procedure justified exactly by the resemblance to the real figure of the storyteller: its 'chapters', the daily articles, had a format that even the distracted reader could follow, even one who had not read every previous article. So we can critically consider a good part of them, as collected in the seven volumes published so far, with the addition of some others outside the collections, because completeness in this case is not at all a guarantee of better precision in the assessment; on the contrary, examination of the totality of the articles might lead us to consider him as a person and not a persona. Having cleared up these points, we proceeded to consider the works of Brian O'Nolan tout court as the works of a single author, rather than complicating the references with many names which are none other than well-wrought sides of the same personality. By putting O'Nolan as the correct object of our research, the empirical author of the works of the personae Flann O'Brien and Myles na Gopaleen, a clearer literary landscape emerges: the comic author Brian O'Nolan, self-conscious of his paramount role in society as both a guide and a scourge, in a word as an a(rche)typical, intentionally chose to differentiate his personalities so as to create different perspectives in different fields of knowledge, using, in addition, different means of communication: novels and journalism. We finally compared the newly assessed author Brian O'Nolan with other great Irish comic writers in English, such as James Joyce (the one everybody names as the master in the field), Samuel Beckett and Jonathan Swift. This comparison showed once more how O'Nolan is in no way inferior to these authors who, though greatly celebrated by critics, nonetheless failed to achieve the great public recognition O'Nolan received as Myles, awarded by the daily audience he reached and influenced with his Cruiskeen Lawn column. For this reason, we believe him to be representative of the comic figure's function as a social regulator and as a builder of solidarity, such as the one Raymond Williams spoke of in his work (Williams 1982), with in mind the aim of building a 'culture in common'. There is no way for a 'culture in common' to be achieved if we do not accept the fact that even the most functional society rests on conventions, and in a world more and more 'connected' we need someone to help everybody negotiate with different cultures and persons.
The comic gives us a worldly perspective which is at the same time comforting and distressing, but in the end not harmful as the one furnished by politicians can be: he lets us peep into parallel worlds without moving too far from our armchairs and, as a consequence, he is the one who does his best to improve our understanding of things.
Abstract:
If the historian's job is to understand the past as it was understood by the people who lived it, then perhaps it is not far-fetched to think that it is also necessary to communicate the results of research with the tools proper to an epoch, tools which influence the mentality of those who live in that epoch. Emerging technologies, especially in the area of multimedia such as virtual reality, allow historians to communicate the experience of the past in more senses. How does history collaborate with information technologies, in particular as regards the possibility of making virtual historical reconstructions, with related examples and reviews? What worries historians most is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of knowing history that can be considered valid. That is, is the emotion that navigating a 3D reality can arouse a means capable of transmitting knowledge? Or is the idea we have of the past, and of its study, subtly changed the moment it is popularized through 3D graphics? For some time, however, the discipline has been coming to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularization of the past, and by a partial and anti-scientific popularization of the past. In a post-literary world we must begin to think that the visual culture in which we are immersed is changing our relationship with the past: this does not make the knowledge acquired so far false, but it is necessary to recognize that more than one historical truth exists, sometimes written, sometimes visual. The computer has become a ubiquitous platform for the representation and diffusion of information, and methods of interaction and representation are constantly evolving; it is along these two tracks that the offer of information technologies at the service of history moves. The purpose of this thesis is precisely to explore, through the use and experimentation of different tools and information technologies, how the past can be effectively narrated through three-dimensional objects and virtual environments, and how, as characterizing elements of communication, they can collaborate, in this particular case, with the historical discipline. The present research reconstructs some lines of the history of the main factories active in Turin during the Second World War; recalling the close relationship that exists between structures and individuals, and in this city in particular between factory and workers' movement, it is inevitable to delve into the events of the Turin workers' movement, which in the period of the Liberation struggle was a political and social actor of prime importance in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories are the main targets of the bombings, and it is in the factories that a war of liberation is fought between the working class and the authorities, both of the factory and of the city. The factory becomes the place of the "usurpation of power" Weber speaks of, the stage on which the various episodes of the war take place: strikes, deportations, occupations ....
The model of the city represented here is not a simple visualization but an information system in which the modelled reality is represented by objects, which serve as a theatre for events with a precise chronological collocation, and within which it is possible to select static renders (images), pre-computed movies (animations) and interactively navigable scenarios, as well as to search bibliographic sources and scholars' commentary specifically linked to the event in question. The objective of this work is to make the historical disciplines and computer science interact, through different projects, across the different technological opportunities the latter presents. The reconstruction possibilities offered by 3D are thus put at the service of research, offering an integral vision capable of bringing us closer to the reality of the epoch under consideration and conveying all the results onto a single expository platform. Dissemination: the Multimedia Informative Map project, Torino 1945. On the practical side, the project provides a navigable interface (Flash technology) representing the map of the city of that time, through which it is possible to have a vision of the places and times in which the Liberation took shape, both at a conceptual and at a practical level. This interweaving of coordinates in space and time not only improves the understanding of the phenomena, but creates greater interest in the subject through the use of highly effective (and appealing) popularization tools, without losing sight of the need to validate the historical theses, offering itself as a didactic platform. Such a context requires an in-depth study of the historical events in order to clearly reconstruct a map of the city that is precise both topographically and at the level of multimedia navigation. The preparation of the map must follow current standards, so the software solutions used are those provided by Adobe Illustrator for the creation of the topography, and by Macromedia Flash for the creation of a navigation interface. The descriptive database is of course consultable, being contained in the media support and fully annotated in the bibliography. It is the continuous evolution of information technologies and the massive diffusion of computer use that leads us to a substantial change in historical study and learning; academic institutions and economic operators have taken up the demand coming from users (teachers, students, cultural heritage professionals) for a wider diffusion of historical knowledge through its computerized representation. On the didactic side, the reconstruction of a historical reality through computer tools also allows non-historians to experience at first hand the problems of research, such as missing sources, gaps in the chronology, and the assessment of the veracity of facts through evidence. Information technologies allow a complete, unitary and exhaustive vision of the past, conveying all the information onto a single platform and allowing even non-specialists to understand immediately what is being discussed. The best history book, by its nature, cannot do this, since it divides and organizes information differently. In this way students are given the opportunity to learn through a representation different from those they are used to.
The central premise of the project is that student learning outcomes can be improved if a concept or content is communicated through several channels of expression, in our case through text, images and a multimedia object. Didactics: the Conceria Fiorio is one of the symbolic places of the Turin Resistance, and the project is a virtual reality reconstruction of the Conceria Fiorio in Turin. The reconstruction serves to enrich historical culture both for those who produce it, through accurate research into the sources, and for those who can then benefit from it, above all young people, who, attracted by the playful aspect of the reconstruction, learn more easily. Building an artefact in 3D gives students the basis for recognizing and expressing the correct relationship between the model and the historical object. The stages of work through which the 3D reconstruction of the Conceria was reached were: in-depth historical research, based on sources, which may be archival documents or archaeological excavations, iconographic or cartographic sources, etc.; the modelling of the buildings on the basis of the historical research, to provide the polygonal geometric structure allowing three-dimensional navigation; and the creation, through computer graphics tools, of the 3D navigation. Unreal Technology is the name given to the graphics engine used in numerous commercial video games. One of the fundamental characteristics of this product is a tool called Unreal editor with which it is possible to build virtual worlds, and this is what was used for this project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine; the free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and display specific information on some points of interest. Navigation is in first person; a process of "spectacularization" of the environments visited, through appropriate furnishing, gives the user greater immersion, making the environment more credible and immediately decodable. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without any programming work being necessary. This engine is therefore particularly suitable for the rapid creation of prototypes of decent quality, although the presence of a certain number of bugs makes it partly unreliable. Using a video game editor for this reconstruction points towards the possibility of its use in teaching: what 3D simulations make possible in this specific case is to let students experience the work of historical reconstruction, with all the problems the historian must face in recreating the past. This work aims to be, for historians, a step towards the creation of a wider expressive repertoire, one that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes those engaged in teaching sceptical, but the experience of projects developed, above all abroad, helps us understand that they are a good investment.
The fact that a software house which creates a video game of great public success includes in its product a series of tools allowing users to create their own worlds to play in is symptomatic of the fact that the computer literacy of average users is growing ever more rapidly, and that the use of an editor like Unreal Engine's will in future be an activity within the reach of an ever wider public. This puts us in a position to design more immersive teaching modules, in which the experience of researching and reconstructing the past is interwoven with the more traditional study of the events of a given epoch. Interactive virtual worlds are often described as the key cultural form of the 21st century, as cinema was for the 20th. The purpose of this work has been to suggest that there are great opportunities for historians in employing 3D objects and settings, and that historians must seize them. Consider the fact that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take when they must be disseminated. A historical analysis done superficially or with wrong premises can still be disseminated and gain credit in many circles if it is spread with attractive and modern means. This is why it does not pay to bury a good piece of work in some library, waiting for someone to discover it. This is why historians must not ignore 3D. Our capacity, as scholars and students, to perceive important ideas and orientations often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. A historical reconstruction can be very useful from an educational point of view not only for those who visit it but also for those who build it: the research phase necessary for the reconstruction can only increase the developer's cultural background. Conclusions: the most important thing was the chance to gain experience in the use of media of this kind to narrate and make the past known. Reversing the cognitive paradigm I had learned in my humanistic studies, I tried to derive what we might call "universal laws" from the objective data that emerged from these experiments. From an epistemological point of view, computing, with its capacity to manage impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has gone in this direction, seeking to learn and use current tools that will have an ever greater presence in communication (including scientific communication) in the future, and that are the communication media of choice for certain age groups (adolescents). Pushing the terms to the extreme, we can say that the challenge that visual culture poses today to the traditional methods of doing history is the same one that Herodotus and Thucydides posed to the narrators of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city.
In a post-literary world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our capacity as scholars and students to perceive important ideas and orientations often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. The experiences gathered in the preceding pages lead us to think that, in a not too distant future, a tool like the computer will be the only means through which to transmit knowledge, and that from a didactic point of view its interactivity engages students as no other modern medium of communication does.
Abstract:
Smart Environments are currently considered a key factor in connecting the physical world with the information world. A Smart Environment can be defined as the combination of a physical environment, an infrastructure for data management (called a Smart Space), a collection of embedded systems gathering heterogeneous data from the environment, and a connectivity solution to convey these data to the Smart Space. With this vision, any application that takes advantage of the environment can be devised without the need to access it directly, since all information is stored in the Smart Space in an interoperable format. Moreover, according to this vision, for each entity populating the physical environment, i.e. users, objects, devices, environments, the following questions arise: "Who?", i.e. which entities should be identified? "Where?", i.e. where are such entities located in physical space? And "What?", i.e. which attributes and properties of the entities should be stored in the Smart Space in a machine-understandable format, in the sense that their meaning has to be explicitly defined and all the data should be linked together so as to be automatically retrieved by interoperable applications? Starting from this, location detection is a necessary step in the creation of Smart Environments. If the addressed entity is a user and the environment a generic one, a meaningful way to assign the position is through a Pedestrian Tracking System. In this work two solutions for this type of system are proposed and compared; one of the two was studied and developed in all its aspects during the doctoral period. The work also investigates the problem of creating and managing the Smart Environment. The proposed solution is to create, by means of natural interactions, links between objects and between objects and their environment, through the use of specific devices, i.e. Smart Objects.
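As a hedged illustration of what a pedestrian tracking update can look like (a generic dead-reckoning sketch under an assumed fixed step length, not the system actually developed in the thesis), each detected step advances a 2-D position estimate along the current heading:

```python
import math

def pdr_update(x, y, heading_rad, step_length_m=0.7):
    """Advance the position by one detected step along the heading.

    step_length_m is an assumed average stride; real systems estimate
    it per step from inertial data.
    """
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# e.g. one step from the origin along the x axis:
pos = pdr_update(0.0, 0.0, 0.0)   # -> (0.7, 0.0)
```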
Abstract:
This PhD Thesis is part of a long-term, wide-ranging research project carried out by the Osservatorio Astronomico di Bologna (INAF-OABO), whose primary goal is the comprehension and reconstruction of the formation mechanisms of galaxies and of their evolutionary history. There is now substantial evidence, from both the theoretical and the observational point of view, in favor of the hypothesis that the halo of our Galaxy has been, at least partially, built up by the progressive accretion of small fragments, similar in nature to the present-day dwarf galaxies of the Local Group. In this context, the photometric and spectroscopic study of the systems which populate the halo of our Galaxy (i.e. dwarf spheroidal galaxies, tidal streams, massive globular clusters, etc.) permits us to uncover not only the origin and behaviour of these systems, but also the structure of our Galactic halo, together with its formation history. In fact, the study of the populations of these objects, and also of their chemical compositions, ages, metallicities and velocity dispersions, permits not only an improvement in our understanding of the mechanisms that govern Galactic formation, but also a valid indirect test of the cosmological model itself. Specifically, in this Thesis we provide a complete characterization of the tidal Stream of the Sagittarius dwarf spheroidal galaxy, which is the most striking example of the process of tidal disruption and accretion of a dwarf satellite onto our Galaxy. Using Red Clump stars extracted from the catalogue of the Sloan Digital Sky Survey (SDSS), we obtained an estimate of the distance, the depth along the line of sight and the number density for each detected portion of the Stream (and, more generally, for each detected structure along our line of sight). Moreover, we compared the relative number (i.e. the ratio) of Blue Horizontal Branch stars and Red Clump stars (the two features are tracers of populations of different age/different metallicity) in the main body of the galaxy and in the Stream, in order to verify the presence of an age-metallicity gradient along the Stream. We also report the detection of a population of Red Clump stars probably associated with the recently discovered Bootes III stellar system. Finally, we present the results of a survey of radial velocities over a wide region, extending from r ~ 10' out to r ~ 80', within the massive star cluster Omega Centauri. The survey was performed with FLAMES@VLT, to study the velocity dispersion profile in the outer regions of this stellar system. All the results presented in this Thesis have already been published in refereed journals.
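For context, Red Clump stars serve as standard candles because their absolute magnitude \(M\) is nearly constant, so each measured apparent magnitude \(m\) yields a distance through the standard distance modulus (the textbook relation, given here as background rather than as the thesis's specific calibration):

\[ (m - M)_0 = 5 \log_{10} d - 5 \, , \]

with \(d\) the distance in parsecs and the subscript 0 denoting extinction-corrected magnitudes.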
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, with the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely a sophisticated classification algorithm and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal Surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
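A minimal sketch of the PIR-gated, energy-aware wake-up policy described above is given below in Python; the threshold and the subsystem interface are illustrative assumptions (the thesis's actual controller is the MPC-based one), but the structure shows how a passive PIR event gates the power-hungry camera and radio.

```python
class Subsystem:
    """Stub for a switchable node subsystem (camera or radio)."""

    def __init__(self, name: str):
        self.name = name

    def power_on(self) -> None:
        print(f"{self.name}: on")

    def power_off(self) -> None:
        print(f"{self.name}: off")

# Assumed battery fraction below which PIR events are ignored.
LOW_ENERGY = 0.15

def on_pir_event(battery_frac: float, camera: Subsystem, radio: Subsystem) -> None:
    # Energy-level-dependent trigger: the PIR sensor, itself passive and
    # low-power, decides when the expensive subsystems may wake up.
    if battery_frac < LOW_ENERGY:
        return  # too little stored energy: skip this event
    camera.power_on()
    radio.power_on()
    # ... capture, classify locally, transmit only if relevant ...
    radio.power_off()
    camera.power_off()

on_pir_event(0.6, Subsystem("camera"), Subsystem("radio"))
```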
Resumo:
The dramatic impact that vascular diseases have on human life quality and expectancy nowadays is the reason why both the medical and the scientific communities put great effort into discovering new and effective ways to fight vascular pathologies. Among the many different treatments, endovascular surgery is a minimally invasive technique that makes use of X-ray fluoroscopy to obtain real-time images of the patient during interventions. In this context radiopaque biomaterials, i.e. materials able to absorb X-ray radiation, play a fundamental role, as they are employed both to enhance the visibility of devices during interventions and to protect medical staff and patients from X-ray radiation. Organic-inorganic hybrids are materials that combine the characteristics of organic polymers with those of inorganic metal oxides. These materials can be synthesized via the sol-gel process and can easily be applied as thin coatings on different kinds of substrates. Good radiopacity of organic-inorganic hybrids has recently been reported, suggesting that these materials might find applications in medical fields where X-ray absorption and visibility are required. The present PhD thesis aimed at developing and characterizing new radiopaque organic-inorganic hybrid materials that can find application in the vascular surgery field as coatings for the improvement of medical device traceability, as well as for the production of X-ray shielding objects and garments. Novel organic-inorganic hybrids based on different polyesters (poly-lactic acid and poly-ε-caprolactone) and polycarbonate (poly-trimethylene carbonate) as the polymeric phase and on titanium oxide as the inorganic phase were synthesized. The study of the phase interactions in these materials made it possible to demonstrate that Class II hybrids (in which covalent bonds exist between the two phases) can be obtained starting from any kind of polyester or polycarbonate, without the need for polymer pre-functionalization, thanks to transesterification reactions operated by the inorganic molecules on ester and carbonate moieties. Polyester-based hybrids were successfully coated via dip coating on different kinds of textiles. Coated textiles showed improved radiopacity with respect to the plain fabric while remaining soft to the touch. The hybrid coated the single fibers of the yarn rather than the yarn as a whole: the openings between yarns were maintained, and fabric breathability was therefore preserved. Such coatings are promising for the production of lightweight garments for the X-ray protection of medical staff during interventional fluoroscopy, which will help prevent pathologies that stem from chronic X-ray exposure. A means of increasing the protection capacity of hybrid-coated fabrics was also investigated and implemented in this thesis: by synthesizing the hybrid in the presence of a suspension of radiopaque tantalum nanoparticles, PDMS-titania hybrid materials with tunable radiopacity were developed and successfully applied as coatings. A solution for enhancing medical device radiopacity was also successfully investigated. High metal radiopacity was combined with the good mechanical and protective properties of organic-inorganic hybrids in the form of a double-layer coating. Tantalum was employed as the constituent of the first layer, deposited on sample substrates by means of a sputtering technique.
The second layer was composed of a hybrid whose constituents are well-known biocompatible organic and inorganic components: the two polymers PCL and PDMS, and titanium oxide, respectively. The metallic layer conferred good X-ray visibility on the substrate. A correlation between radiopacity and coating thickness, derived during this study, makes it possible to tailor radiopacity simply by controlling the sputtering deposition time of the metal layer. The applied metal deposition technique also permits easy shaping of the radiopaque layer, allowing the production of radiopaque markers for medical devices that can be unambiguously identified by surgeons during implantation and in subsequent radiological investigations. The synthesized PCL-titania and PDMS-titania hybrids strongly adhered to the substrates and showed good biocompatibility, as highlighted by cytotoxicity tests. The PDMS-titania hybrid coating was also characterized by a high flexibility that allows it to withstand large substrate deformations without detaching or cracking, making it suitable for application on flexible medical devices.
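The reported radiopacity-thickness correlation is consistent with the textbook exponential attenuation law; the relation below is the standard Beer-Lambert form, given here as an assumed illustration rather than the exact fit derived in the thesis.

```latex
% Standard X-ray attenuation (Beer-Lambert); an assumed form for the
% radiopacity-thickness correlation, not the thesis's exact fit.
\[
  I(t) = I_0 \, e^{-\mu t}
\]
% I_0: incident intensity; \mu: linear attenuation coefficient of the
% tantalum layer at the working X-ray energy; t: metal layer thickness.
```

Since the sputtered thickness grows roughly linearly with deposition time, controlling the deposition time controls t and hence the transmitted intensity.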
Resumo:
In recent years, the energy question has taken on a central role in the worldwide debate in relation to four main factors: the non-reproducibility of natural resources, the exponential growth of consumption, economic interests, and the preservation of the environmental and climatic balance of our planet. It is therefore necessary to change the model of energy production and consumption, especially in cities, where energy consumption is most concentrated. For these reasons, recourse to renewable energy sources (Fonti Energetiche Rinnovabili, FER) has become a necessary, appropriate and urgent measure in urban planning as well. To improve the overall energy performance of the city system, it is necessary to implement transformation-governance policies that move beyond a "building-centric" operational logic and encompass, beyond the single building, aggregations of buildings and their relations/interactions in terms of material and energy inputs and outputs. A generalized replacement of the existing building stock with new hyper-technological buildings is unfeasible. How, then, can planning regulations and practice be redefined to generate energy-efficient building fabrics? This research proposes the integration of the nascent energy planning of the territory with the more consolidated urban planning norms, in the generation of "energy saving" urban fabrics that add the energy-environmental performance of the context to that of the individual buildings, in an overall energy balance. After describing and comparing the main FER available today, this study suggests a methodology for a preliminary assessment of the mix of technologies and FER best suited to each site configured as an "energy district". The results of this process provide the basic elements for setting up the actions needed to integrate energy matters into urban plans, through the application of the principles of equalization (perequazione) in the definition of performance requirements at the settlement scale, which are indispensable for a correct transition to the design of urban "objects" and "systems".
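As a concrete, if simplified, illustration of the preliminary assessment the methodology suggests, the Python sketch below compares the annual demand of a hypothetical "energy district" with the yield of a candidate FER mix; all figures and technology entries are illustrative assumptions, not data from the thesis.

```python
# Assumed annual district demand (electric + thermal), in MWh.
ANNUAL_DEMAND_MWH = 4200.0

# Candidate FER mix: assumed installed size and annual yield per unit.
FER_MIX = {
    "photovoltaics": {"size_kw": 900.0, "mwh_per_kw": 1.2},
    "solar_thermal": {"size_kw": 400.0, "mwh_per_kw": 0.8},
    "mini_wind":     {"size_kw": 150.0, "mwh_per_kw": 1.8},
}

# District-level energy balance: total FER supply vs. demand coverage.
supply_mwh = sum(t["size_kw"] * t["mwh_per_kw"] for t in FER_MIX.values())
coverage = supply_mwh / ANNUAL_DEMAND_MWH

print(f"FER supply: {supply_mwh:.0f} MWh/year, "
      f"coverage of district demand: {coverage:.0%}")
```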
Resumo:
This research sets out to investigate how the fantastic literature of the early twentieth century uses the fictional representation of objects as a means of its own emergence. The hypothesis is that certain representational schemes elaborated by fantastic authors such as Hoffmann, Poe or Maupassant can also be found in the literature of the twentieth century, where they are redeployed in response to a changed socio-cultural situation. The thesis is in two parts: the first, itself divided into two chapters, serves as a theoretical and historical frame for the textual analyses. It discusses the concept of image, and from that discussion derives a precise idea of literary space and of the literary object. It then proceeds to a reconstruction of the history of the nineteenth-century fantastic (not without theoretical, and in particular genre-theoretical, reflections), which on the one hand aims to historicize the idea of a "fantastic genre", and on the other to identify a typology of objects structurally linked to that narrative genre. Two classes of objects are thus singled out, corresponding to the two chapters that make up the second part of the work. Both types of object, which for simplicity may be called fetish-objects and spectral objects, stand halfway between the imaginary and the real; but while the fetish-object has something more than a realistically described object, the spectral object has something deficient about it and never achieves complete materialization. The authors examined include Papini, Pirandello, Bontempelli, Savinio and Landolfi; nor are references lacking to authors of other nations, from Kafka to Sartre, from James to Virginia Woolf, in keeping with the idea of considering the Italian fantastic within a broader literary geography.