Abstract:
This study aimed to assess in vitro the thermal alterations taking place during Er:YAG laser cavity preparation of primary tooth enamel at different energies and pulse repetition rates. Forty healthy human primary molars were bisected in a mesio-distal direction, providing 80 fragments. Two small orifices were made on the dentin surface, to which type K thermocouples were attached. The fragments were individually fixed with wax on a cylindrical Plexiglass® abutment and randomly assigned to eight groups according to the laser parameters (n = 10): G1 - 250 mJ/3 Hz, G2 - 250 mJ/4 Hz, G3 - 250 mJ/6 Hz, G4 - 250 mJ/10 Hz, G5 - 250 mJ/15 Hz, G6 - 300 mJ/3 Hz, G7 - 300 mJ/4 Hz and G8 - 300 mJ/6 Hz. An area of 4 mm² was delimited. Cavities (2 mm long × 2 mm wide × 1 mm deep) were prepared in non-contact (12 mm) and focused mode. Temperature values were registered from the start of laser irradiation until the end of cavity preparation. Data were analyzed by one-way ANOVA and the Tukey test (p ≤ 0.05). Groups G1, G2, G6, and G7 were statistically similar and showed the lowest mean temperature rises. The setting 250 mJ at 10 and 15 Hz yielded the highest temperature values. The settings 250 and 300 mJ at 6 Hz provided mean temperatures below the acceptable critical value, suggesting that these parameters can ablate primary tooth enamel safely. Moreover, the temperature elevation was directly related to the increase in pulse repetition rate, whereas there was no direct correlation between temperature rise and energy density. Therefore, it is important to use a lower pulse repetition rate, such as 300 mJ at 6 Hz, during cavity preparation in pediatric patients.
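To make the statistical step above concrete (one-way ANOVA followed by Tukey's post-hoc test at p ≤ 0.05), the sketch below runs the same analysis on hypothetical temperature-rise data for the eight groups; the numbers are placeholders, not the study's measurements.

```python
# Sketch of the analysis described in the abstract: one-way ANOVA across the
# eight parameter groups, then Tukey's HSD post-hoc test. The temperature-rise
# values are hypothetical placeholders (n = 10 per group, as in the design).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = ["G1", "G2", "G3", "G4", "G5", "G6", "G7", "G8"]
temps = {g: rng.normal(loc=2.0 + 0.3 * i, scale=0.4, size=10)
         for i, g in enumerate(groups)}

f_stat, p_value = stats.f_oneway(*temps.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(temps.values()))
labels = np.repeat(groups, 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```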
Abstract:
Oral leukoplakias (OL) are potentially malignant lesions that are typically white in color. Smoking is considered a risk factor for developing OL, and dysplastic lesions are more prone to malignant transformation. The aim of this study was to describe the clinical features observed in dysplastic and non-dysplastic OL in both smokers and non-smokers. A total of 315 cases of OL were retrieved and separated into dysplastic and non-dysplastic lesions, and these cases were further categorized as originating in smokers or non-smokers. Frequencies of each type of OL lesion, with respect to smoking status, were established. The results demonstrated that 131 cases of OL were dysplastic (74 in smokers and 57 in non-smokers) and 184 were non-dysplastic (96 in smokers and 88 in non-smokers). For the OL cases in smokers for which information about alcohol consumption was also available (84 cases), the results revealed no significant difference in the proportions of dysplastic and non-dysplastic lesions. Dysplastic lesions were more frequent in male smokers and in non-smoking females. The median age of smokers with OL was significantly lower than that of non-smokers; the lowest median ages were observed in female smokers with dysplastic OL. The most frequent anatomical site of dysplastic lesions was the floor of the mouth in smokers and the tongue in non-smokers. Dysplastic lesions in smokers were significantly smaller than non-dysplastic lesions in non-smokers. Being a male smoker, being female, being younger, and having smaller lesions were associated with dysplastic features in OL. These clinical data may be important for predicting OL malignant transformation.
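Using only the counts reported in this abstract (74/57 dysplastic and 96/88 non-dysplastic OL in smokers/non-smokers), one can check the association between smoking and dysplasia with a chi-square test; this is an illustrative re-analysis, not part of the original study.

```python
# Chi-square test on the 2x2 contingency table built from the abstract's
# reported counts (rows: dysplastic / non-dysplastic; columns: smokers /
# non-smokers). Illustrative only.
from scipy.stats import chi2_contingency

#                 smokers  non-smokers
table = [[74, 57],   # dysplastic
         [96, 88]]   # non-dysplastic

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```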
Abstract:
Objective: This study evaluated success in attaining non-HDL-cholesterol (non-HDL-C) goals in the multinational L-TAP 2 study. Methods: 9955 patients ≥ 20 years of age with dyslipidemia on stable lipid-lowering therapy were enrolled from nine countries. Results: Success rates for non-HDL-C goals were 86% in low-risk, 70% in moderate-risk, and 52% in high-risk patients (63% overall). In patients with triglycerides > 200 mg/dL, the success rate for non-HDL-C goals was 35%, vs. 69% in those with triglycerides ≤ 200 mg/dL (p < 0.0001). Among patients attaining their LDL-C goal, 18% did not attain their non-HDL-C goal. In those with coronary disease and at least two risk factors, only 34% and 30% attained their non-HDL-C and LDL-C goals, respectively. Rates of failure to attain both LDL-C and non-HDL-C goals were highest in Latin America. Conclusions: Non-HDL-C goal attainment lagged behind LDL-C goal attainment; this gap was greatest in higher-risk patients. (c) 2012 Elsevier Ireland Ltd. All rights reserved.
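For context on the quantity being tracked: non-HDL-C is total cholesterol minus HDL-C, and guideline goals for non-HDL-C are conventionally set 30 mg/dL above the corresponding LDL-C goal (as in NCEP ATP III; whether L-TAP 2 used exactly these thresholds is not stated in the abstract). A minimal sketch:

```python
# Non-HDL-C and goal attainment, under the common ATP III convention that the
# non-HDL-C goal equals the LDL-C goal + 30 mg/dL. Values are illustrative.

def non_hdl_c(total_chol_mg_dl: float, hdl_c_mg_dl: float) -> float:
    """Non-HDL cholesterol in mg/dL."""
    return total_chol_mg_dl - hdl_c_mg_dl

def attains_goal(non_hdl: float, ldl_goal_mg_dl: float) -> bool:
    """True if non-HDL-C is at or below the LDL-C goal + 30 mg/dL."""
    return non_hdl <= ldl_goal_mg_dl + 30

# Example: a high-risk patient with an LDL-C goal of 100 mg/dL
value = non_hdl_c(total_chol_mg_dl=210, hdl_c_mg_dl=45)  # 165 mg/dL
print(value, attains_goal(value, ldl_goal_mg_dl=100))    # 165 False
```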
Abstract:
Background: The public health system of Brazil is structured as a network of increasing complexity, but the low resolving capacity of emergency care at pre-hospital units and the lack of organization of patient flow have overloaded the hospitals, mainly those of higher complexity. Awareness of this phenomenon led Ribeirão Preto to implement the Medical Regulation Office and the Mobile Emergency Attendance System. The objective of this study was to analyze the impact of these services on the severity profile of non-traumatic conditions at a University Hospital. Methods: The study conducted a retrospective analysis of the medical records of 906 patients older than 13 years of age who entered the Emergency Care Unit of the Hospital of the University of São Paulo School of Medicine at Ribeirão Preto. All presented acute non-traumatic conditions and were admitted to the Internal Medicine, Surgery or Neurology Departments during two study periods: May 1996 (prior to) and May 2001 (after the implementation of the Medical Regulation Office and Mobile Emergency Attendance System). Demographics and mortality risk levels calculated with the Acute Physiology and Chronic Health Evaluation II (APACHE II) were determined. Results: From 1996 to 2001, the mean age increased from 49 ± 0.9 to 52 ± 0.9 years (P = 0.021), as did the percentage of patients with co-morbidities, from 66.6% to 77.0% (P = 0.0001), the number of in-hospital complications, from 260 to 284 (P = 0.0001), the mean calculated APACHE II mortality risk, from 12.0 ± 0.5% to 14.8 ± 0.6% (P = 0.0008), and the mortality rate, from 6.1% to 12.2% (P = 0.002). The differences were more significant for patients admitted to the Internal Medicine Department. Conclusion: The implementation of the Medical Regulation Office and Mobile Emergency Attendance System contributed to directing patients with higher severity scores to the Emergency Care Unit, demonstrating the potential of these services for the hierarchical structuring of pre-hospital networks and referrals.
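Since severity here is summarized by APACHE II, a brief sketch of how a raw APACHE II score maps to a predicted mortality risk may help. The logistic formula below is the published Knaus et al. (1985) regression; the diagnostic-category weight is left at zero for simplicity, which is an assumption for illustration, not part of this study.

```python
# Predicted in-hospital mortality risk from an APACHE II score (Knaus et al.,
# 1985): logit = -3.517 + 0.146 * score + 0.603 if post emergency surgery
# + a diagnostic-category weight (set to 0 here for simplicity).
import math

def apache2_mortality_risk(score: int, emergency_surgery: bool = False,
                           category_weight: float = 0.0) -> float:
    logit = (-3.517 + 0.146 * score
             + (0.603 if emergency_surgery else 0.0) + category_weight)
    return 1.0 / (1.0 + math.exp(-logit))

# Risks for two hypothetical scores:
for score in (12, 15):
    print(score, round(apache2_mortality_risk(score), 3))
```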
Abstract:
Background: The methods used for evaluating wound dimensions, especially for chronic wounds, are invasive and inaccurate. The fringe projection technique with phase shift is a non-invasive, accurate and low-cost optical method. Objective: The aim is to validate the technique through the determination of the dimensions of objects of known topography, with different geometries and colors chosen to simulate wounds and skin tones. Taking into account the influence of the optical factors of skin wounds, the technique is then used to evaluate actual patients' wound dimensions and to study its limitations in this application. Methods: Four sinusoidal fringe patterns, each displaced by ¼ of a period, were projected onto the objects' surfaces. The object dimensions were obtained from the unwrapped phase map, through the observation of the fringe deformations caused by the object topography and using phase-shift analysis. An object with simple geometry was used for dimensional calibration, and the topographic dimensions of the other objects were determined from it. After verifying the compatibility of the data and validating the method, it was used for measuring the dimensions of real patients' wounds. Results and Conclusions: The discrepancies between the actual topography and the dimensions determined with the fringe projection technique for the known objects were lower than 0.50 cm. The method was successful in obtaining the topography of real patients' wounds. Objects and wounds with sharp topographies, or those causing shadows or reflections, are difficult to evaluate with this technique.
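The four-step phase-shifting computation described above fits in a few lines. The sketch below (plain NumPy, with synthetic images in place of real camera frames) shows the standard reconstruction: the wrapped phase from four quarter-period-shifted patterns, then a simple unwrap; the scale calibration against a reference object is omitted.

```python
# Four-step phase-shifting: I_k = A + B*cos(phi + k*pi/2) for k = 0..3, so the
# wrapped phase is phi = atan2(I4 - I2, I1 - I3). Unwrapping the phase map
# then yields the topography up to a calibration factor.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase map from four pi/2-shifted fringe images."""
    return np.arctan2(i4 - i2, i1 - i3)

def unwrapped_phase(i1, i2, i3, i4):
    """Unwrap row-wise then column-wise (simple approach; noisy wound images
    may need a robust 2-D unwrapper)."""
    phi = wrapped_phase(i1, i2, i3, i4)
    return np.unwrap(np.unwrap(phi, axis=1), axis=0)

# Synthetic demo: a smooth bump observed under shifted fringes
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
true_phi = 6 * np.exp(-((x - .5)**2 + (y - .5)**2) / 0.02) + 40 * x
imgs = [128 + 100 * np.cos(true_phi + k * np.pi / 2) for k in range(4)]
phi = unwrapped_phase(*imgs)
print(np.allclose(phi - phi[0, 0], true_phi - true_phi[0, 0], atol=1e-6))
```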
Abstract:
Work carried out by: Reyes, C., Schiavi, A., Aguilar del Soto,
Abstract:
Hans-Georg Gadamer's philosophical hermeneutics, undoubtedly one of the cornerstones of twentieth-century thought, is a highly composite, multifaceted and articulated philosophy, formed, so to speak, by a multiplicity of different dimensions interwoven with one another. This is already evident from a simple glance at the internal composition of his main work, Wahrheit und Methode (1960), which presents a theory of understanding that examines three different dimensions of human experience (art, history and language), obviously conceived as fundamentally interrelated. But this overall picture becomes considerably more complicated as soon as one considers at least some of the numerous contributions that Gadamer wrote and published before and after his opus magnum: contributions that testify to the significant presence of other themes in his thought. Gadamer's interpreters, however, have not always fully taken this complexity into account, since a large part of the exegetical literature on his thought is essentially centred on the 1960 masterpiece (and in particular on the problems of the legitimation of the Geisteswissenschaften), devoting less attention to the other paths he followed and, in particular, to the properly ethical and political dimension of his hermeneutical philosophy. Moreover, it seems to me that due attention has not always been paid to the fundamental unity (not to be confused with an alleged "systematicity", which Gadamer explicitly rejected) that, despite the undeniable multiplicity and heterogeneity of Gadamer's thought, nevertheless holds within it. My thesis, then, is that aesthetics and the human sciences, philosophy of language and moral philosophy, dialogue with the Greeks and critical engagement with modern thought, considerations on anthropological problems and reflections on our sociopolitical and technoscientific present all represent the different dimensions of a single body of thought, dimensions that in some way converge towards a single centre. A "unifying" centre that, in my view, is to be identified in what we might call the malaise of modernity. In other words, it seems to me that the whole of Gadamer's philosophical reflection ultimately springs from the acknowledgement of a situation of crisis or malaise in which our world and our civilization find themselves today. A crisis that, given its depth and complexity, has, so to speak, "branched out" in multiple directions, coming to affect various spheres of human existence. These spheres are accordingly analysed and investigated by Gadamer with a critical eye, in an effort to bring out the main problematic knots and, in their light, to put forward alternative proposals, remedies, "correctives" and possible solutions. Starting from this basic understanding, my research is organized into three large sections devoted respectively to the pars destruens of Gadamerian hermeneutics (first and second sections) and to its pars construens (third section).
In the first section, entitled A Phenomenology of Modernity: the Multiple Symptoms of the Crisis, after showing how much of twentieth-century philosophy was dominated by the idea of a crisis currently afflicting Western civilization, and how Gadamer's hermeneutics can also be placed within this underlying philosophical discourse, I try to illustrate, one by one, what in the eyes of the philosopher of Truth and Method are the main symptoms of the present crisis. These symptoms include: the socioeconomic pathologies of our "administered" and bureaucratized world; the indiscriminate planetary expansion of the Western lifestyle to the detriment of other cultures; the crisis of values and certainties, with the concomitant spread of relativism, scepticism and nihilism; the growing inability to relate in an adequate and meaningful way to art, poetry and culture, which are increasingly degraded to mere entertainment; and, finally, the problems linked to the spread of weapons of mass destruction, to the concrete possibility of an ecological catastrophe, and to the disquieting prospects opened up by some recent scientific discoveries (above all in genetics). Having outlined the general profile Gadamer provides of our age, in the second section, entitled A Diagnosis of the Malaise of Modernity: the Spread of Technical-Scientific Instrumental Rationality, I try to show how, at the root of all these phenomena, he essentially discerns a single cause, which in his judgement moreover coincides with the very origin of modernity: namely, the birth of modern science and its intrinsic link with technology and with a specific form of rationality that Gadamer, evidently drawing on interpretive categories developed by Max Weber, Martin Heidegger and the Frankfurt School, also calls "instrumental rationality" or "calculative thinking". Starting from this underlying vision, I then try to provide an analysis of Gadamer's conception of technoscience, highlighting at the same time several points: first, that Gadamer's philosophical hermeneutics should not be interpreted as a one-sidedly anti-scientific philosophy, but rather as an anti-scientistic one (which is of course something quite different); second, that his reconstruction of the crisis of modernity never issues in a "totalizing" critique of reason, nor in a pessimistic-negative philosophy of history centred on the idea of an ineluctable course of events guided by an "irrational" rationality contaminated by the craving for power and domination; third, and finally, that Gadamer's philosophy, despite the inveterate interpretations that tend to see in it a traditionalist, authoritarian and radically anti-Enlightenment way of thinking, by no means intends to reject the modern scientific Enlightenment tout court, or to disown its most important achievements, but more simply to "correct" some of its tendencies and to recover a broader, more comprehensive notion of reason, capable of accounting also for those aspects of human experience which, in the eyes of a "limited" rationality such as the scientistic one, can only appear as mere residues of irrationality.
Having thus examined in the first two sections what we may call the pars destruens of Gadamer's philosophy, in the third and final section, entitled A Therapy for the Crisis of Modernity: the Rediscovery of Experience and Practical Knowledge, I turn to its pars construens, which in my judgement consists in a critical recovery of what he calls "another kind of knowledge", that is, in an attempt to rehabilitate all those pre- and extra-scientific forms of knowledge and experience that Gadamer regards as constitutive of the "hermeneutical dimension" of human existence. My analysis of Gadamer's conception of Verstehen and Erfahrung, as forms of a "practical knowledge (praktisches Wissen)" different in principle from theoretical and technical knowledge, thus leads to an overall interpretation of philosophical hermeneutics as a genuine practical philosophy: that is, as an effort of philosophical clarification of that prescientific, intersubjective, common-sense knowledge actually operative in the sphere of our Lebenswelt and of our practical existence. This, finally, also leads inevitably to an emphasis on the ethical-political implications of Gadamer's hermeneutics. In particular, I try to examine Gadamer's conception of ethics, taking into account its relations to the moral doctrines of Plato, Aristotle, Kant and Hegel, and, in closing, to outline a profile of his philosophical hermeneutics as a philosophy of dialogue, solidarity and freedom.
Abstract:
This study concerns the representation of space in Caribbean literature, both Francophone and Anglophone, and in particular, but not exclusively, in Martinican literature, in the works of authors born on the island. The analysis focuses on the second half of the twentieth century, a period in which Martinican novel production increased considerably and in which the representation and role of space held a prominent place. The thesis thus explores the literary modalities of this representation. The work consists of five chapters, and the critical and methodological approaches are both analytical and comparative. The first chapter, "The Caribbean space: geography, history and society", presents the geographic context through an analysis of the major historical and political events that occurred in the Caribbean archipelago, and in particular in the French Antilles, from the first colonization until départementalisation. The first section, "The colonized space: a historical-political excursus", explores the history of the European colonization that forever marked the theatre of the relationship between Europe, Africa and the New World. This social situation set in motion the long and complex process of "re-appropriation and renegotiation of the space" (second section), always the space of the Other, which concerns both Antillean society and the writers' universe. A series of questions therefore arises in the third section, "Landscape and identity": what is the function of space in the process of identity construction? What are the literary forms and representations of space in the Caribbean context? Can writing be a tool for defining cultural identity, both individual and collective? The second chapter, "The literary representation of the Antillean space", is a methodological analysis of the notions of literary space and the descriptive genre. The first section, "The literary space of and in the novel", surveys the theories of critics such as Blanchot, Bachelard, Genette and Greimas, and in particular the recent innovations of the 20th century; the second, "Space of the Antilles, space of the writing", attempts to apply these theories to the Antillean literary space. Finally, the last section, "Signs on the page: the symbolic places of the Antillean novel landscape", presents an inventory of the most recurrent Antillean places (mornes, ravines, traces, cachots, En-ville, ...), symbols of history and the past, described in literary works but according to new modalities of representation. The third chapter, the core of the thesis, "Re-drawing the map of the French Antilles", focuses the study of space representation on Francophone literature, in particular on selected works by four Martinican writers: Roland Brival, Édouard Glissant, Patrick Chamoiseau and Raphaël Confiant. Throughout this section, a spatial evolution emerges step by step, from the first to the second section, whose titles are linked together: "The novel space evolution: from the forest of the morne... to the jungle of the ville". The virgin and uncontaminated space of the Antilles prior to colonization, where the Indios lived in harmony with nature, finds representation in the works of both Brival (Le sang du roucou, Le dernier des Aloukous) and Glissant (Le Quatrième siècle, Ormerod). The arrival of the European colonizer brought a violent and sudden metamorphosis of the original space and landscape, together with the traditions and culture of the Carib population.
These radical changes are visible in the works of Chamoiseau (Chronique des sept misères, Texaco, L'esclave vieil homme et le molosse, Livret des villes du deuxième monde, Un dimanche au cachot) and Confiant (Le Nègre et l'Amiral, Eau de Café, Ravines du devant-jour, Nègre marron), which explore the urban space of the Creole En-ville. The fourth chapter represents the second step, the Anglophone novel space, in the exploration of the literary representation of space, through an analytical study of the works of three Anglophone writers: the nineteenth-century Lafcadio Hearn (A Midsummer Trip To the West Indies, Two Years in the French West Indies, Youma) and the contemporary authors Derek Walcott (Omeros, Map of the New World, What the Twilight Says) and Edward Kamau Brathwaite (The Arrivants: A New World Trilogy). The Anglophone voice of the Caribbean archipelago makes a very interesting contribution to the critical idea of a spatial evolution in the literary representation of space that began with the Francophone production: "The spatial evolution goes on: from the Martinique sketches of Hearn... to the modern bards of the Caribbean archipelago" is the new linked title of the two sections. The fifth chapter, "Extended look, space shared: the Caribbean archipelago", is a comparative analysis of the results achieved in the prior sections, through a dialogue among all the texts, in the first section, "Francophone and Anglophone representations of space compared: differences and analogies". The last section, instead, attempts to renegotiate the conventional notions of space and place, moving from a geographical and physical meaning to the new concept of the "commonplace": not a synonym of prejudice, but a "common place" of sharing and dialogue. The question posed in the last section, "The 'commonplaces' of the physical and mental map of the Caribbean archipelago: toward a non-place?", contains the critical idea of the entire thesis.
Abstract:
Curved mountain belts have always fascinated geologists and geophysicists because of their peculiar structural setting and the geodynamic mechanisms of their formation. The need to study orogenic bends arises from the numerous questions that geologists and geophysicists have tried to answer over the last two decades, such as: what are the mechanisms governing the formation of orogenic bends? Why do they form? Do they develop under particular geological conditions, and if so, what are the most favorable conditions? What are their relationships with the deformational history of the belt? Why is the shape of arcuate orogens in many parts of the Earth so different? What factors control the shape of orogenic bends? Paleomagnetism has proved to be one of the most effective techniques for documenting the deformation of a curved belt, through the determination of vertical-axis rotations. In fact, the pattern of rotations within a curved belt can reveal the occurrence of a bending, and its timing. Nevertheless, paleomagnetic data alone are not sufficient to constrain the tectonic evolution of a curved belt. Usually, structural analysis complements paleomagnetic data in defining the kinematics of a belt, through kinematic indicators on brittle fault planes (i.e., slickensides, mineral fiber growth, S-C structures). My research program focused on the study of curved mountain belts through paleomagnetism, in order to define their kinematics, timing, and mechanisms of formation. Structural analysis, performed only in some regions, supported and complemented the paleomagnetic data. In particular, three arcuate orogenic systems were investigated: the Western Alpine Arc (NW Italy), the Bolivian Orocline (Central Andes, NW Argentina), and the Patagonian Orocline (Tierra del Fuego, southern Argentina). The bending of the Western Alpine Arc has so far been investigated using different approaches, though few were based on reliable paleomagnetic data. Results from our paleomagnetic study carried out in the Tertiary Piedmont Basin, located on top of the Alpine nappes, indicate that the Western Alpine Arc is a primary bend that was subsequently tightened by a further ~50° during Aquitanian-Serravallian times (23-12 Ma). This mid-Miocene oroclinal bending, superimposed onto a pre-existing Eocene non-rotational arc, is the result of a composite geodynamic mechanism in which slab rollback, mantle flow, and rotating thrust emplacement are intimately linked. Relying on our paleomagnetic and structural evidence, the Bolivian Orocline can be considered a progressive bend whose formation was driven by the along-strike gradient of crustal shortening. The documented clockwise rotations of up to 45° are compatible with a secondary-bending mechanism operating after Eocene-Oligocene times (30-40 Ma), and their nature is probably related to the widespread shearing taking place between zones of differential shortening. Since ~15 Ma, the activity of N-S left-lateral strike-slip faults in the Eastern Cordillera, at the border with the Altiplano-Puna plateau, has induced up to ~40° of counterclockwise rotation along the fault zone, locally cancelling the regional clockwise rotation. We propose that mid-Miocene strike-slip activity developed in response to compressive stress (related to body forces) at the plateau margins, caused by the progressive lateral (southward) growth of the Altiplano-Puna plateau, spreading laterally from the overthickened crustal region at the salient apex.
The growth of plateaux by lateral spreading seems to be a mechanism common to other major plateaux on Earth (e.g., the Tibetan plateau). Results from the Patagonian Orocline represent the first reliable constraint on the timing of bending at the southern tip of South America. They indicate that the Patagonian Orocline has not undergone any significant rotation since early Eocene times (~50 Ma), implying that it may be considered either a primary bend or an orocline formed during the late Cretaceous-early Eocene deformation phase. This result has important implications for the opening of the Drake Passage at ~32 Ma, which is therefore not related to the formation of the Patagonian Orocline, but solely a consequence of Scotia plate spreading. Finally, relying on the results and implications from the study of the Western Alpine Arc, the Bolivian Orocline, and the Patagonian Orocline, general conclusions on the formation of curved mountain belts are drawn.
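Since the argument above turns on vertical-axis rotations, a minimal sketch of the underlying quantity may help: the rotation R of a site is estimated as the angular difference between the observed paleomagnetic declination and the declination expected from a reference pole (positive clockwise). The numbers in the example are hypothetical.

```python
# Vertical-axis rotation from paleomagnetic declinations, wrapped to
# (-180, 180] degrees; positive values indicate clockwise rotation.

def vertical_axis_rotation(observed_decl_deg: float,
                           expected_decl_deg: float) -> float:
    r = (observed_decl_deg - expected_decl_deg) % 360.0
    return r - 360.0 if r > 180.0 else r

# A site with an observed declination of 47 deg where 355 deg is expected
# records ~52 deg of clockwise rotation:
print(vertical_axis_rotation(47.0, 355.0))  # 52.0
```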
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation by means of data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g., visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation among these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) the recognition of the object is realized by the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and prior-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network devoted to the representation of objects as collections of sensory-motor features is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons under very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
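As a companion to PART 1, here is a minimal sketch of the building block those networks are made of: a single Wilson-Cowan excitatory-inhibitory pair. The coupling constants are the classic textbook values and the sigmoid parameters are illustrative; this is not the thesis model itself, whose parameters and coupling scheme are not given in the abstract.

```python
# Minimal Wilson-Cowan excitatory (E) / inhibitory (I) pair, Euler-integrated.
# With these illustrative settings the pair should settle into relaxation
# oscillations (E rises, recruits I, I shuts E down, both decay, repeat);
# the oscillation period scales with tau.
import numpy as np

def s(x, a, theta):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(p=1.25, t_end=0.5, dt=1e-4, tau=0.010):
    """Simulate E/I activities driven by a constant external input p,
    with the classic coupling constants c1..c4 = 16, 12, 15, 3."""
    n = int(t_end / dt)
    e, i = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        e[k + 1] = e[k] + dt / tau * (
            -e[k] + s(16 * e[k] - 12 * i[k] + p, 1.3, 4.0))
        i[k + 1] = i[k] + dt / tau * (
            -i[k] + s(15 * e[k] - 3 * i[k], 2.0, 3.7))
    return e, i

e, i = wilson_cowan()
half = len(e) // 2
print(f"E over second half: min {e[half:].min():.2f}, max {e[half:].max():.2f}")
```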
Abstract:
This experimental thesis concerns the study of the long-term behaviour of ancient bronzes recently excavated from burial conditions. The scientific interest is to clarify the effect of soil parameters on the degradation mechanisms of ancient bronze alloys. The work considered bronzes recovered from archaeological sites in the region of Dobrudja, Romania. The first part of the research work was dedicated to the characterization of the bronze artefacts using non-destructive (micro-FTIR in reflectance mode) and micro-destructive (based on sampling and analysis of a stratigraphic section by OM and SEM-EDX) methods. Burial soils were geologically classified and analyzed by chemical methods (pH, conductivity, anion content). Most of the objects analyzed showed a coarse and inhomogeneous corroded structure, often made up of several corrosion layers. This has been explained by the silty nature of the soils, which contain a low amount of clay and are therefore quite accessible to water and air. The main cause of the high dissolution rate of bronze alloys is the alternating water saturation and unsaturation of the soil, for example on a seasonal scale. Moreover, due to the vicinity of the Black Sea, the detrimental effect of chlorine was evidenced for a few objects, which were affected by bronze disease. A general classification of corrosion layers was achieved by comparing the values of the Cu/Sn ratio in the alloy and in the patina. Decuprification is a general trend, but enrichment of copper within the corrosion layers, due to the formation of thick layers of cuprite (Cu2O), is pointed out as well. Uncommon corrosion products and degradation patterns are presented too; they are probably due to peculiar local conditions during burial, such as anaerobic or fluctuating environmental conditions. In order to acquire a better insight into the corrosion mechanisms, the second part of the thesis concerned simulation experiments conducted on commercial Cu-Sn alloys whose composition resembles that of the ancient artefacts. Electrochemical measurements were conducted in natural electrolytes, such as solutions extracted from natural soil (sampled at the archaeological sites) and seawater. Cyclic potentiodynamic experiments allowed the corrosion mechanism to be appreciated in both cases. The soil-extract electrolyte was evaluated to be a non-aggressive medium, while an artificial solution prepared by increasing the concentration of anions caused pitting corrosion of the alloy, as demonstrated by optical observations. In particular, electrochemical impedance spectroscopy allowed a qualitative assessment of the nature of the corroded structures formed in soil and seawater. A double-layered structure is proposed, which differs in the two cases in the nature of the internal passive layer, which is defective and porous in the case of seawater.
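For readers unfamiliar with how impedance spectra are read, the sketch below computes the complex impedance of a simple Randles equivalent circuit (solution resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance). This is a generic textbook circuit with arbitrary values, not the circuit fitted in the thesis; corroded bronze typically requires more elaborate elements (e.g., for porous layers).

```python
# Impedance of a Randles cell: Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl).
# Component values are arbitrary placeholders.
import numpy as np

def randles_impedance(freq_hz, rs=50.0, rct=5e4, cdl=20e-6):
    """Complex impedance (ohm) of Rs in series with (Rct || Cdl)."""
    omega = 2 * np.pi * freq_hz
    return rs + rct / (1 + 1j * omega * rct * cdl)

freqs = np.logspace(-2, 5, 8)          # 10 mHz .. 100 kHz
for f, z in zip(freqs, randles_impedance(freqs)):
    print(f"{f:10.2e} Hz  |Z| = {abs(z):10.1f} ohm  "
          f"phase = {np.degrees(np.angle(z)):6.1f} deg")
```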
Abstract:
The purpose of this research is to deepen the study of the section in architecture. The survey takes as its focus the project Teatro Domestico by Aldo Rossi, built for the XVII Triennale di Milano in 1986, and, through its application to several architectural topics, verifies the timeliness and fertility of the section in new compositional exercises. Through the study of certain areas of Rossi's theory, we have tried to find a common thread for reading the theater project. The theater is the place of the ephemeral and the artificial, which is why its destiny is its end and fatal loss. The design and construction of theater settings has always had a double meaning, between the value of civil architecture and the testing of newly available technologies. Rossi's experiences in this area are clear examples of the inseparable relationship between the representation of architecture as art and the design of architecture as a model of reality. In the Teatro Domestico, the distinction between representation and the real world is constantly cancelled and restored through the reversal of meaning and through jumps of scale. At present, studies conducted on Rossi's work concern the relationship between architectural composition and the theory of form, focusing on the compositional development of a design process between typological analysis and formal invention. This research, through the analysis of a few projects and drawings, tries to examine that issue through the rules of composition, both graphic and constructional, hoping to decipher the mechanism underlying the invention. The almost total lack of published material on the Teatro Domestico project, and the opportunity to visit the archives that preserve the drawings, allowed the author of this study to explore the issues internal to the project, placing this research as a first step toward possible further analyses of Rossi's works linked to the world of performance. The final aim is therefore to produce material that can best describe Rossi's work. Through the reading of the material published by the author himself and the study of unpublished material preserved in the archives, it was possible to develop new material and to increase knowledge of a work that would otherwise be difficult to analyze. The research is divided into two parts. The first, taking into account the close relationship, often mentioned by Rossi himself, between archaeology and architectural composition, stresses the importance of the tipo as a system for reading urban composition as well as an open tool of invention. Resuming Ezio Bonfanti's essay on the architect's work, we investigate how the paratactic method was applied to the early works and how, subsequently, the process reached an accentuated complexity while keeping its basic terms stable. Following a brief introduction to the concept of the section and the different interpretations the term has had over time, we try to identify through it a methodology for reading Rossi's projects. The result is a consistently typological interpretation of the term, related not only to composition in plan but also to the elevations: the section is understood as the overturning of the elevation onto the same plane, revealing not a different approach but a similarity of characters among the terms used. The identification of architectural phonemes allows comparison with other arts.
The research moves in the direction of language, trying to identify the relationship between representation and construction, between the ephemeral and the real world. In this sense it highlights the similarities between the graphic material produced by Rossi and some important examples by contemporary authors. The comparison of his compositional system with the surrealist world of painting and literature facilitates the understanding and identification of the possible rules applied by Rossi. The second part of the research focuses on the intents of the chosen project. The Teatro Domestico embodies a number of elements that seem to conclude (marking an end point but also a new start) the author's itinerary. With it, the experiments on the theater begun with the project for the Teatrino Scientifico (1978) and continued with the Teatro del Mondo (1979) turn into a lay tabernacle representing the collective and private memory of the city. Starting from a reading of the project, through the collection of published material, we carried out an analysis of the explicit themes of the work, identifying its conceptual references. Following the examination of the original unpublished materials kept in the Aldo Rossi Collection of the Canadian Centre for Architecture in Montréal, a virtual reconstruction of the project was implemented using existing techniques of digital representation, adding a new element, beyond the archival material, for future studies. The reconstruction is part of a larger research effort in which current technologies of composition and representation in architecture stand side by side with research on this architect's method of composition. The results achieved add to past experiences in the reconstruction of some of Aldo Rossi's lost works. A partial objective is to reactivate a discourse around this work, considered minor among the others born in his prolific activity, and to reassess such projects, raising the ephemeral works to the level of his most frequented ones and giving them the value they have earned. In conclusion, the research aims to open a new field of interest in the section, not only as a technical instrument for representing an idea, but as an actual mechanism through which composition takes form and the idea is developed.
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices) and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral-property determinations, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), exploiting the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combining the three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to classify the galaxy population exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows galaxies to be classified with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of the work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two cluster definitions is very high. ``Early''- and ``late''-type galaxies are well defined by the spectral, photometric and morphological properties, both when these are considered separately and the classifications then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
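To make the pipeline concrete, here is a hedged sketch of the PCA-plus-soft-clustering idea on mock data. Plain fuzzy c-means is used as a simpler stand-in for the Unsupervised Fuzzy Partition algorithm, and two Gaussian blobs stand in for the blue-cloud/red-clump bimodality; none of the numbers come from zCOSMOS.

```python
# PCA projection of 8 mock galaxy properties, then fuzzy c-means (a simpler
# stand-in for UFP) in PC space; memberships are soft, so no hard boundary.
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_cmeans(x, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns (centers, membership matrix u)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))          # soft memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Mock "galaxies": 8 properties drawn around two underlying types
rng = np.random.default_rng(1)
early = rng.normal(loc=1.0, scale=0.3, size=(300, 8))
late = rng.normal(loc=-1.0, scale=0.3, size=(300, 8))
data = np.vstack([early, late])

pcs = PCA(n_components=2).fit_transform(data)
centers, u = fuzzy_cmeans(pcs, c=2)
print("cluster sizes (hard assignment):", np.bincount(u.argmax(axis=1)))
```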
Abstract:
In the framework of developing defect-based life models, in which breakdown is explicitly associated with partial discharge (PD)-induced damage growth from a defect, ageing tests and PD measurements were carried out in the lab on polyethylene (PE) layered specimens containing artificial cavities. PD activity was monitored continuously during ageing. A quasi-deterministic series of stages can be observed in the behavior of the main PD parameters (i.e., discharge repetition rate and amplitude). Phase-resolved PD patterns at various ageing stages were reproduced by numerical simulation based on a physical discharge model devoid of adaptive parameters. The evolution of the simulation parameters provides insight into the physical-chemical changes taking place at the dielectric/cavity interface during the ageing process. PD activity shows similar time behavior under constant cavity-gas volume and constant cavity-gas pressure conditions, suggesting that the variation of PD parameters may not be attributed to the variation of the gas pressure. Brownish PD byproducts, consisting of oxygen-containing moieties, and degradation pits were found at the dielectric/cavity interface. It is speculated that the change in PD activity is related to the composition of the cavity gas, as well as to the properties of the dielectric/cavity interface.
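As background for the phase-resolved patterns mentioned above, the sketch below shows how a PRPD pattern is typically accumulated: each discharge event is binned by the AC phase angle at which it occurs and by its apparent charge, giving a 2-D histogram. The event data here are synthetic placeholders, not measurements from these tests.

```python
# Accumulate a phase-resolved PD (PRPD) pattern: a 2-D event-count histogram
# over (AC phase angle, apparent charge). Event data are synthetic.
import numpy as np

def prpd_pattern(phases_deg, amplitudes_pc, phase_bins=72, amp_bins=50):
    """2-D event-count histogram over (phase, amplitude)."""
    return np.histogram2d(
        phases_deg, amplitudes_pc,
        bins=[phase_bins, amp_bins],
        range=[[0, 360], [0, amplitudes_pc.max()]],
    )[0]

# Synthetic events clustered on the rising slopes of the AC cycle
rng = np.random.default_rng(0)
phases = np.concatenate([rng.normal(45, 20, 5000),
                         rng.normal(225, 20, 5000)]) % 360
amps = rng.gamma(shape=2.0, scale=50.0, size=10000)   # apparent charge, pC
pattern = prpd_pattern(phases, amps)
print(pattern.shape, pattern.sum())                    # (72, 50) 10000.0
```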
Abstract:
Can space and place foster child development, and in particular social competence and ecological literacy? If so, how can space and place do that? This study shows that the answer to the first question is positive and then tries to explain the way space and place can make a difference. The thesis begins with a review of literature from different disciplines: child development and child psychology, education, environmental psychology, architecture and landscape architecture. Some bridges among these disciplines are created, and in some cases the ideas from the different areas of research merge: thus, this is an interdisciplinary study. The interdisciplinary knowledge from these disciplines is translated into a range of design suggestions that can foster the development of social competence and ecological literacy. Using scientific knowledge from different disciplines is a way of introducing forms of evidence into the development of design criteria. However, the definition of design criteria also has to pass through the study of a series of school buildings and unbuilt projects: case studies can make a positive contribution to the criteria, because examples and good practices can help "translate" the theoretical knowledge into design ideas and illustrations. To do that, the different case studies have to be assessed in relation to the various themes that emerged in the literature review. Finally, research by design can be used to help define the illustrated design criteria: based on all the background knowledge that has been built up, the role of the architect is to provide a series of different design solutions that answer the different "questions" that emerged in the literature review.