841 results for Cognitive Change Process


Relevance:

30.00%

Publisher:

Abstract:

Studies have shown that the platelet APP ratio (the ratio, expressed as a percentage, of the 120-130 kDa to the 110 kDa isoforms of the amyloid precursor protein) is reduced in patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). In the present study, we sought to determine whether baseline APP ratio predicts the conversion from MCI to AD dementia after 4 years of longitudinal assessment. Fifty-five older adults with varying degrees of cognitive impairment (34 with MCI and 21 with AD) were assessed at baseline and after 4 years. MCI patients were re-classified according to conversion status at follow-up: 25 individuals retained the diagnostic status of MCI and were considered stable cases (MCI-MCI), whereas in nine cases a diagnosis of dementia due to AD was ascertained. The APP ratio (APPr) was determined by Western blot in platelet samples collected at baseline. We found a significant reduction of APPr in MCI patients who converted to dementia at follow-up; these individuals had baseline APPr values similar to those of demented AD patients. The overall accuracy of APPr in identifying subjects with MCI who will progress to AD was 0.74 ± 0.10 (p = 0.05). A cut-off of 1.12 yielded a sensitivity of 75% and a specificity of 75%. Platelet APPr may be a surrogate marker of the disease process in AD, with potential implications for the assessment of abnormalities in APP metabolism in patients with, and at risk for, dementia. However, diagnostic accuracy was relatively low; studies in larger samples are therefore needed to determine whether APPr warrants use as a biomarker to support the early diagnosis of AD.
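
To illustrate how a reported cut-off translates into sensitivity, specificity and overall accuracy, the sketch below applies a threshold to hypothetical APPr values; the data, variable names and threshold direction are illustrative assumptions, not the study's actual dataset.

```python
# Illustrative sketch: sensitivity/specificity of an APPr cut-off (hypothetical data).
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical baseline APPr values; lower values are expected in converters.
appr      = np.array([1.45, 1.30, 1.22, 1.08, 1.40, 1.25, 1.05, 0.98, 1.20, 1.10, 0.90, 1.02])
converted = np.array([0,    0,    0,    0,    0,    0,    1,    1,    1,    1,    1,    1])

cutoff = 1.12                          # cut-off value reported in the abstract
predicted_converter = appr <= cutoff   # low APPr flags likely conversion

tp = np.sum(predicted_converter & (converted == 1))
fn = np.sum(~predicted_converter & (converted == 1))
tn = np.sum(~predicted_converter & (converted == 0))
fp = np.sum(predicted_converter & (converted == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(converted, -appr)  # negate: lower APPr means higher risk

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```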

Relevance:

30.00%

Publisher:

Abstract:

Dengue has become a global public health threat, with over 100 million infections annually, and to date there is no specific vaccine or antiviral drug. The structures of the envelope (E) proteins of the four known serotypes of the dengue virus (DENV) are already known, but molecular details of their structural behavior in solution are lacking for the distinct environmental conditions to which the DENVs are subjected, from the digestive tract of the mosquito up to replication inside the host cell. Such detailed knowledge is important because of the multifunctional character of the E protein: it mediates the early events of cell entry, via receptor endocytosis, and, as a class II protein, plays a decisive role in the process of membrane fusion. The proposed infection mechanism asserts that once in the endosome, at low pH, the E homodimers dissociate and insert into the endosomal lipid membrane after an extensive conformational change, mainly in the relative arrangement of their three domains. In this work we employ all-atom, explicit-solvent Molecular Dynamics simulations to specify the thermodynamic conditions under which the E protein is induced to undergo extensive structural changes, such as during the process of pH reduction. We study the structural behavior of the E protein monomer in acid-pH solutions of distinct ionic strength. Extensive simulations are carried out with all histidine residues in their fully protonated form at four distinct ionic strengths. The results are analyzed in detail from structural and energetic perspectives, and the virtual protein movements are described by means of principal component analysis. As the main result, we found that at acid pH and physiological ionic strength the E protein undergoes a major structural change; for lower or higher ionic strengths, the crystal structure is essentially maintained throughout the extensive simulations. On the other hand, at basic pH, when all histidine residues are in the unprotonated form, the protein structure is very stable for ionic strengths ranging from 0 to 225 mM. Therefore, our findings support the hypothesis that the histidines constitute the hot spots that induce conformational changes of the E protein at acid pH, and they provide extra motivation for the development of new ideas for antiviral compound design.
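
The principal component analysis mentioned above is conventionally performed on the covariance matrix of the atomic coordinates over the trajectory. The sketch below shows that standard procedure on a synthetic coordinate array; the array, its dimensions and the random data are placeholders, since the actual trajectories are not part of this abstract.

```python
# Minimal PCA of a (hypothetical) MD trajectory: frames x (3 * n_atoms) coordinates.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 500, 50
coords = rng.normal(size=(n_frames, 3 * n_atoms))   # placeholder for aligned Cartesian coordinates

# 1. Remove the average structure from every frame.
fluctuations = coords - coords.mean(axis=0)

# 2. Covariance matrix of the atomic fluctuations.
cov = np.cov(fluctuations, rowvar=False)

# 3. Eigen-decomposition: eigenvectors are the principal (collective) modes.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 4. Project the trajectory onto the first two principal components.
projection = fluctuations @ eigvecs[:, :2]

explained = eigvals[:2] / eigvals.sum()
print("variance explained by PC1, PC2:", np.round(explained, 3))
```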

Relevance:

30.00%

Publisher:

Abstract:

With the increase in research on the components of body image, validated instruments are needed to evaluate its dimensions. The Body Change Inventory (BCI) assesses strategies used to alter body size among adolescents. The aim of this study was to describe the translation and evaluation of the semantic equivalence of the BCI in the Portuguese language. The process involved the steps of (1) translation of the questionnaire into Portuguese; (2) back-translation into English; (3) evaluation of semantic equivalence; and (4) assessment of comprehension by professional experts and the target population. The six subscales of the instrument were translated into Portuguese. Language adaptations were made to render the instrument suitable for the Brazilian reality. The questions were interpreted as easily understandable by both experts and young people. The Body Change Inventory has thus been translated and adapted into Portuguese. Evaluation of operational, measurement and functional equivalence is still needed.

Relevance:

30.00%

Publisher:

Abstract:

Background. - Persistent impairment in cognitive function has been described in euthymic individuals with bipolar disorder. Collective work indicates that obesity is associated with reduced cognitive function in otherwise healthy individuals. This sub-group post-hoc analysis preliminarily examines the association between overweight/obesity and cognitive function in euthymic individuals with bipolar disorder. Methods. - Euthymic adults with DSM-IV-TR-defined bipolar I or II disorder were enrolled. Subjects included in this post-hoc analysis (n = 67) were divided into two groups (normal weight, body mass index [BMI] of 18.5-24.9 kg/m²; overweight/obese, BMI ≥ 25.0 kg/m²). Demographic and clinical information was obtained at screening. At baseline, study participants completed a comprehensive cognitive battery assessing premorbid IQ, verbal learning and memory, attention and psychomotor processing speed, executive function, general intellectual abilities, recollection and habit memory, as well as self-perceptions of cognitive failures. Results. - BMI was negatively correlated with attention and psychomotor processing speed as measured by the Digit Symbol Substitution Test (P < 0.01). Overweight and obese individuals with bipolar disorder had a significantly lower score on the Verbal Fluency Test when compared to normal-weight subjects (P < 0.05). For all other measures of cognitive function, non-significant trends suggesting a negative association with BMI were observed, with the exception of measures of executive function (i.e. Trail Making Test B) and recollection memory (i.e. process-dissociation task). Conclusion. - Notwithstanding the post-hoc methodology and relatively small sample size, the results of this study suggest a possible negative effect of overweight/obesity on cognitive function in euthymic individuals with bipolar disorder. Taken together, these data provide the impetus for more rigorous evaluation of the mediational role of overweight/obesity (and other medical co-morbidity) in cognitive function in psychiatric populations. (C) 2011 Elsevier Masson SAS. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background The responsiveness of oral health-related quality of life (OHRQoL) instruments has become relevant, given the increasing tendency to use OHRQoL measures as outcomes in clinical trials and evaluation studies. The purpose of this study was to assess the responsiveness to dental treatment of the Brazilian Scale of Oral Health Outcomes for 5-year-old children (SOHO-5). Methods One hundred and fifty-four children and their parents completed the child self-report and parental report versions of the SOHO-5 prior to treatment and 7 to 14 days after the completion of treatment. The post-treatment questionnaire also included a global transition judgment that assessed subjects' perceptions of change in their oral health following treatment. Change scores were calculated by subtracting post-treatment SOHO-5 scores from pre-treatment scores. Longitudinal construct validity was assessed by using one-way analysis of variance to examine the association between change scores and the global transition judgments. Measures of responsiveness included standardized effect sizes (ES) and the standardized response mean (SRM). Results The improvement of children's oral health after treatment is reflected in mean pre- and post-treatment SOHO-5 scores, which declined from 2.67 to 0.61 (p < 0.001) for the child self-reports and from 4.04 to 0.71 (p < 0.001) for the parental reports. Mean change scores showed a gradient in the expected direction across categories of the global transition judgment, and there were significant differences in the pre- and post-treatment scores of those who reported improving a little (p < 0.05) and those who reported improving a lot (p < 0.001). For both versions, the ES and SRM based on mean change scores, for total scores and for categories of the global transition judgment, were moderate to large. Conclusions The Brazilian SOHO-5 is responsive to change and can be used as an outcome indicator in future clinical trials. Both the parental and the child versions presented satisfactory results.
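
The two responsiveness statistics named above have standard definitions: the effect size divides the mean change by the standard deviation of the baseline scores, while the standardized response mean divides it by the standard deviation of the change scores. A minimal sketch with hypothetical pre- and post-treatment totals:

```python
# Standardized effect size (ES) and standardized response mean (SRM) from hypothetical scores.
import numpy as np

pre  = np.array([3.0, 2.5, 4.0, 1.5, 3.5, 2.0, 5.0, 2.5])   # pre-treatment SOHO-5 totals (illustrative)
post = np.array([0.5, 1.0, 0.5, 0.0, 1.5, 0.5, 1.0, 0.5])   # post-treatment totals

change = pre - post                       # positive change = improvement

es  = change.mean() / pre.std(ddof=1)     # mean change / SD of baseline scores
srm = change.mean() / change.std(ddof=1)  # mean change / SD of change scores

print(f"ES = {es:.2f}, SRM = {srm:.2f}")
```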

Relevance:

30.00%

Publisher:

Abstract:

Background Chronic exposure to musical auditory stimulation has been reported to improve cardiac autonomic regulation. However, it is not clear whether music acutely influences it in response to autonomic tests. We evaluated the acute effects of music on heart rate variability (HRV) responses to the postural change maneuver (PCM) in women. Method We evaluated 12 healthy women between 18 and 28 years old; HRV was analyzed in the time (SDNN, RMSSD, NN50 and pNN50) and frequency (LF, HF and LF/HF ratio) domains. In the control protocol, the women remained at seated rest for 10 minutes, quickly stood up within three seconds and remained standing still for 15 minutes. In the music protocol, the women remained at seated rest for 10 minutes, were exposed to music for 10 minutes, then quickly stood up within three seconds and remained standing still for 15 minutes. HRV was recorded at the following times: rest, music (music protocol only), and 0–5, 5–10 and 10–15 min of standing. Results In the control protocol, the SDNN, RMSSD and pNN50 indexes were reduced at 10–15 minutes after the volunteers stood up, while the LF (nu) index was increased at the same moment compared to seated rest. In the music protocol, the indexes did not differ from the control values, but RMSSD, pNN50 and LF (nu) differed from the music period. Conclusion Musical auditory stimulation attenuates the cardiac autonomic responses to the PCM.
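
For reference, the time-domain indices listed above are simple statistics of the RR (NN) interval series; the sketch below computes them from a short hypothetical tachogram in milliseconds.

```python
# Time-domain HRV indices (SDNN, RMSSD, NN50, pNN50) from a hypothetical RR-interval series (ms).
import numpy as np

rr = np.array([812, 790, 830, 845, 800, 770, 905, 880, 820, 795, 860, 840], dtype=float)

diff = np.diff(rr)                               # successive RR differences

sdnn  = rr.std(ddof=1)                           # SD of all normal-to-normal intervals
rmssd = np.sqrt(np.mean(diff ** 2))              # root mean square of successive differences
nn50  = int(np.sum(np.abs(diff) > 50))           # adjacent-interval differences larger than 50 ms
pnn50 = 100.0 * nn50 / diff.size                 # NN50 as a percentage of all interval pairs

print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, NN50={nn50}, pNN50={pnn50:.1f}%")
```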

Relevance:

30.00%

Publisher:

Abstract:

The aim of this research was to evaluate the bioremediation of a soil contaminated with wastes from a plasticizers industry located in São Paulo, Brazil. A 100-kg soil sample containing alcohols, adipates and phthalates was treated for 120 days in an aerobic slurry-phase reactor using indigenous and acclimated microorganisms from the sludge of a wastewater treatment plant of the plasticizers industry (11 g VSS kg⁻¹ dry soil). The soil pH and temperature were not corrected during bioremediation; soil humidity was adjusted weekly to maintain 40%. The biodegradation of the pollutants followed first-order kinetics; the removal efficiencies were above 61% and, among the analyzed plasticizers, adipate was removed to below the detection limit. Molecular biological analysis during bioremediation revealed a significant change in the dominant populations initially present in the reactor.
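
First-order removal kinetics means the pollutant concentration decays as C(t) = C0·exp(−k·t); the rate constant can be estimated from time-series concentrations, for example with the hypothetical data and log-linear fit sketched below (the concentrations are illustrative, not the study's measurements).

```python
# Fitting a first-order decay C(t) = C0 * exp(-k * t) to hypothetical concentration data.
import numpy as np

t = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)        # days
c = np.array([100.0, 72.0, 50.0, 37.0, 26.0, 19.0, 13.0])       # pollutant concentration (mg/kg)

# Linearize: ln C = ln C0 - k * t, then fit a straight line.
slope, intercept = np.polyfit(t, np.log(c), 1)
k, c0 = -slope, np.exp(intercept)

half_life = np.log(2) / k
removal_120d = 100.0 * (1.0 - np.exp(-k * 120.0))

print(f"k = {k:.4f} 1/day, C0 = {c0:.1f} mg/kg, half-life = {half_life:.1f} days, "
      f"predicted removal after 120 days = {removal_120d:.1f}%")
```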

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background Biofuels produced from sugarcane bagasse (SB) have shown promising results as a suitable alternative to gasoline. Biofuels provide unique strategic, environmental and socio-economic benefits. However, the production of biofuels from SB has a negative impact on the environment due to the use of harsh chemicals during pretreatment. Consecutive sulfuric acid-sodium hydroxide pretreatment of SB is an effective process that ultimately improves the accessibility of cellulase towards cellulose for sugar production. The alkaline hydrolysate of SB is a black liquor containing a high amount of dissolved lignin. Results This work evaluates the environmental impact of the residues generated during the consecutive acid-base pretreatment of SB. An advanced oxidative process (AOP) was used, based on the photo-Fenton reaction mechanism (Fenton reagent/UV). Experiments were performed in batch mode following an L9 factorial design (Taguchi orthogonal array design of experiments), considering three operating variables: temperature (°C), pH, and Fenton reagent (Fe2+/H2O2) + ultraviolet. Reduction of total phenolics (TP) and total organic carbon (TOC) were the response variables. Among the tested conditions, experiment 7 (temperature, 35°C; pH, 2.5; Fenton reagent, 144 ml H2O2 + 153 ml Fe2+; UV, 16 W) revealed the maximum reduction in TP (98.65%) and TOC (95.73%). Parameters such as chemical oxygen demand (COD), biochemical oxygen demand (BOD), BOD/COD ratio, color intensity and turbidity also showed a significant change in the AOP-treated lignin solution relative to the native alkaline hydrolysate. Conclusion The AOP based on the Fenton reagent/UV reaction mechanism showed efficient removal of TP and TOC from the sugarcane bagasse alkaline hydrolysate (lignin solution). To the best of our knowledge, this is the first report on the statistical optimization of the removal of TP and TOC from sugarcane bagasse alkaline hydrolysate employing a Fenton reagent-mediated AOP.
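
In a Taguchi L9 screening like the one described, each of the nine runs combines one of three levels of each factor, and the analysis typically compares the mean response at each factor level. The sketch below illustrates that main-effects calculation on hypothetical TP-reduction values; the coded design matrix and the responses are illustrative assumptions, not the study's data.

```python
# Main-effects analysis of an L9-type orthogonal array (three factors, three levels) - hypothetical responses.
import numpy as np

# Orthogonal layout for three three-level factors, levels coded 0, 1, 2 (illustrative coding).
design = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
factors = ["temperature", "pH", "Fenton reagent + UV"]

# Hypothetical % reduction of total phenolics for each of the nine runs.
tp_reduction = np.array([55.0, 62.0, 70.0, 68.0, 80.0, 58.0, 98.6, 75.0, 83.0])

for j, name in enumerate(factors):
    level_means = [tp_reduction[design[:, j] == lvl].mean() for lvl in (0, 1, 2)]
    print(name, "level means:", np.round(level_means, 1))

best_run = int(np.argmax(tp_reduction)) + 1
print("best run:", best_run, f"({tp_reduction.max():.1f}% TP reduction)")
```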

Relevance:

30.00%

Publisher:

Abstract:

A decline in cognitive ability is a typical feature of the normal aging process, and of neurodegenerative disorders such as Alzheimer's, Parkinson's and Huntington's diseases. Although their etiologies differ, all of these disorders involve local activation of innate immune pathways and associated inflammatory cytokines. However, clinical trials of anti-inflammatory agents in neurodegenerative disorders have been disappointing, and it is therefore necessary to better understand the complex roles of the inflammatory process in neurological dysfunction. The dietary phytochemical curcumin can exert anti-inflammatory, antioxidant and neuroprotective actions. Here we provide evidence that curcumin ameliorates cognitive deficits associated with activation of the innate immune response by mechanisms requiring functional tumor necrosis factor α receptor 2 (TNFR2) signaling. In vivo, the ability of curcumin to counteract hippocampus-dependent spatial memory deficits, to stimulate neuroprotective mechanisms such as upregulation of BDNF, to decrease glutaminase levels, and to modulate N-methyl-D-aspartate receptor levels was absent in mice lacking functional TNFRs. Curcumin treatment protected cultured neurons against glutamate-induced excitotoxicity by a mechanism requiring TNFR2 activation. Our results suggest that therapeutic approaches against cognitive decline designed to selectively enhance TNFR2 signaling are likely to be more beneficial than the use of anti-inflammatory drugs per se.

Relevance:

30.00%

Publisher:

Abstract:

[EN] A petroleum expert's view on the risks and benefits of oil exploration today in Canarias, considering the facts of climate change. The talk starts with an overview of the total petroleum development process, from exploration to post-abandonment, indicating some important risks and benefits of each stage, from a petroleum-industry and a personal perspective. A second part of the talk addresses the agreed facts of climate change and what these mean for us all. The end of the talk brings these two sections together in a summary.

Relevance:

30.00%

Publisher:

Abstract:

The topic of this study is surprise, regarded as an evolutionarily complex process with manifold implications in different fields: neurological, since specific correlates of surprise exist at more or less every level of neuronal processing (e.g. Rao and Ballard, 1999), and behavioral, inasmuch as our ability to quickly assess, recognize and learn from surprising events can be regarded as pivotal for survival (e.g. Ranganath and Rainer, 2003). In particular, starting from the belief that surprise is a psycho-evolutionary mechanism of primary relevance, this work aims to investigate whether there may be a substantial connection between the development of the emotion of surprise and specific developmental problems, or whether, in subjects with pervasive developmental disorders, surprise may represent an essential mechanism of emotional tuning, and consequently whether abnormalities in this process may underlie at least part of the cognitive and behavioural problems that characterize this pathology. Theoretical reasons led us to consider this particular pathological condition; they recall a broad area of research concerning the comprehension of belief as a marker of the ability to reason about the mental states of others (i.e. Theory of Mind) and, in addition, the detection of these subjects' specific difficulty in this field. On the experimental side, within the limits of this work, we compared the comprehension and expression of surprise in a sample of 21 children with pervasive developmental disorders (PDD) with a sample of 35 children without developmental problems, in the age range 3-12. Method After the customary approach to become friendly with the child, an experimenter and an accomplice showed three boxes of nuts, easy to distinguish from one another because of their different colours, and, working together with the child, the contents of one of the boxes were replaced with a different material (macaroni, pebbles), for the purpose of preparing a surprise for someone. At this stage, the accomplice excused himself/herself and left, and the experimenter suggested that the child prepare another surprise by replacing the contents of the second box. When the accomplice came back, the child was asked to prepare a surprise for him by picking out the box that he thought was the right one for the purpose. Afterwards, without the child's knowledge, the accomplice replaced the contents of one of the boxes with candies and asked the child to open the box, in order to see whether the child showed surprise. Results The data show a significant difference between the autistic group and the typically developing group in all four tests. The expression of surprise, too, is present to a significantly lower degree in the autistic group than in the control group. Moreover, autistic children do not provide appropriate metarepresentational explanations. Conclusion Our outcome, acknowledging the limits of our investigation at the experimental level (small sample size, no possibility of video recording to confirm the expressions), leads us to consider the possibility that surprise may be a relevant, or indicative, component in autistic spectrum disorders.

Relevance:

30.00%

Publisher:

Abstract:

If the historian's work is to understand the past as it was understood by the people who lived it, then perhaps it is not far-fetched to think that it is also necessary to communicate the results of research with tools that belong to an era and that influence the mentality of those who live in it. Emerging technologies, especially in the area of multimedia such as virtual reality, allow historians to communicate the experience of the past through more senses. How does history collaborate with information technologies, in particular regarding the possibility of making virtual historical reconstructions, with related examples and reviews? What most concerns historians is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of knowing history that can be considered valid. In other words, is the emotion that navigating a 3D reality can arouse a means capable of transmitting knowledge? Or is the idea we have of the past and of its study subtly changed the moment it is disseminated through 3D graphics? For some time, however, the discipline has begun to come to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularization of the past and by partial, unscientific popularizations of it. In a post-literary world we must begin to think that the visual culture in which we are immersed is changing our relationship with the past: this does not make the knowledge built up until now false, but it is necessary to recognize that there is more than one historical truth, sometimes written, sometimes visual. The computer has become an omnipresent platform for the representation and dissemination of information, and methods of interaction and representation are constantly evolving. It is along these two tracks that information technologies offer themselves to the service of history. The purpose of this thesis is precisely to explore, through the use of and experimentation with different tools and information technologies, how the past can be effectively narrated through three-dimensional objects and virtual environments, and how, as characterizing elements of communication, they can collaborate, in this particular case, with the historical discipline. This research reconstructs some lines of the history of the main factories active in Turin during the Second World War; recalling the close relationship that exists between structures and individuals, and in this city in particular between the factory and the workers' movement, it is inevitable to delve into the events of the Turin workers' movement, which during the struggle for Liberation was a political and social actor of primary importance in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories are the main targets of the bombings, and it is in the factories that a war of liberation is fought between the working class and the factory and city authorities. The factory becomes the place of the "usurpation of power" of which Weber speaks, the stage on which the various episodes of the war take place: strikes, deportations, occupations ....
The model of the city represented here is not a simple visualization but an information system in which the modelled reality is represented by objects that serve as the theatre for events with a precise chronological placement; within it, it is possible to select static renders (images), pre-computed films (animations) and interactively navigable scenarios, as well as to search bibliographic sources and scholars' comments specifically linked to the event in question. The objective of this work is to make the historical disciplines and computer science interact, through various projects, across the different technological opportunities the latter offers. The reconstruction possibilities offered by 3D are thus placed at the service of research, offering an integral vision capable of bringing us closer to the reality of the period under consideration and channelling all the results into a single presentation platform. Dissemination - Project "Multimedia Information Map Torino 1945". On the practical level, the project provides a navigable interface (Flash technology) representing the map of the city at the time, through which it is possible to gain a vision of the places and times in which the Liberation took shape, both on a conceptual and on a practical level. This interweaving of coordinates in space and time not only improves the understanding of the phenomena, but also creates greater interest in the subject through the use of highly effective (and appealing) dissemination tools, without losing sight of the need to validate the historical theses while serving as a didactic platform. Such a context requires an in-depth study of the historical events in order to reconstruct clearly a map of the city that is precise both topographically and at the level of multimedia navigation. The preparation of the map had to follow current standards, so the software solutions used were Adobe Illustrator for the creation of the topography and Macromedia Flash for the creation of the navigation interface. The descriptive database is of course consultable, being contained in the media support and fully annotated in the bibliography. It is the continuous evolution of information technologies and the massive spread of computer use that leads to a substantial change in historical study and learning; academic institutions and economic operators have embraced the demand coming from users (teachers, students, cultural heritage professionals) for a wider dissemination of historical knowledge through its computerized representation. On the didactic front, the reconstruction of a historical reality through computer tools also allows non-historians to experience at first hand the problems of research, such as missing sources, gaps in the chronology and the assessment of the truthfulness of facts through evidence. Information technologies allow a complete, unified and exhaustive vision of the past, channelling all the information onto a single platform and allowing even non-specialists to understand immediately what is being discussed. The best history book, by its nature, cannot do this, since it divides and organizes information differently. In this way students are given the opportunity to learn through a representation different from those they are used to.
The central premise of the project is that student learning outcomes can be improved if a concept or content is communicated through multiple channels of expression, in our case through text, images and a multimedia object. Didactics - The Conceria Fiorio is one of the symbolic places of the Turin Resistance. The project is a virtual reality reconstruction of the Conceria Fiorio in Turin. The reconstruction serves to enrich historical culture both for those who produce it, through careful research of the sources, and for those who can then benefit from it, above all young people who, attracted by the playful aspect of the reconstruction, learn more easily. Building a 3D artefact gives students the basis for recognizing and expressing the correct relationship between the model and the historical object. The phases of work through which the 3D reconstruction of the Conceria was reached were: in-depth historical research, based on sources, which may be archival documents or archaeological excavations, iconographic or cartographic sources, etc.; the modelling of the buildings on the basis of the historical research, to provide the polygonal geometric structure that allows three-dimensional navigation; and the creation, through computer-graphics tools, of the 3D navigation. Unreal Technology is the name given to the graphics engine used in numerous commercial video games. One of the fundamental characteristics of this product is a tool called Unreal editor, with which it is possible to build virtual worlds, and this is the tool used for this project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine. The free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and view specific information on certain points of interest. Navigation is carried out in the first person; a process of "spectacularization" of the visited environments through appropriate furnishings gives the user greater immersion, making the environment more credible and immediately readable. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without any programming work being necessary. This engine is therefore particularly suited to the rapid creation of prototypes of decent quality; the presence of a certain number of bugs, however, makes it partly unreliable. Using a video-game editor for this reconstruction points to the possibility of its use in teaching: what 3D simulations allow in this specific case is to let students experience the work of historical reconstruction, with all the problems the historian has to face in recreating the past. For historians, this work is intended as an experience in the direction of creating a broader expressive repertoire that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes those involved in teaching sceptical, but the experience of projects developed, especially abroad, shows that they are a good investment.
The fact that a software house that creates a commercially very successful video game includes in its product a series of tools allowing users to create their own worlds in which to play is symptomatic of the fact that the computer literacy of average users is growing ever more rapidly, and that the use of an editor such as Unreal Engine will in the future be an activity within the reach of an ever wider public. This puts us in a position to design more immersive teaching modules, in which the experience of research and of the reconstruction of the past intertwines with the more traditional study of the events of a given period. Interactive virtual worlds are often described as the key cultural form of the twenty-first century, as cinema was for the twentieth. The purpose of this work has been to suggest that there are great opportunities for historians in employing 3D objects and settings, and that they must seize them. Consider that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take when they have to be disseminated. A historical analysis carried out superficially or on erroneous premises can nevertheless be disseminated and gain credit in many circles if spread by captivating and modern means. This is why it makes no sense to bury a good piece of work in some library, waiting for someone to discover it; this is why historians must not ignore 3D. Our ability, as scholars and students, to perceive important ideas and orientations often depends on the methods we use to represent data and evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their objectives as researchers and teachers. A historical reconstruction can be very useful from an educational point of view not only for those who visit it but also for those who create it: the research phase necessary for the reconstruction can only increase the developer's cultural background. Conclusions - The most important outcome was the opportunity to gain experience in the use of media of this kind to narrate and make the past known. Reversing the cognitive paradigm I had learned in my humanities studies, I tried to derive what we might call "universal laws" from the objective data emerging from these experiments. From an epistemological point of view, computer science, with its capacity to manage impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has gone in this direction, seeking to learn and use current tools that in the future will have an ever greater presence in communication (including scientific communication) and that are the communication media par excellence for certain age groups (adolescents). Taking the argument to its extreme, we could say that the challenge that visual culture poses today to the traditional methods of doing history is the same one that Herodotus and Thucydides posed to the narrators of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city.
In a post-literary world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our ability as scholars and students to perceive important ideas and orientations often depends on the methods we use to represent data and evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their objectives as researchers and teachers. The experiences collected in the preceding pages lead us to think that, in a not too distant future, a tool such as the computer will be the primary means through which to transmit knowledge, and that from a didactic point of view its interactivity allows a degree of student involvement that no other modern communication medium can match.

Relevance:

30.00%

Publisher:

Abstract:

This research argues for an analysis of textual and cultural forms in the American horror film (1968-1998) by defining its so-called postmodern characteristics. The term "postmodern" will not denote a period in the history of cinema, but a series of forms and strategies recognizable in many American films. From a bipolar re-mediation and cognitive point of view, the postmodern phenomenon is considered a formal and epistemological re-configuration of the cultural "modern" system. The first section of the work examines theoretical problems concerning the "postmodern phenomenon" by defining its cultural and formal constants in different areas (epistemology, economy, mass media): convergence, fragmentation, manipulation and immersion represent the former, while "excess" is the morphology of the change, realizing the "fluctuation" of the previously consolidated system. The second section classifies the textual and cultural forms of American postmodern film, generally non-horror. The "classic narrative" structure, a coherent and consequent chain of causal cues toward a conclusion, is scattered by the postmodern constant of "fragmentation". New textual models arise, fragmenting the narrative ones into aggregations of data without causal-temporal logic. Considering the processes of "transcoding" and "remediation" between media, and the principle of "convergence" in the phenomenon, the essay aims to define these structures in postmodern film as "database forms" and "navigable space forms." The third section applies this classification to the American horror film (1968-1998). The formal constant of "excess" in the horror genre works on the paradigm of "vision": if postmodern film shows a crisis of "truth" in vision, in horror movies the excess of vision becomes "hyper-vision", that is, a "multiplication" of visions of death/blood/torture, and "intra-vision", which shows the impossibility of distinguishing the "real" vision from the virtual/imaginary one. In this perspective, the textual and cultural forms and strategies of postmodern horror film are predominantly: the "database-accumulation" forms, where the events result from a very simple "remote cause" serving as a pretext (as in Night of the Living Dead); and the "database-catalogue" forms, where the events follow one another around a "central" character or theme. In the latter case, the catalogue syntagms are connected by "consecutive" elements, building stories linked by the actions of a single character (usually the killer), or connected as non-consecutive episodes around a general theme: examples of the first kind are built on the model of The Wizard of Gore; the second, on films such as Mario Bava's I tre volti della paura. The "navigable space" forms are defined as: hyperlink a, where one universe fluctuates between reality and dream, as in Rosemary's Baby; hyperlink b, where two non-hierarchical universes converge, one real and the other fictional, as in the Nightmare series; hyperlink c, where several worlds are separate but become contiguous in the last sequence, as in Targets; and the last form, navigable-loop, which involves a textual line that suddenly stops and starts again, reflecting the pattern of a "loop" (as in Lost Highway). This essay analyses in detail the organization of "visual space" in the postmodern horror film by tracing representative patterns. It concludes by examining the "convergence" of technologies and cognitive structures of cinema and new media.

Relevance:

30.00%

Publisher:

Abstract:

The presented study carried out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces acting on building allocation; indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum; conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it represents the expression of the action of the driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothesized driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and they are generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory-variable analysis and for the identification of the key driving variables behind the site-selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test the validity of the methodology. The study area chosen for this test is the New District of Imola, characterized by a prevailing agricultural production vocation and an area where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out with reference to spatial data for the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values ranging from 0 to 1, representing the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends that occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for calibration.
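
A presence/absence response modelled against landscape predictors with a binomial GLM (logistic regression), as described above, can be sketched as follows; the predictor names and the synthetic data are illustrative assumptions, not the study's actual variables.

```python
# Sketch of a binomial GLM (logistic regression) for building presence/absence vs. landscape predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Hypothetical explanatory variables sampled at point locations.
slope_deg    = rng.uniform(0, 30, n)       # terrain slope (degrees)
dist_road_km = rng.exponential(1.0, n)     # distance to nearest road
dist_town_km = rng.exponential(3.0, n)     # distance to nearest urban centre

# Synthetic "true" process: buildings favour gentle slopes and proximity to roads and towns.
logit = 1.0 - 0.08 * slope_deg - 1.2 * dist_road_km - 0.3 * dist_town_km
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = building present, 0 = absent

X = sm.add_constant(np.column_stack([slope_deg, dist_road_km, dist_town_km]))
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Predicted probability of building occurrence (0-1), analogous to the probability grid in the study.
prob = model.predict(X)
print("mean predicted probability:", prob.mean().round(3))
```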

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation by means of data present in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes by different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (the binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the network previously developed, devoted to the representation of objects as a collection of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the response to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the 6-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with the cortex deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
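
As a toy illustration of the enhancement and inverse-effectiveness behaviours described above (not the thesis model itself), a single sigmoidal SC-like unit driven by two unisensory inputs already reproduces both effects: the multisensory response exceeds the best unisensory response, and the proportional gain is larger for weaker stimuli. All parameter values below are illustrative assumptions.

```python
# Toy sigmoidal SC-like unit: multisensory enhancement and inverse effectiveness (illustrative only).
import numpy as np

def sc_response(auditory, visual, slope=6.0, threshold=0.8):
    """Static sigmoid applied to the summed unisensory drive."""
    drive = auditory + visual
    return 1.0 / (1.0 + np.exp(-slope * (drive - threshold)))

def enhancement(auditory, visual):
    """Percent enhancement: (multisensory - best unisensory) / best unisensory * 100."""
    multi = sc_response(auditory, visual)
    best_uni = max(sc_response(auditory, 0.0), sc_response(0.0, visual))
    return 100.0 * (multi - best_uni) / best_uni

for label, a, v in [("weak pair  ", 0.35, 0.35), ("strong pair", 0.70, 0.70)]:
    print(f"{label}: enhancement = {enhancement(a, v):6.1f}%")
# Weaker paired stimuli yield proportionately larger enhancement (inverse effectiveness).
```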