1000 results for Universitat de Girona. Càtedra de Cultura Científica i Comunicació Digital
Abstract:
This research project analyses the cultural references and wordplay that appear in El senyor dels anells (The Lord of the Rings). As a starting point I work from the hypothesis that the characterisation of hobbits, ents and men, three races that appear in Middle-earth, the secondary world created by Tolkien, draws, among other elements, on British wordplay and cultural references. It is essential that this claim be taken into account in the act of translation in order to respect the intention of J. R. R. Tolkien, who with this work also sought to create a plausible British mythology, as if Middle-earth had truly been the real past of our planet, with British ancestors as its protagonists. The study is organised into: (1) characteristics of fantasy literature; (2) aspects relating to the author from two perspectives, as a scholar and as a writer and creator; (3) the theory of translating cultural references and wordplay; and (4) a corpus of 144 terms extracted from the source text, analysed and translated by means of translation records. I can conclude that the hypothesis I set out has been confirmed: cultural references and wordplay are active resources in El senyor dels anells, and a sound analysis of these fields is essential to grasp the scale of J. R. R. Tolkien's imaginary and to produce a translation that respects the author's intent. El senyor dels anells is a fairy-story, a work of fantasy, an escape from consensual reality (an escape Tolkien does not hide and views positively) in which elements of the primary world (British culture and ancient communities such as the Anglo-Saxon, the Celtic and the Scandinavian) and of the secondary world (the characters who inhabit Middle-earth during the Third Age) operate according to the principles of both worlds. I consider the process followed, in the methodology, in the development of the contents and in the translation of the analysed terms through the translation records, to be positive and useful. This systematisation of the process, together with the final result, may be of value to the translation community, both for future translators and for working professionals.
Abstract:
As the initial stage of a broader process of implementing the principles of Universal Instructional Design at the Universitat de Vic, this study examined and explored the learning styles of first-year students of the Teacher Training Diploma at the Universitat de Vic. The objectives were to analyse the learning-style profiles of first-year education students in order to determine whether there were significant differences in preferred learning style according to chosen specialty, age and sex, and to detect correlations between learning styles and academic performance. A sample of 243 students was studied. The measurement instrument was the Index of Learning Styles Questionnaire (ILSQ). Descriptive analysis and ANOVA were carried out. The results show a distribution across the four ILS dimensions similar to the profiles of populations at other universities. The vast majority of students are visual and sensing, and indifferent on the active-reflective and sequential-global dimensions. By specialty and age, only the active-reflective and visual-verbal dimensions showed significant differences. No significant differences were found by gender. As regards academic performance, only the visual-verbal dimension appears to have a significant relationship, with students with a stronger verbal profile obtaining noticeably higher marks than the more visual ones. Two new variables were generated, "level of dispersion" and "degree of risk". No relationship was found between these two factors and academic performance. The results are discussed. The ILSQ appears to have moderate value as a predictor of students' learning difficulties and academic performance.
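A minimal sketch in Python of the ANOVA step described above; the scores, group names and sizes are hypothetical stand-ins for ILS dimension scores grouped by specialty, not the study's data:

```python
# Sketch of a one-way ANOVA across specialty groups. The score vectors are
# hypothetical ILS dimension scores (e.g. active-reflective), not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
specialty_a = rng.integers(-11, 12, size=80)  # hypothetical ILS scores
specialty_b = rng.integers(-11, 12, size=85)
specialty_c = rng.integers(-11, 12, size=78)

f_stat, p_value = stats.f_oneway(specialty_a, specialty_b, specialty_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # significant if p < alpha
```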
Abstract:
The main objective of this research project is the study of contemporary Russian documentary cinema through the film work of Alexander Sokurov, Sergei Dvortsevoi, Sergei Loznitsa and Victor Kossakovski. The investigation was initially oriented towards a comparative study of new trends in documentary and the models of realism proposed in post-communist Russia. The work was carried out along three lines of research. The first consisted of an exhaustive bibliographic review of documentary cinema and Soviet cinema. The second was based on a close analysis of the films themselves. Finally, the third line developed out of fieldwork carried out during a stay in Russia, a period in which it was possible to interview two of the filmmakers at the centre of the study, Sergei Dvortsevoi and Victor Kossakovski, as well as the film critic Andrei Xemijakin. Also fundamental was attendance at the round table and master class given by Sergei Loznitsa as part of the tenth anniversary of the Master's in Theory and Practice of Creative Documentary at the Universitat Autònoma de Barcelona. Although links can be traced between the work of the four chosen filmmakers and some contemporary practices in the field of non-fiction, such as Sergei Loznitsa's experience with found footage, Alexander Sokurov's experimental essayistic documentaries, the observational tendency and move to fiction cinema of Sergei Dvortsevoi, or the use of digital technology in Victor Kossakovski's latest films, it can be argued that the model of realism proposed by these filmmakers finds its true legacy in Soviet cinema. This inheritance begins with the cinema of Dziga Vertov, pioneer of the artistic and revolutionary documentary, and ends with that of Artavadz Pelechian, the Armenian filmmaker and one of the foremost representatives of poetic documentary. The research was presented as a paper at the international conference "IMAGEing Reality: Representing the Real in Film, Television and New Media", held in Pamplona in October 2009. The paper has been written up in article format and is awaiting publication.
Abstract:
A thesis project is currently under way on the production of antimicrobial peptides for phytosanitary use in biofactory plants. During this period of the doctoral thesis I carried out the transformation (using Agrobacterium tumefaciens) of rice plants for the large-scale synthesis of a series of antimicrobial peptides: BP188, BP183, BP183TAG, BP215, BP173 and BP178. These are derivatives of BP100, which shows very interesting phytosanitary properties: high activity against bacteria of phytosanitary interest, low toxicity and moderate stability. This work first required making the clonings needed for the transformations, and then transforming calli of the Senia rice variety, selecting the transformed cells, regenerating transgenic plants and analysing them for the presence of the transgene.
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check if the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Key words: compositional data analysis, biplot, deep sea sediments
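As a sketch of the ilr transformation the abstract refers to, the Python fragment below maps a hypothetical 4-part composition (standing in for the sediment components) to D-1 real coordinates; the pivot basis chosen here is one standard choice, not necessarily the one used in the paper:

```python
# Isometric log-ratio (ilr) transform of a composition, using a pivot basis.
import numpy as np

def ilr(x):
    """Map a D-part composition (parts > 0, summing to 1) to D-1 reals."""
    x = np.asarray(x, dtype=float)
    D = len(x)
    coords = []
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:i])))  # geometric mean of first i parts
        coords.append(np.sqrt(i / (i + 1)) * np.log(gm / x[i]))
    return np.array(coords)

comp = np.array([0.55, 0.25, 0.15, 0.05])  # hypothetical 4-part composition
print(ilr(comp))  # 3 real-valued coordinates, ready for factor extraction
```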
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out by using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004, 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin by a trunk river running longitudinally, parallel to the Southalpine belt (Vezzoli and Garzanti, 2008). This scenario changed rapidly during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, while glacial sediments carried by Alpine valley glaciers invaded the alluvial plain.
Key words: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage
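A minimal sketch of the PCA step on closed (constant-sum) petrographic data; the component counts, proportions and the centred log-ratio pre-treatment below are illustrative assumptions, and the paper's exact pre-processing may differ:

```python
# PCA on compositional sand data after a centred log-ratio (clr) transform,
# one common way to open closed data before multivariate analysis.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Rows: hypothetical sand samples; columns: detrital components
# (e.g. quartz, feldspar, lithics, heavy minerals), closed to 100%.
X = rng.dirichlet([4, 3, 2, 1, 1], size=50) * 100

logX = np.log(X)
clr = logX - logX.mean(axis=1, keepdims=True)  # centred log-ratio
scores = PCA(n_components=2).fit_transform(clr)
print(scores[:3])  # first two principal components per sample
```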
Abstract:
Emergent molecular measurement methods, such as DNA microarray, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen, and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
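The mass-balance idea, measured concentrations as a weighted sum of tissue signatures, can be sketched as a non-negative least-squares problem; the signature matrix and mixture below are synthetic illustrations, not the source apportionment model developed in the paper:

```python
# Estimate relative tissue amounts from a mixed signal via non-negative
# least squares. Signatures and the measured mixture are synthetic.
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical molecular signatures of three tissue types
# (e.g. cancerous epithelium, stroma, blood); rows: molecular targets.
S = np.array([[9.0, 1.0, 0.5],
              [2.0, 6.0, 1.0],
              [0.5, 2.0, 7.0],
              [4.0, 3.0, 2.0]])
measured = S @ np.array([0.6, 0.3, 0.1])  # synthetic 60/30/10 mixture

weights, residual = nnls(S, measured)
weights /= weights.sum()  # renormalize to relative tissue amounts
print(weights)            # recovers approximately [0.6, 0.3, 0.1]
```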
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 - p. There are many statistical tests being used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample space for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to nice graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
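A minimal sketch of the classical chi-square test for HWE on hypothetical genotype counts, following the p², 2pq, q² expectations stated above:

```python
# Chi-square test for Hardy-Weinberg equilibrium at one bi-allelic locus.
import numpy as np
from scipy.stats import chi2

counts = np.array([298, 489, 213])         # hypothetical AA, AB, BB counts
n = counts.sum()
p = (2 * counts[0] + counts[1]) / (2 * n)  # allele frequency of A
q = 1 - p
expected = n * np.array([p**2, 2 * p * q, q**2])

chi_sq = ((counts - expected) ** 2 / expected).sum()
p_value = chi2.sf(chi_sq, df=1)  # 1 df for a bi-allelic locus
print(f"chi2 = {chi_sq:.3f}, p = {p_value:.3f}")
```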
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system designed over binary partitions of the parts, could be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioral measures of pregnant sows.
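The contrast between the two operations can be sketched numerically; the 4-part composition and the binary partition below are hypothetical:

```python
# Amalgamation vs. balance for a 4-part composition split into the
# groups {x1, x2} and {x3, x4}.
import numpy as np

x = np.array([0.40, 0.30, 0.20, 0.10])

# Amalgamation: sum the parts of each group (non-linear in the
# Aitchison geometry of the simplex).
amalgam = np.array([x[0] + x[1], x[2] + x[3]])

# Balance: scaled log-ratio of the geometric means of the two groups,
# a proper ilr coordinate; for group sizes r and s the scale is
# sqrt(r*s / (r + s)).
g1 = np.sqrt(x[0] * x[1])
g2 = np.sqrt(x[2] * x[3])
balance = np.sqrt(2 * 2 / (2 + 2)) * np.log(g1 / g2)

print(amalgam, balance)
```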
Abstract:
This article presents the methodology and the results of applying an adaptation of the Threat Reduction Assessment model in the Parque Natural de la Zona Volcánica de la Garrotxa (PNZVG). In short, the aim is to assess management effectiveness on the basis of the degree of threat reduction achieved in the PNZVG. The study took the form of an external, independent evaluation, carried out in close collaboration with the Park's management and governing bodies and with the active participation of various key social agents. It concludes that, after twenty-five years of the Park's existence, many of the initial threats have been reduced only modestly, even according to the adapted index, which is considered a more realistic approach; this leads to the worrying conclusion that some of the most important threats have increased.
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of compositional data analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
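The transform/back-transform cycle at the heart of this approach can be sketched with a centred log-ratio, one convenient CoDA transform (the paper's exact choice may differ), applied to hypothetical death densities:

```python
# Death densities sum to 1 across ages: map them to real space with a
# centred log-ratio, model there, then close back to the simplex.
import numpy as np

dx = np.array([0.02, 0.01, 0.03, 0.10, 0.34, 0.50])  # hypothetical d(x)

clr = np.log(dx) - np.log(dx).mean()  # real space: ordinary statistics apply
# ... a Lee-Carter-style model could be fitted on the clr coordinates here ...
back = np.exp(clr)
back /= back.sum()          # closure honours the unit-sum constraint
print(np.allclose(back, dx))  # True: the cycle itself is lossless
```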
Abstract:
The theory of compositional data analysis is often focused on the composition only. However, in practical applications we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which, on the one hand, show the compositional geometry and, on the other, remain comprehensible to a non-expert analyst and readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
Abstract:
Self-organizing maps (Kohonen, 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm involves the use of the Euclidean metric in the process of adaptation of the model vectors, thus in theory rendering the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem:
1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data.
2. Whether a modification of the conventional approach, replacing vector sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution.
Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved".
Key words: self-organizing map; artificial neural networks; compositional data
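Perturbation and powering, the two simplex operators that the modified algorithm substitutes for vector sum and scalar multiplication, can be sketched as follows; the compositions and the SOM-style update shown are illustrative only:

```python
# The two canonical operators of the Aitchison simplex, plus a SOM-style
# update expressed through them. All vectors are hypothetical 3-part
# compositions.
import numpy as np

def closure(x):
    return x / x.sum()

def perturb(x, y):
    """Perturbation: the simplex analogue of vector addition."""
    return closure(x * y)

def power(x, alpha):
    """Powering: the simplex analogue of scalar multiplication."""
    return closure(x ** alpha)

model = np.array([0.5, 0.3, 0.2])
sample = np.array([0.2, 0.5, 0.3])

# Move the model vector a fraction alpha towards the sample:
# model (+) alpha (*) (sample (-) model), all in simplex operations.
alpha = 0.1
updated = perturb(model, power(perturb(sample, 1 / model), alpha))
print(updated)  # still a valid composition (positive, sums to 1)
```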
Abstract:
In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that, with respect to rank preservation, the weak and strict rank preserving methods resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method.
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
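As an illustrative sketch of the MDS step, the fragment below builds hypothetical discretised densities, computes plain L2 inter-density distances (one of several possible choices; the compositional distance the abstract mentions would replace it) and embeds them in two dimensions:

```python
# MDS on a precomputed matrix of inter-density distances. The densities
# are synthetic Gaussian bumps, not the household income data.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
means = rng.uniform(0.3, 0.7, size=10)
densities = np.exp(-(grid[None, :] - means[:, None]) ** 2 / 0.02)
densities /= densities.sum(axis=1, keepdims=True)  # discretised densities

# Pairwise L2 distances between densities, then a 2-D embedding.
dist = np.linalg.norm(densities[:, None, :] - densities[None, :, :], axis=2)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords[:3])
```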