Abstract:
Document lending is one of the fundamental pillars of a library's support for study, teaching and research. At the Library of the Universitat Oberta de Catalunya, the main difference with respect to other university libraries is the new virtual, wall-less concept, and this concept also extends to the lending service. Users do not need to visit a physical space to consult the available information sources or to take documents out on loan. From home, they can remotely request the documents they are interested in, directly from the catalogue and without any kind of intermediary, thanks to the development of a new document lending service, which in turn creates a new model within a virtual campus.
Abstract:
For several semesters now, the Library of the Universitat Oberta de Catalunya has been offering the so-called Prestatgeries virtuals (virtual bookshelves), a service located inside the virtual classroom that makes available to students a set of documentary resources supporting their progress and learning in the courses in which they are enrolled.
Abstract:
As the first decade of the 21st century comes to a close, the adoption of M-technologies and M-services at Spanish universities remains at an early stage. Some universities and some of their libraries are beginning to experiment with M-technologies, but they are still far from a model of large-scale adoption, more so than in some other countries. A thorough study would be needed to identify the main reasons, and such a study is beyond the scope of this paper. This general picture does not mean that there are no significant initiatives in which universities and their libraries are placing their trust in M-technologies. Models based on M-technologies make more sense than ever in open universities and open libraries. That is why the UOC Library began its first experiences with M-technologies and M-library developments in the late 1990s. In 1999 the available technology made it possible to carry out a first pilot test with SMS, followed by the application of WAP technology. At that time we managed to connect mobile phones to the OPAC through a WAP system that allowed users to search the catalogue by category, find the final location of a document, and obtain the address of the library where it could be borrowed. Since then, UOC and its library have directed their efforts towards adapting their services to all kinds of M-devices used by end users. Having left WAP technology behind, the library is now experimenting with new devices such as e-books, and with new services to gather more feedback through the OPAC and metasearch products. We present the case of the Open University of Catalonia at two levels: M-services applied in the library, and M-technologies applied in other university services and resources.
Abstract:
Over the past year, the Open University of Catalonia library has been designing its new website with this question in mind. Our main concern has been how to integrate the library into students' day-to-day study routine so that it is not merely a satellite tool. We present the design of a website that, in a virtual library like ours, is not just a website but the whole library itself. The central point of the web is My Library, a space that associates library resources with the student's curriculum and course subjects. There, students can save resources as favourites, and comment on or share them. They also have access to all the services the library offers. The resources are imported from multiple tools, such as Millennium, SFX, MetaLib and DSpace, into the Drupal CMS. The resources' metadata can then be enriched with contextual information from other sources, for example the course subjects. Finally, they can be exported in standard, open data formats, making them available for linked data applications.
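The abstract names the pipeline but not the export format. As a hedged illustration only, the sketch below shows how a record harvested from one of the source systems might be enriched with a course subject and serialized as JSON-LD. All field names, the placeholder URI and the Dublin Core mapping are assumptions for illustration, not the UOC implementation.

```python
import json

# Hypothetical record as it might arrive from a source system
# (field names are illustrative, not Millennium/SFX/MetaLib/DSpace schemas).
record = {
    "id": "rec-001",
    "title": "Compositional Data Analysis",
    "creator": "Aitchison, J.",
    "source_system": "Millennium",
}

# Contextual enrichment: link the resource to a course subject.
course_subjects = {"rec-001": ["Multivariate Statistics"]}

def to_jsonld(rec: dict, subjects: dict) -> dict:
    """Map an internal record to a simple JSON-LD document using
    Dublin Core terms, so it can be consumed by linked data tools."""
    return {
        "@context": {"dc": "http://purl.org/dc/terms/"},
        "@id": f"http://example.org/resource/{rec['id']}",  # placeholder URI
        "dc:title": rec["title"],
        "dc:creator": rec["creator"],
        "dc:subject": subjects.get(rec["id"], []),
    }

print(json.dumps(to_jsonld(record, course_subjects), indent=2))
```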
Abstract:
The objective of the project presented here is to create virtual library services to support the teaching and academic activity of the Graduat Multimèdia a Distància programme offered by the Fundació Politècnica de Catalunya (FPC) and the Universitat Oberta de Catalunya.
Abstract:
The relationship between health professionals and those who need their services has been a topic of interest for health psychology since its beginnings as a discipline. In the context of the information and knowledge society, a new scenario of interaction between these two groups is emerging that needs to be understood. Several initiatives have been launched to this end, but the one presented by the Psicologia de la Salut i Xarxa (PSINET) research group of the Universitat Oberta de Catalunya aims to foster the creation of virtual meeting spaces for both groups (health professionals and users of health services). Establishing digital platforms of health services for the citizens of the 21st century first requires understanding the reality of the different groups involved in the relationship between health and the Net. The objective of this study focuses on the first group and on discovering what health content exists on the Internet. To this end, following an exhaustive Internet search methodology, health websites in Catalan and Spanish were collected, and a textual data analysis was carried out on the information contained in the Catalan-language sites. This analysis made it possible to identify and describe the prototypical health website on the Net at the time of the study.
Abstract:
English summary of the research project L'empresa xarxa a Catalunya. TIC, productivitat, competitivitat, salaris i beneficis a l'empresa catalana, whose main objective is to show that the consolidation of a new strategic, organizational and business-activity model, linked to investment in and use of ICT (the network enterprise), substantially modifies the patterns of business results, especially productivity, competitiveness, workers' remuneration and profit. We tested the working hypotheses empirically using data from a survey of a representative sample of 2,038 Catalan firms. From the perspective of the impact of ICT investment and use, no direct relationship is observed between digital innovation processes and the business results of Catalan firms. We therefore had to segment the Catalan productive fabric to find the organizations in which the process of digital technological and organizational co-innovation is most present, and in which intensive use of knowledge is a very frequent resource, in order to capture relevant impacts on the main business results. This is because the Catalan economy today has a dual productive structure.
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made, and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components, and visualizing the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Key words: compositional data analysis, biplot, deep sea sediments
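As a hedged illustration of the workflow the abstract describes (ilr-transform the composition, then apply ordinary multivariate analysis), the sketch below builds an ilr transformation from a Helmert-type orthonormal basis and runs PCA on the transformed data. The toy composition matrix is invented, not the core data from the study.

```python
import numpy as np

def ilr(X):
    """Isometric log-ratio transform of compositions (rows of X, parts > 0),
    using a Helmert-type orthonormal basis; returns an (n, D-1) array."""
    X = np.asarray(X, dtype=float)
    X = X / X.sum(axis=1, keepdims=True)       # closure to unit sum
    L = np.log(X)
    D = X.shape[1]
    Z = np.empty((X.shape[0], D - 1))
    # Balance k compares the geometric mean of parts 1..k against part k+1.
    for k in range(1, D):
        Z[:, k - 1] = np.sqrt(k / (k + 1.0)) * (L[:, :k].mean(axis=1) - L[:, k])
    return Z

# Toy compositional data: 6 samples, 4 parts.
rng = np.random.default_rng(0)
X = rng.dirichlet([4, 3, 2, 1], size=6)

Z = ilr(X)
Zc = Z - Z.mean(axis=0)                        # center in real (ilr) space
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U * s                                 # PCA factor scores
print(scores[:, :2])                           # first two compositional factors
```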
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of the Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004, 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin longitudinally, parallel to the Southalpine belt, by a trunk river (Vezzoli and Garzanti, 2008). This scenario rapidly changed during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that at this time the longitudinal trunk river was shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, while glacial sediments carried by Alpine valley glaciers invaded the alluvial plain.
Key words: detrital modes; modern sands; provenance; principal components analysis; similarity; Canberra distance; palaeodrainage
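The abstract pairs PCA with a similarity analysis, and the keyword list names the Canberra distance. As a hedged sketch, with invented sand-composition vectors rather than the borehole data, the snippet below computes a Canberra distance matrix between samples and rescales it into a similarity matrix.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy petrographic modes for 5 samples (e.g. % quartz, feldspar, lithics,
# heavy minerals); purely illustrative values, not the Po Plain core data.
X = np.array([
    [40.0, 30.0, 25.0, 5.0],
    [42.0, 28.0, 24.0, 6.0],
    [20.0, 15.0, 60.0, 5.0],
    [22.0, 14.0, 58.0, 6.0],
    [35.0, 35.0, 25.0, 5.0],
])

# Canberra distance: sum over variables of |x_i - y_i| / (|x_i| + |y_i|).
D = squareform(pdist(X, metric="canberra"))

# Each term is at most 1, so D is at most the number of variables;
# rescale to a [0, 1] similarity where 1 means identical samples.
S = 1.0 - D / X.shape[1]
print(np.round(S, 2))
```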
Abstract:
Emerging molecular measurement methods, such as DNA microarray, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen, and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
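The mass-balance idea, measured concentrations as a weighted sum of tissue signatures, can be written m ≈ S·w with w ≥ 0 summing to one. As a hedged sketch only (the signature matrix and measurements are invented, and the paper develops a compositional source apportionment model, not plain least squares), the snippet below recovers mixture weights with non-negative least squares and renormalizes them to unit sum.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical molecular signatures of three tissue types;
# rows: molecular targets. Invented numbers for illustration.
S = np.array([
    [5.0, 1.0, 0.5],
    [0.5, 4.0, 1.0],
    [1.0, 0.5, 3.0],
    [2.0, 2.0, 0.5],
])

true_w = np.array([0.6, 0.3, 0.1])   # "unknown" tissue proportions
m = S @ true_w                       # measured bulk concentrations

w, _ = nnls(S, m)                    # non-negative weight estimates
w = w / w.sum()                      # closure to unit sum (compositional)
print(np.round(w, 3))                # recovers ~ [0.6, 0.3, 0.1]
```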
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 − p. There are many statistical tests used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample space for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to nice graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
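As a hedged numerical companion to the classical chi-square test mentioned above (the genotype counts are invented for illustration), the sketch below computes expected genotype counts under HWE from the estimated allele frequency and reports a one-degree-of-freedom chi-square p-value.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chisq(n_AA, n_AB, n_BB):
    """Classical chi-square test for Hardy-Weinberg equilibrium at a
    bi-allelic locus (no continuity correction)."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)            # allele frequency of A
    q = 1 - p
    expected = np.array([n * p**2, n * 2 * p * q, n * q**2])
    observed = np.array([n_AA, n_AB, n_BB], dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    # 1 df: 3 genotype classes, minus 1, minus 1 estimated parameter (p).
    return stat, chi2.sf(stat, df=1)

stat, pval = hwe_chisq(298, 489, 213)          # made-up SNP genotype counts
print(f"chi-square = {stat:.3f}, p = {pval:.3f}")
```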
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system designed over binary partitions of the parts, could be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioural measures of pregnant sows.
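To make the contrast concrete, the following hedged sketch (a toy 4-part composition with invented values, not the sow data) computes an amalgamation, summing two parts into one, alongside the corresponding balance, a normalized log-ratio of geometric means between the two groups of parts.

```python
import numpy as np

# One toy 4-part composition (closed to 1); invented values.
x = np.array([0.40, 0.25, 0.20, 0.15])

# Amalgamation of parts {3, 4}: sum them into a single part.
amalgamated = np.array([x[0], x[1], x[2] + x[3]])   # 3-part composition

# Balance between groups {1, 2} (r parts) and {3, 4} (s parts):
# b = sqrt(r*s/(r+s)) * ln(g(x1, x2) / g(x3, x4)), g = geometric mean.
r, s = 2, 2
g1 = np.exp(np.log(x[:2]).mean())
g2 = np.exp(np.log(x[2:]).mean())
balance = np.sqrt(r * s / (r + s)) * np.log(g1 / g2)

print(amalgamated, balance)
```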
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
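As a hedged sketch of the compositional analogue of the Lee-Carter structure, using the centred log-ratio (clr) transform for illustration rather than the paper's exact specification, and an invented matrix of death densities, the snippet below clr-transforms the densities, fits the rank-1 age/period structure by SVD, and back-transforms so each row again sums to one.

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform of compositions in the rows of X."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def clr_inv(Z):
    """Inverse clr: exponentiate and re-close rows to unit sum."""
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Invented death densities: 10 years (rows) x 5 age groups (columns),
# each row closed to 1, standing in for life-table death densities.
rng = np.random.default_rng(1)
D = rng.dirichlet([8, 5, 4, 3, 2], size=10)

Z = clr(D)
a = Z.mean(axis=0)                     # mean age pattern (like Lee-Carter a_x)
U, s, Vt = np.linalg.svd(Z - a, full_matrices=False)
k = U[:, 0] * s[0]                     # period index (like k_t)
b = Vt[0]                              # age loadings (like b_x)

D_fit = clr_inv(a + np.outer(k, b))    # rank-1 fit, back in the simplex
print(np.round(D_fit[0], 3), D_fit.sum(axis=1))   # rows sum to 1
```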
Abstract:
The theory of compositional data analysis is often focused on the composition only. However, in practical applications we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a coloured set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, coloured scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which, on the one hand, show the compositional geometry and, on the other hand, are still comprehensible by a non-expert analyst and readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
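As a hedged sketch of the kind of balance display described, with invented data and a hand-picked binary partition rather than carefully selected axes, the snippet below plots two balance coordinates of a 4-part composition, coloured by a categorical covariable.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Toy 4-part compositions for two levels of a categorical covariable.
A = rng.dirichlet([6, 3, 2, 1], size=30)
B = rng.dirichlet([2, 3, 5, 2], size=30)

def balance(X, num, den):
    """Balance between part groups `num` and `den` (index lists)."""
    r, s = len(num), len(den)
    g_num = np.log(X[:, num]).mean(axis=1)
    g_den = np.log(X[:, den]).mean(axis=1)
    return np.sqrt(r * s / (r + s)) * (g_num - g_den)

for X, label in [(A, "group A"), (B, "group B")]:
    b1 = balance(X, [0, 1], [2, 3])   # axes chosen by hand for illustration
    b2 = balance(X, [0], [1])
    plt.scatter(b1, b2, label=label, alpha=0.7)

plt.xlabel("balance (x1,x2 | x3,x4)")
plt.ylabel("balance (x1 | x2)")
plt.legend()
plt.show()
```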
Abstract:
Self-organizing maps (Kohonen, 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the process of adaptation of the model vectors, thus rendering, in theory, the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem:
1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data.
2. Whether a modification of the conventional approach, replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution.
Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved".
Key words: self-organizing map; artificial neural networks; compositional data
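As a hedged sketch of the modified update step described in point 2 above, with invented data and learning rate, and not the authors' implementation, the snippet below defines perturbation and powering in the simplex and uses them for a single SOM-style adaptation of a model vector towards a data point.

```python
import numpy as np

def close(x):
    """Closure: rescale a positive vector to unit sum."""
    return x / x.sum()

def perturb(x, y):
    """Perturbation: the simplex analogue of vector addition."""
    return close(x * y)

def power(x, a):
    """Powering: the simplex analogue of scalar multiplication."""
    return close(x ** a)

def som_update(m, x, eta):
    """One SOM-style adaptation in Aitchison geometry:
    m <- m (+) eta (*) (x (-) m), where the compositional
    difference x (-) m is the closed componentwise ratio x / m."""
    diff = close(x / m)
    return perturb(m, power(diff, eta))

m = np.array([0.5, 0.3, 0.2])            # invented model vector
x = np.array([0.2, 0.3, 0.5])            # invented data point
print(som_update(m, x, eta=0.25))        # m shifted towards x, still unit-sum
```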