Abstract:
The aim of this project is the design, reasoned and supported by objective data, of a virtualization platform based on VMware ESXi (the free edition of VMware), aimed at the segment of companies and organizations that, because of their size, could benefit from a virtualized server environment, but whose budget does not give them access to cutting-edge deployment technologies.
Abstract:
The Generalitat de Catalunya has decided to open a public tender to receive proposals for the design of a database (DB) to serve as the information store for a future application managing warnings and sanctions in secondary schools. The collaboration will cover only the design of the DB, as a first phase of the Generalitat de Catalunya's information systems plan. In general terms, the project is meant to store information about enrolled students (record number, personal and contact details, etc.), the courses they are enrolled in, and the teachers who teach those courses (including which teachers tutor each group and their visiting hours for parents). All of this information will be managed by each of the secondary schools in Catalonia. On top of this information, the system must allow incidents or warnings concerning students to be recorded, together with the necessary details (student, date, time, type of incident, teacher, whether the parents were notified, etc.). When a warning is serious enough, or when warnings accumulate, it must be possible to record a sanction, stating its reasons and its resolution. The database to be designed must store all the information described above and support the most common queries. Additionally, the database must precompute and store various statistical information, as detailed later in the requirements of the statistics module.
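A minimal sketch of how such a schema might look, using Python's sqlite3 module; every table and column name here is an illustrative assumption, not the design the tender would actually produce:

```python
import sqlite3

# Hypothetical, simplified schema for the warning/sanction DB described above.
conn = sqlite3.connect("sanctions.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS student (
    record_no        INTEGER PRIMARY KEY,  -- enrolment record number
    name             TEXT NOT NULL,
    contact          TEXT
);
CREATE TABLE IF NOT EXISTS warning (
    id               INTEGER PRIMARY KEY,
    record_no        INTEGER NOT NULL REFERENCES student(record_no),
    occurred_at      TEXT NOT NULL,        -- date and time
    kind             TEXT NOT NULL,        -- type of incident
    teacher          TEXT NOT NULL,
    parents_notified INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS sanction (
    id               INTEGER PRIMARY KEY,
    record_no        INTEGER NOT NULL REFERENCES student(record_no),
    reasons          TEXT NOT NULL,
    resolution       TEXT
);
""")
conn.commit()
```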
Abstract:
Development of a video game for GNU/Linux.
Abstract:
This project carries out a feasibility study of virtualization solutions based on Open Source software, in order to clarify their degree of maturity for deployment in complex, high-capacity enterprise environments. The aim is to achieve a reduction in overall costs while minimizing as far as possible any loss of performance or functionality. The starting point is a client with a virtual infrastructure based on a commercial hypervisor, and the analysis covers the Open Source options that preserve the largest possible number of features of the product already installed. Open Source virtualization technologies are introduced, and those with the greatest potential to cover the client's needs are tested and analyzed, so as to obtain a final design that is robust and easy to manage for the members of corporate technology departments.
Abstract:
The process to follow for publishing a Free Software project and building a community to carry it forward.
Abstract:
This project consists of the development of ANDRe, a free-software application for visually creating PDF documents by combining pages from other documents in the same format. The application was developed in Java and uses the Apache Foundation's PDFBox library to read and write the documents.
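The thesis itself is written in Java against Apache PDFBox; purely as an illustration of the same page-combining idea, here is a sketch using the Python pypdf library instead (file names and page indices are placeholders):

```python
from pypdf import PdfReader, PdfWriter

# Assemble a new PDF by cherry-picking pages from existing documents.
writer = PdfWriter()
for path, page_indices in [("report.pdf", [0, 2]), ("annex.pdf", [1])]:
    reader = PdfReader(path)               # open a source document
    for i in page_indices:                 # pick pages by zero-based index
        writer.add_page(reader.pages[i])

with open("combined.pdf", "wb") as f:      # write the assembled document
    writer.write(f)
```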
Abstract:
The novel "The Song of Everlasting Sorrow" (1995), by the writer Wang Anyi, is currently considered a modern classic of Chinese literature. This work is organized around two aims. On the one hand, it offers an extensive interpretive reading of the novel, drawing on different criteria to reach conclusions about the writer's motivation and intent. On the other hand, building on those conclusions, it places the novel within the Chinese literary canon in light of its own characteristics, the influences it received, and its possible contribution.
Abstract:
The Asian diaspora in the Americas in the 16th and 17th centuries has been neglected by scholars for a long time. This fact is baffling, not only because of the great interest of this topic in and of itself, but also because it could provide new knowledge of colonial Mexico, especially in terms of the interaction among the many groups that populated the colony. This early movement of people and ideas across the largest expanse of water on the planet is characteristic of what has been called the "archaic globalization," and thus research on these matters could contribute to the history of globalization. In this presentation, I seek to further elaborate on the themes outlined by Edward Slack in "The Chinos in New Spain: A Corrective Lens for a Distorted Image," an article published in 2009 in the Journal of World History. Firstly, I would like to bring forth some evidence indicating that Asian religious practices were present in Mexico in the 1600s. Furthermore, I will argue that the traces of these practices are still visible today, in the form of a popular fortune-telling tradition. Secondly, I intend to provide some information about the arrival, settlement and distribution of the Asian diaspora, focusing on its distribution within Mexico City. Thirdly, I will elaborate on their occupations, social status and daily life, as well as on the patterns of marriage and relations with other groups. And lastly, I will show how the guild of barbers served as a reception network for Asian immigrants.
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used with the cosine-theta coefficient as similarity measure. During the last decade, considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components, and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Keywords: compositional data analysis, biplot, deep sea sediments
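For readers unfamiliar with the ilr step mentioned above, a minimal numerical sketch follows; it uses one standard orthonormal basis, which is an assumption, since the paper's own balances may be built from a different binary partition:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform, row-wise."""
    lx = np.log(x)
    return lx - lx.mean(axis=1, keepdims=True)

def ilr(x):
    """Isometric log-ratio coordinates via a Helmert-type basis."""
    D = x.shape[1]
    V = np.zeros((D, D - 1))
    for j in range(D - 1):
        V[: j + 1, j] = 1.0 / (j + 1)      # first j+1 parts vs. the next
        V[j + 1, j] = -1.0
        V[:, j] *= np.sqrt((j + 1) / (j + 2))  # normalise the column
    return clr(x) @ V

# Toy 3-part compositions (rows sum to 1), standing in for the
# geochemical components of the core samples.
comps = np.array([[0.2, 0.3, 0.5],
                  [0.1, 0.6, 0.3]])
scores = ilr(comps)    # one (D-1)-dimensional point per sample
```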
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from West to East (Garzanti et al., 2004; 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin longitudinally, parallel to the SouthAlpine belt, by a trunk river (Vezzoli and Garzanti, 2008). This scenario rapidly changed during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, and glacial sediments carried by Alpine valley glaciers invaded the alluvial plain.
Keywords: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage
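As an illustration of the two statistical tools named above, here is a small sketch of PCA by SVD and of the Canberra distance on stand-in compositional data (the study's real petrographic and heavy-mineral modes are not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import canberra

# Stand-in data: rows are sand samples, columns are detrital components.
rng = np.random.default_rng(0)
modes = rng.dirichlet(np.ones(6), size=20)

# Principal component analysis via SVD of the centred data matrix.
centred = modes - modes.mean(axis=0)
_, sing, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt.T                  # sample scores on the PCs
explained = sing**2 / (sing**2).sum()    # variance explained per PC

# Canberra distance, the similarity measure named in the keywords.
d01 = canberra(modes[0], modes[1])
```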
Abstract:
Emergent molecular measurement methods, such as DNA microarray, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen, and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
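The paper develops its own compositional source apportionment model; purely as an illustration of the mass-balance idea sketched above (measurements as a weighted sum of tissue signatures), here is a toy example that solves for non-negative weights with NNLS, with all numbers hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical numbers: each column of S is the molecular signature of
# one pure tissue type (e.g. epithelial, stromal, vascular), and m is
# the measured concentration vector for a mixed specimen.
S = np.array([[5.0, 1.0, 0.2],
              [0.5, 2.0, 0.1],
              [1.0, 1.0, 3.0]])
m = np.array([3.2, 1.1, 1.5])

w, resid = nnls(S, m)     # non-negative least squares: m ~ S @ w
w = w / w.sum()           # normalise to tissue proportions (unit sum)
```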
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 - p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to nice graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
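The classical chi-square test mentioned above follows directly from the stated genotype proportions; a minimal sketch (Python rather than the R software the abstract refers to, without continuity correction, with illustrative counts):

```python
import numpy as np
from scipy.stats import chi2

def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square test for HWE at a bi-allelic locus."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)           # allele frequency of A
    q = 1.0 - p
    expected = n * np.array([p * p, 2 * p * q, q * q])
    observed = np.array([n_aa, n_ab, n_bb])
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)          # statistic and p-value

stat, pval = hwe_chisq(88, 10, 2)             # illustrative counts
```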
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system designed over binary partitions of the parts, could be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioral measures of pregnant sows.
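For concreteness, the two operations being contrasted can be written as follows, using the standard definitions (not taken from the paper itself), for a group G of parts and a binary partition into groups G₊ and G₋ with r and s parts respectively:

```latex
\[
  \text{amalgamation:}\quad a = \sum_{i \in G} x_i ,
  \qquad
  \text{balance:}\quad
  b = \sqrt{\frac{rs}{r+s}}\,
      \ln \frac{\left(\prod_{i \in G_{+}} x_i\right)^{1/r}}
               {\left(\prod_{j \in G_{-}} x_j\right)^{1/s}} .
\]
```

The balance is a log-ratio of geometric means, which is why it behaves linearly in the Aitchison geometry, whereas the amalgamation (a plain sum of parts) does not.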
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
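A minimal sketch of the compositional Lee-Carter idea described above, under assumed notation: D[t, x] holds life-table death densities (rows sum to 1 across ages), and a clr transform stands in for the paper's exact transformation:

```python
import numpy as np

def clr(d):
    """Map unit-sum rows into real space (centred log-ratio)."""
    ld = np.log(d)
    return ld - ld.mean(axis=1, keepdims=True)

def inv_clr(z):
    """Back-transform to positive values honouring the unit sum."""
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
D = rng.dirichlet(np.ones(10) * 5, size=30)   # toy data: 30 years x 10 ages

Z = clr(D)
a = Z.mean(axis=0)                            # average age pattern
U, s, Vt = np.linalg.svd(Z - a, full_matrices=False)
k, b = U[:, 0] * s[0], Vt[0]                  # period index and age loadings

Z_hat = a + np.outer(k, b)                    # rank-1, Lee-Carter-style fit
D_hat = inv_clr(Z_hat)                        # fitted densities, rows sum to 1
```

Forecasting would then proceed by extrapolating the period index k (e.g. with a random walk with drift, as in the Lee-Carter tradition) and back-transforming.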