44 results for Open Data, Dati Aperti, Open Government Data
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Organizations across the globe are creating and distributing products that include open source software. To ensure compliance with the open source licenses, each company needs to evaluate exactly what open source licenses and copyrights are included, resulting in duplicated effort and redundancy. This talk will provide an overview of a new Software Package Data Exchange (SPDX) specification. This specification will provide a common format to share information about the open source licenses and copyrights that are included in any software package, with the goal of saving time and improving data accuracy. This talk will review the progress of the initiative, discuss the benefits to organizations using open source, and share information on how you can contribute.
Abstract:
In this work we have tried to provide a current overview of the world of linked open data in the field of education. We have reviewed both the applications aimed at implementing these technologies in existing data repositories (web pages, repositories of learning objects, repositories of courses and educational programmes) and their use as support for new paradigms within education.
Abstract:
Two claims pervade the literature on the political economy of market reforms: that economic crises cause reforms; and that crises matter because they bring into question the validity of the economic model held to be responsible for them. Economic crises are said to spur a process of learning that is conducive to the abandonment of failing models and to the adoption of successful models. But although these claims have become the conventional wisdom, they have hardly been tested empirically, due to the lack of agreement on what constitutes a crisis and to difficulties in measuring learning from them. I propose a model of rational learning from experience and apply it to the decision to open the economy. Using data from 1964 through 1990, I show that learning from the 1982 debt crisis was relevant to the first wave of adoption of an export promotion strategy, but learning was conditional on the high variability of economic outcomes in countries that opened up to trade. Learning was also symbolic in that the sheer number of other countries that liberalized was a more important driver of others’ decisions to follow suit.
Abstract:
Report for the scientific sojourn at the Simon Fraser University, Canada, from July to September 2007. General context: landscape change during recent years is having significant impacts on biodiversity in many Mediterranean areas. Land abandonment, urbanisation and especially fire are profoundly transforming large areas in the Western Mediterranean basin, and we know little about how these changes influence species distribution, and in particular how these species will respond to further change in a context of global change, including climate change. General objectives: integrate landscape and population dynamics models in a platform that allows capturing species distribution responses to landscape changes and assessing the impact on species distribution of different scenarios of further change. Specific objective 1: develop a landscape dynamic model capturing fire and forest succession dynamics in Catalonia, linked to a stochastic landscape occupancy model (SLOM), or spatially explicit population model (SEPM), for the Ortolan bunting, a species strongly linked to fire-related habitat in the region. Predictions from the occupancy or spatially explicit population Ortolan bunting model (SEPM) should be evaluated using data from the DINDIS database. This database tracks bird colonisation of recently burnt large areas (>50 ha). Through a number of different SEPM scenarios with different values for a number of parameters, we should be able to assess different hypotheses about the factors driving bird colonisation of newly burnt patches. These factors are mainly landscape context (i.e. difficulty of reaching the patch and potential presence of coloniser sources), dispersal constraints, type of regenerating vegetation after fire, and species characteristics (niche breadth, etc.).
Abstract:
While the Internet has given educators access to a steady supply of Open Educational Resources, the educational rubrics commonly shared on the Web are generally in the form of static, non-semantic presentational documents or in the proprietary data structures of commercial content and learning management systems. With the advent of Semantic Web Standards, producers of online resources have a new framework to support the open exchange of software-readable datasets. Despite these advances, the state of the art of digital representation of rubrics as sharable documents has not progressed. This paper proposes an ontological model for digital rubrics. This model is built upon the Semantic Web Standards of the World Wide Web Consortium (W3C), principally the Resource Description Framework (RDF) and Web Ontology Language (OWL).
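As a rough illustration of the kind of statements an RDF-based rubric model encodes, here is a minimal sketch in plain Python; the namespace URI, class names and property names below are invented for the example and are not taken from the paper's ontology:

```python
# Hypothetical namespace; the paper's actual ontology terms are not shown here.
EX = "http://example.org/rubric#"

# An RDF graph is a set of (subject, predicate, object) triples.
rubric_graph = {
    (EX + "EssayRubric", "rdf:type",          EX + "Rubric"),
    (EX + "EssayRubric", EX + "hasCriterion", EX + "Clarity"),
    (EX + "Clarity",     EX + "hasLevel",     EX + "Excellent"),
    (EX + "Excellent",   EX + "hasScore",     "4"),
}

# Query the graph: list the criteria attached to the rubric.
criteria = [o for (s, p, o) in rubric_graph
            if s == EX + "EssayRubric" and p == EX + "hasCriterion"]
print(criteria)
```

In a real implementation these triples would live in an RDF store and be serialized as Turtle or RDF/XML, but the graph-of-triples structure is the same.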
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components, and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Key words: compositional data analysis, biplot, deep sea sediments
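For readers unfamiliar with the ilr (isometric log-ratio) transformation mentioned above, here is a minimal sketch in plain Python using pivot-balance coordinates; the paper does not prescribe this particular basis, so it is shown only as one common choice:

```python
import math

def closure(x):
    """Rescale a composition so its parts sum to 1 (the simplex)."""
    s = sum(x)
    return [xi / s for xi in x]

def ilr(x):
    """Isometric log-ratio transform of a D-part composition using
    pivot (balance) coordinates; returns D-1 real coordinates."""
    x = closure(x)
    D = len(x)
    coords = []
    for i in range(1, D):
        # geometric mean of the first i parts
        g = math.exp(sum(math.log(p) for p in x[:i]) / i)
        coords.append(math.sqrt(i / (i + 1)) * math.log(g / x[i]))
    return coords

# A 3-part composition maps to 2 unconstrained real coordinates,
# and the result is invariant to the overall scale of the parts.
print(ilr([0.2, 0.3, 0.5]))
```

Working in these coordinates is what allows standard multivariate methods (PCA, clustering) to be applied to compositions without the spurious-correlation problems of the raw, constrained parts.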
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
Abstract:
The statistical analysis of compositional data should be carried out using log-ratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with a minimum knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is also included, and outputs can be saved in JPEG format. This version also includes interactive help, and all dialog windows have been improved in order to facilitate their use. To use CoDaPack one has to open Excel© and introduce the data in a standard spreadsheet. These should be organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results: new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input the variables and further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software.
Key words: compositional data analysis, software
Abstract:
The mission of the European infrastructure ICOS (Integrated Carbon Observation System) is to provide long-term greenhouse gas measurements, which should make it possible to study the current state and future behaviour of the global carbon cycle. In this context, geomati.co has developed a data search and download portal that integrates the measurements taken in the terrestrial, marine and atmospheric domains, disciplines that until now had managed their data separately. The portal supports searches by multiple geographic areas, by time range, by free text or by a subset of magnitudes, allows previewing the data, and lets users add the datasets they find interesting to a download "cart". When a data collection is downloaded, it is assigned a universal identifier that makes it possible to reference it in eventual publications and to repeat the download in the future (so that published experiments are reproducible). The portal relies on open formats in common use in the scientific community, such as the NetCDF format for the data and the ISO profile of CSW, the catalogue and search standard of the geospatial domain. The portal has been built from existing free software components, such as Thredds Data Server, GeoNetwork Open Source and GeoExt, and its code and documentation will be published under a free license to enable its reuse in other projects.
Abstract:
This project carries out research both on finding predictors via clustering techniques and on reviewing free Data Mining software. The research is based on a case study from which, in addition to the KDD free software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, as the data from which these predictors have to be inferred are student marks from different e-learning environments. Through our case study, not only are clustering algorithms tested but additional goals are also proposed.
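As an illustration of the kind of clustering such predictors rely on, here is a minimal one-dimensional k-means sketch in plain Python; the grades and number of clusters are invented for the example, and the project's actual algorithms and data are not reproduced here:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-means, e.g. grouping student marks into k bands."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # Recompute centres; keep the old one if a cluster went empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

grades = [3.1, 3.4, 5.0, 5.2, 8.8, 9.1, 9.4]
print(kmeans(grades, 3))
```

The resulting cluster centres can then serve as prototypes (e.g. "low", "medium", "high" performance bands) from which predictors for new students are derived.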
Abstract:
The aim of this study is to propose a new quantitative approach to assessing the quality of Open Access university institutional repositories. The results of this new approach are tested on the Spanish university repositories. The assessment method is based on a binary codification of a proposed set of features that objectively describe the repositories. The purposes of this method are assessing quality and providing a nearly automatic system for updating the data on these characteristics. First of all a database was created with the 38 Spanish institutional repositories. The variables of analysis are presented and explained, whether they come from the bibliography or are a set of new variables. Among the characteristics analyzed are the features of the software, the services of the repository, the features of the information system, Internet visibility and the licenses of use. Results from the Spanish universities are provided as a practical example of the assessment and to give a picture of the state of development of the open access movement in Spain.
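The binary codification idea can be sketched as follows; the feature names below are invented for illustration and are not the paper's actual variable set:

```python
def repository_score(features):
    """Quality score: fraction of desirable features present,
    with each feature coded 1 = present, 0 = absent."""
    return sum(features.values()) / len(features)

# Hypothetical binary feature vector for one repository.
repo = {
    "oai_pmh_interface": 1,
    "cc_licensing": 1,
    "usage_statistics": 0,
    "persistent_identifiers": 1,
}
print(repository_score(repo))  # 0.75
```

Because each variable is binary and objectively observable, re-checking the features (manually or by crawling) is enough to refresh the whole assessment, which is what makes the near-automatic updating feasible.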
Abstract:
The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems, and which allows solving specific problems of prediction and recommendation. In this work, we propose to extract the user's scale of human values from the user database, to improve their suitability in open environments such as recommender systems. For this purpose, the methodology is applied to the data of the user after interacting with the system. The methodology is illustrated with a case study.
Abstract:
This paper investigates the effects of government spending on the real exchange rate and the trade balance in the US using a new VAR identification procedure based on spending forecast revisions. I find that the real exchange rate appreciates and the trade balance deteriorates after a government spending shock, although the effects are quantitatively small. The findings broadly match the theoretical predictions of the standard Mundell-Fleming model and differ substantially from those existing in the literature. The differences are attributable to the fact that, because of fiscal foresight, government spending is non-fundamental for the variables typically used in open economy VARs. Here, on the contrary, the estimated shock is fundamental.