762 results for Scrum, Scrum Best Practices, Visual Studio Online, Agile
Abstract:
The aim of this work is to understand how novice teachers who graduated from the history teacher-training programmes of the UNGS (Universidad Nacional de General Sarmiento) and of ISFD No. 42 (Instituto Superior de Formación Docente) enter the teaching profession and, in that context, how they teach history. Novice or beginning teachers are understood as those in their first three years of professional practice. The information presented here was gathered through a qualitative study carried out jointly by the IDH (Instituto del Desarrollo Humano) of the Universidad Nacional de General Sarmiento and the Instituto Superior de Formación Docente No. 42 of San Miguel, province of Buenos Aires. The project, conducted between 2008 and 2010, was directed by Dr. Gabriela Diker, who at the time was coordinator of the teacher-training area of the IDH at UNGS. Questions such as: what difficulties do beginning teachers face?, what responses do they develop to those difficulties?, how do they carry out the teaching of history?, what resources and materials do they use?, and how do their training trajectories influence the practices they develop? serve as a guide for examining the teaching trajectory in this initial period, in order to characterise and analyse it and, finally, to reassess the scope and limits of the information provided by the study. A point of particular interest is the comparison of the training paths of the graduates of the two institutions, bearing in mind that one group trained at a university while the other comes from a tertiary (non-university) teacher-training institute. The goal is to identify whether the training institutions exert an influence that makes it possible to distinguish, comparatively, the practices of one group from those of the other.
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world's oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous pH measurements made during the 2013 expedition with a Satlantic SeaFET instrument connected to the flow-through system. Data calibration was performed according to Bresnahan et al. (2014), using spectrophotometric pH measurements on discrete samples (Clayton and Byrne, 1993). pH_internal values were used to calibrate the data (rather than pH_external) because of the better calibration coefficient (there was no trend associated with it). The equations of Clayton and Byrne (1993) were used to compute pH from the measured absorbance values at the temperature of measurement. The data were converted to in situ temperature using the "CO2-sys" program, which can be downloaded from http://cdiac.ornl.gov/ftp/co2sys/.
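As a hedged illustration of the calibration chain described above (not part of the dataset documentation), the sketch below computes a spectrophotometric pH value from m-cresol purple absorbance ratios following the general form of the Clayton and Byrne (1993) equation. The indicator constants are exposed as parameters; the defaults shown here are only commonly quoted approximations and should be verified against the original publication.

```python
import math

def spectrophotometric_ph(a_578, a_434, temp_kelvin, salinity,
                          e1=0.00691, e2=2.2220, e3=0.1331):
    """pH (total scale) from m-cresol purple absorbances at 578 and 434 nm.

    General form of Clayton and Byrne (1993):
        pH = pK2 + log10((R - e1) / (e2 - R * e3)),  with R = A578 / A434
    The default e1/e2/e3 values and the pK2 expression below are commonly
    quoted approximations; verify them against the original paper.
    """
    r = a_578 / a_434
    # Indicator dissociation constant as a function of temperature (K) and salinity
    pk2 = 1245.69 / temp_kelvin + 3.8275 + 0.00211 * (35.0 - salinity)
    return pk2 + math.log10((r - e1) / (e2 - r * e3))

# Example: a discrete sample measured at 25 degC and salinity 35
print(round(spectrophotometric_ph(0.85, 0.55, 298.15, 35.0), 3))
```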
Abstract:
PURPOSE To evaluate the effect of the vitreomacular interface (VMI) on treatment efficacy of intravitreal therapy in uveitic cystoid macular oedema (CME). METHODS Retrospective analysis of CME resolution, CME recurrence rate and the monthly course of central retinal thickness (CRT), retinal volume (RV) and best corrected visual acuity (BCVA) after intravitreal injection with respect to the VMI configuration on spectral-domain OCT, using the chi-squared test and repeated-measures ANOVA adjusted for the confounding covariates epiretinal membrane, administered drug and subretinal fluid. RESULTS Fifty-nine eyes of 53 patients (mean age: 47.4 ± 16.9 years) were included. VMI status had no effect on complete CME resolution rate (p = 0.16, corrected p-value: 0.32), time until resolution (p = 0.09, corrected p-value: 0.27) or CME relapse rate (p = 0.29, corrected p-value: 0.29). Change over time did not differ among the VMI configuration groups for BCVA (p = 0.82) or RV (p = 0.18), but the CRT decrease was greater and faster in the posterior vitreous detachment (PVD) group compared with the posterior vitreous attachment (PVA) and vitreomacular adhesion (VMA) groups (p = 0.04). Also, the percentage of patients experiencing a ≥ 20% CRT decrease after intravitreal injection was greater in the PVD group (83%) than in the VMA (64%) and PVA (16%) groups (p = 0.027), although not after correction for multiple testing (corrected p-value: 0.11). CONCLUSION The VMI configuration seems to be a factor contributing to treatment efficacy in uveitic CME in terms of CRT decrease, although the BCVA outcome did not differ according to VMI status.
Abstract:
AIM To report the finding of extension of the 4th hyper-reflective band and retinal tissue into the optic disc in patients with cavitary optic disc anomalies (CODAs). METHODS In this observational study, 10 patients (18 eyes) with sporadic or autosomal dominant CODA were evaluated with enhanced depth imaging optical coherence tomography (EDI-OCT) and colour fundus images for the presence of 4th hyper-reflective band extension into the optic disc. RESULTS Of the 10 CODA patients (18 eyes), five patients (8 eyes) showed a definite 4th hyper-reflective band (presumed retinal pigment epithelium (RPE)) extension into the optic disc. In these five patients (seven eyes), the inner retinal layers also extended with the 4th hyper-reflective band into the optic disc. Best corrected visual acuity ranged from 20/20 to 20/200. In three patients (four eyes), retinal splitting/schisis was present, and in two patients (two eyes) the macula was involved. In all cases, the 4th hyper-reflective band extended far beyond the termination of the choroid into the optic disc. The RPE extension was found either temporally or nasally in areas of optic nerve head excavation, most often adjacent to peripapillary pigment. Compared with eyes without RPE extension, eyes with RPE extension were more myopic (mean dioptres -0.9±2.6 vs -8.8±5, p=0.043). CONCLUSIONS The RPE usually stops near the optic nerve border, separated from it by border tissue. With CODA, extension of this hyper-reflective band and retinal tissue into the disc is possible and is best evaluated using EDI-OCT or analogous imaging modalities. Whether this finding is specific to CODA, linked to specific gene loci, or also seen in patients with other optic disc abnormalities needs further evaluation.
Abstract:
production, during the summer of 2010. This farm is integrated in the Spanish research network for sugar beet development (AIMCRA), which, regarding irrigation, focuses on maximising water savings and cost reduction. From AIMCRA's perspective of promoting irrigation best practices, it is essential to understand the soil's response to irrigation, i.e. the maximum irrigation length for each soil infiltration capacity. The use of humidity sensors provides a basis for assessing the soil's behaviour during irrigation events and, therefore, for establishing boundaries on irrigation length and irrigation interval. In order to understand to what extent the farmer's performance at the Tordesillas farm could have been improved, this study aims to determine suitable irrigation lengths and intervals for the given soil properties and evapotranspiration rates. To this end, several humidity sensors were installed: (1) a Frequency Domain Reflectometry (FDR) EnviroScan probe taking readings at 10, 20, 40 and 60 cm depth, and (2) different Time Domain Reflectometry (TDR) Echo 2 and CR200 probes buried in a 50 cm x 30 cm x 50 cm pit and placed along the walls at 10, 20, 30 and 40 cm depth. Moreover, in order to define the soil properties, a textural analysis of the Tordesillas farm soil was conducted. Data from the Tordesillas meteorological station were also used.
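As a purely illustrative heuristic (not the analysis performed in the study; thresholds and readings are invented for the example), the sketch below shows how time series from sensors at increasing depths could be used to detect when the wetting front reaches each depth, which is the kind of information needed to bound the irrigation length.

```python
def wetting_front_arrival(readings, depth_cm, threshold=2.0):
    """Return the time step at which the sensor at `depth_cm` first rises more
    than `threshold` volumetric-percent points above its pre-irrigation value,
    i.e. when the wetting front reaches that depth. None if it never does."""
    series = readings[depth_cm]
    baseline = series[0]
    for step, value in enumerate(series):
        if value - baseline > threshold:
            return step
    return None

# Hypothetical FDR readings (volumetric water content, %) every 30 minutes
readings = {
    10: [18, 26, 31, 32, 32, 31],
    20: [19, 20, 27, 30, 31, 31],
    40: [20, 20, 21, 25, 28, 29],
    60: [22, 22, 22, 22, 23, 26],
}

for depth in sorted(readings):
    step = wetting_front_arrival(readings, depth)
    print(f"{depth} cm: wetting front arrives at step {step}")
# If the deepest sensor responds while water is still being applied, the event
# exceeds what the root zone can store and the irrigation length should be shortened.
```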
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence of the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects taught at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognisers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
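As a minimal illustration of such a POS-tagging module (independent of any tool discussed in this work), the snippet below uses the off-the-shelf NLTK tagger; resource names may vary slightly between NLTK versions.

```python
import nltk

# One-off downloads of the tokenizer and tagger models (names per recent NLTK releases)
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Linguistic annotation tools are important assets."
tokens = nltk.word_tokenize(sentence)

# Each token is paired with its part-of-speech tag (Penn Treebank tagset)
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ...]
```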
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs in order to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e. the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
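To make this combination idea concrete, here is a toy, purely illustrative sketch (not part of any of the tools discussed here) in which the outputs of three hypothetical POS taggers annotating the same level are merged by majority vote, so that an error made by one tagger can be outvoted by the other two:

```python
from collections import Counter

def combine_by_majority(annotations):
    """Merge token-aligned POS annotations from several taggers.

    `annotations` is a list of lists, one per tagger, each containing
    (token, tag) pairs for the same tokenised text. Ties fall back to
    the first tagger's tag.
    """
    combined = []
    for columns in zip(*annotations):
        token = columns[0][0]
        tags = [tag for _, tag in columns]
        most_common_tag, count = Counter(tags).most_common(1)[0]
        chosen = most_common_tag if count > 1 else tags[0]
        combined.append((token, chosen))
    return combined

# Three (hypothetical) taggers disagree on the tag of "annotating"
tagger_a = [("they", "PRP"), ("are", "VBP"), ("annotating", "VBG")]
tagger_b = [("they", "PRP"), ("are", "VBP"), ("annotating", "NN")]
tagger_c = [("they", "PRP"), ("are", "VBP"), ("annotating", "VBG")]

print(combine_by_majority([tagger_a, tagger_b, tagger_c]))
# [('they', 'PRP'), ('are', 'VBP'), ('annotating', 'VBG')]
```

Such a combination, of course, presupposes that all the tools involved annotate the same level and share (or can be mapped to) a common annotation schema, which leads to the points below.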
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied so far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets were enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modelling solutions for the different types of variants are also provided.
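As an illustration of the kind of modelling solution referred to above, the hedged sketch below uses rdflib to describe one lexical entry with a canonical form and an orthographic variant in the core OntoLex-Lemon vocabulary; the concrete URIs and the choice of properties are illustrative assumptions, not the classification proposed in the paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/lexicon/")   # illustrative namespace, not a real dataset

g = Graph()
g.bind("ontolex", ONTOLEX)

entry = EX["colour-n"]
canonical = EX["form-colour"]
variant = EX["form-color"]

g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
# Canonical (preferred) written form
g.add((entry, ONTOLEX.canonicalForm, canonical))
g.add((canonical, ONTOLEX.writtenRep, Literal("colour", lang="en-GB")))
# Orthographic variant recorded as an additional form of the same entry
g.add((entry, ONTOLEX.otherForm, variant))
g.add((variant, ONTOLEX.writtenRep, Literal("color", lang="en-US")))
# Both forms denote the same ontology concept
g.add((entry, ONTOLEX.denotes, URIRef("http://example.org/ontology/Colour")))

print(g.serialize(format="turtle"))
```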
Abstract:
The integration of correlation processes into design systems aims at taking 3D measurements directly and according to the user's criteria, in order to generate the database required for the development of the project. In the photogrammetric phase, internal and external orientation parameters are calculated and stereo models are created from standard images. These are integrated into the system, where the selected items are measured by applying the correlation algorithms developed. The processing stage provides tools to carry out the calculations easily and automatically, as well as image-measurement techniques to acquire the most accurate information possible. The proposed software is developed on the Visual Studio platform for PC, applying the codes and symbols best suited to the terms of reference required for the design. Generating the database interactively, together with the geometric study of the structures, facilitates and improves the quality of the project work.
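The correlation algorithms themselves are not specified in the abstract; as a hedged illustration of the general technique, the sketch below implements plain normalised cross-correlation between a template patch and a search window, the basic operation behind matching homologous points in a stereo pair.

```python
import numpy as np

def normalised_cross_correlation(patch, window):
    """NCC score between a template patch and an equally sized window.

    Scores range from -1 to 1; the candidate window with the highest score
    is taken as the homologous area in the other image of the stereo pair.
    """
    p = patch.astype(float) - patch.mean()
    w = window.astype(float) - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return float((p * w).sum() / denom) if denom else 0.0

def best_match(template, image):
    """Slide the template over the image; return (best score, (row, col))."""
    th, tw = template.shape
    ih, iw = image.shape
    best = (-2.0, (0, 0))
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = normalised_cross_correlation(template, image[y:y + th, x:x + tw])
            best = max(best, (score, (y, x)))
    return best

# Tiny synthetic example: locate a 3x3 patch inside a 6x6 image
image = np.arange(36, dtype=float).reshape(6, 6)
template = image[2:5, 1:4].copy()
print(best_match(template, image))   # perfect match (score 1.0) at row 2, col 1
```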
Abstract:
Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allows scientists to do new experiments faster, since the workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and VisTrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities observed in workflows (Data-Operation motifs) and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful for identifying the functionality of the steps in a given workflow, for developing best practices for workflow design, and for developing approaches for the automated generation of workflow abstractions.
Abstract:
The rapid advance of computing and telecommunications in recent decades has invariably affected production and the provision of services, education, industry, medicine, communications and even interpersonal relationships. Despite this progress, and despite the growing contribution of software to today's world, software development continually runs into the same kinds of problems, which cause systematic delays in delivery, budget overruns, delivery with a high error rate, and usefulness below expectations. To a large extent, these problems are attributable to defects in the processes used to elicit, document, agree upon and modify the system requirements. Requirements are the foundations on which a software product is built; nevertheless, the inability to manage changes to them is one of the main reasons why a software product is delivered late, exceeds its budget and fails to meet the quality expected by the customer. This research work has identified the need for methodologies that help deploy a Requirements Management process in small groups and work environments or in small and medium-sized enterprises. For the purposes of this thesis, we call this type of organisation a Small-Setting. The objective of this doctoral thesis is to develop a metamodel that allows, on the one hand, the implementation and deployment of the Requirements Management process in a natural way and at low cost and, on the other hand, the development of mechanisms for its continuous improvement. This metamodel is supported by tools that make it possible to maintain a library of process assets for Requirements Management and, at the same time, to provide templates for implementing the process from previously defined assets. The metamodel includes practices and activities to guide, step by step, the implementation of the Requirements Management process in a Small-Setting, using a process model as a reference and a process asset library as the main supporting tool. Keeping process assets well organised, indexed and easily accessible facilitates the introduction of best practices within an organisation. ABSTRACT The fast growth of computer science and telecommunications in recent decades has invariably affected the provision of products and services in education, industry, healthcare, communications and also interpersonal relationships. In spite of such progress and the active role of software in the world, its development and production continually incur the same types of problems, which cause systematic delivery delays, budget overruns and a high error rate, so that the software's usefulness is lower than expected. These problems are largely attributable to defects in the processes used to identify, document, organise and track all of the system's requirements. It is generally accepted that requirements are the foundation upon which the software process is built; however, the inability to manage changes in requirements is one of the principal factors that contribute to delays in the software development process, which in turn may cause customer dissatisfaction.
The present research work has identified the need for appropriate methodologies to support the requirements management process in organisations categorised as small and medium-sized enterprises, small groups within large companies, or small projects. For the purposes of this work, these organisations are named Small-Settings. The main goal of this research work is to develop a metamodel to manage the requirements process using a Process Asset Library (PAL) and to provide predefined tools and assets to help with the implementation of the process. The metamodel includes the development of practices and activities to guide, step by step, the deployment of the requirements management process in Small-Settings. Keeping assets organised, indexed and readily available is a main factor in the success of the organisation's process improvement effort and facilitates the introduction of best practices within the organisation. The Process Asset Library (PAL) thus becomes a repository of information used to keep and make available all the process assets that are useful to those who define, implement and manage processes in the organisation.
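To make the role of the Process Asset Library more tangible, the following sketch is a hypothetical illustration (not the tooling developed in the thesis): a minimal in-memory PAL that indexes assets by keyword so that previously defined checklists and templates can be found and reused when deploying the process.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessAsset:
    name: str
    kind: str                      # e.g. "policy", "procedure", "template", "checklist"
    keywords: set = field(default_factory=set)
    body: str = ""

class ProcessAssetLibrary:
    """A tiny, illustrative Process Asset Library (PAL) indexed by keyword."""

    def __init__(self):
        self._assets = {}
        self._index = {}           # keyword -> set of asset names

    def add(self, asset: ProcessAsset):
        self._assets[asset.name] = asset
        for kw in asset.keywords:
            self._index.setdefault(kw, set()).add(asset.name)

    def find(self, keyword: str):
        """Return all assets indexed under the given keyword."""
        return [self._assets[n] for n in self._index.get(keyword, set())]

pal = ProcessAssetLibrary()
pal.add(ProcessAsset("req-elicitation-checklist", "checklist",
                     {"requirements", "elicitation"},
                     "1. Identify stakeholders\n2. Record requirement sources\n3. ..."))
pal.add(ProcessAsset("change-request-template", "template",
                     {"requirements", "change"},
                     "Change ID: ...\nAffected requirements: ...\nImpact: ..."))

for asset in pal.find("requirements"):
    print(asset.kind, "-", asset.name)
```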
Abstract:
This project presents a study of tools that facilitate the creation and distribution of applications on different mobile platforms, with the aim of selecting the most appropriate tool for the project to be developed. Prior to the study of cross-platform development tools, the tools and methodologies provided by the owners of the iOS and Android environments are examined. This preliminary study allows the reader to learn in more detail the particularities of each of these two environments, as well as the guidelines and best practices to follow when developing applications for mobile devices. Once the study is complete, the reader will know how to choose a development tool suited to each project according to its purpose, the available resources and the skills of the members of the development team. In addition to the study, and as an example of its application, the project includes a practical case of tool selection and of applying the selected tool to a specific development project. The practical case consists of creating an environment for generating applications for viewing course notes. The applications display multimedia content such as text files, sounds, images, videos and links to external content. Moreover, these applications are generated without their author having to modify a single line of code. To this end, a set of configuration files has been defined in which the author of the application indicates the contents to be shown and their location. Open-source resources were selected for the development of the practical case in order to avoid the costs associated with possible licences. The development team for the practical case consists solely of the author of this final-degree project, which makes the case study a simple development, so that its future maintenance and scalability should not be affected by the need for development teams with specific or complex expertise. ABSTRACT. This document contains a study of tools that ease the creation and distribution of applications across different mobile platforms. The objective of this document is to allow the selection of the most appropriate tool, depending on the development objectives. Prior to this study of tools for developing on multiple platforms, a study of the iOS and Android tools and their methodologies is included in this document. This preliminary analysis will allow the reader to know in more detail the peculiarities of each of these environments, together with their requirements and the best practices for application development on mobile devices. By the end of this document the reader will be able to choose the adequate development tool for a project depending on its objective, its available resources and the development team's capabilities. Besides this study, and as an example case study, this final project includes a practical case of tool selection and its application to a specific development. The case study consists in the creation of an environment that allows generating applications to visualise notes. These applications will allow viewing multimedia content such as text files, sounds, images, videos and links to external content.
Furthermore, these applications will be generated without their author having to modify any line of code, because a group of configuration files is defined for that purpose. The author of the application only has to update this configuration with the content to be shown by the application and its location. The resources selected for the case study were of the "open source" type in order to avoid the costs associated with potential licences. The developers' team for this case study has only one member: the author of this final project document, who also developed the practical case. As a result, the case study is a very simple development, so that future maintenance and scalability should not depend on highly qualified developer teams with very specific knowledge of mobile platform development.
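As a hedged illustration of this configuration-driven approach (the file layout and field names below are invented for the example, not taken from the project), a content author could describe the notes in a small configuration structure, and the application would simply render whatever it finds there:

```python
import json

# Hypothetical contents of a configuration file the author edits;
# no application code needs to change when entries are added or removed.
CONFIG_JSON = """
{
  "course": "Sample course notes",
  "items": [
    {"type": "text",  "title": "Unit 1 summary", "path": "notes/unit1.txt"},
    {"type": "image", "title": "Diagram",        "path": "media/diagram.png"},
    {"type": "video", "title": "Lecture clip",   "path": "media/lecture.mp4"},
    {"type": "link",  "title": "External reference", "url": "https://example.org/reading"}
  ]
}
"""

def load_content(config_text):
    """Parse the author's configuration and return the course name and items to display."""
    config = json.loads(config_text)
    return config["course"], config["items"]

course, items = load_content(CONFIG_JSON)
print(course)
for item in items:
    location = item.get("path") or item.get("url")
    print(f"- [{item['type']}] {item['title']} -> {location}")
```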
Abstract:
Purpose – Linked data is gaining great interest in the cultural heritage domain as a new way of publishing, sharing and consuming data. The paper aims to provide a detailed method, and MARiMbA, a tool, for publishing linked data out of library catalogues in the MARC 21 format, along with their application to the catalogue of the National Library of Spain in the datos.bne.es project. Design/methodology/approach – First, the background of the case study is introduced. Second, the method and the process of its application are described. Third, each of the activities and tasks is defined and a discussion of their application to the case study is provided. Findings – The paper shows that the FRBR model can be applied to MARC 21 records following linked data best practices, that librarians can successfully participate in the process of linked data generation following a systematic method, and that the quality of data sources can be improved as a result of the process. Originality/value – The paper proposes a detailed method for publishing and linking linked data from MARC 21 records, provides practical examples, and discusses the main issues found in its application to a real case. It also proposes the integration of a data curation activity and the participation of librarians in the linked data generation process.
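As a hedged, highly simplified illustration of the kind of transformation involved (the actual datos.bne.es mapping and vocabularies are richer and are described in the paper), the sketch below turns a couple of MARC 21 fields of one record into RDF triples with rdflib; the field-to-property mapping and the namespaces used here are assumptions made for the example only.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Illustrative namespaces; the real datos.bne.es project defines its own vocabulary
EX = Namespace("http://example.org/resource/")
EXP = Namespace("http://example.org/property/")

# A flattened toy MARC 21 record: tag -> subfield values
marc_record = {
    "001": "bimo0001",                                # control number
    "100": {"a": "Cervantes Saavedra, Miguel de"},    # main entry, personal name
    "245": {"a": "Don Quijote de la Mancha"},         # title statement
}

g = Graph()
resource = EX[marc_record["001"]]
g.add((resource, RDF.type, EXP.BibliographicResource))
g.add((resource, EXP.title, Literal(marc_record["245"]["a"], lang="es")))
g.add((resource, EXP.creator, Literal(marc_record["100"]["a"])))

print(g.serialize(format="turtle"))
```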