15 results for "Best practices of transformation"
at Universidad Politécnica de Madrid
Abstract:
In Chile, over the last three decades there has been a strong decentralization process whose main objective has been to improve school management through the transfer of educational responsibilities and resources, and thus to improve learning outcomes and reduce equity gaps between schools and territories. As a result, the professional profile of school principals has evolved from an administrative to a management approach, in which principals have become managers of educational projects. Starting from a competence model for school leaders based on IPMA guidelines, this article presents an analysis of best practices for school management, establishing a link between competencies and school management from the perspective of project management. Results showed that the different competence elements have relative weights depending on the practice field, and that this analysis can be considered a strategic element in educational project planning and development.
Abstract:
This project studies tools that ease the creation and distribution of applications across different mobile platforms, with the aim of selecting the most appropriate tool for a given development project. Prior to the study of cross-platform tools, the tools and methodologies provided by the owners of the iOS and Android environments are examined. This preliminary analysis lets the reader learn in more detail the particularities of each of these two environments, as well as the guidelines and best practices to follow when developing applications for mobile devices. By the end of the document, the reader will be able to choose a development tool suited to each project according to its objective, the available resources, and the skills of the development team. In addition to the study, and as a worked example, the project includes a practical case of tool selection and the application of the selected tool to a specific development. The case study consists of creating an environment that generates applications for viewing course notes. These applications display multimedia content such as text files, sounds, images, videos, and links to external content. Furthermore, the applications are generated without their author having to modify a single line of code: a set of configuration files is defined in which the author indicates the content to show and its location. Open-source resources were selected for the case study in order to avoid the costs associated with licenses.
The development team for the case study consists solely of the author of this final degree project, which keeps the development simple, so that future maintenance and scalability should not depend on developer teams with specific or complex knowledge of mobile platform development.
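The configuration-driven approach described above, where the author lists contents and locations in configuration files rather than editing code, could look something like the following minimal sketch. The file format, field names, and paths are all hypothetical illustrations, not taken from the actual project:

```python
import json

# Hypothetical configuration file: the application author lists the
# contents to display and their locations, without touching any code.
CONFIG = """
{
  "title": "Course notes",
  "items": [
    {"type": "text",  "label": "Syllabus",   "path": "notes/syllabus.txt"},
    {"type": "image", "label": "Diagram 1",  "path": "media/diagram1.png"},
    {"type": "video", "label": "Lecture 1",  "path": "media/lecture1.mp4"},
    {"type": "link",  "label": "References", "path": "https://example.org/refs"}
  ]
}
"""

def load_contents(raw):
    """Parse the configuration and return the list of items to render."""
    config = json.loads(raw)
    return [(item["type"], item["label"], item["path"]) for item in config["items"]]

for kind, label, path in load_contents(CONFIG):
    print(f"{kind:>5}: {label} -> {path}")
```

The generator would then read such a file and produce the application, so that adding or removing content only requires editing the configuration.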
Abstract:
Run-off-the-road crashes to the left on high-capacity roads are a problem that, besides the dramatic situations they cause, imposes high costs on society. Special attention should therefore be paid to the design of medians and to the placement of safety barriers in them, with the aim of preventing this type of accident and limiting the consequences of those that still occur. Highway medians are usually designed by almost systematically applying the minimum parameters required by regulations, generally with safety barriers attached to or very close to one of the inside shoulders. However, both Spanish technical recommendations and the international literature recommend carrying out an economic study of alternatives before installing barriers and, if their installation is justified, moving them away from the carriageway and placing them close to the centerline of the median.
This thesis analyzes the advantages and limitations of placing the safety barrier close to the centerline of the median. The behavior of vehicles traveling across the median has been investigated, and the thesis shows how barriers were installed on the A40 highway, stretch Villarrubia de Santiago - Santa Cruz de la Zarza, highlighting the most novel and striking aspects, which nonetheless conform to best practices in road safety and to the applicable regulations.
Abstract:
The dissertation argues that the practice of architecture can find instrumental support in communicative practices, going beyond any classical simplification of the use of media as a merely superficial, post-produced, or simply promotional application. Starting from this premise, cases from the last quarter of the 20th century are presented. Threats such as the risk of trivialization, the saturation of the public image, or the foreseeable incorrect association with other individuals in group or thematic presentations appear to have encouraged a notable increase in the control architects take over their media opportunities. In other words, architecture seems to have begun to master something inevitable: exhibition formats and publications, or rather the acts of (self-)exhibiting and (self-)publishing, are tools available for some kind of intellectual management of the communication and information circulating about the discipline itself. This practice of self-edition is analyzed in a specific period of the trajectory of OMA (Office for Metropolitan Architecture), an office considered a pioneer in the efficient, opportunistic, and personalized use of media.
The second part of the thesis dissects the office's well-known monograph S,M,L,XL (1995), a volume in whose edition its protagonists were deeply involved and whose production process had scarcely been investigated. The publication marked a turning point in its genre, disrupting earlier formats and restrictions, and has become an emblematic volume for the discipline that no later replica has managed to surpass.
The book is also presented here as the trigger of a great event that concludes in the transformation of OMA's identity over ten years, paradoxically framed by the birth of the Groszstadt Foundation and the start of AMO's activity, two key parallel entities attached to OMA. This position emerges from how the research reveals that S,M,L,XL is one more piece, central but not independent, within a sum of actions and individuals, as well as other publications, exhibitions, events, essayed articles, and projects, in particular Bigness, Generic City, Euralille, and the competitions of 1989. Notable aspects include the openness to multiple authorship, headed by Rem Koolhaas and the graphic designer Bruce Mau, accompanied in the acknowledgements by the editor Jennifer Sigler and nearly a hundred names whose contributions are not necessarily based on building fragments of the book. The dissolution of certain limits also made it possible to go beyond the tasks initially considered central to editing a publication. A general goal of the thesis is also to reflect on previously questioned relations, such as that between architecture and markets or the economy. Taking as a starting point the idea of design intelligence suggested by Michael Speaks (2001), the thesis extracts from his arguments that the essential point is finding the singularity, or particular intelligence, of each architecture or design office. It also explores whether the construction of such masterful formulas housed interesting and productive combinations of issues such as efficiency and creativity, or organization and ideas.
In this dynamic of bidirectional relations, and in the present excess of information, the thesis proposes a more evident equivalence between the socialization of the architect's work, by sharing it publicly and opening new conversations, and the inverse relation arising from working on socialization itself, as if awareness of the use of media could indeed be instrumental and contribute to the development of the practice of architecture from an ideally committed and intellectual perspective.
Abstract:
production, during the summer of 2010. This farm is integrated into the Spanish research network for sugar beet development (AIMCRA), which, regarding irrigation, focuses on maximizing water saving and cost reduction. From AIMCRA's perspective of promoting irrigation best practices, it is essential to understand soil response to irrigation, i.e. the maximum irrigation length for each soil infiltration capacity. The use of humidity sensors provides the foundations for addressing the soil's behavior during irrigation events and, therefore, for establishing the boundaries on irrigation length and irrigation interval. In order to understand to what extent the farmer's performance at the Tordesillas farm could have been improved, this study aims to determine suitable irrigation lengths and intervals for the given soil properties and evapotranspiration rates. To this end, several humidity sensors were installed: (1) a Frequency Domain Reflectometry (FDR) EnviroScan probe taking readings at 10, 20, 40 and 60 cm depth, and (2) different Time Domain Reflectometry (TDR) Echo 2 and Cr200 probes buried in a 50 cm x 30 cm x 50 cm pit and placed along the walls at 10, 20, 30 and 40 cm depth. Moreover, in order to define soil properties, a textural analysis of the Tordesillas farm was conducted, and data from the Tordesillas meteorological station was used.
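The relationship the study draws between infiltration capacity, irrigation length, and irrigation interval can be sketched as a back-of-the-envelope calculation. All numeric values below are invented examples, not measurements from the Tordesillas farm:

```python
# Illustrative sketch: bounding irrigation event length by soil infiltration
# capacity, and irrigation interval by crop evapotranspiration (ET).
# All figures are made-up examples, not data from the study.

def max_irrigation_length_h(application_rate_mm_h,
                            infiltration_capacity_mm_h,
                            allowable_depth_mm):
    """Hours per irrigation event to deliver the allowable depth.

    The effective rate is capped by the infiltration capacity: water
    applied faster than it infiltrates is lost as runoff.
    """
    effective_rate = min(application_rate_mm_h, infiltration_capacity_mm_h)
    return allowable_depth_mm / effective_rate

def irrigation_interval_days(allowable_depth_mm, et_rate_mm_day):
    """Days until evapotranspiration depletes the readily available water."""
    return allowable_depth_mm / et_rate_mm_day

# Example: sprinklers apply 8 mm/h, the soil infiltrates at most 10 mm/h
# and holds 30 mm of readily available water, crop ET is 6 mm/day.
print(max_irrigation_length_h(8, 10, 30))  # 3.75 h per event
print(irrigation_interval_days(30, 6))     # 5.0 days between events
```

In the study itself these boundaries come from the humidity-sensor readings rather than assumed constants; the sketch only shows why the two quantities bound the irrigation schedule.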
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several academic subjects at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
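A model of this kind, indicators with evaluation rules per acquisition level, can be pictured as a simple data structure. The competence name, indicator wording, and level descriptions below are invented examples, not UPM's actual model:

```python
# Illustrative structure for a competence rubric: each indicator carries
# one evaluation rule per acquisition level. All names and descriptions
# are hypothetical examples.
RUBRIC = {
    "competence": "problem solving",
    "indicators": [
        {
            "indicator": "identifies the relevant data of the problem",
            "levels": {
                1: "lists some data, mixed with irrelevant information",
                2: "lists all relevant data",
                3: "lists all relevant data and justifies the selection",
            },
        },
    ],
}

def evaluate(indicator, level):
    """Return the evaluation rule for a given indicator and acquisition level."""
    return indicator["levels"][level]

print(evaluate(RUBRIC["indicators"][0], 2))  # lists all relevant data
```

Storing the rubric as data rather than prose makes it straightforward to publish on a site such as the Website for Educational Innovation and reuse it across subjects.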
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and intelligently will include at least a module for POS tagging. The more an application needs to understand the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging.
This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
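The idea of combining annotations produced for a common level in order to correct individual tool errors can be illustrated with a simple majority vote over aligned tag sequences. The tagger names, tag set, and alignment assumption here are invented for the example and are not the combination mechanism the work itself specifies:

```python
from collections import Counter

def combine_pos(annotations):
    """Majority vote over per-token tags produced by several POS taggers.

    `annotations` maps a (hypothetical) tagger name to its tag sequence;
    all sequences are assumed to be aligned token by token.
    """
    sequences = list(annotations.values())
    combined = []
    for token_tags in zip(*sequences):
        # Keep the tag most taggers agree on for this token.
        tag, _count = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Three invented taggers disagree on the second token of "time flies fast":
votes = {
    "tagger_a": ["NOUN", "VERB", "ADV"],
    "tagger_b": ["NOUN", "NOUN", "ADV"],
    "tagger_c": ["NOUN", "VERB", "ADV"],
}
print(combine_pos(votes))  # ['NOUN', 'VERB', 'ADV']
```

Even this naive scheme shows why the tools must first share a tag schema: votes can only be counted once the tags of the different tools mean the same thing, which is precisely the interoperability problem stated in (3).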
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based <Subject, Predicate, Object> triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax.
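Requirement (3) above, structuring annotations as ontology-based <Subject, Predicate, Object> triples, can be pictured with a minimal sketch. The vocabulary URIs and property names below are invented placeholders and do not correspond to OntoTag's actual ontology terms:

```python
# Minimal sketch of an ontology-based <Subject, Predicate, Object>
# annotation for one lexical unit. All URIs are invented placeholders,
# not OntoTag's real vocabulary.
ONT = "http://example.org/ling#"

def annotate(word_id, pos, lemma):
    """Return the annotation of one lexical unit as a list of triples."""
    subject = f"{ONT}{word_id}"
    return [
        (subject, f"{ONT}hasPOS", f"{ONT}{pos}"),    # ontological term as object
        (subject, f"{ONT}hasLemma", lemma),          # literal value as object
    ]

triples = annotate("w1_running", "Verb", "run")
for s, p, o in triples:
    print(s, p, o)
```

Because every element of the annotation is a term in a shared vocabulary, such triples can be serialised directly in Semantic Web languages like RDF(S) or OWL, which is what makes the model suitable for the Semantic Web.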
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based <Subject, Predicate, Object> triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:
Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the <Unit, Attribute, Value> triple structure recommended for annotations in these works (which is isomorphic to the <Subject, Predicate, Object> triple structure used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).
Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.
Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp), LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation, that is, the morphosyntactic, the syntactic and the semantic levels.
Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and these had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).
3.
MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES
The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and of OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.
H.1: The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. CONFIRMED by the development of:
- OntoTag's annotation scheme;
- OntoTag's annotation architecture;
- OntoTagger's (XML, RDF, OWL) annotation schemas;
- OntoTagger's configuration.
H.2: Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.
H.3: Standardisation should ease (H.3.1) the interoperation of linguistic tools, as well as (H.3.2) the comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. CONFIRMED by means of the development of OntoTagger's ontology-based configuration:
- interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger);
- integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
- integration of morphosyntactic, syntactic and semantic annotations.
H.4: Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas.
H.5: The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even if these other tools were built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach. CONFIRMED by the results yielded by the evaluation of OntoTagger.
H.6: Each linguistic level can be managed and annotated independently. REJECTED by OntoTagger's experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 was already rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.
4. OTHER MAIN RESULTS AND CONTRIBUTIONS
First, interoperability is a hot topic for both the linguistic annotation community and the whole of Computer Science. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
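The error-reduction idea in hypothesis H.5 above (combining same-level results from tools built on different approaches) can be illustrated with a simple majority vote over POS tags. This is only a minimal sketch: the tag names and the tie-breaking rule (falling back to the first tool's tag) are assumptions for illustration, not OntoTagger's actual decision procedure.

```python
from collections import Counter

def combine_pos(tags):
    """Majority vote over the POS tags proposed by several tools for one
    token; on a tie, fall back to the first tool's tag (an assumption)."""
    top, freq = Counter(tags).most_common(1)[0]
    return top if freq > 1 else tags[0]

# Three hypothetical tools tagging the same token:
print(combine_pos(["Verb", "Verb", "Noun"]))   # majority wins
print(combine_pos(["Noun", "Verb", "Adj"]))    # tie: first tool's tag wins
```

The point of the sketch is that a tagging error made by one tool is outvoted whenever the other tools, built on different technological or theoretical grounds, do not make the same error.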
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies). On the one hand, OntoTag's network of ontologies consists of:
- the Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
- the Linguistic Attribute Ontology (LAO), which likewise includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
- the Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
- the OIO (OntoTag's Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology describing the most elementary vocabulary used in the area of annotation.
On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
- As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
- As for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
- Regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
- The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and those of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies.
Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e., POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e., LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2.
As an immediate result, this implies that (a) this type of combination architecture and configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.
Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate; the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and then used to populate this domain ontology. The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained by the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments are needed to determine how our approach works in a different domain or a different language, such as French, English, or German. In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.
Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels.
Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
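The recall and precision rates reported above for the named entity module are per-file values that are then averaged. As a reminder of how such figures are obtained, here is a minimal sketch; the per-file true-positive, false-positive and false-negative counts below are invented for illustration, since the thesis's actual counts are not reported here.

```python
def precision(tp, fp):
    """Fraction of the emitted annotations that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of the gold-standard annotations that were found."""
    return tp / (tp + fn)

# (true positives, false positives, false negatives) per file, illustrative:
files = [(78, 0, 24), (97, 2, 3), (90, 1, 10)]

avg_p = sum(precision(tp, fp) for tp, fp, _ in files) / len(files)
avg_r = sum(recall(tp, fn) for tp, _, fn in files) / len(files)
print(f"average P = {avg_p:.2%}, average R = {avg_r:.2%}")
```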
And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, of Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
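The two structural ideas running through the abstract above can be sketched minimally in code: (1) a <Unit, Attribute, Value> annotation is isomorphic to an RDF-style <Subject, Predicate, Object> triple, and (2) ontologies (here, toy stand-ins for the LUO, LAO, LVO and OIO) constrain which annotations are well-formed. The namespace, class names and linking table below are invented for illustration; they are not OntoTag's actual vocabulary.

```python
NS = "http://example.org/ontotag#"   # hypothetical namespace

LUO = {"Noun", "Verb"}                                  # toy linguistic units
LAO = {"hasPOS", "hasNumber", "hasTense"}               # toy attributes
LVO = {"Plural": "hasNumber", "Singular": "hasNumber",  # value -> its attribute
       "Past": "hasTense"}
OIO = {"Noun": {"hasPOS", "hasNumber"},                 # unit -> applicable attrs
       "Verb": {"hasPOS", "hasNumber", "hasTense"}}

def to_triple(unit_id, attribute, value):
    """Map a <Unit, Attribute, Value> annotation onto an RDF-like
    <Subject, Predicate, Object> triple in the (assumed) namespace."""
    return (NS + unit_id, NS + attribute, NS + value)

def well_formed(unit_type, attribute, value):
    """Check an annotation against the toy ontologies: the attribute must
    exist, accept the value, and be applicable to the unit type."""
    return (unit_type in LUO and attribute in LAO
            and LVO.get(value) == attribute
            and attribute in OIO.get(unit_type, set()))

# The token "books", annotated as bearing plural number:
print(well_formed("Noun", "hasNumber", "Plural"))   # a valid annotation
print(to_triple("token_3", "hasNumber", "Plural"))  # its triple form
```

The design choice illustrated here is the one the abstract attributes to the OIO: unit, attribute and value vocabularies are kept in separate ontologies, and a fourth body of linking knowledge states how they may combine.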
Resumo:
In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web, with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets were enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modeling solutions for the different types of variants are also provided.
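The claim above, that recording variants eases the linking step, can be sketched in a few lines: a label coming from one dataset is matched against the lexical and terminological variants recorded for the concepts of another. The variant table and concept identifiers below are invented for illustration and do not come from the paper or from the Ontology-Lexica model itself.

```python
# Hypothetical variant table: concept identifier -> its recorded
# lexical/terminological variants (possibly in several languages).
VARIANTS = {
    "myocardial_infarction": {"myocardial infarction", "heart attack",
                              "infarto de miocardio"},
    "hypertension": {"hypertension", "high blood pressure"},
}

def link_candidates(label, variants=VARIANTS):
    """Return the concepts whose recorded variants match the incoming
    label (case-insensitive exact match, a simplifying assumption)."""
    needle = label.strip().lower()
    return {concept for concept, forms in variants.items()
            if needle in {form.lower() for form in forms}}

print(link_candidates("Heart attack"))
```

Without the variant set, a dataset using the label "heart attack" and another using "myocardial infarction" would offer no string-level evidence that their entries should be linked.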
Resumo:
According to the PMBOK (Project Management Body of Knowledge), project management is "the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements" [1]. Project Management has proven to be one of the most important disciplines in determining the success of any project [2][3][4]. Given that many of the activities covered by this discipline are horizontal to virtually any domain, the importance of knowing its concepts and practices becomes even more obvious. Projects in the domain of Software Engineering are no exception to the great influence that Project Management has on their success. The critical role that this discipline plays in the industry is backed by numbers. A report by McKinsey & Co. [4] shows that establishing programs for teaching critical project management skills can improve project performance in terms of time and cost. As an example, the report states: "One defense organization used these programs to train several waves of project managers and leaders who together administered a portfolio of more than 1,000 capital projects ranging in size from $100,000 to $500 million. Managers who successfully completed the training were able to cut costs on most projects by between 20 and 35 percent. Over time, the organization expects savings of about 15 percent of its entire baseline spending." In a white paper by the PMI (Project Management Institute) about the value of project management [5], it is stated that "leading organizations across sectors and geographic borders have been steadily embracing project management as a way to control spending and improve project results."
According to the research conducted by the PMI for that paper, after the economic crisis, "executives discovered that adhering to project management methods and strategies reduced risks, cut costs and improved success rates", all vital to surviving the economic crisis. In every elite company, a proper execution of the project management discipline has become a must. Several members of the software industry have put effort into finding ways of assuring high-quality results from projects; many standards, best practices, methodologies and other resources have been produced by experts from different fields of expertise. In both industry and the academic community, there is continuous research on how to better teach software engineering together with project management [4][6]. For the general practices of Project Management, the PMI produced a guide to the knowledge that any project manager should have in their toolbox to lead any kind of project; this guide is called the PMBOK. On the side of best practices and required knowledge for the Software Engineering discipline, the IEEE (Institute of Electrical and Electronics Engineers) developed the SWEBOK (Software Engineering Body of Knowledge) in collaboration with software industry experts and academic researchers, incorporating into the guide much of the knowledge expected of a software engineer with five years of experience [7]. The SWEBOK also covers management from the perspective of a software project. This thesis is developed to provide guidance to practitioners and members of the academic community about project management applied to software engineering. The approach used in this thesis to obtain useful information for practitioners is to take an industry-approved guide for software engineering professionals, such as the SWEBOK, and compare its content to what is found in the PMBOK.
After comparing the contents of the SWEBOK and the PMBOK, whatever is found missing in the SWEBOK is used to give recommendations on how to enrich the project management skills of a software engineering professional. Recommendations for members of the academic community, on the other hand, are given taking into account the GSwE2009 (Graduate Software Engineering 2009) standard [8]. GSwE2009 is often used as a main reference for software engineering master's programs [9]. The standard is mostly based on the content of the SWEBOK, plus some contents considered to reinforce the education of software engineers. Given the similarities between the SWEBOK and the GSwE2009, the results of comparing the SWEBOK and the PMBOK are also considered valid for enriching what the GSwE2009 proposes. Thus, in the end, the recommendations for practitioners are also useful for the academic community and its strategies for teaching project management in the context of software engineering.
Resumo:
This paper reports the results of the assessment of a range of measures implemented in bus systems in five European cities to improve the use of public transport by increasing its attractiveness and enhancing its image in urban areas. This research was conducted as part of the EBSF project (European Bus System of the Future) from 2008 to 2012. New buses (prototypes), new vehicle and infrastructure technologies, and operational best practices were introduced, all of which were combined in a system approach. The measures were assessed using multicriteria analysis to simultaneously evaluate a number of criteria that need to be aggregated. Each criterion is measured by one or more key performance indicators (KPIs) calculated in two scenarios (a reference scenario, with no measure implemented; and a project scenario, with some measures implemented), in order to evaluate the difference in KPI performance between the two scenarios. The results indicate that the measures produce greater benefits in issues related to bus system productivity and customer satisfaction, with the greatest impact on perceptions of comfort, cleanliness and quality of service, passenger information and environmental issues. The study also reveals that implementing several measures together has greater social utility than very specific and isolated measures.
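The scenario comparison described above can be sketched as follows: each KPI is computed in the reference and the project scenario, the relative changes are taken, and a weighted aggregation yields one multicriteria score. The indicator names, values and weights below are illustrative assumptions, not the EBSF project's actual figures or weighting scheme.

```python
def kpi_change(reference, project):
    """Relative change of each KPI between the reference scenario (no
    measure implemented) and the project scenario (measures implemented)."""
    return {k: (project[k] - reference[k]) / reference[k] for k in reference}

def weighted_score(changes, weights):
    """Aggregate the KPI changes into one multicriteria score; the weights
    are assumed to sum to 1 (a modelling assumption of this sketch)."""
    return sum(weights[k] * changes[k] for k in changes)

reference = {"commercial_speed": 16.0, "satisfaction": 6.5}  # illustrative
project   = {"commercial_speed": 17.2, "satisfaction": 7.3}
weights   = {"commercial_speed": 0.4, "satisfaction": 0.6}

changes = kpi_change(reference, project)
print(f"aggregate improvement: {weighted_score(changes, weights):.1%}")
```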
Resumo:
The current crisis, with its particularly severe configuration in Southern European countries, provides an opportunity to probe the interrelation of economic crunches and the production of space, and also to imagine potential paths of sociospatial emancipation from the dictates of global markets. This introductory chapter offers a preliminary interpretive framework exploring the fundamental role of urban and territorial restructuring in the formation, management and resolution of capitalist crises and, conversely, periods of crisis as key stages in the history of urbanization. I will begin by contextualizing the 2007-08 economic slump, the subsequent global recession and its uneven impact on states and cities within the longue durée of capitalist productions of space, studying the transformation of spatial configurations in previous episodes of economic stagnation. This broader perspective will then be used to analyze the currently emerging formations of austerity urbanism, showing how the practices of crisis management incorporate a strategy for economic and institutional restructuring that eventually impacts on urban policy, and indeed on the production of urban space itself.
Resumo:
The infrastructure works that human beings build to optimize natural resources and meet their needs produce both positive and negative impacts on the environment, since the extraction and transformation of resources in coastal areas affect their balance. Mexico has a large number of natural resources and places that have been favored by nature, where the overload of anthropogenic activities generates environmental impact problems, especially in coastal areas and their surroundings. The aim of this study was to provide information about the main pressures that the system receives and how these affect the proposed integral solutions and the capacity to restore the state of balance in coastal areas. In this research, a methodology for the characterization of coastal zones, based on a systemic model, was developed in order to provide a planning tool for environmentally sustainable projects, integrating a database with the best practices for the planning, conservation and balance of coastal areas; this will facilitate the diagnosis and the evaluation of the adaptive resilience of the system. The systemic model was used as a methodology to organize the vast complexity of the relationships and interconnections that exist between the multiple components, and thus gain the knowledge required for their characterization. Based on the Zachman model, an analysis was performed to detect the strengths and weaknesses of the system, which made it possible to visualize the impact of the risks to which a coastal zone is exposed. The main contributions of this work were the development of the COASTAL CHARACTERIZATION RECORD and the inclusion, in that record, of the estimation of the level of physical, environmental, social, economic and political resilience. The proposed methodology is a contribution that allows integrating the components, relationships and interconnections that exist in the coastal system. It has the advantage of being flexible, as components can be added or discarded according to the particularities of each case study. Additionally, it is proposed not only as a diagnostic tool but also as an aid in the periodic monitoring of the system, as part of an observatory integrated into the National System of Coastal Management that is proposed as a future line of research. As a case study, the characterization of the complex Banco Chinchorro system was carried out, which resulted in the inclusion, in the COASTAL CHARACTERIZATION RECORD, of the lessons learned from the detection of good and bad practices; this, in turn, improved the methodology proposed for the management of the coastal zone.
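A characterization record that estimates the five resilience dimensions mentioned above (physical, environmental, social, economic and political) could be represented and aggregated as follows. This is only a sketch under stated assumptions: the 0-10 scale, the equal-weight average and the example scores are all invented, and do not reproduce the thesis's actual instrument or its Banco Chinchorro results.

```python
from statistics import mean

RESILIENCE_DIMENSIONS = ("physical", "environmental",
                         "social", "economic", "political")

def overall_resilience(record):
    """Equal-weight average of the five resilience scores (assumed 0-10
    scale) stored in a characterization record."""
    return mean(record[d] for d in RESILIENCE_DIMENSIONS)

# Purely illustrative scores for a hypothetical coastal zone:
example_record = {"physical": 6, "environmental": 7,
                  "social": 5, "economic": 4, "political": 5}
print(overall_resilience(example_record))  # 5.4
```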