Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and intelligently will include at least a module for POS tagging. The more an application needs to understand the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging.
This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) in a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based <Subject, Predicate, Object> triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning), and their format or syntax.
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based <Subject, Predicate, Object> triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the <Unit, Attribute, Value> triple structure recommended for annotations in these works (which is isomorphic to the <Subject, Predicate, Object> triple structure used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.

Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp), LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and these had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3.
MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. CONFIRMED by the development of:
- OntoTag's annotation scheme,
- OntoTag's annotation architecture,
- OntoTagger's (XML, RDF, OWL) annotation schemas,
- OntoTagger's configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
H.3 was CONFIRMED by means of the development of OntoTagger's ontology-based configuration:
- Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger);
- Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
- Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even though these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach. CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently. REJECTED on the basis of OntoTagger's experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 was already rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
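Hypothesis H.5 above, reducing a tool's error rate by contrasting its output with that of other tools at the same level, can be sketched as a simple majority vote over POS tags. This is an illustrative toy, not OntoTagger's actual combination logic; the tool names are invented:

```python
from collections import Counter

def combine_pos(tags_per_tool: dict) -> str:
    """Majority vote over the POS tags proposed by several tools;
    on a tie, fall back to the first (most trusted) tool listed."""
    counts = Counter(tags_per_tool.values())
    best, freq = counts.most_common(1)[0]
    if list(counts.values()).count(freq) > 1:  # no clear majority
        return next(iter(tags_per_tool.values()))
    return best

# Two of three hypothetical taggers agree, so VERB wins:
print(combine_pos({"tagger_a": "NOUN", "tagger_b": "VERB", "tagger_c": "VERB"}))
# → VERB
```

In practice a weighted vote (giving more credit to the historically more accurate tool, as a rule-based and a stochastic tagger rarely err on the same units) is the usual refinement of this scheme.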
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies). On the one hand, OntoTag's network of ontologies consists of:
- The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
- The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
- The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
- The OIO (OntoTag's Integration Ontology), which (i) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and (ii) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
- As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
- As for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
- Regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
- The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and also those of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2.
As an immediate result, this implies that (a) this type of combination architecture configurations can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed by humans or else by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and, then, were used to populate this domain ontology. The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should assess how our approach performs in a different domain or a different language, such as French, English, or German. In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to bear a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
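The per-file recall and precision figures reported above can be reproduced from raw true-positive / false-positive / false-negative counts. The counts below are invented for illustration only (the thesis evaluates ten cinema-review pages, whose counts are not given here):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Standard precision and recall from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Invented (tp, fp, fn) counts for two files:
files = [(39, 0, 1), (35, 2, 5)]
pairs = [precision_recall(*f) for f in files]
macro_p = sum(p for p, _ in pairs) / len(pairs)  # macro-averaged precision
macro_r = sum(r for _, r in pairs) / len(pairs)  # macro-averaged recall
print(f"macro P = {macro_p:.2%}, macro R = {macro_r:.2%}")
```

The averages quoted in the summary are macro-averages of this kind, i.e. each file contributes equally regardless of its number of entities.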
And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step forward towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
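The <Unit, Attribute, Value> / <Subject, Predicate, Object> isomorphism that runs through the scheme described above (cf. Sub-goal 1.1) can be illustrated with a minimal sketch. This is not the thesis code; the namespace prefixes (doc:, lao:, lvo:) and the identifiers are invented for illustration:

```python
# Hypothetical sketch: mapping an EAGLES-style <Unit, Attribute, Value>
# annotation onto a Semantic-Web-style <Subject, Predicate, Object> triple.
# The prefixes 'doc:', 'lao:' and 'lvo:' are invented namespace labels.

def to_triple(unit_id: str, attribute: str, value: str) -> tuple:
    """Turn a <Unit, Attribute, Value> annotation into an RDF-like triple."""
    subject = f"doc:{unit_id}"       # the annotated linguistic unit
    predicate = f"lao:{attribute}"   # attribute (cf. the Linguistic Attribute Ontology)
    obj = f"lvo:{value}"             # value (cf. the Linguistic Value Ontology)
    return (subject, predicate, obj)

# The annotation "word_3 has PartOfSpeech = Noun" becomes:
print(to_triple("word_3", "PartOfSpeech", "Noun"))
# → ('doc:word_3', 'lao:PartOfSpeech', 'lvo:Noun')
```

Serialising such triples in RDF(S) or OWL is what makes the resulting annotations directly consumable by standard Semantic Web tooling.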
Abstract:
In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets were enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions that is currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modeling solutions for the different types of variants are also provided.
Abstract:
According to the PMBOK (Project Management Body of Knowledge), project management is the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements [1]. Project Management has proven to be one of the most important disciplines in determining the success of any project [2][3][4]. Given that many of the activities covered by this discipline are horizontal to any kind of domain, the importance of acknowledging its concepts and practices becomes even more obvious. Projects that fall in the domain of Software Engineering are no exception to the great influence of Project Management on their success. The critical role that this discipline plays in the industry can be seen in the numbers. A report by McKinsey & Co [4] shows that establishing programs for the teaching of critical project management skills can improve project performance in time and costs. As an example, the report states: One defense organization used these programs to train several waves of project managers and leaders who together administered a portfolio of more than 1,000 capital projects ranging in size from $100,000 to $500 million. Managers who successfully completed the training were able to cut costs on most projects by between 20 and 35 percent. Over time, the organization expects savings of about 15 percent of its entire baseline spending. In a white paper by the PMI (Project Management Institute) about the value of project management [5], it is stated that: Leading organizations across sectors and geographic borders have been steadily embracing project management as a way to control spending and improve project results.
According to the research made by the PMI for the paper, after the economic crisis, executives discovered that adhering to project management methods and strategies reduced risks, cut costs and improved success rates, all vital to surviving the economic crisis. In every elite company, a proper execution of the project management discipline has become a must. Several members of the software industry have put effort into achieving ways of assuring high-quality results from projects; many standards, best practices, methodologies and other resources have been produced by experts from different fields of expertise. In the industry and the academic community, there is continuous research on how to better teach software engineering together with project management [4][6]. For the general practices of Project Management, the PMI produced a guide to the knowledge that any project manager should have in their toolbox to lead any kind of project; this guide is called the PMBOK. On the side of best practices and required knowledge for the Software Engineering discipline, the IEEE (Institute of Electrical and Electronics Engineers) developed the SWEBOK (Software Engineering Body of Knowledge) in collaboration with software industry experts and academic researchers, incorporating into the guide much of the knowledge needed by a software engineer with five years of experience [7]. The SWEBOK also covers management from the perspective of a software project. This thesis is developed to provide guidance to practitioners and members of the academic community about project management applied to software engineering. The approach used in this thesis to obtain useful information for practitioners is to take an industry-approved guide for software engineering professionals, such as the SWEBOK, and compare its content to what is found in the PMBOK.
After comparing the contents of the SWEBOK and the PMBOK, what is found missing in the SWEBOK is used to give recommendations on how to enrich the project management skills of a software engineering professional. Recommendations for members of the academic community, on the other hand, are given taking into account the GSwE2009 (Graduate Software Engineering 2009) standard [8]. GSwE2009 is often used as a main reference for software engineering master programs [9]. The standard is mostly based on the content of the SWEBOK, plus some contents that are considered to reinforce the education of software engineering. Given the similarities between the SWEBOK and the GSwE2009, the results of comparing the SWEBOK and the PMBOK are also considered valid to enrich what the GSwE2009 proposes. So, in the end, the recommendations for practitioners are also useful for the academic community and its strategies to teach project management in the context of software engineering.
Abstract:
This paper reports the results of the assessment of a range of measures implemented in bus systems in five European cities to improve the use of public transport by increasing its attractiveness and enhancing its image in urban areas. This research was conducted as part of the EBSF project (European Bus System of the Future) from 2008 to 2012. New buses (prototypes), new vehicle and infrastructure technologies, and operational best practices were introduced, all combined in a system approach. The measures were assessed using multicriteria analysis, which simultaneously evaluates a number of criteria that must then be aggregated. Each criterion is measured by one or more key performance indicators (KPIs) calculated in two scenarios (a reference scenario, with no measure implemented, and a project scenario, with some measures implemented), in order to evaluate the difference in KPI performance between the reference and project scenarios. The results indicate that the measures produce the greatest benefit in issues related to bus system productivity and customer satisfaction, with the largest impact on perceptions of comfort, cleanliness and quality of service, passenger information, and environmental issues. The study also reveals that implementing several measures together has greater social utility than very specific and isolated measures.
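The two-scenario KPI comparison described above can be sketched as follows. The criterion names, weights and values are illustrative assumptions for this sketch; the EBSF project's actual indicators and aggregation scheme are not reproduced here:

```python
# Minimal sketch of a multicriteria assessment: each criterion has a KPI
# measured in a reference scenario (no measure implemented) and a project
# scenario (measures implemented); per-KPI relative changes are aggregated
# into one weighted score. All names, weights and values are illustrative.

def kpi_change(reference: float, project: float) -> float:
    """Relative change of a KPI between the two scenarios."""
    return (project - reference) / reference

criteria = {
    # criterion: (weight, reference-scenario value, project-scenario value)
    "commercial_speed_kmh":   (0.4, 16.0, 18.0),
    "passenger_satisfaction": (0.6, 3.2, 3.6),
}

score = sum(w * kpi_change(ref, proj) for w, ref, proj in criteria.values())
print(round(score, 3))
```

A positive score indicates that, on balance, the project scenario outperforms the reference scenario across the weighted criteria; KPIs where lower is better (e.g. emissions) would need their sign flipped before aggregation.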
Abstract:
The current crisis, with its particularly severe configuration in Southern European countries, provides an opportunity to probe the interrelation of economic crunches and the production of space, and also to imagine potential paths of sociospatial emancipation from the dictates of global markets. This introductory chapter offers a preliminary interpretive framework exploring the fundamental role of urban and territorial restructuring in the formation, management and resolution of capitalist crises and, conversely, periods of crisis as key stages in the history of urbanization. I begin by contextualizing the 2007-08 economic slump, the subsequent global recession and its uneven impact on states and cities within the longue durée of capitalist productions of space, studying the transformation of spatial configurations in previous episodes of economic stagnation. This broader perspective is then used to analyze currently emerging formations of austerity urbanism, showing how the practices of crisis management incorporate a strategy for economic and institutional restructuring that eventually impacts urban policy, and indeed the production of urban space itself.
Abstract:
The infrastructure works that human beings build to optimize natural resources and meet their needs produce both positive and negative impacts on the environment, since the extraction and transformation of resources in coastal areas affect their balance. Mexico has a great number of natural resources and places favored by nature, where the overload of anthropogenic activities generates problems of environmental impact, especially in coastal zones and their surroundings. The aim of this work was to provide information about the main pressures the system receives and how this affects proposed integral solutions and the capacity to restore the state of balance in coastal zones. In this research, a methodology for the characterization of coastal zones was developed, based on a systemic model, in order to provide a planning tool for environmentally sustainable projects, integrating a database of best practices for the planning, conservation and balance of coastal areas; this will facilitate the diagnosis and evaluation of the adaptive resilience of the system.
The systemic model was used as a methodology to organize the vast complexity of the interrelations and interconnections among the system's multiple components, and thus gain the knowledge needed for its characterization. Based on the Zachman model, an analysis was performed to detect the strengths and weaknesses of the system, making it possible to visualize the impact of the risks to which a coastal zone is exposed. The main contributions of this work were the development of the COASTAL ZONE CHARACTERIZATION RECORD and the inclusion, in that record, of an estimate of physical, environmental, social, economic and political resilience. The proposed methodology is a contribution that integrates the components, relationships and interconnections existing in the coastal system. The methodology has the advantage of being flexible: components can be added or discarded according to the particularities of each case study. Additionally, it is proposed not only as a diagnostic tool but also as an aid in the periodic monitoring of the system, as part of an observatory integrated into the National System of Coastal Management proposed as a future line of research. As a case study, the characterization of the complex Banco Chinchorro system was carried out, resulting in the inclusion, in the COASTAL ZONE CHARACTERIZATION RECORD, of the lessons learned from the detection of good and bad practices, which in turn improved the methodology proposed for the management of the coastal zone.
Abstract:
Aneuploidy, or chromosome imbalance, is the most massive genetic abnormality of cancer cells. It was considered the cause of cancer when it was discovered more than 100 years ago. Since the discovery of the gene, the aneuploidy hypothesis has lost ground to the hypothesis that mutation of cellular genes causes cancer. According to this hypothesis, cancers are diploid and aneuploidy is secondary or nonessential. Here we reexamine the aneuploidy hypothesis in view of the facts that nearly all solid cancers are aneuploid, that many carcinogens are nongenotoxic, and that mutated genes from cancer cells do not transform diploid human or animal cells. By regrouping the gene pool, as in speciation, aneuploidy inevitably alters many genetic programs. This genetic revolution can explain the numerous unique properties of cancer cells, such as invasiveness, dedifferentiation, distinct morphology, and specific surface antigens, much better than gene mutation, which is limited by the conservation of the existing chromosome structure. To determine whether aneuploidy is a cause or a consequence of transformation, we analyzed the chromosomes of Chinese hamster embryo (CHE) cells transformed in vitro. This system allows (i) detection of transformation within 2 months, and thus about 5 months sooner than carcinogenesis, and (ii) the generation of many more transformants per cost than carcinogenesis. To minimize mutation of cellular genes, we used nongenotoxic carcinogens. It was found that 44 out of 44 colonies of CHE cells transformed by benzo[a]pyrene, methylcholanthrene, dimethylbenzanthracene, or colcemid, or transformed spontaneously, were between 50 and 100% aneuploid. Thus, aneuploidy originated with transformation. Two of two chemically transformed colonies tested were tumorigenic 2 months after inoculation into hamsters.
The cells of transformed colonies were heterogeneous in chromosome number, consistent with the hypothesis that aneuploidy can perpetually destabilize the chromosome number because it unbalances the elements of the mitotic apparatus. Considering that all 44 transformed colonies analyzed were aneuploid, and given the early association between aneuploidy, transformation, and tumorigenicity, we conclude that aneuploidy is the cause rather than a consequence of transformation.
Abstract:
A comparison was made of the competence for neoplastic transformation in three different sublines of NIH 3T3 cells and multiple clonal derivatives of each. Over 90% of the neoplastic foci produced by an uncloned transformed (t-SA) subline on a confluent background of nontransformed cells were of the dense, multilayered type, but about half of the t-SA clones produced only light foci in assays without background. This asymmetry apparently arose from the failure of the light focus formers to register on a background of nontransformed cells. Comparison was made of the capacity for confluence-mediated transformation between uncloned parental cultures and their clonal derivatives by using two nontransformed sublines, one of which was highly sensitive and the other relatively refractory to confluence-mediated transformation. Transformation was more frequent in the clones than in the uncloned parental cultures for both sublines. This was dramatically so in the refractory subline, where the uncloned culture showed no overt sign of transformation in serially repeated assays but increasing numbers of its clones exhibited progressive transformation. The reason for the greater susceptibility of the pure clones is apparently the suppression of transformation among the diverse membership that makes up the uncloned parental culture. Progressive selection toward increasing degrees of transformation in confluent cultures plays a major role in the development of dense focus formers, but direct induction by the constraint of confluence may contribute by heritably damaging cells. In view of our finding of increased susceptibility to transformation in clonal versus uncloned populations, expansion of some clones at the expense of others during the aging process would contribute to the marked increase of cancer with age.
Abstract:
Jaagsiekte sheep retrovirus (JSRV) is the causative agent of ovine pulmonary carcinoma, a unique animal model for human bronchioalveolar carcinoma. We previously isolated a JSRV proviral clone and showed that it was both infectious and oncogenic. Thus JSRV is necessary and sufficient for the development of ovine pulmonary carcinoma, but no data are available on the mechanisms of transformation. Inspection of the JSRV genome reveals standard retroviral genes, but no evidence for a viral oncogene. However, an alternate ORF in pol (orf-x) might be a candidate for a transforming gene. We tested whether the JSRV genome might encode a transforming gene by transfecting an expression plasmid for JSRV [pCMVJS21, driven by the cytomegalovirus (CMV) immediate early promoter] into mouse NIH 3T3 cells. Foci of transformed cells appeared in the transfected cultures 2 to 3 weeks posttransfection; cloned transformants showed anchorage independence for growth, and they expressed JSRV RNA. These results indicate that the JSRV genome contains information with direct transforming potential for NIH 3T3 cells. Transfection of a mutated version of pCMVJS21 in which the orf-x protein was terminated by two stop codons also gave transformed foci. Thus, orf-x was eliminated as the candidate transforming gene. In addition, another derivative of pCMVJS21 (pCMVJS21GP) in which the gag, pol (and orf-x) coding sequences were deleted also gave transformed foci. These results indicate that the envelope gene carries the transforming potential. This is an unusual example of a native retroviral structural protein with transforming potential.
Abstract:
Prolonged incubation of NIH 3T3 cells under the growth constraint of confluence results in a persistent impairment of proliferation when the cells are subcultured at low density and a greatly increased probability of neoplastic transformation in assays for transformation. These properties, along with the large accumulation of age pigment bodies in the confluent cells, are cardinal cellular characteristics of aging in organisms and validate the system as a model of cellular aging. Two cultures labeled alpha and beta were obtained after prolonged confluence; both were dominated by cells that were both slowed in growth at low population density and enhanced in growth capacity at high density, a marker of neoplastic transformation. An experiment was designed to study the reversibility of these age-related properties by serial subculture at low density of the two uncloned cultures and their progeny clones derived from assuredly single cells. Both uncloned cultures had many transformed cells and a reduced growth rate on subculture. Serial subculture resulted in a gradual increase in growth rates of both populations, but a reversal of transformation only in the alpha population. The clones originating from both populations varied in the degree of growth impairment and neoplastic transformation. None of the alpha clones increased in growth rate on low density passage nor did the transformed clones among them revert to normal growth behavior. The fastest growing beta clone was originally slower than the control clone, but caught up to it after four weekly subcultures. The other beta clones retained their reduced growth rates. Four of the five beta clones, including the fastest grower, were transformed, and none reverted on subculture. We conclude that the apparent reversal of impaired growth and transformation in the uncloned parental alpha population resulted from the selective growth at low density of fast growing nontransformed clones. 
The reversal of impaired growth in the uncloned parental beta population was also the result of selective growth of fast growing clones, but in this case they were highly transformed so no apparent reversal of transformation occurred. The clonal results indicate that neither the impaired growth nor the neoplastic transformation found in aging cells is reversible. We discuss the possible contribution of epigenetic and genetic processes to these irreversible changes.
Abstract:
Challenges in treating children with an autism spectrum disorder (ASD) in medical settings are identified and discussed. Although research supports interventions for children with ASD, including positive reinforcement, environmental modification, and visual supports and systems, limited research exists on the efficacy of these interventions in medical environments and with specific procedures. Based on the available intervention literature, this project proposes a picture-schedule reinforcement system for use during blood draw procedures for children with ASD and diabetes. Future efforts should include increased education for medical providers and health professionals as psychological interventions continue to inform best practices in care for children with ASD in medical settings.
Abstract:
Since the end of the 20th century and the beginning of the 21st, studies have analyzed the high rate of failure or dissatisfaction with Lean programs. This rate has proven exceedingly high, varying between 66% and 90%. The effects of this failure include wasted time, money and resources and, perhaps worst of all, the spread of fear among change agents about undertaking new change initiatives. Studies point to the lack of alignment between such projects and organizational culture as one of the fundamental causes of this failure. Starting from this research theme, this theoretical essay can be characterized as a qualitative approach to analyzing the problem, of a basic research nature, seeking to generate new knowledge useful to organizations, with no practical application foreseen at this first stage of the research. The source of evidence supporting the proposed model was a review of the case studies found in the literature, using both a Systematic Literature Review (SLR) and an exploratory review, so as to capture the "state of the art" in the field of study. The theoretical foundation of the work is based on the literature of four major fields of study: (i) Strategy, (ii) Lean, (iii) Organizational Culture and (iv) Change Management. The SLR focuses on the intersections of these major fields, gathering 190 international works. The exploratory review, in turn, draws on some of the main references in these fields, such as Edgar Schein, John Kotter, Kim Cameron, Robert Quinn and David Mann, among others. In this way, this work studied the influence of organizational culture on transformation projects and, breaking with current theory, built and proposed a theoretical framework, entitled the "Sistemática de Transformação" (Transformation Framework, or simply "Sistemática T"), which proposes alignment among three dimensions: Strategy, Transformation Project and Organizational Culture.
By using this framework, change agents are expected to achieve more effective planning of the process of diagnosing, evaluating and managing organizational culture, aligned with the Strategy and with the organization's Transformation Project, with emphasis on Lean programs. The proposal and use of this framework can both foster academic discussion on the topic in the field of Operations Management and provide support for more effective practical applications.
Abstract:
As world communication, technology, and trade become increasingly integrated through globalization, multinational corporations seek employees with global leadership experience and skills. However, the demand for these skills currently outweighs the supply. Given the rarity of globally ready leaders, global competency development should be emphasized in higher education programs. The reality, however, is that university graduate programs are often outdated and focus mostly on cognitive learning. Global leadership competence requires moving beyond the cognitive domain of learning to create socially responsible and culturally connected global leaders. This requires attention to development methods; however, limited research on global leadership development methods has been conducted. A new conceptual model, the global leadership development ecosystem, was introduced in this study to guide the design and evaluation of global leadership development programs. It was based on three theories of learning and was divided into four development methodologies. This study quantitatively tested the model and used it as a framework for an in-depth examination of the design of one International MBA program. The program was first benchmarked, by means of a qualitative best-practices analysis, against the top-ranking IMBA programs in the world. Qualitative data from students, faculty, administrators, and staff were then examined using descriptive and focused data coding. Quantitative data analysis, using PASW Statistics software and a hierarchical regression, showed the individual effect of each of the four development methods, as well as their combined effect, on student scores on a global leadership assessment. The analysis revealed that each methodology played a distinct and important role in developing different competencies of global leadership. It also confirmed the critical link between self-efficacy and global leadership development.
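The hierarchical regression mentioned above enters predictors in blocks and inspects the gain in R^2 at each step. A minimal sketch, using synthetic data rather than the study's dataset (the four predictors and their coefficients below are illustrative assumptions):

```python
# Illustrative hierarchical regression: four hypothetical development-method
# predictors are entered one block at a time, and the increase in R^2 at each
# step shows that method's incremental contribution to the outcome score.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Four hypothetical method-exposure predictors (standard normal).
X = rng.normal(size=(n, 4))
# Synthetic outcome: every method contributes, plus noise.
y = X @ np.array([0.5, 0.4, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

def r_squared(X_block: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit with an intercept on the given predictor block."""
    A = np.column_stack([np.ones(len(y)), X_block])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

previous = 0.0
for k in range(1, 5):
    r2 = r_squared(X[:, :k], y)
    print(f"step {k}: R^2 = {r2:.3f}, gain = {r2 - previous:.3f}")
    previous = r2
```

Each step's gain is the variance uniquely explained by the newly entered block given the blocks already in the model, which is how the individual and combined effects of the four methods would be separated.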
Abstract:
Paper submitted to the Sixth International Conference on Social Science Methodology, Amsterdam, The Netherlands, August 16-20, 2004.