13 results for Good Lives Model-Comprehensive (GLM-C)

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

In spite of the increasing presence of Semantic Web facilities, only a limited amount of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web provide semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc and are complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, the semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
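A minimal sketch of the declarative idea behind such a framework is shown below, assuming a hypothetical scraping vocabulary, selectors and example HTML (they are not the ontology actually proposed in the article): a small RDF-style specification states what to extract, and a generic engine performs the syntactic scraping and emits RDF triples.

```python
# Hypothetical illustration: a declarative scraper spec (selectors bound to RDF properties)
# plus a generic engine that applies it and returns an RDF graph.
from bs4 import BeautifulSoup
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/scraping#")   # hypothetical vocabulary

SCRAPER_SPEC = {
    "fragment_selector": "div.post",   # which HTML fragments become resources
    "fragment_class": EX.BlogPost,
    "fields": {                        # property -> selector inside each fragment
        EX.title: "h2.title",
        EX.author: "span.author",
    },
}

def scrape(html: str, base_uri: str, spec: dict) -> Graph:
    """Apply a declarative scraper spec to an HTML document and return RDF triples."""
    g = Graph()
    soup = BeautifulSoup(html, "html.parser")
    for i, node in enumerate(soup.select(spec["fragment_selector"])):
        subject = URIRef(f"{base_uri}/fragment/{i}")
        g.add((subject, RDF.type, spec["fragment_class"]))
        for prop, selector in spec["fields"].items():
            target = node.select_one(selector)
            if target is not None:
                g.add((subject, prop, Literal(target.get_text(strip=True))))
    return g

if __name__ == "__main__":
    html = """<div class="post"><h2 class="title">Hello</h2>
              <span class="author">Ada</span></div>"""
    print(scrape(html, "http://example.org/data", SCRAPER_SPEC).serialize(format="turtle"))
```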

Relevance:

100.00%

Publisher:

Abstract:

The CENTURY soil organic matter model was adapted to the modular format of DSSAT (Decision Support System for Agrotechnology Transfer) in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate material (SOC2), and largely inert and stable material (SOC3) (Jones et al., 2003). At the beginning of the simulation, the CENTURY model needs a value of SOC3 per soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. The model's performance when simulating SOC and nitrogen (N) dynamics strongly depends on this initialization process. The common methods (e.g. Basso et al., 2011) to initialize SOC pools deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC, and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method to initialize the SOC pools in the DSSAT-CENTURY model from apparent soil N mineralization (Napmin) field measurements by using automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
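The following sketch illustrates the inverse-calibration idea under stated assumptions: `run_century_napmin` is a hypothetical stand-in for a full DSSAT-CENTURY simulation, and simulated annealing (here SciPy's `dual_annealing`) searches for the stable-pool fraction that best reproduces measured Napmin. It illustrates the method; it is not the study's code.

```python
# Inverse calibration of the stable SOC pool against measured apparent N mineralization.
import numpy as np
from scipy.optimize import dual_annealing

observed_napmin = np.array([12.0, 15.5, 18.2, 20.1])  # kg N/ha, hypothetical measurements

def run_century_napmin(soc3_fraction: float) -> np.ndarray:
    """Placeholder for a DSSAT-CENTURY run returning simulated Napmin per sampling date."""
    # Toy response: more stable C (SOC3) means less decomposable material, so less N release.
    return np.array([14.0, 17.0, 20.0, 22.0]) * (1.2 - soc3_fraction)

def rmse(x: np.ndarray) -> float:
    sim = run_century_napmin(float(x[0]))
    return float(np.sqrt(np.mean((sim - observed_napmin) ** 2)))

# Search the SOC3 fraction (share of total SOC assigned to the stable pool) in [0.3, 0.9].
result = dual_annealing(rmse, bounds=[(0.3, 0.9)], seed=1)
print(f"calibrated SOC3 fraction: {result.x[0]:.3f}, RMSE: {result.fun:.2f} kg N/ha")
```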

Relevance:

100.00%

Publisher:

Abstract:

Pathogens have evolved strategies to survive in their environment, infect their hosts, multiply inside them, and be transmitted to other hosts. All of these components form part of pathogen fitness and allow pathogens to cause infectious diseases in humans, animals, and plants. The infection process has negative effects on host fitness, and the severity of these effects depends on the virulence of the pathogen. In turn, hosts have developed response mechanisms against pathogens, such as resistance, which reduces pathogen multiplication, or tolerance, which decreases the negative effects of infection. These host responses to infection have negative effects on pathogen fitness, acting as a selective pressure on the pathogen population. If the selective pressure on the pathogen varies according to the host, it is predicted that a single pathogen cannot increase its fitness in different hosts; it will be more adapted to one host and less to another, narrowing its host range. This means that the adaptation of a pathogen to different hosts will often be limited by trade-offs in different components of pathogen fitness. To date, evidence of such trade-offs in pathogen adaptation to different hosts is not abundant as far as plant viruses are concerned. In recent decades, an increase has been described in the incidence of new or previously described viruses that cause infectious diseases of greater severity and/or different pathogenicity, such as the infection of previously resistant hosts. This is known as the emergence of infectious diseases and is caused by emerging pathogens that come from a reservoir host to which they are adapted. The hosts that act as reservoirs can be wild plants, which often show few or very mild symptoms despite being infected with different viruses and which are found in ecosystems with little or no human intervention. Studying the ecological and biological factors that act in the process of infectious disease emergence helps to understand its causes in order to devise prevention and control strategies. Viruses are the main pathogens responsible for the emergence of infectious diseases in humans, animals and plants, and they are a good model for understanding emergence processes. Moreover, plants, unlike animals, are hosts that are easy to manipulate, and the viruses that infect them are safer for laboratory work than human and animal viruses, which are other models used in research. Therefore, the plant-virus interaction is a good experimental model for studying the emergence of infectious diseases. The study of virus emergence in plants is also of particular interest because viruses can cause economic losses in agricultural crops and threaten the durability of resistance in bred plants, which poses a risk to food security with important impacts on society, comparable to the infectious diseases of humans and domestic animals.
To become an emerging pathogen, a virus must first jump from its reservoir host to a new host, then adapt to the new host until the infection within the new host population becomes independent of the reservoir, and finally change its epidemiology. In this study, the emergence of Pepino mosaic virus (PepMV) in tomato was chosen as an experimental model to study the emergence of a virus in a new host species, and infections of different genotypes of Pepper mild mottle virus (PMMoV) in pepper were chosen to study the emergence of a virus that increases its pathogenicity in a previously resistant host. The study of both pathosystems broadened our knowledge of the ecological and evolutionary factors involved in the first two phases of the emergence of viral diseases in plants. PepMV is an emerging pathogen in tomato (Solanum lycopersicum) crops worldwide. It was first described in 1980 infecting pepino (Solanum muricatum L.) in Peru and, almost a decade later, it caused an epidemic in tomato crops in the Netherlands. The introduction into Europe was possibly through infected tomato seeds from Peru, and since then new isolates have been described that group into four strains (EU, LP, CH2, US1) infecting tomato. However, the process of its emergence from pepino to tomato is a question of great interest, because it is one of the most recent emerging viruses and is economically important. To study the emergence of PepMV in tomato, wild tomato samples were collected in southern Peru, the presence and diversity of PepMV isolates were analysed, and the isolates were characterized both biologically (host range) and genetically (genomic sequences). PMMoV isolates that have acquired the ability to infect previously resistant pepper (Capsicum spp.) varieties have been described in different regions of the world; this is a typical case of virus emergence involving an extension of the host range and an increase in pathogenicity. It is of great interest because it compromises the use of resistant varieties obtained by breeding, which is the most effective means of virus control available. To study the emergence of highly pathogenic PMMoV genotypes, biological clones of PMMoV derived from field isolates of known pathogenicity (P1,2) were analysed, and their pathogenicity was increased by mutagenesis (P1,2,3 and P1,2,3,4), introducing the mutations described as responsible for these phenotypes. Whether the increase in pathogenicity entails a trade-off in the fitness of the PMMoV genotypes was then analysed. To this end, different components of virus fitness were evaluated in different hosts carrying different resistance alleles. The results of this thesis show: i) the potential of wild plants as reservoirs of emerging viruses, in this case wild tomatoes from southern Peru, as well as the existence in these plants of PepMV isolates of a new, previously undescribed strain that we call PES; ii) that an extension of the host range is not a strict requirement for the emergence of plant viruses; iii) that adaptation is the most likely mechanism in the emergence of PepMV in cultivated tomato; and iv) that increased pathogenicity has a pleiotropic effect on several fitness components, and that the sign and magnitude of this effect depend on the virus genotype, the host, and the interaction of both.

Relevance:

100.00%

Publisher:

Abstract:

Wireless communications have deeply transformed the way people communicate on a daily basis and are undoubtedly one of the most rapidly evolving technologies of our time. This rapid growth poses huge challenges for the underlying technology, due, among other reasons, to the high capacity demanded by new wireless services. Multiple Input Multiple Output (MIMO) systems have attracted considerable interest as a means to enhance overall system performance and thus, to some extent, satisfy these new demanding requirements. Indeed, the significant role of this technology in current international standardization efforts highlights its usefulness. MIMO systems take advantage of the spatial degrees of freedom available through the multipath environment to improve communication performance with remarkable spectral efficiency. In order to achieve this performance improvement, spatial and pattern diversity have traditionally been used to decrease the correlation between antenna elements, since low correlation is a necessary, though not sufficient, condition for this improvement. Taking as a reference, or starting point, the techniques used to achieve pattern diversity, this Ph.D. thesis arises from the pursuit of pattern diversity and/or spatial multiplexing capabilities through the multimode behaviour of microstrip antennas, and it proposes an original quasi-analytical model for the analysis and design of reconfigurable, multimode, multiport microstrip antennas. This novel approach in this field, instead of resorting to full-wave simulations with commercial tools as is done in the existing publications, significantly reduces the overall analysis and design effort, in the latter case through general design guidelines.
In order to achieve this goal, and after a review of the main MIMO concepts that will be used later, attention is focused on finding, implementing and verifying the correctness and accuracy of a base analytical model onto which the necessary enhancements can be added to obtain the sought features of the proposed quasi-analytical model. Subsequently, starting from the selected base analytical model, the pattern diversity and spatial multiplexing performance capabilities provided by the multimode behaviour of unloaded microstrip patch antennas are explored in depth in different multipath environments. Since each cavity mode has its own resonant frequency, ways must be found to shift the resonant frequency of each mode used so that they all lie in the same frequency band, while keeping each mode as independent as possible. This can be accomplished by appropriately loading the cavity with reactive loads, or by altering the geometry of the radiating patch. Consequently, the focus is then set on the design, implementation and verification of a quasi-analytical model for the analysis of loaded, multimode, multiport microstrip patch antennas that makes this task possible, which is one of the main contributions of this Ph.D. thesis. Finally, based on the knowledge acquired through the quasi-analytical model, general guidelines for the design of multiport, multimode and reconfigurable microstrip antennas for MIMO systems are provided and applied, with the aim of improving their pattern diversity and/or capacity by means of the multimode behaviour of microstrip patch antennas. It should be highlighted that the work presented in this thesis has given rise to a publication in an international technical journal with a high impact factor. Likewise, the work has also been presented at some of the most important international conferences in the antenna field.
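As textbook background for the multimode behaviour discussed above (not a result of the thesis itself), the cavity model gives the resonant frequency of the TM_mn0 mode of a rectangular patch of length L and width W on a substrate of relative permittivity ε_r, which shows why each mode resonates at a different frequency and why reshaping or reactively loading the cavity shifts the modes relative to one another:

```latex
f_{mn} \approx \frac{c}{2\pi\sqrt{\varepsilon_r}}
        \sqrt{\left(\frac{m\pi}{L}\right)^{2} + \left(\frac{n\pi}{W}\right)^{2}}
```

where c is the speed of light in vacuum and fringing fields are neglected.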

Relevance:

50.00%

Publisher:

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e.
standardisation) of their tags (both their representation and their meaning) and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives could be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet. Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
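As an illustration of the kind of ontology-based triple annotation described in Goal 2, the sketch below annotates a single word with invented namespace, class and property names; they are not OntoTag's actual LUO/LAO/LVO terms.

```python
# Hypothetical example: one token of a web page annotated as a linguistic unit whose
# attributes and values are ontological terms, yielding a small RDF graph.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONT = Namespace("http://example.org/ontotag-like#")   # invented annotation vocabulary
DOC = Namespace("http://example.org/document/1#")

g = Graph()
g.bind("ont", ONT)

token = DOC.token_3                            # the third token of some annotated page
g.add((token, RDF.type, ONT.Word))             # linguistic unit (morphosyntactic level)
g.add((token, ONT.hasForm, Literal("houses")))
g.add((token, ONT.hasPOS, ONT.CommonNoun))     # attribute bound to an ontological value
g.add((token, ONT.hasNumber, ONT.Plural))
g.add((token, ONT.hasLemma, Literal("house")))

print(g.serialize(format="turtle"))
```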
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels. Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely • Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp), • LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), • Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and • EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section). Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared level results. In particular, all the tools selected perform morphosyntactic annotations and they had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating the morphosyntactic level, as in the previous sub-goal). 
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the named entities annotated to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. • CONFIRMED by the development of: o OntoTag’s annotation scheme, o OntoTag’s annotation architecture, o OntoTagger’s (XML, RDF, OWL) annotation schemas, o OntoTagger’s configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. • CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. • H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level. o Integration of morphosyntactic, syntactic and semantic annotations. H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). • CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools, operating at the same level. However, these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency vs. HPS-grammar-based, for instance) approach. • CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. • REJECTED: OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. 
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, belonging thereby to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice. Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). • On the one hand, OntoTag’s network of ontologies consists of − the Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; − the Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; − the Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; − and the OIO (OntoTag’s Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation. • On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee: − As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; − As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; − Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead; − The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1.
OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2. As an immediate result, this implies that a) this type of combination architecture configuration can be applied in order to improve significantly the accuracy of linguistic annotations; and b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology. • The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. • These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should explore how our approach works in a different domain or a different language, such as French, English, or German. • In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step forward towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
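A toy sketch of the combination idea behind hypothesis H.5 (not OntoTagger's actual merging logic): once the tags produced by several tools have been standardised, a simple per-token majority vote can outperform each individual tool when their errors are reasonably independent.

```python
# Majority-vote combination of standardised POS annotations from several tools.
from collections import Counter
from typing import Dict, List

def combine_pos(annotations: Dict[str, List[str]]) -> List[str]:
    """Per-token majority vote over the standardised tags produced by several tools."""
    tools = list(annotations.values())
    assert all(len(t) == len(tools[0]) for t in tools), "tools must tag the same tokens"
    combined = []
    for tags in zip(*tools):
        tag, _count = Counter(tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Hypothetical standardised output of three tools for the sentence "Time flies quickly".
tool_output = {
    "tagger_a": ["NOUN", "VERB", "ADV"],
    "tagger_b": ["NOUN", "NOUN", "ADV"],   # one error
    "tagger_c": ["NOUN", "VERB", "ADV"],
}
print(combine_pos(tool_output))   # ['NOUN', 'VERB', 'ADV']
```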

Relevance:

40.00%

Publisher:

Abstract:

The construction industry, one of the most important in the development of a country, generates unavoidable impacts on the environment. Society's demand for greater respect for the environment is strong and widespread. Therefore, the construction industry needs to reduce the impact it produces. Proper waste management is not enough; a further step in environmental management must be taken, introducing new measures for prevention at source, such as good practices that promote recycling. Following the amendment of the legal framework applicable to Construction and Demolition Waste (C&D waste), important developments have been incorporated into European and international law, aiming to promote a culture of reuse and recycling. This change of mindset, which is progressively taking place in society, allows C&D waste to be regarded no longer as unusable waste but as a reusable material. The main objective of the work presented in this paper is to enhance C&D waste management systems through the development of preventive measures during the construction process. These measures concern all the agents involved in the construction process, as only the personal commitment of all of them can ensure efficient management of the C&D waste generated. Finally, a model based on preventive measures achieves organizational cohesion between the different stages of the construction process, as well as promoting the conservation of raw materials through efficient use and waste minimization. All of this is aimed at achieving a C&D waste management system whose primary goal is zero waste generation.

Relevance:

40.00%

Publisher:

Abstract:

We have recently demonstrated a biosensor based on a lattice of SU8 pillars on a 1 μm SiO2/Si wafer by measuring the vertical reflectivity as a function of wavelength. Biodetection has been proven with the combination of Bovine Serum Albumin (BSA) protein and its antibody (antiBSA). A BSA layer is attached to the pillars; the biorecognition of antiBSA produces a shift in the reflectivity curve that is related to the concentration of antiBSA. A detection limit on the order of 2 ng/ml is achieved for a rhombic lattice of pillars with a lattice parameter (a) of 800 nm, a height (h) of 420 nm and a diameter (d) of 200 nm. These results correlate with calculations using the 3D finite-difference time-domain (FDTD) method. A simplified 2D model is proposed, consisting of a multilayer model in which the pillars are turned into a 420 nm layer with an effective refractive index obtained using a Beam Propagation Method (BPM) algorithm. Results provided by this model are in good agreement with the experimental data, reducing computation time from one day to 15 minutes and providing a fast but accurate tool to optimize the design, maximize sensitivity, and analyze the influence of different variables (diameter, height and lattice parameter). Sensitivity is obtained for a variety of configurations, reaching a limit of detection under 1 ng/ml. The optimum design is chosen not only for its sensitivity but also for its feasibility, from both a fabrication (limited by the aspect ratio and the proximity of the pillars) and a fluidic point of view. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
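A generic sketch of the simplified 2D approach described above, under illustrative assumptions: the pillar lattice is replaced by a single homogeneous layer of effective refractive index on top of the SiO2/Si stack, and the normal-incidence reflectivity is computed with the standard optical transfer-matrix method. The indices and the effective-index value are placeholders, not the values obtained in the paper via BPM.

```python
# Normal-incidence reflectance of a thin-film stack via the transfer-matrix method.
import numpy as np

def reflectance(wavelength_nm, layers, n_in=1.0, n_sub=3.88):
    """layers: list of (refractive_index, thickness_nm) from top (ambient side) down.
    n_in: index of the incidence medium (air); n_sub: substrate index (Si, approx.)."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength_nm
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return float(np.abs(r) ** 2)

# Hypothetical stack: 420 nm effective "pillar" layer (n_eff ~ 1.25) on 1000 nm SiO2 (n ~ 1.46).
stack = [(1.25, 420.0), (1.46, 1000.0)]
for wl in (600, 650, 700, 750, 800):
    print(wl, "nm ->", round(reflectance(wl, stack), 3))
```

The antiBSA-induced shift could then be mimicked by adding one more thin (n, d) layer to the stack and tracking the displacement of a reflectivity minimum.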

Relevance:

40.00%

Publisher:

Abstract:

Solar drying is one of the important processes used for extending the shelf life of agricultural products. Regarding consumer requirements, solar drying should be made more suitable in terms of curtailing total drying time and preserving product quality. Therefore, the objective of this study was to develop a fuzzy logic-based control system, which performs a 'human-operator-like' control approach by using previously developed low-cost model-based sensors. The Fuzzy Logic Toolbox of MATLAB and the Borland C++ Builder tool were used to develop the required control system. An experimental solar dryer, constructed by CONA SOLAR (Austria), was used during the development of the control system. Sensirion sensors were used to characterize the drying air at different positions in the dryer, and the smart sensor SMART-1 was applied in order to include the rate of wood water extraction in the control system (the difference in absolute humidity of the air between the outlet and the inlet of the solar dryer is considered by SMART-1 to be the extracted water). A comprehensive test of different fuzzy control models was performed over a 3-week period, and the data obtained from these experiments were analyzed. The findings of this study suggest that the developed fuzzy logic-based control system is able to tackle the difficulties related to the control of the solar drying process.
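A minimal, toolbox-free sketch of the 'human-operator-like' fuzzy control idea, so the fuzzification, rule evaluation and defuzzification steps are visible. The linguistic variables, membership functions and rules below are invented for illustration; they are not the rule base developed for the CONA SOLAR dryer.

```python
# Mamdani-style fuzzy mapping from water-extraction rate to fan speed (illustrative only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fan_speed(delta_abs_humidity: float) -> float:
    """Map the outlet-inlet absolute-humidity difference (g/kg) to a fan speed (0-100 %)."""
    x = delta_abs_humidity
    # Fuzzify the input: how strongly is the extraction rate LOW / MEDIUM / HIGH?
    low, med, high = tri(x, -1, 0, 2), tri(x, 1, 3, 5), tri(x, 4, 6, 10)
    # Output universe and its fuzzy sets.
    speed = np.linspace(0, 100, 201)
    out_low, out_med, out_high = tri(speed, 0, 20, 40), tri(speed, 30, 50, 70), tri(speed, 60, 80, 100)
    # Invented rule base: LOW extraction -> LOW fan, MEDIUM -> MEDIUM, HIGH -> HIGH.
    aggregated = np.maximum.reduce([
        np.minimum(low, out_low),
        np.minimum(med, out_med),
        np.minimum(high, out_high),
    ])
    # Centroid defuzzification (fall back to mid speed if no rule fires).
    return float(np.sum(speed * aggregated) / np.sum(aggregated)) if aggregated.sum() > 0 else 50.0

for dh in (0.5, 3.0, 6.5):
    print(f"extraction {dh} g/kg -> fan {fan_speed(dh):.1f} %")
```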

Relevance:

40.00%

Publisher:

Abstract:

The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies, which bring semantics to the data being published. These ontologies should be evaluated at different stages, both during their development and at publication time. Publishing, sharing and facilitating the (re)use of the obtained model is as important as correctly modelling the part of the world intended to be captured in the ontology. In this paper, 11 evaluation characteristics concerning publishing, sharing and facilitating reuse are proposed. In particular, 6 good practices and 5 pitfalls are presented, together with their associated detection methods. In addition, a grid-based rating system is generated. Both contributions, the set of evaluation characteristics and the grid system, could be useful to ontologists when reusing existing LD vocabularies or checking the one being built.
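As an illustration of what an automatic detection method for such characteristics can look like, the sketch below runs two generic checks over a published vocabulary: classes without an rdfs:label, and a missing license declaration on the ontology header. These two checks are examples chosen here; they are not necessarily among the paper's 6 good practices and 5 pitfalls.

```python
# Two simple, generic checks over an LD vocabulary using rdflib.
from rdflib import Graph
from rdflib.namespace import DCTERMS, OWL, RDF, RDFS

def check_vocabulary(path_or_url: str) -> dict:
    g = Graph()
    g.parse(path_or_url)   # rdflib guesses the serialization (Turtle, RDF/XML, ...)

    unlabeled = [c for c in g.subjects(RDF.type, OWL.Class)
                 if (c, RDFS.label, None) not in g]
    ontologies = list(g.subjects(RDF.type, OWL.Ontology))
    missing_license = [o for o in ontologies
                       if (o, DCTERMS.license, None) not in g]

    return {
        "classes_without_label": [str(c) for c in unlabeled],
        "ontology_without_license": [str(o) for o in missing_license],
    }

# Usage (hypothetical file name):
# print(check_vocabulary("my_vocabulary.ttl"))
```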

Relevance:

40.00%

Publisher:

Abstract:

In the past few years IT outsourcing has gained a lot of importance in the market; the market for outsourced IT services, for example, keeps growing every year. Now more than ever, organizations increasingly acquire the capabilities they need by obtaining products and services from suppliers, developing fewer and fewer of these capabilities in-house. IT supplier selection is a complex and opaque decision problem, and managers facing it have difficulty framing what needs to be considered in their deliberations. According to a study by the SEI (Software Engineering Institute) [40], 20 to 25 percent of large IT acquisition projects fail within two years and 50 percent fail within five years. Mismanagement, poor requirements definition, the lack of comprehensive evaluations that could be used to identify the best candidates for outsourcing, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are all factors that contribute to project failure. Most of these failures could be avoided if the acquirer learned to understand the decision problem, performed better decision analysis, and exercised good judgment. The main objective of this work is the development of a decision model for IT supplier selection that aims to reduce the number of failures observed in client-supplier relationships, most of which are caused by a poor selection of the supplier by the client. Beyond the problems outlined above, the motivation for this work is the absence of any decision model based on a multi-model approach (a combination of acquisition models and decision methods) for the problem of IT supplier selection. In the case study, nine Spanish companies were analyzed with the IT supplier selection decision model developed in this work; two software products, Expert Choice and D-Sight, were used.
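The decision model itself is not reproduced in this abstract, but the kind of multi-criteria scoring that tools such as Expert Choice and D-Sight support can be illustrated with a minimal weighted-sum sketch; the criteria, weights and supplier scores below are hypothetical placeholders, not data from the nine-company case study.

```python
# Illustrative multi-criteria scoring of candidate IT suppliers.
# Criteria, weights and scores are hypothetical; the actual decision model
# in the thesis combines acquisition models with dedicated decision methods.

CRITERIA_WEIGHTS = {          # weights sum to 1.0
    "technical_capability": 0.35,
    "cost": 0.25,
    "delivery_track_record": 0.20,
    "contract_flexibility": 0.20,
}

SUPPLIERS = {                 # scores normalized to [0, 1] per criterion
    "Supplier A": {"technical_capability": 0.9, "cost": 0.5,
                   "delivery_track_record": 0.8, "contract_flexibility": 0.6},
    "Supplier B": {"technical_capability": 0.7, "cost": 0.9,
                   "delivery_track_record": 0.6, "contract_flexibility": 0.7},
}

def weighted_score(scores):
    """Simple weighted sum across the evaluation criteria."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

if __name__ == "__main__":
    ranking = sorted(SUPPLIERS, key=lambda s: weighted_score(SUPPLIERS[s]), reverse=True)
    for supplier in ranking:
        print(f"{supplier}: {weighted_score(SUPPLIERS[supplier]):.2f}")
```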

Relevância:

40.00% 40.00%

Publicador:

Resumo:

This work addresses heat losses in a CVD reactor for polysilicon production. The contributions to the energy consumption of the so-called Siemens process are evaluated, and a comprehensive model for heat loss is presented. A previously developed model for radiative heat loss is combined with conductive heat loss theory and a new model for convective heat loss. Theoretical calculations are developed and the theoretical energy consumption of the polysilicon deposition process is obtained. The model is validated by comparison with experimental results obtained in a laboratory-scale CVD reactor. Finally, the model is used to calculate the heat consumption of a 36-rod industrial reactor; the energy consumption due to convective heat loss per kilogram of polysilicon produced is calculated to be 22-30 kWh/kg over the course of a deposition process.
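As a hedged, back-of-the-envelope illustration of the convective contribution, the sketch below applies Newton's law of cooling, Q = h*A*(T_rod - T_gas), and divides the resulting power by a deposition rate to obtain kWh per kilogram; every numerical value is an assumed placeholder, not a figure from the paper or from the 36-rod reactor.

```python
# Back-of-the-envelope convective heat loss per kilogram of polysilicon,
# using Newton's law of cooling: Q = h * A * (T_rod - T_gas).
# All numbers below are illustrative placeholders, not values from the paper.

h = 40.0               # convective heat transfer coefficient, W/(m^2 K)
area = 0.5             # exposed rod surface area, m^2
t_rod = 1150.0         # rod surface temperature, degC
t_gas = 400.0          # bulk gas temperature, degC
deposition_rate = 0.8  # polysilicon growth rate, kg/h

q_conv_w = h * area * (t_rod - t_gas)           # convective loss, W
kwh_per_kg = (q_conv_w / 1000.0) / deposition_rate

print(f"Convective loss: {q_conv_w / 1000.0:.1f} kW "
      f"-> {kwh_per_kg:.1f} kWh per kg deposited")
```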

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Martensitic transformation (MT), in a narrow sense, is defined as a change of the crystal structure that forms a coherent phase, or multi-variant domain structures, out of a parent phase with the same composition, through small shuffles or co-operative movements of atoms. Over the past century, MTs have been discovered in materials ranging from steels to shape memory alloys, ceramics and smart materials. They lead to remarkable properties such as high strength, shape memory and superelasticity effects, or ferroic functionalities including piezoelectricity, electro- and magneto-striction, etc. Various theories and models have been developed, in synergy with the development of solid state physics, to understand why MTs generate such rich microstructures and give rise to these intriguing properties.

Among the well-established theories, the Phenomenological Theory of Martensitic Crystallography (PTMC) predicts the habit plane and the orientation relationship between austenite and martensite. The re-interpretation of the PTMC theory within a continuum mechanics framework (CM-PTMC) explains the formation of multi-variant domain structures, while the Landau theory with inertial dynamics unravels the physical origins of precursors and other dynamic behaviors. Crystal lattice dynamics unveils the acoustic softening of the lattice strain waves that leads to the weak first-order displacive transformation. Though differing in statics or dynamics because they originate in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), these theories should be inherently connected and share certain elements within a unified physical perspective. However, the physical connections and distinctions among these theories and models have not been addressed yet, although they are critical to further improving the models of MTs and to developing integrated models of more complex displacive-diffusive coupled transformations.

This thesis therefore started with two objectives. The first was to reveal the physical connections and distinctions among the models of MT by means of detailed theoretical analyses and numerical simulations. The second was to expand the Landau model so that it can handle MTs in polycrystals, displacive-diffusive coupled transformations, and the presence of dislocations.

Starting with a comprehensive review, the physical kernels of the current models of MTs are presented. Their ability to predict MTs is clarified by means of theoretical analyses and simulations of the microstructure evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with an irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and to the microelasticity model in predicting the static features of MTs, but provides a better interpretation of the dynamic behaviors. However, the application of the Landau model to structural materials is limited by its complexity. Thus, the first result of this thesis is the development of a nonlinear Landau model with an irreducible representation of strains and inertial dynamics for polycrystals. The simulations demonstrate that the proposed model is physically consistent with CM-PTMC in statics, and also predicts the classical 'C-shaped' phase diagram of martensitic nucleation modes activated by the combination of quenching temperature and applied stress interplaying with the Landau transformation energy.

Next, the Landau model of MT is integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and short-range diffusion of elements during the MT in steel. The model for displacive-diffusive transformations includes the effects of grain boundary relaxation for heterogeneous nucleation, as well as the spatio-temporal evolution of diffusion potentials and chemical mobilities through coupling with a CALPHAD-type thermo-kinetic calculation engine and database. The model is applied to study the microstructure evolution of polycrystalline carbon steels processed by quenching and partitioning (Q&P) in 2D. The simulated microstructure and composition distribution are compared with the available experimental data. The results show the important role played by the difference in diffusion mobility between austenite and martensite in carbon partitioning in these steels.

Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model to account for the morphological differences between steels and shape memory alloys with the same symmetry breaking. Dislocation nucleation, the formation of 'butterfly' martensite, and the redistribution of carbon after tempering are well represented in 2D simulations of the microstructure evolution of representative steels. These simulations demonstrate that the dislocations account for experimental observations in steels such as rough twin boundaries and retained austenite within martensite. Thus, based on the integrated model and the in-house codes developed in this thesis, a preliminary multi-field, multiscale modeling tool has been built. The new tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase field/Landau models at the mesoscale, and also includes the essentials of crystallography and crystal lattice dynamics at the microscale.
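As a minimal illustration of the Landau approach discussed above, the sketch below relaxes a single scalar order parameter in a 2-4-6 Landau potential, the simplest form that yields a weak first-order transition; the coefficients and the purely relaxational (non-inertial) dynamics are illustrative assumptions and do not reproduce the irreducible-strain, inertial model developed in the thesis.

```python
# Minimal single-order-parameter Landau sketch for a first-order displacive
# transformation: f(eta) = a/2*eta^2 - b/4*eta^4 + c/6*eta^6.
# Coefficients and the purely relaxational dynamics are illustrative only;
# the thesis works with irreducible strain components and inertial dynamics.

a, b, c = 1.0, 3.0, 2.0   # a ~ (T - T0); b, c > 0 give a first-order double well
M = 1.0                   # kinetic coefficient (mobility)
dt = 1e-3                 # time step for the relaxational update

def df_deta(eta):
    """Derivative of the Landau free energy density with respect to eta."""
    return a * eta - b * eta**3 + c * eta**5

eta = 0.8                 # initial perturbation beyond the nucleation barrier
for _ in range(20000):
    eta -= dt * M * df_deta(eta)   # relaxational (TDGL-type) evolution

print(f"Equilibrium order parameter: {eta:.3f}")
```

With these illustrative coefficients the potential has an austenitic minimum at eta = 0 and a martensitic minimum near eta = 1; a perturbation started beyond the barrier relaxes into the martensitic well, while a smaller one falls back to austenite.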

Relevância:

40.00% 40.00%

Publicador:

Resumo:

A new governance system to face the challenges of 21st-century university education in Peru, based on the policy analysis model, arises from observing the effect of competition in markets, the distribution of scarce resources according to productivity and performance, and the inefficient management of universities, since these parameters are changing the criteria of trust and legitimacy of the Peruvian university system. Universities are perceived mainly as public-sector institutions, whereas the services they offer should instead contribute to the modernization of the emerging society and the knowledge economy. University reforms, initiated in the 1980s, have been inspired by successful university organizations that managed to change their governance, and they aim to transform certain bureaucratic institutions into organizations capable of acting as players in the global competition for resources and top talent. In this context, the Peruvian university faces two major challenges: adapting to the new global outlook, and responding better to the demands, needs and expectations of society. A change in the governance system for university higher education would provide an integral solution to these challenges, allowing the university to address the problems of its development and its insertion into global currents.

The methodology used in this research is qualitative; it starts from the analysis of reality as a whole, without reducing it to its constituent parts, interpreting the facts and seeking to understand the variables involved. A policy is proposed for university education in Peru that is permeable to society, shifting the planning approach from a social-reform model to a policy analysis model, in which the Peruvian State acts as the sole party responsible for responding to society as its legal representative, supported by an external and independent body that lays the foundations of practice, as is being done in many university models around the world.

The research has a first, conceptual phase that addresses the historical evolution of universities in Peru, analysing and clarifying the driving forces over time and identifying the main lines that give direction and meaning to changes in the university education landscape. In this phase, the current situation of Peruvian universities is also analysed in order to determine where they stand and whether they are prepared to face the challenges of university education worldwide; to this end, the most prestigious university models in the world are examined. This theoretical framework makes it possible, in a second phase of the research, to lay the scientific foundations of the proposed model: a policy analysis planning model for the Peruvian university system.

The proposed public-sphere model for Peruvian university education bases its strategy on a planning model with a common goal: "To improve the quality of Peruvian university higher education in order to increase the employability and mobility of citizens as well as the international competitiveness of university education in Peru", with lines of action expressed in four specific objectives: 1) competences (generic and specific to the subject areas); 2) approaches to teaching, learning and assessment; 3) academic credits; 4) programme quality. It also sets out the methodological foundations of the policy analysis model used as the policy structure, taking into account its basic characteristics: a) planning from above; b) focus on decision-making; c) separation between expert knowledge and decision; d) the study of results guides the decision-making process.

Finally, a validation phase of the proposed model for Peruvian university higher education is analysed against the progress already made in Peru in higher education, namely the current context of the new University Law No. 30220, enacted on 8 July 2014, the creation of SUNEDU and the reorganization of SINEACE, whose purpose is to address the university crisis through three main axes included in the law, considered as the basis for a reform. First, the State assumes the stewardship of education policies at all educational levels. The second aspect consists of installing a quality-regulation mechanism which, together with the restructuring of the existing ones, should lay the foundations for families and students to have a public guarantee that the service offered, regardless of its particular characteristics, meets a common minimum level of quality. The third aspect is that the law reaffirms the university as a space for the construction of knowledge based on research and comprehensive education; its purposes, structure and organization, forms of graduation, characteristics of the teaching staff, the requirement for general studies, etc., indicate that academic reflection is the articulating centre of university life. This validation has also been confronted with the results of qualitative expert interviews conducted with rectors of public and private universities, rectors who were members of the former ANR, members of organizations such as CONCYTEC, IEP, CNE, CONEAU and ICACIT, and researchers in higher education, in order to analyse the sustainability of the proposed model over time.

The results show that the Peruvian university system can implement a change towards a model of university higher education with an education policy based on a clearly defined common goal, a timetable for achieving it and a set of specific objectives, shifting the policy structure from social reform to a policy analysis model. They also show the various aspects that stakeholders in university higher education should consider if they wish to occupy a space in the future and if they want the Peruvian university to help society forge possible paths: good teaching reflected in research; international students, above all in postgraduate programmes; and research that translates into publications, patents, etc. of worldwide impact and of relevance to society because it contributes to its development, taking shape in work of very diverse kinds promoted together with companies, governments at their various levels, and public or private institutions, so that they provide funding to the university.