

Relevance:

100.00%

Abstract:

To assess the effect of deregulated Ha-ras and bcl-2, individually and in combination, on epidermal keratinocyte homeostasis and during multistep skin carcinogenesis, we generated skin-specific transgenic mice and keratinocyte transfectants constitutively expressing oncogenic Ha-ras and bcl-2 proteins. The deregulated Ha-ras and bcl-2 expression contributing to homeostatic imbalances in the skin had an additive effect on the probability of tumor development. They were also cooperative in incidence, growth, and latency of tumor formation, and they exhibited synergistic cooperation in malignant transformation of benign papillomas. To explain the homeostatic imbalances caused by Ha-ras and bcl-2 overexpression in the skin, we investigated the three major cellular processes of proliferation, cell death, and differentiation. Epidermal expression of Bcl-2 retarded keratinocyte proliferation in the epidermis of neonatal mice compared with control littermates. Constitutive expression of Ha-ras increased keratinocyte proliferation, and co-expression of bcl-2 modestly suppressed the ras-mediated abnormal proliferation of neonatal keratinocytes. Bcl-2 proteins in keratinocytes protected UV-treated cells from apoptotic cell death regardless of oncogenic ras expression, in both non-neoplastic neonatal epidermis and human keratinocyte cell lines. The spontaneous apoptotic index (AI) was also lower in papillomas constitutively expressing bcl-2 than in those that developed in control mice. Ras-overexpressing epidermis, including that of ras/bcl-2 double transgenic mice, had abnormal differentiation patterns compared with controls. Epidermis expressing the oncogenic ras protein showed alterations in both the epidermal distribution and the extent of cytokeratin 14 and involucrin expression. Abnormal expression of the hyperproliferation marker cytokeratin 6 and modest downregulation of cytokeratin 1 were also detected.
Late appearance of filaggrin was another abnormal phenotype of the ras-expressing epidermis. Overexpression of bcl-2 had no effect on epidermal differentiation. Together, these findings suggest that constitutive expression of oncogenic Ha-ras and bcl-2 are important determinants of epidermal proliferation, viability and differentiation. In summary, our results demonstrate that the disruption of epidermal homeostasis by overexpressed ras and bcl-2 predisposes to hyperplastic growth of the epidermis and to papilloma development, and that these proteins, with distinct mechanisms of oncogenesis, are functionally synergistic for malignant transformation in chemically induced skin carcinogenesis.

Relevance:

100.00%

Abstract:

Chemical and isotopic (Nd and Sr) compositions have been determined for 12 Cretaceous basaltic samples (108 Ma old) from Holes 417D and 418A of Legs 51, 52 and 53. We have found that: (1) The chemical compositions are typical of MORB. They do not vary systematically with the stratigraphic positions of the analyzed samples; thus, the chemical evolution is independent of the eruption sequence that occurred at this Cretaceous ridge. (2) REE patterns for all rocks are characterized by a strong LREE depletion with (La/Sm)N = 0.38-0.50; no significant Eu anomalies are found; HREE are nearly flat or slightly depleted towards Yb-Lu and have 12-18 × chondritic abundances. Combined with the results of previous studies, this suggests that no significant temporal or spatial variation in magma chemistry (especially for LIL elements) has occurred in the 'normal' ridge segments over the last 150 Ma. (3) Isotopically, 143Nd/144Nd ratios vary from 0.513026 to 0.513154, corresponding to epsilon-Nd(0) = +7.5 to +10, and they fall in the typical range of MORB. However, these rocks have unexpectedly high 87Sr/86Sr ratios (0.70355-0.70470), which are attributed to seawater-rock interaction. (4) The Nd model ages, Tm(Nd), ranging from 1.53 to 2.47 (average 2.06) AE, suggest that the upper mantle source(s) underwent a large-scale chemical differentiation leading to LREE and other LIL element depletion about 2 AE ago, assuming a simple two-stage model. More realistically, the variation in Tm(Nd) or epsilon-Nd could be derived from mixing of heterogeneous mantle sources that were a consequence of continuous mantle differentiation and continental formation. (5) Because of the low mg values (0.52-0.63), the analyzed basaltic rocks do not represent primary liquids of mantle melting. The variations in La/Sm ratios and TiO2 are not compatible with a model in which all rocks are genetically related by simple fractional crystallization.
Rather, it is proposed that the basaltic rocks might have been derived from a heterogeneous upper mantle source, with or without later magmatic mixing, followed by some shallow-level fractionations.
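The epsilon-Nd(0) values quoted in (3) follow from the standard normalization of the measured 143Nd/144Nd ratios to the chondritic reference. A minimal sketch, assuming the commonly used present-day CHUR ratio of 0.512638 (the paper's exact reference value is not stated in this summary):

```python
# epsilon-Nd(0) expresses a measured 143Nd/144Nd ratio as a deviation, in
# parts per 10^4, from CHUR (the chondritic uniform reservoir). The CHUR
# value below is the commonly used present-day reference and is an
# assumption; the paper may have used a slightly different one.
CHUR_143ND_144ND = 0.512638

def epsilon_nd(ratio):
    """Convert a measured 143Nd/144Nd ratio to epsilon-Nd(0)."""
    return (ratio / CHUR_143ND_144ND - 1.0) * 1.0e4

# The two endpoints reported for the Hole 417D/418A basalts:
for r in (0.513026, 0.513154):
    print(f"143Nd/144Nd = {r:.6f} -> epsilon-Nd(0) = {epsilon_nd(r):+.1f}")
```

Both endpoints come out near the reported range of +7.5 to +10; any small offset reflects the assumed CHUR value.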

Relevance:

100.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging.
This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning), and their format or syntax.
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives could be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which deals also with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft. Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely • Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp), • LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), • Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and • EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared level results. In particular, all the tools selected perform morphosyntactic annotations and they had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal). Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the named entities annotated to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).
3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. • CONFIRMED by the development of: o OntoTag’s annotation scheme, o OntoTag’s annotation architecture, o OntoTagger’s (XML, RDF, OWL) annotation schemas, o OntoTagger’s configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. • CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. • H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level. o Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). • CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level but built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency vs. HPSG-based, for instance) approach. • CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. • REJECTED on the basis of OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, belonging thereby to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
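The same-level combination that hypothesis H.5 describes can be illustrated with a plain majority vote over per-token POS tags. This is a deliberately simplified sketch: the tagger names and tagsets below are invented placeholders, and OntoTagger's actual combination criteria are more elaborate than a vote.

```python
# Simplified sketch of same-level combination (cf. H.5): majority vote
# over per-token POS tags produced by several hypothetical taggers.
from collections import Counter

def combine_pos(annotations):
    """annotations maps tool name -> one POS tag per token; return the
    majority tag for each token position."""
    per_token = zip(*annotations.values())
    return [Counter(tags).most_common(1)[0][0] for tags in per_token]

# Three hypothetical taggers disagreeing on the third token:
tags = {
    "tagger_a": ["DET", "NOUN", "VERB"],
    "tagger_b": ["DET", "NOUN", "NOUN"],
    "tagger_c": ["DET", "NOUN", "VERB"],
}
print(combine_pos(tags))   # -> ['DET', 'NOUN', 'VERB']
```

The vote only helps when the tools err differently, which is exactly why H.5 requires tools built on different technological or theoretical approaches.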
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). • On the one hand, OntoTag’s network of ontologies consists of − The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; − The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; − The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; − The OIO (OntoTag’s Integration Ontology), which (i) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (ii) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee: − As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; − As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; − Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead; − The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and also of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2. As an immediate result, this implies that a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed by humans or else by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and, then, were used to populate this domain ontology. • The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. • These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both tasks. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should assess how our approach performs in a different domain or a different language, such as French, English, or German. • In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging in order for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
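The per-file rates reported for the named-entity module follow the standard precision and recall definitions. A minimal sketch with hypothetical counts (the per-file true/false positive counts are not given in this summary):

```python
# Standard precision/recall definitions for named-entity evaluation.
# The counts below are invented for illustration only.
def precision(tp, fp):
    """Fraction of predicted named entities that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of gold-standard named entities that were found."""
    return tp / (tp + fn)

tp, fp, fn = 45, 0, 3   # hypothetical counts for one review page
print(f"P = {precision(tp, fp):.2%}  R = {recall(tp, fn):.2%}")
# -> P = 100.00%  R = 93.75%
```

The reported averages (R = 88.73%, P = 98.99%) are macro-averages of such per-file values over the ten annotated pages.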
And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step forward towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
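The kind of ontology-based triple annotation that OntoTag's scheme prescribes can be sketched in a few lines. The URIs and the unit/attribute/value names below are invented placeholders, not the actual identifiers of OntoTag's LUO, LAO and LVO ontologies:

```python
# Sketch: a morphosyntactic tag expressed as subject-predicate-object
# triples over an ontological vocabulary, instead of an opaque label such
# as "NN". All namespaces are hypothetical placeholders (assumption).
LUO = "http://example.org/luo#"   # linguistic units (hypothetical)
LAO = "http://example.org/lao#"   # linguistic attributes (hypothetical)
LVO = "http://example.org/lvo#"   # linguistic values (hypothetical)

def annotate_token(token_uri, unit, attribute, value):
    """Type a token as a linguistic unit and attach one attribute-value pair."""
    return [
        (token_uri, "rdf:type", LUO + unit),
        (token_uri, LAO + attribute, LVO + value),
    ]

triples = annotate_token("doc1#token3", "CommonNoun", "number", "Singular")
for s, p, o in triples:
    print(s, p, o)
```

Because each tag is a set of triples over a shared vocabulary rather than a tool-specific label, annotations from different tools can be compared, combined and serialised directly in RDF.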

Relevance:

100.00%

Abstract:

Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goals is large enough and they produce several answers, while sequentially ordered backtracking limits parallelism. And, despite the expected simplification, the implementation of the classic schemes has proved to involve complex engineering, with the consequent difficulty for system maintenance and expansion, and it still frequently runs into the well-known trapped goal and garbage slot problems. This work presents ideas about an alternative parallel backtracking model for IAP and a simulation study. The model features parallel out-of-order backtracking and relies on answer memoization to reuse and combine answers. Whenever a parallel goal backtracks, its siblings also perform backtracking, but after storing the bindings generated by previous answers. The bindings are then reinstalled when combining answers. In order not to unnecessarily penalize forward execution, non-speculative and-parallel goals which have not been executed yet take precedence over sibling goals which could be backtracked over. Using a simulator, we show that this approach can bring significant performance advantages over classical approaches.
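The answer-memoization idea can be sketched in a few lines: each independent parallel goal computes its answer list at most once, and combined answers are produced by crossing the memoized sets. This is a deliberately simplified sketch; the actual model also stores and reinstalls variable bindings and performs out-of-order backtracking, which this ignores.

```python
# Simplified sketch of answer memoization for independent and-parallel
# goals: answers are computed once and reused when combining, instead of
# being recomputed on backtracking. Goal names and answers are invented.
from itertools import product

class Goal:
    """An independent and-parallel goal whose answers are memoized, so that
    backtracking into a sibling never forces this goal to be recomputed."""
    def __init__(self, name, answer_source):
        self.name = name
        self._source = answer_source   # callable that enumerates the answers
        self._memo = None              # memoized answer list

    def answers(self):
        if self._memo is None:         # computed at most once
            self._memo = self._source()
        return self._memo

def combine(goals):
    """Combined answers: the cross-product of the memoized answer sets."""
    return list(product(*(g.answers() for g in goals)))

runs = {"p": 0, "q": 0}

def p_source():
    runs["p"] += 1
    return ["X=1", "X=2"]

def q_source():
    runs["q"] += 1
    return ["Y=a", "Y=b"]

g1, g2 = Goal("p(X)", p_source), Goal("q(Y)", q_source)
combine([g1, g2])
combine([g1, g2])   # a second combination pass reuses the memoized answers
print(runs)         # -> {'p': 1, 'q': 1}: each goal's answers computed once
```

In the real model the memoized answers are stored as bindings that are reinstalled on combination; the payoff over recomputation grows with the granularity of the goals and the number of answers they produce.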

Drought is a natural phenomenon that originates in a decrease in rainfall with respect to the average and results in insufficient water availability for some activities. The increasing pressure on water resources has aggravated the impacts of droughts, just as water scarcity has become an additional problem in many parts of the planet. Countries with a Mediterranean climate are especially vulnerable to droughts, and their water-dependent economic growth leads to significant impacts. 
To reduce the negative impacts, it is necessary to reduce vulnerability to droughts, which requires more efficient management and better preparedness; for this, the availability of information about the impacts and the scope of droughts becomes highly important. This research attempts to encompass the issue of drought impacts: it characterizes all the impact types that may occur and also compares their effects in two different countries (Spain and Chile). Impact attribution models are proposed in order to measure the economic losses caused by the lack of water. The proposed models are based on econometric approaches, and they include key variables for measuring the impacts: variables related to water availability, crop prices or time trends are included so as to distinguish the effects caused by each of the possible sources of variation. These models are adapted for each of the parts of the study. First, the direct impacts on irrigation are measured, and a source of variability is introduced into the model to assess the economic risk of drought. This is performed at two geographic levels (provincial and Agricultural Demand Unit); at the latter, not only the supply risk is considered but also the water demand risk. The introduction of the risk perspective into the model results in a risk management tool that can be used for planning strategies. Then, an extension of the econometric model is developed to measure the impacts on the agricultural sector (direct impacts on irrigated and rainfed production and indirect impacts on the agri-food industry); for this aim the model is adapted, and concatenated elasticities between the lack of water and the impacts are estimated. Finally, an econometric model is proposed for the Chilean case study to evaluate the impact of droughts, especially those caused by the El Niño Southern Oscillation. The overall results show the value of more precise knowledge of the impacts, which often tend to be overestimated. The indirect impacts of drought confirm their reach, while also showing that they are attenuated as we approach the macroeconomic level. In the case of Chile, the country's different management shows the role that the El Niño and La Niña phenomena play in the prices of its main crops and in the growth of the sector. More mitigation measures, focused on efficient resource management, are necessary to reduce drought losses. Besides, prevention must play an important role in reducing the risks that may be suffered in situations of scarcity.
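As a minimal illustration of the attribution approach (a sketch, not the thesis’s actual econometric specification, and with synthetic data), one can regress an outcome such as irrigated production on a water-availability variable plus a time trend, and read the water coefficient as the drought-attributable effect:

```python
# Synthetic illustration: production = b0 + b1*water + b2*trend.
# Plain-Python OLS via the normal equations (Gaussian elimination).

def ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination; X includes a constant."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                        # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

water = [1.0, 0.8, 0.6, 1.1, 0.5, 0.9, 0.7, 1.2]   # water availability index
trend = list(range(8))
y = [2.0 * w + 0.1 * t + 5.0 for w, t in zip(water, trend)]  # noiseless toy data
X = [[1.0, w, t] for w, t in zip(water, trend)]
b0, b_water, b_trend = ols(X, y)
print(round(b_water, 3))  # recovers the water effect, 2.0
```

Attributing only the `b_water` component to drought is exactly what guards against the overestimation of damages that the abstract mentions.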

This paper presents a study of the cracking of laminated plasterboard and rockwool sandwich panels under in-plane flexural loading. These panels are commonly used to build interior partition walls, and they frequently crack due to excessive deflection of the floor slabs. There are currently no reliable simulation models or experimental data for the study of this problem. The paper presents the results of an experimental campaign aimed at characterizing the fracture behaviour of the sandwich panels and of their individual components. In addition, it presents a cohesive model with an embedded crack that simulates the fracture behaviour of the complete panel. Finally, we present the results of mixed-mode (tension/shear) fracture tests on commercial panels and reproduce their behaviour with the proposed cohesive model, obtaining a good fit.
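A cohesive crack model relates the traction transmitted across the crack to its opening. As a minimal illustration (a generic linear softening law, not the specific law calibrated in the paper, and with invented parameter values), one might write:

```python
def cohesive_traction(w, f_t=0.3, w_c=1.5):
    """Linear softening cohesive law.

    w   : crack opening (mm)
    f_t : tensile strength (MPa) -- illustrative value only
    w_c : critical opening at which traction vanishes (mm) -- illustrative
    """
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    if w >= w_c:
        return 0.0                      # fully open crack: no traction
    return f_t * (1.0 - w / w_c)        # traction decays linearly with opening

# The fracture energy G_F is the area under the softening curve: f_t * w_c / 2.
G_F = 0.3 * 1.5 / 2.0
print(cohesive_traction(0.0), cohesive_traction(0.75), G_F)
```

Embedding such a traction-opening law inside a finite element is the essence of the embedded-crack approach referred to in the abstract.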

The cisternal organelle that resides in the axon initial segment (AIS) of neocortical and hippocampal pyramidal cells is thought to be involved in regulating the Ca(2+) available to maintain AIS scaffolding proteins, thereby preserving normal AIS structure and function. Through immunocytochemistry and correlative light and electron microscopy, we show here that the actin-binding protein α-actinin is present in the typical cisternal organelle of rodent pyramidal neurons, as well as in a large structure in the AIS of a subpopulation of layer V pyramidal cells that we have called the "giant saccular organelle." Indeed, this localization of α-actinin in the AIS is dependent on the integrity of the actin cytoskeleton. Moreover, in the cisternal organelle of cultured hippocampal neurons, α-actinin colocalizes extensively with synaptopodin, a protein that interacts with both actin and α-actinin, and the two appear concomitantly during the development of these neurons. Together, these results indicate that α-actinin and the actin cytoskeleton are important components of the cisternal organelle that are probably required to stabilize the AIS.

Nowadays, a significant quantity of linguistic data is available on the Web. However, linguistic resources are often published in proprietary formats; as such, they can be difficult to interface with one another, and they end up confined in “data silos”. The creation of web standards for publishing data on the Web, and of projects to create Linked Data, has led to interest in resources that can be published following Web principles. One of the most important aspects of “Lexical Linked Data” is the sharing of lexica and machine-readable dictionaries. It is for this reason that the lemon format has been proposed, which we briefly describe. We then consider two resources that seem ideal candidates for the Linked Data cloud, namely WordNet 3.0 and Wiktionary, a large document-based dictionary. We discuss the challenges of converting both resources to lemon and, in particular for Wiktionary, the challenges of processing the mark-up and of handling inconsistencies and underspecification in the source material. Finally, we turn to the task of creating links between the two resources and present a novel algorithm for linking lexica as Lexical Linked Data.
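A naive baseline for the lexicon-linking task (deliberately simplistic, and not the novel algorithm presented in the paper) links entries of two lexica whenever their lemma and part of speech coincide; the toy entries below are invented:

```python
# Toy lexica: minimal dict-based stand-ins for lemon lexical entries.
wordnet_like = [
    {"id": "wn:bank-n-1", "lemma": "bank", "pos": "noun"},
    {"id": "wn:run-v-1",  "lemma": "run",  "pos": "verb"},
]
wiktionary_like = [
    {"id": "wikt:bank#Noun", "lemma": "bank", "pos": "noun"},
    {"id": "wikt:bank#Verb", "lemma": "bank", "pos": "verb"},
]

def link_lexica(a, b):
    """Return (id_a, id_b) pairs for entries with matching lemma and POS."""
    index = {}
    for e in b:
        index.setdefault((e["lemma"], e["pos"]), []).append(e["id"])
    return [(e["id"], other)
            for e in a
            for other in index.get((e["lemma"], e["pos"]), [])]

print(link_lexica(wordnet_like, wiktionary_like))
# one link: ('wn:bank-n-1', 'wikt:bank#Noun')
```

Real linking is harder than this lemma/POS join precisely because of the sense ambiguity and underspecification the paper discusses.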

The time evolution of an ensemble of dynamical systems coupled through an irregular interaction scheme gives rise to dynamics of great complexity and to emergent phenomena that cannot be predicted from the properties of the individual systems. 
The main objective of this thesis is precisely to increase our understanding of the interplay between the interaction topology and the collective dynamics that a complex network can support. This is a very broad subject, so in this thesis we limit ourselves to the study of three relevant problems that have strong connections among them. First, it is a well-known fact that in many natural and man-made systems that can be represented as complex networks the topology is not static; rather, it depends on the dynamics taking place on the network (as happens, for instance, in the neuronal networks of the brain). In these adaptive networks the topology itself emerges from the self-organization of the system. To better understand how the properties that are commonly observed in real networks spontaneously emerge, we have studied the behavior of systems that evolve according to empirically motivated local adaptive rules. Our numerical and analytical results show that self-organization brings about two of the most universally found properties of complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, the existence of a power law in the weight distribution of the network interactions. The fact that these properties show up in two models with quantitatively different mechanisms that follow the same general adaptive principles suggests that our results may generalize to other systems as well, and that they may be behind the origin of these properties in some real systems. We also propose a new measure that provides a ranking of the elements in a network in terms of their relevance for the maintenance of collective dynamics. Specifically, we study the vulnerability of the elements under perturbations or large fluctuations, interpreted as a measure of the impact these external events have on the disruption of collective motion. Our results suggest that the dynamic vulnerability measure depends largely on local properties (our conclusions thus being valid for different topologies) and show a non-trivial dependence of the vulnerability on the connectivity of the network elements. Finally, we propose a strategy for the imposition of generic goal dynamics on a given network, and we explore its performance in networks with different topologies that support turbulent dynamical regimes. It turns out that heterogeneous networks (and most real networks that have been studied belong in this category) are the most suitable for our strategy for the targeting of desired dynamics, the strategy being very effective even when the knowledge of the network topology is far from accurate. Aside from their theoretical relevance for the understanding of collective phenomena in complex systems, the methods and results here discussed might lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (where pathological synchronous activity sometimes occurs), communication systems or power grids.
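As a toy illustration of ranking network elements by the disruption their failure causes (this fragmentation score is illustrative only; it is not the vulnerability measure proposed in the thesis), one can score each node by how much its removal shrinks the largest connected component:

```python
from collections import deque

# Toy undirected network as an adjacency dict (invented example).
GRAPH = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}

def largest_component(graph, removed=frozenset()):
    """Size of the largest connected component after removing some nodes."""
    seen, best = set(removed), 0
    for start in graph:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

def vulnerability_ranking(graph):
    """Rank nodes by the fragmentation their removal causes (largest first)."""
    base = largest_component(graph)
    score = {n: base - largest_component(graph, frozenset({n})) for n in graph}
    return sorted(score, key=score.get, reverse=True)

print(vulnerability_ranking(GRAPH))  # [2, 3, 4, 0, 1, 5]
```

Note that the two bridge nodes (2 and 3) top the ranking even though they are not the only well-connected ones, echoing the non-trivial relation between vulnerability and connectivity mentioned above.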

The prevalence of allergies has been increasing since the mid-twentieth century; they are currently estimated to affect around 2-8% of the population, but the underlying causes of this increase remain elusive. Understanding the mechanism by which a harmless protein becomes capable of inducing an allergic response provides the basis to prevent and treat these diseases. Although the characterization of relevant allergens has led to improved clinical management and has helped to clarify the basic mechanisms of allergic reactions, a molecular dissection of these allergens still seems justified in order to establish the structural basis of their allergenicity and cross-reactivity. 
The aim of this thesis was to characterize the molecular basis of the allergenicity of model proteins belonging to two different families (Lipid Transfer Proteins, LTPs, and Thaumatin-like Proteins, TLPs) in order to identify the mechanisms that mediate sensitization and cross-reactivity and thus develop new strategies for the management of allergy, both in diagnosis and in treatment. With this purpose, two strategies were pursued: studies of cross-reactivity among panallergen families, and molecular studies of the contribution of cofactors to the induction of the allergic response by these panallergens. Following the first strategy, we studied the cross-reactivity among members of the two plant panallergen families (LTPs and TLPs) using peach allergy as a model. Similarly, we characterized the sensitization profiles to wheat allergens in the development of baker's asthma, the most relevant occupational allergic disease. These studies were performed using allergen microarrays, and the results were analysed with graph theory. Regarding the second approach, we analyzed the interaction of plant allergens with immune and epithelial cells. To perform these studies, we examined the importance of the ligands and co-transported molecules of plant allergens in the development of Th2 responses. To this end, Pru p 3, an nsLTP (non-specific Lipid Transfer Protein) and the major peach allergen, was selected as a model to investigate its interaction with cells of the human and murine immune systems as well as with the intestinal epithelium, and the contribution of its ligand to the induction of an allergic response was studied. Moreover, we analyzed the role of pathogen-associated molecules in the induction of food allergy. For that, we selected the kiwi-Alternaria system as a model, and the role of Alt a 1, the major allergen of this fungus, in the development of Act d 2 sensitization was studied. 
In summary, this work presents innovative research providing results that are useful both for improving diagnosis and for further research on allergy, towards the final clarification of the mechanisms that characterize this disease.

Four European fuel cycle scenarios involving transmutation options (in coherence with the PATEROS and CPESFR EU projects) have been addressed from the point of view of resource utilization and economic estimates. The scenarios include: (i) the current fleet using Light Water Reactor (LWR) technology and an open fuel cycle, (ii) full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel, (iii) a closed fuel cycle with Minor Actinide (MA) transmutation in a fraction of the FR fleet, and (iv) a closed fuel cycle with MA transmutation in dedicated Accelerator Driven Systems (ADS). All scenarios consider an intermediate period of GEN-III+ LWR deployment, and they extend for 200 years, looking for the achievement of long-term equilibrium mass flows. The simulations were made using the TR_EVOL code, which is capable of assessing the management of the nuclear mass streams in each scenario as well as the economics, for the estimation of the levelized cost of electricity (LCOE) and other costs. The results reveal that all scenarios are feasible with regard to the demand for nuclear resources (natural and depleted U, and Pu). Additionally, we have found, as expected, that the FR scenario considerably reduces the Pu inventory in repositories compared with the reference scenario. The elimination of the LWR MA legacy requires at most a 55% fraction (i.e., a peak value of 44 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation), or an average of 28 ADS plants (i.e., a peak value of 51 ADS units). Regarding the economic analysis, the main usefulness of the economic results provided lies in the relative comparison of scenarios and in the breakdown of the LCOE contributors, rather than in the provision of absolute values, since technology readiness levels are low for most of the advanced fuel cycle stages. The obtained estimates show an increase of the LCOE, averaged over the whole period, with respect to the reference open cycle scenario of 20% for the Pu management scenario and around 35% for both transmutation scenarios. The main contribution to the LCOE comes from the capital costs of the new facilities, quantified at between 60% and 69% depending on the scenario. An uncertainty analysis is provided around assumed low and high values for the costs of processes and technologies.
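The levelized cost of electricity used for the comparison is, in its generic textbook form, the ratio of discounted lifetime costs to discounted lifetime generation. A minimal sketch with invented numbers (this is not the TR_EVOL cost model):

```python
def lcoe(costs, energy, rate):
    """Levelized cost of electricity: discounted costs / discounted output.

    costs  : yearly total costs (capital + O&M + fuel)
    energy : yearly electricity generation
    rate   : annual discount rate
    """
    disc_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy))
    return disc_costs / disc_energy

# Invented 5-year example: heavy capital outlay in year 0, then operation.
costs = [1000.0, 50.0, 50.0, 50.0, 50.0]
energy = [0.0, 400.0, 400.0, 400.0, 400.0]
print(round(lcoe(costs, energy, 0.05), 2))  # 0.83 (cost units per energy unit)
```

The dominance of the year-0 term in this toy example mirrors the finding above that capital costs of new facilities are the main LCOE contributor.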

This work describes an experience with a competence-based learning methodology in Linear Algebra for engineering students. The experience has been based on the autonomous team work of the students. DERIVE tutorials for Linear Algebra topics are provided to the students, who work through the tutorials as homework. Afterwards, worksheets with exercises are prepared to be solved by the students organized in teams, using the DERIVE functions previously defined in the tutorials. The students send the instructor their solutions to the proposed exercises and fill in a survey with their impressions on the following items: ease of use of the files, usefulness of the tutorials for understanding the mathematical topics, and the time spent on the experience. As a final task, we have designed an activity aimed at interested students: they prepare a project related to a real problem in science and engineering. The students are free to choose the topic and to develop it, but they have to use DERIVE in the solution; naturally, they are guided by the instructor. Some examples of activities related to orthogonal transformations will be presented.
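To give a flavour of the kind of computation such a project involves (a plain-Python illustration, independent of DERIVE), one can verify that a rotation matrix Q is orthogonal, i.e. that Q^T Q = I and that Q preserves lengths:

```python
import math

def rotation(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

Q = rotation(math.pi / 3)
QtQ = matmul(transpose(Q), Q)                  # should be the identity matrix
# An orthogonal map preserves lengths: |Q v| = |v| for any vector v.
norm = math.hypot(*[sum(q * v for q, v in zip(row, [3.0, 4.0])) for row in Q])
print(all(abs(QtQ[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2)),
      round(norm, 6))  # True 5.0
```

In the course itself the same checks would be carried out with DERIVE's built-in matrix functions rather than hand-written Python.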

In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences. 
This field benefits from the collaborative work of multidisciplinary teams in which physicians, psychologists, engineers and other specialists work towards a common goal: the understanding of the brain. Research in this field is still in its early years, its birth being attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a very small fraction of brain function has been uncovered, and much more remains to be explored. Isolated techniques aim at unravelling the system that supports our cognition; nevertheless, in order to provide solid evidence in such a field, multimodal techniques have arisen, and with them we will be able to improve current knowledge about human cognition. Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive to the magnetic fields emitted by the neuronal currents and to the white matter microstructure, respectively. The combination of these techniques can bring up evidence about structural-functional synergies in the processing of information in the brain, and about which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and in how to integrate this information. We quantify functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, and so we obtain an index indicating the similarity between neuronal entities, i.e. brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation from the diffusion-weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, of the strength of the structural connections between regions. 
These quantifications are then combined following three different approaches, from the lowest to the highest level of integration, in chapters 3, 4 and 5. We finally apply the fused information to the characterization or prediction of mild cognitive impairment, a clinical entity considered an early step in the pathological continuum of dementia. The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience. Later in this chapter I describe the objectives of this thesis, and the specific objectives of each of the scientific publications that were produced as a result of this work. In chapter 2 I describe the methods for each of the techniques that were employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory; finally, I describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early biomarkers. In chapters 3, 4 and 5 I have included the scientific publications that were generated along this work. They have been included in their original format and contain introduction, materials and methods, results and discussion. All methods that were employed in these papers have been described in chapter 2. Finally, in chapter 6 I summarize all the results from this thesis, both locally for each of the scientific publications and globally for the whole work.
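The graph-theory step named among the methods can be sketched as follows: a (functional or structural) connectivity matrix is thresholded into an adjacency matrix, from which standard network metrics are computed. The 4-region matrix and the 0.5 threshold are toy values for illustration, not taken from the thesis.

```python
# Toy sketch: connectivity matrix -> binary graph -> degree and
# global efficiency (mean inverse shortest-path length).
import numpy as np

conn = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.7, 0.2],
    [0.1, 0.7, 0.0, 0.9],
    [0.0, 0.2, 0.9, 0.0],
])  # symmetric connectivity between 4 brain regions

adj = (conn > 0.5).astype(int)   # keep only "strong" connections
degree = adj.sum(axis=0)         # number of connections per region

# Shortest path lengths via Floyd-Warshall, then global efficiency.
n = adj.shape[0]
dist = np.where(adj == 1, 1.0, np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(n):
    dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])

global_efficiency = (1.0 / dist[~np.eye(n, dtype=bool)]).mean()
print(degree, round(global_efficiency, 3))
```

Real analyses typically use a dedicated library and sweep the threshold (or use weighted metrics) rather than fixing a single cutoff.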


Resumo:

This paper addresses the determination of the realized thermal niche and the effects of climate change on the range distribution of two brown trout populations inhabiting two streams in the Duero River basin (Iberian Peninsula), at the edge of the natural distribution area of this species. To reach these goals, new methodological developments were applied to improve the reliability of forecasts. Water temperature data were collected using 11 thermographs located along the altitudinal gradient, and they were used to model the relationship between stream temperature and air temperature along the river continuum. Trout abundance was studied by electrofishing at 37 sites to determine the current distribution. The RCP4.5 and RCP8.5 change scenarios adopted by the Intergovernmental Panel on Climate Change for its Fifth Assessment Report were used for simulations and local downscaling in this study. We found more reliable results using the daily mean stream temperature than the daily maximum temperature, together with their respective seven-day moving averages, to determine the distribution thresholds. Accordingly, the observed limits of the summer distribution of brown trout were linked to thresholds between 18.1ºC and 18.7ºC. These temperatures characterize a realized thermal niche narrower than the physiological thermal range. In the most unfavourable climate change scenario, the thermal habitat loss of brown trout reached 38% (Cega stream) and 11% (Pirón stream) in the upstream direction by the end of the century; however, in the Cega stream the range reduction could reach 56% due to the opening of a "warm window" in the piedmont reach.
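The threshold criterion described above can be sketched as a seven-day moving average of daily mean stream temperature compared against a distribution limit. The temperature series below is invented for illustration; only the 18.1ºC lower bound of the threshold range comes from the abstract.

```python
# Hedged sketch: 7-day moving average of daily mean stream temperature,
# flagging windows above a thermal-distribution threshold.
import numpy as np

daily_mean = np.array([16.5, 17.0, 17.8, 18.4, 18.9, 19.2, 18.6,
                       18.0, 17.4, 16.9])  # degrees C, one value per day

window = 7
kernel = np.ones(window) / window
moving_avg = np.convolve(daily_mean, kernel, mode="valid")

threshold = 18.1  # lower bound of the reported threshold range (degrees C)
exceeds = moving_avg > threshold
print(moving_avg.round(2))
print(exceeds)
```

Each `True` marks a seven-day window whose mean exceeds the threshold, i.e. conditions under which the reach would fall outside the realized thermal niche.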


Resumo:

LINCOLN UNIVERSITY - On March 25, 1965, a bus loaded with Lincoln University students and staff arrived in Montgomery, Ala., to join the Selma march for racial and voting equality. Although the Civil Rights Act of 1964 was in force, African-Americans continued to feel the effects of segregation. The 1960s was a decade of social unrest and change. In the Deep South, specifically Alabama, racial segregation was a cultural norm resistant to change. Governor George Wallace never concealed his personal viewpoints or the political stance of the white majority, declaring “Segregation now, segregation tomorrow, segregation forever.” The march was aimed at securing for African-Americans their constitutionally protected right to vote. However, Alabama’s deep-rooted culture of racial bias began to be challenged by a shift in American attitudes toward equality. Both blacks and whites wanted to end discrimination through passive resistance, a movement championed by Dr. Martin Luther King Jr. That passive resistance was often met with violence, sometimes at the hands of law enforcement and local citizens. The Selma to Montgomery march grew out of a protest for voting equality. The Student Nonviolent Coordinating Committee (SNCC) and the Southern Christian Leadership Conference (SCLC), among other groups of students, marched along the streets to bring awareness to the voter registration campaign, which was organized to end discrimination in voting based on race. Violent acts by police officers and others were among the everyday challenges protesters faced. Forty-one participants from Lincoln University arrived in Montgomery to take part in the 1965 march for equality. Students from Lincoln University’s Journalism 383 class spent part of their 2015 spring semester researching the historical event. Here are their stories: Peter Kellogg “We’ve been watching the television, reading about it in the newspapers,” said Peter Kellogg during a February 2015 telephone interview. 
“Everyone knew the civil rights movement was going on, and it was important that we give him (Robert Newton) some assistance … and Newton said we needed to get involved and do something,” said Kellogg, a lecturer at Lincoln University in the 1960s, explaining how the bus trip originated. “That’s why the bus happened,” Kellogg said. “Because of what he (Newton) did - that’s why Lincoln students went and participated.” “People were excited and the people along the sidewalk were supportive,” Kellogg said. However, the mood flipped from excitement to fear and intimidation. “It seemed as though at every office building there was a guy in a blue uniform with binoculars, standing in the crowd with troops and police. And if looks could kill, we could have all been dead.” He says the hatred and intimidation were intense. Kellogg, being white, was an immediate target of hostility from many white onlookers. He didn’t realize how dangerous the event in Alabama was until he and the others on the bus heard about the death of Viola Liuzzo. The married mother of five from Detroit was shot and killed by members of the Ku Klux Klan while shuttling activists to the Montgomery airport. “We found out about her death on the ride back,” Kellogg recalled. “Because it was a loss of life, and it shows the violence … we could have been exposed to that danger!” After returning to LU, Kellogg’s outlook on life took a dramatic turn. Kellogg noted King’s belief that a person should be willing to die for important causes. “The idea is that life is about something larger and more important than your own immediate gratification, and career success or personal achievements,” Kellogg said. “The civil rights movement … it made me, it made my life more significant because it was about something important.” The civil rights movement influenced Kellogg to change his career path and become a black history lecturer. To this day he has no regrets, and he believes his choices made him a better individual. 
The bus ride to Alabama, he says, began with the actions of just one student. Robert Newton Robert Newton was the initiator, recruiter and leader of the Lincoln University movement to join Dr. Martin Luther King’s march in Selma. “In the ’60s many of the civil rights activists came out of college,” said Newton during a recent phone interview. Many of the events that involved segregation compelled college students to fight for equality. “We had selected boycotts of merchants, when blacks were not allowed to try on clothes,” Newton said. “You could buy clothes at department stores, but no blacks could work at the department stores as salespeople. If you bought clothes there you couldn’t try them on; you had to buy them first, take them home and try them on.” Newton said the students risked their lives to be a part of history and influence change. He not only recognized the historic role of his fellow Lincolnites, but also recognized other college students and historically black colleges and universities who played a vital role in history. “You had the S.N.C.C. organization, in terms of voting rights and other things, including a lot of participation and working off the bureau,” Newton said. Other schools and places, such as UNT, Greenville and Howard University, along with other historically black institutions, had groups that emerged as leaders. Newton believes that much has changed in 50 years. “I think we’ve certainly come a long way from what I’ve seen from the standpoint of growing up outside of Birmingham, Alabama,” Newton said. He believes that college campuses today are more organized in their approach to social causes. “The campus appears to be somewhat more integrated amongst students in terms of organizations and friendships.” Barbara Flint Dr. Barbara Flint grew up in the southern part of Arkansas and came to Lincoln University in 1961. 
She describes her experience at Lincoln as “being at Lincoln when the world was changing.” She was an active member of Lincoln’s History Club, which focused on current events and issues and influenced her decision to join the Selma march. “The first idea was to raise some money and then we started talking about ‘why can’t we go?’ I very much wanted to be a living witness to history.” Reflecting on the march and the journey to Montgomery, Flint describes it as filled with tension. “We were very conscious of the fact that once we got on the road past Tennessee we didn’t know what was going to happen,” said Flint during a February 2015 phone interview. “Many of the students had not been beyond Missouri, so they didn’t have that sense of what happens in the South. Having lived there you knew the balance, as well as what is likely to happen and what is not likely to happen. As my father used to say, ‘you have to know how to stay on that line of balance.’” Upon arriving in Alabama she remembers the feeling of excitement and relief from everyone on the bus. “We were tired and very happy to be there, and we were trying to figure out where we were going to join and get into the march,” Flint said. “There were so many people coming in, and we were also trying to stay together; that was one of the things that really stuck out for me, not just for us but for the people who were coming in. You didn’t want to lose sight of the people you came with.” Flint says she was keenly aware of her surroundings. For her, it was more than just marching forward. “I can still hear those helicopters now,” Flint recalled. “Every time the helicopters would come over, the sound would make people jump and look up - I think that demonstrated the extent of the tenseness that was there at the time, because the helicopters kept coming over every few minutes.” She said that the marchers sang “we are not afraid,” but that fear remained with every step. 
“Just having been there, being a witness and marching, you realize that I’m one of those drops that’s going to make up this flood, and with this flood things will move,” said Flint. As a student at Lincoln in 1965, Flint says the Selma experience undoubtedly changed her life. “You can’t expect to do exactly what you came to Lincoln to do,” Flint says. “That march - along with all the other marchers and the action that was taking place - directly changed the paths that I and many other people at Lincoln would take.” She says current students and new generations need to reflect on their personal role in society. “Decide what needs to be done and ask yourself ‘how can I best contribute to it?’” Flint said. She notes technology and social media can be used to reach audiences in ways unavailable to her generation in 1965. “So you don’t always have to wait for someone else to step out there and say ‘let’s march’; you can express your vision and your views, and you have the means to do so (so) others can follow you.” Jaci Newsom Jaci Newsom came to Lincoln in 1965 from Atlanta. She came to Lincoln to major in sociology, and Jefferson City was largely different from what she had grown up with. “To be able to come into a restaurant, sit down and be served a nice meal was eye-opening to me,” said Newsom during a recent interview. She eventually became accustomed to the relaxed attitude of Missouri and was shocked by the situation she encountered on an out-of-town trip. “I took a bus trip from Atlanta to Pensacola and I encountered the worst racism that I have ever seen. I was at a bus stop, I went in to be served, and they would not serve me. There was a policeman sitting there at the table, and he told me that privately owned places could select not to serve you.” Newsom describes her experience of marching in Montgomery as one with a purpose. “We felt as though we achieved something - we felt a sense of unity,” Newsom said. 
“We were very excited (because) we were going to hear from Martin Luther King. To actually be in the presence of him and the other civil rights workers, there was just such enthusiasm and excitement, yet there was also some apprehension about what we might encounter.” Many of the marchers showed their inspiration and determination while pressing forward toward the grounds of the Alabama Capitol building. Newsom recalled that the marchers were singing the lyrics “ain’t gonna let nobody turn me around” and “we shall overcome.” “I started seeing people just like me,” Newsom said. “I don’t recall any of the scowling, the hitting, the things I would see on TV later. I just saw a sea of humanity marching towards the Capitol. I don’t remember what Martin Luther King said, but it was always the same message: keep the faith; we’re going to get where we’re going, and let us remember what our purpose is.” Newsom offers advice on what individuals can do to make their society a more productive and peaceful place. “We have come a long way, and we have ways to change things that we did not have before,” Newsom said. “You need to work in positive ways to change.” Referencing the recent unrest in Ferguson, Mo., she believes that people become destructive as a way to show and vent anger. Her generation, she says, was raised to react in lawful ways, and to believe in hope. “We have faith to do things in a way that was lawful, and it makes me sad what people do when they feel without hope, and there is hope,” Newsom says. “Non-violence does work - we need to include everyone to make this world a better place.” Newsom graduated from Lincoln in 1969 and describes her experience at Lincoln as, “I grew up and did more growing at Lincoln than I think I did for the rest of my life.”