932 results for J31 - Wage Level and Structure


Abstract:

Nearly continuous cores of Quaternary fine-grained sediments with distinct dark-light colored cycles were recovered from Sites 794, 795, and 797 in the basinal parts of the Japan Sea during Leg 127. A comparison of gray-value (darkness) profiles between sites, supplemented by visual inspection of core photographs, indicated that most of the dark and light layers were correlatable between sites and that two of the dark layers lie close to adjacent marker ash layers. These observations indicate that deposition of the dark and light layers resulted from basin-wide synchronous events. In order to understand the origin of these dark-light cycles, petrographical, mineralogical, compositional, and paleontological studies were carried out on closely spaced samples from the upper Quaternary sediments recovered from Site 797. An age model was constructed by comparing variation in diatom abundance with the standard oxygen isotope curve of Imbrie et al. (1984), the latter interpolated between the five age-controlled levels established at Site 797. The two curves show similar patterns, which enabled us to "tune" the sediment ages to the oxygen isotope stages. Variation in diatom abundance was used as a substitute for an oxygen isotope curve because oxygen isotopic data are not available at the studied sites. Bottom-water oxygenation conditions were estimated based on two criteria: (1) the degree of lamina preservation and (2) the ratio of Corg to Stot. Surface-water productivity was deduced from the Corg and biogenic silica content. The results suggest that the bottom-water oxygenation level and the surface-water productivity varied significantly in response to glacial-interglacial cycles. Glacio-eustatic sea-level changes and subsequent changes in water circulation in the Japan Sea appear to have been responsible for these variations and the consequent changes in sediment composition throughout the Quaternary.
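
The two oxygenation criteria lend themselves to a simple classification rule. The sketch below illustrates the idea in Python; the cutoff value and the sample inputs are hypothetical placeholders, not the thresholds used in the study.

```python
# Illustrative sketch of the two bottom-water oxygenation criteria
# described above. The 3.0 cutoff and the inputs are hypothetical.

def oxygenation_index(laminae_preserved: bool, corg: float, stot: float) -> str:
    """Classify bottom-water oxygenation from lamina preservation and the
    organic carbon / total sulfur (Corg/Stot) ratio (both in wt%)."""
    ratio = corg / stot if stot > 0 else float("inf")
    if laminae_preserved and ratio < 3.0:       # hypothetical cutoff
        return "oxygen-depleted (laminae preserved, S-enriched)"
    if not laminae_preserved and ratio > 3.0:
        return "oxygenated (bioturbated, low sulfur)"
    return "intermediate"

print(oxygenation_index(True, 2.1, 1.4))
```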

Abstract:

The course of sea-level fluctuations during Termination II (TII; the penultimate deglaciation), which is critical for understanding ice-sheet dynamics and suborbital climate variability, has yet to be established. This is partly because most shallow-water sequences encompassing TII were eroded during sea-level lowstands of the last glacial period or were deposited below the present sea level. Here we report a new sequence recording sea-level changes during TII in the Pleistocene sequence at Hole M0005D (water depth: 59.63 m below sea level [mbsl]) off Tahiti, French Polynesia, which was drilled during Integrated Ocean Drilling Program Expedition 310. Lithofacies variations and stratigraphic changes in the taxonomic composition, preservation states, and intraspecific test morphology of large benthic foraminifers indicate a deepening-upward sequence in the interval from Core 310-M0005D-26R (core depth: 134 mbsl) through -16R (core depth: 106 mbsl). Reconstruction of relative sea levels, based on paleodepth estimations using large benthic foraminifers, indicated a rise in sea level of about 90 m during this interval, suggesting its correlation with one of the terminations. Assuming that this rise in sea level corresponds to that during TII, after correcting for subsidence since the time of deposition, a highstand sea-level position would be 2 ± 15 m above present sea level (masl), which is generally consistent with highstand sea-level positions in MIS 5e (4 ± 2 masl). If this rise in sea level corresponds to that during older terminations, the subsidence-corrected highstand sea-level positions (30 ± 15 masl for Termination III and 54 ± 15 masl for Termination IV) are not consistent with reported ranges of interglacial sea-level highstands (-18 to 15 masl). Therefore, the studied interval likely records the rise in sea level and associated environmental changes during TII. In particular, the intervening cored materials between the two episodes of sea-level rise found in the studied interval might record the sea-level reversal event during TII. This conclusion is consistent with U/Th ages of around 133 ka, which were obtained from slightly diagenetically altered (i.e., < 1% calcite) in situ corals in the studied interval (Core 310-M0005D-20R [core depth: 118 mbsl]). This study also suggests that our inverse approach to correlate a stratigraphic interval with an approximate time frame could be useful as an independent check on the accuracy of uranium-series dating, which has been applied extensively to fossil corals in late Quaternary sea-level studies.
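
The subsidence correction behind the quoted highstand positions is simple arithmetic: the present elevation of a marker plus the subsidence accumulated since its deposition. The sketch below uses an assumed subsidence rate of 0.25 mm/yr and nominal termination ages; the marker elevation is a made-up input chosen to show how the three candidate ages reproduce the spread of corrected positions quoted above.

```python
# Back-of-the-envelope sketch of the subsidence correction described
# above. Rate, ages and the marker elevation are assumed round numbers
# for illustration, not values taken from the study.

RATE_M_PER_KYR = 0.25  # assumed subsidence rate: 0.25 mm/yr = 0.25 m/kyr

def corrected_highstand(present_elev_m: float, age_ka: float) -> float:
    """Restore a paleo-shoreline marker to its elevation at deposition by
    adding back the subsidence accumulated since age_ka."""
    return present_elev_m + RATE_M_PER_KYR * age_ka

# Hypothetical marker now ~31 m below present sea level:
for name, age_ka in [("TII", 133), ("TIII", 245), ("TIV", 341)]:
    print(name, round(corrected_highstand(-31.0, age_ka)), "masl")
# -> TII ~2 masl, TIII ~30 masl, TIV ~54 masl: the older the assumed
#    termination, the higher the corrected highstand, matching the
#    spread of positions quoted in the abstract.
```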

Abstract:

We used a controlled CO2 perturbation experiment to test hypotheses about changes in the diversity, composition and structure of soft-bottom intertidal macrobenthic assemblages under realistic and locally relevant scenarios of seawater acidification. Patches of undisturbed sediment were collected from 2 types of intertidal sedimentary habitat in the Ria Formosa coastal lagoon (South Portugal) and exposed to 2 levels of seawater acidification (pH reduced by 0.3 and 0.6 units) and 1 unmanipulated (control) level. After 75 d the assemblages differed significantly between the 2 types of sediment and between field controls and the ex situ treatments, but not among the 3 pH levels tested. The naturally high values of total alkalinity buffered seawater against the imposed changes in carbonate chemistry and may have contributed to offsetting acidification at the local scale. Observed differences in biota were strongly related to the organic matter content and grain size of the sediments, particularly to the fractions of medium and coarse sand. Soft-bottom intertidal macrofauna was significantly affected by the stress of being held in an artificial environment, but not by CO2-induced seawater acidification. Given the previously observed variations in the sensitivities of marine organisms to seawater acidification, direct extrapolation of the present findings to different regions or other types of assemblages does not seem advisable. However, the contribution of ex situ studies to the assessment of ecosystem-level responses to environmental disturbances could generally be improved by incorporating adequate field controls into the experimental design.
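
For scale, the two pH offsets translate into hydrogen-ion concentration factors via the logarithmic pH definition; the quick arithmetic below makes that explicit.

```python
# Quick arithmetic for the pH offsets used in the experiment: a drop of
# x pH units multiplies the hydrogen-ion concentration by 10**x.

for d_ph in (0.3, 0.6):
    print(f"pH -{d_ph}: [H+] x{10 ** d_ph:.2f}")
# pH -0.3: [H+] x2.00  (roughly a doubling)
# pH -0.6: [H+] x3.98  (roughly a quadrupling)
```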

Abstract:

During the past five million years, benthic d18O records indicate a large range of climates, from warmer than today during the Pliocene Warm Period to considerably colder during glacials. Antarctic ice cores have revealed Pleistocene glacial-interglacial CO2 variability of 60-100 ppm, while sea-level fluctuations of typically 125 m are documented by proxy data. However, for the pre-ice core period, CO2 and sea-level proxy data are scarce and there is disagreement between different proxies and between different records of the same proxy. This hampers a comprehensive understanding of the long-term relations between CO2, sea level and climate. Here, we drive a coupled climate-ice sheet model over the past five million years, inversely forced by a stacked benthic d18O record. We obtain continuous simulations of benthic d18O, sea level and CO2 that are mutually consistent. Our model shows CO2 concentrations of 300 to 470 ppm during the Early Pliocene. Furthermore, we simulate strong CO2 variability during the Pliocene and Early Pleistocene. These features are broadly supported by existing and new d11B-based proxy CO2 data, but less so by alkenone-based records. The simulated concentrations, and the variations therein, are larger than expected from global mean temperature changes. Our findings thus suggest a smaller Earth System Sensitivity than previously thought. This is explained by a more restricted role of land-ice variability in the Pliocene. The largest uncertainty in our simulation arises from the mass balance formulation of East Antarctica, which governs the variability in sea level but only modestly affects the modeled CO2 concentrations.
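
The phrase "inversely forced" refers to a feedback loop in which the forcing is adjusted until the model's simulated d18O matches the observed record. The sketch below shows that loop schematically; the toy model, gain factor and target values are illustrative placeholders, not the study's actual scheme.

```python
# Schematic sketch of an inverse-forcing loop of the kind described
# above: at each step the climate forcing is nudged so that simulated
# benthic d18O tracks the observed stacked record.

def inverse_run(obs_d18o, step_fn, gain=2.0):
    """obs_d18o: observed benthic d18O targets (per mil), oldest first.
    step_fn(forcing) -> simulated d18O for one model step."""
    forcing, history = 0.0, []
    for target in obs_d18o:
        simulated = step_fn(forcing)
        # Nudge the forcing so the model moves toward the record (sign
        # convention: here a warmer forcing lowers simulated d18O).
        forcing += gain * (simulated - target)
        history.append(round(forcing, 2))
    return history

def toy_step(forcing):
    return 4.0 - 0.02 * forcing   # toy linear climate "model"

print(inverse_run([3.2, 3.5, 4.1, 4.8, 4.4], toy_step))
```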

Abstract:

The middle Miocene delta18O increase represents a fundamental change in Earth's climate system due to a major expansion and permanent establishment of the East Antarctic Ice Sheet, accompanied by some deepwater cooling. Superimposed on the long-term middle to late Miocene cooling trend were several punctuated glaciation periods (Mi-events), characterized by oxygen isotopic shifts that have been related to the waxing and waning of the Antarctic ice sheet and to bottom-water cooling. Here, we present a high-resolution benthic stable oxygen isotope record from ODP Site 1085, located at the southwestern African continental margin, that provides a detailed chronology for the middle to late Miocene (13.9-7.3 Ma) climate transition in the eastern South Atlantic. A composite Fe intensity record obtained by XRF core scanning of ODP Sites 1085 and 1087 was used to construct an astronomically calibrated chronology based on orbital tuning. The oxygen isotope data exhibit four distinct delta18O excursions, which have astronomical ages of 13.8, 13.2, 11.7, and 10.4 Ma and correspond to the Mi3, Mi4, Mi5, and Mi6 events. A global climate record was extracted from the oxygen isotopic composition. Both long- and short-term variability in the climate record is discussed in terms of sea-level and deep-water temperature changes. The oxygen isotope data support a causal link between sequence boundaries traced from the shelf and glacioeustatic changes due to ice-sheet growth. Spectral analysis of the benthic delta18O record shows strong power in the 400-kyr and 100-kyr bands, documenting a paleoceanographic response to eccentricity-modulated variations in precession. A spectral peak at around 180 kyr might be related to the asymmetry of the obliquity cycle, indicating that the response of the dominantly unipolar Antarctic ice sheet to obliquity-induced variations probably controlled the middle to late Miocene climate system. Maxima in the delta18O record, interpreted as glacial periods, correspond to minima in the 100-kyr eccentricity cycle and minima in the 174-kyr obliquity modulation. Strong middle to late Miocene glacial events are associated with 400-kyr eccentricity minima and obliquity modulation minima. Thus, fluctuations in the amplitudes of obliquity and eccentricity seem to be the driving force behind middle to late Miocene climate variability.
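
The kind of spectral check reported above can be illustrated with a simple FFT periodogram. The sketch below uses a synthetic, evenly sampled series containing the two eccentricity bands; it does not reproduce the study's record or its orbital tuning.

```python
# Illustrative periodogram: recover 400-kyr and 100-kyr power from a
# synthetic isotope-like series. Synthetic data only.

import numpy as np

dt = 2.0                                       # sample spacing, kyr
t = np.arange(0, 6400, dt)                     # ~6.4 Myr window
series = (0.3 * np.sin(2 * np.pi * t / 400)    # 400-kyr eccentricity band
          + 0.2 * np.sin(2 * np.pi * t / 100)  # 100-kyr eccentricity band
          + 0.05 * np.random.randn(t.size))    # measurement noise

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)          # cycles per kyr

for f in freqs[np.argsort(power)[-2:]]:
    print(f"dominant period ~{1 / f:.0f} kyr")  # ~100 kyr and ~400 kyr
```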

Abstract:

Climate change mediates marine chemical and physical environments and therefore influences marine organisms. While rising atmospheric CO2 levels and the associated ocean acidification have been predicted to stimulate marine primary productivity and may affect community structure, the processes that impact the food chain and the biological CO2 pump are less well documented. We hypothesized that copepods, as secondary marine producers, may respond to the future changes in seawater carbonate chemistry associated with ocean acidification due to increasing atmospheric CO2 concentration. Here, we show that the copepod Centropages tenuiremis was able to perceive the chemical changes in seawater induced under elevated CO2 concentration (>1700 µatm, pH < 7.60) and responded with an avoidance strategy. The copepod's respiration increased at elevated CO2 (1000 µatm) and the associated acidity (pH 7.83), and its feeding rates also increased correspondingly, except during the initial acclimation period, when it fed less. Our results imply that marine secondary producers increase their respiration and feeding rates in response to ocean acidification to balance the energy cost of increased acidity and CO2 concentration.

Abstract:

This paper estimates the impact of industrial agglomeration on firm-level productivity in Chinese manufacturing sectors. To account for spatial autocorrelation across regions, we formulate a hierarchical spatial model at the firm level and develop a Bayesian estimation algorithm. A Bayesian instrumental-variables approach is used to address the endogeneity bias of agglomeration. Robust to these potential biases, our results show that agglomeration of the same industry (i.e. localization) has a productivity-boosting effect, whereas agglomeration of urban population (i.e. urbanization) has no such effect. Additionally, the localization effects increase with the educational level of employees and the share of intermediate inputs in gross output. These results may suggest that agglomeration externalities occur through knowledge spillovers and input sharing among firms producing similar manufactures.
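
A generic hierarchical spatial specification of the kind described can be written as below; this is an illustrative form, and the paper's exact model may differ.

```latex
% Generic hierarchical spatial model with a spatially autoregressive
% region effect (illustrative; not necessarily the paper's specification).
\begin{align*}
  \ln y_{ij} &= \beta^{\top} x_{ij} + \gamma\,\mathrm{Agg}_j + u_j + \varepsilon_{ij},
             \qquad \varepsilon_{ij} \sim N(0,\sigma^2),\\
  u &= \lambda W u + v, \qquad v_j \sim N(0,\tau^2),
\end{align*}
```

Here, y_ij would be the productivity of firm i in region j, Agg_j the agglomeration measure (the term instrumented in the Bayesian IV step), and u_j a region effect whose spatial autocorrelation enters through the weight matrix W.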

Abstract:

The Chinese government has committed to reaching peak carbon emissions before 2030, which requires China to implement new policies. Using a CGE model, this study simulates the functioning of an energy tax and a carbon tax and analyzes their effects on macro-economic indices. The Chinese economy is affected at an acceptable level by the two taxes: GDP loss is below 0.8% with a carbon tax of 100, 50, or 10 RMB/ton CO2 or an energy tax of 5% of the delivery price, and the loss of real disposable personal income is smaller still. Compared with implementing a single tax, a combined carbon and energy tax induces larger emission reductions at relatively smaller economic cost. With these taxes, the domestic competitiveness of energy-intensive industries improves. Additionally, we found that the sooner such taxes are launched, the smaller the economic costs and the more significant the achieved emission reductions.
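
The two instruments enter an energy price differently: the carbon tax scales with the fuel's emission factor, the energy tax with its delivery price. The toy calculation below mirrors the tax rates quoted in the abstract; the base price and emission factor are made-up inputs, and the CGE model itself is not reproduced.

```python
# Toy illustration of how the two tax instruments enter an energy price.

def taxed_price(base_price_rmb, tco2_per_unit,
                carbon_tax_rmb_per_tco2, energy_tax_rate):
    carbon_component = carbon_tax_rmb_per_tco2 * tco2_per_unit
    energy_component = energy_tax_rate * base_price_rmb
    return base_price_rmb + carbon_component + energy_component

# e.g. a fuel at 600 RMB/ton emitting ~2 tCO2/ton, carbon tax 50 RMB/tCO2,
# energy tax 5% of the delivery price (all hypothetical inputs):
print(taxed_price(600, 2.0, 50, 0.05))  # -> 730.0 RMB/ton
```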

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.

Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and 'intelligently' will include at least a module for POS tagging. The more an application needs to 'understand' the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:

1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.

A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance; otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for the model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. For the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
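
To make the "ontology-based triples" idea concrete, the sketch below shows one token's annotation expressed as RDF triples using the rdflib library. The namespace and property names are invented placeholders for illustration, not OntoTag's actual vocabulary.

```python
# Minimal sketch of ontology-based triple annotation (rdflib).
# Names under the example.org namespace are hypothetical.

from rdflib import Graph, Literal, Namespace, URIRef

ONT = Namespace("http://example.org/ontotag-like#")
g = Graph()

token = URIRef("http://example.org/doc1#token_7")
g.add((token, ONT.hasForm, Literal("annotations")))
g.add((token, ONT.hasLemma, Literal("annotation")))
g.add((token, ONT.hasPOS, ONT.CommonNoun))  # a term from a POS ontology

print(g.serialize(format="turtle"))
```
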
Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools that annotate at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of OntoTag's annotation scheme, OntoTag's annotation architecture, OntoTagger's (XML, RDF, OWL) annotation schemas, and OntoTagger's configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease (H.3.1) the interoperation of linguistic tools and (H.3.2) the comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• CONFIRMED by means of the development of OntoTagger's ontology-based configuration: interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger); integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level; and integration of morphosyntactic, syntactic and semantic annotations.

H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, even though these other tools might follow a different technological approach (stochastic vs. rule-based, for example) or theoretical approach (dependency-based vs. HPSG-based, for instance). A generic illustration of this combination idea is sketched after this list.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently.
• REJECTED on the basis of OntoTagger's experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
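
The error-reduction idea behind H.5 can be illustrated with a generic combination scheme: let several independent tools propose a tag for each token and keep the majority choice. This is a sketch of the general principle only, not OntoTagger's actual merging logic.

```python
# Illustrative majority-vote combination of POS tags from several tools.

from collections import Counter

def combine_pos(tag_sequences):
    """tag_sequences: one list of POS tags per tool, aligned by token."""
    combined = []
    for tags_for_token in zip(*tag_sequences):
        tag, _count = Counter(tags_for_token).most_common(1)[0]
        combined.append(tag)
    return combined

tool_a = ["DET", "NOUN", "VERB"]
tool_b = ["DET", "NOUN", "NOUN"]   # disagrees on the last token
tool_c = ["DET", "NOUN", "VERB"]
print(combine_pos([tool_a, tool_b, tool_c]))  # ['DET', 'NOUN', 'VERB']
```
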
In fact, Hypothesis H.6 had already been rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a single multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.

Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies). On the one hand, OntoTag's network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which likewise includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag's Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology describing the most elementary vocabulary used in the area of annotation.
On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate; the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. As far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. As far as precision (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should test how our approach works in a different domain or a different language, such as French, English, or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers can be integrated together and (ii) they can be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).

Abstract:

This paper presents a detailed genetic study of Castanea sativa in El Bierzo, a major nut-production region with interesting features: it is located within a glacial refuge at one extreme of the distribution area (northwest Spain); it has a centuries-old tradition of chestnut management; and, more importantly, it shows an unusual degree of genetic isolation. Seven nuclear microsatellite markers were selected to analyze the genetic variability and structure of 169 local trees grafted for nut production. We analyzed 62 local nuts in the same manner. The selected loci were highly discriminant for the genotypes studied, giving a combined probability of identity of 6.1 × 10⁻⁶. An unprecedented density of trees was sampled for this project over the entire region, and nuts were collected representing 18 cultivars marketed by local producers. Several instances of misclassification by local growers were detected. Fixation index estimates and analysis of molecular variance (AMOVA) data support an unexpectedly high level of genetic differentiation in El Bierzo, larger than that estimated in a previous study with broader geographical scope but limited local sampling (Pereira-Lorenzo et al., Tree Genet Genomes 6: 701-715, 2010a). Likewise, we determined that clonality due to grafting had previously been overestimated. In line with these observations, no significant spatial structure was found using either a model-based Bayesian procedure or Mantel's tests. Taken together, our results demonstrate the need for more fine-scale genetic studies if conservation strategies are to be efficiently improved.
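
For independent loci, a combined probability of identity (PI) is the product of the per-locus PIs. The sketch below uses a made-up, uniform per-locus value chosen so that seven loci land near the 6.1 × 10⁻⁶ figure quoted above; the study's actual per-locus values are not reported here.

```python
# Worked illustration of a combined probability of identity (PI).
# The per-locus PI of 0.18 is a hypothetical round number.

from math import prod

per_locus_pi = [0.18] * 7     # seven microsatellite loci
print(prod(per_locus_pi))     # ~6.1e-06, i.e. 0.18**7
```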

Abstract:

Will renewable energy sources one day supply all of the world's energy needs? Some argue yes, while others say no. However, in some regions of the world, electricity production from renewable energy sources is already at a promising stage of development at which its generation costs compete with those of conventional electricity sources, i.e., grid parity. This achievement has been underpinned by increased technology efficiency, reduced production costs and, above all, years of policy interventions providing financial support. The diffusion of solar photovoltaic (PV) systems in Germany is an important frontrunner case in point. Germany is not only the top country worldwide in terms of installed PV capacity but also one of the pioneer countries where grid parity has recently been achieved. However, there might be a cloud on the horizon. The diffusion rate has started to decline in many regions. In addition, local solar firms - which are known to be important drivers of diffusion - have started to face difficulties running their businesses. These developments raise some important questions: Is this a temporary decline in diffusion? Will adopters continue to install PV systems? What about the business models of the local solar firms? Based on the case of PV systems in Germany, examined through a multi-level analysis and two complementary literature reviews, this PhD dissertation extends the debate by providing a wealth of empirical detail in an area where contextual knowledge is limited. The first analysis takes the adopter perspective, exploring the "micro" level and the social process underlying the adoption of PV systems. The second takes a firm-level perspective, exploring the business models of firms and their driving role in the diffusion of PV systems. The third takes a regional perspective, exploring the "meso" level, i.e., the social process underlying the adoption of PV systems and its modeling techniques. The results carry implications for both scholars and policymakers, not only regarding renewable energy innovations at grid parity but also, in an inductive manner, regarding policy-driven environmental innovations that achieve cost competitiveness.
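
One classic technique for modeling adoption at the regional ("meso") level is the Bass diffusion model, in which adoption is driven by external influence (p) and imitation (q). The sketch below is purely illustrative of that family of techniques; the dissertation may use different models, and the parameters here are arbitrary.

```python
# Illustrative Bass diffusion model of cumulative adoption.
# All parameters are arbitrary placeholders.

def bass_adopters(p=0.01, q=0.4, market=100_000, periods=20):
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / market) * (market - cumulative)
        cumulative += new
        path.append(round(cumulative))
    return path

print(bass_adopters())  # S-shaped cumulative adoption curve
```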

Abstract:

Los sistemas empotrados han sido concebidos tradicionalmente como sistemas de procesamiento específicos que realizan una tarea fija durante toda su vida útil. Para cumplir con requisitos estrictos de coste, tamaño y peso, el equipo de diseño debe optimizar su funcionamiento para condiciones muy específicas. Sin embargo, la demanda de mayor versatilidad, un funcionamiento más inteligente y, en definitiva, una mayor capacidad de procesamiento comenzaron a chocar con estas limitaciones, agravado por la incertidumbre asociada a entornos de operación cada vez más dinámicos donde comenzaban a ser desplegados progresivamente. Esto trajo como resultado una necesidad creciente de que los sistemas pudieran responder por si solos a eventos inesperados en tiempo diseño tales como: cambios en las características de los datos de entrada y el entorno del sistema en general; cambios en la propia plataforma de cómputo, por ejemplo debido a fallos o defectos de fabricación; y cambios en las propias especificaciones funcionales causados por unos objetivos del sistema dinámicos y cambiantes. Como consecuencia, la complejidad del sistema aumenta, pero a cambio se habilita progresivamente una capacidad de adaptación autónoma sin intervención humana a lo largo de la vida útil, permitiendo que tomen sus propias decisiones en tiempo de ejecución. Éstos sistemas se conocen, en general, como sistemas auto-adaptativos y tienen, entre otras características, las de auto-configuración, auto-optimización y auto-reparación. Típicamente, la parte soft de un sistema es mayoritariamente la única utilizada para proporcionar algunas capacidades de adaptación a un sistema. Sin embargo, la proporción rendimiento/potencia en dispositivos software como microprocesadores en muchas ocasiones no es adecuada para sistemas empotrados. En este escenario, el aumento resultante en la complejidad de las aplicaciones está siendo abordado parcialmente mediante un aumento en la complejidad de los dispositivos en forma de multi/many-cores; pero desafortunadamente, esto hace que el consumo de potencia también aumente. Además, la mejora en metodologías de diseño no ha sido acorde como para poder utilizar toda la capacidad de cómputo disponible proporcionada por los núcleos. Por todo ello, no se están satisfaciendo adecuadamente las demandas de cómputo que imponen las nuevas aplicaciones. La solución tradicional para mejorar la proporción rendimiento/potencia ha sido el cambio a unas especificaciones hardware, principalmente usando ASICs. Sin embargo, los costes de un ASIC son altamente prohibitivos excepto en algunos casos de producción en masa y además la naturaleza estática de su estructura complica la solución a las necesidades de adaptación. Los avances en tecnologías de fabricación han hecho que la FPGA, una vez lenta y pequeña, usada como glue logic en sistemas mayores, haya crecido hasta convertirse en un dispositivo de cómputo reconfigurable de gran potencia, con una cantidad enorme de recursos lógicos computacionales y cores hardware empotrados de procesamiento de señal y de propósito general. Sus capacidades de reconfiguración han permitido combinar la flexibilidad propia del software con el rendimiento del procesamiento en hardware, lo que tiene la potencialidad de provocar un cambio de paradigma en arquitectura de computadores, pues el hardware no puede ya ser considerado más como estático. 
El motivo es que como en el caso de las FPGAs basadas en tecnología SRAM, la reconfiguración parcial dinámica (DPR, Dynamic Partial Reconfiguration) es posible. Esto significa que se puede modificar (reconfigurar) un subconjunto de los recursos computacionales en tiempo de ejecución mientras el resto permanecen activos. Además, este proceso de reconfiguración puede ser ejecutado internamente por el propio dispositivo. El avance tecnológico en dispositivos hardware reconfigurables se encuentra recogido bajo el campo conocido como Computación Reconfigurable (RC, Reconfigurable Computing). Uno de los campos de aplicación más exóticos y menos convencionales que ha posibilitado la computación reconfigurable es el conocido como Hardware Evolutivo (EHW, Evolvable Hardware), en el cual se encuentra enmarcada esta tesis. La idea principal del concepto consiste en convertir hardware que es adaptable a través de reconfiguración en una entidad evolutiva sujeta a las fuerzas de un proceso evolutivo inspirado en el de las especies biológicas naturales, que guía la dirección del cambio. Es una aplicación más del campo de la Computación Evolutiva (EC, Evolutionary Computation), que comprende una serie de algoritmos de optimización global conocidos como Algoritmos Evolutivos (EA, Evolutionary Algorithms), y que son considerados como algoritmos universales de resolución de problemas. En analogía al proceso biológico de la evolución, en el hardware evolutivo el sujeto de la evolución es una población de circuitos que intenta adaptarse a su entorno mediante una adecuación progresiva generación tras generación. Los individuos pasan a ser configuraciones de circuitos en forma de bitstreams caracterizados por descripciones de circuitos reconfigurables. Seleccionando aquellos que se comportan mejor, es decir, que tienen una mejor adecuación (o fitness) después de ser evaluados, y usándolos como padres de la siguiente generación, el algoritmo evolutivo crea una nueva población hija usando operadores genéticos como la mutación y la recombinación. Según se van sucediendo generaciones, se espera que la población en conjunto se aproxime a la solución óptima al problema de encontrar una configuración del circuito adecuada que satisfaga las especificaciones. El estado de la tecnología de reconfiguración después de que la familia de FPGAs XC6200 de Xilinx fuera retirada y reemplazada por las familias Virtex a finales de los 90, supuso un gran obstáculo para el avance en hardware evolutivo; formatos de bitstream cerrados (no conocidos públicamente); dependencia de herramientas del fabricante con soporte limitado de DPR; una velocidad de reconfiguración lenta; y el hecho de que modificaciones aleatorias del bitstream pudieran resultar peligrosas para la integridad del dispositivo, son algunas de estas razones. Sin embargo, una propuesta a principios de los años 2000 permitió mantener la investigación en el campo mientras la tecnología de DPR continuaba madurando, el Circuito Virtual Reconfigurable (VRC, Virtual Reconfigurable Circuit). En esencia, un VRC en una FPGA es una capa virtual que actúa como un circuito reconfigurable de aplicación específica sobre la estructura nativa de la FPGA que reduce la complejidad del proceso reconfiguración y aumenta su velocidad (comparada con la reconfiguración nativa). 
Es un array de nodos computacionales especificados usando descripciones HDL estándar que define recursos reconfigurables ad-hoc: multiplexores de rutado y un conjunto de elementos de procesamiento configurables, cada uno de los cuales tiene implementadas todas las funciones requeridas, que pueden seleccionarse a través de multiplexores tal y como ocurre en una ALU de un microprocesador. Un registro grande actúa como memoria de configuración, por lo que la reconfiguración del VRC es muy rápida ya que tan sólo implica la escritura de este registro, el cual controla las señales de selección del conjunto de multiplexores. Sin embargo, esta capa virtual provoca: un incremento de área debido a la implementación simultánea de cada función en cada nodo del array más los multiplexores y un aumento del retardo debido a los multiplexores, reduciendo la frecuencia de funcionamiento máxima. La naturaleza del hardware evolutivo, capaz de optimizar su propio comportamiento computacional, le convierten en un buen candidato para avanzar en la investigación sobre sistemas auto-adaptativos. Combinar un sustrato de cómputo auto-reconfigurable capaz de ser modificado dinámicamente en tiempo de ejecución con un algoritmo empotrado que proporcione una dirección de cambio, puede ayudar a satisfacer los requisitos de adaptación autónoma de sistemas empotrados basados en FPGA. La propuesta principal de esta tesis está por tanto dirigida a contribuir a la auto-adaptación del hardware de procesamiento de sistemas empotrados basados en FPGA mediante hardware evolutivo. Esto se ha abordado considerando que el comportamiento computacional de un sistema puede ser modificado cambiando cualquiera de sus dos partes constitutivas: una estructura hard subyacente y un conjunto de parámetros soft. De esta distinción, se derivan dos lineas de trabajo. Por un lado, auto-adaptación paramétrica, y por otro auto-adaptación estructural. El objetivo perseguido en el caso de la auto-adaptación paramétrica es la implementación de técnicas de optimización evolutiva complejas en sistemas empotrados con recursos limitados para la adaptación paramétrica online de circuitos de procesamiento de señal. La aplicación seleccionada como prueba de concepto es la optimización para tipos muy específicos de imágenes de los coeficientes de los filtros de transformadas wavelet discretas (DWT, DiscreteWavelet Transform), orientada a la compresión de imágenes. Por tanto, el objetivo requerido de la evolución es una compresión adaptativa y más eficiente comparada con los procedimientos estándar. El principal reto radica en reducir la necesidad de recursos de supercomputación para el proceso de optimización propuesto en trabajos previos, de modo que se adecúe para la ejecución en sistemas empotrados. En cuanto a la auto-adaptación estructural, el objetivo de la tesis es la implementación de circuitos auto-adaptativos en sistemas evolutivos basados en FPGA mediante un uso eficiente de sus capacidades de reconfiguración nativas. En este caso, la prueba de concepto es la evolución de tareas de procesamiento de imagen tales como el filtrado de tipos desconocidos y cambiantes de ruido y la detección de bordes en la imagen. En general, el objetivo es la evolución en tiempo de ejecución de tareas de procesamiento de imagen desconocidas en tiempo de diseño (dentro de un cierto grado de complejidad). 
En este caso, el objetivo de la propuesta es la incorporación de DPR en EHW para evolucionar la arquitectura de un array sistólico adaptable mediante reconfiguración cuya capacidad de evolución no había sido estudiada previamente. Para conseguir los dos objetivos mencionados, esta tesis propone originalmente una plataforma evolutiva que integra un motor de adaptación (AE, Adaptation Engine), un motor de reconfiguración (RE, Reconfiguration Engine) y un motor computacional (CE, Computing Engine) adaptable. El el caso de adaptación paramétrica, la plataforma propuesta está caracterizada por: • un CE caracterizado por un núcleo de procesamiento hardware de DWT adaptable mediante registros reconfigurables que contienen los coeficientes de los filtros wavelet • un algoritmo evolutivo como AE que busca filtros wavelet candidatos a través de un proceso de optimización paramétrica desarrollado específicamente para sistemas caracterizados por recursos de procesamiento limitados • un nuevo operador de mutación simplificado para el algoritmo evolutivo utilizado, que junto con un mecanismo de evaluación rápida de filtros wavelet candidatos derivado de la literatura actual, asegura la viabilidad de la búsqueda evolutiva asociada a la adaptación de wavelets. En el caso de adaptación estructural, la plataforma propuesta toma la forma de: • un CE basado en una plantilla de array sistólico reconfigurable de 2 dimensiones compuesto de nodos de procesamiento reconfigurables • un algoritmo evolutivo como AE que busca configuraciones candidatas del array usando un conjunto de funcionalidades de procesamiento para los nodos disponible en una biblioteca accesible en tiempo de ejecución • un RE hardware que explota la capacidad de reconfiguración nativa de las FPGAs haciendo un uso eficiente de los recursos reconfigurables del dispositivo para cambiar el comportamiento del CE en tiempo de ejecución • una biblioteca de elementos de procesamiento reconfigurables caracterizada por bitstreams parciales independientes de la posición, usados como el conjunto de configuraciones disponibles para los nodos de procesamiento del array Las contribuciones principales de esta tesis se pueden resumir en la siguiente lista: • Una plataforma evolutiva basada en FPGA para la auto-adaptación paramétrica y estructural de sistemas empotrados compuesta por un motor computacional (CE), un motor de adaptación (AE) evolutivo y un motor de reconfiguración (RE). Esta plataforma se ha desarrollado y particularizado para los casos de auto-adaptación paramétrica y estructural. • En cuanto a la auto-adaptación paramétrica, las contribuciones principales son: – Un motor computacional adaptable mediante registros que permite la adaptación paramétrica de los coeficientes de una implementación hardware adaptativa de un núcleo de DWT. – Un motor de adaptación basado en un algoritmo evolutivo desarrollado específicamente para optimización numérica, aplicada a los coeficientes de filtros wavelet en sistemas empotrados con recursos limitados. – Un núcleo IP de DWT auto-adaptativo en tiempo de ejecución para sistemas empotrados que permite la optimización online del rendimiento de la transformada para compresión de imágenes en entornos específicos de despliegue, caracterizados por tipos diferentes de señal de entrada. – Un modelo software y una implementación hardware de una herramienta para la construcción evolutiva automática de transformadas wavelet específicas. 
• Lastly, regarding structural self-adaptation, the main contributions are:
– A Computing Engine adaptable through native FPGA reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different computing tasks can be mapped onto the array using a library of simple reconfigurable processing elements.
– The definition of a library of processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of Dynamic Partial Reconfiguration (DPR) into evolvable hardware systems, overcoming the main drawbacks of previous approaches such as Virtual Reconfigurable Circuits (VRCs). This work also provides an original comparison of the implementation details of both approaches.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at the CLB level and at the processing element level are proposed and, using the reconfiguration engine, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults.
– A platform with dynamic filtering quality that enables online adaptation to different types of noise and different computational behaviours, taking the available processing resources into account. On the one hand, filters with non-destructive behaviours are evolved, enabling scalable cascaded filtering schemes; on the other, scalable filters are also evolved considering dynamically changing computational filtering requirements.

This document is organised in four parts and nine chapters. The first part contains chapter 1, an introduction to and the motivation of this thesis work. The reference framework of the thesis is then analysed in the second part: chapter 2 contains an introduction to the concepts of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses reconfigurable computing platforms as the technology to host self-adaptive hardware; and finally, chapter 5 defines, classifies and surveys the field of evolvable hardware. The third part of this work then contains the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.

ABSTRACT

Embedded systems have traditionally been conceived as specific-purpose computers with a single, fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to optimise their operation heavily for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in short, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where such systems were progressively being deployed. This brought with it an increasing need for systems to respond by themselves to events unforeseen at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing but, in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair.

Traditionally, the soft part of a system has been almost the only place where some degree of adaptation capability could be provided. However, the performance-to-power ratios of software-driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices, but this keeps increasing power consumption; besides, design methodologies have not improved accordingly, so the computational power available from all these cores is not fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution for improving performance-to-power ratios has been the switch to hardware-driven implementations, mainly using ASICs. However, their costs are prohibitive except in some mass-production cases and, besides, the static nature of their structure complicates meeting the adaptation needs.

Advances in fabrication technologies have turned the once slow, small FPGA, used as glue logic in bigger systems, into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal and general-purpose processing cores. Its reconfiguration capabilities allow software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible: subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remains active. Moreover, this reconfiguration process can be triggered internally by the device itself.
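To make this DPR flow concrete, the following minimal sketch (in Python, purely illustrative) shows how a device-internal controller might self-trigger partial reconfiguration. The class, the configuration-port handle and the monitor calls are hypothetical names invented for this sketch, not a vendor API.

```python
# Purely hypothetical sketch of internally triggered DPR: a region of the FPGA
# fabric is reprogrammed at run time while the rest of the design keeps
# running. Class, driver and monitor names are invented, not a vendor API.

class ReconfigurablePartition:
    """One fabric region that can be swapped independently of the rest."""

    def __init__(self, name, config_port):
        self.name = name                  # e.g. "filter_region_0"
        self.config_port = config_port    # handle to an internal port (e.g. ICAP)
        self.loaded = None                # partial bitstream currently loaded

    def reconfigure(self, partial_bitstream_path):
        # Only this partition changes; the rest of the device stays active.
        with open(partial_bitstream_path, "rb") as f:
            self.config_port.write(f.read())   # hypothetical driver call
        self.loaded = partial_bitstream_path

def adaptation_loop(partition, monitor, bitstream_library):
    # Event-driven, device-internal trigger: no external host is involved.
    while True:
        event = monitor.detect_change()        # e.g. a new noise type observed
        if event in bitstream_library:
            partition.reconfigure(bitstream_library[event])
```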
This technological boost in reconfigurable hardware devices is covered by the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is to turn hardware that is adaptable through reconfiguration into an evolvable entity, subject to the forces of an evolutionary process, inspired by that of natural biological species, that guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations, represented as bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave better, i.e., those with a higher fitness value after evaluation, and using them as parents of the following generation, the EA creates a new offspring population by means of so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding a circuit configuration that fulfils the system objectives.

The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 1990s was a major obstacle for advances in EHW: closed (not publicly documented) bitstream formats, dependence on manufacturer tools with very limited support for DPR, slow reconfiguration, and the potential hazard that random bitstream modifications pose to device integrity are some of the reasons. However, a proposal from the early 2000s, the Virtual Reconfigurable Circuit (VRC), made it possible to keep investigating this field while DPR technology matured. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric, which reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes, specified using standard HDL descriptions, that defines ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation.
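The VRC mechanism just described can be summarised in a short software model. The following is a minimal sketch whose function set and class names are invented for illustration: every function exists in every node simultaneously (the area overhead), and "reconfiguration" is just one write to the configuration register that drives the selection multiplexers.

```python
# Minimal software model of a Virtual Reconfigurable Circuit (VRC) node array.
# Function set and names are illustrative, not taken from any actual VRC.

FUNCTIONS = [
    lambda a, b: a & b,           # 0: AND
    lambda a, b: a | b,           # 1: OR
    lambda a, b: a ^ b,           # 2: XOR
    lambda a, b: (a + b) & 0xFF,  # 3: 8-bit add
    lambda a, b: max(a, b),       # 4: max
    lambda a, b: min(a, b),       # 5: min
]

class VRCNode:
    def __init__(self):
        self.sel = 0              # functionality-multiplexer select bits

    def configure(self, sel):
        self.sel = sel            # "reconfiguration" = one register write

    def compute(self, a, b):
        # In hardware all functions are present and evaluated in parallel
        # (the area overhead); the multiplexer picks one output (the delay
        # overhead that lowers the maximum operating frequency).
        return FUNCTIONS[self.sel](a, b)

class VRCArray:
    """The whole array's configuration memory is one large register,
    modelled here as a list of select words, one per node."""
    def __init__(self, n_nodes):
        self.nodes = [VRCNode() for _ in range(n_nodes)]

    def load_configuration(self, config_register):
        for node, sel in zip(self.nodes, config_register):
            node.configure(sel)
```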
The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. Two main lines of work derive from this distinction: parametric self-adaptation on one side and structural self-adaptation on the other.

The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems.

Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is the run-time evolution of image processing behaviours that are unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR into EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously checked.

In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:

• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation

In the case of structural adaptation, the platform proposal takes the form of:

• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array, using a set of computational functionalities for the nodes available in a run-time accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements, delivered as position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array

A toy sketch of the kind of parametric evolutionary search performed by the AE is given below.
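In this sketch the population sizes, the mutation step and the fitness stand-in are assumptions made for illustration, not the dissertation's actual operator or evaluator; a real AE would score each candidate by running the DWT core with those coefficients and measuring compression of a reference image.

```python
import random

# Illustrative (1 + lambda)-style loop for parametric adaptation of wavelet
# filter coefficients. Sizes, mutation step and the fitness stand-in are
# assumptions of this sketch, not the dissertation's actual algorithm.

N_COEFFS, LAMBDA, SIGMA = 8, 4, 0.05

def evaluate(coeffs):
    # Toy stand-in fitness: a real system would configure the hardware DWT
    # with these coefficients and score the compressed image. Here we merely
    # reward lowpass filters whose coefficients sum to ~sqrt(2).
    return -abs(sum(coeffs) - 2 ** 0.5)

def mutate(coeffs):
    # Simplified mutation: perturb a single randomly chosen coefficient.
    child = list(coeffs)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, SIGMA)
    return child

def evolve(generations=100):
    parent = [random.uniform(-1.0, 1.0) for _ in range(N_COEFFS)]
    parent_fit = evaluate(parent)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(LAMBDA)]
        for child in offspring:
            fit = evaluate(child)
            if fit >= parent_fit:          # elitist replacement
                parent, parent_fit = child, fit
    return parent

best = evolve()
```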
The main contributions of this thesis can be summarised in the following list:

• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an evolutionary algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signal.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– The definition of a library of such processing elements, suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR into EHW systems, overcoming the main drawbacks of the previous virtual reconfigurable circuit approach. Implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault tolerance perspective: fault models at the FPGA CLB level and at the processing element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.

This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
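As a closing illustration of the structural approach summarised above, the sketch below models a systolic array template whose nodes are selected from a run-time library. The library contents, array size and dataflow are assumptions made for the sketch; in the actual platform each gene would select a position-independent partial bitstream written into a node's region by the RE.

```python
import random

# Illustrative model of the structural CE: a 2D systolic array template whose
# nodes each take one processing element (PE) from a run-time library. Here a
# PE is just a Python function over a 3x3 pixel window; library contents,
# array size and dataflow are assumptions of this sketch.

PE_LIBRARY = {
    "identity": lambda w: w[4],                # pass the window centre through
    "maximum":  lambda w: max(w),
    "minimum":  lambda w: min(w),
    "mean":     lambda w: sum(w) // len(w),
    "median":   lambda w: sorted(w)[len(w) // 2],
}

class SystolicArrayCE:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.grid = [["identity"] * cols for _ in range(rows)]

    def apply_chromosome(self, chromosome):
        # One gene per node; in hardware each gene would select which partial
        # bitstream the Reconfiguration Engine loads into that node's region.
        for i, gene in enumerate(chromosome):
            self.grid[i // self.cols][i % self.cols] = gene

    def process(self, window):
        # Toy dataflow: every node in turn rewrites the window's centre value.
        w = list(window)
        for row in self.grid:
            for pe_name in row:
                w[4] = PE_LIBRARY[pe_name](w)
        return w[4]

def random_chromosome(n_nodes):
    return [random.choice(list(PE_LIBRARY)) for _ in range(n_nodes)]

# Usage: the AE would score candidate configurations on reference images and
# keep the best, exactly as in the generational loop sketched earlier.
ce = SystolicArrayCE(2, 2)
ce.apply_chromosome(random_chromosome(4))
print(ce.process([5, 3, 8, 1, 9, 2, 7, 4, 6]))
```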

Relevância:

100.00% 100.00%

Publicador:

Resumo:

PALI (release 1.2) contains three-dimensional (3-D) structure-dependent sequence alignments as well as structure-based phylogenetic trees of homologous protein domains in various families. The data set of homologous protein structures has been derived by consulting the SCOP database (release 1.50) and comprises 604 families of homologous proteins involving 2739 protein domain structures, with each family made up of at least two members. Each member in a family has been structurally aligned with every other member in the same family (pairwise alignment) and all the members in the family are also aligned using simultaneous superposition (multiple alignment). The structural alignments are performed largely automatically, with manual interventions especially in the cases of distantly related proteins, using the program STAMP (version 4.2). Every family is also associated with two dendrograms, calculated using PHYLIP (version 3.5), one based on a structural dissimilarity metric defined for every pairwise alignment and the other based on the similarity of topologically equivalent residues. These dendrograms enable easy comparison of sequence- and structure-based relationships among the members of a family. Structure-based alignments with the details of structural and sequence similarities, superposed coordinate sets and dendrograms can be accessed conveniently using a web interface. The database can be queried for protein pairs with sequence or structural similarities falling within a specified range. Thus PALI forms a useful resource to help in analysing the relationship between sequence and structure variation at a given level of sequence similarity. PALI also contains over 653 ‘orphans’ (single-member families). Using the web interface involving PSI_BLAST and PHYLIP, it is possible to associate the sequence of a new protein with one of the families in PALI and generate a phylogenetic tree combining the query sequence and proteins of known 3-D structure. The database, with the web-interfaced search and dendrogram generation tools, can be accessed at http://pauling.mbu.iisc.ernet.in/~pali.
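By way of illustration only, a dendrogram like those PALI associates with each family can be derived from a matrix of pairwise structural dissimilarities. The sketch below uses SciPy's hierarchical clustering with an invented matrix; PALI itself computes its trees with PHYLIP.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Invented pairwise structural dissimilarity matrix for a 4-member family.
# PALI derives such values from STAMP pairwise alignments and computes its
# dendrograms with PHYLIP; SciPy is used here only to illustrate the idea.
members = ["domA", "domB", "domC", "domD"]
D = np.array([
    [0.0, 1.2, 2.5, 2.6],
    [1.2, 0.0, 2.4, 2.7],
    [2.5, 2.4, 0.0, 0.9],
    [2.6, 2.7, 0.9, 0.0],
])

# Condense the symmetric matrix and build an average-linkage (UPGMA-like) tree.
tree = linkage(squareform(D), method="average")
info = dendrogram(tree, labels=members, no_plot=True)
print(info["ivl"])   # leaf order of the resulting dendrogram
```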

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The neuronal nitric oxide synthase (nNOS) has been successfully overexpressed in Escherichia coli, with average yields of 125-150 nmol (20-24 mg) of enzyme per liter of cells. The cDNA for nNOS was subcloned into the pCW vector under the control of the tac promoter and was coexpressed with the chaperonins GroEL and GroES in the protease-deficient BL21 strain of E. coli. The enzyme produced is replete with heme and flavins and, after overnight incubation with tetrahydrobiopterin, contains 0.7 pmol of tetrahydrobiopterin per pmol of nNOS. nNOS is isolated as a predominantly high-spin heme protein and demonstrates spectral properties identical to those of nNOS isolated from stably transfected human kidney 293 cells. It binds Nω-nitroarginine in a manner dependent on the presence of bound tetrahydrobiopterin, with a Kd of 45 nM. The enzyme is completely functional, with a specific activity of 450 nmol/min per mg. This overexpression system will be extremely useful for the rapid, inexpensive preparation of large amounts of active nNOS for use in mechanistic and structure/function studies, as well as for drug design and development.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The purpose of this study was to analyze the internal consistency and the external and structural validity of the 12-Item General Health Questionnaire (GHQ-12) in the Spanish general population. A stratified sample of 1001 subjects aged between 25 and 65 years, drawn from the general Spanish population, was employed. The GHQ-12 and the Inventory of Situations and Responses of Anxiety (ISRA) were administered. A Cronbach’s alpha of .76 (standardized alpha: .78) and a 3-factor structure (with oblique rotation and a maximum-likelihood procedure) were obtained. The external validity of Factor I (Successful Coping) against the ISRA is very robust (.82; Factor II, .70; Factor III, .75). The GHQ-12 thus shows adequate reliability and validity in the Spanish population and can be used with efficacy to assess people’s overall psychological well-being and to detect non-psychotic psychiatric problems. Additionally, our results confirm that the GHQ-12 is best thought of as a multidimensional scale that assesses several distinct aspects of distress, rather than just a unitary screening measure.
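For reference, Cronbach’s alpha as reported here is a direct function of the item variances and the total-score variance: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), for k items. The short sketch below shows the computation with made-up responses, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Made-up 0-3 Likert responses of 5 people to 4 items (illustration only;
# the study itself used the 12 GHQ items on N = 1001).
demo = [[0, 1, 1, 0],
        [2, 2, 3, 2],
        [1, 1, 2, 1],
        [3, 2, 3, 3],
        [1, 0, 1, 1]]
print(round(cronbach_alpha(demo), 2))
```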