947 results for Which-way experiments


Relevance: 30.00%

Abstract:

The microarray technique is rather powerful, as it allows testing up to thousands of genes at a time, but this produces an overwhelming set of data files containing huge amounts of data, which are quite difficult to pre-process, separate, classify and correlate so that interesting conclusions can be extracted. Modern machine learning, data mining and clustering techniques based on information theory are needed to read and interpret the information content buried in those large data sets. Independent Component Analysis (ICA) can be used to correct data affected by corruption processes, or to filter out the uncorrectable data, and clustering methods can then group similar genes or classify samples. In this paper a hybrid approach is used to obtain a two-way unsupervised clustering of corrected microarray data.
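As a rough illustration of this kind of hybrid pipeline (not the authors' actual implementation; the preprocessing, component count and cluster counts below are arbitrary assumptions), one could denoise an expression matrix with ICA and then cluster genes and samples independently:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Toy expression matrix: rows = genes, columns = samples (synthetic values).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))

# 1) ICA-based correction: keep a few independent components and
#    reconstruct the matrix from them, discarding the rest as noise.
ica = FastICA(n_components=8, random_state=0)
S = ica.fit_transform(X)               # estimated sources (500 x 8)
X_corrected = ica.inverse_transform(S)

# 2) Two-way clustering: cluster genes (rows) and samples (columns)
#    separately on the corrected data.
gene_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_corrected)
sample_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_corrected.T)

print(np.bincount(gene_labels), np.bincount(sample_labels))
```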

Relevance: 30.00%

Abstract:

Some floating-liquid-zone experiments performed under reduced-gravity conditions are reviewed. Several types of instabilities are discussed, together with the relevant parameters controlling them. It is shown that the bounding values of these parameters could be increased, by orders of magnitude in several instances, by selecting appropriate liquids. Two of the many problems that a Fluid-Physics Module, devised to perform experiments on floating zones in a space laboratory, would involve are discussed: namely, (i) procedures for disturbing the zone under controlled conditions, and (ii) visualisation of the inner flow pattern. Several topics connected with the non-isothermal nature and the phase changes of floating zones are presented. In particular, a mode of propagation through the liquid zone for disturbances which could appear at the melting solid/liquid interface is suggested. Although most research on floating liquid zones is aimed at improving the crystal-growth process, some additional applications are suggested.
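For context (these standard definitions are not stated in the abstract itself), two classical parameters bounding floating-zone experiments are the maximum slenderness of a cylindrical zone and the Bond number comparing gravity with surface tension; reduced gravity, or a liquid with suitable density and surface tension, relaxes both limits:

```latex
% Plateau-Rayleigh stability limit for a cylindrical liquid zone of
% length L and radius R (gravity-free case):
L_{\max} = 2\pi R
% Bond number: ratio of hydrostatic to capillary pressure for a liquid
% of density \rho and surface tension \sigma under gravity g:
\mathrm{Bo} = \frac{\rho g R^{2}}{\sigma}
```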

Relevance: 30.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.

Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and 'intelligently' will include at least a module for POS tagging. The more an application needs to 'understand' the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:

1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.

A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.

Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e., it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web.

Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag).

Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work.

All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools that annotate at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
o OntoTag's annotation scheme,
o OntoTag's annotation architecture,
o OntoTagger's (XML, RDF, OWL) annotation schemas,
o OntoTagger's configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger's ontology-based configuration:
o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger);
o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
o Integration of morphosyntactic, syntactic and semantic annotations.

H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, provided that these other tools are built following a different technological approach (stochastic vs. rule-based, for example) or a different theoretical approach (dependency-based vs. HPSG-based, for instance).
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently.
• REJECTED on the basis of OntoTagger's experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 had already been rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multilevel annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.

Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies).
• On the one hand, OntoTag's network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag's Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e., POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that a) this type of combination architecture configuration can be applied in order to improve significantly the accuracy of linguistic annotations; and b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate, the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as precision (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9 and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. How our approach behaves in a different domain or a different language, such as French, English or German, should be experimented with further.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together, and (ii) how they could be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multilevel) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.

And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
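The error-reduction claim behind H.5 (combining taggers that err differently) can be illustrated with a minimal sketch, not taken from the thesis itself: given per-token POS tags from several hypothetical taggers, a majority vote over already-standardised tags implements the simplest possible combination phase. The tagger names and tagset below are invented for illustration.

```python
from collections import Counter

def combine_pos(annotations):
    """Majority-vote combination of per-token POS tags.

    annotations: dict mapping tagger name -> list of (token, tag) pairs,
    all assumed to be standardised to a common tagset and aligned on the
    same tokenisation (a strong simplification of the standardisation
    phase described above).
    """
    runs = list(annotations.values())
    combined = []
    for position, (token, _) in enumerate(runs[0]):
        votes = Counter(run[position][1] for run in runs)
        tag, _count = votes.most_common(1)[0]
        combined.append((token, tag))
    return combined

# Hypothetical, standardised outputs of three taggers on the same sentence.
out = combine_pos({
    "tagger_a": [("the", "DET"), ("plan", "NOUN"), ("works", "VERB")],
    "tagger_b": [("the", "DET"), ("plan", "VERB"), ("works", "VERB")],
    "tagger_c": [("the", "DET"), ("plan", "NOUN"), ("works", "NOUN")],
})
print(out)  # [('the', 'DET'), ('plan', 'NOUN'), ('works', 'VERB')]
```

Each tagger errs on a different token, so the vote recovers the correct tag everywhere; this only works when the combined tools fail in different places, which is exactly the diversity condition stated in H.5.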

Relevance: 30.00%

Abstract:

Several models for context-sensitive analysis of modular programs have been proposed, each with different characteristics and representing different trade-offs. The advantage of these context-sensitive analyses is that they provide information which is potentially more accurate than that provided by context-free analyses. Such information can then be applied to validating/debugging the program and/or to specializing the program in order to obtain important performance improvements. Some very preliminary experimental results have also been reported for some of these models, providing initial evidence of their potential. However, further experimentation, which is needed in order to understand the many issues left open and to show that the proposed models scale and are usable in the context of large, real-life modular programs, was left as future work. The aim of this paper is two-fold. On the one hand, we provide an empirical comparison of the different models proposed in previous work, as well as experimental data on the different choices left open in those designs. On the other hand, we explore the scalability of these models by using larger modular programs as benchmarks. The results have been obtained from a realistic implementation of the models, integrated in a production-quality compiler (CiaoPP/Ciao). Our experimental results shed light on the practical implications of the different design choices and of the models themselves. We also show that context-sensitive analysis of modular programs is indeed feasible in practice, and that in certain critical cases it provides better performance results than those achievable by analyzing the whole program at once, especially in terms of memory consumption and when reanalyzing after making changes to a program, as is often the case during program development.
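As a schematic illustration of the module-at-a-time scheme these models share (a simplification under assumed data structures, not the CiaoPP implementation), an intermodular analysis can cache per-module answers and re-analyze only the modules whose imported answers changed, which is why reanalysis after an edit can beat whole-program analysis:

```python
def analyze_modules(modules, analyze_one):
    """Intermodular fixpoint driver (schematic).

    modules: dict mapping module name -> list of imported module names.
    analyze_one(name, imported_answers) -> abstract answer for `name`;
    assumed monotone over a finite domain, so the loop terminates.
    """
    answers = {m: None for m in modules}   # per-module answer table
    pending = set(modules)                 # modules needing (re)analysis
    while pending:
        m = pending.pop()
        new = analyze_one(m, {d: answers[d] for d in modules[m]})
        if new != answers[m]:
            answers[m] = new
            # Re-analyze only the modules that import m.
            pending |= {n for n, deps in modules.items() if m in deps}
    return answers

def toy_analysis(name, imports):
    # Join imported answers (sets) with this module's own contribution.
    result = {name}
    for ans in imports.values():
        result |= ans or set()
    return frozenset(result)

mods = {"main": ["lib"], "lib": []}
print(analyze_modules(mods, toy_analysis))
```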

Relevance: 30.00%

Abstract:

Proof carrying code (PCC) is a general methodology for certifying that the execution of an untrusted mobile code is safe. The basic idea is that the code supplier attaches a certificate to the mobile code which the consumer checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking. Traditionally, the certificate is a proof in first-order logic of certain verification conditions, and the checking process involves ensuring that the certificate is indeed a valid first-order proof. The main practical difficulty of PCC techniques is in generating safety certificates which at the same time: i) allow expressing interesting safety properties, ii) can be generated automatically, and iii) are easy and efficient to check. In [1], the abstract interpretation techniques [5] developed in logic programming are proposed as a basis for PCC. They offer a number of advantages for dealing with the aforementioned issues. In particular, the expressiveness of existing abstract domains is implicitly available in abstract interpretation-based code certification to define a wide range of safety properties. Furthermore, the approach inherits the automation and inference power of the abstract interpretation engines used in (Constraint) Logic Programming, (C)LP. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based certification. First, we describe the implementation of our system in the context of CiaoPP, the preprocessor of the Ciao multi-paradigm programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems.

Relevance: 30.00%

Abstract:

Proof carrying code is a general methodology for certifying that the execution of an untrusted mobile code is safe, according to a predefined safety policy. The basic idea is that the code supplier attaches a certificate (or proof) to the mobile code which, then, the consumer checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking, a much simpler task. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for proof carrying code [1]. To this end, the certificate is generated from an abstract interpretation-based proof of safety. Intuitively, the verification condition is extracted from a set of assertions guaranteeing safety and the answer table generated during the analysis. Given this information, it is relatively simple and fast to verify that the code does meet this proof and so its execution is safe. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based code certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm (constraint) logic programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems, which may lack the necessary computing resources to verify safety on their own. We herein illustrate the relevance of the information inferred by existing cost analysis to control resource usage in this context. Moreover, since the (rather complex) analysis phase is replaced by a simpler, efficient checking process on the code consumer side, we believe that our abstract interpretation-based approach to proof-carrying code becomes practically applicable to these kinds of systems.
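The producer/consumer asymmetry described here can be sketched as follows; this is a toy rendering of the abstraction-carrying-code idea under invented data structures (the `policy` object and its operations are assumptions), not the CiaoPP implementation. The certificate is the analysis fixpoint itself, and the consumer only verifies, in one pass, that the table really is a fixpoint that entails the safety policy:

```python
def produce_certificate(program, transfer, policy):
    """Supplier side: run abstract interpretation to a fixpoint.

    program: dict mapping program point -> list of successor points.
    transfer(point, value) -> abstract value after executing `point`.
    The resulting answer table (point -> abstract value) is the certificate.
    """
    table = {p: policy.bottom for p in program}
    changed = True
    while changed:                       # iterate to the least fixpoint
        changed = False
        for point, succs in program.items():
            out = transfer(point, table[point])
            for s in succs:
                joined = policy.join(table[s], out)
                if joined != table[s]:
                    table[s] = joined
                    changed = True
    return table

def check_certificate(program, transfer, policy, table):
    """Consumer side: a single pass verifying the table is a post-fixpoint
    and that every abstract value satisfies the safety policy."""
    for point, succs in program.items():
        out = transfer(point, table[point])
        if not all(policy.leq(out, table[s]) for s in succs):
            return False                 # not a fixpoint: reject the code
        if not policy.safe(table[point]):
            return False                 # policy violated: reject the code
    return True
```

The point of the scheme is that `check_certificate` does no iteration: checking is a single linear pass over the table, which is what makes the approach attractive for resource-constrained consumers such as pervasive devices.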

Relevance: 30.00%

Abstract:

Bats are animals that possess high maneuvering capabilities. Their wings contain dozens of articulations that allow the animal to perform aggressive maneuvers by controlling the wing shape during flight (morphing wings). There is no other flying creature in nature with this level of wing dexterity, and there is biological evidence that the inertial forces produced by the wings play a key role in the attitude movements of the animal. This can inspire the design of highly articulated morphing-wing micro air vehicles (not necessarily bat-like) with a significant wing-to-body mass ratio. This thesis presents the development of a novel bat-like micro air vehicle (BaTboT) inspired by the morphing-wing mechanism of bats. BaTboT's morphology is proportionally similar to that of its biological counterpart, Cynopterus brachyotis, which provides the biological foundations for developing accurate mathematical models and methods that allow for mimicking bat flight. In nature bats achieve an amazing level of maneuverability by combining flapping and morphing wingstrokes. Attempting to reproduce the biological wing actuation system that provides that kind of motion with an artificial counterpart requires the analysis of alternative actuation technologies, such as muscle-like fiber arrays instead of standard servomotor actuators. Thus, NiTinol Shape Memory Alloys (SMAs) acting as artificial biceps and triceps muscles are used for mimicking the morphing-wing mechanism of the bat flight apparatus. This antagonistic configuration of SMA muscles operates in response to an electrical heating power signal, which is regulated by a proper controller that allows for accurate and fast SMA actuation. Morphing wings make it possible to change wing geometry with the sole purpose of enhancing aerodynamic performance. During the downstroke phase of the wingbeat motion both wings are fully extended to increase the surface area and properly generate lift forces. Conversely, during the upstroke phase of the wingbeat motion both wings are retracted to minimize the area and thus reduce drag forces. Morphing wings improve not only aerodynamics but also the inertial forces that are key to maneuvering. Thus, a modeling framework is introduced for analyzing how BaTboT should maneuver by means of changing wing morphology. This allows the definition of requirements for achieving forward and turning flight according to the kinematics of the wing modulation. Motivated by the biological evidence of the influence of wing inertia on the production of body accelerations, an attitude controller is proposed. The attitude control law incorporates wing inertia information to produce desired roll (φ) and pitch (θ) acceleration commands. This novel flight control approach is aimed at increasing the net body forces (Fnet) that generate propulsion. Mimicking the way bats take advantage of the inertial and aerodynamic forces produced by the wings, in order to both increase lift and maneuver, is a promising way to design more efficient flapping/morphing-wing MAVs. The novel wing modulation strategy and attitude control methodology proposed in this thesis provide a totally new way of controlling flying robots, one that eliminates the need for appendages such as flaps and rudders and would allow performing more efficient maneuvers, especially useful in confined spaces.
As a whole, the BaTboT project consists of five major stages of development:
- Study and analysis of biological bat flight data reported in the specialized literature, aimed at defining design and control criteria.
- Formulation of mathematical models for: i) wing kinematics, ii) dynamics, iii) aerodynamics, and iv) SMA muscle-like actuation. This stage is aimed at modeling the effects of modulating wing inertia on the production of net body forces for maneuvering.
- Bio-inspired design and fabrication of: i) the skeletal structure of wings and body, ii) SMA muscle-like mechanisms, iii) the wing membrane, and iv) the onboard electronics. This stage is aimed at developing the bat-like platform (BaTboT) that allows for testing the proposed methods.
- The flight controller: i) control of the SMA muscles (morphing-wing modulation) and ii) flight control (attitude regulation). This stage is aimed at formulating the control methods that allow for the proper modulation of BaTboT's wings.
- Experiments: aimed at quantifying the effects of proper wing modulation on aerodynamic and inertial force production for maneuvering, and at demonstrating and validating the hypothesis that flight efficiency improves thanks to the novel control methods presented in this thesis.
This thesis introduces the challenges and methods involved in addressing these stages. Wind-tunnel experiments are oriented to discuss and demonstrate how the wings can considerably affect the dynamics/aerodynamics of flight, and how to take advantage of the wing inertia modulation that the morphing wings enable by properly changing wing geometry during flapping.
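To make the control idea concrete, here is a minimal sketch (all symbols, gains and the inertia model are invented placeholders; the thesis' actual control law is not reproduced here) of a PD attitude law whose feedforward term uses the configuration-dependent wing inertia:

```python
import numpy as np

def wing_inertia(extension):
    """Toy roll/pitch inertia model: folded wings (extension=0) concentrate
    mass near the body; extended wings (extension=1) increase inertia."""
    I_body = np.array([2e-4, 3e-4])         # kg*m^2, assumed constants
    I_wings_max = np.array([5e-4, 2e-4])
    return I_body + extension * I_wings_max

def attitude_torque(q, q_dot, q_des, extension, kp=8.0, kd=1.2):
    """PD law on roll/pitch angles q = [phi, theta], scaled by the
    instantaneous inertia so the commanded angular acceleration is met."""
    acc_cmd = kp * (q_des - q) - kd * q_dot  # desired [phi'', theta'']
    return wing_inertia(extension) * acc_cmd # torque = I(extension) * acc

# One control step: wings half extended, small attitude error.
tau = attitude_torque(
    q=np.array([0.05, -0.02]), q_dot=np.zeros(2),
    q_des=np.zeros(2), extension=0.5)
print(tau)
```

The design point this illustrates is the one stated in the abstract: because the wings carry a significant share of the total mass, the inertia seen by the attitude loop changes with wing extension, so the controller must track the wing configuration rather than assume a rigid body.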

Relevance: 30.00%

Abstract:

This thesis studies and analyses techniques and models for obtaining biophysical parameters and environmental indicators in an automated way from high-temporal-resolution satellite imagery. First, the main Earth observation programmes are reviewed, paying particular attention to those that provide such resolution. The methodologies and process flows that allow obtaining quantitative parameters and qualitative documents relating to various aspects of land cover are also reviewed, according to their adaptability to the peculiarities of the data.
In the next stage, a model for obtaining environmental parameters is proposed. This structure integrates information from space sensors and ancillary data sources, using the methodologies presented in the previous sections, so that the parameters can be calculated in an efficient and systematic way. After this review of methodologies and the proposal of the model, experiments were carried out in order to check the behavior of the structure in real situations, to debug the data and process flows, and to establish the situations that can affect the results; from this, the evaluation of the model is derived. The sensors considered in this work are MODIS, a high-temporal-resolution sensor, and Thematic Mapper (TM), a medium-spatial-resolution instrument. This choice was motivated by the fact that they are reference instruments in environmental studies, by the duration of their corresponding data-logging missions, which allows studying the temporal evolution of certain biophysical parameters over long periods, and because the continuity of the corresponding programmes seems assured. Among the experiments, a methodology for integrating data from both sensors was tested, and a temporal interpolation method was analysed that yields synthetic images with the spatial resolution of TM (30 m) and the temporal resolution of MODIS (1 day), extending the application range of the latter sensor. Furthermore, some of the factors that affect the recorded data were analysed, such as the acquisition geometry (the relative position of the satellite and the ground point) and rainfall events, which alter the obtained results. The validity of the proposed model was also proven in the study of dynamic environmental phenomena, specifically the organic contamination of impounded waters. Finally, the model showed good performance in all the cases tested, as well as flexibility, which allows it to adapt to new data sources or new calculation methodologies.
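A rough sketch of the kind of spatio-temporal fusion such an interpolation performs (a deliberately naive scheme under invented array shapes, not the thesis' algorithm): predict a fine-resolution image at date t1 by adding, to a fine-resolution baseline at t0, the temporal change observed by the coarse sensor, resampled to the fine grid:

```python
import numpy as np

def fuse(fine_t0, coarse_t0, coarse_t1, scale):
    """Predict a fine-resolution image at date t1.

    fine_t0:   fine-resolution image (e.g., TM-like, 30 m) at date t0.
    coarse_t0, coarse_t1: coarse images (e.g., MODIS-like) at t0 and t1,
                          already co-registered with the fine grid.
    scale:     size ratio between fine and coarse pixels.
    """
    delta = coarse_t1 - coarse_t0                # temporal change, coarse grid
    # Nearest-neighbour upsampling of the change to the fine grid.
    delta_fine = np.kron(delta, np.ones((scale, scale)))
    return fine_t0 + delta_fine                  # synthetic fine image at t1

# Toy 2x2 coarse scene, 8x8 fine scene (scale factor 4).
rng = np.random.default_rng(1)
fine_t0 = rng.random((8, 8))
coarse_t0 = fine_t0.reshape(2, 4, 2, 4).mean(axis=(1, 3))
coarse_t1 = coarse_t0 + 0.1                      # uniform change between dates
synthetic = fuse(fine_t0, coarse_t0, coarse_t1, scale=4)
print(synthetic.shape)                           # (8, 8)
```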

Relevance: 30.00%

Abstract:

Most human-designed environments present specific geometrical characteristics: in them it is easy to find polygonal, rectangular and circular shapes, with a series of typical relations between different elements of the environment. Introducing this kind of knowledge into the mapping process of mobile robots can notably improve the quality and accuracy of the resulting maps, and can also make them more suitable for higher-level reasoning applications. When mapping is formulated in a Bayesian probabilistic framework, a complete specification of the problem requires considering a prior for the environment. The prior over the structure of the environment can be applied in several ways; this dissertation presents two different frameworks, one using a feature-based approach and another employing a dense representation close to the measurement space. A feature-based approach implicitly imposes a prior for the environment. In this sense, feature-based graph SLAM was a first step towards a new mapping solution for structured scenarios.
In the first framework, the prior is inferred by the system from a wide collection of generic geometric models, following an Expectation-Maximization approach to obtain the most probable structure and the most probable map. The representation of the structure of the environment is based on a hierarchical model with different levels of abstraction for the geometrical elements describing it. Various experiments were conducted to show the versatility and the good performance of the proposed method. In the second framework, different priors can be defined by the user as sets of local constraints and energies for consecutive points in a range scan from a given environment. The set of constraints applied to each group of points depends on the topology, which is inferred by the system itself. In this way, flexible and generic structure models can be incorporated very easily. Several tests were carried out to demonstrate the flexibility and the good results of the proposed approach.
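As a toy illustration of the Expectation-Maximization idea used in the first framework (fitting generic geometric primitives to points; everything here, from the primitive choice to the noise model, is an assumption made for the example), one can alternate soft assignment of 2-D points to candidate lines with refitting of those lines:

```python
import numpy as np

def em_lines(points, n_lines=2, iters=20, sigma=0.05):
    """Soft-assign 2-D points to `n_lines` lines and refit them (toy EM)."""
    rng = np.random.default_rng(0)
    # A line is (theta, d): points satisfy x*cos(theta) + y*sin(theta) = d.
    theta = rng.uniform(0, np.pi, n_lines)
    d = rng.uniform(-1, 1, n_lines)
    for _ in range(iters):
        # E-step: responsibilities from point-to-line distances.
        n = np.stack([np.cos(theta), np.sin(theta)])     # normals, 2 x K
        dist = np.abs(points @ n - d)                    # N x K
        resp = np.exp(-0.5 * (dist / sigma) ** 2) + 1e-12
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted total-least-squares refit of each line.
        for k in range(n_lines):
            w = resp[:, k]
            mu = (w[:, None] * points).sum(0) / w.sum()
            cov = (w[:, None] * (points - mu)).T @ (points - mu) / w.sum()
            # Line normal = eigenvector of the smallest eigenvalue.
            normal = np.linalg.eigh(cov)[1][:, 0]
            theta[k] = np.arctan2(normal[1], normal[0])
            d[k] = normal @ mu
    return theta, d

# Two noisy walls meeting at a corner.
t = np.linspace(0, 1, 100)[:, None]
pts = np.vstack([np.hstack([t, 0 * t]), np.hstack([0 * t, t])])
pts += np.random.default_rng(1).normal(0, 0.01, pts.shape)
print(em_lines(pts))
```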

Relevance: 30.00%

Abstract:

En los últimos años la tecnología láser se ha convertido en una herramienta imprescindible en la fabricación de dispositivos fotovoltaicos, ayudando a la consecución de dos objetivos claves para que esta opción energética se convierta en una alternativa viable: reducción de costes de fabricación y aumento de eficiencia de dispositivo. Dentro de las tecnologías fotovoltaicas, las basadas en silicio cristalino (c-Si) siguen siendo las dominantes en el mercado, y en la actualidad los esfuerzos científicos en este campo se encaminan fundamentalmente a conseguir células de mayor eficiencia a un menor coste encontrándose, como se comentaba anteriormente, que gran parte de las soluciones pueden venir de la mano de una mayor utilización de tecnología láser en la fabricación de los mismos. En este contexto, esta Tesis hace un estudio completo y desarrolla, hasta su aplicación en dispositivo final, tres procesos láser específicos para la optimización de dispositivos fotovoltaicos de alta eficiencia basados en silicio. Dichos procesos tienen como finalidad la mejora de los contactos frontal y posterior de células fotovoltaicas basadas en c-Si con vistas a mejorar su eficiencia eléctrica y reducir el coste de producción de las mismas. En concreto, para el contacto frontal se han desarrollado soluciones innovadoras basadas en el empleo de tecnología láser en la metalización y en la fabricación de emisores selectivos puntuales basados en técnicas de dopado con láser, mientras que para el contacto posterior se ha trabajado en el desarrollo de procesos de contacto puntual con láser para la mejora de la pasivación del dispositivo. La consecución de dichos objetivos ha llevado aparejado el alcanzar una serie de hitos que se resumen continuación: - Entender el impacto de la interacción del láser con los distintos materiales empleados en el dispositivo y su influencia sobre las prestaciones del mismo, identificando los efectos dañinos e intentar mitigarlos en lo posible. - Desarrollar procesos láser que sean compatibles con los dispositivos que admiten poca afectación térmica en el proceso de fabricación (procesos a baja temperatura), como los dispositivos de heterounión. - Desarrollar de forma concreta procesos, completamente parametrizados, de definición de dopado selectivo con láser, contactos puntuales con láser y metalización mediante técnicas de transferencia de material inducida por láser. - Definir tales procesos de forma que reduzcan la complejidad de la fabricación del dispositivo y que sean de fácil integración en una línea de producción. - Mejorar las técnicas de caracterización empleadas para verificar la calidad de los procesos, para lo que ha sido necesario adaptar específicamente técnicas de caracterización de considerable complejidad. - Demostrar su viabilidad en dispositivo final. Como se detalla en el trabajo, la consecución de estos hitos en el marco de desarrollo de esta Tesis ha permitido contribuir a la fabricación de los primeros dispositivos fotovoltaicos en España que incorporan estos conceptos avanzados y, en el caso de la tecnología de dopado con láser, ha permitido hacer avances completamente novedosos a nivel mundial. Asimismo los conceptos propuestos de metalización con láser abren vías, completamente originales, para la mejora de los dispositivos considerados. 
Finally, this work was made possible by a very close collaboration between the Centro Láser of the UPM, where the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, in charge of preparing and fine-tuning the samples and of developing some laser processes for comparison. The contribution of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in the preparation of specific experiments of great importance for the development of this work also deserves mention. These collaborations were carried out within the framework of several projects, such as the strategic singular project PSE-MICROSIL08 (PSE-120000-2006-6) and the project INNDISOL (IPT-420000-2010-6), both funded by the Fondo Europeo de Desarrollo Regional FEDER (UE) “Una manera de hacer Europa” and the MICINN, and the Plan Nacional project AMIC (ENE2010-21384-C04-02), whose funding largely made it possible to complete this work. ABSTRACT. In recent years, lasers have become a fundamental tool in the photovoltaic (PV) industry, helping this technology to achieve two major goals: cost reduction and efficiency improvement. Among present PV technologies, crystalline silicon (c-Si) maintains a clear market supremacy and, in this particular field, technological efforts are focused on improving device efficiency through different approaches (for instance, reducing the electrical or optical losses in the device) and on reducing device fabrication costs (using less silicon in the final device or implementing more cost-effective production steps). In both approaches, lasers are ideally suited tools to achieve the desired success. In this context, this work presents a comprehensive study and develops, up to their implementation in a final device, three specific laser processes designed for the optimization of high-efficiency PV devices based on c-Si. Those processes are intended to improve the front and back contacts of the considered solar cells in order to reduce production costs and to improve device efficiency. In particular, to improve the front contact, this work has developed innovative solutions using lasers as fundamental processing tools to metallize, using laser-induced forward transfer techniques, and to create local selective emitters by means of laser doping techniques. On the other side, for the back contact, an approach based on the optimization of standard laser-fired contact formation has been pursued. To achieve these fundamental goals, a number of milestones have been reached in the development of this work, namely: - To understand the basics of the laser-matter interaction physics in the considered processes, in order to preserve the functionality of the irradiated materials. - To develop laser processes fully compatible with low-temperature device concepts (as is the case of heterojunction solar cells). - In particular, to fully parameterize processes of laser doping, laser-fired contacts and metallization via laser transfer of material. - To define these processes in such a way that their final industrial implementation is a real option. - To improve widely used characterization techniques so that they can be applied to the study of these particular processes. - To demonstrate their viability in a final PV device.
Finally, the achievement of these milestones has led to the fabrication of the first devices in Spain incorporating these concepts. In particular, the developments achieved in laser doping are relevant not only for Spanish science but also in a general international context, with the introduction of truly innovative concepts such as local selective emitters. Likewise, the advances reached in the laser metallization approach presented in this work open the door to future, fully innovative developments in the field of PV industrial metallization techniques. This work was made possible by a very close collaboration between the Laser Center of the UPM, in which the author carries out her work, and the Micro and Nanotechnology Research Group of the Universidad Politécnica de Cataluña, in charge of the preparation and development of samples and of the assessment of some laser processes for comparison. It is also important to highlight the collaboration of the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, in the preparation of specific experiments of great importance in the development of this work. These collaborations have been developed within the framework of various projects, such as PSE-MICROSIL08 (PSE-120000-2006-6) and the project INNDISOL (IPT-420000-2010-6), both funded by the Fondo Europeo de Desarrollo Regional FEDER (UE) “Una manera de hacer Europa” and the MICINN, and the project AMIC (ENE2010-21384-C04-02), whose funding has largely made it possible to complete this work.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots and, as a result, substantial development in the field of Artificial Intelligence, especially in Robotics. There are several studies in the literature by researchers from the scientific community that focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, for a single robot to accomplish. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed (a minimal sketch of this rule is given after this abstract). In addition, the evaluation of the results of each approach is of particular interest, comparing the results obtained when noise is introduced into the number of pending loads in order to simulate the robots' error in estimating the real number of pending tasks. The main contribution of this thesis lies in the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks have been presented and discussed. The particular issues studied are: Threshold models: It presents the experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise has also been introduced into the number of pending loads and dynamic tasks have been generated over time. Learning automata methods: It describes the experiments to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: The goal of the experiments presented is to test the ant colony optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
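
As an illustration of the response-threshold mechanism described above, the following minimal sketch implements the classic engagement rule in which a robot takes task j with probability s_j^2 / (s_j^2 + theta_j^2), where s_j is its local (possibly noisy) estimate of the pending load and theta_j its adaptive threshold. This is not the thesis code: the learning rates XI and PHI, the initialization ranges and all names are assumptions.

# Minimal sketch of a response-threshold task-selection rule for
# multi-robot task allocation (illustrative only).
import random

XI, PHI = 0.1, 0.05   # threshold learning rates (assumed values)

class Robot:
    def __init__(self, n_tasks):
        # One adaptive threshold per task type, randomly initialized.
        self.thresholds = [random.uniform(0.2, 0.8) for _ in range(n_tasks)]
        self.current = None          # index of the task currently executed

    def step(self, stimuli):
        """stimuli[j]: local (possibly noisy) estimate of the pending load of task j."""
        # Release the current task once its pending load is exhausted.
        if self.current is not None and stimuli[self.current] <= 0:
            self.current = None
        if self.current is None:
            for j, s in enumerate(stimuli):
                # Classic response-threshold engagement probability.
                p = s ** 2 / (s ** 2 + self.thresholds[j] ** 2 + 1e-12)
                if random.random() < p:
                    self.current = j
                    break
        # Adaptive update: specialize in the executed task and become
        # less sensitive to all the others.
        for j in range(len(stimuli)):
            if j == self.current:
                self.thresholds[j] = max(0.0, self.thresholds[j] - XI)
            else:
                self.thresholds[j] = min(1.0, self.thresholds[j] + PHI)
        return self.current

A simulation would call step once per time tick with each robot's noisy estimates of the pending loads, which is precisely the point where the additive-noise experiments mentioned in the abstract can be reproduced.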

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: Knowledge of pesticide selectivity to natural enemies is necessary for the successful implementation of biological and chemical control methods in integrated pest management (IPM) programs. Diacylhydrazine (DAH)-based ecdysone agonists, also known as molting-accelerating compounds (MACs), are considered a selective group of insecticides, and their compatibility with predatory Heteroptera, which are used as biological control agents, is known. However, their molecular mode of action has not been explored in beneficial insects such as Orius laevigatus (Fieber) (Hemiptera: Anthocoridae). RESULTS: In this project, in vivo toxicity assays demonstrated that the DAH-based RH-5849, tebufenozide and methoxyfenozide have no toxic effect against O. laevigatus. The ligand-binding domain (LBD) of the ecdysone receptor (EcR) of O. laevigatus was sequenced and a homology protein model was constructed, which confirmed a cavity structure with 12 α-helices harboring the natural insect molting hormone 20-hydroxyecdysone. However, docking studies showed that a steric clash occurred for the DAH-based insecticides due to the restricted extent of the ligand-binding cavity of the EcR of O. laevigatus. CONCLUSIONS: The insect toxicity assays demonstrated that MACs are selective for O. laevigatus. The modeling/docking experiments indicate that these pesticides do not bind to the LBD of the EcR of O. laevigatus, supporting the observation that they have no biological effect on the predatory bug. These data help to explain the compatible use of MACs together with predatory bugs in IPM programs. Keywords: Orius laevigatus, selectivity, diacylhydrazine insecticides, ecdysone receptor, homology modelling, docking studies.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main motivation, from a theoretical viewpoint, is that under some circumstances finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that within the class there are certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study on the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems has been designed to study the parameters involved in these methods. The second group has been designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last one has been designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. In order to carry out this investigation, we consider two types of test problems: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.
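
To fix ideas, the sketch below implements the classic simplicial decomposition scheme that these methods generalize: a linearized subproblem generates columns (extreme points of the feasible set) and a restricted master problem is re-solved over their convex hull. It is illustrative only, assumes a bounded feasible region of the form {x : Ax <= b, x >= 0}, and is not the authors' implementation; making the subproblem nonlinear, as the paper proposes, would amount to replacing the linprog call with a nonlinear column-generation problem.

# Illustrative simplicial decomposition (linear column generation) for
# min f(x) over a bounded polytope {x : A x <= b, x >= 0}. Sketch only.
import numpy as np
from scipy.optimize import linprog, minimize

def simplicial_decomposition(f, grad, A, b, x0, iters=20, tol=1e-8):
    cols = [np.asarray(x0, dtype=float)]        # retained extreme points
    x = cols[0].copy()
    for _ in range(iters):
        # Column generation: minimize the linearization grad(x)^T y over the polytope.
        res = linprog(grad(x), A_ub=A, b_ub=b, bounds=(0, None))
        y = res.x
        if grad(x) @ (y - x) > -tol:            # no descent column -> optimal
            break
        cols.append(y)
        # Restricted master problem: minimize f over the convex hull of cols.
        P = np.array(cols).T                    # extreme points as columns (n x k)
        k = P.shape[1]
        obj = lambda w: f(P @ w)
        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
        w = minimize(obj, np.full(k, 1.0 / k), bounds=[(0, 1)] * k,
                     constraints=cons, method='SLSQP').x
        x = P @ w
    return x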

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This paper presents a new, simple and affordable methodology for the definition and characterization of objects at different scales in high-spatial-resolution images. The objects have been generated by integrating texturally and spectrally homogeneous segments. The former have been obtained from the segmentation of the wavelet coefficients of the panchromatic image; the multi-scale character of this transform has yielded texturally homogeneous segments of different sizes for each of the scales. The spectrally homogeneous segments have been obtained by segmenting the classified corresponding multispectral image. In this way, a set of objects characterized by different attributes has been defined; these attributes give the objects a semantic meaning, making it possible to determine the similarities and differences between them. To demonstrate the capabilities of the proposed methodology, different unsupervised classification experiments on a Quickbird image have been carried out, using different subsets of attributes and a 1-D ascending hierarchical classifier. The results obtained have shown the capability of the proposed methodology to separate semantic objects at different scales, as well as its advantages over pixel-based image interpretation.
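
As a rough illustration of the textural side of this methodology, the sketch below derives per-pixel texture energies from the wavelet detail coefficients of a panchromatic band at several scales and clusters them into texturally homogeneous segments. It is a simplified stand-in, not the authors' procedure: the wavelet family ('db2'), the number of scales and the k-means clustering step are all assumptions.

# Illustrative multi-scale texture segmentation of a panchromatic band:
# per-pixel texture energy from wavelet detail coefficients at each scale,
# then clustered into texturally homogeneous segments.
import numpy as np
import pywt
from scipy.ndimage import zoom
from sklearn.cluster import KMeans

def texture_segments(pan, wavelet='db2', levels=2, n_segments=4):
    rows, cols = pan.shape
    coeffs = pywt.wavedec2(pan.astype(float), wavelet, level=levels)
    features = []
    for detail in coeffs[1:]:                     # (cH, cV, cD) per scale
        energy = sum(np.abs(d) for d in detail)   # scale-wise texture energy
        # Upsample the coarse energy map back onto the image grid.
        factors = (rows / energy.shape[0], cols / energy.shape[1])
        features.append(zoom(energy, factors, order=1))
    X = np.stack(features, axis=-1).reshape(-1, levels)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(X)
    return labels.reshape(rows, cols)

Each pixel is thus described by one texture-energy value per scale, mirroring the multi-scale character of the transform described above; the spectral segments would be obtained separately from the classified multispectral image and intersected with these.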

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The educational platform Virtual Science Hub (ViSH) has been developed as part of the GLOBAL excursion European project. ViSH (http://vishub.org/) is a portal where teachers and scientists interact to create virtual excursions to science infrastructures. The main motivation behind the project was to connect teachers, and in consequence their students, to scientific institutions and the wide range of infrastructures and resources they work with. Thus the idea of a hub was born that would allow the two worlds of scientists and teachers to connect and to innovate science teaching. The core of the ViSH's concept design is based on virtual excursions, which allow a number of pedagogical models to be applied. According to our internal definition, a virtual excursion is a tour through some digital context by teachers and pupils on a given topic that is attractive and has an educational purpose. Inquiry-based learning, project-based and problem-based learning are the most prominent approaches that a virtual excursion may serve. The domain-specific resources and scientific infrastructures currently available on the ViSH focus on life sciences, nanotechnology, biotechnology, grid and volunteer computing. The virtual excursion approach allows an easy combination of these resources into interdisciplinary teaching scenarios. In addition, social networking features support the users in collaborating and communicating in relation to these excursions and thus create a community of interest for innovative science teaching. The design and development phases were performed following a participatory design approach. An important aspect in this process was to create design partnerships amongst all actors involved (researchers, developers, infrastructure providers, teachers, social scientists and pedagogical experts) early in the project. A joint sense of ownership was created, and important changes during the conceptual phase were implemented in the ViSH due to early user feedback. Technology-wise, the ViSH is based on the latest web technologies in order to make it cross-platform compatible, so that it works on several operating systems such as Windows, Mac or Linux, and multi-device accessible from desktop, tablet and mobile devices. The platform has been developed in HTML5, the latest standard for web development, ensuring that it can run on any modern browser. In addition to the social networking features, a core element of the ViSH is the virtual excursions editor. It is a web tool that allows teachers and scientists to create rich mash-ups of learning resources provided by the e-Infrastructures (e.g. remote laboratories and live webcams). These rich mash-ups can be presented in either slides or flashcards format. Taking advantage of the web architecture supported, additional powerful components have been integrated, such as a recommendation engine providing personalized suggestions about educational content or interesting users, and a videoconference tool to enhance real-time collaboration, such as MashMeTV (http://www.mashme.tv/).