26 results for Structure-based model


Relevance:

100.00%

Abstract:

There is growing concern over the challenges for innovation in the Freight Pipeline industry. Since the early works of Chesbrough a decade ago, we have learned a lot about the content, context and process of open innovation. However, much more research is needed in the Freight Pipeline industry: the reality is that few corporations have institutionalized open innovation practices in ways that have enabled substantial growth or industry leadership. Based on this, we pursue the following question: how does a firm's integration into knowledge networks depend on its ability to manage knowledge? A competence-based model for freight pipeline organizations is analysed; this model should be understood by any organization that aims to succeed in motivating the professionals who carry out innovations and play a main role in collaborative knowledge creation processes. This paper aims to explain how open innovation can achieve its potential in most Freight Pipeline industries.

Relevance:

100.00%

Abstract:

Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, which is subtler. However, subtler emotions are harder to perceive, as demonstrated in tasks such as labelling and machine prediction. This paper attempts to address part of this challenge by considering the role of user satisfaction ratings, as well as conversational/dialog features, in discriminating contentment and frustration, two types of emotions that are known to be prevalent within spoken human-computer interaction. However, given the laboratory constraints, users might be positively biased when rating the system, indirectly making the reliability of the satisfaction data questionable. Machine learning experiments were conducted on two datasets, from users and from annotators, which were then compared in order to assess the reliability of these datasets. Our results indicated that standard classifiers were significantly more successful in discriminating the abovementioned emotions and their intensities (reflected by user satisfaction ratings) from annotator data than from user data. These results corroborated that, first, satisfaction data can be used directly as an alternative target variable to model affect and can be predicted exclusively from dialog features; and second, that this holds only when predicting the abovementioned emotions using annotators' data, suggesting that user bias does exist in a laboratory-led evaluation.
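
The dataset comparison described above lends itself to a compact experiment. The following is a minimal sketch (not the paper's actual pipeline), assuming scikit-learn: it trains the same standard classifier on dialog features against the two label sources, user ratings versus annotator labels, and compares cross-validated accuracy. The features and data are synthetic placeholders, with user labels made noisier to mimic the positive bias the abstract describes.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical dialog features, e.g. turn count, barge-ins, ASR confidence.
X = rng.normal(size=(n, 3))
latent = X @ np.array([1.0, -0.8, 0.5])
y_annotator = (latent > 0).astype(int)  # contentment vs. frustration
# User ratings: same underlying affect, plus rating noise / positive bias.
y_user = (latent + rng.normal(scale=2.0, size=n) > 0).astype(int)

for name, y in [("annotator", y_annotator), ("user", y_user)]:
    acc = cross_val_score(SVC(), X, y, cv=5).mean()
    print(f"{name} labels: mean CV accuracy = {acc:.2f}")
```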

Relevance:

100.00%

Abstract:

The agent-based model presented here comprises an algorithm that computes the degree of hydration, the water consumption and the layer thickness of C-S-H gel as functions of time for different temperatures and different w/c ratios. The results are in agreement with reported experimental studies, demonstrating the applicability of the model. As the available experimental results regarding elevated curing temperatures are scarce, the model could be recalibrated in the future. By combining the agent-based computational model with TGA analysis, a semi-empirical method is obtained that can be used to better understand microstructure development in ordinary cement pastes and to predict the influence of temperature on the hydration process.
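
As an illustration of the kind of relationship such a model captures, here is a small sketch (not the paper's agent-based algorithm) combining an Avrami-type hydration curve with Arrhenius temperature scaling; the activation energy, rate constant and Mills-type cap on ultimate hydration below are assumed values for demonstration only.

```python
import math

E_A = 40e3      # apparent activation energy, J/mol (assumed)
R = 8.314       # gas constant, J/(mol K)
T_REF = 293.15  # reference curing temperature, K

def hydration_degree(t_hours, T_kelvin, wc_ratio):
    """Degree of hydration alpha(t) in [0, 1)."""
    # Arrhenius rate scaling relative to the reference temperature.
    k = math.exp(-E_A / R * (1.0 / T_kelvin - 1.0 / T_REF))
    # Ultimate hydration degree grows with w/c (Mills-type cap, assumed form).
    alpha_max = 1.031 * wc_ratio / (0.194 + wc_ratio)
    # Avrami-type kinetics (time constant and exponent assumed).
    return alpha_max * (1.0 - math.exp(-((k * t_hours / 15.0) ** 0.9)))

for T in (293.15, 313.15):  # 20 C vs. 40 C curing
    alpha = hydration_degree(72.0, T, wc_ratio=0.5)
    print(f"T = {T - 273.15:.0f} C: alpha(72 h) = {alpha:.2f}")
```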

Relevance:

100.00%

Abstract:

This paper describes the impact of electric mobility on the transmission grid in the Flanders region (Belgium), using micro-simulation activity-based models. These models are used to provide temporal and spatial estimates of the energy and power demanded by electric vehicles (EVs) in different mobility zones. The increase in load demand due to electric mobility is added to the background load demand in these mobility areas, and the effects on the transmission substations are analyzed. From this information, the total storage capacity per zone is evaluated and some strategies for an EV aggregator are proposed, allowing the aggregator to fulfill bids on the electricity markets.
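
The aggregation step described above can be illustrated in a few lines. This is a hedged sketch with assumed data shapes, not the paper's micro-simulator: EV charging sessions from an activity-based simulation are summed per zone and hour, added to the background load, and checked against an assumed substation capacity.

```python
from collections import defaultdict

# (zone, hour) -> MW, as might be exported by a micro-simulation (hypothetical).
background = {("Z1", 18): 40.0, ("Z1", 19): 43.0, ("Z2", 18): 25.0}
ev_sessions = [  # (zone, hour, charging power in MW), hypothetical
    ("Z1", 18, 2.5), ("Z1", 18, 1.0), ("Z1", 19, 4.0), ("Z2", 18, 0.5),
]
substation_capacity = {"Z1": 45.0, "Z2": 30.0}  # MW, assumed

ev_load = defaultdict(float)
for zone, hour, p in ev_sessions:
    ev_load[(zone, hour)] += p  # aggregate EV demand per zone and hour

for (zone, hour), base in sorted(background.items()):
    total = base + ev_load[(zone, hour)]
    flag = "OVERLOAD" if total > substation_capacity[zone] else "ok"
    print(f"{zone} h{hour}: {base:.1f} + {ev_load[(zone, hour)]:.1f} "
          f"= {total:.1f} MW [{flag}]")
```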

Relevance:

100.00%

Abstract:

Accurate design flood estimates associated with high return periods are necessary to design and manage hydraulic structures such as dams. In practice, such quantiles are usually estimated via univariate flood frequency analyses, mostly based on the study of peak flows. Nevertheless, the nature of floods is multivariate, and it is essential to consider representative flood characteristics, such as flood peak, hydrograph volume and hydrograph duration, to carry out an appropriate analysis; especially when the inflow peak is transformed into a different outflow peak during the routing process in a reservoir or floodplain. Multivariate flood frequency analyses have traditionally been performed using standard bivariate distributions to model correlated variables, yet these entail shortcomings such as the need to use the same kind of marginal distribution for all variables and the assumption of a linear dependence relation between them. Recently, the use of copulas has spread in hydrology because of their benefits in the multivariate context, as they overcome the drawbacks of the traditional approach. A copula is a function that represents the dependence structure of the studied variables, and it allows the multivariate frequency distribution to be obtained from their marginal distributions, regardless of the kind of marginal distributions considered. The estimation of multivariate return periods, and therefore multivariate quantiles, is also facilitated by the way in which copulas are formulated. The present doctoral thesis seeks to provide methodologies that improve the traditional techniques used by practitioners, in order to estimate flood quantiles more appropriate for dam design, dam management and flood risk assessment, through bivariate flood frequency analyses based on the copula approach. The flood variables considered for that goal are peak flow and hydrograph volume. In order to accomplish a complete study, the present research addresses: (i) a bivariate local flood frequency analysis focused on examining and comparing theoretical return periods based on the natural probability of occurrence of a flood with the return period associated with the risk of dam overtopping, to estimate quantiles at a given gauged site; (ii) the extension of the local approach to the regional one, supplying a complete procedure for performing a bivariate regional flood frequency analysis to either estimate quantiles at ungauged sites or improve at-site estimates at gauged sites; (iii) the use of copulas to investigate bivariate flood trends due to increasing urbanisation levels in a catchment; and (iv) the extension of observed flood series by combining the benefits of a copula-based model and a hydro-meteorological model.
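
The copula mechanics summarised above can be made concrete with a short worked example. The sketch below uses the Gumbel-Hougaard family, a common choice for positively dependent peak-volume pairs, with an assumed (not fitted) dependence parameter, and computes the joint "OR" and "AND" return periods from the marginal non-exceedance probabilities, for annual-maximum data (mean interarrival of one year).

```python
import math

def gumbel_copula(u, v, theta):
    """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return math.exp(-(((-math.log(u)) ** theta +
                       (-math.log(v)) ** theta) ** (1.0 / theta)))

theta = 2.0  # dependence parameter (assumed; fitted from data in practice)
u = 0.99     # P(Q <= q) for the design peak flow
v = 0.99     # P(V <= v) for the design hydrograph volume

C = gumbel_copula(u, v, theta)
T_or = 1.0 / (1.0 - C)             # either variable exceeds its quantile
T_and = 1.0 / (1.0 - u - v + C)    # both variables exceed their quantiles
print(f"T_OR = {T_or:.0f} years, T_AND = {T_and:.0f} years")
```

With u = v = 0.99, each variable alone has a 100-year return period, while the joint OR return period comes out shorter and the AND return period longer; this gap is precisely why univariate analyses can misstate the risk relevant to dam overtopping.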

Relevance:

100.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives could be re-stated more clearly as follows:
Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).
Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.
Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.
Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations mentioned above, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.
H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
− OntoTag’s annotation scheme,
− OntoTag’s annotation architecture,
− OntoTagger’s (XML, RDF, OWL) annotation schemas,
− OntoTagger’s configuration.
H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.
H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration:
− Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger);
− Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
− Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.
H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, provided that these other tools have been built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.
H.6 Each linguistic level can be managed and annotated independently.
• REJECTED by OntoTagger’s experiments, given the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units lie at the interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies).
• On the one hand, OntoTag’s network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag’s Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and by those of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.
Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that (a) this type of combination architecture and configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.
Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. It remains to be determined experimentally how our approach works in a different domain or a different language, such as French, English, or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.
Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
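
To make the triple-based annotation idea tangible, the following sketch shows how one morphosyntactic annotation might be expressed as ontology-based RDF triples with rdflib. The namespace, class and property names here are hypothetical stand-ins, not OntoTag's actual vocabulary; the point is only the structure the model requires: linguistic units, attributes and values linked as Semantic Web triples.

```python
from rdflib import Graph, Literal, Namespace, RDF

LING = Namespace("http://example.org/ontotag-like/")  # hypothetical namespace
g = Graph()

token = LING["token-001"]
g.add((token, RDF.type, LING.Word))               # linguistic unit (cf. LUO)
g.add((token, LING.hasForm, Literal("annotates")))
g.add((token, LING.hasPOS, LING.Verb))            # attribute and value (cf. LAO/LVO)
g.add((token, LING.hasLemma, Literal("annotate")))

print(g.serialize(format="turtle"))
```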

Relevance:

100.00%

Abstract:

Although most of the research on Cognitive Radio is focused on communication bands above the upper HF limit (30 MHz), Cognitive Radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we consider legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to enhance the efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary users' activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and thus avoid collisions. It is based on Hidden Markov Models (HMMs), which are a powerful tool for modelling stochastic random processes, and it is trained with real measurements of the 14 MHz band. Using the proposed HMM-based model, the prediction achieves an average 10.3% error rate with one minute of channel knowledge, and this can be reduced as the knowledge is extended: with the previous 8 minutes of knowledge, an average 5.8% prediction error rate is achieved. These results suggest that the resulting activity model for the HF band could actually be used to predict primary users' activity and be included in a future HF cognitive-radio-based station.
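
A minimal sketch of the prediction idea follows, under stated assumptions: it uses hmmlearn's CategoricalHMM on a synthetic busy/idle sequence (standing in for the real 14 MHz measurements) and estimates the expected remaining sojourn time in the current state from the fitted self-transition probability, since state dwell times in a discrete HMM are geometric.

```python
import numpy as np
from hmmlearn import hmm

# 0 = idle, 1 = busy; one observation per second (synthetic stand-in data).
obs = np.concatenate([np.zeros(50, dtype=int),
                      np.ones(30, dtype=int),
                      np.zeros(40, dtype=int)]).reshape(-1, 1)

model = hmm.CategoricalHMM(n_components=2, n_iter=100, random_state=1)
model.fit(obs)

current = model.predict(obs)[-1]
a_ii = model.transmat_[current, current]
# Geometric dwell time: E[remaining steps] = 1 / (1 - a_ii).
print(f"current state = {current}, expected sojourn ~ {1.0 / (1.0 - a_ii):.1f} s")
```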

Relevance:

100.00%

Abstract:

Acquired brain injury (ABI) [1,2] refers to any brain damage occurring after birth. It usually causes damage to portions of the brain. ABI may result in a significant impairment of an individual's physical, cognitive and/or psychosocial functioning. The main causes are traumatic brain injury (TBI), cerebrovascular accident (CVA) and brain tumors. The main consequence of ABI is a dramatic change in the individual's daily life. This change involves a disruption of the family, a loss of future income capacity and an increase in lifetime costs. One of the main challenges in neurorehabilitation is to obtain a dysfunctional profile of each patient in order to personalize the treatment. This paper proposes a system to generate a patient's dysfunctional profile by integrating theoretical, structural and neuropsychological information in a 3D brain imaging-based model. The main goal of this dysfunctional profile is to help therapists design the most suitable treatment for each patient. At the same time, the results obtained are a source of clinical evidence to improve the accuracy and quality of our rehabilitation system. Figure 1 shows the diagram of the system. This system is composed of four main modules: image-based parameter extraction, theoretical modeling, classification, and co-registration and visualization.

Relevance:

100.00%

Abstract:

The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has recently been proposed. It is designed to achieve a balance between controlling isolator displacement demands and limiting structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation. It is characterized by two unique features: a self-braking (buffer) mechanism and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using the finite element code SAP2000. The validity of the obtained SAP2000 model is then checked against experimental, numerical and analytical results. The paper then investigates the merits and demerits of activating the built-in buffer mechanism with respect to both structural pounding mitigation and isolation efficiency. It addresses the problem of passively alleviating possible inner pounding within the RNC isolator, which may arise from the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite-element model can closely match and accurately predict the overall behavior of the RNC isolator with effectively small errors. Moreover, the inherent buffer mechanism of the RNC isolator could mitigate or even eliminate direct structure-to-structure pounding under severe excitation, considering limited separation gaps between adjacent structures. In addition, increasing the inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of any inner pounding that develops and, therefore, alleviate or even eliminate the possible negative effects of the buffer mechanism on the overall RNC-isolated structural response.
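
The buffer behaviour can be illustrated numerically. The sketch below is not the SAP2000 representation used in the paper, but a generic bilinear hysteretic isolator with a stiff one-sided "buffer" spring that engages beyond a displacement limit, loosely mimicking the RNC self-braking feature; all parameter values are assumptions for demonstration.

```python
import numpy as np

def bilinear_step(x, x_prev, fh_prev, k1=2.0e6, k2=2.0e5, q=5.0e4):
    """Bilinear hysteresis update (characteristic strength q; units N, m):
    elastic trial step clipped to the post-yield bounds k2*x +/- q."""
    f_trial = fh_prev + k1 * (x - x_prev)
    return min(max(f_trial, k2 * x - q), k2 * x + q)

def buffer_force(x, gap=0.15, kb=5.0e7):
    """Stiff spring engaging only beyond the buffer gap, in either direction."""
    if abs(x) <= gap:
        return 0.0
    return kb * (abs(x) - gap) * (1.0 if x > 0 else -1.0)

# Quasi-static displacement cycle tracing the hysteresis loop.
xs = 0.2 * np.sin(np.linspace(0.0, 2.0 * np.pi, 201))
fh, totals = 0.0, []
for i in range(1, len(xs)):
    fh = bilinear_step(xs[i], xs[i - 1], fh)      # hysteretic component
    totals.append(fh + buffer_force(xs[i]))        # plus buffer component
print(f"peak isolator force over the cycle: {max(abs(f) for f in totals)/1e3:.0f} kN")
```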

Relevance:

90.00%

Abstract:

Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson’s noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C, which today’s modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
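
For context, the noise figure F that the abstract relates to appears in the classical Leeson phase-noise formula, which in standard notation (an assumption here, not the paper's own derivation) reads:

\[
\mathcal{L}(\Delta f) \;=\; 10\,\log_{10}\!\left[\frac{2FkT}{P_s}\left(1+\left(\frac{f_0}{2Q_L\,\Delta f}\right)^{2}\right)\left(1+\frac{f_c}{\Delta f}\right)\right],
\]

where \(P_s\) is the signal power in the resonator, \(Q_L\) the loaded quality factor, \(f_0\) the oscillation frequency, \(\Delta f\) the offset from the carrier, and \(f_c\) the flicker corner frequency. The pedestal far from \(f_0\) mentioned in the abstract corresponds to the flat \(2FkT/P_s\) term, while the line broadening dominates close to the carrier.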

Relevance:

90.00%

Abstract:

This paper presents some ideas about a new neural network architecture that can be compared to a Taylor analysis when dealing with patterns. Such an architecture is based on linear activation functions with an axo-axonic architecture. A biological axo-axonic connection between two neurons is one in which the weight of the connection is given by the output of a third neuron. This idea can be implemented in the so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first one outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There exists a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal and pass this information to other individuals in order to reach the optimum solution. The latter are based on genetic models, that is, individuals can die and new individuals are created by combining the information of living ones; or they are based on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm algorithm.
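
The weight-generating idea is easy to prototype. Below is a minimal sketch with assumed shapes, not the paper's exact networks: a first linear net maps the input x to the weight and bias that a second linear net then applies to x, so the composite output is quadratic in x (hence the Taylor-series analogy), and a bare-bones Particle Swarm Optimization tunes the first net's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] ** 2   # target: a Taylor-like polynomial

def predict(params, X):
    A = params.reshape(2, 2)                    # first net: [w(x), b(x)] = A @ [x, 1]
    aug = np.hstack([X, np.ones((len(X), 1))])  # augmented input [x, 1]
    wb = aug @ A.T                              # per-sample weight and bias
    return wb[:, 0] * X[:, 0] + wb[:, 1]        # second net: w(x) * x + b(x)

def loss(params):
    return float(np.mean((predict(params, X) - y) ** 2))

# Bare-bones PSO over the 4 parameters of the weight-generating net.
n_part, dim = 30, 4
pos = rng.normal(size=(n_part, dim)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print(f"final MSE: {loss(gbest):.4f}")          # should approach 0
```

Since w(x) and b(x) are themselves linear in x, the composite w(x)*x + b(x) contains the x² term that a single linear net cannot produce, which is the essence of the architecture's extra expressive power.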

Relevance:

90.00%

Abstract:

The idea of giving a language to a group of robots or artificial agents has been the subject of intense study in recent decades. Naturally, the first attempts focused on the emergence of vocabularies conventionally shared by the group of robots. The advantages that a common lexicon can provide are evident, and a more complex language in which words can be combined would be even more beneficial. Thus, some proposals have been put forward towards the emergence of a consensual language with a syntactic structure similar to that of human language; this work follows that trend. Taking human language as a model means adopting some of the hypotheses and theories that disciplines such as philosophy, psychology and linguistics, among others, have provided. According to these theoretical approaches, language has a double dimension, formal and functional. Based on its formal dimension, it seems clear that language follows rules, so the use of a grammar has been considered essential for its representation, but also because grammars are a very simple and powerful device that easily generates symbolic structures. As for the functional dimension, perhaps the most influential theory of recent times, the Theory of Speech Acts, has been taken into account. This theory builds on Wittgenstein's idea that meaning lies in the use of language, to the extent that language is understood as a way of acting and behaving, in short, as a form of life. With these premises in mind, this thesis experiments with computational models that allow a group of robots to reach a common language in an autonomous way, simply by means of individual interactions among the robots, that is, by means of language games. To that end, three different models of language are proposed:
• A model based on probabilistic grammars and reinforcement learning, in which interactions and language use are key to the emergence of the language, and which uses a static generative grammar designed beforehand. This model is applied to two different groups: one formed exclusively by robots and another combining robots and a human, so that in the second case learning is supervised by the human.
• A model based on grammatical evolution that allows studying not only syntactic consensus but also questions related to the very genesis of language, and which uses a universal grammar from which the robots can evolve by themselves the most appropriate grammar for the linguistic situation they are dealing with at each moment.
• A model based on grammatical evolution and reinforcement learning that takes aspects of the previous models and extends the robots' possibilities by allowing them to develop a language that adapts to dynamic linguistic situations that can change over time, and that also makes it possible to impose order restrictions, which are very common in complex syntactic structures.
All the models involve a decentralized and self-organized approach, so that none of the robots is the owner of the language and all must cooperate and collaborate in a coordinated manner to achieve syntactic consensus. In each case, experiments are presented to validate the proposed models, both in terms of success in the emergence of language and in relation to important parallel issues, such as human-machine interaction or the very genesis of language.
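
As a flavour of the language-game mechanics underlying these models, here is a deliberately simplified sketch (a word-order toy, far simpler than the thesis's grammar-based models): agents keep reinforcement scores over candidate orderings, a speaker and a hearer play repeated games, successful conventions are reinforced while alternatives are laterally inhibited, and the population converges on a shared convention with no central owner of the language.

```python
import random

random.seed(0)
ORDERS = ["SVO", "SOV", "VSO"]
agents = [{o: random.random() for o in ORDERS} for _ in range(10)]

def best(agent):
    return max(agent, key=agent.get)

for game in range(2000):
    speaker, hearer = random.sample(agents, 2)
    utterance = best(speaker)
    if utterance == best(hearer):       # communication succeeds
        for a in (speaker, hearer):     # reinforce the winning order and
            a[utterance] += 0.1         # laterally inhibit the rest
            for o in ORDERS:
                if o != utterance:
                    a[o] *= 0.9
    else:                               # failure: the hearer adapts
        hearer[utterance] += 0.05

consensus = {best(a) for a in agents}
print(f"conventions in the population after 2000 games: {consensus}")
```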

Relevance:

90.00%

Abstract:

In this thesis, we have studied in depth the development of support models for distance collaborative learning and subsequently devised an architecture based on the principles of the CSCL (Computer Supported Collaborative Learning) paradigm. The proposed architecture addresses a specific type of problem that requires techniques derived from Cooperative Work, Artificial Intelligence and User Interfaces, as well as ideas taken from Pedagogy and Psychology. We have designed a complete, open and generic solution. The architecture exploits the new information technologies to achieve an effective system to support distance education. It is organised into four levels: Configuration, Experience, Organisation and Analysis. Based on this architecture, a system called DEGREE has been implemented. In DEGREE, each level of the architecture gives rise to an independent subsystem related to the others. The application benefits from the use of structured shared workspaces. The Configuration subsystem allows defining the elements of a workspace and an experience and adapting them to each type of user. The Experience subsystem gathers the users' contributions to build a joint solution to a given problem. The students' interventions are structured on the basis of a generic conversational graph. Moreover, all user actions are registered in order to represent explicitly the complete process that leads to the solution. These data are also stored in a common memory, which constitutes the subsystem called the Organisational Memory of Experiences. The Analysis subsystem studies the users' interventions. This analysis allows inferring conclusions about the way groups work and their attitudes towards collaboration, also taking into account the observer's subjective knowledge. The process of developing the architecture and the system in parallel has followed a five-phase refinement cycle with successive stages of prototyping and formative evaluation. Each phase of this process has been carried out with real users, and the users' opinions have been taken into account to improve the functionalities of the architecture as well as the system interface. This approach has also allowed us to verify the practical usefulness and the validity of the proposals underpinning this work.
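
The conversational-graph structuring can be sketched in a few lines. The type names and allowed transitions below are hypothetical, not DEGREE's actual schema; the sketch only shows contributions constrained by a generic conversation graph, with every action also logged to a common memory, in the spirit of the Experience and Organisation levels.

```python
from dataclasses import dataclass, field

ALLOWED = {  # generic conversational graph: which move may follow which
    None: {"proposal"},
    "proposal": {"agreement", "counter-proposal", "question"},
    "question": {"clarification"},
    "clarification": {"agreement", "counter-proposal"},
    "counter-proposal": {"agreement", "counter-proposal", "question"},
}

@dataclass
class Experience:
    contributions: list = field(default_factory=list)  # (author, type, text)
    memory: list = field(default_factory=list)         # full action log

    def contribute(self, author, ctype, text):
        last = self.contributions[-1][1] if self.contributions else None
        if ctype not in ALLOWED[last]:
            raise ValueError(f"{ctype!r} cannot follow {last!r}")
        self.contributions.append((author, ctype, text))
        self.memory.append(("contribute", author, ctype))

exp = Experience()
exp.contribute("ana", "proposal", "Use a shared glossary.")
exp.contribute("luis", "agreement", "Fine by me.")
print(exp.memory)
```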

Relevância:

90.00% 90.00%

Publicador:

Resumo:

This work addresses a central question in the ultimate load design of reinforced concrete and masonry structures: whether the plastic strains required to verify a collapse state can actually be developed by the regions of the structure that must exhaust their ultimate capacity for that state to hold. The starting point is a set of design decisions that, by statics alone, guarantee equilibrium of the structure under the ultimate loads it must resist; the value of the strains needed to reach that state is then determined directly. The limit analysis theorems are therefore not applied as such: the problem is formulated from an elasto-plastic point of view. That is, the path the structure must traverse under a monotonically increasing loading process is not ignored, so the regions that have not yielded restrain the free plastic strains that pure limit theory assumes. In terms of work and energy, the part of the system that has not yielded is introduced into the balance between the work of the external forces and the strain energy. Once this energy balance is established as the potential of the system, its stationarity condition determines the displacement field and, with it, the plastic strain field. In short, this provides a way of verifying whether, and to what extent, the ductility of a proposed design is sufficient to verify the intended collapse state under given imposed loads.

The theoretical development yields several important results. One is the verification that the collapse state reached in this determinate way through the elasto-plastic energy balance satisfies the conditions of the collapse solution predicted by the limit theorems: the determinate solution (uniqueness of the elastic problem) agrees with the uniqueness theorem of the collapse load, while additionally pinning down the equilibrium system and the collapse mechanism, aspects the limit theorems cannot establish, since they supply only the value of the ultimate load. Another result concerns the particular case in which the system has a flat yield surface, which makes the equilibrium possibilities for one and the same collapse mechanism infinite; this underlies the apparent freedom to place plastic hinges at will in beams and arches. From this formulation it also follows that, once the internal constitutive laws are defined, every system possesses an inherent condition that allows it to arrive at the onset of collapse without demanding any plastic strain, all the regions that have reached their ultimate strength yielding simultaneously; the collapse would, in a sense, appear brittle. In that case the system fully retains its ductile capacity up to the end, and this state acts as a canonical representative of any other equilibrium solution envisaged for the same structure under the same internal design criterion. The closer a design is to, or the farther it is from, the canonical solution, the lower or the higher the ductility demand the system must meet to verify the ultimate load.
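The abstract states the method in words only. As a minimal sketch of the energy balance it describes (the notation is ours, not the thesis's: \Omega_e is the part of the structure that remains elastic, plastic deformation is lumped into hinge rotations \theta_i with yield moments M_{p,i}, and \lambda\mathbf{q} is the imposed load), the potential of the partially yielded system and its stationarity condition would read:

\Pi(\mathbf{u},\boldsymbol{\theta})
  = \int_{\Omega_e} \tfrac{1}{2}\,
      \boldsymbol{\varepsilon}(\mathbf{u})^{\mathsf{T}}
      \mathbf{C}\,\boldsymbol{\varepsilon}(\mathbf{u})\,\mathrm{d}\Omega
  + \sum_i M_{p,i}\,\lvert\theta_i\rvert
  - \lambda \int_{\Omega} \mathbf{q}\cdot\mathbf{u}\,\mathrm{d}\Omega ,
\qquad
\delta\Pi = 0 .

The first term, the strain energy of the non-yielded regions, is precisely what restrains the otherwise free plastic rotations; stationarity then determines both the displacement field \mathbf{u} and the plastic rotation demands \theta_i. In these terms, the canonical design would be the one for which \delta\Pi = 0 is first satisfied with every \theta_i = 0.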
Designs that depart too far from the canonical solution will not verify the intended collapse state, for lack of ductility: the plastic strain demand on some yielded region will exceed its capacity, revealing a collapse load governed by lack of ductility that is lower than the one predicted by mere equilibrium. To determine the plastic strains at the hinges, a model formulated with the Boundary Element Method has been adopted, which provides a continuous displacement field, and hence continuous strain and stress fields, even in the presence of cracks on the boundary. An important point is the far from negligible difference found in the plastic rotation capacity of reinforced concrete sections with and without shear. For masonry hinges, the difference is governed by the eccentricity conditions, associated with the relative value of the compression: the differences between yielded regions under high and low relative axial force are remarkable.

In addition, although in a somewhat secondary role, serviceability conditions also limit the intended ultimate load design. Yielding entails considerable deformations, both local and global, so if, in the serviceability state, the yielding of some region entails cracking that is excessive for the surrounding environment, the solution is unacceptable for that reason alone. Likewise, structural deflections impose a severe limit on design possibilities; in building construction especially, active deflections are a critical factor when choosing one solution over another. The limit imposed by ductility must therefore be supplemented by the limit imposed by serviceability conditions.

By considering the ductility and serviceability conditions in each case in this way, every design decision can be assessed by predicting its consequences at the ultimate and serviceability states. That is, once the limits are known, it is possible to bound which designs will certainly satisfy the foreseen ductility and serviceability conditions, and to what extent; and, where they cannot be satisfied, which corrections should be made to the preliminary design so that they can. Finally, from the consequences drawn from this study, several lines of research and experimentation are proposed with a view to completing the results obtained and extending them in practice.
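To make the final verification concrete, here is a minimal Python sketch of the ductility check the abstract describes (all numbers, hinge names and the shear reduction factor are illustrative assumptions; in the thesis the rotation demands follow from the stationarity of the elasto-plastic potential and the capacities from the BEM-based hinge model):

# Minimal sketch of the ductility check. All values below are made up
# for illustration, not results from the thesis.

def rotation_capacity(base_capacity_rad, with_shear):
    # The abstract reports a non-negligible drop in the rotation capacity
    # of reinforced concrete hinges when shear acts; 0.6 is an assumed factor.
    return base_capacity_rad * (0.6 if with_shear else 1.0)

def check_ductility(hinges):
    """hinges: list of dicts with plastic rotation demand, base capacity
    and a flag for the presence of shear."""
    ok = True
    for h in hinges:
        cap = rotation_capacity(h["capacity"], h["shear"])
        if h["demand"] > cap:
            ok = False
            print(f"hinge {h['id']}: demand {h['demand']:.4f} rad "
                  f"> capacity {cap:.4f} rad -> fails by lack of ductility")
        else:
            print(f"hinge {h['id']}: demand {h['demand']:.4f} rad "
                  f"<= capacity {cap:.4f} rad -> ok")
    return ok

# Illustrative demands for a design that departs from the canonical one:
design = [
    {"id": "A", "demand": 0.010, "capacity": 0.025, "shear": False},
    {"id": "B", "demand": 0.018, "capacity": 0.025, "shear": True},
]
check_ductility(design)  # hinge B fails: the ultimate load is governed by ductility

A design close to the canonical solution would show demands near zero at every hinge and pass trivially; the farther the design drifts from it, the larger the demands become until some hinge, like B above, falls short of its (possibly shear-reduced) capacity.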