888 results for belief rule-based approach
Abstract:
In the complex landscape of public education, participants at all levels are searching for policy and practice levers that can raise overall performance and close achievement gaps. The collection of articles in this edition of the Journal of Applied Research on Children takes a big step toward providing the tools and tactics needed for an evidence-based approach to educational policy and practice.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR STRENGTHS AND SHORTCOMINGS
Computational Linguistics is by now a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are other types of linguistic tools that are perhaps not as well known, but on which most other applications of Computational Linguistics are built. These include POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). In the latter case, however, all these tools would have to produce annotations for a common level, which would then be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, morphosyntactic level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). As stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
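To make goal (ii) more concrete, the following Python sketch shows one way the heterogeneous tagsets of two annotation tools could be mapped onto a shared, ontology-style vocabulary so that their outputs become directly comparable; the tag names, the mapping and the category identifiers are illustrative assumptions, not the actual OntoTag vocabulary.

    # Illustrative only: hypothetical tagset-to-ontology mappings, not OntoTag's.
    PENN_TO_ONTO = {"NN": "onto:Noun", "NNS": "onto:Noun", "VB": "onto:Verb", "JJ": "onto:Adjective"}
    EAGLES_TO_ONTO = {"NC": "onto:Noun", "VM": "onto:Verb", "AQ": "onto:Adjective"}

    def unify(annotations, mapping):
        """Rewrite (token, tool-specific tag) pairs into a shared ontological vocabulary."""
        return [(token, mapping.get(tag, "onto:Unknown")) for token, tag in annotations]

    tagger_a = [("annotation", "NN"), ("tools", "NNS")]   # e.g. a Penn-Treebank-style tagger
    tagger_b = [("annotation", "NC"), ("tools", "NC")]    # e.g. an EAGLES-style tagger

    a, b = unify(tagger_a, PENN_TO_ONTO), unify(tagger_b, EAGLES_TO_ONTO)
    # Once both outputs share one vocabulary, agreements and disagreements can be detected
    # and the annotations combined to correct individual tool errors.
    agreements = [(tok, c1) for (tok, c1), (_, c2) in zip(a, b) if c1 == c2]
    print(agreements)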
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a hybrid (linguistic and ontological) annotation model suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, where possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
The most successful unfolding rules used nowadays in the partial evaluation of logic programs are based on well quasi orders (wqo) applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Ancestor (sub)sequences are used to increase the specialization power of unfolding while still guaranteeing termination and also to reduce the number of atoms for which the wqo has to be checked. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with a wqo and allows a stack-based implementation without losing any opportunities for specialization. Using our technique, certain non-leftmost unfoldings are allowed as long as local unfolding is performed, i.e., we cover depth-first strategies. To deal with practical programs, we propose assertion-based techniques which allow our approach to treat programs that include (Prolog) built-ins and external predicates in a very extensible manner, for the case of leftmost unfolding. Finally, we report on our implementation of these techniques embedded in a practical partial evaluator, which shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations. To appear in Theory and Practice of Logic Programming (TPLP).
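As a rough illustration of the kind of termination check involved, the Python sketch below implements the homeomorphic embedding relation (a wqo commonly used in partial evaluation) and applies it over a stack of covering ancestors before unfolding an atom; the term representation and the whistle policy are illustrative assumptions rather than the paper's actual implementation.

    # Terms are ("functor", [args]) tuples; variables are plain strings like "X".
    def is_var(t):
        return isinstance(t, str)

    def embedded(s, t):
        """Homeomorphic embedding s <| t, a well quasi order often used as a whistle."""
        if is_var(s) and is_var(t):
            return True
        if not is_var(t):
            f_t, args_t = t
            if any(embedded(s, arg) for arg in args_t):          # diving rule
                return True
            if not is_var(s):
                f_s, args_s = s
                if f_s == f_t and len(args_s) == len(args_t):    # coupling rule
                    return all(embedded(a, b) for a, b in zip(args_s, args_t))
        return False

    def safe_to_unfold(atom, ancestor_stack):
        """Blow the whistle if some covering ancestor embeds into the new atom."""
        return not any(embedded(anc, atom) for anc in ancestor_stack)

    ancestor = ("p", ["X"])                      # atom selected earlier in the derivation
    candidate = ("p", [("f", ["X"])])            # candidate atom with a growing argument
    print(safe_to_unfold(candidate, [ancestor])) # False: unfolding is stopped here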
Abstract:
The integration of powerful partial evaluation methods into practical compilers for logic programs is still far from reality. This is related both to 1) efficiency issues and to 2) the complications of dealing with practical programs. Regarding efficiency, the most successful unfolding rules used nowadays are based on structural orders applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with any structural order and allows a stack-based implementation without losing any opportunities for specialization. Regarding the second issue, we propose assertion-based techniques which allow our approach to deal with real programs that include (Prolog) built-ins and external predicates in a very extensible manner. Finally, we report on our implementation of these techniques in a practical partial evaluator, embedded in a state-of-the-art compiler which uses global analysis extensively (the Ciao compiler and, specifically, its preprocessor CiaoPP). The performance analysis of the resulting system shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations.
Abstract:
Proof-carrying code is a general methodology for certifying that the execution of an untrusted mobile code is safe, according to a predefined safety policy. The basic idea is that the code supplier attaches a certificate (or proof) to the mobile code which the consumer then checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking, a much simpler task. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for proof-carrying code [1]. To this end, the certificate is generated from an abstract interpretation-based proof of safety. Intuitively, the verification condition is extracted from a set of assertions guaranteeing safety and the answer table generated during the analysis. Given this information, it is relatively simple and fast to verify that the code does meet this proof and so its execution is safe. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based code certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm (constraint) logic programming system. Then, by means of some experiments, we show how code certification is supported in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems, which may lack the necessary computing resources to verify safety on their own. We illustrate the relevance of the information inferred by existing cost analysis to control resource usage in this context. Moreover, since the (rather complex) analysis phase is replaced by a simpler, efficient checking process at the code consumer side, we believe that our abstract interpretation-based approach to proof-carrying code becomes practically applicable to this kind of system.
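The abstract does not spell out the checking algorithm, but the core idea (the consumer verifies a supplied fixpoint instead of recomputing it) can be sketched roughly as follows; the toy lattice, program points and transfer function are invented for illustration and are not CiaoPP's.

    # Toy abstract domain: a three-value chain bot <= pos <= top (illustrative only).
    ORDER = {"bot": 0, "pos": 1, "top": 2}

    def leq(a, b):
        return ORDER[a] <= ORDER[b]

    def join(a, b):
        return a if ORDER[a] >= ORDER[b] else b

    # Toy program: p0: x := 1 ;  p1: loop body does x := x + 1.
    # One application of the abstract semantics to a candidate answer table.
    def transfer(table):
        incremented = "pos" if table["p1"] in ("bot", "pos") else "top"
        return {"p0": "pos", "p1": join("pos", incremented)}

    def check(certificate, assertions):
        """Consumer-side check: the certificate is a post-fixpoint and entails the assertions."""
        post = transfer(certificate)
        is_postfixpoint = all(leq(post[p], certificate[p]) for p in certificate)
        is_safe = all(leq(certificate[p], a) for p, a in assertions.items())
        return is_postfixpoint and is_safe

    certificate = {"p0": "pos", "p1": "pos"}       # shipped alongside the mobile code
    print(check(certificate, {"p1": "pos"}))       # True: x stays positive at p1, one pass only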
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information that is useful to characterize types of sensor data.
• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
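A rough, hypothetical sketch of the ontology-based access idea is given below: an R2RML-style mapping (represented here as a plain dictionary) relates raw stream fields to ontology terms, a conceptual filter is rewritten once into a predicate over raw tuples, and the predicate is evaluated continuously over a sliding window. All property names, field names and window semantics are illustrative assumptions, not the SPARQLStream engine itself.

    from collections import deque

    # Hypothetical mapping from ontology properties to raw stream fields
    # (in the thesis this role is played by R2RML mappings; the names here are invented).
    MAPPING = {"ex:hasValue": "temp_c", "ex:resultTime": "ts"}

    def rewrite(ontology_property, op, constant):
        """Rewrite a conceptual filter into a predicate over raw stream tuples."""
        field = MAPPING[ontology_property]
        ops = {">": lambda v: v > constant, "<": lambda v: v < constant}
        return lambda tup: ops[op](tup[field])

    def continuous_query(stream, predicate, window_size=3):
        """Evaluate the rewritten filter over a sliding window of raw observations."""
        window = deque(maxlen=window_size)
        for tup in stream:
            window.append(tup)
            yield [t for t in window if predicate(t)]

    raw = [{"ts": 1, "temp_c": 18.5}, {"ts": 2, "temp_c": 21.0}, {"ts": 3, "temp_c": 23.4}]
    hot = rewrite("ex:hasValue", ">", 20.0)        # conceptual-level filter, rewritten once
    for matches in continuous_query(iter(raw), hot):
        print(matches)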
Abstract:
By 2050 it is estimated that the number of worldwide Alzheimer's disease (AD) patients will quadruple from the current figure of 36 million people. To date, no single test, prior to postmortem examination, can confirm that a person suffers from AD. Therefore, there is a strong need for accurate and sensitive tools for the early diagnosis of AD. The complex etiology and multiple pathogenesis of AD call for a system-level understanding of the currently available biomarkers and the study of new biomarkers via network-based modeling of heterogeneous data types. In this review, we summarize recent research on the study of AD as a connectivity syndrome. We argue that a network-based approach to biomarker discovery will provide key insights to fully understand the network degeneration hypothesis (the disease starts in specific network areas and progressively spreads to connected areas of the initial loci-networks), with a potential impact for early diagnosis and disease-modifying treatments. We introduce a new framework for the quantitative study of biomarkers that can help shorten the transition between academic research and clinical diagnosis in AD.
Abstract:
The SESAR (Single European Sky ATM Research) program is an ambitious research and development initiative to design the future European air traffic management (ATM) system. The study of the behavior of ATM systems using agent-based modeling and simulation tools can help the development of new methods to improve their performance. This paper presents an overview of existing agent-based approaches in air transportation (paying special attention to the challenges that exist for the design of future ATM systems) and, subsequently, describes a new agent-based approach that we proposed in the CASSIOPEIA project, which was developed according to the goals of the SESAR program. In our approach, we use agent models for different ATM stakeholders and, in contrast to previous work, our solution models new collaborative decision processes for flow traffic management, uses an intermediate level of abstraction (useful for simulations at larger scales), and was designed to be a practical tool (open and reusable) for the development of different ATM studies. It was successfully applied in three studies related to the design of future ATM systems in Europe.
Abstract:
This paper argues for the utility of advanced knowledge-based techniques to develop web-based applications that help consumers find products within e-commerce marketplaces. In particular, we describe the idea of a model-based approach to develop a shopping agent that dynamically configures a product according to the needs and preferences of customers. Finally, the paper summarizes the advantages provided by this approach.
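As a rough illustration of such dynamic configuration, the Python sketch below enumerates component combinations of a toy product model and keeps only those that satisfy compatibility constraints and the customer's stated budget and priority; the catalogue, constraints and scoring are invented for the example and are not taken from the paper.

    from itertools import product

    # Toy product model: a laptop configured from component options (illustrative data).
    CPUS = [{"name": "cpu_basic", "power": 35, "price": 150},
            {"name": "cpu_fast", "power": 65, "price": 320}]
    BATTERIES = [{"name": "battery_small", "capacity": 40, "price": 60},
                 {"name": "battery_large", "capacity": 80, "price": 110}]

    def compatible(cpu, battery):
        # Toy constraint from the product model: a power-hungry CPU needs the large battery.
        return cpu["power"] <= battery["capacity"]

    def configure(needs):
        """Return feasible configurations ranked by how well they match the customer's needs."""
        feasible = []
        for cpu, bat in product(CPUS, BATTERIES):
            cost = cpu["price"] + bat["price"]
            if compatible(cpu, bat) and cost <= needs["budget"]:
                score = cpu["power"] if needs["priority"] == "performance" else -cost
                feasible.append((score, cost, cpu["name"], bat["name"]))
        return sorted(feasible, reverse=True)

    print(configure({"budget": 450, "priority": "performance"}))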
Abstract:
This study suggests a theoretical framework for improving the teaching/learning process of the English employed in Aeronautical discourse, bringing together cognitive learning strategies, Genre Analysis and the Contemporary Theory of Metaphor (Lakoff and Johnson 1980; Lakoff 1993). It maintains that cognitive strategies such as imagery, deduction, inference and grouping can be enhanced by means of metaphor and genre awareness in the context of a content-based approach to language learning. A list of image metaphors and conceptual metaphors drawn from the terminological database METACITEC is provided. The metaphorical terms from the area of Aeronautics have been taken from specialised dictionaries and categorised according to the conceptual metaphors they respond to, by establishing the source domains and the target domains, as well as the semantic networks found. This information makes reference to the internal mappings underlying the discourse of aeronautics reflected in five aviation accident case studies, which are related to accident reports from the National Transportation Safety Board (NTSB), and provides an important source for designing language teaching tasks.
Abstract:
Fluid flow and fabric compaction during vacuum assisted resin infusion (VARI) of composite materials were simulated using a level set-based approach. Fluid infusion through the fiber preform was modeled using Darcy's equations for fluid flow through a porous medium. The stress partition between the fluid and the fiber bed was included by means of Terzaghi's effective stress theory. Tracking of the fluid front during infusion was introduced by means of the level set method. The resulting partial differential equations for the fluid infusion and the evolution of the flow front were discretized and solved approximately using the finite difference method with a uniform grid discretization of the spatial domain. The model was validated against uniaxial VARI experiments through a [0]8 E-glass plain woven preform, with the physical parameters of the model measured independently. The model predictions (in terms of the fabric thickness, pressure and fluid front evolution during filling) were in good agreement with the experimental results, showing the potential of the level set method to simulate resin infusion.
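For reference, the governing relations named in the abstract take roughly the following standard forms (the exact constitutive laws, boundary conditions and compaction model used in the paper may differ):

    \mathbf{u} = -\frac{\mathbf{K}}{\mu}\,\nabla p                      % Darcy flow of resin through the fiber preform
    \sigma_{\mathrm{total}} = \sigma' + p                               % Terzaghi partition between fiber bed stress and resin pressure
    \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0    % level set advection of the flow front \phi = 0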
Abstract:
The purpose of this study was to compare a number of state-of-the-art methods in airborne laser scanning (ALS) remote sensing with regard to their capacity to describe tree size inequality and other indicators related to forest structure. The indicators chosen were based on the analysis of the Lorenz curve: the Gini coefficient (GC), Lorenz asymmetry (LA), and the proportions of basal area (BALM) and stem density (NSLM) stocked above the mean quadratic diameter. Each method belonged to one of these estimation strategies: (A) estimating indicators directly; (B) estimating the whole Lorenz curve; or (C) estimating a complete tree list. Across these strategies, the most popular statistical methods for the area-based approach (ABA) were used: regression, random forest (RF), and nearest neighbour imputation. The latter included distance metrics based on either RF (NN–RF) or most similar neighbour (MSN). In the case of tree list estimation, methods based on individual tree detection (ITD) and semi-ITD, both combined with MSN imputation, were also studied. The most accurate method was direct estimation by best subset regression, which obtained the lowest cross-validated coefficients of variation of the root mean squared error, CV(RMSE), for most indicators: GC (16.80%), LA (8.76%), BALM (8.80%) and NSLM (14.60%). Similar figures [CV(RMSE) of 16.09%, 10.49%, 10.93% and 14.07%, respectively] were obtained by MSN imputation of tree lists by ABA, a method that also showed a number of additional advantages, such as better distributing the residual variance along the predictive range. In light of our results, ITD approaches may be clearly inferior to ABA with regard to describing the structural properties related to tree size inequality in forested areas.
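As a concrete reference for the Lorenz-curve indicators compared in the study, the following minimal Python sketch computes GC, BALM and NSLM from a tree list (field-measured or imputed); the formulas follow the standard textbook definitions and may differ in detail from those used in the paper, and the example diameters are invented.

    import numpy as np

    def gini(values):
        """Gini coefficient of a sample: twice the area between the Lorenz curve
        of the sorted values and the 1:1 line (0 = perfect equality)."""
        x = np.sort(np.asarray(values, dtype=float))
        n = x.size
        ranks = np.arange(1, n + 1)
        return (2.0 * np.sum(ranks * x) - (n + 1) * x.sum()) / (n * x.sum())

    def structure_indicators(dbh):
        """Tree size inequality indicators from a list of stem diameters (cm)."""
        d = np.asarray(dbh, dtype=float)
        ba = np.pi * (d / 200.0) ** 2          # basal area per stem in m^2 (d in cm)
        qmd = np.sqrt(np.mean(d ** 2))         # quadratic mean diameter
        above = d > qmd
        return {"GC": gini(ba),
                "BALM": ba[above].sum() / ba.sum(),   # share of basal area above the QMD
                "NSLM": above.mean()}                 # share of stems above the QMD

    print(structure_indicators([12.0, 15.5, 18.0, 22.3, 30.1, 41.7]))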
Abstract:
Emergency management is one of the key aspects of day-to-day operation procedures on a highway. Efficiency in the overall response to an incident is paramount in reducing its consequences. However, the approach of highway operators to incident management is still usually far from systematic and standardized. This paper attempts to address the issue, providing several hints on why this happens and a proposal for how the situation could be overcome. A performance-based approach to general system specification is introduced and then applied to a particular road emergency management task. A real testbed has been implemented to show the validity of the proposed approach. Ad-hoc sensors (one camera and one laser scanner) were deployed to acquire data, and advanced fusion techniques were applied at the processing stage to meet the specific user requirements in terms of functionality, flexibility and accuracy.
Abstract:
Reliability analyses provide an adequate tool to consider the inherent uncertainties that exist in geotechnical parameters. This dissertation develops a simple linearization-based approach, which uses first or second order approximations, to efficiently evaluate the system reliability of geotechnical problems.
First, reliability methods are employed to analyze the reliability of two tunnel design aspects: face stability and performance of support systems. Several reliability approaches —the first order reliability method (FORM), the second order reliability method (SORM), the response surface method (RSM) and importance sampling (IS)— are employed, with results showing that the assumed distribution types and correlation structures for all random variables have a significant effect on the reliability results. This emphasizes the importance of an adequate characterization of geotechnical uncertainties for practical applications. Results also show that both FORM and SORM can be used to estimate the reliability of tunnel-support systems; and that SORM can outperform FORM with an acceptable additional computational effort. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach only needs information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems —the stability of a slope in layered soil and a circular tunnel in rock— are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure. Advantages of the linearization approach with respect to alternative computational tools are discussed. It is also found that, if necessary, SORM —that approximates the true LSF better than FORM— can be employed to compute better estimations of the system’s reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to identify the fully specified representative slip surfaces (RSSs) of layered soil slopes, and such RSSs are then employed to estimate the system reliability of slopes, using our proposed linearization approach. Three typical benchmark-slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and advantages of the proposed method with respect to alternative methods are discussed. Results show that the proposed approach provides reliability estimates that improve previously published results, emphasizing the importance of finding good RSSs —and, especially, good (probabilistic) critical slip surfaces that might be non-circular— to obtain good estimations of the reliability of soil slope systems.
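For the series-system case, the linearization approach described above reduces to evaluating a multivariate normal probability at the vector of FORM reliability indices with their correlation matrix. The following Python sketch illustrates this with purely illustrative numbers; it is a generic implementation of that formula, not the dissertation's code.

    import numpy as np
    from scipy.stats import multivariate_normal

    def series_system_pf(betas, corr):
        """Linearized series-system failure probability, P_f ~ 1 - Phi_n(beta; R):
        the complement of the probability that every linearized limit state
        function remains in its safe domain."""
        betas = np.asarray(betas, dtype=float)
        corr = np.asarray(corr, dtype=float)
        safe = multivariate_normal(mean=np.zeros(betas.size), cov=corr).cdf(betas)
        return 1.0 - safe

    # Two correlated failure modes with FORM indices 2.5 and 3.0 (illustrative numbers only).
    print(series_system_pf([2.5, 3.0], [[1.0, 0.4], [0.4, 1.0]]))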
Abstract:
Abstract We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved to be inefficient for highly-dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define the regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities up to the order of 10^-42 are accurately estimated with little computational effort.
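The paper's importance function cannot be reproduced here, but the splitting mechanism underlying RESTART can be illustrated on a toy model: a biased random walk whose importance function is simply its current level, with trajectories restarted from each threshold they reach. The chain, thresholds and sample sizes in this Python sketch are illustrative assumptions, and the estimator shown is a simple fixed-effort splitting variant rather than the exact RESTART algorithm.

    import random

    P_UP = 0.3                         # upward step probability of the toy birth-death chain
    THRESHOLDS = [4, 8, 12, 16, 20]    # importance-function levels; reaching 20 is the rare event
    TRIALS_PER_STAGE = 5000

    def reaches(start, target):
        """Simulate from `start` until hitting `target` (success) or 0 (failure)."""
        x = start
        while 0 < x < target:
            x += 1 if random.random() < P_UP else -1
        return x == target

    def splitting_estimate():
        """Estimate P(reach 20 before 0 | start at 1) as a product of stage probabilities."""
        start, estimate = 1, 1.0
        for level in THRESHOLDS:
            hits = sum(reaches(start, level) for _ in range(TRIALS_PER_STAGE))
            if hits == 0:
                return 0.0
            estimate *= hits / TRIALS_PER_STAGE
            start = level              # survivors re-enter the next stage from this threshold
        return estimate

    # For this walk the target probability is around 6e-8, so a crude Monte Carlo run
    # would need on the order of 10^7 trajectories just to observe a single hit.
    print(splitting_estimate())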