27 results for Shared component model
at Universidad Politécnica de Madrid
Abstract:
A computational approach based on shared circuit models is presented in this work. The shared circuits model approach to socio-cognitive capacities recently proposed by Hurley in "The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading", Behavioral and Brain Sciences 31(1) (2008) 1–22, is enriched and improved here. A five-layer computational architecture for designing artificial cognitive control systems is proposed on the basis of a modified shared circuits model for emulating socio-cognitive experiences such as imitation, deliberation, and mindreading. In order to show the potential of this approach, a simplified implementation is applied to a case study: an artificial cognitive control system is used to control force in a manufacturing process, demonstrating the suitability of the suggested approach.
Abstract:
Cloud computing has seen impressive growth in recent years, with virtualization technologies being massively adopted to create IaaS (Infrastructure as a Service) public and private solutions. Today, the interest is shifting towards the PaaS (Platform as a Service) model, which allows developers to abstract away from the execution platform and focus only on the functionality. There are several public PaaS offerings available, but currently no private PaaS solution is ready for production environments. To fill this gap, a new solution must be developed. In this paper we present a key element for enabling this model: a cloud repository based on the OSGi component model. The repository stores, manages, provisions and resolves the dependencies of PaaS software components and services. It can federate with other repositories located in the same or different clouds, both private and public. This way, dependencies can be fulfilled collaboratively, and new business models can be implemented.
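The repository described above resolves the dependencies of PaaS components and federates with other repositories when a dependency cannot be satisfied locally. As a rough illustration of that kind of recursive resolution over an ordered list of repositories (a conceptual sketch only, not the paper's OSGi-based implementation; all component names and the federation list are hypothetical):

```python
# Minimal sketch of recursive dependency resolution across federated
# repositories. Illustration only; the paper's repository is built on the
# OSGi component model, and all names here are hypothetical.

def resolve(component, repositories, resolved=None):
    """Return an install order satisfying the dependencies of `component`.

    `repositories` is an ordered list of dicts mapping component names to
    dependency lists; the local repository is queried first, then the
    federated ones.
    """
    if resolved is None:
        resolved = []
    for repo in repositories:
        if component in repo:
            for dependency in repo[component]:
                if dependency not in resolved:
                    resolve(dependency, repositories, resolved)
            if component not in resolved:
                resolved.append(component)
            return resolved
    raise LookupError(f"{component!r} not found in any federated repository")

local = {"web-ui": ["http-service"], "http-service": ["logging"]}
public = {"logging": []}
print(resolve("web-ui", [local, public]))  # ['logging', 'http-service', 'web-ui']
```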
Abstract:
Context: This paper addresses one of the major end-user development (EUD) challenges, namely, how to pack today's EUD support tools with composable elements. This would give end users better access to more components which they can use to build a solution tailored to their own needs. The success of later end-user software engineering (EUSE) activities largely depends on how many components each tool has and how adaptable the components are to multiple problem domains. Objective: A system for automatically adapting heterogeneous components to a common development environment would offer a sizeable saving of time and resources within the EUD support tool construction process. This paper presents an automated adaptation system for transforming EUD components to a standard format. Method: This system is based on the use of description logic. Starting from a generic UML2 data model, this description logic is able to check whether an end-user component can be transformed to this modeling language through subsumption or as an instance of the UML2 model. Besides, it automatically finds a consistent, non-ambiguous and finite set of XSLT mappings to prepare the data so that the component can be leveraged as part of a tool that conforms to the target UML2 component model. Results: The proposed system has been successfully applied to components from four prominent EUD tools, which were automatically converted to a standard format. In order to validate the proposed system, rich internet applications (RIA) used as an operational support system for operators at a large services company were developed using automatically adapted standard-format components. These RIAs would be impossible to develop using each EUD tool separately. Conclusion: The positive results of applying our system for automatically adapting components from current tool catalogues are indicative of the system's effectiveness. Use of this system could foster the growth of web EUD component catalogues, leveraging a vast ecosystem of user-centred SaaS to further current EUSE trends.
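The adaptation described above ultimately applies automatically derived XSLT mappings to turn a tool-specific component description into a UML2-conformant one. The sketch below shows only that final transformation step with an invented stylesheet and element names (the description logic reasoning that derives the mappings in the paper is not reproduced here); it assumes the lxml library is available.

```python
# Sketch of applying an XSLT mapping to convert a tool-specific component
# description into a UML2-style one. The stylesheet, element and attribute
# names are invented for illustration.
from lxml import etree

xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/widget">
    <component name="{@id}">
      <xsl:for-each select="port">
        <interface name="{@name}"/>
      </xsl:for-each>
    </component>
  </xsl:template>
</xsl:stylesheet>""")

source = etree.XML(b'<widget id="chart"><port name="data-in"/></widget>')
transform = etree.XSLT(xslt)
print(etree.tostring(transform(source), pretty_print=True).decode())
```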
Abstract:
Open source is a software development paradigm that has seen a huge rise in recent years. It reduces IT costs and time to market, while increasing security and reliability. However, the difficulty of integrating developments from different communities and stakeholders prevents this model from reaching its full potential. This is mainly due to the challenge of determining and locating the correct dependencies for a given software artifact. To solve this problem, we propose the development of an extensible, model-based software component repository. This repository should be capable of resolving the dependencies between components and of working with existing repositories to access the needed artifacts transparently. It will also be easily extensible, enabling the creation of modules that support new kinds of dependencies or other existing repository technologies. The proposed solution will work with OSGi components and will itself be built on OSGi.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
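A toy sketch may help make this dependency between annotation levels concrete: a deliberately naive POS tagger mislabels a verb, and the sense tagger built on top of it inherits the error. Both "taggers" below are invented stand-ins, not real linguistic tools.

```python
# Toy illustration of error propagation from a low-level annotator (POS
# tagger) to a higher-level one (sense tagger). Both are invented.

def pos_tag(token):
    # A deliberately naive POS tagger: it will mislabel the verb "book".
    lexicon = {"book": "NOUN", "a": "DET", "flight": "NOUN", "I": "PRON"}
    return lexicon.get(token, "X")

def sense_tag(token, pos):
    senses = {("book", "NOUN"): "book%printed-work",
              ("book", "VERB"): "book%make-a-reservation"}
    return senses.get((token, pos), "unknown")

for token in ["I", "book", "a", "flight"]:
    pos = pos_tag(token)
    print(token, pos, sense_tag(token, pos))
# "book" is tagged NOUN, so the sense tagger inherits the error and outputs
# book%printed-work instead of book%make-a-reservation.
```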
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Infrastructure as a Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Clouds can be used by grids as providers of devices such as virtual machines, so that they only use the resources they need. But this requires grids to be able to decide when to allocate and release those resources. Here we introduce, and analyze through simulations, an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on the users' demand. This system has a strong emphasis on fairness, so that no user hinders the execution of other users' tasks by getting too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we expand to simulate infrastructure clouds. The results show how the proposed system can successfully adapt the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
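As a purely illustrative sketch of such a mechanism (not the one evaluated in the paper, whose pricing and fairness rules live inside the GridSim-based simulator), a demand-driven price update plus a simple scaling rule might look like the following; all thresholds and constants are hypothetical.

```python
# Illustrative demand-driven pricing and scaling rule: the price rises when
# utilization is high and falls when it is low, and extra virtual machines
# are requested only to cover the task shortfall. Hypothetical constants.

def update_price(price, used, capacity, sensitivity=0.1):
    utilization = used / capacity
    return max(0.01, price * (1 + sensitivity * (utilization - 0.5)))

def machines_to_add(pending_tasks, tasks_per_vm, used, capacity):
    shortfall = pending_tasks - (capacity - used) * tasks_per_vm
    return max(0, -(-shortfall // tasks_per_vm))  # ceiling division

price = 1.0
for used, pending in [(8, 20), (9, 5), (4, 0)]:
    price = update_price(price, used, capacity=10)
    print(f"price={price:.3f}", "add", machines_to_add(pending, 4, used, 10), "VMs")
```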
Abstract:
We study the dynamic response of a wind turbine structure subjected to theoretical seismic motions, taking into account the rotational component of ground shaking. Models are generated for a shallow, moderate crustal earthquake in the Madrid Region (Spain). Synthetic translational and rotational time histories are computed using the Discrete Wavenumber Method, assuming a point source and a horizontally layered earth structure. These are used to analyze the dynamic response of a wind turbine, represented by a simple finite element model. Von Mises stress values at different heights of the tower are used to study the dynamic structural response to a set of synthetic ground motion time histories.
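For reference, the von Mises stress used to assess the tower response is the standard textbook combination of principal stresses (a general formula, not something specific to this study):

```latex
\sigma_{\mathrm{vM}} =
\sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2
                       +(\sigma_2-\sigma_3)^2
                       +(\sigma_3-\sigma_1)^2\right]}
```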
Abstract:
A Near Infrared Spectroscopy (NIRS) industrial application was developed by the LPF-Tagralia team and transferred to a Spanish dehydrator company (Agrotécnica Extremeña S.L.) for the classification of dehydrator onion bulbs for breeding purposes. The automated operation of the system has allowed the classification of more than one million onion bulbs during the 2004 to 2008 seasons (Table 1). The performance achieved by the original model (R²=0.65; SEC=2.28 °Brix) was enough for qualitative classification thanks to the broad range of variation of the initial population (18 °Brix). Nevertheless, a reduction in the classification performance of the model has been observed with the passing of seasons. One of the reasons put forward is the reduction in the range of variation that naturally occurs during a breeding process; the other is variation in parameters other than the variable of interest, whose effects are probably affecting the measurements [1]. This study addresses the application of Independent Component Analysis (ICA) to this highly variable dataset coming from a NIRS industrial application, for the identification of the different sources of variation present through the seasons.
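A hedged sketch of how ICA can be applied to a matrix of NIR spectra (samples by wavelengths) is shown below, using scikit-learn's FastICA on synthetic data that stands in for the onion-bulb spectra; the wavelength range, "sources" and mixing are invented for illustration.

```python
# Sketch of applying Independent Component Analysis to a matrix of NIR
# spectra (samples x wavelengths) to separate sources of variation.
# Synthetic data stand in for the proprietary onion-bulb spectra.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
wavelengths = np.linspace(900, 1700, 200)          # nm, hypothetical range
sugar = np.exp(-((wavelengths - 1200) / 40) ** 2)  # pretend absorption band
seasonal_drift = np.linspace(0, 1, 200)            # pretend nuisance effect
mixing = rng.uniform(0.2, 1.0, size=(300, 2))      # 300 bulbs, 2 sources
spectra = mixing @ np.vstack([sugar, seasonal_drift])
spectra += 0.01 * rng.standard_normal(spectra.shape)

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(spectra)  # per-sample contribution of each source
loadings = ica.mixing_               # spectral signature of each source
print(scores.shape, loadings.shape)  # (300, 2) (200, 2)
```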
Abstract:
Concurrency in Logic Programming has received much attention in the past. One problem with many proposals, when applied to Prolog, is that they involve large modifications to the standard implementations, and/or the communication and synchronization facilities provided do not fit as naturally within the language model as we feel is possible. In this paper we propose a new mechanism for implementing synchronization and communication for concurrency, based on atomic accesses to designated facts in the (shared) database. We argue that this model is comparatively easy to implement and harmonizes better than previous proposals within the Prolog control model and standard set of built-ins. We show how in the proposed model it is easy to express classical concurrency algorithms and to subsume other mechanisms such as Linda, variable-based communication, or classical parallelism-oriented primitives. We also report on an implementation of the model and provide performance and resource consumption data.
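Although the mechanism is defined within Prolog, its core idea, communication through atomic accesses to designated facts in a shared database that can subsume Linda-style operations, can be sketched as a conceptual analogue in Python with a lock-protected fact store (this is not the paper's implementation):

```python
# Conceptual analogue (in Python, not Prolog) of communication through
# atomic accesses to designated facts in a shared database, in the style
# of Linda's out/in operations.
import threading

class FactStore:
    def __init__(self):
        self._facts, self._lock = [], threading.Condition()

    def assertz(self, fact):                 # Linda-style "out"
        with self._lock:
            self._facts.append(fact)
            self._lock.notify_all()

    def retract_wait(self, predicate):       # Linda-style blocking "in"
        with self._lock:
            while True:
                for fact in self._facts:
                    if fact[0] == predicate:
                        self._facts.remove(fact)
                        return fact
                self._lock.wait()

store = FactStore()
consumer = threading.Thread(target=lambda: print(store.retract_wait("job")))
consumer.start()
store.assertz(("job", 42))
consumer.join()
```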
Abstract:
An approximate analytic model of a shared memory multiprocessor with a Cache Only Memory Architecture (COMA), the bus-based Data Diffusion Machine (DDM), is presented and validated. It describes the timing and interference in the system as a function of the hardware, the protocols, the topology and the workload. Model results have been compared to results from an independent simulator. The comparison shows good model accuracy, especially for non-saturated systems, where the errors in response times and device utilizations are independent of the number of processors and remain below 10% in 90% of the simulations. Therefore, the model can be used as an average performance prediction tool that avoids expensive simulations in the design of systems with many processors.
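As a generic illustration of why such analytic models are attractive below saturation, a simple M/M/1-style bus approximation is sketched below; this is not the DDM model developed in the paper, and all parameters are hypothetical.

```python
# Generic illustration of analytic performance modelling of a shared bus
# (simple M/M/1-style approximation), in the spirit of avoiding full
# simulation. Not the paper's DDM model; parameters are hypothetical.

def bus_response_time(processors, request_rate, service_time):
    """Mean bus response time; valid only below saturation (rho < 1)."""
    arrival_rate = processors * request_rate        # requests per cycle
    rho = arrival_rate * service_time               # bus utilization
    if rho >= 1.0:
        raise ValueError("bus saturated (utilization >= 1)")
    return service_time / (1.0 - rho), rho

for n in (4, 8, 16):
    t, rho = bus_response_time(n, request_rate=0.01, service_time=4.0)
    print(f"{n:2d} processors: utilization={rho:.2f}, response={t:.1f} cycles")
```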
Abstract:
Context: Software testing is a key aspect of software reliability and quality assurance, in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses have been detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The outcome is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
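A hypothetical sketch of what a structured lessons learned record for software testing might look like follows; the abstract does not list the actual fields of the proposed architectural model, so these are illustrative only.

```python
# Hypothetical sketch of a structured "lesson learned" record for software
# testing; the fields are illustrative, not the paper's actual model.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TestingLesson:
    title: str
    context: str                 # project, test level, technology
    problem: str                 # what went wrong or was hard
    solution: str                # what the testers did about it
    applicability: str           # when the lesson should be reused
    recorded_on: date = field(default_factory=date.today)
    tags: list[str] = field(default_factory=list)

lesson = TestingLesson(
    title="Flaky UI tests on nightly builds",
    context="Web front end, Selenium regression suite",
    problem="Timing-dependent failures masked real regressions",
    solution="Replaced fixed sleeps with explicit waits on DOM state",
    applicability="Any suite driving asynchronous user interfaces",
    tags=["regression", "automation"],
)
print(lesson.title, lesson.tags)
```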
Abstract:
We consider a mathematical model related to the stationary regime of a nuclear fusion plasma magnetically confined in a Stellarator device. Using the geometric properties of the fusion device, a suitable system of coordinates and averaging methods, the mathematical problem may be reduced to a two-dimensional free boundary problem of nonlocal type, where the corresponding differential equation is of the Grad–Shafranov type. The current balance within each magnetic flux surface makes it possible to define the third covariant magnetic field component with respect to the averaged poloidal flux function. We present here some numerical experiments and give a numerical approximation for the averaged poloidal flux and for the third covariant magnetic field component.
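For context, the equation obtained after averaging is of Grad–Shafranov type; the standard (axisymmetric, unaveraged) Grad–Shafranov equation, given here only for reference and not as the paper's reduced nonlocal problem, reads:

```latex
\Delta^{*}\psi \;=\;
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
+ \frac{\partial^{2}\psi}{\partial Z^{2}}
\;=\; -\,\mu_{0} R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi}
```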
Abstract:
A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free-surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H, which describes the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations), supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological nonequilibrium thermodynamics for a general nonisothermal setting, taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting, the resulting model is compared to literature results, and its base states, corresponding to homogeneous or vertically stratified flat layers, are analyzed.
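For reference, the convective Cahn-Hilliard part of model-H in its standard isothermal form (a generic textbook form, not the paper's full nonisothermal system with Soret and Dufour effects) is:

```latex
\partial_{t} c + \mathbf{u}\!\cdot\!\nabla c
= \nabla\!\cdot\!\big[\, M(c)\,\nabla \mu \,\big],
\qquad
\mu = \frac{\partial f(c)}{\partial c} - \sigma\,\nabla^{2} c
```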
Abstract:
This project aims to characterize and optimize a professional sound system, where "characterize" means determining the particular attributes of each component integrated in the system, and "optimize" means finding the best way to obtain a flat response across the whole frequency range, free of distortion, over the largest possible area. The sound system under study belongs to a live band, so it is installed and configured at each concert according to the characteristics of the venue, whether indoor or outdoor. Apart from these particularities, the complete system is divided into two clusters, L and R (left and right sides of the stage), each consisting of a digital signal processor, four amplification stages, an eight-unit line array system, and a set of eight subwoofers.
To accomplish this objective, the project has been divided into the phases described below. First, measurements were taken in the anechoic chamber of the EUITT to obtain the characteristics of each element of the system; these measurements were stored in ASCII format. Second, a graphical interface was designed that, using the stored measurements, characterizes both the individual response of each element in the sound system chain and the combined response of a line array unit and a subwoofer unit. The interface is interactive and can automatically deliver the configuration settings that optimize the combination, that is, achieve alignment in the frequency range shared by both units. The measurements made in the anechoic chamber were also used to model the complete line array system and to perform free-field simulations with acoustic prediction software. The configuration settings that align the individual elements, obtained through the developed interface, were tested to verify their validity for the full line array and subwoofer clusters. In addition, the system optimization methods proposed by recognized professionals in the field were analyzed with the aim of applying them at a real event. During the preparation and setup of the event, the configuration settings provided by the interface were applied, and their validity was verified through on-site measurements following the criteria proposed in the optimization methods studied.
Abstract:
In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, with the certainty of these beliefs updated based on MSBN. The classical BDI agent architecture has been extended in order to manage uncertainty using Bayesian reasoning. The resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario, comparing it with two previously developed agent models. The evaluation has been carried out on a real testbed diagnosis scenario using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
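As a minimal, hedged sketch of the kind of Bayesian belief updating involved (a single-variable Bayes rule update, far simpler than the multi-agent MSBN machinery the paper uses; all probabilities below are hypothetical):

```python
# Minimal sketch of Bayesian belief updating for fault diagnosis: posterior
# probability of a link fault given successive alarm observations. The real
# B2DI model uses Multiply Sectioned Bayesian Networks across agents; the
# numbers here are hypothetical.

def update_belief(prior, likelihood_if_fault, likelihood_if_ok):
    evidence = likelihood_if_fault * prior + likelihood_if_ok * (1 - prior)
    return likelihood_if_fault * prior / evidence

belief = 0.05                        # prior P(link fault)
for alarm_observed in [True, True, False]:
    if alarm_observed:
        belief = update_belief(belief, 0.9, 0.1)  # P(alarm | fault), P(alarm | ok)
    else:
        belief = update_belief(belief, 0.1, 0.9)  # P(no alarm | fault), P(no alarm | ok)
    print(f"P(fault | evidence so far) = {belief:.3f}")
```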