877 results for Real applications
Abstract:
Although the computational complexity of the logic underlying OWL 2, the standard Web Ontology Language (OWL), appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than the established theoretical complexity bound suggests, since that bound reflects the worst-case scenario. State-of-the-art reasoners like FACT++, HERMIT, PELLET and RACER have demonstrated that, even with fairly expressive fragments of OWL 2, acceptable performance can be achieved. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that in our opinion could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results with HERMIT on prominent ontologies.
Abstract:
The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.
Abstract:
This thesis provides a significant amount of information on the use of a new advanced polymer (polyolefin-based) especially suitable in the form of fibres to be added to concrete. At the time of writing, there is a noteworthy lack of research and knowledge about its use as a randomly distributed element to reinforce concrete. Fibres with an approximate 1 mm diameter, a length of 48-60 mm, an embossed surface and improved mechanical properties are employed. The promising properties of the polyolefin material (low density, low cost, good strength behaviour and high chemical stability) justify the research effort involved and demonstrate the advantages for practical purposes. While most of the research has used self-compacting concrete, given that this type of matrix material is optimal for filling the concrete formwork, standard vibration-compacted mixes have also been used for comparison purposes. In addition, the interest in fibre-reinforced concrete technology, in both research and application, supports the significant interest in the results and considerations provided by the thesis. 
The resulting composite material, polyolefin fibre reinforced concrete (PFRC), has been extensively tested and studied. The results have allowed the following objectives to be met: - Assessment of the mechanical properties of PFRC in order to demonstrate its good post-cracking strength performance in structural elements subjected to tensile stresses. - Assessment of the results against the existing structural codes, regulations and test methods, and evaluation of the potential of PFRC to meet the requirements and replace traditional steel-bar reinforcement in certain applications. - Development of numerical tools designed to evaluate the capability of PFRC to substitute, either partially or totally, standard steel reinforcing bars, either alone or in conjunction with steel fibres. - Provision, based on the large amount of experimental work and real applications, of a series of guidelines and recommendations for the practical and reliable design and use of PFRC. Furthermore, the thesis also reports promising results from an innovative line in the field of fibre-reinforced concrete: the design of a fibre cocktail to reinforce the concrete by using two types of fibres simultaneously. Polyolefin fibres were combined with steel fibres in self-compacting concrete, identifying synergies that could serve as the basis for the future use of fibre-reinforced concrete technology.
Abstract:
The objective of this thesis is the development and characterization of optical label-free biosensors based on Bio-Photonic sensing Cells (BICELLs). 
BICELL is a novel biosensor concept developed by the research group; it combines vertical optical interrogation techniques with photonic structures produced using micro- and nano-fabrication methods. Several main conclusions are drawn from this work. Firstly, a standard BICELL based on Fabry-Perot (FP) interferometers is defined, which demonstrated its capacity for performance comparisons among different structured BICELLs, as well as for low-cost immunoassays. Different available fabrication techniques were studied for BICELL manufacturing. Contact lithography at wafer scale was found to produce cost-effective, reproducible and high-quality structures, with a resolution of 700 nm. The response of the developed BICELLs to immunoassays is also studied in this work. The influence of BICELL geometry and size on the immunoassay was studied, which resulted in a new approach to predicting the biosensing behaviour of any structured optical biosensor from its effective surface and optical sensitivity. A novel, low-cost technique for experimentally characterizing the optical sensitivity, based on ultrathin-film deposition, is also demonstrated. Finally, the capability of using the BICELLs developed in this thesis in real applications is demonstrated by detecting hormones, viruses and proteins.
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago. This is mainly due to the increase in computational power and the evolution of the theoretical fields of robotics and control. Even though there is plenty of information in the current literature on these topics, it is not easy to find clear guidance on how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled. Therefore, for advanced control techniques the system must first be identified. This objective is itself cumbersome and never straightforward, requiring great expertise, and some criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with Parallel Manipulators (PMs), since their closed-loop structures give rise to highly nonlinear systems. On this basis the current work is developed, which intends to summarize and gather all the concepts and experience involved in the control of a Hydraulic Parallel Manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in designing advanced control techniques for PMs. The analysis of the PM under study is broken down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented, and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the model of the system and extending the adaptive controller of the actuator, a control strategy for the PM is developed and its performance is analyzed in simulation.
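As a baseline for the traditional PID controllers discussed in the abstract, the sketch below shows a minimal discrete PID loop. It is purely illustrative: the class and gains are hypothetical and much simpler than the adaptive controller actually developed for the hydraulic actuators.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch, not the
    thesis's adaptive controller)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated integral of the error
        self.prev_error = 0.0     # error at the previous step

    def step(self, setpoint, measurement):
        """Compute one control action from the current tracking error."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In a simulation loop the controller output would drive a plant model (for instance a first-order actuator approximation), exactly the kind of closed-loop analysis the thesis performs before moving to adaptive schemes.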
Abstract:
This paper proposes an emotion transplantation method capable of modifying a synthetic speech model through the use of CSMAPLR adaptation in order to incorporate emotional information learned from a different speaker model while maintaining the identity of the original speaker as much as possible. The proposed method relies on learning both emotional and speaker identity information by means of their adaptation functions from an average voice model, and combining them into a single cascade transform capable of imbuing the desired emotion into the target speaker. This method is then applied to the task of transplanting four emotions (anger, happiness, sadness and surprise) into 3 male speakers and 3 female speakers and evaluated in a number of perceptual tests. The results of the evaluations show how the perceived naturalness for emotional text significantly favors the use of the proposed transplanted emotional speech synthesis when compared to traditional neutral speech synthesis, evidenced by a large increase in the perceived emotional strength of the synthesized utterances at a slight cost in speech quality. A final evaluation with a robotic laboratory assistant application shows how by using emotional speech we can significantly increase the students’ satisfaction with the dialog system, showing that the proposed emotion transplantation system provides benefits in real applications.
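In generic CSMAPLR-style notation (illustrative, and not necessarily the exact formulation of the paper), each adaptation is an affine transform of the average-voice model parameters, and the cascade described above composes the speaker transform (A_s, b_s) with the emotion transform (A_e, b_e) into a single affine transform:

```latex
% Cascade of two affine CSMAPLR-style transforms (notation illustrative):
% a speaker transform (A_s, b_s) followed by an emotion transform (A_e, b_e),
% applied to an average-voice mean vector \mu.
\hat{\mu} = A_e \left( A_s \mu + b_s \right) + b_e
          = \left( A_e A_s \right) \mu + \left( A_e b_s + b_e \right)
```

The composed matrix and bias on the right-hand side are what allow the two adaptations to be applied as one transform at synthesis time.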
Abstract:
This paper provides an overview of the colloquium's discussion session on natural language understanding, which followed presentations by M. Bates [Bates, M. (1995) Proc. Natl. Acad. Sci. USA 92, 9977-9982] and R. C. Moore [Moore, R. C. (1995) Proc. Natl. Acad. Sci. USA 92, 9983-9988]. The paper reviews the dual role of language processing in providing understanding of the spoken input and an additional source of constraint in the recognition process. To date, language processing has successfully provided understanding but has provided only limited (and computationally expensive) constraint. As a result, most current systems use a loosely coupled, unidirectional interface, such as N-best or a word network, with natural language constraints as a postprocess, to filter or re-sort the recognizer output. However, the level of discourse context provides significant constraint on what people can talk about and how things can be referred to; when the system becomes an active participant, it can influence this order. But sources of discourse constraint have not been extensively explored, in part because these effects can only be seen by studying systems in the context of their use in interactive problem solving. This paper argues that we need to study interactive systems to understand what kinds of applications are appropriate for the current state of technology and how the technology can move from the laboratory toward real applications.
Abstract:
One of the main characteristics of virtualization technology is Live Migration, which allows virtual machines to be moved between physical machines without interrupting execution. This capability enables more sophisticated policies within a cloud computing environment, such as optimizing the use of electrical energy and computational resources. However, Live Migration can impose severe performance degradation on the applications running in the virtual machines and cause several impacts on the service provider's infrastructure, such as network congestion and interference with co-located virtual machines on the physical hosts. Unlike many other studies, this study considers the virtual machine's workload an important factor and argues that choosing a suitable moment for the migration can reduce the penalties imposed by Live Migration. This work introduces Application-aware Live Migration (ALMA), which intercepts Live Migration requests and, based on the application's workload, defers the migration to a more favourable moment. The experiments conducted in this work showed that the architecture reduced migration times by up to 74% in the benchmark experiments and by up to 67% in the experiments with real workloads. The data transfer caused by Live Migration was reduced by up to 62%. In addition, this work introduces a model that predicts the cost of Live Migration for a given workload, as well as a migration algorithm that is not sensitive to the virtual machine's memory usage.
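The deferral idea behind ALMA can be sketched as a small interception loop: postpone a requested migration until a workload metric drops below a threshold or a deadline expires. All names, the metric, and the threshold below are hypothetical placeholders, not ALMA's actual interfaces.

```python
import time

def defer_migration(get_load, migrate, threshold=0.3,
                    poll_s=1.0, max_wait_s=60.0):
    """Illustrative ALMA-style deferral (names hypothetical):
    postpone a live migration until the VM's workload metric
    (e.g. a normalised memory-dirtying rate in [0, 1]) drops below
    `threshold`, or until `max_wait_s` elapses; then migrate."""
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        if get_load() < threshold:
            break                 # favourable moment found
        time.sleep(poll_s)        # workload still high: wait and re-check
    return migrate()
```

In a real system `get_load` would query the hypervisor for a workload indicator and `migrate` would submit the actual live migration; the point of the sketch is only the intercept-and-defer control flow.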
Abstract:
Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG with Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while keeping a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6× compared with the single-threaded CPU implementation. The GPU implementation has also been applied to a real application with time constraints, acceleration of 3D scene reconstruction for egomotion, in order to validate the proposal.
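The step that dominates GNG learning time, and the natural target for GPU parallelisation, is finding the two units nearest to each input sample. A minimal CPU-side sketch of that step and of the subsequent adaptation is shown below (function names and learning rates are illustrative; a CUDA version would evaluate the distances for all units in parallel):

```python
import numpy as np

def two_nearest(units, x):
    """Indices of the two units closest to input x. This distance search
    dominates GNG learning time and is the step a CUDA implementation
    parallelises across units."""
    d2 = np.sum((units - x) ** 2, axis=1)  # squared distances to all units
    s1 = int(np.argmin(d2))
    d2[s1] = np.inf                        # exclude the winner
    s2 = int(np.argmin(d2))
    return s1, s2

def adapt(units, x, s1, neighbours, eps_b=0.2, eps_n=0.006):
    """Move the winning unit and its topological neighbours towards x
    (the GNG adaptation step; eps values are typical, not prescriptive)."""
    units[s1] += eps_b * (x - units[s1])
    for n in neighbours:
        units[n] += eps_n * (x - units[n])
```

A full GNG additionally maintains an edge graph, accumulates error per unit and periodically inserts new units; the two routines above are only the per-sample inner loop.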
Abstract:
Let T be a given subset of ℝⁿ, whose elements are called sites, and let s ∈ T. The Voronoi cell of s with respect to T consists of all points closer to s than to any other site. In many real applications, the position of some elements of T is uncertain due either to random external causes or to measurement errors. In this paper we analyze the effect on the Voronoi cell of small changes in s or in a given non-empty set P ⊂ T \ {s}. Two types of perturbations of P are considered, one of them not increasing the cardinality of T. In more detail, the paper provides conditions for the corresponding Voronoi cell mappings to be closed, lower semicontinuous and upper semicontinuous. All the involved conditions are expressed in terms of the data.
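The definition above has a direct computational reading: a point x lies in the Voronoi cell of s when no other site is closer. A minimal membership test (here using the closed-cell convention with ≤, so boundary points belong to the cell; the function name is illustrative):

```python
import math

def in_voronoi_cell(x, s, T):
    """Check whether point x lies in the (closed) Voronoi cell of site s
    with respect to the site set T: x is no farther from s than from
    any other site. Points are tuples of coordinates."""
    d = math.dist(x, s)
    return all(d <= math.dist(x, t) for t in T if t != s)
```

With the strict inequality of the paper's definition, points equidistant from two sites would belong to neither open cell; the closed convention above assigns them to both.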
Abstract:
Numerical modelling methodologies are important for their application to engineering and scientific problems, because there are processes for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values of the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a general sense in this space. Using this approach requires inverting a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied on two real applications: a problem in hydrodynamics and an engineering problem related to fluids, heat and transport in an energy generation plant. A test of the predictive capacity of the methodology is also performed using cross-validation.
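The core idea of fitting a hyper-surface with a finite-element basis can be illustrated in one dimension: expand the unknown relationship in piecewise-linear "hat" basis functions and solve the resulting least-squares linear system. Because neighbouring hats overlap only pairwise, the normal-equation matrix is banded, which is the structural property that enables a fast solver. All function names below are illustrative, and this 1-D least-squares sketch is far simpler than the paper's Galerkin formulation.

```python
import numpy as np

def hat(x, left, centre, right):
    """One piecewise-linear 'hat' basis function on [left, right],
    peaking at `centre` (degenerate edges handled for boundary hats)."""
    y = np.zeros_like(x)
    up = (x >= left) & (x <= centre)
    down = (x > centre) & (x <= right)
    if centre > left:
        y[up] = (x[up] - left) / (centre - left)
    else:
        y[up] = 1.0               # degenerate left edge of the first hat
    if right > centre:
        y[down] = (right - x[down]) / (right - centre)
    return y

def fit_piecewise_linear(x, y, nodes):
    """Least-squares fit of scattered data by sum_j c_j * phi_j(x),
    where phi_j are hat functions centred at the given nodes.
    B^T B is tridiagonal since only adjacent hats overlap."""
    ext = np.concatenate(([nodes[0]], nodes, [nodes[-1]]))
    B = np.column_stack([hat(x, ext[j], ext[j + 1], ext[j + 2])
                         for j in range(len(nodes))])
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    return c, B
```

For data lying exactly on a piecewise-linear function, the fitted coefficients are simply the nodal values; for noisy data the same system yields the best approximation in the least-squares sense.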
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis and road traffic accident analysis. It demands efficient solutions to many new, expensive and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and the features. We measure proximity in an average fashion to address possibly non-uniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only compare these modes but also illustrate the efficiency of the algorithm.
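The average-proximity measure can be illustrated with a brute-force baseline: rank each candidate feature by its mean distance to the cluster's points and keep the k closest. This naive sketch (names illustrative, uniform weights) is exactly what the paper's multi-step algorithm with proximity bounds is designed to avoid computing in full.

```python
import math

def top_k_features(cluster, features, k):
    """Rank features by average Euclidean distance to the cluster's
    points (smaller average = more closely associated) and return the
    names of the k best. `features` is a list of (name, coords) pairs;
    weights are taken as uniform in this sketch."""
    def avg_dist(feature):
        _, coords = feature
        return sum(math.dist(p, coords) for p in cluster) / len(cluster)
    ranked = sorted(features, key=avg_dist)
    return [name for name, _ in ranked[:k]]
```

The multi-step paradigm in the paper prunes candidates using cheap lower and upper bounds on this average distance before computing it exactly.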
Abstract:
A major task of traditional temporal event sequence mining is to find all frequent event patterns from a long temporal sequence. In many real applications, however, events are often grouped into different types, and not all types are of equal importance. In this paper, we consider the problem of efficient mining of temporal event sequences which lead to an instance of a specific type of event. Temporal constraints are used to ensure sensibility of the mining results. We will first generalise and formalise the problem of event-oriented temporal sequence data mining. After discussing some unique issues in this new problem, we give a set of criteria, which are adapted from traditional data mining techniques, to measure the quality of patterns to be discovered. Finally we present an algorithm to discover potentially interesting patterns.
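The event-oriented setting above can be sketched as two steps: collect the event types occurring in a time window immediately before each occurrence of the target event type, then keep the type-subsets whose support across those windows is high enough. The functions and the support criterion below are an illustrative baseline, not the paper's algorithm or quality criteria.

```python
from collections import Counter
from itertools import combinations

def preceding_windows(events, target_type, window):
    """For each occurrence of `target_type`, collect the set of event
    types seen in the time window immediately before it.
    `events` is a list of (timestamp, event_type) pairs."""
    windows = []
    for t, etype in events:
        if etype == target_type:
            windows.append(frozenset(
                e for ts, e in events
                if t - window <= ts < t and e != target_type))
    return windows

def frequent_patterns(windows, min_support):
    """Count event-type subsets across windows and keep those whose
    support (fraction of windows containing them) reaches min_support."""
    counts = Counter()
    for w in windows:
        for r in range(1, len(w) + 1):
            for sub in combinations(sorted(w), r):
                counts[sub] += 1
    n = len(windows)
    return {p for p, c in counts.items() if c / n >= min_support}
```

The temporal constraint (the window length) is what keeps the discovered patterns sensible, as the abstract notes: only events close enough in time to the target occurrence are counted as leading to it.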
Abstract:
The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, which can be imposed by adding suitable regularization terms to the error function. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
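The modified error function described above has the generic regularized form sketched below: a data term plus a weighted smoothness penalty built from derivatives of the network mapping. The notation is illustrative and need not match the paper's exact functional; the key point is that the penalty term involves derivatives of the mapping, which is what breaks standard back-propagation.

```latex
% Generic regularized error: data term E plus a derivative-based
% smoothness penalty over the training points x^{(p)}, with weight \nu
% (notation illustrative, not necessarily the paper's exact functional).
\tilde{E}(\mathbf{w}) = E(\mathbf{w}) + \nu \, \Omega(\mathbf{w}),
\qquad
\Omega(\mathbf{w}) = \frac{1}{2} \sum_{p}
    \left\| \left. \frac{\partial y}{\partial x} \right|_{x^{(p)}} \right\|^{2}
```

Minimizing such a functional requires propagating derivatives of the penalty through the network, which is the computation the paper's algorithm makes efficient.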
Abstract:
We report a distinctive polarization mode coupling behaviour of tilted fibre Bragg gratings (TFBGs) with tilt angles exceeding 45°. The ex-45° TFBGs exhibit pronounced polarization mode splitting resulting from the birefringence induced by the asymmetry of the grating structure. We have fabricated TFBGs with a structure tilted at 81° and studied their properties under transverse load applied to their equivalent fast and slow axes. The results show that the light coupling to the orthogonally polarized modes of the 81°-TFBGs changes only when the load is applied to the slow axis, giving a prominent directional loading response. With a view to real applications, we further investigated the possibility of interrogating such a TFBG-based load sensor using a low-cost, compact single-wavelength source and power detector. The experimental results clearly show that the 81°-TFBGs, together with the proposed power-measurement interrogation scheme, may be developed into an optical fibre vector sensor system capable not just of measuring the magnitude but also of recognizing the direction of the applied transverse load. Using such an 81°-TFBG-based load sensor, a load change as small as 1.6 × 10⁻² g may be detected with a standard photodiode detector.