993 results for Relational databases -- Design


Relevance:

30.00%

Publisher:

Abstract:

People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interface of a tool, interact with each other. How information is distributed between internal and external representations significantly affects search performance. However, few studies have examined the relationship between interface types and search task types in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models; the taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I also developed interface prototypes for search tasks over relational data, which were used in the experiments. The experiments described in this study follow a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory was adopted as a theoretical framework for analyzing and predicting the search performance of relational data. The representation dimensions and data scales, as well as the search task types, were shown to be the main factors determining search efficiency and effectiveness. In particular, the more external representations are used, the better the search task performance, and the results suggest that the best search performance occurs when the question type and the corresponding data-scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes a hand-geometry biometric system oriented to unconstrained, contactless scenarios, together with a stress detection method able to determine to what extent an individual is under stress based on physiological signals. Concerning the biometric system, this thesis contributes the design and implementation of a hand-based biometric system in which acquisition is carried out without any contact and the user template is created using information from that individual alone. In addition, this thesis proposes a segmentation algorithm based on multiscale aggregation to tackle the problems of hand acquisition in real, unconstrained environments. Feature extraction and matching are also specific contributions of this thesis, which provides schemes that carry out both tasks with low computational cost and high recognition accuracy. Finally, the system is evaluated according to the international standard ISO/IEC 19795 on six public databases. In relation to the stress detection method, this thesis proposes a system based on two physiological signals, namely heart rate and galvanic skin response, together with an innovative stress template that gathers the behaviour of both signals under stressing and non-stressing situations. The system uses fuzzy logic to decide the degree of stress of an individual. Overall, it detects stress accurately and in real time, providing a suitable complement to current biometric systems, where a stress detection system can be applied directly to avoid situations in which individuals are forced to provide their biometric data. Finally, this thesis includes a user acceptability study in which a total of 250 users assessed their acceptance of the proposed biometric technique, as well as a prototype implemented on a mobile device and its evaluation.
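
As a rough illustration of the fuzzy-logic decision step described above, the sketch below combines heart rate and galvanic skin response through triangular membership functions and a two-rule base to produce a stress degree in [0, 1]. The breakpoints, rules, and defuzzification are illustrative assumptions, not the thesis's actual stress template.

```python
# Minimal fuzzy-logic stress estimator over two physiological signals.
# Membership breakpoints and rules are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def stress_level(heart_rate, skin_conductance):
    """Return a stress degree in [0, 1] from HR (bpm) and GSR (microsiemens)."""
    hr_high = tri(heart_rate, 80, 110, 140)
    hr_low = tri(heart_rate, 40, 60, 85)
    gsr_high = tri(skin_conductance, 5, 12, 20)
    gsr_low = tri(skin_conductance, 0, 2, 6)

    # Rule base: stress fires when both signals are elevated; calm when both are low.
    stressed = min(hr_high, gsr_high)
    calm = min(hr_low, gsr_low)

    # Weighted-average defuzzification over the two rule consequents.
    total = stressed + calm
    return 0.5 if total == 0 else (stressed * 1.0 + calm * 0.0) / total

print(stress_level(115, 14))  # high HR and GSR -> close to 1.0
print(stress_level(58, 1.5))  # resting values  -> close to 0.0
```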

Relevance:

30.00%

Publisher:

Abstract:

An EMI filter design procedure for power converters is proposed. Based on a given noise spectrum, information about the converter's noise source impedance, and design constraints, the design space of the input filter is defined. The design is based on component databases and detailed models of the filter components, including high-frequency parasitics, losses, weight, volume, etc. The design space is mapped onto a performance space in which different filter implementations are evaluated and compared. A multi-objective optimization approach is used to obtain designs that are optimal with respect to a given performance function.
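
A minimal sketch of the map-and-compare step the abstract describes: candidate filters are enumerated from a small component database, each is mapped to a performance point, and only the Pareto-optimal designs are kept. The component values and the volume/attenuation models below are invented placeholders, not the paper's models.

```python
# Sketch: map a filter design space onto a performance space and keep
# the Pareto-optimal designs. Component values and cost metrics are
# illustrative assumptions.
from itertools import product

def evaluate(design):
    """Map one (L, C) candidate to (volume, attenuation shortfall)."""
    L, C = design
    volume = 2.0 * L + 1.5 * C                 # toy weight/volume model
    attenuation = 40.0 * (L * C) ** 0.5        # toy insertion-loss model
    shortfall = max(0.0, 60.0 - attenuation)   # dB below the EMC limit
    return (volume, shortfall)

def dominates(p, q):
    """True if p is no worse than q in every objective and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

inductors = [0.5, 1.0, 2.0, 4.0]     # candidate component database (mH)
capacitors = [0.1, 0.47, 1.0, 2.2]   # (uF)
candidates = [(d, evaluate(d)) for d in product(inductors, capacitors)]

pareto = [(d, p) for d, p in candidates
          if not any(dominates(q, p) for _, q in candidates)]
for design, perf in pareto:
    print(design, perf)
```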

Relevance:

30.00%

Publisher:

Abstract:

The final degree project described in detail in this report attempts to apply much of the knowledge I have acquired during my studies to a real project. A platform has been developed that is capable of hosting ideas written by people from all over the world who want to share them with others, so that the ideas can be commented on, rated, and collectively improved. Since these ideas can belong to any field, they can be classified into the categories that best fit each idea. The application offers a very descriptive RESTful API in which every resource has been identified and structured so that all elements can be managed simply through the HTTP verbs, regardless of the client using it. The architecture follows the model-view-controller design pattern and integrates current technologies such as Spring, Liferay, SmartGWT, and MongoDB (among many others) with the goal of building a secure, scalable, and modular application. The application's data are persisted in two kinds of databases, a relational one (MySQL) and a non-relational one (MongoDB), taking full advantage of the features each provides. The proposed client is accessible through a web browser and is based on the Liferay portal. Several portlets (widgets) have been developed that make up the content structure the end user sees. Through them, users can access the application's content (ideas, comments, and other social content) in a pleasant way, since the portlets communicate with each other and make asynchronous requests to the RESTful API without reloading the whole page. Furthermore, users can register in the system to contribute more content or obtain roles that grant administrative permissions. A Scrum methodology was followed, dividing the project into small tasks and developing them in an agile way. Tools such as Jenkins supported continuous integration, ensuring through test runs that all components work. Quality has been a central concern: software methodologies and design patterns were followed to guarantee a reusable, optimised, and modular design, aided by the Sonar tool, and a comprehensive test suite covers all the application's components. In short, an innovative open-source application has been designed that lays a well-defined foundation so that, if it is ever put into production, it can help people share thoughts and ideas and thereby improve the world we live in.
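
As a sketch of the verb-based resource management the RESTful API exposes, the snippet below implements a minimal "ideas" resource with POST/GET/PUT/DELETE handlers. It uses Python and Flask purely for brevity; the actual project is built on Spring and Liferay, and the route names and fields here are hypothetical.

```python
# Minimal sketch of HTTP-verb resource management for an "ideas" API.
# Flask is used for brevity; routes and fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
ideas = {}      # in-memory stand-in for the MySQL/MongoDB persistence
next_id = 1

@app.route("/ideas", methods=["POST"])                   # create
def create_idea():
    global next_id
    idea = {"id": next_id, "text": request.json["text"],
            "categories": request.json.get("categories", [])}
    ideas[next_id] = idea
    next_id += 1
    return jsonify(idea), 201

@app.route("/ideas/<int:idea_id>", methods=["GET"])      # read
def get_idea(idea_id):
    return jsonify(ideas[idea_id])

@app.route("/ideas/<int:idea_id>", methods=["PUT"])      # update
def update_idea(idea_id):
    ideas[idea_id]["text"] = request.json["text"]
    return jsonify(ideas[idea_id])

@app.route("/ideas/<int:idea_id>", methods=["DELETE"])   # delete
def delete_idea(idea_id):
    del ideas[idea_id]
    return "", 204
```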

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this project has been to introduce machine learning into the FleSe application. FleSe is a web application that runs fuzzy queries over databases containing crisp (precise) information. To do so, it uses criteria that define the fuzzy concepts employed in the queries, and it allows each user to personalise those criteria. This is where machine learning comes in: the default criteria change and learn from the personalisations that users make over time. Secondary objectives were to become familiar with web development and design, and to refresh and extend knowledge of fuzzy logic and the logic programming language Ciao-Prolog. Over the course of the project, and especially after studying the results, it became clear that clustering the users is what distinguishes this version of the application from the previous one. The idea is the following: a machine learning algorithm could be run over the criteria personalisations of all users, but the great diversity of user opinions can lead the algorithm to conclude erroneous or unrepresentative criteria. To solve this problem, the users are first clustered so that each group shares roughly the same opinion or criterion about a concept; the learning algorithm is then used to refine the default criterion of each user group. A possible improvement for future versions of FleSe would be better control and handling of the plserver executable, which lets the web application use Ciao-Prolog to carry out the fuzzy logic behind the queries. One of the most serious problems with plserver is that it blocks the execution thread when it tries to load a file with errors; if this happens repeatedly, it blocks all subsequent requests and therefore the whole application. With users and potential clients in mind, it would also be important to let FleSe work with SQL databases instead of storing the data in Prolog files. Another possible improvement would be to cluster the users on different features depending on the fuzzy concepts to be used in the queries. Each fuzzy concept would then get its own user groups, whose members hold different opinions about that concept, yielding more precise default criteria for each user and each fuzzy concept.
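
As a rough illustration of the cluster-then-learn step described above, the sketch below groups users' personalised thresholds with a tiny one-dimensional k-means and takes each cluster's mean as its learned default criterion. The data, the choice of k-means, and the averaging step are assumptions made for illustration; they are not FleSe's actual algorithm.

```python
# Sketch: cluster users by their personalised criteria, then learn one
# default criterion per cluster. Data and method are illustrative only.
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny 1-D k-means over users' threshold personalisations."""
    random.seed(seed)
    centroids = random.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)
        # Each cluster's mean becomes its learned default criterion.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

# Hypothetical personalised thresholds (cm) at which users call a person "tall".
thresholds = [168, 170, 172, 171, 185, 188, 186, 190]
centroids, groups = kmeans_1d(thresholds, k=2)
print(centroids)   # one learned default criterion per user group
```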

Relevance:

30.00%

Publisher:

Abstract:

GlycoSuiteDB is a relational database that curates information from the scientific literature on glycoprotein-derived glycan structures, their biological sources, the references in which the glycan was described and the methods used to determine the glycan structure. To date, the database includes most published O-linked oligosaccharides from the last 50 years and most N-linked oligosaccharides that were published in the 1990s. For each structure, information is available concerning the glycan type, linkage and anomeric configuration, mass and composition. Detailed information is also provided on native and recombinant sources, including tissue and/or cell type, cell line, strain and disease state. Where known, the proteins to which the glycan structures are attached are reported, and cross-references to the SWISS-PROT/TrEMBL protein sequence databases are given if applicable. The GlycoSuiteDB annotations include literature references which are linked to PubMed, and detailed information on the methods used to determine each glycan structure is noted to help the user assess the quality of the structural assignment. GlycoSuiteDB has a user-friendly web interface which allows the researcher to query the database using monoisotopic or average mass, monosaccharide composition, glycosylation linkage (e.g. N- or O-linked), reducing terminal sugar, attached protein, taxonomy, tissue or cell type and GlycoSuiteDB accession number. Advanced queries using combinations of these parameters are also possible. GlycoSuiteDB can be accessed on the web at http://www.glycosuite.com.
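
The sketch below imitates, in plain Python, the kind of combined query the web interface supports: filtering glycan records by monoisotopic mass within a tolerance, linkage type and taxonomy. The records, field names and tolerance are hypothetical stand-ins, not GlycoSuiteDB's actual schema.

```python
# Sketch of a combined GlycoSuiteDB-style query over hypothetical records.
glycans = [
    {"accession": "GS001", "mass": 910.33,
     "composition": {"Hex": 3, "HexNAc": 2},
     "linkage": "N-linked", "taxonomy": "Homo sapiens"},
    {"accession": "GS002", "mass": 1072.38,
     "composition": {"Hex": 4, "HexNAc": 2},
     "linkage": "N-linked", "taxonomy": "Mus musculus"},
]

def query(records, mass=None, tol=0.5, linkage=None, taxonomy=None):
    """Combine optional filters, mirroring the interface's advanced queries."""
    hits = records
    if mass is not None:
        hits = [r for r in hits if abs(r["mass"] - mass) <= tol]
    if linkage is not None:
        hits = [r for r in hits if r["linkage"] == linkage]
    if taxonomy is not None:
        hits = [r for r in hits if r["taxonomy"] == taxonomy]
    return hits

print(query(glycans, mass=910.3, linkage="N-linked"))
```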

Relevance:

30.00%

Publisher:

Abstract:

Natural Language Interfaces to Query Databases (NLIDBs) have been an active research field since the 1960s. However, they have not been widely adopted. This article explores some of the biggest challenges and approaches for building NLIDBs and proposes techniques to reduce implementation and adoption costs. The article describes AskMe*, a new system that leverages some of these approaches and adds an innovative feature: query-authoring services, which lower the entry barrier for end users. The advantages of these approaches are demonstrated experimentally. The results confirm that, even though AskMe* is automatically reconfigurable across multiple domains, its accuracy is comparable to that of domain-specific NLIDBs.

Relevance:

30.00%

Publisher:

Abstract:

The general purpose of the EQUIFASE Conference is to promote scientific and technological exchange between people from both academia and industry in the field of Phase Equilibria and Thermodynamic Properties for the Design of Chemical Processes. Topics: Measurement of Thermodynamic Properties. Phase Equilibria and Chemical Equilibria. Theory and Modelling. Alternative Solvents. Supercritical Fluids. Ionic Liquids. Energy. Gas and Oil. Petrochemicals. Environment and Sustainability. Biomolecules and Biotechnology. Product and Process Design. Databases and Software. Education.

Relevance:

30.00%

Publisher:

Abstract:

Federal Highway Administration, McLean, Va.

Relevance:

30.00%

Publisher:

Abstract:

Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.

Relevance:

30.00%

Publisher:

Abstract:

Retrieving large amounts of information over wide area networks, including the Internet, is problematic due to issues arising from response latency, lack of direct memory access to data-serving resources, and fault tolerance. This paper describes a design pattern for handling the results of queries that return large amounts of data. Typically these queries would be made by a client process across a wide area network (or the Internet), through one or more middle tiers, to a relational database residing on a remote server. The solution combines several data retrieval strategies: iterators for traversing data sets while presenting an appropriate level of abstraction to the client, double-buffering of data subsets, multi-threaded data retrieval, and query slicing. This design has recently been implemented and incorporated into the framework of a commercial software product developed at Oracle Corporation.
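
A minimal sketch of how iterators, double-buffering and query slicing can combine: the client iterates over one buffered slice while a background thread prefetches the next. The fetch_slice function and the slice size are hypothetical stand-ins for the product's actual data-access layer.

```python
# Sketch: double-buffered, sliced retrieval behind an iterator.
import threading

def fetch_slice(offset, size):
    """Stand-in for one remote query slice (e.g. LIMIT size OFFSET offset)."""
    data = range(offset, offset + size)
    return [x for x in data if x < 23]    # pretend the result set has 23 rows

def buffered_results(slice_size=8):
    """Yield rows from the current buffer while the next slice loads in parallel."""
    offset = 0
    current = fetch_slice(offset, slice_size)
    while current:
        nxt = {}
        t = threading.Thread(
            target=lambda: nxt.setdefault(
                "rows", fetch_slice(offset + slice_size, slice_size)))
        t.start()                  # prefetch next slice (double buffering)
        for row in current:        # client consumes the current buffer
            yield row
        t.join()
        current = nxt["rows"]
        offset += slice_size

for row in buffered_results():
    print(row)
```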

Relevance:

30.00%

Publisher:

Abstract:

Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronising a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites, and the type of coordination required is often the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline, and proposes a modified protocol suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is preserved even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for non-time-critical distributed database systems; in a distributed real-time control system, however, a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties; they also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the timed Petri net and the associated reachability tree shows that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described for two types of system: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems; using the same analysis techniques as those used for the protocol design, it can be shown that the overall system performs as expected both functionally and temporally.
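
As a toy illustration of attaching a deadline to an atomic commit decision, the sketch below runs the voting phase of a simplified two-phase commit and aborts as soon as the deadline expires, so the outcome stays atomic even when a participant is slow. This is an illustrative simplification, not the protocol developed in the thesis.

```python
# Sketch: deadline-aware atomic commit decision (simplified 2PC voting phase).
import time

def atomic_commit(participants, deadline_s):
    """Return 'COMMIT' only if every participant votes yes before the deadline."""
    start = time.monotonic()
    for vote in participants:
        if time.monotonic() - start > deadline_s:
            return "ABORT"     # deadline expired: fail safe, decision stays atomic
        if not vote():         # phase 1: collect a yes/no vote
            return "ABORT"
    return "COMMIT"            # phase 2 would broadcast this decision

fast_yes = lambda: True
slow_yes = lambda: time.sleep(0.2) or True   # a participant that responds late

print(atomic_commit([fast_yes, fast_yes], deadline_s=0.1))   # COMMIT
print(atomic_commit([slow_yes, slow_yes], deadline_s=0.1))   # ABORT
```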

Relevance:

30.00%

Publisher:

Abstract:

Database systems have a user interface, one component of which is normally a query language based on a particular data model. Typically, data models provide primitives to define, manipulate and query databases, and these primitives are often designed to form self-contained query languages. This thesis describes a prototype implementation of a system which allows users to specify queries against the database in a query language whose primitives are not those provided by the model on which the database system is actually based, but those provided by a different data model. The implementation chosen is the Functional Query Language Front End (FQLFE), which uses the Daplex functional data model and query language. Using FQLFE, users can describe the underlying database (based on the relational model) in terms of Daplex and then pose queries against this specified view in Daplex. FQLFE transforms these queries into the query language (Quel) of the underlying target database system (Ingres). The automation of part of the Daplex function definition phase is also described and its implementation discussed.
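
A toy illustration of the front-end idea: a Daplex-style request (an entity, a projection and a predicate) is translated into a Quel retrieve statement. The tiny query structure handled here is invented for illustration and is far simpler than FQLFE's actual transformation.

```python
# Toy translation of a Daplex-style functional query into a Quel
# retrieve statement. The handled subset is invented for illustration.
def to_quel(entity, projections, predicate=None):
    var = entity[0]                      # e.g. 's' as the tuple variable for 'student'
    target = ", ".join(f"{var}.{p}" for p in projections)
    quel = f"range of {var} is {entity}\nretrieve ({target})"
    if predicate:
        field, op, value = predicate
        quel += f" where {var}.{field} {op} {value}"
    return quel

# Daplex-like request: for each s in student such that age(s) > 20, name(s)
print(to_quel("student", ["name"], predicate=("age", ">", 20)))
# -> range of s is student
#    retrieve (s.name) where s.age > 20
```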

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes work exploring the application of expert system techniques to the domain of designing durable concrete. The nature of concrete durability design is described and some problems from the domain are discussed, along with related work on expert systems in concrete durability. Various implementation languages (PROLOG and OPS5) are considered and rejected in favour of a shell, CRYSTAL3 (later CRYSTAL4). Criteria for useful expert system shells in the domain are discussed, and CRYSTAL4 is evaluated in the light of these criteria. Modules in various sub-domains (mix design, sulphate attack, steel corrosion and alkali-aggregate reaction) are developed and organised under a blackboard system called DEX. Extensions to the CRYSTAL4 modules are considered for different knowledge representations. These include LOTUS123 spreadsheets implementing models that incorporate some of the mathematical knowledge in the domain, and design databases that represent tabular design knowledge. Hypertext representations of the original building standards texts are proposed as a tool for providing a well-structured and extensive justification/help facility, and a standardised approach to module development is proposed that uses hypertext development as a structured basis for expert systems development. Some areas of deficient domain knowledge are highlighted, particularly in the use of data from mathematical models and in gaps and inconsistencies in the original knowledge source (the Digests).

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that symbolically represents the peaks and valleys, as well as the spatial structure, of a waveform in the form of a tree. An initial approximation to the RT, called the Skeletal Tree (ST), is first generated algorithmically; the exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSEs of 0.56 and 0.71 for peaks and valleys, respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness, respectively.
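
As a small illustration of the skeletal-tree stage, the sketch below labels the turning points of a pitch contour as peaks and valleys, i.e. the symbolic structure to which fuzzy logic would then assign exact numerical values. The contour and the flat node list are illustrative; the paper's RT is a genuine tree structure.

```python
# Sketch: extract the peaks and valleys of an F0 contour symbolically,
# before fuzzy logic assigns exact values. Contour and encoding are
# illustrative assumptions, not the paper's RT construction.
def skeletal_nodes(contour):
    """Label each interior point of an F0 contour as a peak or a valley."""
    nodes = []
    for i in range(1, len(contour) - 1):
        prev, cur, nxt = contour[i - 1], contour[i], contour[i + 1]
        if cur > prev and cur > nxt:
            nodes.append(("peak", i))
        elif cur < prev and cur < nxt:
            nodes.append(("valley", i))
    return nodes

f0 = [110, 130, 150, 140, 120, 135, 125, 100]   # toy F0 samples (Hz)
print(skeletal_nodes(f0))   # [('peak', 2), ('valley', 4), ('peak', 5)]
```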