937 results for Relational Databases
Abstract:
Three rhesus monkeys (Macaca mulatta) and four pigeons (Columba livia) were trained in a visual serial probe recognition (SPR) task. A list of visual stimuli (slides) was presented sequentially to the subjects. Following the list and after a delay interval, a probe stimulus was presented that could be either from the list (Same) or not from the list (Different). The monkeys readily acquired a variable-list-length SPR task, while the pigeons showed acquisition only under a constant-list-length condition. However, the monkeys memorized the responses to the probes (absolute strategy) when overtrained with the same lists and probes, whereas the pigeons compared the probe to the list in memory (relational strategy). Performance of the pigeons on the 4-item constant list length was disrupted when blocks of trials of different list lengths were embedded between the 4-item blocks. Serial position curves for recognition at variable probe delays showed better relative performance on the last items of the list at short delays (0-0.5 seconds) and better relative performance on the initial items of the list at long delays (6-10 seconds for the pigeons and 20-30 seconds for the monkeys and a human adolescent). The serial position curves also showed reliable primacy and recency effects at intermediate probe delays. The monkeys showed evidence of using a relational strategy in the variable probe delay task. The results are the first demonstration of relational serial probe recognition performance in an avian species and suggest similar underlying dynamic recognition memory mechanisms in primates and avians.
Abstract:
This paper describes the spatial data handling procedures used to create a vector database of the Connecticut shoreline from Coastal Survey Maps. The appendix contains detailed information on how the procedures were implemented using Geographic Transformer Software 5 and ArcGIS 8.3. This was a joint project of the Connecticut Department of Environmental Protection and the University of Connecticut Center for Geographic Information and Analysis.
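The procedures themselves are documented in the appendix for Geographic Transformer 5 and ArcGIS 8.3. As a rough, hypothetical sketch of one such step (georeferencing a scanned survey sheet with ground control points before the shoreline is digitized), the Python/GDAL snippet below may help readers picture the workflow; the file names, control-point coordinates and EPSG code are placeholders rather than values from the project.

```python
# Hypothetical sketch: georeference a scanned Coastal Survey Map with GDAL,
# then warp it into a projected CRS so the shoreline can be digitized as vectors.
# File names, GCP coordinates and the EPSG code are placeholders, not project data.
from osgeo import gdal

src = "scanned_survey_sheet.tif"          # raw scan, pixel coordinates only
gcps = [
    # gdal.GCP(map_x, map_y, z, pixel, line)
    gdal.GCP(1012345.0, 800123.0, 0, 150, 200),
    gdal.GCP(1019876.0, 800456.0, 0, 4920, 230),
    gdal.GCP(1013000.0, 793500.0, 0, 180, 3890),
]

# Attach the control points, then resample into the target coordinate system
# (EPSG:2234, NAD83 / Connecticut State Plane in US feet, assumed here).
gdal.Translate("with_gcps.tif", src, GCPs=gcps, outputSRS="EPSG:2234")
gdal.Warp("georeferenced.tif", "with_gcps.tif",
          dstSRS="EPSG:2234", resampleAlg="bilinear")
```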
Abstract:
Correct species identifications are of tremendous importance for invasion ecology, as mistakes could lead to misdirecting limited resources against harmless species or to inaction against problematic ones. DNA barcoding is becoming a promising and reliable tool for species identification; however, the efficacy of such molecular taxonomy depends on the gene region(s) that provide a unique sequence to differentiate among species and on the availability of reference sequences in existing genetic databases. Here, we assembled a list of aquatic and terrestrial non-indigenous species (NIS) and checked two leading genetic databases for corresponding sequences of six genome regions used for DNA barcoding. The genetic databases were checked in 2010, 2012, and 2016. All four aquatic kingdoms (Animalia, Chromista, Plantae and Protozoa) were initially roughly equally represented in the genetic databases, with 64, 65, 69, and 61% of NIS included, respectively. Sequences for terrestrial NIS were present at rates of 58 and 78% for Animalia and Plantae, respectively. Six years later, the number of sequences for aquatic NIS had increased to 75, 75, 74, and 63%, respectively, while those for terrestrial NIS had increased to 74 and 88%, respectively. Genetic databases are marginally better populated with sequences of terrestrial plant NIS than with those of aquatic NIS and terrestrial animal NIS. The rate at which sequences are added to databases is not equal among taxa. Though some groups of NIS, mostly aquatic ones, are not detectable at all based on available data, encouragingly, the current availability of sequences for taxa with environmental and/or economic impact is relatively good and continues to increase with time.
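As an illustration of the kind of database check described above (not the authors' actual pipeline), the following sketch uses Biopython's Entrez interface to ask whether GenBank holds any barcode sequences for a given species and gene region; the species names and the search term are placeholders.

```python
# Minimal sketch (not the study's pipeline): query GenBank via Biopython's Entrez
# interface for barcode sequences of one species and one gene region.
# The species names and search term are illustrative placeholders.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

def barcode_count(species, gene="COI"):
    """Return how many nucleotide records match the species plus gene region."""
    term = f'"{species}"[Organism] AND {gene}[All Fields]'
    handle = Entrez.esearch(db="nucleotide", term=term)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for sp in ["Dreissena polymorpha", "Eriocheir sinensis"]:
    n = barcode_count(sp)
    print(f"{sp}: {'present' if n else 'absent'} ({n} records)")
```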
Abstract:
Despite the fact that input–output (IO) tables form a central part of the System of National Accounts, each individual country's national IO table exhibits more or less different features and characteristics, reflecting the country's socioeconomic idiosyncrasies. Consequently, the compilers of a multi-regional input–output table (MRIOT) are advised to thoroughly examine the conceptual as well as methodological differences among countries in the estimation of basic statistics for national IO tables and, if necessary, to carry out pre-adjustment of these tables into a common format prior to the MRIOT compilation. The objective of this study is to provide a practical guide for harmonizing national IO tables to construct a consistent MRIOT, referring to the adjustment practices used by the Institute of Developing Economies, JETRO (IDE-JETRO) in compiling the Asian International Input–Output Table.
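As a toy illustration of the kind of pre-adjustment discussed above (not IDE-JETRO's actual procedure), the sketch below maps a national table's sector classification onto a hypothetical common classification via a concordance and aggregates the flows; all sector codes and values are invented.

```python
# Toy sketch of harmonizing one national IO table into a common sector
# classification before MRIOT compilation. Sector codes, the concordance,
# and the flow values are invented for illustration only.
import pandas as pd

# National inter-industry flow table (national sector codes).
national = pd.DataFrame(
    [[10.0, 4.0, 1.0],
     [ 3.0, 8.0, 2.0],
     [ 0.5, 1.5, 6.0]],
    index=["agri", "food_mfg", "other_mfg"],
    columns=["agri", "food_mfg", "other_mfg"],
)

# Concordance: national sector -> common (harmonized) sector.
concordance = {"agri": "AGR", "food_mfg": "MFG", "other_mfg": "MFG"}

# Relabel both axes, then aggregate rows and columns into the common scheme.
harmonized = national.rename(index=concordance, columns=concordance)
harmonized = harmonized.groupby(level=0).sum()        # aggregate rows
harmonized = harmonized.T.groupby(level=0).sum().T    # aggregate columns
print(harmonized)
```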
Abstract:
Work on distributed data management commenced shortly after the introduction of the relational model in the mid-1970s. The 1970s and 1980s were very active periods for the development of distributed relational database technology, and claims were made that within the following ten years centralized databases would be an “antique curiosity” and most organizations would move toward distributed database managers [1]. That prediction has certainly come true, and all commercial DBMSs today are distributed.
Abstract:
Geographic Information Systems (GIS) are developed to handle enormous volumes of data and are equipped with numerous functionalities intended to capture, store, edit, organise, process, analyse and represent geographically referenced information. Industrial simulators for driver training, on the other hand, are real-time applications that require a virtual environment, whether geospecific, geogeneric or a combination of the two, over which the simulation programs are run. Ultimately, this environment constitutes a geographic location with its specific characteristics of geometry, appearance, functionality, topography, etc. The set of elements that enables the virtual simulation environment to be created, and within which the simulator user can move, is usually called the Visual Database (VDB). The work described here addresses a topic of major interest in the field of industrial training simulators: the problem of analysing, structuring and describing the virtual environments to be used in large driving simulators. This paper sets out a methodology that uses the capabilities and benefits of Geographic Information Systems for organising, optimising and managing the Visual Database of the simulator and, more generally, for enhancing the quality and performance of the simulator.
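One way such a GIS-based organisation of the Visual Database can pay off at run time, shown here purely as an illustrative sketch and not as the paper's implementation, is to index terrain or model tiles spatially so that the simulator loads only what lies near the vehicle; the rtree package and all tile extents below are assumptions.

```python
# Illustrative sketch (not the paper's system): index Visual Database tiles by
# their 2D bounding boxes so a driving simulator can fetch only the tiles
# around the vehicle. Requires the 'rtree' package; tile extents are invented.
from rtree import index

tiles = {
    0: (0.0,    0.0,    1000.0, 1000.0),   # (minx, miny, maxx, maxy) in metres
    1: (1000.0, 0.0,    2000.0, 1000.0),
    2: (0.0,    1000.0, 1000.0, 2000.0),
}

idx = index.Index()
for tile_id, bbox in tiles.items():
    idx.insert(tile_id, bbox)

def tiles_around(x, y, radius=500.0):
    """Return the IDs of tiles whose extent intersects a square around (x, y)."""
    query = (x - radius, y - radius, x + radius, y + radius)
    return list(idx.intersection(query))

print(tiles_around(950.0, 100.0))   # tiles near the current vehicle position
```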
Abstract:
The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed at many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters such as the system pressure, inlet subcooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization, Japan. Consequently, JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, JNES has decided to also release the data from the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for a follow-up international benchmark, the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks.
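For readers outside the field, the two figures of merit exercised in these benchmarks are conventionally defined as ratios of a predicted critical condition to the actual operating condition. The definitions below are the standard textbook forms, not formulas quoted from the benchmark specifications.

```latex
% Standard definitions (textbook form, not quoted from the benchmark specifications)
\[
  \mathrm{DNBR} = \frac{q''_{\mathrm{crit}}}{q''_{\mathrm{local}}}, \qquad
  \mathrm{CPR} = \frac{P_{\mathrm{crit}}}{P_{\mathrm{bundle}}}
\]
% DNBR: ratio of the predicted critical (DNB) heat flux to the local heat flux (PWR case).
% CPR: ratio of the bundle power at which boiling transition is predicted to the
%      actual operating bundle power (BWR case). Margin exists while both ratios exceed 1.
```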
Abstract:
The calculus of binary relations was introduced by De Morgan in 1860 and was greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov showed how relation algebras can provide a variable-free way of formalizing first-order logic, higher-order logic and set theory, among other formal systems. Building on those mathematical results, this thesis develops denotational and operational semantics for Constraint Logic Programming (CLP) using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and as a technical foundation: a semantics is called executable when program execution can be carried out using the ordinary theory and tools that define the semantic universe, in this case equational reasoning. Throughout the work, pure algebraic reasoning is the basis of the denotational and operational results, eliminating the classical non-equational meta-theory associated with traditional semantics for Logic Programming; execution itself is performed equationally, to the point that the denotational semantics of a CLP program is directly executable. Techniques such as program optimization, partial evaluation and abstract interpretation find a natural place in these algebraic models, and properties such as correctness of the implementation or of program transformations are easy to check, since they are instances of the general equational theory. In the first part of the work, Constraint Logic Programs are translated to binary relations in a modified version of the distributive relation algebras used by Tarski; distributive relation algebras with a fixed-point operator are shown to capture the standard theory and meta-theory of CLP, including the trees used in proof search. The standard set-theoretic interpretation of these relations coincides with the standard semantics of CLP. Execution is carried out by a rewriting system; adequacy and operational equivalence of the semantics are proved, and a unification algorithm based on relation rewriting is defined.
In the second part of the work, the relation-algebraic approach is refined using allegory theory, the categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture in a declarative way the number of logical variables used by a predicate or program state. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation, and a categorical machine is derived directly from the semantics. The machine is based on relation composition, with a pullback-calculation algorithm at its core, defined with the help of a notion of diagram rewriting. In this operational interpretation, the domains of the tabulations carry information about memory allocation and free variables, while shared state is faithfully represented by categorical projections, making the execution mechanism more efficient than the rewriting system of the first part; the machine specification also induces the formal derivation of an efficient instruction set. The work closes by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, such as abstract data types and strict and lazy functions, while preserving the fully declarative character of the semantics.
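To make the translation idea concrete with a generic textbook example (not the thesis's exact encoding), a definite clause whose body chains two goals through a shared variable corresponds to an inclusion between relations:

```latex
% Generic illustration (not the thesis's exact encoding): the clause
%   p(X,Z) :- q(X,Y), r(Y,Z).
% corresponds to the relational inequation
\[
  Q \mathbin{;} R \;\subseteq\; P
\]
% where ; is relation composition: x (Q;R) z  iff  there exists y with x Q y and y R z.
% The shared variable Y becomes the existentially quantified middle element,
% so the clause is expressed without mentioning any variables at all.
```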
Abstract:
The application of Geographic Information Systems (GIS) to Archaeology and other humanities disciplines is not new. What is new is their evolution towards distributed, interoperable systems and towards structures with policies for the shared and coordinated use of data; all of these aspects are encompassed by the Spatial Data Infrastructure (SDI). INSPIRE is the main European exponent of these questions in terms of both initiative and legal framework. Archaeological methodology gathers and generates a great amount of data, whose attributes or intrinsic characteristics include position and time, aspects traditionally exploited by GIS. The data are catalogued, organised, maintained, shared and published, and potential consumers are beginning to have them at their disposal. All this information, traditionally stored on record cards and later in relational alphanumeric databases, may in many cases be considered "metadata", as it contains information that is useful to other users in the processes of discovery and exploitation of the data. Moreover, these data are often accompanied by information about themselves, describing their specifications, quality, etc. We use metadata every day: a book's bibliographic record, or the specifications of a computer. Metadata may be defined as "descriptive information regarding the context, quality, condition and characteristics of a resource, data or object, whose purpose is to facilitate its retrieval, identification, evaluation, preservation and/or interoperability". In Spain there is an initiative to standardise the description of metadata for geospatial datasets: the Núcleo Español de Metadatos (NEM, Spanish Metadata Nucleus). It contains elements for describing the particular characteristics of geographic data, and includes all the mandatory elements of the ISO 19115 standard and of the Dublin Core metadata set, traditionally used in library science. Aware of the need for metadata to optimise the search and retrieval of data, this work aims to formalise the documentation of archaeological data using the NEM, thereby achieving the interoperability of archaeological information.
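As a purely illustrative sketch (the record content and identifiers are invented, and a real NEM or ISO 19115 record carries many more mandatory fields), a minimal description of an archaeological dataset using classic Dublin Core element names could look like this:

```python
# Illustrative sketch only: a minimal metadata record for an archaeological
# dataset using classic Dublin Core element names. The values are invented and
# a real NEM / ISO 19115 record would carry many additional mandatory fields.
record = {
    "title":       "Excavation survey points, hypothetical site 'El Cerro'",
    "creator":     "Example Archaeology Team",
    "subject":     "archaeology; excavation; survey points",
    "description": "Point features recorded during a hypothetical field season.",
    "publisher":   "Example University",
    "date":        "2010-09-15",
    "type":        "Dataset",
    "format":      "ESRI Shapefile",
    "identifier":  "example-dataset-0001",        # placeholder identifier
    "language":    "es",
    "coverage":    "bounding box of the site",    # spatial extent, placeholder
    "rights":      "CC BY 4.0",
}

# A catalogue could filter records by any element, e.g. all datasets in Spanish:
spanish = [r for r in [record] if r["language"] == "es"]
print(len(spanish), "record(s) in Spanish")
```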
Abstract:
Verified security is an emerging methodology for building high-assurance proofs of security properties of computer systems. Computer systems are modeled as probabilistic programs, and rigorous program-semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has so far fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class is that its notions are stated as a "similarity" condition between the output distributions of two probabilistic programs, where the similarity is quantified using some notion of distance between probability distributions. The class comprises prominent security notions from several areas, such as private data analysis, information-flow analysis and cryptography. These include, for instance, indifferentiability, which enables an idealized component of a system to be securely replaced by a concrete implementation without significantly altering its security properties, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years. The lack of rigorous techniques for verifying these properties is an important open problem that needs to be addressed. In this dissertation we introduce several quantitative program logics for reasoning about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is provided through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the applicability of the approach by building fully machine-checked proofs for several systems that were out of reach of the verified security methodology. These comprise, among others, a construction for building "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
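For reference, the standard definition of differential privacy from the literature is reproduced below; the dissertation's Coq formalization may state it differently. A randomized mechanism M is (ε, δ)-differentially private when, for all adjacent databases D and D' and all output sets S:

```latex
% Standard (epsilon, delta)-differential privacy as usually stated in the
% literature; the dissertation's formalization in Coq may differ in presentation.
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
\]
% for every pair of adjacent databases D, D' (differing in a single record) and
% every measurable set S of outputs. A quantitative relational Hoare logic bounds
% exactly this kind of distance between the output distributions of two programs.
```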
Abstract:
Over the last few years, the Pennsylvania State University (PSU) under the sponsorship of the US Nuclear Regulatory Commission (NRC) has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data—the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmarks’ activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely, CTF, to the steady state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: firstly, to assess these models and to examine their strengths and weaknesses; and secondly, to identify the areas for improvement.
Abstract:
This final degree project, described in detail in this report, attempts to apply much of the knowledge I have acquired during my studies to a real project. The objective has been to develop a platform capable of hosting ideas from people all over the world, where users can share their ideas, comment on and rate the ideas of others, and together help to improve them. Since these ideas can be of any kind, it is possible to classify them into the categories that best fit them. The application offers a very descriptive RESTful API in which each resource has been identified and structured so that all elements can be managed easily through the HTTP verbs, regardless of the client using it. The architecture follows the model-view-controller design pattern and integrates current technologies such as Spring, Liferay, SmartGWT and MongoDB (among others) with the purpose of creating a secure, scalable and modular application. The application's data are persisted in two kinds of databases, one relational (MySQL) and one non-relational (MongoDB), taking full advantage of the features each of them provides. The proposed client is accessible through a web browser and is based on the Liferay portal. Several "Portlets" or "Widgets" have been developed, which make up the content structure seen by the end user. Through them, the user can access the application's content (ideas, comments and other social content) in a pleasant way, since these Portlets communicate with each other and make asynchronous requests to the RESTful API without reloading the whole page. In addition, users can register in the system to contribute more content or to obtain roles that grant administration permissions. A Scrum methodology was followed, with the aim of dividing the project into small tasks and developing them in an agile way. Tools such as Jenkins supported continuous integration and ensured, by running the test suites, that all components work. Quality has been a central concern of the project: software methodologies and design patterns have been followed to guarantee a high-quality, reusable, optimized and modular design, a goal supported by the Sonar tool, and a comprehensive test suite covering all the application's components has been implemented. In short, an innovative open-source application has been designed that establishes a well-defined basis so that, if it is ever put into production, it will help people to share thoughts and ideas and thereby improve the world we live in.
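The platform itself exposes its API from a Java/Spring stack; purely to illustrate the verb-to-resource mapping described above (the base URL, paths and JSON fields are hypothetical, and Python is used here only for brevity), a client interaction could look like the following.

```python
# Hypothetical client-side illustration of the verb-to-resource mapping the
# abstract describes. The base URL, paths and JSON fields are invented; the
# real platform is implemented with a Java/Spring stack, not Python.
import requests

BASE = "http://localhost:8080/api"   # placeholder base URL

# Create an idea (POST), then read, update and delete it with the other verbs.
created = requests.post(f"{BASE}/ideas",
                        json={"title": "Solar benches", "category": "urbanism"})
idea_id = created.json()["id"]                                # id assigned by the server

requests.get(f"{BASE}/ideas/{idea_id}")                       # read one resource
requests.put(f"{BASE}/ideas/{idea_id}",
             json={"title": "Solar benches v2"})              # full update
requests.post(f"{BASE}/ideas/{idea_id}/comments",
              json={"text": "Great idea!"})                   # nested resource
requests.delete(f"{BASE}/ideas/{idea_id}")                    # remove it
```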