938 results for Business intelligence, data warehouse, sql server
Abstract:
"Say that you like this page." This is one of the many invitations addressed, every day, to anyone browsing the Internet. Whether reading an article on La Repubblica's website or visiting the blog of a celebrity or a politician, references to social networks are by now a constant presence on web pages. The ease of staying in touch with one's friends, and the possibility of connecting at any moment, have led Web 2.0 users to intensify their discussions and to comment on the topics and content produced by others in a continuous and complex back-and-forth. It is possible that this environment has fostered a new perspective on the Net, understood as a new way of seeing oneself and relating to others, of expressing oneself and of sharing one's stories and one's own history. To explore these themes, we decided to observe some of the most widespread social networks, including Twitter and Facebook, and, in order to collect the most significant data from the latter, to develop a dedicated software application. This thesis covers the theoretical background that brought this research to a national scale and the requirements analysis of the project; it then examines the design and development of the application within the constraints imposed by Facebook, integrating a user questionnaire with the reading of the data. After describing the testing and deployment phases, the work includes a preliminary analysis of the data obtained by means of pre-processing performed within the application itself.
Abstract:
The goal of this thesis is to compare two worlds, that of relational DBMSs and that of graph DBMSs, in order to better understand the latter. To this end, the two technologies that best represent their respective worlds were chosen: Oracle for RDBMSs and Neo4j for graph DBMSs. The two DBMSs were subjected to a series of queries designed to test their performance as certain factors vary, such as the selectivity and the number of joins Oracle has to perform. The tests fall within the field of business intelligence and, in particular, of OLAP analysis.
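A minimal sketch of the kind of head-to-head measurement described above, assuming a hypothetical sales star schema on the Oracle side and an equivalent property graph in Neo4j. Table names, node labels, credentials and the drivers used (python-oracledb and the official neo4j Python driver) are illustrative choices, not the setup used in the thesis.

```python
# Toy benchmark: run an equivalent OLAP-style aggregation on Oracle and Neo4j
# and compare wall-clock times. Schema, labels and credentials are hypothetical.
import time

import oracledb                      # pip install oracledb
from neo4j import GraphDatabase      # pip install neo4j

ORACLE_SQL = """
    SELECT p.category, SUM(s.amount)
    FROM   sales s
    JOIN   products p ON p.product_id = s.product_id
    WHERE  s.sale_date >= DATE '2015-01-01'      -- selectivity knob
    GROUP  BY p.category
"""

CYPHER = """
    MATCH (s:Sale)-[:OF_PRODUCT]->(p:Product)
    WHERE s.saleDate >= date('2015-01-01')       // same selectivity knob
    RETURN p.category, sum(s.amount)
"""

def time_oracle(dsn, user, password):
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cur = conn.cursor()
        t0 = time.perf_counter()
        cur.execute(ORACLE_SQL)
        cur.fetchall()
        return time.perf_counter() - t0

def time_neo4j(uri, user, password):
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            t0 = time.perf_counter()
            session.run(CYPHER).consume()        # force full evaluation
            return time.perf_counter() - t0

if __name__ == "__main__":
    print("Oracle :", time_oracle("localhost/XEPDB1", "scott", "tiger"))
    print("Neo4j  :", time_neo4j("bolt://localhost:7687", "neo4j", "password"))
```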
Abstract:
The goal of this thesis is to design a management system that meets the organizational needs of gyms. In particular, a Windows application dedicated to personal trainers will be implemented. The application must be able to register and manage the sports centre's clients and allow the creation of workouts dedicated to them (using training sheets, exercises, sets, repetitions, and so on). All data will be stored on a centralized SQL server, also accessible from the Internet. These workouts can then be downloaded and viewed by the clients on their Android smartphones.
Abstract:
The goal of this thesis is to deepen our knowledge of the features developed in the SCADA/EMS systems available on the market, so as to understand the capabilities they offer: all the knowledge acquired serves to design a flexible and interactive data-analysis tool with which it is possible to carry out analyses that are not feasible with the other solutions examined. The design of the data-analysis tool is oriented towards defining a multidimensional model for representing the information: the design path requires identifying the information of interest to the user, so that it can be reintroduced when designing the new database. The final infrastructure of this new functionality takes the form of a data warehouse: all the analysis information is stored in a database separate from On.Energy's, so that the performance of the two subsystems is not coupled. The use of a data warehouse lays the groundwork for analyses over long time periods: every type of data query involves a huge amount of information, exactly in line with the characteristics of OLAP queries.
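As a rough illustration of the multidimensional approach, the sketch below builds a tiny star schema (a measurement fact table plus time and signal dimensions) in SQLite and runs an OLAP-style roll-up over it; every table and column name is invented for the example and does not reflect the actual On.Energy or data warehouse model.

```python
# Minimal star-schema sketch in SQLite: a fact table of measurements plus
# two dimensions, queried with an OLAP-style roll-up. Names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_time  (time_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
    CREATE TABLE dim_signal(signal_id INTEGER PRIMARY KEY, name TEXT, substation TEXT);
    CREATE TABLE fact_measurement (
        time_id   INTEGER REFERENCES dim_time(time_id),
        signal_id INTEGER REFERENCES dim_signal(signal_id),
        value     REAL
    );
""")

conn.executemany("INSERT INTO dim_time VALUES (?,?,?,?)",
                 [(1, "2015-01-01", "2015-01", 2015), (2, "2015-02-01", "2015-02", 2015)])
conn.executemany("INSERT INTO dim_signal VALUES (?,?,?)",
                 [(1, "active_power", "SS-A"), (2, "voltage", "SS-A")])
conn.executemany("INSERT INTO fact_measurement VALUES (?,?,?)",
                 [(1, 1, 10.5), (2, 1, 12.0), (1, 2, 230.1), (2, 2, 229.8)])

# Roll up by month and signal: the kind of long-period query a data warehouse
# makes cheap compared to the operational database.
for row in conn.execute("""
        SELECT t.month, s.name, AVG(f.value), COUNT(*)
        FROM   fact_measurement f
        JOIN   dim_time   t ON t.time_id   = f.time_id
        JOIN   dim_signal s ON s.signal_id = f.signal_id
        GROUP  BY t.month, s.name
        ORDER  BY t.month"""):
    print(row)
```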
Abstract:
Companion animals closely share their domestic environment with people and have the potential to act as sources of zoonotic diseases. They also have the potential to be sentinels of infectious and noninfectious diseases. With the exception of rabies, there has been minimal ongoing surveillance of companion animals in Canada. We developed customized data extraction software, the University of Calgary Data Extraction Program (UCDEP), to automatically extract and warehouse the electronic medical records (EMR) from participating private veterinary practices to make them available for disease surveillance and knowledge creation for evidence-based practice. It was not possible to build generic data extraction software; the UCDEP required customization to meet the specific software capabilities of the veterinary practices. The UCDEP, tailored to the participating veterinary practices' management software, was capable of extracting data from the EMR with greater than 99% completeness and accuracy. The experiences of the people developing and using the UCDEP and the quality of the extracted data were evaluated. The electronic medical record data stored in the data warehouse may be a valuable resource for surveillance and evidence-based medical research.
Abstract:
Large amounts of animal health care data are present in veterinary electronic medical records (EMRs), and they present an opportunity for companion animal disease surveillance. Veterinary patient records are largely free text without clinical coding or a fixed vocabulary. Text mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text miner retrieved cases with enteric signs with a sensitivity of 87.6% (95% CI, 80.4-92.9%) and a specificity of 99.3% (95% CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals.
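A much-simplified stand-in for the approach (the study itself used the commercial WordStat dictionary, not the keyword list below): classify a record as an enteric case if any dictionary term matches, then score the classifier against human-reviewed labels exactly as sensitivity and specificity are defined above.

```python
# Simplified stand-in for a categorization dictionary: flag a record as
# "enteric" if any keyword matches, then score against human-reviewed labels.
import re

ENTERIC_KEYWORDS = [          # illustrative dictionary, not the study's
    "diarrhea", "diarrhoea", "vomit", "loose stool", "gastroenteritis",
]
PATTERN = re.compile("|".join(ENTERIC_KEYWORDS), re.IGNORECASE)

def is_enteric(record_text: str) -> bool:
    return bool(PATTERN.search(record_text))

def sensitivity_specificity(records, human_labels):
    """records: list of free-text EMR entries; human_labels: list of bools."""
    tp = fp = tn = fn = 0
    for text, truth in zip(records, human_labels):
        predicted = is_enteric(text)
        if predicted and truth:
            tp += 1
        elif predicted and not truth:
            fp += 1
        elif not predicted and truth:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Tiny worked example
records = ["3 days of watery diarrhea, mild fever",
           "annual vaccination, healthy on exam",
           "vomiting twice overnight, eating grass",
           "nail trim only"]
labels = [True, False, True, False]
sens, spec = sensitivity_specificity(records, labels)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}")
```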
Abstract:
The language used in Section 165.002 of the Texas Health and Safety Code renders breastfeeding women vulnerable and susceptible to harassment, discrimination, and persecution via the Texas Penal Code, Sec. 30.05 (Criminal Trespassing), Sec. 21.08 (Indecent Exposure), and Sec. 21.22 (Indecency with a Child). The overall goal of this paper is to develop a solution to this problem via a proposed law or legislative action that offers protection and support for breastfeeding women who choose to nurse in public. Data to inform these recommendations were collected through a literature review and structured interviews with several breastfeeding stakeholders. A literature review of state and federal breastfeeding legislation was conducted to compare and contrast differences between existing legislation in the United States. Interviews were conducted with breastfeeding legislation stakeholders, including state legislators who have been active in breastfeeding legislation, breastfeeding mothers, and representatives from the Central Texas Healthy Mothers Healthy Babies Coalition (Centex HMHB Coalition), the Texas Breastfeeding Coalition (TXBF Coalition), La Leche League International, and the Texas Business Association. Data from the literature and legislation reviews and interviews were transcribed and examined for common themes using qualitative data techniques. Overall, most of the stakeholders reached a general consensus on three points: (1) breastfeeding women are supported by stakeholders within the community, (2) other legislation or penal codes should not override the right to breastfeed, and (3) the current breastfeeding legislation needs to be improved to adequately support breastfeeding women. The interviews with breastfeeding legislation stakeholders yielded two major recommendations for the improvement of Section 165.002 of the Texas Health and Safety Code: advocacy efforts to change the wording of the legislation and education to inform people about the legislation. The right to breastfeed is an important public health issue in that it provides a host of health benefits for mothers and children, and is more economical than, and environmentally superior to, alternative feeding methods. While breastfeeding in public is not, nor ever has been, illegal, adequate legislation is important to affirm this right for women so that they can confidently feed their children without embarrassment or harassment.
Abstract:
This study assessed and compared sociodemographic and income characteristics along with food and physical activity assets (i.e. grocery stores, fast food restaurants, and park areas) in the Texas Childhood Obesity Research Demonstration (CORD) Study intervention and comparison catchment areas in Houston and Austin, Texas. The Texas CORD Study used a quasi-experimental study design, so it is necessary to establish the internal validity of the study characteristics by confirming that the intervention and comparison catchment areas are statistically comparable. In this ecological study, ArcGIS and Esri Business Analyst were used to spatially relate U.S. Census Bureau and other business listing data to the specific school attendance zones within the catchment areas. T-tests were used to compare percentages of sociodemographic and income characteristics and densities of food and physical activity assets between the intervention and comparison catchment areas. Only five variables were found to have significant differences between the intervention and comparison catchment areas: the age groups 0-4 and 35-64, the percentages of owner-occupied and renter-occupied households, and the percentage of Asian and Pacific Islander residents. All other variables showed no significant differences between the two groups. This study shows that the methodology used to select intervention and comparison catchment areas for the Texas CORD Study was effective and can be used in future studies. The results can be used in future Texas CORD studies to confirm the comparability of the intervention and comparison catchment areas. In addition, this study demonstrates a methodology for describing detailed characteristics of a geographic area that practitioners, researchers, and educators can use.
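A minimal sketch of the comparison step, assuming the percentages have already been aggregated per school attendance zone; the numbers and the use of SciPy's Welch t-test are illustrative only.

```python
# Sketch of the comparison step: a two-sample t-test on one characteristic
# (e.g. percentage of renter-occupied households) across the intervention
# and comparison catchment areas. The numbers below are synthetic.
from scipy import stats

intervention = [41.2, 38.7, 45.0, 39.9, 43.1, 40.4]   # % per attendance zone
comparison   = [36.5, 42.3, 37.8, 40.1, 38.9, 39.6]

t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference between catchment areas for this variable.")
else:
    print("No significant difference: the areas are comparable on this variable.")
```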
Abstract:
The software Pan2Applic is a tool to convert files, or folders of files (ASCII/tab-separated data files with or without a metaheader), downloaded from PANGAEA via the search engine or the data warehouse into formats used by applications, e.g. for visualization or further processing. It may also be used to convert files or zip archives as downloaded from CD-ROM data collections published in the WDC-MARE Reports series. Pan2Applic is distributed as freeware for the operating systems Microsoft Windows, Apple OS X and Linux.
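Pan2Applic itself is a packaged application; purely as an illustration of the conversion idea, the sketch below strips the metaheader from a PANGAEA-style tab-delimited file (assumed here to be enclosed in /* ... */) and re-saves the data block as CSV with pandas.

```python
# Not Pan2Applic itself: just a sketch of the conversion idea for a
# PANGAEA-style tab-delimited file whose metaheader is assumed to be
# enclosed in /* ... */ ahead of the data block.
import io

import pandas as pd

def pangaea_tab_to_csv(in_path: str, out_path: str) -> None:
    with open(in_path, encoding="utf-8") as fh:
        text = fh.read()
    if text.startswith("/*"):                      # metaheader present
        meta, _, data = text.partition("*/")
        print("Metaheader skipped:", len(meta), "characters")
    else:                                          # plain tab-separated file
        data = text
    df = pd.read_csv(io.StringIO(data.lstrip()), sep="\t")
    df.to_csv(out_path, index=False)
    print(f"Wrote {len(df)} rows and {len(df.columns)} columns to {out_path}")

# pangaea_tab_to_csv("dataset.tab", "dataset.csv")
```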
Abstract:
Over the last few years, the unstoppable growth of biomedical data sources, driven by the development of massive data generation techniques (mainly in the field of genomics) and by the spread of technologies for communicating and sharing information, has meant that biomedical research has come to rely almost exclusively on the distributed analysis of information and on the search for relationships between different data sources. This is a complex task because of the heterogeneity of the sources involved (whether due to different formats, technologies, or domain models). There are projects that aim to homogenize these sources so that the information can be presented in an integrated way, as if it came from a single database. However, no existing work fully automates this process of semantic integration. There are two main approaches to the problem of integrating heterogeneous data sources: centralized and distributed. Both approaches require translating data from one model to another. To perform this task, formalizations of the semantic relationships between the underlying models and the central model are used; these formalizations are commonly called annotations. In the context of semantic information integration, database annotations consist of defining relationships between terms with the same meaning, so that the information can be translated automatically. Depending on the problem at hand, these relationships are defined either between individual concepts or between whole sets of concepts (views). The work presented here focuses on the latter. The European project p-medicine (FP7-ICT-2009-270089) is based on the centralized approach and uses view-based annotations over databases modelled in RDF. The data extracted from the different sources are translated and integrated into a Data Warehouse. Within the p-medicine platform, the Biomedical Informatics Group (GIB) of the Universidad Politécnica de Madrid, where I carried out this work, provides a tool for generating the required annotations of the RDF databases. This tool, called Ontology Annotator, makes it possible to create view-based annotations manually. However, although it displays the data sources to be annotated graphically, most users find the tool difficult to use and spend too much time on the annotation process. Hence the need to develop a more advanced tool capable of assisting the user in annotating databases in p-medicine. The aim is to automate the most complex parts of the annotation process and to present the information about the annotations of RDF databases in a natural, understandable way. This tool has been named Ontology Annotator Assistant, and the work presented here describes its design and development, as well as some innovative algorithms created by the author for its correct operation. The tool offers functionality not previously available in any other tool in the area of automatic annotation and semantic integration of databases.
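Purely as an illustration of what a database annotation expresses, the sketch below encodes a term-level mapping and a crude stand-in for a view-based annotation as RDF triples with rdflib; the namespaces, property names and view structure are hypothetical and do not follow the actual Ontology Annotator or p-medicine format.

```python
# A much-simplified illustration of a database annotation: stating that a
# term in a local RDF source means the same as a term in the central model.
# Namespaces and properties are hypothetical, not the p-medicine format.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

LOCAL   = Namespace("http://example.org/hospitalA/schema#")
CENTRAL = Namespace("http://example.org/central-model#")
ANN     = Namespace("http://example.org/annotations#")

g = Graph()
g.bind("local", LOCAL)
g.bind("central", CENTRAL)
g.bind("ann", ANN)

# Term-level annotation: the local 'tumourSize' property has the same
# meaning as the central model's 'tumorDiameter'.
g.add((LOCAL.tumourSize, OWL.equivalentProperty, CENTRAL.tumorDiameter))

# Crude stand-in for a view-based annotation: a named view grouping the
# local terms used to populate the central 'Patient' concept.
view = URIRef("http://example.org/annotations/view/patient-core")
g.add((view, RDF.type, ANN.View))
g.add((view, RDFS.label, Literal("Patient core view")))
g.add((view, ANN.sourceClass, LOCAL.PatientRecord))
g.add((view, ANN.targetConcept, CENTRAL.Patient))

print(g.serialize(format="turtle"))
```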
Abstract:
This document is the result of a web development process to create a tool that will allow Cracow University of Technology to consult, create and manage timetables. The technologies chosen for this purpose are Apache Tomcat Server, MySQL Community Server, the JDBC driver, Java Servlets and JSPs on the server side. The client side relies on JavaScript, jQuery, AJAX and CSS to provide the dynamic behaviour. The document justifies the choice of these technologies and explains the development tools that helped in the integration and development of all these elements: specifically, NetBeans IDE and MySQL Workbench were used. After explaining all the elements involved in the development of the web application, the architecture and the code developed are explained through UML diagrams. Some implementation details related to security are explained in more depth through sequence diagrams. As the source code of the application is provided, an installation manual has been written to get the project running. In addition, since the platform is intended as a beta that will keep growing, some ideas not yet implemented are also presented for future development. Finally, some annexes with important files and scripts related to the initialization of the platform are attached. This project started from an existing tool that needed to be expanded. The main purpose of the project throughout its development has been to lay the foundations of a whole new platform that will replace the existing one. To this end, a thorough review of existing web technologies was needed: a web server and an SQL database had to be chosen. Although there were many alternatives, Java technology was finally selected for the server because of the large community behind it, the ease of modelling the language with UML diagrams and the fact that it is freely licensed software. Apache Tomcat is the open source server that supports Java Servlet and JSP technology. As for the SQL database, MySQL Community Server is the most popular open-source SQL server, with a large community behind it and plenty of tools to manage the server. JDBC is the driver needed to connect Java and MySQL. Once the technologies that would form the platform were chosen, the development process started. After a detailed explanation of the development environment installation, UML use case diagrams were used to define the main tasks of the platform; UML class diagrams served to establish the relations between the classes produced; the architecture of the platform was represented through UML deployment diagrams; and Enhanced Entity-Relationship (EER) models were used to define the tables of the database and their relationships. Apart from these diagrams, some implementation issues are explained, with the help of UML sequence diagrams, for a better understanding of the developed code. Once the whole platform was properly defined and developed, its behaviour was demonstrated: it was shown that, in the current state of the code, the platform covers the use cases that were set as the main target. Nevertheless, some requirements for the proper working of the platform have been specified. As the project is meant to grow, some ideas that could not be added to this beta are documented so that they are not lost for future development.
Finally, some annexes containing important configuration details for the platform have been added, together with an installation guide that will let a new developer get the project ready. In addition to this document, some other files related to the project are provided: - Javadoc. The Javadoc containing the documentation of every Java class created, necessary for a better understanding of the source code. - database_model.mwb. This file contains the model of the database for MySQL Workbench. This model allows, among other things, generating the MySQL script for the creation of the tables. - ScheduleManager.war. The WAR file that allows loading the developed application into Tomcat Server without using NetBeans. - ScheduleManager.zip. The source code exported from the NetBeans project, containing all Java packages, JSPs, JavaScript files and CSS files that are part of the platform. - config.properties. The configuration file to set the names and credentials used to access the database, also explained in Annex II (Example of config.properties file). - db_init_script.sql. The SQL statements to initialize the database, explained in Annex III (SQL statements for MySQL initialization).
Abstract:
This thesis tackles the problem of secure outsourcing of data and computation. The scenario of interest is one in which a user owns some data and wants to outsource it to a Cloud server; furthermore, the user may also want to delegate the computation over a subset of the data to the server. We present the security issues related to this scenario, namely integrity and privacy, and we analyse possible solutions to these two issues, exploiting advanced cryptographic tools such as Homomorphic Message Authenticators and Fully Homomorphic Encryption. The contribution is both theoretical and practical. On the theoretical side, using the articles [3] and [12] as starting points, we introduce a new cryptographic primitive, called Outsourcing, with the aim of providing a very generic and flexible model that can be used to represent several secure outsourcing schemes. The model can describe schemes that provide only integrity, only privacy or, interestingly, integrity together with privacy. Using this new model we also redefine a highly efficient scheme constructed in [12], which we call Outsourcinglin, a scheme for computing multivariate polynomials of degree 1 over the ring Z2k. On the practical side, we build a framework to implement the Outsourcing scheme, and we then use this framework for several implementations, specifically the Joye-Libert cryptosystem ([18]) and our own Outsourcinglin scheme. In the context of this practical work, the thesis also led to some novel contributions: the design and implementation, in collaboration with Dario Fiore, of a new decryption algorithm for the Joye-Libert encryption scheme, which performs better than the algorithms proposed by the authors in [18]; and the implementation of the amortized-closed-form efficient pseudorandom function of [12]. There was no prior implementation of this function and it was a non-trivial task, which may prove useful in other contexts. Finally, we use the implementations in several experiments to measure the running times of the main algorithms.
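As a toy illustration of the kind of object Outsourcinglin handles (authenticated evaluation of degree-1 polynomials over the ring Z2k), the sketch below implements a simplistic linearly homomorphic MAC; it is not the scheme of [12], the key handling is deliberately naive, and it should not be taken as secure.

```python
# Toy illustration of a linearly homomorphic MAC over Z_{2^k}: the client tags
# each value, the server evaluates a degree-1 polynomial and combines the tags,
# and the client checks the result without re-reading its data. NOT the
# Outsourcinglin scheme of [12]; randomness and key handling are simplified.
import secrets

K = 64
MOD = 1 << K                                      # the ring Z_{2^k}

def keygen(n):
    alpha = secrets.randbelow(MOD) | 1            # secret multiplier (odd)
    rs = [secrets.randbelow(MOD) for _ in range(n)]   # one-time pads per index
    return alpha, rs

def tag(alpha, r_i, m_i):
    return (alpha * m_i + r_i) % MOD              # tag of one data item

def server_eval(coeffs, const, data, tags):
    """Compute y = c0 + sum c_i * m_i and the combined tag, both mod 2^k."""
    y = (const + sum(c * m for c, m in zip(coeffs, data))) % MOD
    sigma = sum(c * t for c, t in zip(coeffs, tags)) % MOD
    return y, sigma

def verify(alpha, rs, coeffs, const, y, sigma):
    expected = (alpha * ((y - const) % MOD)
                + sum(c * r for c, r in zip(coeffs, rs))) % MOD
    return expected == sigma

# Worked example: outsource 4 values, ask for f(m) = 7 + 3*m0 + 5*m2
data = [11, 42, 99, 7]
alpha, rs = keygen(len(data))
tags = [tag(alpha, r, m) for r, m in zip(rs, data)]

coeffs, const = [3, 0, 5, 0], 7
y, sigma = server_eval(coeffs, const, data, tags)
assert verify(alpha, rs, coeffs, const, y, sigma)
print("f(m) =", y, "verified")
```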
Abstract:
From the Introduction. The main focus of this study is to examine whether the euro has been an economic, monetary, fiscal, and social stabilizer for the Eurozone. In order to do this, the underpinnings of the euro are analysed, and the requirements and benchmarks that have to be achieved, maintained, and respected are tested against the data found in three major statistical data sources: the European Central Bank's Statistics Data Warehouse (http://sdw.ecb.europa.eu/), Economagic (www.economagic.com), and E-signal. The purpose of this work is to analyse whether the euro was a stabilizing factor in the European Union from its inception to the outbreak of the financial crisis in the summer of 2008. To answer this question, the study analyses a number of indexes to understand the impact of the euro in three markets: (1) the foreign exchange market, (2) the stock market and the crude oil and commodities markets, and (3) the money market.
Abstract:
Exchange between anonymous actors in Internet auctions corresponds to a one-shot prisoner's dilemma-like situation. Therefore, in any given auction the risk is high that seller and buyer will cheat and, as a consequence, that the market will collapse. However, mutual cooperation can be attained by the simple and very efficient institution of a public rating system. By this system, sellers have incentives to invest in reputation in order to enhance future chances of business. Using data from about 200 auctions of mobile phones we empirically explore the effects of the reputation system. In general, the analysis of nonobtrusive data from auctions may help to gain a deeper understanding of basic social processes of exchange, reputation, trust, and cooperation, and of the impact of institutions on the efficiency of markets. In this study we report empirical estimates of effects of reputation on characteristics of transactions such as the probability of a successful deal, the mode of payment, and the selling price (highest bid). In particular, we try to answer the question whether sellers receive a "premium" for reputation. Our results show that buyers are willing to pay higher prices for reputation in order to diminish the risk of exploitation. On the other hand, sellers protect themselves from cheating buyers by the choice of an appropriate payment mode. Therefore, despite the risk of mutual opportunistic behavior, simple institutional settings lead to cooperation, relatively rare events of fraud, and efficient markets.
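Purely illustrative of the kind of estimate reported (a "premium" for reputation on the selling price), the sketch below fits an OLS regression of the highest bid on a synthetic seller rating with statsmodels; variable names and data are invented.

```python
# Illustrative only: estimating a "reputation premium" with synthetic data,
# regressing the highest bid on a seller rating score via plain OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
rating = rng.poisson(lam=30, size=n)                 # net positive ratings
price = 150 + 0.8 * rating + rng.normal(0, 20, n)    # true premium: 0.8 per point

X = sm.add_constant(rating)
model = sm.OLS(price, X).fit()
print(model.summary())
print("Estimated premium per rating point:", round(model.params[1], 2))
```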
Abstract:
This dissertation deals with communication as a business intelligence instrument in a higher education institution. It aims to show that communication adds competitive advantage to organizations operating in the educational market. The work is grounded in theoretical references from the Communication sciences and from Strategic Planning, and its methodological procedures include, besides an extensive bibliographic review and document analysis, the technique of participant observation, following the activities of the working group called Communication and Integration between 2003 and 2005, which was part of the Strategic Planning of UMESP (Universidade Metodista de São Paulo). At the end of the work, we sought to map the conditions necessary for communication to effectively become a business intelligence process, incorporated into the strategic management of organizations. We acknowledge that Corporate Communication still has challenges to overcome, and that they are not necessarily easy to overcome. It must always be borne in mind that Corporate Communication does not flow in a vacuum and is not carried out at the margins of organizations; it is umbilically tied to a particular management system and to a specific organizational culture, and it is therefore the expression of a concrete reality. For Corporate Communication to be treated as strategic, this condition must be favoured by management, by the culture and even by the adequate allocation of resources (human, technological and financial), without which it cannot take place. Hence, if these prerequisites are not properly satisfied, it is premature to conclude that Corporate Communication is strategic in character. Moreover, communication will not be strategic solely because of the more or less competent work of communication professionals; there are other requirements that, unfortunately, are beyond their control. In summary, this work analyses three central questions. The first concerns the concept of strategy. The second refers to the so-called organizational ethos in which communication practice is embedded. Finally, the basic conditions for strategic communication to really prevail are examined.