909 results for Business Intelligence, Data Warehouse, Sistemi Informativi


Relevance: 100.00%

Abstract:

Companion animals closely share their domestic environment with people and have the potential to act as sources of zoonotic diseases. They also have the potential to be sentinels of infectious and noninfectious diseases. With the exception of rabies, there has been minimal ongoing surveillance of companion animals in Canada. We developed customized data extraction software, the University of Calgary Data Extraction Program (UCDEP), to automatically extract and warehouse the electronic medical records (EMR) from participating private veterinary practices to make them available for disease surveillance and knowledge creation for evidence-based practice. It was not possible to build generic data extraction software; the UCDEP required customization to meet the specific software capabilities of the veterinary practices. The UCDEP, tailored to the participating veterinary practices' management software, was capable of extracting data from the EMR with greater than 99% completeness and accuracy. The experiences of the people developing and using the UCDEP and the quality of the extracted data were evaluated. The electronic medical record data stored in the data warehouse may be a valuable resource for surveillance and evidence-based medical research.
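
The abstract does not give implementation details of the UCDEP; the following is a minimal sketch of the extract-and-warehouse step it describes, assuming a hypothetical per-practice source table and a central SQLite warehouse. All table and column names are illustrative, not the UCDEP's actual schema.

```python
import sqlite3

# Hypothetical schema: each practice exports a 'visits' table; the warehouse
# consolidates them with a practice identifier for later surveillance queries.
WAREHOUSE_DDL = """
CREATE TABLE IF NOT EXISTS emr_visits (
    practice_id TEXT,
    patient_id  TEXT,
    visit_date  TEXT,
    free_text   TEXT
)
"""

def extract_practice(practice_id: str, source_path: str, warehouse: sqlite3.Connection) -> int:
    """Copy visit records from one practice's export into the central warehouse."""
    src = sqlite3.connect(source_path)
    rows = src.execute("SELECT patient_id, visit_date, free_text FROM visits").fetchall()
    src.close()
    warehouse.executemany(
        "INSERT INTO emr_visits VALUES (?, ?, ?, ?)",
        [(practice_id, *row) for row in rows],
    )
    warehouse.commit()
    return len(rows)

if __name__ == "__main__":
    wh = sqlite3.connect("warehouse.db")
    wh.execute(WAREHOUSE_DDL)
    n = extract_practice("practice_01", "practice_01_export.db", wh)
    print(f"Loaded {n} visit records")
```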

Relevance: 100.00%

Abstract:

Large amounts of animal health care data are present in veterinary electronic medical records (EMRs), and they present an opportunity for companion animal disease surveillance. Veterinary patient records are largely free-text without clinical coding or a fixed vocabulary. Text mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2,500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text miner retrieved cases with enteric signs with a sensitivity of 87.6% (95% CI, 80.4-92.9%) and a specificity of 99.3% (95% CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals.
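
As an illustration of the evaluation described above, here is a minimal sketch of dictionary-based case detection and the sensitivity/specificity calculation against human-reviewed labels. The keyword list and record texts are invented for the example and are not the study's actual WordStat dictionary.

```python
# Hypothetical enteric-syndrome keywords; the real study used a WordStat dictionary.
ENTERIC_TERMS = {"diarrhea", "diarrhoea", "vomiting", "enteritis", "loose stool"}

def is_enteric(record_text: str) -> bool:
    """Flag a record as enteric-syndrome positive if any dictionary term appears."""
    text = record_text.lower()
    return any(term in text for term in ENTERIC_TERMS)

def sensitivity_specificity(records, human_labels):
    """Compare dictionary classifications against human reviewer labels."""
    tp = fn = tn = fp = 0
    for text, truth in zip(records, human_labels):
        predicted = is_enteric(text)
        if truth:
            tp += predicted
            fn += not predicted
        else:
            tn += not predicted
            fp += predicted
    return tp / (tp + fn), tn / (tn + fp)

# Toy example with two hand-labeled records.
records = ["3-day history of vomiting and diarrhea", "routine vaccination, healthy"]
labels = [True, False]
sens, spec = sensitivity_specificity(records, labels)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```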

Relevance: 100.00%

Abstract:

The language used in Section 165.002 of the Texas Health and Safety Code renders breastfeeding women vulnerable and susceptible to harassment, discrimination, and persecution via the Texas Penal Code, Sec. 30.05 (Criminal Trespassing), Sec. 21.08 (Indecent Exposure), and Sec. 21.22 (Indecency with a Child). The overall goal of this paper is to develop a solution to this problem via a proposed law or legislative action that offers protection and support for breastfeeding women who choose to nurse in public. Data to inform these recommendations were collected through a literature review and structured interviews with several breastfeeding stakeholders. A literature review of state and federal breastfeeding legislation was conducted to compare and contrast existing legislation across the United States. Interviews were conducted with breastfeeding legislation stakeholders, including state legislators who have been active in breastfeeding legislation, breastfeeding mothers, and representatives from the Central Texas Healthy Mothers Healthy Babies Coalition (Centex HMHB Coalition), the Texas Breastfeeding Coalition (TXBF Coalition), La Leche League International, and the Texas Business Association. Data from the literature and legislation reviews and the interviews were transcribed and examined for common themes using qualitative data techniques. Overall, most of the stakeholders reached a general consensus on three points: (1) breastfeeding women are supported by stakeholders within the community, (2) other legislation or penal codes should not override the right to breastfeed, and (3) the current breastfeeding legislation needs to be improved to adequately support breastfeeding women. The interviews with breastfeeding legislation stakeholders yielded two major recommendations for the improvement of Section 165.002 of the Texas Health and Safety Code: advocacy efforts to change the wording of the legislation, and education to inform people about the legislation. The right to breastfeed is an important public health issue in that it provides a host of health benefits for mothers and children and is more economical than, and environmentally superior to, alternative feeding methods. While breastfeeding in public is not illegal, nor has it ever been, adequate legislation is important to affirm this right for women so that they can confidently feed their children without embarrassment or harassment.

Relevance: 100.00%

Abstract:

This study assessed and compared sociodemographic and income characteristics along with food and physical activity assets (i.e., grocery stores, fast food restaurants, and park areas) in the Texas Childhood Obesity Research Demonstration (CORD) Study intervention and comparison catchment areas in Houston and Austin, Texas. The Texas CORD Study used a quasi-experimental study design, so it is necessary to establish the internal validity of the study characteristics by confirming that the intervention and comparison catchment areas are statistically comparable. In this ecological study, ArcGIS and Esri Business Analyst were used to spatially relate U.S. Census Bureau and other business listing data to the specific school attendance zones within the catchment areas. T-tests were used to compare percentages of sociodemographic and income characteristics and densities of food and physical activity assets between the intervention and comparison catchment areas. Only five variables were found to have significant differences between the intervention and comparison catchment areas: age groups 0-4 and 35-64, the percentage of owner-occupied and renter-occupied households, and the percentage of Asian and Pacific Islander residents. All other variables showed no significant differences between the two groups. This study shows that the methodology used to select intervention and comparison catchment areas for the Texas CORD Study was effective and can be used in future studies. The results can be used in future Texas CORD studies to confirm the comparability of the intervention and comparison catchment areas. In addition, this study demonstrates a methodology for describing detailed characteristics of a geographic area that practitioners, researchers, and educators can use.
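
A minimal sketch of the kind of two-sample comparison described above, using SciPy's independent-samples t-test on illustrative catchment-area percentages; the variable and the values are invented for the example, not the study's data.

```python
from scipy import stats

# Illustrative percentages of one sociodemographic characteristic across
# school attendance zones in each catchment area (invented values).
intervention = [12.4, 15.1, 11.8, 14.0, 13.3]
comparison   = [12.9, 14.2, 13.1, 12.5, 14.8]

# Welch's t-test (does not assume equal variances between the two areas).
t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference between catchment areas for this variable")
else:
    print("No significant difference detected")
```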

Relevance: 100.00%

Abstract:

Pan2Applic is a software tool for converting files or folders of files (ASCII/tab-separated data files, with or without a metaheader) downloaded from PANGAEA via the search engine or the data warehouse into formats used by applications, e.g., for visualization or further processing. It may also be used to convert files or zip archives downloaded from CD-ROM data collections published in the WDC-MARE Reports series. Pan2Applic is distributed as freeware for the operating systems Microsoft Windows, Apple OS X, and Linux.
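
The abstract does not specify the file layout beyond "tab-separated with or without metaheader"; the sketch below assumes a PANGAEA-style export in which an optional metaheader is enclosed between /* and */ lines before the tab-separated table, and shows how such a file might be split into metadata and data rows. This is an illustration under that assumption, not Pan2Applic's actual parser.

```python
import csv
from pathlib import Path

def split_pangaea_file(path: str):
    """Split a PANGAEA-style .tab file into its metaheader (if any) and data rows.

    Assumes the optional metaheader is enclosed between '/*' and '*/' lines,
    followed by a tab-separated table with a single header row.
    """
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    meta, data_start = [], 0
    if lines and lines[0].startswith("/*"):
        for i, line in enumerate(lines):
            meta.append(line)
            if line.strip().endswith("*/"):
                data_start = i + 1
                break
    reader = csv.reader(lines[data_start:], delimiter="\t")
    header = next(reader, [])
    rows = [row for row in reader if row]
    return meta, header, rows

# Example use (the file name is illustrative):
# meta, header, rows = split_pangaea_file("dataset.tab")
# print(header, len(rows), "rows")
```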

Relevance: 100.00%

Abstract:

In recent years, the relentless growth of biomedical data sources, driven by the development of massive data generation techniques (mainly in the field of genomics) and the expansion of technologies for communicating and sharing information, has meant that biomedical research has come to rely almost exclusively on the distributed analysis of information and on the search for relationships between different data sources. This is a complex task because of the heterogeneity of the data sources involved (whether due to the use of different formats, technologies, or domain models). There is prior work aimed at homogenizing these sources so that the information can be presented in an integrated way, as if it were a single database. However, no existing work fully automates this process of semantic integration. There are two main approaches to the problem of integrating heterogeneous data sources: centralized and distributed. Both approaches require translating data from one model to another. To perform this task, formalizations of the semantic relationships between the underlying models and the central model are used. These formalizations are commonly called annotations. In the context of semantic information integration, database annotations consist of defining relationships between terms with the same meaning in order to enable automatic translation of the information. Depending on the problem at hand, these relationships are defined between individual concepts or between whole sets of concepts (views). The work presented here focuses on the latter. The European project p-medicine (FP7-ICT-2009-270089) follows the centralized approach and uses view-based annotations over databases modeled in RDF. The data extracted from the different sources are translated and integrated into a Data Warehouse. Within the p-medicine platform, the Biomedical Informatics Group (GIB) of the Universidad Politécnica de Madrid, where I carried out my work, provides a tool for generating the required annotations of the RDF databases. This tool, called Ontology Annotator, allows view-based annotations to be created manually. However, although the tool displays the data sources to be annotated graphically, most users find it difficult to use and spend too much time on the annotation process. Hence the need for a more advanced tool capable of assisting the user in annotating databases in p-medicine. The goal is to automate the most complex parts of the annotation process and to present the information related to RDF database annotations in a natural, understandable way. This tool has been named Ontology Annotator Assistant, and the work presented here describes its design and development, as well as some innovative algorithms created by the author to make it work correctly. The tool offers functionality not previously available in any other tool in the area of automatic annotation and semantic integration of databases.
---ABSTRACT--- Over the last years, the unstoppable growth of biomedical data sources, mainly thanks to the development of massive data generation techniques (especially in the genomics field) and the rise of communication and information-sharing technologies, has led biomedical research to rely almost exclusively on the analysis of distributed information and on finding relationships between different data sources. This is a complex task due to the heterogeneity of the sources used (whether in formats, technologies, or domain modeling). Some research projects aim at the homogenization of these sources in order to retrieve information in an integrated way, as if it were a single database. However, there is still no work that completely automates this process of semantic integration. There are two main approaches to integrating heterogeneous data sources: centralized and distributed. Both approaches involve translating data from one model to another. To perform this task, formalizations of the semantic relationships between the underlying models and the main model are needed. These formalizations are also called annotations. In the context of semantic integration of information, database annotations consist of defining relations between concepts or terms with the same meaning so that automatic translation can be performed. Depending on the task, the relationships can be between individual concepts or between whole sets of concepts (views). This work focuses on the latter. The European project p-medicine (FP7-ICT-2009-270089) is based on the centralized approach. It uses view-based annotations and RDF-modeled databases. The data retrieved from the different data sources are translated and loaded into a Data Warehouse. Within the p-medicine platform, the Biomedical Informatics Group (GIB) of the Polytechnic University of Madrid, in which I worked, provides software to create annotations for the RDF sources. This tool, called Ontology Annotator, is used to create annotations manually. However, although Ontology Annotator displays the data sources graphically, most users find it difficult to use, and they spend too much time completing the task. For this reason there is a need for a more advanced tool able to help the user annotate p-medicine databases. The aim is to automate the most complex parts of the annotation process and to display the relevant information clearly and understandably. This software is called Ontology Annotator Assistant, and this work describes its design and development, as well as some innovative algorithms designed by the author. The tool provides features that no other software in the field of automatic annotation offers.
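
To make the idea of a view-based annotation concrete, here is a minimal sketch using rdflib and a SPARQL CONSTRUCT query that maps a view over a hypothetical source vocabulary onto a hypothetical central vocabulary. The namespaces, classes, and properties are invented for illustration and are not the p-medicine or Ontology Annotator schemas.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical vocabularies: a local source model and the central warehouse model.
SRC = Namespace("http://example.org/source#")
CEN = Namespace("http://example.org/central#")

# A tiny source graph describing one patient record in the source model.
source = Graph()
source.add((SRC.rec1, RDF.type, SRC.PatientRecord))
source.add((SRC.rec1, SRC.familyName, Literal("Doe")))

# A view-based annotation expressed as a SPARQL CONSTRUCT: a whole pattern
# (view) in the source model is mapped onto the central model in one step.
MAPPING = """
PREFIX src: <http://example.org/source#>
PREFIX cen: <http://example.org/central#>
CONSTRUCT {
    ?r a cen:Patient ;
       cen:surname ?name .
}
WHERE {
    ?r a src:PatientRecord ;
       src:familyName ?name .
}
"""

central = source.query(MAPPING).graph  # translated triples in the central model
for triple in central:
    print(triple)
```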

Relevance: 100.00%

Abstract:

From the Introduction. The main focus of this study is to examine whether the euro has been an economic, monetary, fiscal, and social stabilizer for the Eurozone. To do this, the underpinnings of the euro are analysed, and the requirements and benchmarks that have to be achieved, maintained, and respected are tested against data from three major statistical sources: the European Central Bank's Statistics Data Warehouse (http://sdw.ecb.europa.eu/), Economagic (www.economagic.com), and E-signal. The purpose of this work is to analyse whether the euro was a stabilizing factor from its inception to the outbreak of the financial crisis in the summer of 2008 in the European Union. To answer this question, the study analyses a number of indexes to understand the impact of the euro in three markets: (1) the foreign exchange market, (2) the stock market and the crude oil and commodities markets, and (3) the money market.

Relevance: 100.00%

Abstract:

Exchange between anonymous actors in Internet auctions corresponds to a one-shot prisoner's dilemma-like situation. Therefore, in any given auction the risk is high that seller and buyer will cheat and, as a consequence, that the market will collapse. However, mutual cooperation can be attained by the simple and very efficient institution of a public rating system. By this system, sellers have incentives to invest in reputation in order to enhance future chances of business. Using data from about 200 auctions of mobile phones we empirically explore the effects of the reputation system. In general, the analysis of nonobtrusive data from auctions may help to gain a deeper understanding of basic social processes of exchange, reputation, trust, and cooperation, and of the impact of institutions on the efficiency of markets. In this study we report empirical estimates of effects of reputation on characteristics of transactions such as the probability of a successful deal, the mode of payment, and the selling price (highest bid). In particular, we try to answer the question whether sellers receive a "premium" for reputation. Our results show that buyers are willing to pay higher prices for reputation in order to diminish the risk of exploitation. On the other hand, sellers protect themselves from cheating buyers by the choice of an appropriate payment mode. Therefore, despite the risk of mutual opportunistic behavior, simple institutional settings lead to cooperation, relatively rare events of fraud, and efficient markets.
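
As an illustration of the kind of estimate reported above (a reputation "premium" on the highest bid), here is a minimal sketch of an OLS regression of selling price on a seller-rating score using statsmodels. The data are randomly generated for the example, and the variable names are assumptions, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Invented auction data: seller rating score and selling price (highest bid).
n = 200
rating = rng.poisson(lam=20, size=n)                    # e.g., number of positive ratings
price = 150 + 1.5 * rating + rng.normal(0, 20, size=n)  # built-in "premium" of 1.5 per rating point

X = sm.add_constant(rating)
model = sm.OLS(price, X).fit()
print(model.params)   # estimated intercept and reputation premium
print(model.pvalues)  # significance of the estimated premium
```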

Relevance: 100.00%

Abstract:

This thesis deals with communication as an instrument of business intelligence in a higher education institution. It seeks to demonstrate that communication adds competitive advantage to organizations operating in the education market. The work is grounded in theoretical frameworks from the Communication sciences and Strategic Planning, and its methodological procedures include, in addition to an extensive literature review and document analysis, the technique of participant observation, following the activities of the working group called Communication and Integration between 2003 and 2005, which was part of the Strategic Planning of UMESP, the Methodist University of São Paulo. The study sought to map the conditions necessary for communication to effectively become a business intelligence process, incorporated into the strategic management of organizations. We acknowledge that corporate communication still has challenges to overcome, and that they are not necessarily easy to overcome. It must always be kept in mind that corporate communication does not flow in a vacuum and is not carried out at the margins of organizations; it is inextricably linked to a particular management system and a specific organizational culture and is therefore the expression of a concrete reality. For corporate communication to be treated as strategic, this condition must be favoured by management, by culture, and by the adequate allocation of resources (human, technological, and financial), without which it cannot take place. Hence, if these preconditions are not properly met, it would be premature to conclude that corporate communication is strategic in character. Moreover, communication will not be strategic solely as a function of the more or less competent work of communication professionals; there are other requirements that, unfortunately, are beyond their control. In summary, this work analyses three central questions. The first concerns the concept of strategy. The second refers to the so-called organizational ethos in which communication practice is embedded. Finally, the basic conditions for strategic communication to truly prevail are examined.

Relevance: 100.00%

Abstract:

In the following paper, a new class of executive information system is suggested. It is based on self-organization in management and on modular modeling. The system is multifunctional and multidisciplinary. The structural elements of the system and the common features of the modules are discussed.

Relevance: 100.00%

Abstract:

A formal description is presented of the multidimensional data model implemented in the METAS BI-Platform software suite. The paper describes the objects of the multidimensional model (dimensions, dimension sets, etc.), their properties and organization, as well as the operations performed on them. Methods for aggregating multidimensional data are described that allow arrays of numeric measures to be aggregated efficiently. The METAS BI-Platform is intended for multidimensional analysis of data obtained from heterogeneous sources and simplifies the development of BI applications. The suite is a multi-tier application with a client-server architecture, where each tier corresponds to a level of data abstraction. At the lowest level are drivers for accessing specific physical data sources. The next level is a virtual DBMS layer that provides unified access to the data, removing the need to account for the specifics of particular DBMSs when developing BI applications. A programming interface (API) for the suite has been implemented, giving developers a set of ready-made components that can be used to build BI applications. This makes it possible to develop BI applications on top of the platform that meet the modern requirements placed on such systems.
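
The abstract describes aggregating arrays of numeric measures over dimensions; the following is a minimal sketch of that idea using pandas (a plain group-by roll-up over two hypothetical dimensions), not the METAS BI-Platform's actual aggregation engine.

```python
import pandas as pd

# Invented fact table: two dimensions (region, year) and one numeric measure (sales).
facts = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "year":   [2023, 2024, 2023, 2024],
    "sales":  [120.0, 135.0, 98.0, 110.0],
})

# Aggregate the measure along the 'year' dimension (roll-up by region).
by_region = facts.groupby("region", as_index=False)["sales"].sum()
print(by_region)

# Full cross-tabulation of both dimensions, with margins as grand totals.
cube = pd.pivot_table(facts, values="sales", index="region", columns="year",
                      aggfunc="sum", margins=True)
print(cube)
```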

Relevance: 100.00%

Abstract:

One dominant feature of modern manufacturing chains is the movement of goods. Manufacturing companies would remain an unprofitable investment if the supply and logistics of raw materials, semi-finished products, or final goods were not handled in an effective way. Both levels of a modern manufacturing chain, actual production and logistics, are characterized by continuous data creation at a much faster rate than the data can be meaningfully analyzed and acted upon manually. Often, instant and reliable decisions need to be taken based on huge, previously inconceivable amounts of heterogeneous, contradictory, or incomplete data. The paper highlights aspects of information flows related to business process data visibility and observability in modern manufacturing networks. An information management platform developed in the framework of the EU FP7 project ADVANCE is presented.

Relevance: 100.00%

Abstract:

Encyclopaedia slavica sanctorum (eslavsanct.net) is designed as a complex, heterogeneous multimedia product. It is part of the project Encyclopaedia Slavica Sanctorum: Saints and Holy Places in Bulgaria (in electronic and Gutenberg versions). By 2013, its web-based platform for online management and presentation of structured digital content had been prepared and numerous materials had been entered. The platform is developed using the server technologies PHP and MySQL, with HTML, JavaScript, and CSS on the client side. Searches in the e-ESS can be made by 12 different parameters, or combinations of them, such as saints' or feasts' names, type of sainthood, types of texts dedicated to the saints, dates of saints' commemorations, and several others. Both guests and registered users can search the e-ESS, but the latter have access to much more information, including the publications of original sources. The e-platform also allows statistics to be compiled on what has been searched and read. The software used for content and access analysis is the BI tool QlikView. As an analysis services provider, it is connected to the e-ESS object repository and tracking services through a previously created data warehouse. The data warehouse is updated automatically, providing a real-time analytics solution. The paper discusses some of the statistics on the use of the e-ESS: the activities of the editors, users, and guests, the types of searches, and the most frequently viewed objects, such as the date of January 1 and the article on St. Basil the Great, which is one of the richest encyclopaedia articles and includes both metadata and published original sources, from medieval Slavonic manuscripts as well as popular culture records.
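
As an illustration of the kind of usage statistics mentioned above (most-viewed objects and activity per user group), here is a minimal sketch that aggregates hypothetical page-view tracking records in Python; the record structure is invented and does not reflect the e-ESS warehouse schema or QlikView.

```python
from collections import Counter

# Invented tracking records: (user_role, object_viewed).
views = [
    ("guest", "January 1"),
    ("registered", "St. Basil the Great"),
    ("guest", "St. Basil the Great"),
    ("editor", "St. Basil the Great"),
    ("guest", "January 1"),
]

# Most frequently viewed objects overall.
top_objects = Counter(obj for _, obj in views).most_common(3)
print("Most viewed:", top_objects)

# Activity broken down by user role (guests, registered users, editors).
by_role = Counter(role for role, _ in views)
print("Views by role:", dict(by_role))
```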

Relevance: 100.00%

Abstract:

The integration of automation (specifically Global Positioning Systems (GPS)) and Information and Communications Technology (ICT) through the creation of a Total Jobsite Management Tool (TJMT) in construction contractor companies can revolutionize the way contractors do business. The key to this integration is the collection and processing of real-time GPS data produced on the jobsite for use in project management applications. This research study established the need for an effective planning and implementation framework to assist construction contractor companies in navigating the terrain of GPS and ICT use. An Implementation Framework was developed using the Action Research approach. The framework consists of three components: (i) an ICT Infrastructure Model, (ii) an Organizational Restructuring Model, and (iii) a Cost/Benefit Analysis. The conceptual ICT infrastructure model was developed to show decision makers within highway construction companies how to collect, process, and use GPS data for project management applications. The organizational restructuring model was developed to assist companies in the analysis and redesign of business processes, data flows, core job responsibilities, and their organizational structure in order to obtain the maximum benefit at the least cost when implementing GPS as a TJMT. A cost-benefit analysis, identifying and quantifying both direct and indirect costs and benefits, was performed in the study to clearly demonstrate the advantages of using GPS as a TJMT. Finally, the study revealed that in order to successfully implement a program to utilize GPS data as a TJMT, it is important for construction companies to understand the various implementation and transitioning issues that arise when adopting this new technology and business strategy. In the study, Factors for Success were identified and ranked to allow a construction company to understand the factors that may contribute to or detract from the prospect of success during implementation. The Implementation Framework developed as a result of this study will serve to guide highway construction companies in the successful integration of GPS and ICT technologies for use as a TJMT.
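
As a simple numeric illustration of the cost-benefit component described above, here is a sketch that computes a net present value and benefit-cost ratio for a hypothetical GPS/ICT rollout; the cash-flow figures and discount rate are invented, not the study's results.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows, with cash_flows[0] at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical rollout: year-0 investment, then yearly running costs and
# yearly benefits (direct savings plus estimated indirect benefits).
discount_rate = 0.08
costs    = [-250_000, -40_000, -40_000, -40_000, -40_000]   # hardware, licences, support
benefits = [0, 120_000, 140_000, 150_000, 150_000]          # productivity and rework savings

net = [c + b for c, b in zip(costs, benefits)]
print(f"NPV: {npv(discount_rate, net):,.0f}")
print(f"Benefit-cost ratio: {npv(discount_rate, benefits) / -npv(discount_rate, costs):.2f}")
```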

Relevance: 100.00%

Abstract:

During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives; most notably, within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market-share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24 year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model comprised of nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate—to educational leaders, researchers, and institutional-research/business-intelligence professionals—the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
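
As a methodological illustration of the discrete-time survival approach mentioned above, here is a minimal sketch that expands a small, invented student cohort into a person-period data set and fits a logistic regression on period indicators plus a seat-time covariate; the data and variable names are hypothetical and do not reproduce the study's model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented cohort: terms until completing the gen-ed math course (or censoring),
# with a long vs. short seat-time indicator.
students = pd.DataFrame({
    "student": range(1, 7),
    "long_seat_time": [1, 1, 1, 0, 0, 0],
    "terms_observed": [2, 4, 4, 1, 3, 4],
    "completed":      [1, 1, 0, 1, 1, 0],   # 0 = censored at the last observed term
})

# Expand to person-period format: one row per student per term at risk.
rows = []
for s in students.itertuples():
    for t in range(1, s.terms_observed + 1):
        event = int(s.completed == 1 and t == s.terms_observed)
        rows.append({"student": s.student, "term": t,
                     "long_seat_time": s.long_seat_time, "event": event})
person_period = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression with term dummies plus covariate.
model = smf.logit("event ~ C(term) + long_seat_time", data=person_period).fit(disp=False)
print(model.summary())
```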