891 results for Business Intelligence, BI Mobile, OBI11g, Decision Support System, Data Warehouse


Relevance:

100.00%

Publisher:

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. This paper describes a complete system for creating visual mosaics of the seabed. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image-registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
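
To make the registration step concrete, the following minimal numpy sketch estimates the 2D projective transform (homography) between two consecutive images from matched point pairs, such as the detected dot centres, using the normalised direct linear transform. It illustrates the general technique only, not the authors' implementation; the input correspondences are assumed to be given.

    import numpy as np

    def estimate_homography(src, dst):
        # Estimate the 3x3 homography H mapping src -> dst from N >= 4
        # point correspondences via the normalised DLT.
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        assert src.shape == dst.shape and src.shape[0] >= 4

        def normalize(pts):
            # Translate the centroid to the origin and scale so the mean
            # distance from it is sqrt(2) (improves numerical conditioning).
            c = pts.mean(axis=0)
            s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
            T = np.array([[s, 0, -s * c[0]],
                          [0, s, -s * c[1]],
                          [0, 0, 1]])
            pts_h = np.column_stack([pts, np.ones(len(pts))])
            return (T @ pts_h.T).T, T

        p, T1 = normalize(src)
        q, T2 = normalize(dst)

        # Each correspondence contributes two rows of the 2N x 9 DLT matrix.
        rows = []
        for (x, y, _), (u, v, _) in zip(p, q):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.array(rows)

        # The solution is the right singular vector of A with the smallest
        # singular value.
        _, _, Vt = np.linalg.svd(A)
        Hn = Vt[-1].reshape(3, 3)

        # Undo the normalisation and fix the scale so H[2, 2] == 1.
        H = np.linalg.inv(T2) @ Hn @ T1
        return H / H[2, 2]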

Relevance:

100.00%

Publisher:

Abstract:

The presentation will focus on the reasons for deploying an e-reader loan service at a virtual university library as part of an e-learning support system to aid user mobility, the concentration of documentary and electronic resources, and ICT skills acquisition, using the example of the UOC pilot project and its subsequent consolidation. E-reader devices at the UOC are an extension of the Virtual Campus. They are offered as a tool to aid user mobility, access to documentary and electronic resources, and the development of information and IT skills. The e-reader loan service began as a pilot project in 2009 and was consolidated in 2010. The UOC Library piloted the service from October to December 2009 with 15 devices and 37 loans. The project was extended into 2010 with the same number of devices and 218 loans (October 2010). In 2011 the service is to grow to 190 devices, offering an improved service. The reasons for deploying an e-reader loan service at the UOC are the following: a) to offer library users access to the many kinds of learning materials available at the UOC through a single device that facilitates student study and learning; b) to enhance access to and use of the e-book collections subscribed to by the UOC Library; c) to align with UOC strategy on the development of learning materials in multiple formats and to promote e-devices as an extension of the UOC Virtual Campus; and d) to increase UOC Library visibility within and beyond the institution. The presentation will conclude with an analysis of the key issues to be taken into account at a university library: the e-reader market, the unclear business and licensing model for e-book content, and the library's role in promoting new reading formats to increase the use of e-collections.

Relevance:

100.00%

Publisher:

Abstract:

Emotions are crucial to users' decision making in recommendation processes. We first introduce ambient recommender systems, which arise from the analysis of new trends in the exploitation of the emotional context in the next generation of recommender systems. We then present some results of these new trends in real-world applications through the smart prediction assistant (SPA) platform, in an intelligent learning guide with more than three million users. While most approaches to recommendation have focused on algorithm performance, SPA makes recommendations to users on the basis of emotional information acquired in an incremental way. This article provides a cross-disciplinary perspective on achieving this goal in such recommender systems through the SPA platform. The methodology applied in SPA is the result of a series of technology-transfer projects for large real-world recommender systems.
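
As a rough illustration of recommending on incrementally acquired emotional information, the sketch below keeps a running emotional-context vector (an exponential moving average of observed signals) and lets it bias item scores. All names, emotions and the scoring scheme are hypothetical; the actual SPA algorithms are not described in the abstract.

    import numpy as np

    EMOTIONS = ["joy", "frustration", "boredom", "surprise"]

    class EmotionAwareRecommender:
        def __init__(self, n_items, alpha=0.2, seed=0):
            rng = np.random.default_rng(seed)
            self.alpha = alpha                    # EMA smoothing factor
            self.mood = np.zeros(len(EMOTIONS))   # running emotional context
            # How well each item suits each emotional state (assumed known).
            self.affinity = rng.random((n_items, len(EMOTIONS)))

        def observe(self, signal):
            # Fold one new emotion observation (one value per emotion,
            # e.g. inferred from interaction patterns) into the context.
            self.mood = (1 - self.alpha) * self.mood + self.alpha * np.asarray(signal)

        def recommend(self, base_scores, k=3):
            # Blend content-based scores with the current emotional context.
            emotional_fit = self.affinity @ self.mood
            return np.argsort(-(np.asarray(base_scores) + emotional_fit))[:k]

    rec = EmotionAwareRecommender(n_items=20)
    rec.observe([0.9, 0.1, 0.0, 0.2])             # e.g. a joyful session
    print(rec.recommend(base_scores=np.ones(20)))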

Relevance:

100.00%

Publisher:

Abstract:

In this project a dashboard will be built with the Oracle BI tool, including the construction of the data warehouse and the ETL process.
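
As an illustration of the ETL-into-warehouse step, here is a minimal Python sketch that loads a hypothetical sales.csv source into a tiny star schema (one dimension table plus one fact table) in SQLite. The actual sources, schema and Oracle BI semantic layer of the project are assumptions not described in the abstract.

    import csv
    import sqlite3

    # Assumed source columns: product_id, name, category, date, units, price.
    conn = sqlite3.connect("dw.sqlite")
    cur = conn.cursor()

    # A tiny star schema: one dimension table plus one fact table.
    cur.executescript("""
    CREATE TABLE IF NOT EXISTS dim_product (
        product_id INTEGER PRIMARY KEY,
        name TEXT,
        category TEXT
    );
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_date TEXT,
        product_id INTEGER REFERENCES dim_product(product_id),
        units INTEGER,
        revenue REAL
    );
    """)

    with open("sales.csv", newline="") as f:                      # extract
        for row in csv.DictReader(f):
            # Load the dimension idempotently, then transform + load the fact.
            cur.execute("INSERT OR IGNORE INTO dim_product VALUES (?, ?, ?)",
                        (row["product_id"], row["name"], row["category"]))
            cur.execute("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                        (row["date"], row["product_id"], int(row["units"]),
                         int(row["units"]) * float(row["price"])))
    conn.commit()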

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Maintaining therapeutic concentrations of drugs with a narrow therapeutic window is a complex task. Several computer systems have been designed to help doctors determine optimum drug dosage. Significant improvements in health care could be achieved if computer advice improved health outcomes and could be implemented in routine practice in a cost-effective fashion. This is an updated version of an earlier Cochrane systematic review, by Walton et al., published in 2001. OBJECTIVES: To assess whether computerised advice on drug dosage has beneficial effects on the process or outcome of health care. SEARCH STRATEGY: We searched the Cochrane Effective Practice and Organisation of Care Group specialised register (June 1996 to December 2006), MEDLINE (1966 to December 2006) and EMBASE (1980 to December 2006), hand-searched the journal Therapeutic Drug Monitoring (1979 to March 2007) and the Journal of the American Medical Informatics Association (1996 to March 2007), and checked reference lists from primary articles. SELECTION CRITERIA: Randomised controlled trials, controlled trials, controlled before-and-after studies and interrupted time series analyses of computerised advice on drug dosage were included. The participants were health professionals responsible for patient care. The outcomes were: any objectively measured change in the behaviour of the health care provider (such as changes in the dose of drug used); any change in the health of patients resulting from computerised advice (such as adverse reactions to drugs). DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and assessed study quality. MAIN RESULTS: Twenty-six comparisons (23 articles) were included (compared with fifteen comparisons in the original review), covering a wide range of drugs in inpatient and outpatient settings. Interventions usually targeted doctors, although some studies attempted to influence prescriptions by pharmacists and nurses. Although all studies used reliable outcome measures, their quality was generally low. Computerised advice for drug dosage gave significant benefits by:

1. increasing the initial dose (standardised mean difference 1.12, 95% CI 0.33 to 1.92);
2. increasing serum concentrations (standardised mean difference 1.12, 95% CI 0.43 to 1.82);
3. reducing the time to therapeutic stabilisation (standardised mean difference -0.55, 95% CI -1.03 to -0.08);
4. reducing the risk of toxic drug levels (rate ratio 0.45, 95% CI 0.30 to 0.70);
5. reducing the length of hospital stay (standardised mean difference -0.35, 95% CI -0.52 to -0.17).

AUTHORS' CONCLUSIONS: This review suggests that computerised advice for drug dosage has some benefits: it increased the initial dose of drug, increased serum drug concentrations and led to more rapid therapeutic control. It also reduced the risk of toxic drug levels and the length of time spent in the hospital. However, it had no effect on adverse reactions. In addition, there was no evidence to suggest that particular decision-support technical features (such as integration into a computerised physician order entry system) or aspects of the organisation of care (such as the setting) could optimise the effect of computerised advice.
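
For readers unfamiliar with the effect measure quoted above, this small sketch computes a standardised mean difference (Cohen's d) with an approximate 95% confidence interval from invented summary statistics; Cochrane reviews typically report the small-sample-adjusted Hedges' g, so this is illustrative only.

    import math

    def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
        # Pooled standard deviation of the two groups.
        sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp                          # difference in pooled-SD units
        # Standard approximation to the standard error of d.
        se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
        return d, (d - 1.96 * se, d + 1.96 * se)

    # Hypothetical trial: initial dose with computerised advice vs control.
    d, (lo, hi) = smd_with_ci(m1=520, sd1=90, n1=40, m2=430, sd2=85, n2=40)
    print(f"SMD = {d:.2f}, 95% CI {lo:.2f} to {hi:.2f}")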

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
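
The rule is simple enough to transcribe directly into code. The sketch below implements the point assignments and risk strata exactly as stated in the abstract (the function and argument names are my own):

    def influenza_score(fever_and_cough, myalgias, onset_under_48h, chills_or_sweats):
        # 2 points for fever plus cough, 2 for myalgias,
        # 1 each for symptom duration < 48 h and for chills or sweats.
        score = 2 * fever_and_cough + 2 * myalgias + onset_under_48h + chills_or_sweats
        if score <= 2:
            return score, "low risk (~8%)"
        if score == 3:
            return score, "moderate risk (~30%)"
        return score, "high risk (~59%)"

    print(influenza_score(fever_and_cough=True, myalgias=True,
                          onset_under_48h=True, chills_or_sweats=False))
    # -> (5, 'high risk (~59%)')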

Relevance:

100.00%

Publisher:

Abstract:

The new recommendations on the pharmacological treatment of type 2 diabetes have introduced two important changes. The first is the adoption of common strategies by the European and American diabetes societies. The second, certainly the most significant, is the development of a patient-centred approach, suggesting therapies that take the patient's preferences into account and making use of decision-support tools. The individual approach integrates six factors: the patient's capacity and motivation to manage the illness and its treatment, the risk of hypoglycemia, life expectancy, the presence of co-morbidities and vascular complications, and the financial resources of the patient and of the healthcare system. Treatment guidelines for cardiovascular risk reduction in diabetic patients remain the last point to be developed.

Relevance:

100.00%

Publisher:

Abstract:

In Switzerland there is a strong movement at the national policy level towards strengthening patient rights and patient involvement in health care decisions. Yet there is no national programme promoting shared decision making (SDM). The first decision-support tools for the counselling process (for prenatal diagnosis and screening) have been developed and implemented. Although Swiss doctors acknowledge that shared decision making is important, hierarchical structures and asymmetric physician-patient relationships still prevail. Recent years have seen some promising activities regarding the training of medical students and the development of patient support programmes. Swiss direct democracy and the habit of consensual decision making and citizen involvement in general may provide fertile ground for SDM development in the primary care setting.

Relevance:

100.00%

Publisher:

Abstract:

The project consists mainly of defining and developing a business-metadata management tool for key performance indicators. Currently no BI tool allows business information to be stored beyond the technical specifications. The application will act as a centralised repository for the definitions of the company's key performance indicators, including each indicator's metadata (such as the person responsible for the indicator, its analysis dimensions, historical depth, refresh frequency, the business processes involved, etc.).
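
A minimal sketch of what such a centralised KPI-metadata repository might hold, with fields taken from the list above; the field names and the example entry are assumptions, not the project's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class KpiMetadata:
        name: str
        owner: str                       # person responsible for the indicator
        analysis_dimensions: list[str]   # e.g. ["region", "product", "month"]
        history_depth_months: int        # how far back values are kept
        refresh_frequency: str           # e.g. "daily", "monthly"
        business_processes: list[str] = field(default_factory=list)
        definition: str = ""             # the agreed business definition

    repository: dict[str, KpiMetadata] = {}

    def register(kpi: KpiMetadata) -> None:
        # Add a KPI definition to the centralised repository.
        repository[kpi.name] = kpi

    register(KpiMetadata(
        name="on_time_delivery_rate",
        owner="Head of Logistics",
        analysis_dimensions=["region", "carrier", "month"],
        history_depth_months=36,
        refresh_frequency="daily",
        business_processes=["order fulfilment"],
        definition="Orders delivered by the promised date / orders shipped."))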

Relevance:

100.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with developing techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability-density modelling problems in high-dimensional spaces composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited for implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modelling, prediction and automatic mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to their software implementation: the multilayer perceptron (MLP, the best-known algorithm in artificial intelligence), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the first and a very important step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is very simple and has excellent interpretation and visualisation properties.

An important part of the thesis deals with a currently hot topic: the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools and guided how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; the classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and the assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly and easy-to-use interface.
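
Since the GRNN is central to the automatic-mapping results, a minimal sketch may help: a GRNN prediction is a kernel-weighted average of the training targets, mathematically equivalent to Nadaraya-Watson kernel regression with a Gaussian kernel and a single smoothing parameter sigma. This numpy version is illustrative only and is not the Machine Learning Office implementation.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=1.0):
        # General Regression Neural Network prediction: a Gaussian-kernel
        # weighted average of training targets; sigma is the sole smoothing
        # parameter.
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * sigma**2))          # pattern-layer activations
        return (w @ y_train) / w.sum(axis=1)      # summation / division layers

    # Toy usage: interpolate noisy measurements along a 1-D transect.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
    Xq = np.linspace(0, 10, 5)[:, None]
    print(grnn_predict(X, y, Xq, sigma=0.5))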

Relevance:

100.00%

Publisher:

Abstract:

This document defines the construction and exploitation of a data warehouse for the Fundació d'Estudis per a la Conducció Responsable. The aim of the project is to homogenise the information the foundation receives from various sources and in different formats, consolidate it in a single data warehouse, and provide tools that facilitate its exploitation and analysis. Achieving these goals is essential for management to understand the evolution of road vehicle traffic in Catalonia and to minimise risks in any decision making.
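
The homogenisation step might look like the following pandas sketch, which renames the columns of two hypothetical feeds to a canonical layout and consolidates them into one staging table; the foundation's real sources and field names are not described in the abstract.

    import pandas as pd

    counts = pd.read_csv("traffic_counts.csv")     # assumed: data, comarca, vehicles
    incidents = pd.read_excel("incidents.xlsx")    # assumed: Date, Region, Count

    canonical = ["date", "region", "value", "source"]
    a = counts.rename(columns={"data": "date", "comarca": "region",
                               "vehicles": "value"}).assign(source="counts")
    b = incidents.rename(columns={"Date": "date", "Region": "region",
                                  "Count": "value"}).assign(source="incidents")

    # Consolidate into one table with uniform types, ready to load into the
    # data warehouse.
    unified = pd.concat([a[canonical], b[canonical]], ignore_index=True)
    unified["date"] = pd.to_datetime(unified["date"])
    unified.to_parquet("staging_traffic.parquet")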

Relevance:

100.00%

Publisher:

Abstract:

The implementation of an enterprise resource planning (ERP) system, and the changes it brings to product cost accounting, pose challenges for a company. A company in the metal industry encountered these same challenges when implementing the SAP R/3 ERP system and its product-costing functionality. The SAP R/3 product-costing logic needs information from outside the system, and ignoring this directly affects costing accuracy. This master's thesis develops both a standardised process and a costing system for calculating the required activity costs for the various load points of a steel service centre, as well as the cost roll-up values. The calculated values form the inputs required for the SAP R/3 product-costing master data. The goal is to foster the creation of transparent cost information. The thesis is based on the so-called waterfall model (SDLC). First, the boundary conditions of the environment in which product costing is carried out are identified; these impose inflexible components on the costing system being developed, whereas flexible components give the system freedom. By combining the inflexible and flexible components, a system is achieved that can compensate for the deficiencies of SAP R/3 product costing.
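
As a toy illustration of the activity-cost calculation, the sketch below derives an hourly activity rate per load point as period cost divided by practical capacity; all load points and figures are invented, and the thesis system is necessarily more elaborate.

    # load point: (period cost in EUR, practical capacity in machine hours)
    load_points = {
        "slitting_line":  (180_000, 3_200),
        "cut_to_length":  (240_000, 4_000),
        "plasma_cutting": (150_000, 2_500),
    }

    activity_rates = {lp: cost / hours for lp, (cost, hours) in load_points.items()}

    for lp, rate in activity_rates.items():
        # Such EUR/hour rates would feed the product-costing master data
        # (e.g. as activity-type prices) maintained outside SAP R/3.
        print(f"{lp}: {rate:.2f} EUR/h")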

Relevance:

100.00%

Publisher:

Abstract:

As industrial companies' maintenance strategies shift towards preventive maintenance, new opportunities open up for industrial services. Today, one of the starting points for a successful company is developing its product in a customer-driven way. The company must create practices for gathering information about customer needs. This thesis covers the stages of a customer-needs assessment for developing a high-pressure steam piping modification service, and methods are selected both for gathering information and for analysing it. The customer-needs assessment considers not only the customer but also the influence on the product of the company itself and of other parties, such as the authorities. The assessment makes it possible to set concrete targets for the modification work and to develop solutions that serve the customer even better.

Relevance:

100.00%

Publisher:

Abstract:

This master's thesis concerns the implementation and development of the case company's reporting and planning system. The goal of the work is to implement reporting with the new system. The theoretical part of the thesis examines the role of management accounting in supporting management, the concept of business intelligence, and the challenges of reporting-system projects from the management accounting perspective. The empirical part describes the implementation of reporting with the new system, produces an architecture description of the reporting and planning system, assesses the success of the implementation project and presents proposals for further development. The thesis is part of the development of the case company's management accounting and a continuation of an earlier thesis that studied the development of the company's reporting. The ongoing reporting-and-planning-system project is part of the company's wider IT-systems integration project, so the thesis also relates to the development and renewal of the company's information systems. In the thesis, the case company's financial reporting environment was built in the new system and management accounting reports were developed. The financial reporting developed in the work meets the targets set for the reporting-and-planning-system project: it accommodates different needs, is transparent and gives an overall picture of the object under review, while also allowing detailed examination of transactions. Reporting is based on standard reports distributed by e-mail and on self-service reporting performed by the user in the reporting portal. In addition to financial monitoring, reports were produced for monitoring technical data and working hours. The case company's current management accounting reporting is implemented with the system presented in the work.

Relevance:

100.00%

Publisher:

Abstract:

The main objective of the study was to map the innovation capability of Beneq Oy and to describe the company's innovation process. In addition, the study identified general selection criteria for acquiring business intelligence (BI) software. The study was important so that the target company can develop its innovativeness in the future and acquire software that supports its innovation activities. The work was carried out as part of the TEKES-funded LIIMA project, which supports companies' innovativeness. First, innovation activities and processes were examined in theory. Then the company's practices were investigated by interviewing employees and by reviewing product and customer cases. The company's current information system was explored in practice, and selection criteria for BI software were collected from the literature. The work produced assessments of the innovation capability of Beneq Oy and its partner company. The company's innovation process was also defined and illustrated as a diagram. In addition, general selection criteria for acquiring BI software were established. Based on the results, Beneq Oy can effectively develop its innovativeness in the future. The innovation-process model developed in this work can probably also be exploited more widely in innovation research.