900 results for Artificial intelligence -- Data processing


Relevance:

100.00%

Publisher:

Abstract:

The objective of this thesis is the design and development of a BI system and the related reporting for a services company, implemented entirely with the Microsoft Business Intelligence suite.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the relationship between annual report disclosure, market liquidity, and the cost of capital for firms listed on the Deutsche Börse. Disclosure is comprehensively measured using the innovative Artificial Intelligence Measurement of Disclosure (AIMD). The results show that annual report disclosure enhances market liquidity by changing investors’ expectations and inducing portfolio adjustments. Trading frictions are negatively associated with disclosure. The study provides evidence for a capital-cost-reduction effect of disclosure based on an analysis of investors’ return requirements and market values. Altogether, no evidence is found that information processing in the German capital market is structurally different from other markets.

Relevance:

100.00%

Publisher:

Abstract:

Given arbitrary pictures, we explore the possibility of using new techniques from computer vision and artificial intelligence to create customized visual games on the fly. These include popular games such as coloring books, link-the-dot, and spot-the-difference. The feasibility of these systems is discussed, and we describe prototype implementations that work well in practice in an automatic or semi-automatic way.
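
As an illustration of the kind of pipeline such a system could use, the minimal sketch below turns a photograph into a coloring-book-style line drawing with OpenCV edge detection. The function name, file names, and threshold values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: derive a coloring-book style line drawing from a photo.
# Uses OpenCV Canny edge detection; thresholds are assumed, not the paper's values.
import cv2

def coloring_book_page(input_path: str, output_path: str) -> None:
    image = cv2.imread(input_path)                  # load the source picture
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # work on intensities only
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress fine texture and noise
    edges = cv2.Canny(blurred, 50, 150)             # detect salient contours
    page = cv2.bitwise_not(edges)                   # black lines on a white page
    cv2.imwrite(output_path, page)

if __name__ == "__main__":
    coloring_book_page("photo.jpg", "coloring_page.png")
```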

Relevance:

100.00%

Publisher:

Abstract:

Applying location-focused data protection law within a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation has introduced many changes to the current data protection framework, the complexities of data processing in the cloud involve multiple layers and intermediary actors that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a novel vision for further enhancing Internet of Things services. Based on a variety of data (such as location data, ontology-backed search queries, and indoor and outdoor conditions), the Prometheus framework is intended to support users with helpful recommendations and information before they search for context-aware data. Drawing on concepts from artificial intelligence, Prometheus proposes answers readjusted to the user under a multitude of conditions. A number of potential applications of the Prometheus framework are illustrated. Added value and possible future studies are discussed in the conclusion.

Relevance:

100.00%

Publisher:

Abstract:

Nurses prepare knowledge representations, or summaries of patient clinical data, each shift. These knowledge representations serve multiple purposes, including support of working memory, workload organization and prioritization, critical thinking, and reflection. This summary is integral to internal knowledge representations, working memory, and decision-making. Study of this nurse knowledge representation resulted in the development of a taxonomy of knowledge representations necessary to nursing practice. This paper describes the methods used to elicit the knowledge representations and structures necessary for the work of clinical nurses, describes the development of a taxonomy of this knowledge representation, and discusses translation of this methodology to the cognitive artifacts of other disciplines. Understanding the development and purpose of practitioners' knowledge representations provides important direction to informaticists seeking to create information technology alternatives. The outcome of this paper is a suggested process template for the transition of cognitive artifacts to an information system.

Relevance:

100.00%

Publisher:

Abstract:

For the most part, electronic government (e-government for short) aims to make digital public services available to citizens, companies, and organizations. To that end, e-government comprises the application of Information and Communications Technology (ICT) to support government operations and provide better governmental services than are possible with traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, thereby transforming the entire spectrum of relationships of public bodies with their citizens, businesses, and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform citizens, businesses, and other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by appropriate handling of the participants' data, a personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of participant-tailored knowledge structure, a better-quality governmental service may be provided (i.e., expressed through individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the government-participant relationship. The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR thereby stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, will be briefly introduced and discussed.
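
The abstract gives no implementation detail, but the central idea of automatically informing participants about changes of Web content can be illustrated with a minimal polling-and-hashing sketch. The URL, function names, and the print-based notification below are hypothetical stand-ins, not part of the Web KnowARR framework itself.

```python
# Hypothetical sketch of the change-monitoring idea: poll a set of government
# pages and flag subscribers when the content changes. URL, storage, and the
# notification step are illustrative assumptions.
import hashlib
import urllib.request

watched = {"https://example.gov/benefits": None}  # page -> last seen content hash

def fetch_hash(url: str) -> str:
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def check_for_changes() -> None:
    for url, last_hash in watched.items():
        current = fetch_hash(url)
        if last_hash is not None and current != last_hash:
            print(f"Content changed: {url}")  # stand-in for notifying participants
        watched[url] = current
```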

Relevance:

100.00%

Publisher:

Abstract:

Our research project develops an intranet search engine with concept-browsing functionality, where the user is able to navigate the conceptual level in an interactive, automatically generated knowledge map. This knowledge map visualizes tacit, implicit knowledge, extracted from the intranet, as a network of semantic concepts. Inductive and deductive methods are combined; a text analytics engine extracts knowledge structures from data inductively, and the enterprise ontology provides a backbone structure to the process deductively. In addition to performing conventional keyword search, the user can browse the semantic network of concepts and associations to find documents and data records. Also, the user can expand and edit the knowledge network directly. As a vision, we propose a knowledge-management system that provides concept browsing, based on a knowledge warehouse layer on top of a heterogeneous knowledge base with various systems interfaces. Such a concept browser will empower knowledge workers to interact with knowledge structures.
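
As a simplified stand-in for the inductive text-analytics step described above, the sketch below builds a small concept co-occurrence network with networkx. The documents and the concept list are toy data, and the approach is only one plausible way to derive such a knowledge map.

```python
# Simplified stand-in for the inductive step: link concepts that co-occur in
# the same document, with edge weight counting co-occurrences. Toy data only.
from itertools import combinations
import networkx as nx

documents = [
    "invoice approval workflow for procurement",
    "procurement contract and supplier risk",
    "supplier risk assessment workflow",
]
concepts = ["invoice", "procurement", "contract", "supplier", "risk", "workflow"]

graph = nx.Graph()
for doc in documents:
    present = [c for c in concepts if c in doc]   # concepts found in this document
    for a, b in combinations(present, 2):         # link every co-occurring pair
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1            # heavier edge = stronger association
        else:
            graph.add_edge(a, b, weight=1)

print(sorted(graph.edges(data=True)))
```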

Relevance:

100.00%

Publisher:

Abstract:

Navigation of deep space probes is most commonly operated using the spacecraft Doppler tracking technique. Orbital parameters are determined from a series of repeated measurements of the frequency shift of a microwave carrier over a given integration time. Currently, both ESA and NASA operate antennas at several sites around the world to ensure the tracking of deep space probes. Only a small number of software packages are used today to process Doppler observations. The Astronomical Institute of the University of Bern (AIUB) has recently started the development of Doppler data processing capabilities within the Bernese GNSS Software. This software has been used extensively for Precise Orbit Determination of Earth-orbiting satellites using GPS data collected by on-board receivers and for subsequent determination of the Earth gravity field. In this paper, we present the current status of the Doppler data modeling and orbit determination capabilities in the Bernese GNSS Software using GRAIL data. In particular, we focus on the implemented orbit determination procedure used for the combined analysis of Doppler and inter-satellite Ka-band data. We show that even at this early stage of development we can achieve an accuracy of a few mHz on two-way S-band Doppler observations and of 2 µm/s on KBRR data from the GRAIL primary mission phase.
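
To give a sense of what a few mHz on a two-way S-band Doppler observation means for velocity, the short calculation below converts a frequency-shift uncertainty into a line-of-sight range-rate uncertainty using the standard two-way Doppler relation. The 2.3 GHz carrier is an assumed, typical S-band value, not a figure from the paper.

```python
# Back-of-the-envelope conversion from a two-way Doppler shift to range rate.
# Two-way Doppler: delta_f = -2 * (v_r / c) * f_carrier, so |v_r| = c * delta_f / (2 * f_carrier).
# The 2.3 GHz carrier frequency is an assumed typical S-band value.
C = 299_792_458.0   # speed of light [m/s]
F_CARRIER = 2.3e9   # assumed S-band carrier frequency [Hz]

def range_rate_from_doppler(delta_f_hz: float) -> float:
    """Line-of-sight velocity [m/s] corresponding to a two-way Doppler shift [Hz]."""
    return C * delta_f_hz / (2.0 * F_CARRIER)

# A 3 mHz uncertainty maps to roughly 0.2 mm/s in range rate.
print(range_rate_from_doppler(3e-3))  # ~1.96e-4 m/s
```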

Relevance:

100.00%

Publisher:

Abstract:

A wide variety of spatial data collection efforts are ongoing throughout local, state, and federal agencies, private firms, and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map, and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want and need. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services, and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
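
A minimal sketch, assuming hypothetical field names, of the kind of record the Public Commons approach implies: metadata, an open-access license, and documented parent lineage for value-added data sets. The thesis does not prescribe a schema; this is purely illustrative.

```python
# Hypothetical record structure for a shared geospatial data set, illustrating
# metadata, an open license, and parent-lineage documentation. The field names
# and example values are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeoDataRecord:
    title: str
    abstract: str
    license: str                                          # e.g. an open-access license identifier
    contributor: str
    parent_ids: List[str] = field(default_factory=list)   # lineage: datasets this one derives from

roads_v2 = GeoDataRecord(
    title="County roads (value-added)",
    abstract="Road centerlines with added speed-limit attributes.",
    license="CC-BY-4.0",
    contributor="Example County GIS Office",
    parent_ids=["county-roads-v1"],
)
print(roads_v2)
```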

Relevance:

100.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis
We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis
Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This current lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data
Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature by 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate the content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms
Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis used here can be used to identify data elements with high cognitive demands.
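
For readers comparing the error rates pooled above, "errors per 10,000 fields" is a simple normalized count; the sketch below shows the arithmetic with made-up counts that are not taken from the pooled analysis.

```python
# Error rates "per 10,000 fields" are normalized counts: errors / fields * 10,000.
# The counts below are made-up illustrations, not data from the pooled analysis.
def errors_per_10k(error_count: int, field_count: int) -> float:
    return error_count / field_count * 10_000

print(errors_per_10k(14, 70_000))    # 2.0 errors per 10,000 fields
print(errors_per_10k(3_514, 7_000))  # 5020.0 errors per 10,000 fields (near the upper bound cited)
```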

Relevance:

100.00%

Publisher:

Abstract:

Analysis for micromolar concentrations of nitrate plus nitrite, nitrite, phosphate, silicate, and ammonia was undertaken on a SEAL Analytical UK Ltd AA3 segmented flow autoanalyser, following methods described by Kirkwood (1996). Samples were drawn from Niskin bottles on the CTD into 15 ml polycarbonate centrifuge tubes and kept refrigerated at approximately 4°C until analysis, which generally commenced within 30 minutes. Overall, 23 runs with 597 samples were analysed: a total of 502 CTD samples, 69 underway samples, and 26 from other sources. An artificial seawater matrix (ASW) of 40 g/litre sodium chloride was used as the inter-sample wash and standard matrix. The nutrient-free status of this solution was checked by running Ocean Scientific International (OSI) low nutrient seawater (LNS) on every run. A single set of mixed standards was made up by diluting 5 mM solutions, made from weighed dried salts in 1 litre of ASW, into plastic 250 ml volumetric flasks that had been cleaned by washing in MilliQ water (MQ). Data processing was undertaken using SEAL Analytical UK Ltd proprietary software (AACE 6.07) and was performed within a few hours of the run being finished. The sample time was 60 seconds and the wash time was 30 seconds. The lines were washed daily with wash solutions specific to each chemistry, comprising MQ, MQ and SDS, MQ and Triton-X, or MQ and Brij-35. Three times during the cruise, the phosphate and silicate channels were washed with a weak sodium hypochlorite solution.
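
The working standards described above follow the usual dilution relation C1·V1 = C2·V2; the sketch below shows that arithmetic for the 5 mM stock and 250 ml flasks, with an assumed aliquot volume since the report does not state the exact dilution scheme.

```python
# Dilution arithmetic for preparing working standards from a 5 mM stock:
# C1 * V1 = C2 * V2  =>  C2 = C1 * V1 / V2.
# The 2.5 ml aliquot is an assumed example; the report does not give the scheme.
STOCK_MM = 5.0     # stock concentration [mmol/l]
FLASK_ML = 250.0   # volumetric flask volume [ml]

def working_standard_um(aliquot_ml: float) -> float:
    """Concentration [micromol/l] after diluting an aliquot of stock to 250 ml."""
    return STOCK_MM * 1000.0 * aliquot_ml / FLASK_ML

print(working_standard_um(2.5))  # 50.0 micromolar working standard
```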

Relevance:

100.00%

Publisher:

Abstract:

In recent years, profiling floats, which form the basis of the successful international Argo observatory, have also been considered as platforms for marine biogeochemical research. This study showcases the utility of floats as a novel tool for combined gas measurements of CO2 partial pressure (pCO2) and O2. These float prototypes were equipped with a small-sized, submersible pCO2 sensor and an optode O2 sensor for high-resolution measurements in the surface ocean layer. Four consecutive deployments were carried out between November 2010 and June 2011 near the Cape Verde Ocean Observatory (CVOO) in the eastern tropical North Atlantic. The profiling float performed upcasts every 31 h while measuring pCO2, O2, salinity, temperature, and hydrostatic pressure in the upper 200 m of the water column. To maintain accuracy, regular pCO2 sensor zeroings at depth and at the surface, as well as optode measurements in air, were performed for each profile. Through the application of data processing procedures (e.g., time-lag correction), the accuracy of float-borne pCO2 measurements was greatly improved (10-15 µatm for the water column and 5 µatm for surface measurements). O2 measurements yielded an accuracy of 2 µmol/kg. First results of this pilot study show the possibility of using profiling floats as a platform for detailed and unattended observations of marine carbon and oxygen cycle dynamics.
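
The time-lag correction mentioned above typically compensates for a slow sensor response; a common generic form adds τ·dX/dt back to the measured signal, assuming a first-order response. The sketch below implements that generic correction; the response time and the sample series are assumptions, not the values used in this study.

```python
# Generic first-order time-lag correction for a slow sensor:
# x_true(t) ≈ x_meas(t) + tau * dx_meas/dt.
# The response time tau and the sample series are illustrative assumptions.
import numpy as np

def lag_correct(signal: np.ndarray, time_s: np.ndarray, tau_s: float) -> np.ndarray:
    """Return the lag-corrected signal for a first-order sensor response."""
    dxdt = np.gradient(signal, time_s)  # finite-difference time derivative
    return signal + tau_s * dxdt

time_s = np.arange(0.0, 600.0, 10.0)                      # 10 s sampling over 10 minutes
measured = 380.0 + 20.0 * (1.0 - np.exp(-time_s / 60.0))  # sensor slowly approaching 400 µatm
corrected = lag_correct(measured, time_s, tau_s=60.0)
print(corrected[-1])  # close to the true 400 µatm end value
```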