938 results for Business intelligence, data warehouse, sql server


Relevance:

40.00%

Publisher:

Abstract:

Instrumentation and automation play a vital role in managing the water industry. These systems generate vast amounts of data that must be managed effectively in order to enable intelligent decision making. Time series data management software, commonly known as a data historian, is used for collecting and managing real-time (time series) information. More advanced software solutions provide a data infrastructure, or utility-wide Operations Data Management System (ODMS), that stores, manages, calculates, displays, shares, and integrates data from the multiple disparate automation and business systems used daily in water utilities. These ODMS solutions are proven and can manage data ranging from smart water meters to data shared across third-party corporations. This paper focuses on practical utility successes in the water industry where utility managers are leveraging instantaneous access to data from proven, commercial off-the-shelf ODMS solutions to enable better real-time decision making. Successes include saving $650,000 per year in water loss control, safeguarding water quality, and saving millions of dollars in energy management and asset management. Immediate opportunities exist to integrate the research being done in academia with these ODMS solutions in the field and to extend these successes to utilities around the world.
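As a minimal sketch of the kind of time-series calculation a data historian or ODMS enables, the snippet below estimates a daily water-loss balance from two metered flows using pandas; the file name and column names are hypothetical stand-ins, not taken from the paper.

```python
import pandas as pd

# Hypothetical historian export: timestamped flow readings in m3/h.
readings = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])
readings = readings.set_index("timestamp")

# Aggregate high-frequency samples to daily volumes for two tag groups.
supplied = readings["plant_outflow_m3h"].resample("D").mean() * 24
billed = readings["district_meter_m3h"].resample("D").mean() * 24

# Non-revenue water: the daily gap between what was pumped and what was metered.
nrw = (supplied - billed).rename("non_revenue_water_m3")
print(nrw.tail())
```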

Relevance:

40.00%

Publisher:

Abstract:

The strategic management of information plays a fundamental role in the organizational management process, since the decision-making process depends on it for survival in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which, in turn, direct the relations between the controlling power of the company and investors. In this context, this article presents the relationship between the disclosure of information by joint-stock companies through XBRL and the open data model adopted by the Brazilian government, a model that helped drive the publication of the Information Access Law (Lei de Acesso à Informação), No. 12,527 of 18 November 2011. Information access should be permeated by a mediation policy in order to support investors' knowledge construction and decision-making. XBRL is the main model for the publication of financial information. Using XBRL through the new semantic standards created for Linked Data strengthens information dissemination, enables analysis and cross-referencing of data with different open databases available on the Internet, and adds value to the data and information accessed by civil society.
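A minimal sketch of the Linked Data idea described above: one XBRL-style financial fact published as RDF triples with rdflib, so it can be cross-referenced with other open datasets. The namespaces, company identifier, and figures are invented for illustration and do not come from the Brazilian taxonomy.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespaces standing in for an XBRL taxonomy and a company registry.
FIN = Namespace("http://example.org/xbrl-gaap/")
CO = Namespace("http://example.org/companies/")

g = Graph()
fact = URIRef("http://example.org/facts/ACME-2011-net-income")

# One reported fact: net income for fiscal year 2011, in BRL.
g.add((fact, RDF.type, FIN.NetIncome))
g.add((fact, FIN.reportedBy, CO.ACME))
g.add((fact, FIN.fiscalYear, Literal(2011, datatype=XSD.gYear)))
g.add((fact, FIN.amountBRL, Literal("1250000.00", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```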

Relevance:

40.00%

Publisher:

Abstract:

In supply chain management there are several risk factors that must be mitigated to increase the flow of production; as a possible solution, the literature cites the implementation of a warehouse management system (WMS), but this subject is little explored. The main objective of this thesis is to study the implementation of a warehouse management system in an automotive-sector company that produces clutches. The results include data characterizing the items, as well as comparisons of production disruptions caused by lack of material before and after the implementation of the WMS, and the outcome of a questionnaire applied to those involved in the implementation of the system. These results were associated with the implementation risk factors identified in the literature review, and the results not associated with any previously studied factor were also enumerated. Finally, the study is concluded and future studies related to the theme are recommended.

Relevance:

40.00%

Publisher:

Abstract:

Presentations sponsored by the Patent and Trademark Depository Library Association (PTDLA) at the American Library Association Annual Conference, New Orleans, June 25, 2006.

Speaker #1: Nan Myers, Associate Professor; Government Documents, Patents and Trademarks Librarian, Wichita State University, Wichita, KS. Title: Intellectual Property Roundup: Copyright, Trademarks, Trade Secrets, and Patents. Abstract: This presentation provides a capsule overview of the distinctive coverage of the four types of intellectual property: what they are, why they are important, how to get them, what they cost, and how long they last. Emphasis will be on the questions patrons ask most, along with the answers. It includes coverage of the mission of Patent & Trademark Depository Libraries (PTDLs) and of other sources of business information outside of libraries, such as Small Business Development Centers.

Speaker #2: Jan Comfort, Government Information Reference Librarian, Clemson University, Clemson, SC. Title: Patents as a Source of Competitive Intelligence Information. Abstract: Large corporations often have R&D departments, or large numbers of staff whose jobs are to monitor the activities of their competitors. This presentation will review strategies that small business owners can employ to do their own competitive intelligence analysis. The focus will be on features of the patent database that is available free of charge on the USPTO website, as well as commercial databases available at many public and academic libraries across the country.

Speaker #3: Virginia Baldwin, Professor; Engineering Librarian, University of Nebraska-Lincoln, Lincoln, NE. Title: Mining Online Patent Data for Business Information. Abstract: The United States Patent and Trademark Office (USPTO) website and the websites of international databases contain information about granted patents, patent applications, and the technologies they represent. Statistical information about patents, their technologies, geographical information, and patenting entities is compiled and available as reports on the USPTO website. Other valuable information from these websites can be obtained using data mining techniques. This presentation will provide the keys to opening these resources and obtaining valuable data.

Speaker #4: Donna Hopkins, Engineering Librarian, Rensselaer Polytechnic Institute, Troy, NY. Title: Searching the USPTO Trademark Database for Wordmarks and Logos. Abstract: This presentation provides an overview of wordmark searching in www.uspto.gov, followed by a review of the techniques for searching non-word US trademarks using codes from the Design Search Code Manual. These codes are used in an electronic search, either on the USPTO website or on CASSIS DVDs. The search is sometimes supplemented by consulting the Official Gazette. A specific example of using a section of the codes for searching is included. Similar searches on the Madrid Express database of WIPO, using the Vienna Classification, will also be briefly described.

Relevance:

40.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are complex software systems written using multiple technologies. Moreover, they are usually distributed systems and use a database to deal with persistence. A particular problem that appears in the design of these systems is the lack of a rich business model. In this paper we propose a technique to support the recovery of such rich business objects starting from anemic Data Transfer Objects (DTOs). By exposing the code duplication in the application elements that use the DTOs, we suggest which business logic can be moved from the other classes into the DTOs.
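The paper targets Java Enterprise Applications; as a language-agnostic illustration of the kind of refactoring it suggests (sketched here in Python with invented class names, not the authors' tooling), duplicated logic around an anemic DTO is moved into the DTO itself, yielding a richer business object.

```python
from dataclasses import dataclass

# Anemic DTO: pure data carrier, no behavior.
@dataclass
class OrderDTO:
    unit_price: float
    quantity: int
    discount_rate: float

# Before: several classes re-implement the same calculation around the DTO.
class InvoiceService:
    def total(self, order: OrderDTO) -> float:
        return order.unit_price * order.quantity * (1 - order.discount_rate)

class ReportingService:
    def total(self, order: OrderDTO) -> float:  # duplicated business logic
        return order.unit_price * order.quantity * (1 - order.discount_rate)

# After: the duplication exposed above suggests moving the logic into the DTO.
@dataclass
class Order:
    unit_price: float
    quantity: int
    discount_rate: float

    def total(self) -> float:
        return self.unit_price * self.quantity * (1 - self.discount_rate)
```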

Relevance:

40.00%

Publisher:

Abstract:

This paper provides insight into the development of a process model for the essential expansion of an automatic miniload warehouse. The model is based on literature research and covers four phases of a warehouse expansion: the preparatory phase, the current-state analysis, the design phase, and the decision-making phase. In addition to the literature research, the presented model is based on a reliable data set and can be applied with reasonable effort to ensure an informed decision on the warehouse layout. The model is addressed to users who are typically employees of the logistics department, and it is oriented toward improving daily business organization in combination with warehouse expansion planning.

Relevance:

40.00%

Publisher:

Abstract:

Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between the fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method should be chosen for any particular data set: distance-based, ML, or maybe maximum parsimony (MP).

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expenses and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
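A minimal sketch of the selection idea in Chapter III, using a decision-tree learner to map data-set characteristics to a reconstruction method; the feature set and toy training data are invented for illustration and are not the dissertation's actual classifier or features.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features describing an alignment:
# number of taxa, sequence length, mean pairwise divergence.
X = [
    [10, 1200, 0.05],
    [60, 800, 0.30],
    [25, 5000, 0.12],
    [80, 600, 0.45],
]
# Method that performed best on each training data set.
y = ["distance", "ML", "MP", "ML"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Recommend a method for a new, unseen data set.
print(clf.predict([[40, 1500, 0.28]]))
```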

Relevance:

40.00%

Publisher:

Abstract:

Aim: To determine the relationship between nurse leader emotional intelligence and registered nurse job satisfaction.

Background: Nurse leaders influence the work environments of nurses working at the bedside. Nursing leadership plays an important role in fostering work environments that attract and retain nurses.

Methods: A non-experimental, predictive design study conducted in 5 hospitals evaluated relationships between 31 nurse leaders and 799 registered nurses. The nurse leaders were administered the MSCEIT and MBTI. The registered nurses participated in the 2010 NDNQI RN Job Satisfaction Survey.

Measurements and Results: The nurse leaders completed two online instruments, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) and the Myers-Briggs Type Indicator (MBTI). Nurse leader demographic data were collected, consisting of age, sex, race, educational level, certification status, and years in the nursing profession. The relationships among characteristics of the nurse leaders and staff nurses were examined using regression analysis and stepwise deletion. The results of the MBTI were obtained electronically from CPP, Inc. and the results of the MSCEIT were obtained electronically from MHS, Inc. The nurse leader response rate was 46% and the NDNQI RN Job Satisfaction response rate was 62%. The sample of 31 nurse leaders was 65% female; 67.7% were White, 12.9% Black, and 19.4% Hispanic. The most prevalent MBTI type was ESTJ (19.35%), followed by ENFJ and ISFJ (9.68% each). The nurse leader sample was primarily extroverts (n=20), sensing (n=18), thinking (n=16), and judging (n=19). The nurse leaders' overall MSCEIT scores ranged from 69 to 111 (implying a range from those who should consider development to competent), with a mean score of 89.84 (consider improvement). The nurse leaders scored highest on the MSCEIT Facilitating subscale, with scores ranging from 69 to 121 (consider development to strength) and a mean score of 95.19 (low average score). The mean MSCEIT scores for the entire sample ranged from 89.90 to 95.19 (consider emotional intelligence improvement to low average score). Overall, staff nurse participants in the NDNQI RN Job Satisfaction Survey were moderately satisfied with the nurse leaders, as noted by a mean t score of 55.03 out of 60; this score was consistent with the comparison hospitals that participated in the 2010 NDNQI RN Job Satisfaction Survey (American Nurses Association, 2010). Staff nurses gave nurse leaders a mean score of 4.50 for appropriate patient assignments and a mean score of 4.35 (moderately agree) for recommending the hospital to a friend.

Conclusions: Future research is needed to determine whether there is a relationship between nurse leader emotional intelligence ability and registered nurse job satisfaction. Additional research is also needed to determine what to measure with regard to nurse leader emotional intelligence: ability or behavior. Another issue that emerged in the examination of EI is the moderating relationship between the nurse leader's span of control and staff nurse satisfaction on the NDNQI.

Relevance:

40.00%

Publisher:

Abstract:

Reducing the energy consumption for computation and cooling in servers is a major challenge considering the data center energy costs today. To ensure energy-efficient operation of servers in data centers, the relationship among computational power, temperature, leakage, and cooling power needs to be analyzed. By means of an innovative setup that enables monitoring and controlling the computing and cooling power consumption separately on a commercial enterprise server, this paper studies temperature-leakage-energy tradeoffs, obtaining an empirical model for the leakage component. Using this model, we design a controller that continuously seeks and settles at the optimal fan speed to minimize the energy consumption for a given workload. We run a customized dynamic load-synthesis tool to stress the system. Our proposed cooling controller achieves up to 9% energy savings and a 30 W reduction in peak power in comparison to the default cooling control scheme.
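A minimal sketch of the optimization loop described above, written as a simple sweep that settles at the fan speed minimizing total measured power. The sensor and actuator functions are hypothetical placeholders, and the paper's actual controller is continuous and built on its empirical leakage model rather than a one-shot sweep.

```python
import time

def set_fan_rpm(rpm: int) -> None:
    """Hypothetical actuator: write the target speed to the server's fan controller."""
    pass

def read_total_power_w() -> float:
    """Hypothetical sensor: total power (compute + leakage + fan), in watts."""
    return 0.0

def seek_optimal_fan(candidates=range(2000, 8001, 500), settle_s=30.0):
    """Try each fan speed, wait for thermal steady state, keep the cheapest setting."""
    best_rpm, best_power = None, float("inf")
    for rpm in candidates:
        set_fan_rpm(rpm)
        time.sleep(settle_s)          # let temperature and leakage settle
        power = read_total_power_w()  # higher fan speed cuts leakage but adds fan power
        if power < best_power:
            best_rpm, best_power = rpm, power
    set_fan_rpm(best_rpm)
    return best_rpm, best_power
```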

Relevance:

40.00%

Publisher:

Abstract:

The Internet is evolving toward what is known as the Live Web. In this new stage of the Internet's evolution, a multitude of social data streams is placed at the service of users. Thanks to these data sources, users have gone from browsing static web pages to interacting with applications that offer personalized content based on their preferences. Every user interacts daily with multiple applications that issue notifications and alerts; in this sense each user is a source of events, and users often feel overwhelmed and unable to process all that information on demand. To cope with this overload, multiple tools have appeared that automate the most common tasks, ranging from inbox managers and social network alert managers to complex CRMs or smart-home hubs. The downside is that, although they offer a solution to common problems, they cannot adapt to the needs of each user by offering a personalized solution.

Task Automation Services (TAS) entered the scene from 2012 onward to address this limitation. Given their similarity, these services can also be regarded as a new, user-centered approach to mash-up technology. Users of these platforms can interconnect services, sensors, and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, which has led a multitude of platforms offering TAS to appear. As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification. This work coins the term Task Automation Service (TAS), giving a formal description of these services and their components (called channels), and provides a reference architecture. Likewise, there is a lack of tools for describing automation services and automation rules. In this respect, this thesis proposes a common model, realized as the EWE (Evented WEb) ontology. This model makes it possible to compare and align channels and automations from different TASs, a considerable contribution to the portability of user automations across platforms. In addition, given the semantic nature of the model, automations can include elements from external sources to reason over, as is the case of Linked Open Data. Using this model, a dataset of channels and automations was generated from data harvested from some of the TASs on the market. As a final step toward a common model for describing TAS, an algorithm was developed to learn ontologies automatically from the dataset. This favors the discovery of new channels and reduces the cost of maintaining the model, which is updated semi-automatically.
In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modeling TAS components and automations; iii) populating a dataset of channel and automation data, used to develop an algorithm for automatic ontology learning; and iv) designing an agent architecture for assisting users in creating automations. ABSTRACT The new stage in the evolution of the Web (the Live Web or Evented Web) puts lots of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them contextual and relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs/apps. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this downside, Task Automation Services appeared on the Internet. They may be seen as a new model of mash-up technology for combining social streams, services, and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. The number of such platforms shot up early on, and as a consequence the number of platforms following this approach is growing fast. As this is a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of them and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model enables comparing channels and automations from different TASs, which has a high impact on interoperability, and enhances automations by providing a mechanism to reason over external sources such as Linked Open Data. Based on this model, a dataset of TAS components was built, harvesting data from the web sites of actual TASs. Going a step further towards this common model, an algorithm for categorizing them was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture, aware of the users' context and acting accordingly, for assisting users in setting up automations.
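A minimal sketch of describing one user automation as semantic data with rdflib, in the spirit of the channel/event/action/rule vocabulary described above; the namespace URI and the class and property names are illustrative stand-ins rather than the exact EWE terms.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Illustrative namespaces standing in for the EWE vocabulary and a user's rules.
EWE = Namespace("http://example.org/ewe#")
EX = Namespace("http://example.org/rules/")

g = Graph()
rule = EX["rain-forecast-to-notification"]

# "If the weather channel signals rain tomorrow, send me a notification."
g.add((rule, RDF.type, EWE.Rule))
g.add((rule, RDFS.label, Literal("Notify me when rain is forecast")))
g.add((rule, EWE.triggeredBy, EWE.RainForecastEvent))
g.add((rule, EWE.executes, EWE.SendNotificationAction))
g.add((EWE.RainForecastEvent, EWE.providedBy, EX.WeatherChannel))
g.add((EWE.SendNotificationAction, EWE.providedBy, EX.PhoneChannel))

print(g.serialize(format="turtle"))
```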

Relevance:

40.00%

Publisher:

Abstract:

Manual data entry is not only costly in terms of time and money; it is even more problematic because it is a source of error. For these reasons, the automated acquisition of information along the production chain is a goal strongly desired by the Group to improve its business. The technologies analyzed, by now widespread and standardized on a large scale, such as barcodes, logistics labels, and radio-frequency terminals, can bring great benefits to business processes, even more so when integrated and tailored to the company ERP systems, allowing rapid and correct recording of information and its immediate dissemination to the whole organization. The analysis of processes and flows highlighted the critical points and made it possible to understand where and when to intervene with a design that would be as close as possible to the best fit. The release of requirements; goods receipt, mapping, and handling in the Warehouse; production status; component unloading and production loading in Packaging and Semi-finished Production; the establishment of a Customs interchange warehouse; and a precise, fast traceability flow are all changes that will modify the company processes, streamlining them and freeing up resources that can be reinvested in higher value-added operations. The potentially attainable results, also confirmed by the external experience of suppliers and consultants, created the conditions for a rapid study and start of the work: the Group is enthusiastic and eager to complete the project as soon as possible and to go live with the new, streamlined and optimized way of operating.
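A minimal sketch of the kind of automated capture a scanned logistics label enables before the data is posted to the ERP: parsing a handful of GS1-128 application identifiers from an element string. Only a few identifiers are covered, and the example scan is invented; it is not the thesis's actual integration.

```python
GS = "\x1d"  # FNC1 group separator used in GS1-128 data

# A few common application identifiers: AI -> (field name, fixed length or None).
AIS = {
    "00": ("sscc", 18),          # logistic unit (pallet) id
    "01": ("gtin", 14),          # traded item
    "17": ("expiry_yymmdd", 6),  # expiration date
    "10": ("batch", None),       # variable length, FNC1-terminated
}

def parse_gs1(data: str) -> dict:
    """Split a scanned GS1-128 element string into named fields."""
    fields, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        name, length = AIS[ai]
        i += 2
        if length is None:  # variable-length field ends at FNC1 or end of data
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            fields[name], i = data[i:end], end + 1
        else:
            fields[name], i = data[i:i + length], i + length
    return fields

# Invented scan: SSCC + GTIN + expiry + batch.
print(parse_gs1("00123456789012345675" "0109501101530003" "17261231" "10LOT42"))
```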

Relevance:

40.00%

Publisher:

Abstract:

Bibliography: p. 325-327.