921 results for Database management -- Design
Abstract:
Objective: To assess the value of cusum analysis in hospital bed management. Design: Comparative analysis of medical patient flows, bed occupancy, and emergency department admission rates and access block over 2 years. Setting: Internal Medicine Services and Emergency Department in a teaching hospital. Interventions: Improvements in bed use and changes in the level of available beds. Main outcome measures: Average length of stay; percentage occupancy of available beds; number of patients waiting more than 8 hours for admission (access block); number of medical patients occupying beds in non-medical wards; and number of elective surgical admissions. Results: Cusum analysis provided a simple means of revealing important trends in patient flows that were not obvious in conventional time-series data. This prompted improvements in bed use that resulted in a decrease of 9500 occupied bed-days over a year. Unfortunately and unexpectedly, after some initial improvement, the levels of access block, medical ward congestion and elective surgical admissions all then deteriorated significantly. This was probably caused by excessive bed closures in response to the initial improvement in bed use. Conclusion: Cusum analysis is a useful technique for the early detection of significant changes in patient flows and bed use, and in determining the appropriate number of beds required for a given rate of patient flow.
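The core of the technique is simple enough to sketch: a cusum is the running total of deviations of an observed series from a reference value, so a sustained shift in patient flow shows up as a persistent slope long before it is obvious in the raw time series. A minimal illustration in Python, with entirely hypothetical occupancy figures and target value (not the study's data):

```python
import numpy as np

# Hypothetical daily occupied-bed counts; the target is the expected
# (reference) occupancy level against which deviations accumulate.
occupied_beds = np.array([310, 305, 320, 330, 298, 315, 340, 345, 350, 360])
target = 320

# Classic cusum: running total of deviations from the target.
# A sustained upward (or downward) slope signals a shift in bed use.
cusum = np.cumsum(occupied_beds - target)

for day, value in enumerate(cusum, start=1):
    print(f"day {day:2d}: cusum = {int(value):+d}")
```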
Abstract:
With rapid advances in video processing technologies and ever-faster increases in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of user interest. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, and each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search for very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, so the time complexity of the video similarity measure can be greatly reduced. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA in such a way that the original inter-distance between two high-dimensional vectors is maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. Such a transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on real, large video datasets confirm the effectiveness of our proposals, which significantly outperform existing methods.
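To illustrate the filtering idea behind a one-dimensional transformation (this is only a sketch of the general principle, not the paper's exact rotate-and-shift mapping; all data and names are hypothetical), the centres of ViTri hyperspheres can be projected onto their first principal axis. Because the gap between two 1-D projections never exceeds the true high-dimensional distance, clusters whose 1-D keys are far apart can be discarded safely before any exact distance computation:

```python
import numpy as np

def pca_first_axis(points):
    """Return the first principal axis (unit vector) of a set of points."""
    centred = points - points.mean(axis=0)
    # SVD of the centred data gives the principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0]

def filter_candidates(query_centre, centres, axis, threshold):
    """Keep only centres whose 1-D projection is close to the query's."""
    keys = centres @ axis            # one scalar key per ViTri centre
    q_key = query_centre @ axis
    return np.where(np.abs(keys - q_key) <= threshold)[0]

# Hypothetical data: 10,000 ViTri centres in a 64-dimensional feature space.
rng = np.random.default_rng(0)
centres = rng.normal(size=(10_000, 64))
axis = pca_first_axis(centres)
candidates = filter_candidates(centres[0], centres, axis, threshold=0.5)
print(len(candidates), "candidates survive the 1-D filter")
```

Since the projection is onto a unit vector, the 1-D distance is a lower bound on the true distance, which is what makes this kind of pre-filtering safe; a B+-tree over the 1-D keys then turns the filter into a range scan.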
Abstract:
Intangible assets have received special attention from scholars in recent years in the organizational context of people management, as theoretical proposals have been developed to understand and measure them. Within this line of investigation lie human capital and psychological capital. While human capital represents what workers know how to do, psychological capital comprises a positive mental state composed of self-efficacy, hope, optimism, and resilience. The general objective of this study was to analyze the relationships between human capital and psychological capital. It was a quantitative study with 60 workers, students in the final term of a Business Administration program, with a mean age of 23.85 years, most of them female, single, and employed. Data were collected in classrooms of a private university located in the Greater ABC region, using a self-administered instrument containing a 12-item interval measure of psychological capital validated for Brazil and a six-question measure of human capital, two questions measuring the experience dimension and four measuring education. An electronic database was created and submitted to descriptive and correlation analyses (Pearson's r) using SPSS, version 19.0. The results revealed that participants held human capital represented by an average of 4.38 years of work experience, and that the majority (75%) reported at most five years in this respect. Regarding the education dimension of human capital, the majority (96.70%) had not completed any undergraduate degree, had dedicated between 11 and 20 years to their studies (81.60%), and had not taken part in academic exchange programs (93%), while 86.70% had already included in their academic record the complementary activities provided for in the Pedagogical Plan of the Business Administration program they were attending, and 73.30% had completed between one and three curricular internships. The analyses indicated a moderate psychological capital score, marked by the participants' difficulty in recognizing that they were in a successful phase at work and in seeing the bright side of work-related matters. When the relationships between human capital and psychological capital were investigated, no significant correlations were found. Given these results, it seemed likely that the participants, being predominantly young workers who had not yet completed an undergraduate degree, although they had devoted more than 10 years of their lives to study and had worked for about five years on average, did not yet recognize in themselves the presence of consistent human or psychological capital. The absence of a relationship between these two intangible assets, regarded by theorists as important for workers to help their companies reach their goals, seemed to show that further studies and theoretical development are still needed, not only to support hypotheses about intangible assets but also to identify any relationship of dependence that might exist between the categories of human and psychological capital.
Abstract:
The purpose of this work is the development of a database for the distributed information-measurement and control system that implements optical spectroscopy methods for plasma physics and atomic collision research and provides remote access to information and hardware resources within Intranet/Internet networks. The database is built on the Oracle9i database management system. The client software was implemented in Java using the Model-View-Controller architecture, which separates application data from the graphical presentation components and the input-processing logic. The following graphical presentations were implemented: measurement of radiation spectra of beam and plasma objects, excitation functions for inelastic collisions of heavy particles, and analysis of data acquired in preceding experiments. The graphical clients interact with the database in the following ways: browsing information on experiments of a given type, searching for data by various criteria, and inserting information about preceding experiments.
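The Model-View-Controller split described above can be sketched in a few lines. The real client is written in Java against Oracle9i; the Python below is only a language-agnostic illustration with hypothetical class and field names:

```python
# Model: owns data access and knows nothing about presentation.
class SpectraModel:
    def __init__(self, records):
        self._records = records          # stand-in for database rows

    def search(self, experiment_type):
        return [r for r in self._records if r["type"] == experiment_type]

# View: renders whatever the controller hands it.
class SpectraView:
    def render(self, rows):
        for row in rows:
            print(f'{row["id"]}: {row["type"]} ({row["date"]})')

# Controller: turns user input into model queries and view updates.
class SpectraController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_search(self, experiment_type):
        self.view.render(self.model.search(experiment_type))

records = [{"id": 1, "type": "plasma", "date": "2004-05-01"},
           {"id": 2, "type": "beam", "date": "2004-06-12"}]
SpectraController(SpectraModel(records), SpectraView()).on_search("plasma")
```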
Abstract:
Purpose – The objective of this paper is to address whether and how firms can follow a standard management process to cope with emerging corporate social responsibility (CSR) challenges. Both researchers and practitioners have paid increasing attention to this question because of the rapidly evolving CSR expectations of stakeholders and the limited diffusion of CSR standardization. The question was addressed by developing a theoretical framework to explain how dynamic capabilities can contribute to effective CSR management. Design/methodology/approach – Based on the contemporary CSR reports of 64 world-leading companies, we carried out a large-scale content analysis to identify and examine the common organizational processes involved in CSR management and the dynamic capabilities underpinning those processes. Findings – Drawing on the dynamic capabilities perspective, we demonstrate how deploying three dynamic capabilities for CSR management, namely scanning, sensing, and reconfiguration capabilities, can help firms meet emerging CSR requirements by following a set of common management processes. The findings demonstrate that what matters most in CSR standardization is the identification and development of the underlying dynamic capabilities and the related organizational processes and routines, rather than the detailed operational activities. Originality/value – Our study is an early attempt to examine the fundamental organizational capabilities and processes involved in CSR management from the dynamic capabilities perspective. Our findings contribute to the CSR standardization literature by providing a new theoretical perspective from which to better understand the capabilities enabling common CSR management processes.
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances, and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed.

A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and it has several advantages over existing interfaces: it is optimizable and parallelizable, supports the definition of semantic user views, and supports the interoperability of semantic databases with other data sources such as the World Wide Web and relational and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases.

The analysis and high-level design of a system is presented that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use to allow uniform access to heterogeneous data sources, such as semantic databases, relational databases, web sites, ASCII files, and others, via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
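As a purely illustrative sketch of what a query "structured as a schema graph with interlinking conditionals" returning a mini-database might look like (hypothetical categories, relations, and helper names; not Sem-ODB's actual interface):

```python
# Tiny in-memory "semantic" fact base: one category and one relation.
facts = {
    "Student":    [{"id": 1, "name": "Ana", "gpa": 3.9},
                   {"id": 2, "name": "Bo",  "gpa": 2.4}],
    "enrolledIn": [(1, "Databases"), (2, "Networks")],
}

# The query names a category, attaches a condition to it, and follows a
# relation; evaluation returns a mini-database rather than flat rows.
query = {
    "category": "Student",
    "condition": lambda s: s["gpa"] > 3.0,   # interlinking conditional
    "follow": "enrolledIn",                  # relation to carry along
}

def evaluate(query, facts):
    students = [s for s in facts[query["category"]] if query["condition"](s)]
    ids = {s["id"] for s in students}
    links = [(sid, c) for sid, c in facts[query["follow"]] if sid in ids]
    # The result is itself a small database, browsable like the original.
    return {"Student": students, "enrolledIn": links}

print(evaluate(query, facts))
```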
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task; rather, people constantly collaborate with one another while searching for and retrieving information. But the very cause of the popularity of multimedia data, the huge amount and variety of information a single data object can carry, makes its management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alpha-numeric data, so trying to manage them with frameworks and rationales designed for primitive alpha-numeric data is inefficient. An index structure is the backbone of any database management system, and the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which accommodates the atypical multidimensional representation and the semantic information carried by different multimedia data seamlessly within one single framework. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and better performance of the proposed framework over current state-of-the-art approaches.
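The general idea of indexing multidimensional feature vectors for similarity queries can be sketched with an off-the-shelf structure (a k-d tree here, purely for illustration; it is not the generalized index proposed in the dissertation, and the data are synthetic):

```python
import numpy as np
from scipy.spatial import cKDTree

# Each multimedia object is represented by a feature vector; the index
# answers nearest-neighbour (most-similar) queries without scanning
# every row, which is what a plain relational table would have to do.
rng = np.random.default_rng(42)
features = rng.random((5_000, 32))     # 5,000 objects, 32-D feature vectors

index = cKDTree(features)              # build the multidimensional index
query_vector = rng.random(32)

distances, ids = index.query(query_vector, k=5)   # 5 most similar objects
print(ids, distances)
```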
Abstract:
Component-based Software Engineering (CBSE) and Service-Oriented Architecture (SOA) have become popular ways to develop software in recent years. During the life-cycle of a software system, several components and services can be developed, evolved, and replaced. In production environments, the replacement of core components, such as databases, is often a risky and delicate operation in which several factors and stakeholders must be considered. A Service Level Agreement (SLA), according to the official ITILv3 glossary, is "an agreement between an IT service provider and a customer"; the agreement consists of a set of measurable constraints that the service provider must guarantee to its customers. In practical terms, an SLA is a document that a service provider delivers to its consumers with minimum quality of service (QoS) metrics. This work assesses and improves the use of SLAs to guide the transitioning process of databases in production environments. In particular, we propose SLA-based guidelines and a process to support migrations from a relational database management system (RDBMS) to a NoSQL one. Our study is validated by case studies.
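The notion of an SLA as a set of measurable QoS constraints that can be checked against the candidate system during a migration can be sketched as follows (metric names, thresholds, and observed values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SlaConstraint:
    """One measurable constraint from the SLA."""
    metric: str
    threshold: float
    higher_is_better: bool

    def satisfied_by(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

# Hypothetical SLA: two QoS constraints the provider must guarantee.
sla = [
    SlaConstraint("availability_pct", 99.9, higher_is_better=True),
    SlaConstraint("p95_read_latency_ms", 50.0, higher_is_better=False),
]

# Metrics observed on the candidate NoSQL deployment during the transition.
observed = {"availability_pct": 99.95, "p95_read_latency_ms": 62.0}

violations = [c.metric for c in sla if not c.satisfied_by(observed[c.metric])]
print("SLA violations:", violations or "none")
```

A guideline-driven process would repeat such checks at each migration step and only proceed when no constraint is violated.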