998 results for information explosion
Abstract:
It can be said that technological evolution (the development of new measurement instruments such as software, satellites and computers, as well as the falling cost of storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is to use information system tools to support the documentation, retrieval and analysis of data. In the scientific domain, these tools are developed to store different metadata standards (data about data). In the development of such tools, the adoption of standards such as the Unified Modeling Language (UML) stands out, whose diagrams support the modeling of different aspects of the software. The objective of this study is to present an information system tool that supports the documentation of organizations' data through metadata, and to highlight the software modeling process using UML. The Digital Geospatial Metadata Standard, widely used for cataloging data by scientific organizations worldwide, is covered, along with UML's dynamic and static diagrams, such as use case, sequence and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of these tools.
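To make the modeling discussion concrete, below is a minimal sketch of the kind of class structure such a metadata catalog tool's class diagram might describe, assuming a simplified record with a few fields loosely inspired by geospatial metadata standards; the names (MetadataRecord, Catalog) are hypothetical and are not the tool described in the abstract.

    # Hypothetical sketch: a metadata record and catalog, as a class diagram
    # for a documentation tool might describe them. Field names are
    # illustrative only, loosely inspired by geospatial metadata standards.
    from dataclasses import dataclass, field

    @dataclass
    class MetadataRecord:
        title: str
        abstract: str
        keywords: list = field(default_factory=list)
        west: float = 0.0   # bounding box, decimal degrees
        east: float = 0.0
        south: float = 0.0
        north: float = 0.0

    class Catalog:
        def __init__(self):
            self._records = []

        def add(self, record: MetadataRecord) -> None:
            self._records.append(record)

        def search(self, term: str) -> list:
            # Naive keyword/title search over the catalog.
            term = term.lower()
            return [r for r in self._records
                    if term in r.title.lower() or term in r.keywords]

    catalog = Catalog()
    catalog.add(MetadataRecord("Rainfall 2020", "Daily rainfall grids",
                               keywords=["rainfall", "climate"]))
    print(catalog.search("rainfall"))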
Abstract:
Introduction ; Instrument experiments ; Noise field ; Sound design ; Information explosion - introduction to a cross-disciplinary project ; On the collage style of Information explosion ; The neglect of electroacoustic works and the image of the composer.
Abstract:
Marketing of information services is now an important goal of librarians all over the world to attract more users to the library, thereby promoting user satisfaction. Marketing is considered an integral part of libraries and information centres mainly due to the developments in information technology, the information explosion, and declining library budgets. Kotler defines marketing as the "analysis, planning, implementation and control of carefully formulated programs designed to bring about voluntary exchanges of values with target markets for the purpose of achieving organizational objectives". Organizations such as museums, universities, libraries, and charities need to market their causes and their products to gain political and social support as well as economic support (Kotler, 1995). In the marketing world people are now migrating from the traditional Four P model to the SAVE model (Ettenson, 2013). According to the SAVE model, marketing in an organisation must give priority to 'Solutions' instead of the features or functions of the 'Product'. Similarly, it is to focus on 'Access' instead of 'Place', and on 'Value' instead of 'Price', so that the benefits are stressed rather than production costs. Instead of 'Promotion', marketing has to concentrate on 'Educating' the customers, providing information about their specific requirements, instead of relying on advertising, public relations, direct selling, etc. From a library point of view, to ensure maximum utilization of library services there is an increasing need for definite marketing plans that exploit technological developments so as to entice the users. By anticipating their changing needs and by communicating with them, it should be possible to devise strategies to present various library services and products in a perceptive style.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
During the last decades, we have witnessed what is called the "information explosion". With the advent of new technologies and new contexts, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Among the organizations affected, we highlight telecommunications operators, which gather, using network monitoring equipment, millions of network event records, the Call Detail Records (CDRs) and the Event Detail Records (EDRs), commonly known as xDRs. These records are stored and later processed to compute network performance and quality of service metrics. With the ever-increasing number of collected xDRs, the volume that must be stored has grown exponentially, making current solutions based on relational databases no longer suitable. To tackle this problem, the relational data store can be replaced by the Hadoop Distributed File System (HDFS). However, HDFS is simply a distributed file system and thus does not support any aspect of the relational paradigm. To overcome this difficulty, this paper presents a framework that enables systems currently inserting data into relational databases to keep doing so transparently when migrating to Hadoop. As a proof of concept, the developed platform was integrated with Altaia, a performance and QoS management system for telecommunications networks and services.
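A minimal sketch of the transparency idea follows, assuming a common record-store interface behind which the backend can be swapped; a local append-only file stands in for HDFS, and the names (SqlStore, FileStore, ingest) are hypothetical, not the paper's framework.

    # Hypothetical sketch of transparent migration: producers write through
    # one interface; the backend may be relational or file-based (the file
    # here is a local stand-in for HDFS). Class names are illustrative only.
    import sqlite3

    class SqlStore:
        def __init__(self, path=":memory:"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS xdr (caller TEXT, callee TEXT, secs INT)")

        def insert(self, caller, callee, secs):
            self.db.execute("INSERT INTO xdr VALUES (?, ?, ?)",
                            (caller, callee, secs))
            self.db.commit()

    class FileStore:
        """Append-only CSV file, a local stand-in for writing to HDFS."""
        def __init__(self, path="xdr.csv"):
            self.path = path

        def insert(self, caller, callee, secs):
            with open(self.path, "a") as f:
                f.write(f"{caller},{callee},{secs}\n")

    def ingest(store, records):
        # Producers of CDRs/EDRs only see insert(); swapping the backend
        # does not change this code path.
        for caller, callee, secs in records:
            store.insert(caller, callee, secs)

    ingest(SqlStore(), [("100", "200", 42)])
    ingest(FileStore(), [("100", "200", 42)])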
Abstract:
The new and multiple functions that the public library must now fulfil, as a result of an increasingly heterogeneous and ill-defined public, the growing variety of communication media and of methods for presenting and reproducing information, the information explosion, and the growing demand for cultural offerings, raise a series of dilemmas: are libraries equipped to fulfil all these functions? What are the priorities in the face of the evolution each of them undergoes? What is the reality in developing countries such as Argentina, with political instability, without economic development, and without awareness, at the level of the authorities, of the value and importance of these institutions? The nineteenth century, as a result of the spread of literacy and the democratization of education, witnessed the emergence of the public lending library which, through books, prepares all citizens to participate in the exercise of power. These libraries are influenced by the socio-cultural and political environment, and their evolution and development are tied to that context. This explains the profound changes that have been taking place in their functions and, consequently, in the services they must provide.
Abstract:
The volume of data in libraries has grown enormously in recent years, as has the complexity of its sources and information formats, making management and access difficult, especially as support for decision making. Since good library management involves the integration of strategic indicators, the implementation of a Data Warehouse (DW) that adequately manages this amount of information, as well as its complex mixture of data sources, becomes an interesting alternative to consider. The article describes the design and implementation of a decision support system (DSS) based on DW techniques for the library of the Universidad de Cuenca. For this, the study uses a holistic methodology proposed by Siguenza-Guzman et al. (2014) for the comprehensive evaluation of libraries. This methodology evaluates the collection and the services, incorporating important elements for library management, such as service performance, quality control, collection use, and interaction with the user. Based on this analysis, a DW architecture is proposed that integrates, processes, and stores the data. Finally, the stored data are analyzed and visualized through online analytical processing (OLAP) tools. Initial implementation tests confirm the feasibility and effectiveness of the proposed approach, successfully integrating multiple heterogeneous data sources and formats, enabling library directors to generate customized reports, and even allowing the daily transactional processes to mature.
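As a toy illustration of the DW idea, the sketch below builds a minimal star schema (one fact table for loans, dimensions for date and branch) and runs an OLAP-style roll-up; the schema and numbers are invented for illustration and are not the Cuenca implementation.

    # Hypothetical toy star schema for library loans, with an OLAP-style
    # roll-up query. Table and column names are illustrative only.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE dim_date  (date_id INTEGER PRIMARY KEY, year INT, month INT);
    CREATE TABLE dim_branch(branch_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_loan (date_id INT, branch_id INT, loans INT);
    """)
    db.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                   [(1, 2014, 1), (2, 2014, 2)])
    db.executemany("INSERT INTO dim_branch VALUES (?, ?)",
                   [(1, "Main"), (2, "Science")])
    db.executemany("INSERT INTO fact_loan VALUES (?, ?, ?)",
                   [(1, 1, 120), (1, 2, 80), (2, 1, 95)])

    # Roll-up: total loans per branch across all months.
    for name, total in db.execute("""
        SELECT b.name, SUM(f.loans)
        FROM fact_loan f JOIN dim_branch b USING (branch_id)
        GROUP BY b.name"""):
        print(name, total)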
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method yields the better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
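To make the notion of side information concrete, here is a toy sketch of the simplest 'guess'-class approach, assuming the decoder estimates the missing frame by averaging its two neighbouring key frames (motion-free interpolation); real codecs use motion-compensated interpolation, so this is only a minimal illustration.

    # Toy "guess"-class side information: estimate the Wyner-Ziv frame as
    # the average of its neighbouring key frames. Real systems use
    # motion-compensated interpolation; this is a minimal illustration.
    import numpy as np

    def side_information(key_prev: np.ndarray, key_next: np.ndarray) -> np.ndarray:
        """Motion-free interpolation of the missing frame."""
        return ((key_prev.astype(np.uint16) + key_next) // 2).astype(np.uint8)

    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # key frame t-1
    f2 = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # key frame t+1
    si = side_information(f0, f2)                       # estimate of frame t
    print(si)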
Abstract:
AEA Technology has provided an assessment of the probability of α-mode containment failure for the Sizewell B PWR. After a preliminary review of the methodologies available it was decided to use the probabilistic approach described in the paper, based on an extension of the methodology developed by Theofanous et al. (Nucl. Sci. Eng. 97 (1987) 259–325). The input to the assessment is 12 probability distributions; the bases for the quantification of these distributions are discussed. The α-mode assessment performed for the Sizewell B PWR has demonstrated the practicality of the event-tree method with input data represented by probability distributions. The assessment itself has drawn attention to a number of topics, which may be plant and sequence dependent, and has indicated the importance of melt relocation scenarios. The α-mode failure probability following an accident that leads to core melt relocation to the lower head for the Sizewell B PWR has been assessed as a few parts in 10 000, on the basis of current information. This assessment has been the first to consider elevated pressures (6 MPa and 15 MPa) besides atmospheric pressure, but the results suggest only a modest sensitivity to system pressure.
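The general technique, an event tree whose inputs are probability distributions propagated by Monte Carlo sampling, can be sketched minimally as below; the two-branch structure and the Beta distributions are invented for illustration and bear no relation to the Sizewell B assessment's 12 input distributions.

    # Minimal Monte Carlo propagation through a toy two-stage event tree.
    # Branch probabilities are drawn from distributions rather than fixed;
    # the structure and numbers are invented for illustration only.
    import random

    N = 100_000
    failures = 0
    random.seed(1)
    for _ in range(N):
        p_relocation = random.betavariate(2, 8)   # melt relocates to lower head
        p_alpha = random.betavariate(1, 200)      # alpha-mode failure given relocation
        if random.random() < p_relocation and random.random() < p_alpha:
            failures += 1
    print(f"estimated alpha-mode failure probability: {failures / N:.5f}")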
Abstract:
Clearly, the fossil record from the Cambrian period is an invaluable tool for deciphering animal evolution. Less clear, however, is how to integrate the paleontological information with molecular phylogeny and developmental biology data. Equally challenging is answering why the Cambrian period provided such a rich interval for the redeployment of genes that led to more complex body plans.
Abstract:
IMB (Irvine, Michigan, Brookhaven), a collaboration between the University of Michigan, the University of California at Irvine, and the U.S. Department of Energy, was an experiment designed to determine the ultimate stability of matter. The blue giant star Sanduleak in the Large Magellanic Cloud exploded 170,000 years ago, giving off a pulse of neutrinos that arrived at Earth on 23 February 1987. For a few weeks it was as bright as 100 million suns.
Abstract:
Report date: Oct. 13, 1977.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
One of the great challenges for the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure underlying DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials p(x) and code generator polynomials g(x) is quite important in characterizing subclasses of error-correcting G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in the analysis of mutations and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in reduced time and laboratory costs.
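The flavour of the approach can be illustrated with a toy example: label each nucleotide with a pair of bits and test membership in a simple binary cyclic code (the even-weight code, generator g(x) = x + 1). Both the labeling and the code choice are assumptions for illustration; the paper's G-linear constructions are far richer.

    # Toy illustration: a nucleotide labeling plus a membership test for a
    # simple binary cyclic code. Illustrative only, not the paper's codes.
    LABEL = {"A": "00", "C": "01", "G": "10", "T": "11"}  # hypothetical labeling

    def to_bits(seq: str) -> str:
        return "".join(LABEL[n] for n in seq)

    def in_even_weight_code(bits: str) -> bool:
        # A binary word is divisible by g(x) = x + 1 iff it has even weight.
        return bits.count("1") % 2 == 0

    for seq in ("ACGT", "ACGG"):
        bits = to_bits(seq)
        print(seq, bits, in_even_weight_code(bits))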
Abstract:
To assess the completeness and reliability of data from the Information System on Live Births (Sinasc), a cross-sectional analysis was performed using a sample of Live Birth Certificates (LBCs) from 2009 for births in Campinas, Southeast Brazil. For the analysis, hospitals were grouped according to category of service (Unified National Health System, private, or both), 600 LBCs were randomly selected, and the data were collected in LBC copies from mothers' and newborns' hospital records and by telephone interviews. The completeness of the LBCs was evaluated by calculating the percentage of blank fields, and agreement between the original LBCs and the copies was evaluated using Kappa and intraclass correlation coefficients. The completeness of the LBCs ranged from 99.8% to 100%. For most items the agreement was excellent. However, the agreement was acceptable for marital status, maternal education and newborns' race/color, low for prenatal visits and the presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the variables studied. Investment in professional training is suggested to improve the system's capacity to support the planning and implementation of health activities for the benefit of the maternal and child population.
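As a minimal sketch of how such agreement might be quantified, the snippet below computes Cohen's kappa from scratch for two categorical ratings per record (e.g., a field on the original LBC vs. its copy); the data and field are invented, and this is not the study's statistical pipeline.

    # Minimal Cohen's kappa for two categorical raters. Illustrative only.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        # Observed proportion of agreement.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement expected from each rater's marginal frequencies.
        ca, cb = Counter(rater_a), Counter(rater_b)
        expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
        return (observed - expected) / (1 - expected)

    orig = ["married", "single", "single", "married", "married"]
    copy = ["married", "single", "married", "married", "married"]
    print(f"kappa = {cohens_kappa(orig, copy):.2f}")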