946 results for information management
Abstract:
Nowadays, competitiveness introduces new behaviors and often leaves companies in an uncomfortable position, failing to adapt to environmental requirements. A growing number of challenges associated with the control of information can be seen in organizations with engineering activities, particularly the growing amount of information subject to continuous change. The innovative performance of an organization is directly proportional to its ability to manage information. Thus, the importance of information management is recognized in the search for more competent ways to face current demands. The purpose of this article was to analyze information-dependent processes in technology-based companies through the four major stages of information management. A comparative case method and qualitative research were used. The research was conducted in nine technology-based companies that were incubated, or had recently graduated from incubation, at the Technological Park of Sao Carlos, in the state of Sao Paulo. Among the main results, it was found that information management and its procedures were more conscious and structured in the graduated companies than in the incubated ones.
Abstract:
The objective of this research is to investigate the consequences of sharing or using information generated in one phase of a project in subsequent life cycle phases. Sometimes the assumptions supporting the information change, and at other times the context within which the information was created changes in a way that renders the information invalid. Often these inconsistencies are not discovered until the damage has occurred. This study builds on previous research that proposed a framework based on the metaphor of 'ecosystems' to model such inconsistencies in the 'supply chain' of life cycle information (Brokaw and Mukherjee, 2012). Such inconsistencies often result in litigation. Therefore, this paper studies a set of legal cases that resulted from inconsistencies in life cycle information, within the ecosystems framework. For each project, the type of errant information, the creator and user of the information and their relationship, and the time of creation and use of the information in the project life cycle are investigated to assess the causes of failures in precise and accurate information flow, as well as the impact of such failures in later stages of the project. The analysis shows that misleading information is mostly due to a lack of collaboration. In addition, in all the studied cases, a lack of compliance checking, imprecise data and insufficient clarification hindered the accurate and smooth flow of information. The paper presents findings regarding bottlenecks in the information flow process during the design, construction and post-construction phases. It also highlights the role of collaboration, as well as information integration and management, during the project life cycle, and presents a baseline for improving the information supply chain through the life cycle of the project.
Abstract:
Information management and geoinformation systems (GIS) have become indispensable in the large majority of protected areas all over the world. These tools are used for management purposes as well as for research, and in recent years they have become even more important for visitor information, education and communication. This study is divided into two parts: the first part provides a general overview of GIS and information management in a selected number of national park organizations; the second part lists and evaluates the needs of the evolving large protected areas in Switzerland. The results show widespread use of GIS and information management tools in well-established protected areas. The isolated use of individual GIS tools has increasingly been replaced by integrated geoinformation management. However, interview partners pointed out that human resources for GIS are limited in most parks. The interviews also highlight uneven access to national geodata. The vision of integrated geoinformation management is not yet fully developed in the park projects in Switzerland: short-term needs, such as software and data availability, account for a large share of the responses collected through an extensive questionnaire. Nevertheless, the need for coordinated action has been identified and should be followed up. The park organizations in North America show how effective coordination and cooperation might be organized.
Abstract:
The purpose of this study was to investigate the association between epilepsy self-management, disease control, and socio-economic status. Study participants were adult patients at two epilepsy specialty clinics in Houston, Texas, that serve demographically and socioeconomically diverse populations. Self-management behaviors (medication, information, safety, seizure, and lifestyle management) were tested against emergency room visits, hospitalizations, and seizure occurrence. The overall self-management score was associated with a greater likelihood of hospitalization over the prior twelve months, but not over the prior three months, and was not associated with seizure occurrence or emergency room visits. Scores on specific self-management behaviors varied in their relationships to the different disease control indicators over time. Contrary to expectations based on the findings of previous research, higher information management scores were associated with a greater likelihood of emergency room visits and hospitalizations over the study's twelve months. Higher lifestyle management scores were associated with a lower likelihood of emergency room visits over both the preceding twelve months and the last three months. The positive associations between overall self-management scores and information management behaviors and disease control are contrary to published research. These findings may indicate that those with worse disease control in a prior period employ stronger self-management efforts to better control their epilepsy. Further research is needed to investigate this hypothesis.
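The association tests described above can be illustrated with a minimal, hypothetical sketch in Python, assuming a logistic regression of a disease-control indicator on a self-management score; the variable names and synthetic data are illustrative assumptions, not the study's actual dataset or analysis.

import numpy as np
import statsmodels.api as sm

# Synthetic, illustrative data: an overall self-management score and a binary
# indicator of any hospitalization in the prior twelve months.
rng = np.random.default_rng(0)
n = 200
self_management_score = rng.normal(50, 10, n)
hospitalized = rng.binomial(1, 0.3, n)

# Logistic regression of hospitalization on the self-management score; the sign
# and p-value of the coefficient indicate the direction and strength of the association.
X = sm.add_constant(self_management_score)
result = sm.Logit(hospitalized, X).fit(disp=False)
print(result.summary())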
Abstract:
Aiming to address requirements concerning the integration of services in the context of "big data", this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays particular attention to usability and ease of use, requiring no programming expertise from end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and provide related recommendations to developers of such solutions. Evaluation results are also discussed.
Abstract:
This research is concerned with the experimental software engineering area, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering, possibly because of the present immaturity of the experimental paradigm as applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages do not solve all the information management problems that crop up when successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer the information produced from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage the changes introduced into the experiment by each replication. We expect the union of the two paradigms to improve the planning, design and execution of further replications and their alignment with existing replications. Additionally, this research will contribute a web support environment for archiving information related to different experiment replications. It will also provide information management support flexible enough for running replications with different numbers and types of changes, and it will afford massive storage of data from different replications. Experimenters working collaboratively on the same experiment must all have access to the different replications.
Abstract:
Nanotechnology is a recently established research area that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell), so any nanomedical research workflow inherently demands advanced information management strategies. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build bridges between medicine, nanotechnology and informatics, fostering the application of computational methods to solve the informational issues that arise at the wide intersection between biomedicine and nanotechnology.
These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, with the ultimate goal of leveraging the available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that require or could benefit from nanoinformatics methods and tools, illustrating the current drawbacks and limitations of BMI approaches when dealing with data from the nanomedical domain. Three different scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the scientific literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on informatics tools and services such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves effective in the biomedical domain, whereas these methodologies present severe limitations when applied to the nanomedical context.
To address these limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. The approach consists of an in-depth analysis of the scientific literature and of available clinical trial registries to extract relevant information about nanomedical experiments and results (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. This analysis concluded with the generation of a gold standard (a manually annotated training and test set), which was applied to the automatic classification of clinical trial records, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate, filter and validate part of the currently available nanomedical data on a scale suitable for decision-making. Similar analyses of other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which new algorithms can be tested to infer new information of value for nanomedical research.
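As a rough illustration of the classification step described above, the following Python sketch trains a simple text classifier on a small, invented set of labeled trial summaries; the example texts, labels and model choice (TF-IDF features with logistic regression) are assumptions for illustration, not the dissertation's actual gold standard or pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy gold standard: (trial summary, label) pairs; 1 = nanodrug/nanodevice, 0 = traditional.
train_texts = [
    "Phase I study of liposomal nanoparticle-encapsulated doxorubicin delivery",
    "Randomized trial of an implantable nanostructured drug-eluting device",
    "Double-blind trial of oral metformin versus placebo in type 2 diabetes",
    "Study of standard-dose aspirin for secondary stroke prevention",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features over word n-grams feed a simple linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_texts, train_labels)

# Classify an unseen registry entry.
print(classifier.predict(["Trial of polymeric nanoparticles for targeted tumor imaging"]))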
Abstract:
Personal data gathering in online markets is currently carried out on a far larger scale, and much more cheaply and quickly, than ever before. Within this scenario, a number of highly relevant companies for which personal data is the key factor of production have emerged. However, up to now, the corresponding economic analysis has been restricted primarily to a qualitative perspective linked to privacy issues. This paper seeks precisely to shed light on the quantitative perspective, approximating the value of personal information for those companies that base their business model on this new type of asset. In the absence of any systematic research or methodology on the subject, an ad hoc procedure is developed in this paper. It starts with an examination of the accounts of a number of key players in online markets. This inspection aims, first, to determine whether the value of personal information databases is somehow reflected in the firms' books, and second, to define performance measures able to capture this value. After discussing the strengths and weaknesses of possible approaches, the method that performs best under several criteria (revenue per data record) is selected. From this, an estimate of the net present value of personal data is derived, along with a brief digression into regional differences in the economic value of personal information.
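A minimal Python sketch of the selected performance measure (revenue per data record) and a net present value estimate derived from it is given below; all figures, the discount rate and the horizon are illustrative assumptions, not values from the paper.

# Illustrative inputs (assumed, not from the paper).
annual_revenue = 5.0e9      # revenue attributable to data-driven services (USD)
data_records = 1.0e9        # number of personal data records held
discount_rate = 0.10        # assumed discount rate
horizon_years = 5           # assumed forecast horizon

# Performance measure: revenue per data record.
revenue_per_record = annual_revenue / data_records

# Net present value of the per-record revenue stream over the horizon.
npv_per_record = sum(
    revenue_per_record / (1 + discount_rate) ** t
    for t in range(1, horizon_years + 1)
)

print(f"Revenue per record: ${revenue_per_record:.2f}")
print(f"NPV per record over {horizon_years} years: ${npv_per_record:.2f}")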
Abstract:
The building sector in Spain and Europe has experienced a significant decline in recent years as a result of the financial crisis that began in 2007. This decline has been accompanied by a low penetration of information and communication technologies in inter-organizational business processes. The shrinking market is causing a slowdown in the building sector, in which only flexible small and medium enterprises (SMEs) survive, thanks to specialization and innovation in services, which allow them to meet new market demands. Inter-organizational information systems (IOISs) support innovation in services and are thus a strategic tool for SMEs to obtain competitive advantage. Because of the inherent complexity of IOIS adoption, this research extends Kurnia and Johnston's (2000) theoretical model of IOIS adoption with an empirical model of IOIS characterization. The resulting model identifies the factors influencing IOIS adoption in SMEs in the building sector, so as to promote further service innovation for competitive and collaborative advantage. An empirical longitudinal study over six consecutive years, using data from Spanish SMEs in the building sector, validates the model using the partial least squares technique and an analysis of temporal stability. The main findings of this research are the four ways in which an IOIS might contribute to service innovation in the building sector, namely: a) improving client interfaces and the link between service providers and end users; b) defining a specific market where SMEs can develop new service concepts; c) enhancing the service delivery system in traditional customer-supplier relationships; and d) introducing information and communication technologies and tools to improve information management.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
Mode of access: Internet.