888 results for Data Structures, Cryptology and Information Theory
Abstract:
Supply Chain Risk Management (SCRM) has become a popular area of research and study in recent years, as highlighted by the number of peer-reviewed articles that have appeared in the academic literature. This, coupled with the realisation by companies that SCRM strategies are required to mitigate the risks they face, makes for challenging research questions in the field of risk management. The challenge that companies face today is not only to identify the types of risk they face, but also to assess the indicators of that risk, allowing them to mitigate it before any disruption to the supply chain occurs. The use of social network theory can aid in the identification of disruption risk. This thesis proposes the combination of social networks, behavioural risk indicators and information management to uniquely identify disruption risk. The propositions developed in this thesis from the literature review and an exploratory case study at an aerospace OEM are: (1) by improving information flows, through the use of social networks, we can identify supply chain disruption risk; and (2) the management of information to identify supply chain disruption risk can be explored using push and pull concepts. The propositions were further explored through four focus group sessions, two within the OEM and two within an academic setting. The literature review conducted by the researcher did not find any studies that evaluate supply chain disruption risk management in terms of social network analysis or information management. The evaluation of SCRM using these methods is thought to be a unique way of understanding the issues in SCRM that practitioners in the aerospace industry face today.
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
The basic structure of the General Information Theory (GIT) is presented in the paper. The main divisions of the GIT are outlined, and some new results are pointed out.
Abstract:
A polyparametric intelligent information system for diagnosing human functional state in medicine and public health has been developed. The essence of the system is the polyparametric description of human functional state with a unified set of physiological parameters, using a polyparametric cognitive model as the tool for systems analysis of the multidimensional data and for diagnostics of the functional state. The model is developed on the basis of general principles of geometry and symmetry, using algorithms from artificial intelligence. The architecture of the system is presented. The model allows analysis of traditional signs (absolute values of electrophysiological parameters) as well as new signs generated by the model (the relationships between those values). Classification of the multidimensional physiological data is performed with a transformer of the model. The results are presented to the physician as a visual graph, a pattern of the individual functional state, which supports clinical syndrome analysis. The level of human functional state is determined by comparison with a developed standard ("ideal") functional state. The complete formalization of the results makes it possible to accumulate physiological data and to analyze them by mathematical methods.
Abstract:
* The research is supported in part by the INTAS 04-77-7173 project, http://www.intas.be
Abstract:
An approach for organizing information in data warehouses is presented in the paper. The possibilities of numbered information spaces for building data warehouses are discussed, and an application is outlined.
Abstract:
The use of modern object-oriented methods for designing information systems (IS), and for describing the interrelations between an IS and the business processes it automates, leads to the need to construct a single, complete IS from a set of local models of that system. This approach produces contradictions caused by inconsistencies between the work of the individual IS developers and, much more importantly, between the points of view of individual IS users. Similar contradictions also arise while an IS is in service at an enterprise, because the enterprise's individual business processes change constantly. It should also be noted that the overwhelming majority of IS are currently developed and maintained as sets of separate functional modules, each of which can operate as an independent IS. However, integrating separate functional modules into a single system can raise many problems: for example, modules may contain functions the enterprise does not use for their intended purpose, and the information and program integration of modules from different manufacturers is complex. In most cases these contradictions, and the reasons causing them, are a consequence of representing the IS primarily as an equilibrium, steady-state system. In work [1], the representation of an IS as a dynamic multistable system capable of carrying out the following actions was considered:
Abstract:
The purpose of this article is to evaluate the effectiveness of learning by doing as a practical tool for managing the training of students in "Library Management" at ULSIT, Sofia, Bulgaria, through the creation of the project Data Base "Bulgarian Revival Towns" (CD), financed by the Bulgarian Ministry of Education, Youth and Science (1/D002/144/13.10.2011) and headed by Prof. DSc Ivanka Yankova, which aims to create a new information resource on these towns to serve the needs of scientific research. By participating in generating the array in the database through searching, selection and digitization of documents from this period, students get an opportunity to expand their skills in working effectively in a team, in finding interdisciplinary and causal connections between the studied items, objects and subjects and, foremost, in gaining practical experience in the fields of digitization, information behavior, strategies for information search, etc. This method achieves good results in the accumulation of sustainable knowledge and generates motivation to work in the library and information professions.
Abstract:
In this paper we summarize our recently proposed work on the information theory analysis of regenerative channels. We discuss how the design and transfer function properties of the regenerator affect the noise statistics and enable Shannon capacities higher than those of the corresponding linear channels (in the absence of regeneration).
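For orientation, the linear-channel reference point in such comparisons is the Shannon capacity of an additive white Gaussian noise channel, C = B log2(1 + SNR). Below is a minimal sketch with hypothetical bandwidth and SNR figures, not taken from the paper:

```python
import math

def awgn_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) of a linear AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Hypothetical figures for illustration: a 50 GHz channel at 20 dB SNR.
snr_linear = 10 ** (20 / 10)  # convert dB to a linear power ratio
print(f"Linear-channel capacity: {awgn_capacity(50e9, snr_linear) / 1e9:.1f} Gbit/s")
```

A regenerator that reshapes the noise statistics can, per the paper's claim, push capacity above this linear-channel figure.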
Abstract:
This research pursued the conceptualization and real-time verification of a system that allows a computer user to control the cursor of a computer interface without using his/her hands. The target user groups for this system are individuals who are unable to use their hands due to spinal dysfunction or other afflictions, and individuals who must use their hands for higher priority tasks while still requiring interaction with a computer. The system receives two forms of input from the user: Electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an Eye Gaze Tracking (EGT) system. In order to produce reliable cursor control from the two forms of user input, the development of this EMG/EGT system addressed three key requirements: an algorithm was created to accurately translate EMG signals due to facial movements into cursor actions, a separate algorithm was created that recognized an eye gaze fixation and provided an estimate of the associated eye gaze position, and an information fusion protocol was devised to efficiently integrate the outputs of these algorithms. Experiments were conducted to compare the performance of EMG/EGT cursor control to EGT-only control and mouse control. These experiments took the form of two different types of point-and-click trials. The data produced by these experiments were evaluated using statistical analysis, Fitts' Law analysis and target re-entry (TRE) analysis. The experimental results revealed that though EMG/EGT control was slower than EGT-only and mouse control, it provided effective hands-free control of the cursor without a spatial accuracy limitation, and it also facilitated a reliable click operation. This combination of qualities is not possessed by either EGT-only or mouse control, making EMG/EGT cursor control a unique and practical alternative for a user's cursor control needs.
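The Fitts' Law analysis mentioned above models pointing time as a linear function of an index of difficulty. As a rough illustration only (the study's exact formulation and fitted constants are not reproduced here), the common Shannon formulation looks like this:

```python
import math

def index_of_difficulty(distance_px: float, width_px: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_px / width_px + 1.0)

def movement_time(a: float, b: float, distance_px: float, width_px: float) -> float:
    """Fitts' law: MT = a + b * ID, with a and b fitted per input device."""
    return a + b * index_of_difficulty(distance_px, width_px)

# Hypothetical regression constants for illustration: a = 0.2 s, b = 0.15 s/bit.
print(f"MT = {movement_time(0.2, 0.15, 512, 32):.3f} s for a 512 px move to a 32 px target")
```

Comparing the fitted b values (seconds per bit) across the EMG/EGT, EGT-only and mouse conditions is one standard way such point-and-click trials are summarized.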
Abstract:
The purpose of this study was to test Lotka's law of scientific publication productivity, using the methodology outlined by Pao (1985), in the field of Library and Information Studies (LIS). Lotka's law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%) respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value is 0.022758 and the calculated critical value is 0.026562. It was determined that the null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka's and Pao's procedure, could not be rejected. This study finds that literature in the field of Library and Information Studies does conform to Lotka's law with reliable results. As a result, Lotka's law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, national). Lotka's law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and its impact on the appointment, tenure and promotion process of academic librarians.
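Lotka's law states that the proportion of authors contributing y publications is f(y) = c / y^n. Below is a minimal sketch of the goodness-of-fit step, using the reported n = 2.1 and c = 0.6418 but a toy observed distribution in place of the study's data:

```python
def lotka_expected(n: float, c: float, y_max: int) -> list[float]:
    """Expected proportion of authors with y publications: f(y) = c / y**n."""
    return [c / (y ** n) for y in range(1, y_max + 1)]

def ks_dmax(observed: list[float], expected: list[float]) -> float:
    """K-S statistic: largest gap between the two cumulative distributions."""
    dmax = cum_obs = cum_exp = 0.0
    for o, e in zip(observed, expected):
        cum_obs += o
        cum_exp += e
        dmax = max(dmax, abs(cum_obs - cum_exp))
    return dmax

# Toy observed proportions of authors with 1, 2, 3, ... publications (illustrative only).
observed = [0.64, 0.16, 0.07, 0.04, 0.03, 0.02, 0.02, 0.01, 0.005, 0.005]
expected = lotka_expected(n=2.1, c=0.6418, y_max=10)
print(f"Dmax = {ks_dmax(observed, expected):.6f}")
```

The fit is accepted when Dmax stays below the critical value at the chosen significance level, as it did in the study (0.022758 < 0.026562).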
Abstract:
Climate change is one of the most important and urgent issues of our time. In 2006, China overtook the United States as the world's largest greenhouse gas (GHG) emitter, and China's role in an international climate change solution has gained increased attention. Although much literature has addressed the functioning, performance, and implications of existing climate change mitigation policies and actions in China, there is insufficient literature illuminating how the national climate change mitigation policies have been formulated and shaped. This research utilizes the policy network approach to explore China's climate change mitigation policy making by examining how a variety of government, business, and civil society actors have formed networks to address environmental contexts and influence policy outcomes and changes. The study is qualitative in nature. Three cases are selected to illustrate the structural and interactive features of specific policy network settings in shaping different policy arrangements and influencing outcomes in the Chinese context: the regulatory evolution of China's climate change policy making; the country's involvement in the Clean Development Mechanism (CDM); and China's exploration of voluntary agreements through the Top-1000 Industrial Energy Conservation Program. The historical analysis of the policy process uses both primary data from interviews and fieldwork and secondary data from the relevant literature. The study finds that the Chinese central government dominates domestic climate change policy making; however, expanded action networks involving actors at all levels have emerged, corresponding to diverse climate mitigation policy arrangements. The improved openness and accessibility of the climate change policy network have contributed to its proactive engagement in promoting mitigation outcomes. In conclusion, the research suggests that the policy network approach provides a useful tool for studying China's climate change policy-making process. The involvement of various types of state and non-state actors has shaped new relations and affected policy outcomes and changes. In addition, through cross-case analysis, the study challenges the "fragmented authoritarianism" model and argues that this once-influential model is not appropriate for explaining new developments and changes in policy-making processes in contemporary China.
Abstract:
Background: Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed too time-consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and discarding of potentially useful data. Results: We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations that have both low overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, by using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps API. Our five implementations introduce design elements that can benefit visualization developers. Conclusions: We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
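As a hedged sketch of the pre-rendering side of such a pipeline (an illustration, not the authors' code), a large rendered image can be sliced into the 256 x 256 pixel zoom/x/y tiles that tiled map viewers such as the Google Maps API expect; the file names and input image here are hypothetical:

```python
from pathlib import Path
from PIL import Image  # Pillow

TILE = 256  # standard map tile edge length in pixels

def cut_tiles(image_path: str, out_dir: str, zoom: int) -> None:
    """Slice a pre-rendered visualization into zoom/x/y tiles for a tiled map viewer."""
    img = Image.open(image_path)
    for x in range(img.width // TILE):
        for y in range(img.height // TILE):
            tile = img.crop((x * TILE, y * TILE, (x + 1) * TILE, (y + 1) * TILE))
            dest = Path(out_dir) / str(zoom) / str(x)
            dest.mkdir(parents=True, exist_ok=True)
            tile.save(dest / f"{y}.png")

# Hypothetical usage: a 4096 x 4096 heatmap rendering corresponds to zoom level 4.
cut_tiles("heatmap_render.png", "tiles", zoom=4)
```

The browser-side viewer then fetches only the tiles for the current viewport and zoom level, which is what keeps the interaction overhead low.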
Abstract:
Online Social Network (OSN) services provided by Internet companies bring people together to chat and to share and consume information. Meanwhile, huge amounts of data are generated by those services (which can be regarded as social media) every day, every hour, even every minute and second. Currently, researchers are interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, due to the large scale of OSN data, it is difficult to analyze effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of social media data: users and user-generated contents. Specifically, it aims at addressing three problems related to social media users and contents: (1) how does one organize the users and the contents? (2) how does one summarize the textual contents so that users do not have to go over every post to capture the general idea? (3) how does one identify the influential users in social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows: (1) it provides a comprehensive and versatile data mining framework to analyze the users and user-generated contents from social media; (2) it designs a hierarchical co-clustering algorithm to organize the users and contents; (3) it proposes multi-document summarization methods to extract core information from social network contents; and (4) it introduces three important dimensions of social influence, and a dynamic influence model for identifying influential users.
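As a small, hedged illustration of the co-clustering idea (a flat spectral variant, not the dissertation's own hierarchical algorithm), scikit-learn can jointly group users and content terms from a user-by-term matrix:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Hypothetical user-by-term count matrix: rows are users, columns are content terms.
rng = np.random.default_rng(0)
user_term = rng.poisson(lam=1.0, size=(20, 30))

# Jointly assign users (rows) and terms (columns) to co-clusters.
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(user_term)
print("user groups:", model.row_labels_)
print("term groups:", model.column_labels_)
```

Each co-cluster pairs a group of users with the vocabulary they tend to generate, which is one way to organize both users and contents at once.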
Abstract:
Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved. Acknowledgements: This review is one of a series of systematic reviews for the ROMEO project (Review Of MEn and Obesity), funded by the National Institute for Health Research, Health Technology Assessment Programme (NIHR HTA Project 09/127/01; Systematic reviews and integrated report on the quantitative and qualitative evidence base for the management of obesity in men, http://www.hta.ac.uk/2545). The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Department of Health. HERU, HSRU and NMAHP are funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. The authors accept full responsibility for this publication. We would also like to thank the Men's Health Forums of Scotland, Ireland, England and Wales: Tim Street, Paula Carroll, Colin Fowler and David Wilkins. We also thank Kate Jolly for further information about the Lighten Up trial.