911 results for Classical Information
Abstract:
The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
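To make the idea of 2D topological predicates concrete, the sketch below shows, in Python rather than the authors' XQuery, what a minimal "overlaps" / "touches" test over axis-aligned element footprints could look like. The Footprint class and the rectangle-based geometry are illustrative assumptions, not the paper's actual data model.

```python
# Minimal sketch (not the authors' XQuery predicates): 2D "overlaps" and
# "touches" tests between axis-aligned element footprints, as one might
# derive from ifcXML placement data. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Footprint:
    """Axis-aligned 2D bounding box of a building element."""
    min_x: float
    min_y: float
    max_x: float
    max_y: float

def overlaps(a: Footprint, b: Footprint) -> bool:
    """True if the interiors of the two footprints intersect."""
    return (a.min_x < b.max_x and b.min_x < a.max_x and
            a.min_y < b.max_y and b.min_y < a.max_y)

def touches(a: Footprint, b: Footprint) -> bool:
    """True if the footprints share a boundary but no interior area."""
    boxes_meet = (a.min_x <= b.max_x and b.min_x <= a.max_x and
                  a.min_y <= b.max_y and b.min_y <= a.max_y)
    return boxes_meet and not overlaps(a, b)

# Example: a wall footprint adjacent to a slab footprint.
wall = Footprint(0.0, 0.0, 5.0, 0.2)
slab = Footprint(0.0, 0.2, 5.0, 4.0)
print(touches(wall, slab))   # True: they meet along y = 0.2
print(overlaps(wall, slab))  # False
```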
Abstract:
Modern mobile computing devices are versatile, but bring the burden of constantly adjusting settings to the current conditions of the environment. While this task has so far had to be accomplished by the human user, the variety of sensors usually deployed in such a handset provides enough data for autonomous self-configuration by a learning, adaptive system. However, this data may be unavailable at certain points in time or contain false values. Handling potentially incomplete sensor data to detect context changes without a semantic layer represents a scientific challenge, which we address with our approach. A novel machine learning technique is presented, the Missing-Values-SOM, which solves this problem by predicting setting adjustments based on context information. Our method is centered around a self-organizing map, extending it to provide a means of handling missing values. We demonstrate the performance of our approach on mobile context snapshots as well as on classical machine learning datasets.
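The Missing-Values-SOM is only summarised above, so the following Python sketch illustrates one common way such an extension can work: the best-matching unit is found and updated using only the observed (non-NaN) input dimensions, and missing entries are then filled from that unit's weights. All function names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a self-organizing map whose distance and update
# steps ignore missing (NaN) input dimensions. This shows the general idea
# of masked BMU search, not the paper's Missing-Values-SOM.

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, sigma=1.5):
    n_units, dim = grid[0] * grid[1], data.shape[1]
    weights = rng.random((n_units, dim))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for _ in range(epochs):
        for x in data:
            mask = ~np.isnan(x)                       # observed dimensions only
            d = np.sum((weights[:, mask] - x[mask]) ** 2, axis=1)
            bmu = np.argmin(d)                        # best-matching unit
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
            weights[:, mask] += lr * h[:, None] * (x[mask] - weights[:, mask])
    return weights

def predict_missing(x, weights):
    """Fill missing entries of x from the BMU found on the observed entries."""
    mask = ~np.isnan(x)
    bmu = np.argmin(np.sum((weights[:, mask] - x[mask]) ** 2, axis=1))
    filled = x.copy()
    filled[~mask] = weights[bmu, ~mask]
    return filled

# Example: context snapshots with one missing sensor reading at query time.
snapshots = rng.random((100, 4))
w = train_som(snapshots)
print(predict_missing(np.array([0.2, np.nan, 0.7, 0.1]), w))
```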
Abstract:
Crowdsourcing has become a popular approach for capitalizing on the potential of large and open crowds of people external to the organization. While crowdsourcing as a phenomenon is studied in a variety of fields, research mostly focuses on isolated aspects, and little is known about the integrated design of crowdsourcing efforts. We introduce a socio-technical systems perspective on crowdsourcing, which provides a deeper understanding of the components and relationships in crowdsourcing systems. By considering the function of crowdsourcing systems within their organizational context, we develop a typology of four distinct system archetypes. We analyze the characteristics of each type and derive a number of design requirements for the respective system components. The paper lays a foundation for IS-based crowdsourcing research, channels related academic work, and helps guide the study and design of crowdsourcing information systems.
Abstract:
On August 16, 2012, the SIGIR 2012 Workshop on Open Source Information Retrieval was held as part of the SIGIR 2012 conference in Portland, Oregon, USA. There were two invited talks, one from industry and one from academia. Six full papers and six short papers were presented, as well as demonstrations of four open source tools. Finally, there was a lively discussion on future directions for the open source Information Retrieval community. This contribution discusses the events of the workshop and outlines future directions for the community.
Abstract:
The evolution of classic power grids to smart grids creates opportunities for most participants in the energy sector. Customers can save money by reducing energy consumption, energy providers can better predict energy demand, and the environment benefits, since lower energy consumption implies lower energy production and thus fewer emissions from power plants. But the information and communication systems supporting smart grids can also be subject to classical or new network attacks. Attacks can result in serious damage such as harming customer privacy, causing economic loss, and even disturbing the power supply/demand balance of large regions and countries. In this paper, we give an overview of the German smart measuring architecture, protocols and security. Afterwards, we present a simulation framework which enables researchers to analyze security aspects of smart measuring scenarios.
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS where nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, comes with the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets where each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan expander graphs, to decide how many and which detectors are exchanged between which pair of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help us reduce the attack spread rate.
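As a rough illustration of the epidemic-model part of the analysis, the sketch below simulates the classical SIR model in discrete time; the parameters (infection rate beta, recovery rate gamma, population size) are purely illustrative and are not taken from the paper.

```python
# Minimal sketch of the classical SIR epidemic model used to reason about
# attack spread rates. Parameters below are illustrative, not from the paper.

def simulate_sir(n_nodes=1000, beta=0.3, gamma=0.1, i0=1, steps=100, dt=1.0):
    """Discrete-time SIR: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    s, i, r = n_nodes - i0, float(i0), 0.0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / n_nodes * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# A lower effective contact rate (beta) -- e.g., a more constrained detector
# exchange pattern between IDS nodes -- flattens the infected-node curve.
for beta in (0.3, 0.15):
    peak = max(i for _, i, _ in simulate_sir(beta=beta))
    print(f"beta={beta}: peak infected ~ {peak:.0f}")
```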
Abstract:
Theoretical foundations of higher-order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are considered as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can be estimated reliably, with known levels at which it is significantly different from zero, and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization at different regions of the brain as the neural response develops. The methodology is applied to analyze the evoked EEG response to flash visual stimuli presented to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution with time, using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
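For readers unfamiliar with bicoherence, the sketch below estimates it at a single bifrequency from an ensemble of stimulus-locked, windowed segments, using a standard normalisation of the bispectrum. It is a minimal illustration, not the authors' estimator; the synthetic signal, window and bin choices are assumptions.

```python
# Illustrative sketch: ensemble bicoherence estimate at one bifrequency
# (f1, f2) from stimulus-locked, windowed segments. The normalization
# follows a standard definition; it is not the authors' exact estimator.

import numpy as np

def bicoherence(segments, f1_bin, f2_bin):
    """segments: array (n_trials, n_samples) of stimulus-locked data."""
    spectra = np.fft.rfft(segments * np.hanning(segments.shape[1]), axis=1)
    x1 = spectra[:, f1_bin]
    x2 = spectra[:, f2_bin]
    x12 = spectra[:, f1_bin + f2_bin]
    bispec = np.mean(x1 * x2 * np.conj(x12))
    norm = np.sqrt(np.mean(np.abs(x1 * x2) ** 2) * np.mean(np.abs(x12) ** 2))
    return np.abs(bispec) / norm

# Example: quadratic phase coupling between 8 Hz and 4 Hz components
# (sampling rate 256 Hz, 1 s windows, so FFT bin k corresponds to k Hz).
fs, n, trials = 256, 256, 200
t = np.arange(n) / fs
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, trials)
x = (np.cos(2 * np.pi * 8 * t + phases[:, None])
     + np.cos(2 * np.pi * 4 * t + 2 * phases[:, None])
     + np.cos(2 * np.pi * 12 * t + 3 * phases[:, None])   # phase-coupled at 8 + 4 Hz
     + 0.5 * rng.standard_normal((trials, n)))
print(bicoherence(x, f1_bin=8, f2_bin=4))   # high (near 1): strong coupling
```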
Abstract:
This paper describes an innovative platform that facilitates the collection of objective safety data around occurrences at railway level crossings using data sources including forward-facing video, telemetry from trains, and geo-referenced asset and survey data. This platform is being developed with support from the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper provides a description of the underlying accident causation model, the development methodology and refinement process, as well as a description of the data collection platform. The paper concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.
Abstract:
Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This can pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated that achieves high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia, as examined in the experiments on automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important for benchmarking the performance of various CLLD systems and for identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
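Of the contributions listed above, the n-gram mutual information segmentation idea is the most directly illustrable. The Python sketch below shows the general principle, placing a word boundary wherever the pointwise mutual information of an adjacent character pair falls below a threshold, with assumed smoothing and threshold values; it is not the thesis's exact formulation.

```python
# Illustrative sketch of boundary detection with pointwise mutual information
# between adjacent characters. This shows the general idea behind n-gram
# mutual information segmentation, not the thesis's exact method.

import math
from collections import Counter

def train_counts(corpus):
    """Count unigram and adjacent-bigram character frequencies."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        unigrams.update(sentence)
        bigrams.update(sentence[i:i + 2] for i in range(len(sentence) - 1))
    return unigrams, bigrams

def segment(text, unigrams, bigrams, threshold=1.0, smooth=0.1):
    """Insert a boundary wherever the PMI of an adjacent pair is below threshold."""
    n_uni = sum(unigrams.values())
    n_bi = max(sum(bigrams.values()), 1)
    words, word = [], text[0]
    for a, b in zip(text, text[1:]):
        p_ab = bigrams.get(a + b, smooth) / n_bi
        p_a = unigrams.get(a, smooth) / n_uni
        p_b = unigrams.get(b, smooth) / n_uni
        pmi = math.log(p_ab / (p_a * p_b))
        if pmi < threshold:
            words.append(word)
            word = b
        else:
            word += b
    words.append(word)
    return words

# Toy example with a tiny hypothetical corpus (real use needs large text).
corpus = ["信息检索很有趣", "信息检索系统", "他很高兴"]
uni, bi = train_counts(corpus)
print(segment("信息检索系统很有趣", uni, bi))   # ['信息检索系统', '很有趣'] on these toy counts
```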
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the need for expensive devices and the risk of stolen biometric templates. Moreover, existing approaches usually perform authentication only once, at the beginning of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising way of hardening the authentication process, adding a further category: iii-2) how someone behaves. In recent years, various keystroke-dynamics-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches that allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include, for instance, poor scalability, high response times and high error rates. The aim of this work is the development of behavioral-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
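As a hedged illustration of free-text keystroke verification in general (not this work's specific method), the sketch below builds a per-user profile of average digraph latencies and accepts a live sample when its deviation on shared digraphs stays below a threshold; the distance measure, threshold and timings are all assumptions.

```python
# Illustrative sketch of free-text keystroke verification: compare digraph
# (key-pair) flight times of a live sample against a stored user profile.
# The distance measure and threshold are assumptions, not this work's method.

from collections import defaultdict
from statistics import mean

def digraph_latencies(events):
    """events: list of (key, press_time_ms). Returns {digraph: [latencies]}."""
    out = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        out[k1 + k2].append(t2 - t1)
    return out

def build_profile(sessions):
    """Average digraph latency per key pair over enrollment sessions."""
    pooled = defaultdict(list)
    for session in sessions:
        for dg, lats in digraph_latencies(session).items():
            pooled[dg].extend(lats)
    return {dg: mean(lats) for dg, lats in pooled.items()}

def verify(sample, profile, threshold=40.0):
    """Accept if mean absolute latency deviation on shared digraphs is small."""
    sample_lat = digraph_latencies(sample)
    shared = [dg for dg in sample_lat if dg in profile]
    if not shared:
        return False
    deviation = mean(abs(mean(sample_lat[dg]) - profile[dg]) for dg in shared)
    return deviation < threshold

# Toy example with hypothetical timings (milliseconds).
enroll = [[("t", 0), ("h", 110), ("e", 205)], [("t", 0), ("h", 120), ("e", 210)]]
live = [("t", 0), ("h", 115), ("e", 200)]
profile = build_profile(enroll)
print(verify(live, profile))   # True for this toy data
```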
Abstract:
Purpose – The aim of the paper is to describe and explain, using a combination of interviews and content analysis, the social and environmental reporting practices of a major garment export organisation within a developing country.
Design/methodology/approach – Senior executives from a major organisation in Bangladesh are interviewed to determine the pressures being exerted on them in terms of their social and environmental performance. The perceptions of these pressures are then used to explain – via content analysis – changing social and environmental disclosure practices.
Findings – The results show that particular stakeholder groups have, since the early 1990s, placed pressure on the Bangladeshi clothing industry in terms of its social performance. This pressure, which is also directly related to the expectations of the global community, in turn drives the industry's social policies and related disclosure practices.
Research limitations/implications – The findings show that, within the context of a developing country, unless we consider the managers' perceptions of the social and environmental expectations being imposed upon them by powerful stakeholder groups, we will be unable to understand organisational disclosure practices.
Originality/value – This paper is the first known paper to interview managers from a large organisation in a developing country about changing stakeholder expectations and then link these changing expectations to annual report disclosures across an extended period of analysis.
Abstract:
This paper investigates the social and environmental disclosure practices of two large multinational companies, specifically Nike and Hennes & Mauritz. Utilising a joint consideration of legitimacy theory and media agenda setting theory, we investigate the linkage between negative media attention and positive corporate social and environmental disclosures. Our results generally support the view that, for those industry-related social and environmental issues attracting the greatest amount of negative media attention, these corporations react by providing positive social and environmental disclosures. The results were particularly significant in relation to labour practices in developing countries – the issue attracting the greatest amount of negative media attention for the companies in question.
Abstract:
Information privacy is a crucial aspect of eHealth, and appropriate privacy management measures are therefore essential for its success. However, traditional measures for privacy preservation, such as rigid access controls (i.e., preventive measures), are not well suited to eHealth because of the specialised and information-intensive nature of healthcare itself and the nature of the information involved. Healthcare professionals (HCPs) require easy, unrestricted access to as much information as possible in order to make well-informed decisions. At the other end of the scale, however, consumers (i.e., patients) demand control over their health information and raise privacy concerns arising from internal activities (i.e., information use by HCPs). A proper balance of these competing concerns is vital for the implementation of successful eHealth systems. Towards reaching this balance, we propose an information accountability framework (IAF) for eHealth systems.
Abstract:
A global, online quantitative study among 300 consumers of digital technology products found that the most reliable information sources were friends, family or word of mouth (WOM) from someone they knew, followed by expert product reviews and product reviews written by other consumers. The most unreliable information sources were advertising or infomercials, automated recommendations based on purchasing patterns, and retailers. While a very small number of consumers evaluated products online, rating products and taking part in online discussions were more frequent activities. The most popular social media websites for reviews were Facebook, Twitter, Amazon and eBay, indicating the importance of WOM in social networks and in online media spaces that feature product reviews, since WOM is the most persuasive piece of information in both online and offline social networks. These results suggest that 'social customers' must be considered an integral part of a marketing strategy.