186 results for computer science, information systems

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

Fraud is one of the besetting evils of our time. While less dramatic than crimes of violence such as murder or rape, fraud can inflict significant damage at the organizational or individual level.

Fraud is a concept that seems to have an obvious meaning until we try to define it. Because fraud exists in many different guises, it is necessary to define carefully what it is and to tailor policies and initiatives accordingly.

Developing a definition of fraud is an early step in a prevention program. To be involved in the protection function, people at all levels of an organization must be knowledgeable about fraud. In this paper, we discuss the risk of fraud from an information systems perspective, explain what fraud is, and present a range of definitions of fraud and computer fraud. We argue that without a clear definition of fraud, organizations cannot share information that means the same thing to everyone, agree on how to measure the problem, or know its extent, and therefore cannot decide how much to invest and where to deploy resources to address it effectively.

Relevance:

100.00%

Publisher:

Abstract:

This article examines the importance of popular users in online social networks (OSNs). The results are counter-intuitive. First, with respect to dissemination speed, a large number of users can swiftly distribute information to the masses, but they are not the highly connected users. Second, with respect to dissemination scale, many powerful forwarders in OSNs cannot be identified by the degree measure. Finally, for controlling dissemination, popular users do not cover most of the bridges between social communities.
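
As a rough illustration of the kind of comparison involved (this is not the article's data or method; the example graph and the use of betweenness centrality as a proxy for community bridges are assumptions), one can check how little the top-k nodes by degree overlap with the top-k bridging nodes:

```python
# Minimal sketch (not from the article): compare how well high-degree ("popular")
# nodes cover the bridges between communities, using betweenness centrality as a
# rough proxy for bridging nodes.
import networkx as nx

G = nx.les_miserables_graph()   # any social graph will do
k = 10

top_by_degree = set(sorted(G, key=G.degree, reverse=True)[:k])
betweenness = nx.betweenness_centrality(G)
top_by_betweenness = set(sorted(betweenness, key=betweenness.get, reverse=True)[:k])

overlap = len(top_by_degree & top_by_betweenness) / k
print(f"Fraction of top-{k} bridging nodes that are also top-{k} by degree: {overlap:.2f}")
```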

Relevance:

100.00%

Publisher:

Abstract:

As the new millennium approaches, we are living in a society that is increasingly dependent upon information technology. However, whilst technology can deliver a number of benefits, it also introduces new vulnerabilities that can be exploited by persons with the necessary technical skills. Hackers represent a well-known threat in this respect and are responsible for a significant degree of disruption and damage to information systems. However, they are not the only criminal element that has to be taken into consideration. Evidence suggests that technology is increasingly seen as a potential tool for terrorist organizations. This is leading to the emergence of a new threat in the form of 'cyber terrorists', who attack technological infrastructures such as the Internet in order to help further their cause. The paper discusses the problems posed by these groups and considers the nature of the responses necessary to preserve the future security of our society.

Relevance:

100.00%

Publisher:

Abstract:

Event-related potential (ERP) analysis is one of the most widely used methods in cognitive neuroscience for studying the physiological correlates of sensory, perceptual and cognitive activity associated with processing information. To this end, information flow or dynamic effective connectivity analysis is a vital technique for understanding higher cognitive processing under different events. In this paper we present a Granger causality (GC)-based connectivity estimation applied to ERP data analysis. In contrast to the generally used strictly causal multivariate autoregressive model, we use an extended multivariate autoregressive model (eMVAR) which also accounts for instantaneous interactions among the variables under consideration. The experimental data used in the paper are based on a single-subject data set of erroneous button-press responses from a two-back-with-feedback continuous performance task (CPT). To demonstrate the feasibility of applying eMVAR models in source-space connectivity studies, we use cortical source time series estimated with blind source separation, or independent component analysis (ICA), for this data set.
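
As a rough illustration of the underlying idea (not the authors' eMVAR estimator: the sketch below runs an ordinary strictly causal pairwise Granger-causality test on synthetic two-channel data using statsmodels, without instantaneous effects), one can test whether the past of one channel improves prediction of another:

```python
# Minimal sketch on synthetic data: a strictly causal pairwise Granger-causality
# test (this is NOT the paper's eMVAR model, which also captures instantaneous
# interactions between channels).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)                 # "cause" channel
y = np.zeros(n)                            # "effect" channel driven by the past of x
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

# grangercausalitytests expects a 2-column array ordered as (effect, cause).
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
p_value = results[2][0]["ssr_ftest"][1]    # p-value of the F-test at lag 2
print(f"p-value for 'x Granger-causes y' at lag 2: {p_value:.3g}")
```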

Relevance:

100.00%

Publisher:

Abstract:

On-time completion is an important temporal QoS (Quality of Service) dimension and one of the fundamental requirements for high-confidence workflow systems. In recent years, a workflow temporal verification framework, which generally consists of temporal constraint setting, temporal checkpoint selection, temporal verification, and temporal violation handling, has been the major approach to assuring high temporal QoS in workflow systems. Among these components, effective temporal checkpoint selection, which aims to detect intermediate temporal violations along workflow execution in a timely fashion, plays a critical role. Temporal checkpoint selection has therefore been a major topic and has attracted significant research effort. In this paper, we present an overview of workflow temporal checkpoint selection for temporal verification. Specifically, we first introduce the throughput-based and response-time-based temporal consistency models for business and scientific cloud workflow systems, respectively. Then the corresponding benchmarking checkpoint selection strategies that satisfy the property of "necessity and sufficiency" are presented. We also provide experimental results to demonstrate the effectiveness of our checkpoint selection strategies, and finally point out some possible future issues in this research area.
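
The following is a simplified sketch of response-time-based checkpoint selection (the activity statistics, slack rule and deadline are illustrative assumptions, not the benchmarking strategies described in the paper): a checkpoint is selected at each activity where the accumulated real duration leaves no slack against the overall deadline.

```python
# Simplified illustration (not the paper's exact strategy): response-time-based
# checkpoint selection. A checkpoint is taken at any activity where the time
# already spent plus the expected remaining work exceeds the overall deadline.
from dataclasses import dataclass

@dataclass
class Activity:
    expected: float  # expected (mean) duration
    actual: float    # observed duration at runtime

def select_checkpoints(activities, deadline):
    checkpoints = []
    elapsed = 0.0
    expected_so_far = 0.0
    total_expected = sum(a.expected for a in activities)
    for i, a in enumerate(activities):
        elapsed += a.actual
        expected_so_far += a.expected
        # Remaining slack: time budget left minus expected remaining work.
        slack = deadline - elapsed - (total_expected - expected_so_far)
        if slack < 0:  # temporal consistency violated -> checkpoint needed here
            checkpoints.append(i)
    return checkpoints

acts = [Activity(10, 9), Activity(10, 15), Activity(10, 12), Activity(10, 10)]
print(select_checkpoints(acts, deadline=42))  # indices where a violation is detected
```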

Relevance:

100.00%

Publisher:

Abstract:

Intelligent Internet Computing (IIC) is rapidly emerging as an exciting new paradigm, encompassing pervasive, grid, and peer-to-peer computing, to provide computing and communication services any time and anywhere. The IIC paradigm foresees the seamless integration of communicating and computational devices and applications embedded in all parts of our environment, from our physical selves, to our homes, our offices, our streets and so on. Although IIC presents exciting enabling opportunities, the benefits will only be realized if application and security issues can be appropriately addressed. This special issue is intended to foster the dissemination of state-of-the-art research in the area of IIC, including novel applications associated with its utilization, security systems and services, and security models. We plan to publish high-quality manuscripts covering the various practical applications and related security theories of IIC. Papers should not be submitted simultaneously for publication elsewhere. Submissions of high-quality papers describing mature results or ongoing work are invited. Selected high-quality papers from the Eleventh IEEE International Conference on High Performance Computing and Communications (HPCC-09) and the Third International Conference on Information Security and Assurance (ISA-09) will be published in this special issue of the Journal of Internet Technology on "Intelligent Internet Computing".

Relevance:

100.00%

Publisher:

Abstract:

The Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention, and is a key enabling technology for cyber-physical systems (CPSs). This paper explores CPS beyond the M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe a few particular use cases that motivate the development of M2M communication primitives tailored to large-scale CPS. M2M communication has so far been considered in the literature only to a limited extent: the existing work is based on small-scale M2M models and centralized solutions, different sources discuss different primitives, and the few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and even trillions of M2M devices without sacrificing solution quality. The main paradigm shift is to design localized algorithms, in which CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, is studied to illustrate these new directions.

Relevance:

100.00%

Publisher:

Abstract:

The analysis and prediction of the stock market has long been recognized as a difficult problem due to the level of uncertainty and the many factors that affect prices. To tackle this challenging problem, this paper proposes a hybrid approach that mines useful information using grey system theory and fuzzy risk analysis for stock price prediction. In this approach, we first provide a model that combines a fuzzy function, the k-means algorithm and a grey system (abbreviated FKG), and then provide a fuzzy risk analysis (FRA) model. A practical example describing the application of FKG and FRA to the stock market is given, and the analytical results provide an evaluation of the method and show promising results. © 2013 IEEE.
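
For readers unfamiliar with grey-system forecasting, the sketch below implements the standard GM(1,1) grey prediction model, a typical building block of such approaches; it is not the paper's full FKG model, and the sample prices are made up.

```python
# Minimal sketch of a standard GM(1,1) grey prediction model (one building block
# of grey-system forecasting; not the paper's full FKG model).
import numpy as np

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # grey development / control coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # restore the original (non-cumulative) series
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                         # forecasts beyond the observed sample

prices = [10.2, 10.5, 10.9, 11.4, 11.8]             # made-up closing prices
print(gm11_forecast(prices, steps=2))
```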

Relevance:

100.00%

Publisher:

Abstract:

Density-based means have recently been proposed as a method for dealing with outliers in the stream processing of data. Derived from a weighted arithmetic mean with variable weights that depend on the location of all data samples, these functions are not monotonic and hence cannot be classified as aggregation functions. In this article we establish the weak monotonicity of this class of averaging functions and use this to construct robust generalisations of these means. Specifically, we find that, as proposed, the density-based means are only robust to isolated outliers. However, by using penalty-based formalisms of averaging functions and applying more sophisticated and robust density estimators, we are able to define a broader family of density-based means that are more effective at filtering both isolated and clustered outliers. © 2014 Elsevier Inc. All rights reserved.
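
A minimal sketch of the general idea (an illustrative construction rather than the article's exact definition): weight each sample by a Gaussian kernel density estimate at that sample, so that isolated outliers receive little weight.

```python
# Illustrative density-based mean: each sample is weighted by a Gaussian kernel
# density estimate at its own location, so isolated outliers get low weight.
import numpy as np

def density_based_mean(x, bandwidth=1.0):
    x = np.asarray(x, dtype=float)
    diffs = x[:, None] - x[None, :]
    # Unnormalised kernel density estimate at each sample point (including itself).
    density = np.exp(-0.5 * (diffs / bandwidth) ** 2).sum(axis=1)
    weights = density / density.sum()
    return float(np.dot(weights, x))

data = [4.9, 5.1, 5.0, 5.2, 25.0]                 # one isolated outlier
print(np.mean(data), density_based_mean(data))    # the density-based mean is pulled far less
```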

Relevance:

100.00%

Publisher:

Abstract:

We present a new approach for defining similarity measures for Atanassov's intuitionistic fuzzy sets (AIFS), in which a similarity measure has two components indicating the similarity and hesitancy aspects. We argue that there are at least two facets of uncertainty in an AIFS, one of which is related to fuzziness while the other is related to a lack of knowledge, or non-specificity. We propose a set of axioms and build families of similarity measures that avoid the counterintuitive examples commonly used to justify one similarity measure over another. We also investigate a relation to entropies of AIFS, and outline possible applications of our method in decision making and image segmentation. © 2014 Elsevier Inc. All rights reserved.
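
As an illustration of the two-component idea (this measure is a simple stand-in, not the authors' axiomatic construction), one can report one score for membership/non-membership agreement and another for hesitancy agreement:

```python
# Illustrative two-component similarity for Atanassov intuitionistic fuzzy sets.
# Each set is a list of (mu, nu) pairs with mu + nu <= 1; the hesitancy is
# pi = 1 - mu - nu. Returns (membership/non-membership similarity, hesitancy similarity).
def aifs_similarity(A, B):
    n = len(A)
    sim_mu_nu = 0.0
    sim_hesitancy = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        pi_a = 1.0 - mu_a - nu_a
        pi_b = 1.0 - mu_b - nu_b
        sim_mu_nu += 1.0 - 0.5 * (abs(mu_a - mu_b) + abs(nu_a - nu_b))
        sim_hesitancy += 1.0 - abs(pi_a - pi_b)
    return sim_mu_nu / n, sim_hesitancy / n

A = [(0.7, 0.2), (0.4, 0.4)]
B = [(0.6, 0.3), (0.5, 0.3)]
print(aifs_similarity(A, B))
```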

Relevance:

100.00%

Publisher:

Abstract:

Radio-frequency identification (RFID) is seen as one of the prerequisites for the implementation of the Internet of Things (IoT). However, an RFID system has to be equipped with a holistic security framework for secure and scalable operation. Although much work has been done to provide privacy and anonymity, little attention has been given to the performance, scalability and customizability issues needed to support a robust implementation of the IoT. Existing protocols also suffer from a number of deficiencies, such as insecure or inefficient identification techniques, throughput delay and inadaptability. In this paper, we propose a novel identification technique based on a hybrid approach (combining a group-based approach and a collaborative approach) and security check handoff (SCH) for RFID systems with mobility. The proposed protocol provides customizability and adaptability and ensures the secure and scalable deployment of an RFID system to support a robust distributed structure such as the IoT. The protocol offers an additional layer of protection against malware through an incorporated malware detection technique. We evaluated the protocol using a battery of randomness tests, and the results show that it offers better security, scalability and customizability than existing protocols. © 2014 Elsevier B.V. All rights reserved.
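
As background, the sketch below illustrates generic group-based tag identification, a common technique in the RFID literature; it is not the proposed hybrid protocol, and the keys and group layout are made-up placeholders.

```python
# Generic illustration of group-based tag identification (a common technique in
# the RFID literature; NOT the proposed hybrid protocol). The reader first
# narrows the search to a group using a keyed hash over its challenge, then
# searches only that group's tags instead of every tag it knows.
import hashlib
import os

def h(key: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(key + nonce).digest()

# Reader-side key material: a few groups, each with its own key and member tags.
groups = {
    "g1": {"key": b"group-key-1", "tags": {"t1": b"tag-key-1", "t2": b"tag-key-2"}},
    "g2": {"key": b"group-key-2", "tags": {"t3": b"tag-key-3"}},
}

def identify(tag_group_key: bytes, tag_key: bytes):
    nonce = os.urandom(16)                       # reader's challenge
    group_resp = h(tag_group_key, nonce)         # tag's group-level response
    tag_resp = h(tag_key, nonce)                 # tag's individual response
    for gname, g in groups.items():              # match the group first
        if h(g["key"], nonce) == group_resp:
            for tname, tkey in g["tags"].items():  # then search only within the group
                if h(tkey, nonce) == tag_resp:
                    return gname, tname
    return None

print(identify(b"group-key-1", b"tag-key-2"))    # -> ('g1', 't2')
```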

Relevance:

100.00%

Publisher:

Abstract:

Phishing attacks continue unabated to plague Internet users and trick them into providing personal and confidential information to phishers. In this paper, an approach for email-borne phishing detection based on profiling and clustering techniques is proposed. We formulate the profiling problem as a clustering problem, using various features present in the phishing emails as feature vectors, and generate profiles based on the clustering predictions. These predictions are further utilized to generate complete profiles of the emails. We carried out extensive experimental analysis of the proposed approach in order to evaluate its effectiveness with respect to factors such as the type of data, data set sizes and cluster sizes. We compared the performance of the proposed approach against the Modified Global K-means (MGKmeans) approach. The results show that the proposed approach is efficient compared to the baseline approach. © 2014 Elsevier Ltd. All rights reserved.
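
A minimal sketch of the profiling-as-clustering idea (the feature choices, sample emails and use of scikit-learn's KMeans are illustrative assumptions, not the paper's feature set or the MGKmeans baseline): each email becomes a small feature vector, and the cluster assignments form the basis of profiles.

```python
# Minimal sketch: turn each email into a small feature vector and cluster the
# vectors to form profiles. Feature choices are illustrative placeholders.
import re
from sklearn.cluster import KMeans

def features(email_body: str):
    body = email_body.lower()
    return [
        len(re.findall(r"https?://", body)),                                   # number of links
        int("<form" in body),                                                  # embedded HTML form
        sum(w in body for w in ("verify", "account", "urgent", "password")),   # suspicious keywords
    ]

emails = [
    "Please verify your account urgently: http://example.test/login",
    "Meeting notes attached, see you tomorrow.",
    "Your password expires today, update at http://example.test/reset http://example.test/alt",
    "Lunch on Friday?",
]

X = [features(e) for e in emails]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(labels))   # cluster ids used as the basis for email profiles
```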