904 results for bigdata, data stream processing, dsp, apache storm, cyber security


Relevance:

100.00%

Publisher:

Abstract:

In order to decrease information security threats caused by human-related vulnerabilities, an increased focus on information security awareness and training is necessary. There are numerous delivery methods for information security awareness training. The purpose of this study was to determine which delivery method is most successful in providing security awareness training. We conducted security awareness training using various delivery methods, such as text-based, game-based and short video presentations, with the aim of determining which delivery methods users prefer. Our study suggests that combined delivery methods are more effective than any individual security awareness delivery method.

Relevance:

100.00%

Publisher:

Abstract:

Operating systems and programs are better protected these days, and attackers have shifted their attention to human elements to break into organisations' information systems. As the number and frequency of cyber-attacks designed to take advantage of unsuspecting personnel increase, the significance of the human factor in information security management cannot be overstated. In order to counter cyber-attacks designed to exploit human factors in the information security chain, information security awareness aimed at reducing the information security risks that arise from human-related vulnerabilities is paramount. This paper discusses and evaluates the effects of various information security awareness delivery methods used to improve end-users' information security awareness and behaviour. There is a wide range of information security awareness delivery methods, such as web-based training materials, contextual training and embedded training. In spite of efforts to increase information security awareness, research is scant regarding effective information security awareness delivery methods. To this end, this study focuses on determining which security awareness delivery method is most successful in providing information security awareness and which delivery method users prefer. We conducted information security awareness training using text-based, game-based and video-based delivery methods with the aim of determining user preferences. Our study suggests that combined delivery methods are more effective than any individual security awareness delivery method.

Relevance:

100.00%

Publisher:

Abstract:

Many aspects of our modern society now have either a direct or implicit dependence upon information technology. As such, a compromise of the availability or integrity of these systems (which may encompass such diverse domains as banking, government, health care, and law enforcement) could have dramatic consequences from a societal perspective. These key systems are often referred to as critical infrastructure. Critical infrastructure can consist of corporate information systems or systems that control key industrial processes; these specific systems are referred to as Industrial Control Systems (ICS). ICS have evolved since the 1960s from standalone systems to networked architectures that communicate across large distances, utilise wireless networks and can be controlled via the Internet. ICS form part of many countries' key critical infrastructure, including Australia's. They are used to remotely monitor and control the delivery of essential services and products, such as electricity, gas, water, waste treatment and transport systems. The need for security measures within these systems was not anticipated in the early development stages, as they were designed as closed systems rather than open systems accessible via the Internet. We are also seeing these ICS and their supporting systems being integrated into organisational corporate systems.

Relevance:

100.00%

Publisher:

Abstract:

This article is part of the report Cybersecurity: Are We Ready in Latin America and the Caribbean?

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the European Union has come to view cyber security, and in particular cyber crime, as one of the most relevant challenges to the completion of its Area of Freedom, Security and Justice. Given European societies' increased reliance on borderless and decentralized information technologies, this sector of activity has been identified as an easy target for actors such as organised criminals, hacktivists or terrorist networks. Such analysis has been accompanied by EU calls to step up the fight against unlawful online activities, namely through increased cooperation among law enforcement authorities (both national and extra-communitarian), the approximation of legislations, and public-private partnerships. Although EU initiatives in this field have, so far, been characterized by a lack of interconnection and of an integrated strategy, there has been, since the mid-2000s, an attempt to develop a more cohesive and coordinated policy. An important part of this policy is connected to the activities of Europol, which have come to assume a central role in the coordination of intelligence gathering and analysis of cyber crime. The European Cybercrime Center (EC3), which will become operational within Europol in January 2013, is regarded, in particular, as a focal point of the EU's fight against this phenomenon. Bearing this background in mind, the present article seeks to understand the role of Europol in the development of a European policy to counter the illegal use of the internet. The article proposes to reach this objective by analyzing, through the theoretical lenses of experimental governance, the evolution of this agency's activities in the area of cyber crime and cyber security, its positioning as an expert in the field, and the consequences for the way this policy is currently developing and is expected to develop in the near future.

Relevance:

100.00%

Publisher:

Abstract:

Abstract: There has been a great deal of interest in the area of cyber security in recent years. But what is cyber security exactly? And should society really care about it? We look at some of the challenges of being an academic working in the area of cyber security and explain why cyber security is, to put it rather simply, hard!

Speaker Biography: Prof. Keith Martin is Professor of Information Security at Royal Holloway, University of London. He received his BSc (Hons) in Mathematics from the University of Glasgow in 1988 and a PhD from Royal Holloway in 1991. Between 1992 and 1996 he held a Research Fellowship at the University of Adelaide, investigating mathematical modelling of cryptographic key distribution problems. In 1996 he joined the COSIC research group of the Katholieke Universiteit Leuven in Belgium, working on security for third-generation mobile communications. Keith rejoined Royal Holloway in January 2000, became a Professor in Information Security in 2007 and was Director of the Information Security Group between 2010 and 2015. Keith's research interests range across cyber security, with a focus on cryptographic applications. He is the author of 'Everyday Cryptography', published by Oxford University Press.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents security issues and vulnerabilities in home and small-office local area networks that can be used in cyber-attacks. Previous research has addressed single vulnerabilities and attack vectors, but few papers present full-scale attack examples against LANs. The thesis first categorises different security threats and later demonstrates, by example, methods for launching the attacks. Offensive security and penetration testing are used as research methods. As a result, an attack is conducted using vulnerabilities in WLAN, the ARP protocol and the browser, as well as social engineering methods, and in the end reverse shell access is gained to the target machine. Ready-made tools are used in the attack and their inner workings are described. Finally, prevention methods against these attacks are presented.
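
On the prevention side, and purely as an illustrative sketch not taken from the thesis (the Linux-specific /proc/net/arp path and the duplicate-MAC heuristic are assumptions), the snippet below flags the classic ARP-spoofing symptom of one MAC address answering for several IP addresses.

```java
// Illustrative sketch, not from the thesis: scan the Linux ARP cache and warn when a
// single MAC address is mapped to several IP addresses, a common ARP-spoofing symptom.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ArpSpoofCheck {
    public static void main(String[] args) throws IOException {
        // /proc/net/arp columns: IP address, HW type, Flags, HW address, Mask, Device
        List<String> lines = Files.readAllLines(Paths.get("/proc/net/arp"));
        Map<String, List<String>> ipsByMac = new HashMap<>();
        for (String line : lines.subList(1, lines.size())) {        // skip header row
            String[] cols = line.trim().split("\\s+");
            if (cols.length < 4 || cols[3].equals("00:00:00:00:00:00")) continue;
            ipsByMac.computeIfAbsent(cols[3], k -> new ArrayList<>()).add(cols[0]);
        }
        ipsByMac.forEach((mac, ips) -> {
            if (ips.size() > 1) {
                System.out.println("WARNING: " + mac + " claims multiple IPs " + ips
                        + " (possible ARP spoofing, or a legitimate multi-homed host).");
            }
        });
    }
}
```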

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a distributed multi-agent scheme for enhancing the cyber security of smart grids, which integrate computational resources, physical processes, and communication capabilities. Smart grid infrastructures are vulnerable to various cyber attacks and noise, whose influence can significantly affect reliable and secure operation. A distributed agent-based framework is developed to investigate the interactions between physical processes and cyber activities, where attacks are modelled as additive sensor fault signals and noise as randomly generated disturbance signals. An innovative physical-process-oriented countermeasure and an abnormal angle-state observer are designed for the detection and mitigation of integrity attacks. Furthermore, the model helps to identify whether observation errors are caused by attacks or by noise.
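
As an illustration of the residual-based idea only, and not the paper's observer design (the threshold, window length and persistent-bias heuristic are assumptions), the sketch below compares measured and estimated angle states and flags a residual that stays biased over a window as a likely integrity attack rather than zero-mean noise.

```java
// Illustrative residual-based detector (assumed fixed threshold and window, not the paper's
// observer). Zero-mean noise should average out over the window, whereas an additive
// integrity attack leaves a persistent bias in the residual.
public class ResidualAttackDetector {
    private final double threshold;      // residual bias treated as abnormal
    private final int window;            // number of samples in the sliding window
    private final double[] residuals;
    private int index = 0, count = 0;

    public ResidualAttackDetector(double threshold, int window) {
        this.threshold = threshold;
        this.window = window;
        this.residuals = new double[window];
    }

    /** Feed one sample: measured angle vs. the observer's estimate. Returns true if an attack is suspected. */
    public boolean update(double measuredAngle, double estimatedAngle) {
        residuals[index] = measuredAngle - estimatedAngle;
        index = (index + 1) % window;
        if (count < window) { count++; return false; }    // wait until the window is full
        double mean = 0.0;
        for (double r : residuals) mean += r;
        mean /= window;
        // Persistent bias above the threshold suggests an additive (integrity) attack;
        // zero-mean noise would leave the windowed mean close to zero.
        return Math.abs(mean) > threshold;
    }

    public static void main(String[] args) {
        ResidualAttackDetector detector = new ResidualAttackDetector(0.05, 20);
        for (int t = 0; t < 100; t++) {
            double truth = Math.sin(0.1 * t);
            double noise = 0.01 * Math.sin(7 * t);          // small zero-mean disturbance
            double attack = (t >= 60) ? 0.2 : 0.0;          // injected sensor bias after t = 60
            if (detector.update(truth + noise + attack, truth)) {
                System.out.println("t=" + t + ": suspected integrity attack");
            }
        }
    }
}
```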

Relevance:

100.00%

Publisher:

Abstract:

Big Data has forged new technologies that improve quality of life by combining heterogeneous data representations across various disciplines. A real-time system capable of processing data as it arrives is therefore needed. Such a system is called a speed layer; as the name suggests, it is designed to guarantee that new data is returned by the query functions as quickly as it arrives. This thesis focuses on building an architecture modelled on the Speed Layer of the Lambda Architecture, able to receive weather data published on an MQTT queue, process it in real time and store it in a database so that it is available to data scientists. The programming environment used is Java; the project was deployed on the Hortonworks platform, which is based on the Hadoop framework and on the Storm computation system, which makes it possible to work with unbounded data streams and process them in real time. Unlike traditional stream-processing approaches built on networks of queues and workers, Storm is fault-tolerant and scalable. The development effort devoted to it by the Apache Software Foundation, its growing use in production by major companies, and the support offered by cloud hosting providers are all signs that this technology will gain ever more ground as a solution for managing distributed, event-oriented computations. To store and analyse these volumes of data, which have always posed problems that traditional databases could not handle, a non-relational database was used: HBase.
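
Purely as an illustrative sketch of this kind of speed layer, and not the thesis implementation, the Java topology below wires a source spout to a bolt that writes each reading to HBase. The spout is a stand-in for the real MQTT consumer, and the table name "weather", the column family "m" and the field names are assumptions made for the example.

```java
// Minimal Storm "speed layer" sketch: synthetic weather readings flow from a spout into a
// bolt that persists each reading as an HBase row. Not the thesis code; names are assumed.
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class WeatherSpeedLayerTopology {

    /** Stand-in for the MQTT source: emits a synthetic reading once per second. */
    public static class WeatherSpoutStub extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            collector.emit(new Values("station-1", System.currentTimeMillis(), 21.5));
            Utils.sleep(1000);
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("station", "timestamp", "temperature"));
        }
    }

    /** Persists each reading as one HBase row (rowkey = station + timestamp). */
    public static class HBaseWriterBolt extends BaseRichBolt {
        private OutputCollector collector;
        private transient Table table;
        public void prepare(Map conf, TopologyContext ctx, OutputCollector collector) {
            this.collector = collector;
            try {
                Configuration hbaseConf = HBaseConfiguration.create();
                Connection connection = ConnectionFactory.createConnection(hbaseConf);
                table = connection.getTable(TableName.valueOf("weather")); // assumed table
            } catch (Exception e) {
                throw new RuntimeException("Cannot open HBase table", e);
            }
        }
        public void execute(Tuple tuple) {
            try {
                String rowKey = tuple.getStringByField("station") + "-"
                        + tuple.getLongByField("timestamp");
                Put put = new Put(Bytes.toBytes(rowKey));
                put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("temperature"),
                        Bytes.toBytes(tuple.getDoubleByField("temperature")));
                table.put(put);
                collector.ack(tuple);
            } catch (Exception e) {
                collector.fail(tuple);
            }
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) { /* terminal bolt */ }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("weather-source", new WeatherSpoutStub(), 1);
        builder.setBolt("hbase-writer", new HBaseWriterBolt(), 2)
               .shuffleGrouping("weather-source");
        new LocalCluster().submitTopology("weather-speed-layer", new Config(),
                builder.createTopology());
    }
}
```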

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is the analysis, study and comparison of technologies for real-time Big Data analysis: Apache Spark Streaming, Apache Storm and Apache Flink. To carry out an adequate comparison, it was decided to build a face detection and recognition system for video, so that the required processing could be parallelised by exploiting the capabilities of each architecture. After building realistic prototypes, one for each architecture, a testing phase followed to measure their performance. Using clusters set up specifically in local and cloud environments, the characteristics that best captured the differences between the architectures were measured, in order to demonstrate quantitatively the effectiveness of the algorithms used and the efficiency of the architectures themselves. The metrics chosen were the maximum sustainable input rate and the latency, measured as the number of nodes varied. In this way it was possible to observe the scalability of each architecture, analyse its behaviour and determine how far one could go while maintaining an acceptable trade-off between the number of nodes and the sustainable input rate. The experiments showed that as the number of workers increases, system performance improves, making the systems studied suitable for large-scale use. Substantial differences between the frameworks were also found; the pros and cons of each are reported, highlighting those best suited to the case study.
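
As a hedged sketch of the kind of measurement described above (my own illustration, not the thesis code; the reporting interval and metric names are assumptions), the helper below accumulates per-item latencies and counts so that sustained throughput and mean latency can be read off at a given parallelism level.

```java
// Illustrative throughput/latency probe (assumed design, not the thesis implementation).
// Each processed item reports its end-to-end latency; a daemon thread prints the sustained
// rate and mean latency once per reporting interval.
import java.util.concurrent.atomic.AtomicLong;

public class StreamBenchmarkProbe {
    private final AtomicLong processed = new AtomicLong();
    private final AtomicLong latencySumMillis = new AtomicLong();

    /** Called once per processed item with the timestamp attached at ingestion time. */
    public void recordProcessed(long ingestionTimeMillis) {
        processed.incrementAndGet();
        latencySumMillis.addAndGet(System.currentTimeMillis() - ingestionTimeMillis);
    }

    /** Prints sustained throughput and mean latency every intervalMillis. */
    public void startReporter(long intervalMillis) {
        Thread reporter = new Thread(() -> {
            long lastCount = 0, lastLatencySum = 0;
            while (true) {
                try { Thread.sleep(intervalMillis); } catch (InterruptedException e) { return; }
                long count = processed.get();
                long latencySum = latencySumMillis.get();
                long delta = count - lastCount;
                double throughput = delta * 1000.0 / intervalMillis;   // items per second
                double meanLatency = delta == 0 ? 0
                        : (latencySum - lastLatencySum) / (double) delta;
                System.out.printf("throughput=%.1f items/s, mean latency=%.1f ms%n",
                        throughput, meanLatency);
                lastCount = count;
                lastLatencySum = latencySum;
            }
        });
        reporter.setDaemon(true);
        reporter.start();
    }
}
```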

Relevance:

100.00%

Publisher:

Abstract:

Quantile computation has many applications, including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query (φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error precision εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge because the summary is continuously updated with new arrivals of data items. In this paper, first we aim to dramatically reduce the number of distinct query results by grouping a set of different queries into a cluster so that they can be processed virtually as a single query while the precision requirements from users can be retained. Second, we aim to minimize the total query processing costs. Efficient algorithms are developed to minimize the total number of times for reprocessing clusters and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed in an arbitrary fashion against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
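
As a simplified illustration of the clustering idea, and not the paper's algorithm: each query (φ, ε) over N items tolerates any answer whose rank falls in [⌈φN⌉ − εN, ⌈φN⌉ + εN], so queries whose rank intervals share a common point can be served by one representative rank. The sketch below applies the classic greedy right-endpoint scan to group such queries into the minimum number of clusters under that simplification.

```java
// Simplified sketch of grouping (phi, eps) quantile queries into clusters: each query is
// turned into a tolerated rank interval, and a greedy right-endpoint scan picks the minimum
// set of representative ranks that satisfy every query. Illustration only, not the paper's
// clustering algorithm.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class QuantileQueryClustering {

    record Query(double phi, double eps) {}

    /** Returns, for each cluster, the representative rank that answers all queries in it. */
    static List<Long> clusterRepresentatives(List<Query> queries, long n) {
        // Build the tolerated rank interval [lo, hi] for each query.
        List<long[]> intervals = new ArrayList<>();
        for (Query q : queries) {
            long target = (long) Math.ceil(q.phi() * n);
            long lo = Math.max(1, target - (long) Math.floor(q.eps() * n));
            long hi = Math.min(n, target + (long) Math.floor(q.eps() * n));
            intervals.add(new long[]{lo, hi});
        }
        // Greedy: sort by right endpoint; whenever an interval is not covered by the last
        // chosen rank, choose its right endpoint as a new representative rank.
        intervals.sort(Comparator.comparingLong(iv -> iv[1]));
        List<Long> representatives = new ArrayList<>();
        long lastRank = Long.MIN_VALUE;
        for (long[] iv : intervals) {
            if (iv[0] > lastRank) {
                lastRank = iv[1];
                representatives.add(lastRank);
            }
        }
        return representatives;
    }

    public static void main(String[] args) {
        List<Query> queries = List.of(
                new Query(0.50, 0.01), new Query(0.505, 0.01),   // overlapping: one cluster
                new Query(0.90, 0.005), new Query(0.95, 0.005)); // two further clusters
        System.out.println(clusterRepresentatives(queries, 1_000_000));
    }
}
```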

Relevance:

100.00%

Publisher:

Abstract:

Ensemble stream modeling and data-cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism which seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble which has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for an event such as a bush or natural forest fire, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing. There are two reasons for this: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure, such as the F-test, is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data-cleaning and the enhancements to quantify the quality of fitness of sensors (30% spatial, 10% temporal, and 90% mobility reduction) led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
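
For reference, a minimal sketch of the F-measure computation the abstract relies on; this is the standard definition with purely illustrative counts, not values from the study.

```java
// Standard F-measure (F1) from true/false positives and false negatives; the counts below
// are illustrative only, not results from the study.
public class FMeasureExample {
    static double fMeasure(int truePositives, int falsePositives, int falseNegatives) {
        double precision = truePositives / (double) (truePositives + falsePositives);
        double recall = truePositives / (double) (truePositives + falseNegatives);
        return 2 * precision * recall / (precision + recall);   // harmonic mean
    }

    public static void main(String[] args) {
        // e.g. 80 fire events correctly flagged, 20 false alarms, 10 missed events
        System.out.printf("F1 = %.3f%n", fMeasure(80, 20, 10));
    }
}
```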
