33 results for Parallel processing (Electronic computers) - Research

in Repository Napier


Relevance:

100.00%

Publisher:

Abstract:

The International Conference on Advanced Materials, Structures and Mechanical Engineering 2015 (ICAMSME 2015) was held on May 29-31, 2015 in Incheon, South Korea. The conference was attended by scientists, scholars, engineers and students from universities, research institutes and industries all around the world, presenting ongoing research activities. This proceedings volume assembles papers from various professionals engaged in the fields of materials, structures and mechanical engineering.

Relevance:

100.00%

Publisher:

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. An intruder re-crafts web form inputs and query strings used in web requests with the malicious intent of compromising the security of confidential data stored in the back-end database. The database is the most valuable data source, and intruders are therefore unrelenting in evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is consequently a need for an automated, scalable methodology for pre-processing SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable, feature-engineered dataset with numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known obstacle to applying artificial intelligence effectively against ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset as input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA). This is combined with a proxy and a SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, the model allows web requests processed at the proxy and deemed to contain an injected query string to be blocked from reaching the target back-end database.
This paper evaluates the performance metrics of a dataset obtained by the numerical feature-encoding ontology in Microsoft Azure Machine Learning (MAML) Studio using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
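As a loose illustration of the idea of encoding web-request features as numerical attributes, the sketch below counts occurrences of a few injection-related token classes in a query string. The token classes, patterns and vector layout are illustrative assumptions, not the paper's encoding ontology, and Python's `re` engine stands in for the NFA:

```python
import re

# Illustrative token classes only -- the paper's actual encoding ontology
# is not reproduced here.
PATTERNS = {
    "sql_keyword": re.compile(r"\b(select|union|insert|drop|or|and)\b", re.I),
    "comment": re.compile(r"(--|#|/\*)"),
    "quote": re.compile(r"['\"]"),
    "tautology": re.compile(r"\b(\w+)\s*=\s*\1\b"),  # e.g. 1=1
}

def encode_request(query_string: str) -> list:
    """Map a raw query string to a numeric feature vector for a classifier."""
    return [float(len(p.findall(query_string))) for p in PATTERNS.values()]

legit = encode_request("name=alice&page=2")
attack = encode_request("name=' OR 1=1 --")
```

A binary classifier such as the TCSVM mentioned above would then be trained on vectors of this kind, labelled as legitimate or malicious.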

Relevance:

100.00%

Publisher:

Abstract:

The Multimedia Internet KEYing protocol (MIKEY) aims at establishing secure credentials between two communicating entities. However, existing MIKEY modes fail to meet the requirements of low-power, low-processing devices. To address this issue, we combine two previously proposed approaches to introduce a new distributed and compressed MIKEY mode for the Internet of Things. Relying on a cooperative approach, a set of third parties is used to offload heavy computational operations from the constrained nodes: the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Furthermore, to mitigate the communication cost, we introduce a new header compression scheme that reduces the size of MIKEY's header from 12 bytes to 3 bytes in the best compression case. Preliminary results show that the proposed mode preserves energy while leaving MIKEY's security properties intact.
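The header-compression idea can be sketched as follows: fields that stay constant for a session are elided and restored from context, and the remaining variable fields are bit-packed into 3 bytes. The field names, widths and layout here are illustrative assumptions, not the scheme defined in the paper:

```python
def compress_header(version: int, data_type: int, next_payload: int) -> bytes:
    """Pack three small variable fields into 3 bytes, instead of a full
    12-byte uncompressed header; static fields are restored from the
    session context on the receiving side. (Illustrative layout.)"""
    assert version < 16 and data_type < 16 and next_payload < 256
    return bytes([(version << 4) | data_type, next_payload, 0x00])

def decompress_header(h: bytes) -> tuple:
    """Recover the variable fields from the compressed 3-byte header."""
    return (h[0] >> 4, h[0] & 0x0F, h[1])

header = compress_header(1, 0, 5)
```

The saving comes from the same observation used by 6LoWPAN-style compression: per-session constants need not be transmitted in every message.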

Relevance:

100.00%

Publisher:

Abstract:

Imagine being told that your wage was going to be cut in half. Well, that's what's soon going to happen to those who make money from Bitcoin mining, the process of earning the online currency Bitcoin. The current expected date for this change is 11 July 2016. Many see this as the day when Bitcoin prices will rocket and when Bitcoin owners could make a great deal of money. Others see it as the start of a Bitcoin crash. At present no one quite knows which way it will go. Bitcoin was created in 2009 by someone known as Satoshi Nakamoto, drawing on a range of earlier research. It is a cryptocurrency, meaning it uses digital encryption techniques to create bitcoins and secure financial transactions. It doesn't need a central government or organisation to regulate it, nor a broker to manage payments. Conventional currencies usually have a central bank that creates money and controls its supply. Bitcoin is instead created when individuals "mine" for it by using their computers to perform complex calculations through special software. The algorithm behind Bitcoin is designed to limit the number of bitcoins that can ever be created. All Bitcoin transactions are recorded on a public database known as a blockchain. Every time someone mines for Bitcoin, the result is recorded in a new block that is transmitted to every Bitcoin app across the network, like a bank updating its online records.
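The "complex calculations" of mining can be illustrated with a toy proof-of-work loop: search for a nonce whose hash of the block data meets a difficulty target. Real Bitcoin mining uses double SHA-256 against a far harder, network-adjusted target; this sketch only shows the mechanism:

```python
import hashlib

def mine(block_data: bytes, difficulty_zeros: int = 2) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest of
    (data + nonce) starts with the required number of zero hex digits."""
    prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine(b"example block", 2)
```

Because the hash is effectively random, the only way to find a valid nonce is brute-force trial, which is why mining consumes computing power and why the reward halving matters to miners' economics.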

Relevance:

100.00%

Publisher:

Abstract:

In order to solve the problem of the uncertain cycle of water injection in oilfields, this paper proposes a numerical method based on PCA-FNN to forecast the effective cycle of water injection. PCA is used to reduce the dimensionality of the original data, while an FNN is trained and tested on the transformed data. The PCA-FNN model is validated against real injection statistics from 116 wells of an oilfield; the results show that the average absolute error and relative error on the test set are 1.97 months and 10.75% respectively. Compared with an FNN without PCA pre-processing and with multiple linear regression, the PCA-FNN model greatly improves testing accuracy. The PCA-FNN method is therefore reliable for forecasting the effective cycle of water injection and can serve as a decision-making reference for engineers.
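The PCA-FNN pipeline can be sketched with scikit-learn on synthetic stand-in data. The real 116-well data set, the network architecture and the number of retained components are not described in enough detail to reproduce, so all of those are assumptions here:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the 116-well injection statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(116, 8))               # 8 raw injection attributes per well
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=116)  # effective cycle (months)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                    # dimensionality-reduction step
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
```

In practice the data would be split into training and test sets, mirroring the paper's train/test evaluation, and the errors reported as mean absolute and relative error.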

Relevance:

100.00%

Publisher:

Abstract:

Data leakage is a serious issue and can result in the loss of sensitive data, compromising user accounts and details, potentially affecting millions of internet users. This paper contributes to research in online security and reducing personal footprint by evaluating the levels of privacy provided by the Firefox browser. The aim of identifying conditions that would minimize data leakage and maximize data privacy is addressed by assessing and comparing data leakage in the four possible browsing modes: normal and private modes using a browser installed on the host PC or using a portable browser from a connected USB device respectively. To provide a firm foundation for analysis, a series of carefully designed, pre-planned browsing sessions were repeated in each of the various modes of Firefox. This included low RAM environments to determine any effects low RAM may have on browser data leakage. The results show that considerable data leakage may occur within Firefox. In normal mode, all of the browsing information is stored within the Mozilla profile folder in Firefox-specific SQLite databases and sessionstore.js. While passwords were not stored as plain text, other confidential information such as credit card numbers could be recovered from the Form history under certain conditions. There is no difference when using a portable browser in normal mode, except that the Mozilla profile folder is located on the USB device rather than the host's hard disk. By comparison, private browsing reduces data leakage. Our findings confirm that no information is written to the Firefox-related locations on the hard disk or USB device during private browsing, implying that no deletion would be necessary and no remnants of data would be forensically recoverable from unallocated space. However, two aspects of data leakage occurred equally in all four browsing modes. 
Firstly, all of the browsing history was stored in live RAM and was therefore accessible while the browser remained open. Secondly, in low-RAM situations, the operating system pages RAM out to pagefile.sys on the host's hard disk. Irrespective of the browsing mode used, this may include Firefox history elements, which can then remain forensically recoverable for a considerable time.
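As a self-contained illustration of why the normal-mode profile folder leaks data, the snippet below queries a miniature stand-in for Firefox's places.sqlite. The moz_places table and its url/title/visit_count columns follow Firefox's long-standing schema, but the database contents here are fabricated for the example:

```python
import sqlite3

# Build a tiny in-memory stand-in for places.sqlite so the example is
# self-contained; a forensic tool would open the real profile file instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE moz_places (url TEXT, title TEXT, visit_count INTEGER)")
con.execute("INSERT INTO moz_places VALUES ('https://example.org', 'Example', 3)")

# Recover browsing history, most-visited first.
rows = con.execute(
    "SELECT url, title, visit_count FROM moz_places ORDER BY visit_count DESC"
).fetchall()
```

Because these SQLite files persist on disk in normal mode, a straightforward query like this recovers the full browsing record; in private mode no such file is written.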

Relevance:

100.00%

Publisher:

Abstract:

The importance of e-government models lies in their offering a basis to measure and guide e-government. There is still no agreement on how to assess a government online. Most e-government models are not based on research, nor are they validated. In most countries, e-government has not reached the higher stages of growth. Several scholars have painted a confusing picture of e-government. What is lacking is an in-depth analysis of e-government models. Responding to the need for such an analysis, this study identifies the strengths and weaknesses of major national and local e-government evaluation models. The common limitations of most models are focusing on the government rather than the citizen, missing qualitative measures, constructing the e-equivalent of a bureaucratic administration, and defining general criteria without sufficient validation. In addition, this study has found that the metrics defined for national e-government are not suitable for municipalities, and most existing studies have focused on national e-governments even though local ones are closer to citizens. There is a need to develop a sound theoretical model for both national and local municipal e-government.

Relevance:

100.00%

Publisher:

Abstract:

A collaboration between dot.rural at the University of Aberdeen and the iSchool at Northumbria University, POWkist is a pilot study exploring potential uses of currently available linked datasets within the cultural heritage domain. Many privately-held family history collections (shoebox archives) remain vulnerable unless a sustainable, affordable and accessible model of citizen-archivist digital preservation can be offered. Citizen-historians have used the web as a platform to preserve cultural heritage; however, with no accessible or sustainable model, these digital footprints have been ad hoc and rarely connected to broader historical research. Similarly, current approaches to connecting material on the web by exploiting linked datasets do not take into account the data characteristics of the cultural heritage domain. Funded by Semantic Media, the POWkist project is investigating how best to capture, curate, connect and present the contents of citizen-historians' shoebox archives in an accessible and sustainable online collection. Using the Curios platform - an open-source digital archive - we have digitised a collection relating to a prisoner of war during WWII (1939-1945). Following a series of user-group workshops, POWkist is now connecting these 'made digital' items with the broader web using a semantic technology model and identifying appropriate linked datasets of relevant content, such as DBpedia (a linked dataset derived from Wikipedia) and Ordnance Survey Open Data. We are analysing the characteristics of cultural heritage linked datasets, so that these materials are better visualised, contextualised and presented in an attractive and comprehensive user interface. Our paper will consider the issues we have identified and the solutions we are developing, and will include a demonstration of our work-in-progress.

Relevance:

100.00%

Publisher:

Abstract:

Participation Space Studies explore eParticipation in the day-to-day activities of local, citizen-led groups, working to improve their communities. The focus is the relationship between activities and contexts. The concept of a participation space is introduced in order to reify online and offline contexts where people participate in democracy. Participation spaces include websites, blogs, email, social media presences, paper media, and physical spaces. They are understood as sociotechnical systems: assemblages of heterogeneous elements, with relevant histories and trajectories of development and use. This approach enables the parallel study of diverse spaces, on and offline. Participation spaces are investigated within three case studies, centred on interviews and participant observation. Each case concerns a community or activist group, in Scotland. The participation spaces are then modelled using a Socio-Technical Interaction Network (STIN) framework (Kling, McKim and King, 2003). The participation space concept effectively supports the parallel investigation of the diverse social and technical contexts of grassroots democracy and the relationship between the case-study groups and the technologies they use to support their work. Participants’ democratic participation is supported by online technologies, especially email, and they create online communities and networks around their goals. The studies illustrate the mutual shaping relationship between technology and democracy. Participants’ choice of technologies can be understood in spatial terms: boundaries, inhabitants, access, ownership, and cost. Participation spaces and infrastructures are used together and shared with other groups. Non-public online spaces, such as Facebook groups, are vital contexts for eParticipation; further, the majority of participants’ work is non-public, on and offline. It is informational, potentially invisible, work that supports public outputs. 
The groups involve people and influence events through emotional and symbolic impact, as well as rational argument. Images are powerful vehicles for this and digital images become an increasingly evident and important feature of participation spaces throughout the consecutively conducted case studies. Collaboration of diverse people via social media indicates that these spaces could be understood as boundary objects (Star and Griesemer, 1989). The Participation Space Studies draw from and contribute to eParticipation, social informatics, mediation, social shaping studies, and ethnographic studies of Internet use.

Relevance:

100.00%

Publisher:

Abstract:

Choosing a single similarity threshold for cutting dendrograms is not sufficient for hierarchical clustering analysis of heterogeneous data sets, and alternative automated or semi-automated methods that cut dendrograms at multiple levels make assumptions about the data at hand. To help the user find patterns in the data and resolve ambiguities in cluster assignments, we developed MLCut: a tool that provides visual support for exploring dendrograms of heterogeneous data sets at different levels of detail. The interactive exploration of the dendrogram is coordinated with a representation of the original data, shown as parallel coordinates. The tool supports three analysis steps. Firstly, a single-height similarity threshold can be applied using a dynamic slider to identify the main clusters. Secondly, a distinctiveness threshold can be applied using a second dynamic slider to identify "weak edges" that indicate heterogeneity within clusters. Thirdly, the user can drill down to further explore the dendrogram structure - always in relation to the original data - and cut the branches of the tree at multiple levels. Interactive drill-down is supported using mouse events such as hovering, pointing and clicking on elements of the dendrogram. Two prototypes of the tool have been developed in collaboration with a group of biologists for analysing their own data sets. We found that enabling users to cut the tree at multiple levels, while viewing the effect on the original data, is a promising clustering method which could lead to scientific discoveries.
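The first analysis step, cutting the dendrogram at a single height, can be sketched with SciPy. The data here are synthetic, and MLCut's interactive multi-level cutting and parallel-coordinates view are not reproduced:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated synthetic groups of 2-D points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])

Z = linkage(data, method="average")                 # build the dendrogram
labels = fcluster(Z, t=1.0, criterion="distance")   # cut at a single height
```

MLCut's contribution is precisely that this one global threshold is often wrong for heterogeneous data, so the tool lets the user additionally cut individual branches at different heights.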

Relevance:

100.00%

Publisher:

Abstract:

Individuals living in highly networked societies publish a large amount of personal, and potentially sensitive, information online. Web investigators can exploit such information for a variety of purposes, such as background vetting and fraud detection. However, such investigations require many hours of expensive human effort. This paper describes InfoScout, a search tool intended to reduce the time it takes to identify and gather subject-centric information on the Web. InfoScout collects relevance-feedback information from the investigator in order to re-rank search results, allowing the intended information to be discovered more quickly. Users may still direct their search as they see fit, issuing ad-hoc queries and filtering existing results by keywords. Design choices are informed by prior work and industry collaboration.
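The re-ranking idea can be sketched in a Rocchio-like form: results sharing more terms with the documents the investigator marked relevant move up. InfoScout's actual scoring model is not given in this abstract, so the term-overlap score below is purely illustrative:

```python
def tokens(text: str) -> set:
    """Crude tokeniser: lowercase whitespace-split terms."""
    return set(text.lower().split())

def rerank(results: list, marked_relevant: list) -> list:
    """Order results by overlap with a profile built from relevance feedback."""
    profile = set()
    for doc in marked_relevant:
        profile |= tokens(doc)
    return sorted(results, key=lambda r: -len(tokens(r) & profile))

results = [
    "john smith twitter profile",
    "smith hardware store",
    "john smith london address",
]
reranked = rerank(results, marked_relevant=["john smith london"])
```

Each time the investigator marks another result relevant, the profile grows and the remaining results are re-ordered, which is how feedback steers the search toward the intended subject.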

Relevance:

100.00%

Publisher:

Abstract:

This paper analyses the inner relations between classical (sub-scheme) probability and statistical probability, subjective probability and objective probability, prior probability and posterior probability, and transition probability and the probability of utility. It further analyses, from a mathematical perspective, the goal, the method and the practical economic purpose represented by each of these probabilities, so as to understand their connotations and their relation to economic decision-making, thereby paving the way for scientific prediction and decision-making.
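The prior/posterior relation discussed above can be made concrete with Bayes' rule, P(H|E) = P(E|H)·P(H)/P(E); the numbers in this worked example are illustrative:

```python
def posterior(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# A decision-maker holds a 30% prior belief that demand will rise, then
# observes a positive market signal with P(signal|rise) = 0.8 and
# overall P(signal) = 0.5. The posterior belief becomes 0.48.
p = posterior(prior=0.3, likelihood=0.8, evidence=0.5)
```

This update from prior to posterior is the formal bridge between probability theory and the economic decision-making the paper discusses.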