47 results for Multiprogramming (Electronic computers)
Abstract:
A model-driven approach for service evolution in clouds focuses mainly on the advantage of reusable evolution patterns for solving evolution problems. During the process, evolution patterns are mapped, via MDA models, to pattern aspects. Weaving these aspects into the service-based process at runtime, using an aspect-oriented extension of a BPEL engine, provides the dynamic character of the evolution.
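The abstract gives no implementation detail; as a rough conceptual analogue only (Python decorators standing in for the aspect-oriented BPEL extension, which is not shown in the source), runtime weaving can be pictured as wrapping a service invocation with cross-cutting advice:

```python
# Conceptual sketch of runtime aspect weaving (illustrative only; the paper
# targets an aspect-oriented extension of a BPEL engine, not Python).
from functools import wraps

def weave(before=None, after=None):
    """Return a decorator that weaves before/after advice around a service call."""
    def decorator(service_call):
        @wraps(service_call)
        def woven(*args, **kwargs):
            if before:
                before(*args, **kwargs)      # advice applied before the invocation
            result = service_call(*args, **kwargs)
            if after:
                after(result)                # advice applied after the invocation
            return result
        return woven
    return decorator

def log_request(*args, **kwargs):
    print("evolution aspect: request", args, kwargs)

def log_response(result):
    print("evolution aspect: response", result)

@weave(before=log_request, after=log_response)
def invoke_partner_service(order_id):
    # Stand-in for a BPEL <invoke> activity on a partner service.
    return {"order": order_id, "status": "accepted"}

invoke_partner_service(42)
```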
Abstract:
To address the problem of the uncertain cycle of water injection in oilfields, this paper proposes a numerical method based on PCA-FNN to forecast the effective cycle of water injection. PCA is used to reduce the dimensionality of the original data, while an FNN is applied to train and test on the reduced data. The correctness of the PCA-FNN model is verified against real injection statistics from 116 wells of an oilfield; the results show that the average absolute error and relative error of the test are 1.97 months and 10.75% respectively. The testing accuracy is greatly improved by the PCA-FNN model compared with an FNN that has not been preceded by PCA and with multiple linear regression. Therefore, the PCA-FNN method is reliable for forecasting the effective cycle of water injection and can be used as a decision-making reference by engineers.
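A minimal sketch of the described pipeline, assuming scikit-learn as the toolkit and synthetic data in place of the 116-well dataset (which is not available here), chains PCA dimensionality reduction to a feed-forward neural network regressor:

```python
# Sketch of a PCA + feed-forward neural network (FNN) regression pipeline.
# Synthetic data stands in for the oilfield injection statistics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 12))                 # 116 wells, 12 raw injection attributes
y = 18 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=1.5, size=116)   # effective cycle (months)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    PCA(n_components=5),                        # reduce dimensionality of the raw attributes
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("mean absolute error (months):", mean_absolute_error(y_test, model.predict(X_test)))
```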
Abstract:
Understanding the evolution of sociality in humans and other species requires understanding how selection on social behaviour varies with group size. However, the effects of group size are frequently obscured in the theoretical literature, which often makes assumptions that are at odds with empirical findings. In particular, mechanisms are suggested as supporting large-scale cooperation when they would in fact rapidly become ineffective with increasing group size. Here we review the literature on the evolution of helping behaviours (cooperation and altruism), and frame it using a simple synthetic model that allows us to delineate how the three main components of the selection pressure on helping must vary with increasing group size. The first component is the marginal benefit of helping to group members, which determines both direct fitness benefits to the actor and indirect fitness benefits to recipients. While this is often assumed to be independent of group size, marginal benefits are in practice likely to be maximal at intermediate group sizes for many types of collective action problems, and will eventually become very small in large groups due to the law of diminishing returns. The second component is the response of social partners to the past play of an actor, which underlies conditional behaviour under repeated social interactions. We argue that under realistic conditions on the transmission of information in a population, this response to past play decreases rapidly with increasing group size, so that reciprocity alone (whether direct, indirect, or generalised) cannot sustain cooperation in very large groups. The final component is the relatedness between actor and recipient, which, according to the rules of inheritance, again decreases rapidly with increasing group size. These results explain why helping behaviours in very large social groups are limited to cases where the number of reproducing individuals is small, as in social insects, or where there are social institutions that can promote (possibly through sanctioning) large-scale cooperation, as in human societies. Finally, we discuss how individually devised institutions can foster the transition from small-scale to large-scale cooperative groups in human evolution.
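For context (the classical textbook condition, not the authors' own synthetic model), Hamilton's rule already shows how two of these components enter the selection pressure on helping: with b the marginal benefit to recipients, c the cost to the actor, and r the relatedness between them, helping is favoured only when the relatedness-weighted benefit exceeds the cost, so a benefit or a relatedness that shrinks with group size both erode selection for helping.

```latex
% Classical Hamilton's rule (standard textbook form, shown for orientation only):
% helping is selected for when the relatedness-weighted benefit exceeds the cost.
\[ r\,b \;-\; c \;>\; 0 \]
```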
Abstract:
Nowadays there is almost no crime committed without a trace of digital evidence, and since the advanced functionality of mobile devices today can be exploited to assist in crime, the need for mobile forensics is imperative. Many of the mobile applications available today, including internet browsers, will request the user’s permission to access their current location when in use. This geolocation data is subsequently stored and managed by that application's underlying database files. If recovered from a device during a forensic investigation, such GPS evidence and track points could hold major evidentiary value for a case. The aim of this paper is to examine and compare to what extent geolocation data is available from the iOS and Android operating systems. We focus particularly on geolocation data recovered from internet browsing applications, comparing the native Safari and Browser apps with Google Chrome, downloaded on to both platforms. All browsers were used over a period of several days at various locations to generate comparable test data for analysis. Results show considerable differences not only in the storage locations and formats, but also in the amount of geolocation data stored by different browsers and on different operating systems.
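As an illustration of the kind of examination described (with a hypothetical database path and column names, since the real application schemas differ between browsers and operating system versions), location-bearing tables in a recovered SQLite file can be enumerated and dumped:

```python
# Sketch: scan a recovered browser SQLite database for geolocation-like columns.
# The file path and column names are hypothetical; real schemas vary by app and OS.
import sqlite3

DB_PATH = "recovered/browser_app.db"   # hypothetical extraction from a device image

conn = sqlite3.connect(DB_PATH)
cur = conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table,) in cur.fetchall():
    cols = [row[1] for row in cur.execute(f"PRAGMA table_info({table})")]
    geo_cols = [c for c in cols
                if c.lower() in ("latitude", "longitude", "lat", "lng", "timestamp")]
    if {"latitude", "longitude"} <= {c.lower() for c in geo_cols}:
        print(f"-- {table} --")
        for row in cur.execute(f"SELECT {', '.join(geo_cols)} FROM {table} LIMIT 20"):
            print(row)   # candidate GPS track points for further analysis
conn.close()
```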
Abstract:
Data leakage is a serious issue and can result in the loss of sensitive data, compromising user accounts and details, potentially affecting millions of internet users. This paper contributes to research in online security and reducing personal footprint by evaluating the levels of privacy provided by the Firefox browser. The aim of identifying conditions that would minimize data leakage and maximize data privacy is addressed by assessing and comparing data leakage in the four possible browsing modes: normal and private mode, each using either a browser installed on the host PC or a portable browser run from a connected USB device. To provide a firm foundation for analysis, a series of carefully designed, pre-planned browsing sessions were repeated in each of the various modes of Firefox. This included low RAM environments to determine any effects low RAM may have on browser data leakage. The results show that considerable data leakage may occur within Firefox. In normal mode, all of the browsing information is stored within the Mozilla profile folder in Firefox-specific SQLite databases and sessionstore.js. While passwords were not stored as plain text, other confidential information such as credit card numbers could be recovered from the Form history under certain conditions. There is no difference when using a portable browser in normal mode, except that the Mozilla profile folder is located on the USB device rather than the host's hard disk. By comparison, private browsing reduces data leakage. Our findings confirm that no information is written to the Firefox-related locations on the hard disk or USB device during private browsing, implying that no deletion would be necessary and no remnants of data would be forensically recoverable from unallocated space. However, two aspects of data leakage occurred equally in all four browsing modes. Firstly, all of the browsing history was stored in the live RAM and was therefore accessible while the browser remained open. Secondly, in low RAM situations, the operating system caches out RAM to pagefile.sys on the host's hard disk. Irrespective of the browsing mode used, this may include Firefox history elements which can then remain forensically recoverable for considerable time.
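For example, in normal mode the browsing history referred to above lives in the places.sqlite database inside the Mozilla profile folder; a minimal sketch of reading a copy of it follows (the profile path is hypothetical, and moz_places stores its timestamp as microseconds since the Unix epoch):

```python
# Sketch: read browsing history from a copied Firefox places.sqlite.
# The profile path is hypothetical; work on a copy, never the live profile.
import sqlite3
from datetime import datetime, timezone

PLACES_DB = "evidence/profile_copy/places.sqlite"   # hypothetical copied profile

conn = sqlite3.connect(PLACES_DB)
query = """
    SELECT url, title, visit_count, last_visit_date
    FROM moz_places
    WHERE last_visit_date IS NOT NULL
    ORDER BY last_visit_date DESC
    LIMIT 25
"""
for url, title, visits, last_visit_us in conn.execute(query):
    # last_visit_date is stored as microseconds since the Unix epoch
    when = datetime.fromtimestamp(last_visit_us / 1_000_000, tz=timezone.utc)
    print(when.isoformat(), visits, title, url)
conn.close()
```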
Abstract:
SQL injection is a common attack method used to leverage information out of a database or to compromise a company's network. This paper investigates four injection attacks that can be conducted against the PL/SQL engine of Oracle databases, comparing two recent releases (10g, 11g) of Oracle. The results of the experiments showed that both releases of Oracle were vulnerable to injection but that the injection technique often differed in the packages that it could be conducted in.
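The underlying flaw in all such injection techniques is dynamic SQL built by string concatenation; a minimal, database-agnostic sketch (using SQLite here purely for illustration, not Oracle PL/SQL) contrasts the vulnerable pattern with bind variables:

```python
# Sketch: string-concatenated dynamic SQL vs. bind variables (SQLite stand-in,
# not Oracle PL/SQL; the vulnerable pattern is the same in both).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

user_input = "x' OR '1'='1"          # classic injection payload

# Vulnerable: the payload is spliced into the statement and alters its logic.
vulnerable = f"SELECT name, role FROM users WHERE name = '{user_input}'"
print("vulnerable:", conn.execute(vulnerable).fetchall())   # returns every row

# Safer: bind variables keep the payload as data, not SQL.
safe = "SELECT name, role FROM users WHERE name = ?"
print("bound:     ", conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```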
Abstract:
This paper discusses the large-scale group project undertaken by BSc Hons Digital Forensics students at Abertay University in their penultimate year. The philosophy of the project is to expose students to the full digital crime "life cycle", from commission through investigation, preparation of formal court report and finally, to prosecution in court. In addition, the project is novel in two aspects; the "crimes" are committed by students, and the moot court proceedings, where students appear as expert witnesses for the prosecution, are led by law students acting as counsels for the prosecution and defence. To support students, assessments are staged across both semesters with staff feedback provided at critical points. Feedback from students is very positive, highlighting particularly the experience of engaging with the law students and culminating in the realistic moot court, including a challenging cross-examination. Students also commented on the usefulness of the final debrief, where the whole process and the student experience are discussed in an informal plenary meeting between DF students and staff, providing an opportunity for the perpetrators and investigators to discuss details of the "crimes", and enabling all groups to learn from all crimes and investigations. We conclude with a reflection on the challenges encountered and a discussion of planned changes.
Abstract:
The importance of e-government models lies in their offering a basis to measure and guide e-government. There is still no agreement on how to assess a government online. Most e-government models are not based on research, nor are they validated. In most countries, e-government has not reached the higher stages of growth. Several scholars have presented a confusing picture of e-government. What is lacking is an in-depth analysis of e-government models. Responding to the need for such an analysis, this study identifies the strengths and weaknesses of major national and local e-government evaluation models. The common limitations of most models are focusing on the government rather than the citizen, missing qualitative measures, constructing the e-equivalent of a bureaucratic administration, and defining general criteria without sufficient validation. In addition, this study has found that the metrics defined for national e-government are not suitable for municipalities, and most of the existing studies have focused on national e-governments even though local ones are closer to citizens. There is a need for developing a good theoretical model for both national and local municipal e-government.
Abstract:
There is still a lack of an engineering approach for building Web systems, and the field of measuring the Web is not yet mature. In particular, there is uncertainty in the selection of evaluation methods, and there are risks of standardizing inadequate evaluation practices. It is important to know whether we are evaluating the Web or specific website(s). We need a new categorization system, a different focus on evaluation methods, and an in-depth analysis that reveals the strengths and weaknesses of each method. As a contribution to the field of Web evaluation, this study proposes a novel approach to viewing and selecting evaluation methods based on the purpose and platforms of the evaluation. It has been shown that the choice of the appropriate evaluation method(s) depends greatly on the purpose of the evaluation.
Abstract:
A collaboration between dot.rural at the University of Aberdeen and the iSchool at Northumbria University, POWkist is a pilot-study exploring potential usages of currently available linked datasets within the cultural heritage domain. Many privately-held family history collections (shoebox archives) remain vulnerable unless a sustainable, affordable and accessible model of citizen-archivist digital preservation can be offered. Citizen-historians have used the web as a platform to preserve cultural heritage, however with no accessible or sustainable model these digital footprints have been ad hoc and rarely connected to broader historical research. Similarly, current approaches to connecting material on the web by exploiting linked datasets do not take into account the data characteristics of the cultural heritage domain. Funded by Semantic Media, the POWkist project is investigating how best to capture, curate, connect and present the contents of citizen-historians’ shoebox archives in an accessible and sustainable online collection. Using the Curios platform - an open-source digital archive - we have digitised a collection relating to a prisoner of war during WWII (1939-1945). Following a series of user group workshops, POWkist is now connecting these ‘made digital’ items with the broader web using a semantic technology model and identifying appropriate linked datasets of relevant content such as DBpedia (a linked dataset derived from Wikipedia) and Ordnance Survey Open Data. We are analysing the characteristics of cultural heritage linked datasets, so that these materials are better visualised, contextualised and presented in an attractive and comprehensive user interface. Our paper will consider the issues we have identified, the solutions we are developing and include a demonstration of our work-in-progress.
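As an illustration of connecting a digitised item to a linked dataset (a hedged sketch using the SPARQLWrapper library against DBpedia's public endpoint; the chosen resource and property are examples only, not the POWkist data model):

```python
# Sketch: look up a DBpedia resource for a place mentioned in a shoebox-archive item.
# The resource and property are illustrative; requires the SPARQLWrapper package and
# network access to https://dbpedia.org/sparql.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <http://dbpedia.org/resource/Stalag_Luft_III> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"][:200], "...")
```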
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations’ databases. Intruders are becoming smarter at obfuscating web requests to evade detection; combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, this has made it evident that existing, mostly static signature-based approaches lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes as input a labelled dataset of numerical attributes for classifying true positives and negatives. We present in this paper Numerical Encoding to Tame SQLIA (NETSQLIA), which implements a proof of concept for scalable numerical encoding of features into dataset attributes with labelled classes, obtained from deep web traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes from requests of any size, primed as an input dataset to an Artificial Neural Network (ANN) and statistical Machine Learning (ML) algorithms, implemented using Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the suitability of this model for the accurate classification of both legitimate web requests and SQLIA payloads.
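A minimal sketch of the classification stage follows, assuming scikit-learn's Perceptron and LogisticRegression as stand-ins for the Two-Class Averaged Perceptron and Two-Class Logistic Regression modules, and synthetic numerically encoded request features in place of the paper's traffic-derived dataset:

```python
# Sketch: classify numerically encoded web requests as legitimate vs. SQLIA.
# Synthetic features stand in for the encoded deep-traffic attributes; scikit-learn
# models stand in for the TCAP/TCLR modules named in the paper.
import numpy as np
from sklearn.linear_model import Perceptron, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
# Hypothetical numeric attributes: request length, quote count, keyword hits, entropy.
X_legit = rng.normal(loc=[40, 0.5, 0.2, 3.0], scale=1.0, size=(n // 2, 4))
X_sqlia = rng.normal(loc=[80, 4.0, 2.5, 4.5], scale=1.0, size=(n // 2, 4))
X = np.vstack([X_legit, X_sqlia])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # 0 = legitimate, 1 = SQLIA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("perceptron", Perceptron(max_iter=1000)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```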
Abstract:
Rigid adherence to pre-specified thresholds and static graphical representations can lead to incorrect decisions on merging of clusters. As an alternative to existing automated or semi-automated methods, we developed a visual analytics approach for performing hierarchical clustering analysis of short time-series gene expression data. Dynamic sliders control parameters such as the similarity threshold at which clusters are merged and the level of relative intra-cluster distinctiveness, which can be used to identify "weak edges" within clusters. An expert user can drill down to further explore the dendrogram and detect nested clusters and outliers. This is done by using the sliders and by pointing and clicking on the representation to cut the branches of the tree at multiple heights. A prototype of this tool has been developed in collaboration with a small group of biologists for analysing their own datasets. Initial feedback on the tool has been positive.
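A sketch of the operation the similarity-threshold slider controls, using SciPy rather than the authors' prototype and synthetic expression profiles in place of real data: hierarchical clustering in which the dendrogram is re-cut at different heights to yield different cluster memberships.

```python
# Sketch: hierarchical clustering of short time-series expression profiles, with
# cluster membership recomputed as the similarity (distance) threshold changes.
# SciPy stands in for the authors' visual-analytics prototype; data is synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
profiles = np.vstack([
    rng.normal(loc=pattern, scale=0.3, size=(20, 6))   # 20 genes per pattern, 6 time points
    for pattern in ([0, 1, 2, 3, 2, 1], [3, 2, 1, 0, 1, 2], [1, 1, 1, 1, 1, 1])
])

Z = linkage(pdist(profiles, metric="correlation"), method="average")

# Re-cutting the tree at different heights is what the similarity-threshold slider does.
for threshold in (0.2, 0.5, 1.0):
    labels = fcluster(Z, t=threshold, criterion="distance")
    print(f"threshold {threshold}: {labels.max()} clusters")
```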