962 results for Multiprogramming (Electronic computers)


Relevance: 80.00%

Abstract:

The exchange of information between the police and community partners forms a central aspect of effective community service provision. In the context of policing, a robust and timely communications mechanism is required between police agencies and community partner domains, including: Primary healthcare (such as a Family Physician or a General Practitioner); Secondary healthcare (such as hospitals); Social Services; Education; and Fire and Rescue services. Investigations into high-profile cases such as the murder of Victoria Climbié in 2000, the murders of Holly Wells and Jessica Chapman in 2002, and, more recently, the death of baby Peter Connelly through child abuse in 2007, highlight the requirement for a robust information-sharing framework. This paper presents a novel syntax that supports information-sharing requests within strict data-sharing policy definitions. Such requests may form the basis for any information-sharing agreement that can exist between the police and their community partners. It defines a role-based architecture across partner domains, with a syntax for effective and efficient information sharing, using SPoC (Single Point-of-Contact) agents to control information exchange. The application of policy definitions using rules within these SPoCs is inspired by network firewall rules and thus defines information exchange permissions. These rules can be implemented by software filtering agents that act as information gateways between partner domains. Roles are exposed from each domain to grant the rights to exchange information as defined within the policy definition. This work involves collaboration with the Scottish Police, as part of the Scottish Institute for Policing Research (SIPR), and aims to improve the safety of individuals by reducing risks to the community through enhanced information-sharing mechanisms.
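The firewall-inspired filtering described above can be sketched in a few lines. This is a hypothetical illustration only: the rule fields, role names, and domain names below are invented for the example and are not the paper's actual SPoC syntax. It shows first-match rule evaluation with a default-deny fallback, mirroring network firewall practice.

```python
# Hypothetical sketch of a SPoC-style filtering agent that checks an
# information-sharing request against firewall-inspired policy rules.
# All field and role names are illustrative, not the paper's syntax.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    sender_domain: str    # e.g. "police"
    receiver_domain: str  # e.g. "primary_healthcare"
    sender_role: str      # role exposed by the sending domain ("*" = any)
    info_type: str        # category of information requested ("*" = any)
    action: str           # "permit" or "deny"

def evaluate(rules, sender_domain, receiver_domain, sender_role, info_type):
    """First-match semantics, as in a network firewall rule set."""
    for rule in rules:
        if (rule.sender_domain == sender_domain
                and rule.receiver_domain == receiver_domain
                and rule.sender_role in (sender_role, "*")
                and rule.info_type in (info_type, "*")):
            return rule.action
    return "deny"  # default-deny, mirroring firewall practice

policy = [
    Rule("police", "primary_healthcare", "child_protection_officer",
         "welfare_concern", "permit"),
    Rule("police", "primary_healthcare", "*", "*", "deny"),
]

print(evaluate(policy, "police", "primary_healthcare",
               "child_protection_officer", "welfare_concern"))  # permit
print(evaluate(policy, "police", "primary_healthcare",
               "traffic_officer", "welfare_concern"))           # deny
```

Exposing only named roles (rather than individuals) is what lets the policy definition, not the agent, decide who may exchange what.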

Relevance: 80.00%

Abstract:

This paper discusses the use of relation algebra operations on formal contexts. These operations are a generalisation of some of the context operations that are described in the standard FCA textbook (Ganter & Wille, 1999). This paper extends previous research in this area with respect to applications and implementations. It also describes a software tool (FcaFlint) which in combination with FcaStone facilitates the application of relation algebra operations to contexts stored in many formats.

Relevance: 80.00%

Abstract:

Clonal selection has been a dominant theme in many immune-inspired algorithms applied to machine learning and optimisation. We examine existing clonal selection algorithms for learning from a theoretical and empirical perspective and assert that the widely accepted computational interpretation of clonal selection is compromised both algorithmically and biologically. We suggest a more capable abstraction of the clonal selection principle grounded in probabilistic estimation and approximation and demonstrate how it addresses some of the shortcomings in existing algorithms. We further show that by recasting black-box optimisation as a learning problem, the same abstraction may be re-employed; thereby taking steps toward unifying the clonal selection principle and distinguishing it from natural selection.
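For orientation, a CLONALG-style clonal selection loop is sketched below. Note this is the classic interpretation the paper critiques, not its proposed probabilistic abstraction; the parameters and mutation scheme are illustrative choices.

```python
# CLONALG-style clonal selection for maximising a toy fitness function.
# This is the widely accepted baseline interpretation, sketched for
# illustration; parameter values are arbitrary.
import random

def clonal_selection(fitness, dim=2, pop=10, gens=50, max_clones=5, seed=0):
    rng = random.Random(seed)
    antibodies = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(antibodies, key=fitness, reverse=True)
        next_gen = list(ranked)  # keep parents so the best never regresses
        for rank, ab in enumerate(ranked):
            n_clones = max(1, max_clones - rank)  # more clones for higher affinity
            rate = 0.1 * (rank + 1)               # heavier mutation for lower affinity
            for _ in range(n_clones):
                next_gen.append([x + rng.gauss(0, rate) for x in ab])
        next_gen.sort(key=fitness, reverse=True)
        antibodies = next_gen[:pop]
    return max(antibodies, key=fitness)

# Maximise -(x^2 + y^2): the optimum is at the origin.
best = clonal_selection(lambda v: -sum(x * x for x in v))
print(best)  # converges close to [0.0, 0.0]
```

The affinity-proportional cloning and inverse mutation rates are exactly the features whose biological and algorithmic grounding the abstract calls into question.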

Relevance: 80.00%

Abstract:

Cloud computing is the technology prescription that will help the UK's National Health Service (NHS) beat the budget constraints imposed as a consequence of the credit crunch. This internet-based shared data and services resource will revolutionise the management of medical records and patient information while saving the NHS millions of pounds.

Relevance: 80.00%

Abstract:

The proposed research will focus on developing a novel approach to solving software service evolution problems in computing clouds. The approach will support dynamic evolution of software services in clouds via a set of discovered evolution patterns. An initial survey indicated that no such approach yet exists and that one is urgently needed. Evolution requirements can be classified into evolution features; researchers can describe the whole requirement using an evolution feature typology, which defines the relations and dependencies between features. After the evolution feature typology has been constructed, an evolution model will be created to make the evolution more specific. An aspect-oriented approach can be used to enhance evolution feature-model modularity, and an aspect template code generation technique will then be used for model transformation. Product Line Engineering contains all the essential components for driving the whole evolution process.

Relevance: 80.00%

Abstract:

This paper outlines a novel information-sharing method using Binary Decision Diagrams (BDDs). It is inspired by the work of Al-Shaer and Hamed, who applied BDDs to the modelling of network firewalls. Here the technique is applied to an information-sharing policy system, which optimises the detection of redundancy, shadowing, generalisation and correlation within information-sharing rules.
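The redundancy check at the heart of this approach reduces to a containment test between the sets of condition assignments each rule matches. The sketch below makes those sets explicit in plain Python for a tiny example; a real system would encode them as BDDs, which answer the same containment query compactly instead of by enumeration. The condition variables are invented for illustration.

```python
# Set semantics behind BDD-based rule analysis.  A real implementation
# would encode each rule as a BDD over boolean condition variables;
# here we enumerate satisfying assignments explicitly, which is
# equivalent (but exponentially larger) for this tiny example.
from itertools import product

VARS = ("is_police", "is_health", "is_urgent")  # illustrative conditions

def satisfying_set(rule):
    """All variable assignments matched by a rule (a partial mapping)."""
    matched = set()
    for values in product([False, True], repeat=len(VARS)):
        assignment = dict(zip(VARS, values))
        if all(assignment[v] == want for v, want in rule.items()):
            matched.add(values)
    return matched

# Rule A covers urgent police requests; rule B covers all police requests.
rule_a = {"is_police": True, "is_urgent": True}
rule_b = {"is_police": True}

# A is redundant with respect to B iff A's satisfying set is contained
# in B's -- exactly the containment test a BDD answers efficiently.
print(satisfying_set(rule_a) <= satisfying_set(rule_b))  # True
```

Shadowing, generalisation and correlation are analogous set relations (containment with differing actions, reverse containment, and partial overlap respectively).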

Relevance: 80.00%

Abstract:

This paper defines a structured methodology based on the foundational work of Al-Shaer et al. in [1] and that of Hamed and Al-Shaer in [2], covering the declaration of policy field elements through to the syntax, ontology and functional verification stages. In [1] and [2] the authors concentrated on developing formal definitions of possible anomalies between rules in a network firewall rule set. Their work is considered the foundation for further work on anomaly detection, including that of Fitzgerald et al. [3], Chen et al. [4], and Hu et al. [5], among others. This paper extends this work by applying the methods to information-sharing policies, and outlines the related evaluation.
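A minimal sketch of the pairwise anomaly checks formalised by Al-Shaer and Hamed is given below. It is a deliberate simplification: field names are invented, rules match sets of discrete values rather than address ranges, and the redundancy case omits the intervening-rule condition of the full formal definitions.

```python
# Simplified Al-Shaer-style pairwise anomaly detection over an ordered,
# first-match-wins rule list.  Field names and values are illustrative.

def field_subset(a, b):
    """True if every field value set in rule a is contained in rule b's."""
    return all(a[f] <= b[f] for f in a)

def find_anomalies(rules):
    """rules: ordered list of (match_dict, action) pairs."""
    anomalies = []
    for j, (match_j, action_j) in enumerate(rules):
        for i in range(j):
            match_i, action_i = rules[i]
            if field_subset(match_j, match_i):
                # A later, narrower rule can never fire: shadowed if the
                # actions differ, (simplified) redundant if they agree.
                kind = "shadowing" if action_i != action_j else "redundancy"
                anomalies.append((i, j, kind))
    return anomalies

rules = [
    ({"src": {"police"}, "dst": {"health", "social"}}, "deny"),
    ({"src": {"police"}, "dst": {"health"}}, "permit"),  # never matched
]
print(find_anomalies(rules))  # [(0, 1, 'shadowing')]
```

Generalisation and correlation follow the same pattern with the containment test reversed or replaced by a partial-overlap test.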

Relevance: 80.00%

Abstract:

A principal, but largely unexplored, use of our cognition when using interactive technology involves pretending. To pretend is to believe that which is not the case; for example, when we use the desktop on our personal computer we are pretending that the screen is a desktop upon which windows reside. But, of course, the screen really isn't a desktop. Similarly, when we engage in scenario- or persona-based design we are pretending about the settings, narrative, contexts and agents involved. Although there are exceptions, the overwhelming majority of the contents of these different kinds of stories are not the case. We also often pretend when we engage in the evaluation of these technologies (e.g. in the Wizard of Oz technique we "ignore the man behind the curtain"). We are pretending when we ascribe human-like qualities to digital technology. In each case we temporarily believe something to be the case which is not. If we add to this the experience of tele- and social-presence, and the diverse experiences which can arise from using digital technology, which too are predicated on pretending, then we are prompted to propose that human-computer interaction and cognitive ergonomics are largely built on pretending and make-believe. If this premise is accepted (and if not, please pretend for a moment), there are a number of interesting consequences.

Relevance: 80.00%

Abstract:

Information Systems for complex situations often fail to adequately deliver quality and suitability. One reason for this failure is an inability to identify comprehensive user requirements. Seldom do all stakeholders, especially those "invisible" or "back room" system users, have a voice when systems are designed. If this is a global problem then it may impact on both the public and private sectors in terms of their ability to perform, produce and stay competitive. To improve upon this, system designers use rich pictures as a diagrammatic means of identifying differing world views with the aim of creating shared understanding of the organisation. Rich pictures have predominantly been used as freeform, unstructured tools with no commonly agreed syntax. This research has collated, analysed and documented a substantial collection of rich pictures into a single dataset. Attention has been focussed on three main research areas: how the rich picture is facilitated, how the rich picture is constructed, and how to interpret the resultant pictures. This research highlights the importance of the rich picture tool and argues the value of adding levels of structure in certain cases. It is shown that there are considerable benefits for both the interpreter and the creator in providing a pre-drawing session, a common key of symbols and a framework for icon understanding. In conclusion, it is suggested that there is some evidence that a framework which aims to support the process of the rich picture and aid interpretation is valuable.

Relevance: 80.00%

Abstract:

The creative industries sector faces a constantly changing context characterised by the speed of the development and deployment of digital information systems and Information Communications Technologies (ICT) on a global scale. This continuous digital disruption has had significant impact on the whole value chain of the sector: creation and production; discovery and distribution; and consumption of cultural goods and services. As a result, creative enterprises must evolve business and operational models and practices to be sustainable. Enterprises of all scales, type, and operational model are affected, and all sectors face ongoing digital disruption. Management consultancy practitioners and business strategy academics have called for new strategy development frameworks and toolkits, fit for a continuously changing world. This thesis investigates a novel approach to organisational change appropriate to the digital age, in the context of the creative sector in Scotland. A set of concepts, methods, tools, and processes to generate theoretical learning and practical knowing was created to support enterprises to digitally adapt through undertaking journeys of change and organisational development. The framework is called The AmbITion Approach. It was developed by blending participatory action research (PAR) methods and modern management consultancy, design, and creative practices. Empirical work also introduced to the framework Coghlan and Rashford's change categories. These enabled the definition and description of the extent to which organisations developed: whether they experienced first order (change), second order (adaptation) or third order (transformation) change. Digital research tools for inquiry were tested by a pilot study, and then embedded in a longitudinal study over two years of twenty-one participant organisations from Scotland's creative sector.
The author applied and investigated the novel approach in a national digital development programme for Scotland's creative industries. The programme was designed and delivered by the author and ran nationally from 2012 to 2014. Detailed grounded thematic analysis of the data corpus was undertaken, along with analysis of rich media case studies produced by the organisations about their change journeys. The results of studies on participants, and validation criteria applied to the results, demonstrated that the framework triggers second order (adaptation) and third order (transformation) change in creative industry enterprises. The AmbITion Approach framework is suitable for the continuing landscape of digital disruption within the creative sector. The thesis contributes to practice the concepts, methods, tools, and processes of The AmbITion Approach, which have been empirically tested in the field and validated as a new framework for business transformation in a digital age. The thesis contributes to knowledge a theoretical and conceptual framework with a specific set of constructs and criteria that define first, second, and third order change in creative enterprises, and a robust research and action framework for the analysis of the quality, validity and change achieved by action research based development programmes. The thesis additionally contributes to the practice of research, adding to our understanding of the value of PAR and design thinking approaches and creative practices as methods for change.

Relevance: 80.00%

Abstract:

We describe a new hyper-heuristic method, NELLI-GP, for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
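One plausible reading of a "linear sequence of dispatching rules" is sketched below: at each scheduling decision the rules are applied in order, each narrowing the candidate job set. The job attributes and the single-machine setting are invented for illustration and are far simpler than the JSSP instances NELLI-GP targets, where the rules themselves are evolved tree structures.

```python
# Illustrative dispatching with a linear sequence of rules.  Each rule
# filters the candidate jobs; later rules break ties left by earlier
# ones.  Job data and the single-machine setting are invented here.

def spt(jobs):  # shortest processing time
    t = min(j["proc"] for j in jobs)
    return [j for j in jobs if j["proc"] == t]

def edd(jobs):  # earliest due date
    d = min(j["due"] for j in jobs)
    return [j for j in jobs if j["due"] == d]

def dispatch(jobs, rule_sequence):
    """Build a single-machine job order using the rule chain."""
    remaining, order = list(jobs), []
    while remaining:
        candidates = remaining
        for rule in rule_sequence:
            candidates = rule(candidates)
            if len(candidates) == 1:
                break
        chosen = candidates[0]  # fall back to input order on remaining ties
        order.append(chosen["id"])
        remaining.remove(chosen)
    return order

jobs = [
    {"id": "J1", "proc": 3, "due": 10},
    {"id": "J2", "proc": 2, "due": 12},
    {"id": "J3", "proc": 2, "due": 8},
]
print(dispatch(jobs, [spt, edd]))  # ['J3', 'J2', 'J1']
```

Evolving which rules appear in the chain, and in what order, is what lets each ensemble member specialise on a subset of instances.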

Relevance: 80.00%

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.

Relevance: 80.00%

Abstract:

This work considers the static calculation of a program's average-case time. The number of systems that currently tackle this research problem is quite small due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is individually discussed in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labelling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that backs MOQA is generalised and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here. For example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of the MOQA strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work and the benefits of MOQA when compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.

Relevance: 80.00%

Abstract:

We continue the discussion of the decision points in the FUELCON meta-architecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core reload candidate pattern generator, thus a replacement for the NOXER simulator currently in use in the project.