961 results for Compiling (Electronic computers)


Relevance:

80.00%

Publisher:

Abstract:

A principal, but largely unexplored, use of our cognition when using interactive technology involves pretending. To pretend is to believe that which is not the case: for example, when we use the desktop on our personal computer we are pretending that the screen is a desktop upon which windows reside. But, of course, the screen really isn't a desktop. Similarly, when we engage in scenario- or persona-based design we are pretending about the settings, narrative, contexts and agents involved. Although there are exceptions, the overwhelming majority of the contents of these different kinds of stories are not the case. We also often pretend when we evaluate these technologies (e.g. in the Wizard of Oz technique we "ignore the man behind the curtain"), and we are pretending when we ascribe human-like qualities to digital technology. In each case we temporarily believe something to be the case which is not. If we add to this the experience of tele- and social-presence, and the diverse experiences arising from the use of digital technology which are likewise predicated on pretending, then we are prompted to propose that human-computer interaction and cognitive ergonomics are largely built on pretending and make-believe. If this premise is accepted (and if not, please pretend for a moment), there are a number of interesting consequences.

Relevance:

80.00%

Publisher:

Abstract:

Information systems for complex situations often fail to deliver adequate quality and suitability. One reason for this failure is an inability to identify comprehensive user requirements: seldom do all stakeholders, especially "invisible" or "back room" system users, have a voice when systems are designed. If this is a global problem then it may affect both the public and private sectors in terms of their ability to perform, produce and stay competitive. To improve upon this, system designers use rich pictures as a diagrammatic means of identifying differing world views, with the aim of creating a shared understanding of the organisation. Rich pictures have predominantly been used as freeform, unstructured tools with no commonly agreed syntax. This research has collated, analysed and documented a substantial collection of rich pictures into a single dataset. Attention has been focussed on three main research areas: how the rich picture is facilitated, how the rich picture is constructed, and how to interpret the resultant pictures. This research highlights the importance of the rich picture tool and argues for the value of adding levels of structure in certain cases. It is shown that providing a pre-drawing session, a common key of symbols and a framework for icon understanding brings considerable benefits for both the interpreter and the creator. In conclusion, it is suggested that there is some evidence that a framework which supports the rich picture process and aids interpretation is valuable.

Relevance:

80.00%

Publisher:

Abstract:

The creative industries sector faces a constantly changing context characterised by the speed of the development and deployment of digital information systems and Information Communications Technologies (ICT) on a global scale. This continuous digital disruption has had a significant impact on the whole value chain of the sector: the creation and production, discovery and distribution, and consumption of cultural goods and services. As a result, creative enterprises must evolve their business and operational models and practices to remain sustainable. Enterprises of all scales, types and operational models are affected, and all sectors face ongoing digital disruption. Management consultancy practitioners and business strategy academics have called for new strategy development frameworks and toolkits fit for a continuously changing world. This thesis investigates a novel approach to organisational change appropriate to the digital age, in the context of the creative sector in Scotland. A set of concepts, methods, tools and processes for generating theoretical learning and practical knowing was created to support enterprises in adapting digitally by undertaking journeys of change and organisational development. The framework is called The AmbITion Approach. It was developed by blending participatory action research (PAR) methods with modern management consultancy, design and creative practices. Empirical work also introduced Coghlan and Rashford’s change categories to the framework; these enabled the definition and description of the extent to which organisations developed: whether they experienced first-order change, second-order adaptation or third-order transformation. Digital research tools for inquiry were tested in a pilot study and then embedded in a two-year longitudinal study of twenty-one participant organisations from Scotland’s creative sector. The author applied and investigated the novel approach in a national digital development programme for Scotland’s creative industries, which was designed and delivered by the author and ran nationally between 2012 and 2014. Detailed grounded thematic analysis of the data corpus was undertaken, along with analysis of rich media case studies produced by the organisations about their change journeys. The results of the studies on participants, and the validation criteria applied to the results, demonstrated that the framework triggers second-order (adaptation) and third-order (transformation) change in creative industry enterprises. The AmbITion Approach framework is therefore suitable for the continuing landscape of digital disruption within the creative sector. The thesis contributes to practice the concepts, methods, tools and processes of The AmbITion Approach, which have been empirically tested in the field and validated as a new framework for business transformation in a digital age. It contributes to knowledge a theoretical and conceptual framework with a specific set of constructs and criteria that define first-, second- and third-order change in creative enterprises, and a robust research and action framework for analysing the quality, validity and change achieved by action research based development programmes. The thesis additionally contributes to the practice of research, adding to our understanding of the value of PAR, design thinking approaches and creative practices as methods for change.

Relevance:

80.00%

Publisher:

Abstract:

We describe a new hyper-heuristic method, NELLI-GP, for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between the heuristics in the evolved ensemble and the instances each solves provides new insights into features that might characterise similar instances.
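The ensemble idea can be pictured with a toy sketch. The snippet below is not the NELLI-GP implementation (which evolves tree-structured rules with genetic programming and targets the full JSSP); the three fixed dispatching rules, the single-machine tardiness objective and the greedy assignment of each instance to whichever heuristic handles it best are assumptions made only to illustrate the divide-and-conquer ensemble.

```python
# Illustrative sketch: an "ensemble of heuristics" where each heuristic is a
# sequence of dispatching rules and each instance is claimed by the heuristic
# that solves it best.
import random

# A dispatching rule picks the next job from the unscheduled set.
RULES = {
    "SPT": lambda jobs: min(jobs, key=lambda j: j["p"]),   # shortest processing time
    "LPT": lambda jobs: max(jobs, key=lambda j: j["p"]),   # longest processing time
    "EDD": lambda jobs: min(jobs, key=lambda j: j["d"]),   # earliest due date
}

def apply_heuristic(rule_seq, jobs):
    """Schedule jobs on one machine by cycling through a sequence of rules;
    return total tardiness as a toy objective."""
    remaining, t, tardiness = list(jobs), 0, 0
    for i in range(len(jobs)):
        rule = RULES[rule_seq[i % len(rule_seq)]]
        job = rule(remaining)
        remaining.remove(job)
        t += job["p"]
        tardiness += max(0, t - job["d"])
    return tardiness

def assign_instances(ensemble, instances):
    """Divide and conquer: each instance goes to the heuristic that solves it best."""
    assignment = {}
    for name, inst in instances.items():
        scores = {tuple(h): apply_heuristic(h, inst) for h in ensemble}
        assignment[name] = min(scores, key=scores.get)
    return assignment

random.seed(0)
instances = {f"inst{i}": [{"p": random.randint(1, 9), "d": random.randint(5, 30)}
                          for _ in range(8)] for i in range(5)}
ensemble = [["SPT"], ["EDD", "SPT"], ["LPT", "EDD"]]
print(assign_instances(ensemble, instances))
```

In the actual method the heuristics themselves are evolved, so the ensemble and the partition of instances co-adapt rather than being fixed as they are here.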

Relevance:

80.00%

Publisher:

Abstract:


Relevance:

80.00%

Publisher:

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is the key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The labelled partial order (LPO) is the basic data type of the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis, and a practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing and data entropy analysis.
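As a rough illustration of the basic data type, the toy sketch below represents a labelled partial order and enumerates the labelings consistent with its order constraints. The class name, the enumeration-by-permutation approach and the three-node example are assumptions made for illustration; they are not MOQA's actual representation or API, but they show why counting over all consistent labelings (rather than per concrete input) supports systematic average-case timing.

```python
# Minimal sketch: an LPO as a set of nodes plus order constraints, and a
# "random structure" as the uniform distribution over labelings respecting them.
from itertools import permutations

class LPO:
    def __init__(self, nodes, less_than):
        self.nodes = list(nodes)            # e.g. ["a", "b", "c"]
        self.less_than = set(less_than)     # pairs (x, y) meaning label(x) < label(y)

    def consistent_labelings(self, labels):
        """All assignments of distinct labels to nodes that respect the order."""
        out = []
        for perm in permutations(labels):
            lab = dict(zip(self.nodes, perm))
            if all(lab[x] < lab[y] for x, y in self.less_than):
                out.append(lab)
        return out

# A three-node "V": a below both b and c. With labels {1, 2, 3} there are
# exactly two consistent labelings, so an average-case count for an operation
# on this structure is an average over those two outcomes.
v = LPO(["a", "b", "c"], [("a", "b"), ("a", "c")])
for lab in v.consistent_labelings([1, 2, 3]):
    print(lab)
```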

Relevance:

80.00%

Publisher:

Abstract:

This work considers the static calculation of a program’s average-case running time. The number of systems that currently tackle this research problem is quite small, owing to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research: the MOQA system, which consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labelling distribution. This research develops and evaluates the MOQA language implementation and adds to the functions already available in the language. Furthermore, the theory behind MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here: for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses carried out during this research reveal some of MOQA’s strengths and weaknesses. This thesis aims to be pragmatic in evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared to similar systems. Succinctly, this work’s significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA’s accomplishments and a serious deliberation of the opportunities available to MOQA in the future.

Relevance:

80.00%

Publisher:

Abstract:

We continue the discussion of the decision points in the FUELCON meta-architecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core reload candidate pattern generator, that is, a replacement for the NOXER simulator currently in use in the project.
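To make the proposed neural component concrete, here is a hedged sketch of a regression surrogate that learns to predict a downstream parameter from candidate reload patterns, standing in for a simulator. The 0/1 pattern encoding, the synthetic target and the scikit-learn model are assumptions for illustration only and are not drawn from the FUELCON project itself.

```python
# Sketch of a surrogate regressor: train on patterns labelled by a simulator,
# then predict the parameter directly for new candidate patterns.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(500, 20)).astype(float)            # candidate patterns (toy encoding)
target = patterns @ rng.normal(size=20) + 0.1 * rng.normal(size=500)   # stand-in "simulator" output

surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
surrogate.fit(patterns[:400], target[:400])        # fit on simulator-labelled patterns
print("held-out R^2:", surrogate.score(patterns[400:], target[400:]))
```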

Relevance:

80.00%

Publisher:

Abstract:

This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common to all jobs and distinct from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines, we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule with a makespan that is at most twice the optimum value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine no-wait flow shop problem with a single server to the Gilmore-Gomory traveling salesman problem and solve it in polynomial time.
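One plausible reading of a greedy rule for the m dedicated-machine case can be sketched as a small simulation: whenever the server becomes free, it starts the next waiting setup on an idle dedicated machine. The job representation, the earliest-possible-start tie-breaking and the FIFO order within each machine's queue are assumptions for illustration; the factor-2 bound quoted above is established in the paper for its own greedy rule, not proven here.

```python
# Greedy single-server schedule on m parallel dedicated machines:
# the server performs setups one at a time; processing then proceeds without it.
def greedy_makespan(jobs_per_machine):
    """jobs_per_machine: list over machines of [(setup, processing), ...]."""
    m = len(jobs_per_machine)
    queues = [list(js) for js in jobs_per_machine]
    machine_free = [0.0] * m            # time each dedicated machine becomes idle
    server_free = 0.0                   # time the single server becomes idle
    while any(queues):
        # Among machines that still have jobs, attend the one whose next setup
        # can start earliest (ties broken by machine index).
        start, i = min((max(server_free, machine_free[i]), i)
                       for i in range(m) if queues[i])
        setup, proc = queues[i].pop(0)
        server_free = start + setup               # server is busy only during the setup
        machine_free[i] = start + setup + proc    # machine is busy through processing
    return max(machine_free)

# Two dedicated machines, each with two (setup, processing) jobs:
print(greedy_makespan([[(1, 3), (2, 2)], [(1, 4), (2, 1)]]))
```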

Relevance:

80.00%

Publisher:

Abstract:

Review of: Peter Reimann and Hans Spada (eds), Learning in Humans and Machines: Towards an Interdisciplinary Learning Science, Pergamon (1995). ISBN 978-0080425696.

Relevance:

80.00%

Publisher:

Abstract:

The PHYSICA software was developed to enable multiphysics modelling, allowing for interaction between Computational Fluid Dynamics (CFD), Computational Solid Mechanics (CSM) and Computational Aeroacoustics (CAA). PHYSICA uses the finite volume method with 3-D unstructured meshes to enable the modelling of complex geometries. Many engineering applications involve significant computational time, which needs to be reduced by means of faster solution methods or parallel, high-performance algorithms. It is well known that multigrid methods serve as fast iterative schemes for linear and nonlinear diffusion problems. This paper addresses two major issues for this iterative solver: the parallelisation of multigrid methods and their application to time-dependent multiscale problems.
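The multigrid idea the paper builds on can be sketched on the simplest possible model problem. The code below is a textbook V-cycle for the 1-D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction and linear-interpolation prolongation; it is illustrative only and unrelated to PHYSICA's unstructured 3-D solver.

```python
# Recursive multigrid V-cycle for -u'' = f on [0, 1] with zero boundary values.
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi relaxation sweeps on the interior points."""
    for _ in range(sweeps):
        u[1:-1] += omega * (f[1:-1] - (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2) * h**2 / 2
    return u

def v_cycle(u, f, h):
    u = smooth(u, f, h)                                   # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2
    if len(u) <= 3:                                       # coarsest grid: stop recursing
        return u
    rc = np.zeros((len(u) + 1) // 2)                      # full-weighting restriction of the residual
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)              # coarse-grid correction
    e = np.zeros_like(u)                                  # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return smooth(u, f, h)                                # post-smoothing

n = 127                                                   # interior points (grid has n + 2 nodes)
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)
f = np.pi**2 * np.sin(np.pi * x)                          # exact solution is sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

Each V-cycle costs O(n) work, which is why multigrid is attractive as the fast inner solver that the paper then parallelises.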

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we study a problem of scheduling and batching on two machines in a flow-shop and open-shop environment. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine, and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open-shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived, and various heuristic methods to be suggested.
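For the flow-shop case, the makespan of any one candidate schedule with consistent batches is straightforward to evaluate; a heuristic of the kind described can then keep the best of several such candidates. The sketch below assumes non-anticipatory setups (machine 2 cannot begin a batch's setup before the whole batch has completed on machine 1); it is an illustration of the evaluation step, not the paper's heuristic or its 4/3 analysis.

```python
# Makespan of a two-machine flow shop with the same (consistent) batches on
# both machines, sequenced in the given order.
def flow_shop_batch_makespan(batches, p1, p2, s1, s2):
    """batches: list of lists of job indices; p1/p2: per-job processing times;
    s1/s2: machine-dependent setup time paid once per batch."""
    c1 = c2 = 0.0
    for batch in batches:
        c1 += s1 + sum(p1[j] for j in batch)                    # batch completes on machine 1
        c2 = max(c2, c1) + s2 + sum(p2[j] for j in batch)       # then setup + processing on machine 2
    return c2

p1 = [4, 2, 3, 5]
p2 = [3, 6, 2, 2]
print(flow_shop_batch_makespan([[0, 1], [2, 3]], p1, p2, s1=1, s2=1))
```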

Relevance:

80.00%

Publisher:

Abstract:

Sound waves are propagating pressure fluctuations, which are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. On the other hand, these fluctuations travel at the speed of sound in the medium, not as a transported fluid quantity. Because of these two properties, the Reynolds-averaged Navier–Stokes equations do not resolve the acoustic fluctuations. This paper discusses a defect correction method for this type of multi-scale problem in aeroacoustics. Numerical examples in one and two dimensions are used to illustrate the concept.
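The defect correction idea itself is generic and can be sketched in a few lines: the residual (the "defect") of the accurate operator is repeatedly fed back through a cheaper, lower-order approximation that is easy to solve. The matrices below are toy data, not an aeroacoustic discretisation, and the diagonal choice of the cheap operator is an assumption for illustration.

```python
# Generic defect-correction iteration: x_{k+1} = x_k + B^{-1} (b - A x_k),
# where A is the accurate operator and B a cheap approximation of it.
import numpy as np

def defect_correction(A, B, b, iters=20):
    x = np.zeros_like(b)
    for _ in range(iters):
        defect = b - A @ x                       # residual of the accurate operator
        x = x + np.linalg.solve(B, defect)       # correction via the cheap operator
    return x

A = np.array([[4.0, -1.0, 0.3],
              [-1.0, 4.0, -1.0],
              [0.3, -1.0, 4.0]])
B = np.diag(np.diag(A))                          # toy "low-order" approximation of A
b = np.array([1.0, 2.0, 3.0])
print(defect_correction(A, B, b), np.linalg.solve(A, b))
```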

Relevance:

80.00%

Publisher:

Abstract:

The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach: load information is gathered continuously, before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
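A minimal sketch of the allocation step, assuming load information has already been gathered proactively into a local cache: the node is chosen from cached state alone, so no extra messages are needed at allocation time, and the task is never migrated afterwards (non-preemptive). The field names and the scoring rule are assumptions made for illustration; the paper does not prescribe them.

```python
# Proactive, non-preemptive allocation from cached static + dynamic node information.
from dataclasses import dataclass

@dataclass
class NodeInfo:
    name: str
    cpu_load: float      # dynamic information, refreshed periodically in the background
    cores: int           # static information, gathered once

def pick_node(nodes):
    """Least normalised load wins; decided locally from the cache, no new messages."""
    return min(nodes, key=lambda n: n.cpu_load / n.cores)

cache = [NodeInfo("n1", 3.2, 4), NodeInfo("n2", 1.1, 2), NodeInfo("n3", 2.0, 8)]
print(pick_node(cache).name)     # the task is dispatched here and runs to completion
```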

Relevance:

80.00%

Publisher:

Abstract:

Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm was motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, in which the number of messages generated remains loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
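The abstract does not spell out the three states or the transition rules, so the skeleton below is only one possible reading, meant to show the shape of a three-state node in a broadcast network in which idle nodes stay silent. The state names, the bid comparison and the timeout handling are all assumptions, not the published tri-state protocol.

```python
# Skeleton of a three-state election participant (illustrative assumptions only).
from enum import Enum, auto

class State(Enum):
    IDLE = auto()          # listens only; transmits nothing
    CANDIDATE = auto()     # has broadcast a bid to become coordinator
    COORDINATOR = auto()   # won the election

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = State.IDLE

    def on_coordinator_lost(self):
        # Any node may stand once the coordinator vanishes.
        self.state = State.CANDIDATE
        return ("BID", self.node_id)               # broadcast on the network

    def on_bid(self, other_id):
        if self.state is State.CANDIDATE and other_id > self.node_id:
            self.state = State.IDLE                # defer and fall silent
        return None

    def on_timeout(self):
        if self.state is State.CANDIDATE:          # no stronger bid heard in time
            self.state = State.COORDINATOR
            return ("COORDINATOR", self.node_id)
        return None

n = Node(7)
print(n.on_coordinator_lost(), n.on_bid(9), n.state)
```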