18 results for script-driven test program generation process

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The topic of bioenergy, biofuels and bioproducts remains at the top of the current political and research agenda. Identification of the optimum processing routes for biomass, in terms of efficiency, cost, environment and socio-economics, is vital as concern grows over the remaining fossil fuel resources, climate change and energy security. It is known that the only renewable way of producing conventional hydrocarbon fuels and organic chemicals is from biomass, but the problem remains of identifying the best product mix and the most efficient way of processing biomass to products. The aim is to move Europe towards a biobased economy, and it is widely accepted that biorefineries are key to this development. A methodology was required for the generation and evaluation of biorefinery process chains for converting biomass into one or more valuable products, one that properly considers performance, cost, environment, socio-economics and other factors that influence the commercial viability of a process. In this thesis a methodology to achieve this objective is described. The completed methodology includes process chain generation, process modelling and subsequent analysis and comparison of results in order to evaluate alternative process routes. A modular structure was chosen to allow greater flexibility and to allow the user to generate a large number of different biorefinery configurations. The significance of the approach is that the methodology is defined and is thus rigorous and consistent, and may be readily re-examined if circumstances change. There was a requirement for consistency in structure and use, particularly for multiple analyses. It was important that analyses could be quickly and easily carried out to consider, for example, different scales, configurations and product portfolios, and that previous outcomes could be readily reconsidered. The result of the completed methodology is the identification of the most promising biorefinery chains from those considered as part of the European Biosynergy Project.

Relevance:

100.00%

Publisher:

Abstract:

Purpose – This paper describes research that has sought to create a formal and rational process that guides manufacturers through the strategic positioning decision. Design/methodology/approach – The methodology is based on a series of case studies to develop and test the decision process. Findings – A decision process that leads the practitioner through an analytical process to decide which manufacturing activities they should carry out themselves. Practical implications – Strategic positioning is concerned with choosing those production-related activities that an organisation should carry out internally, and those that should be external and under the ownership and control of suppliers, partners, distributors and customers. Originality/value – This concept extends traditional decision paradigms, such as those associated with “make versus buy” and “outsourcing”, by looking at the interactions between manufacturing operations and the wider supply chain networks associated with the organisation.

Relevance:

100.00%

Publisher:

Abstract:

Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
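The cubic scaling mentioned above comes from factorising the n × n covariance matrix inside the Gaussian log-likelihood. As a minimal illustration of the kind of likelihood approximation and parallelism the paper builds on (not the authors' implementation, and far simpler than the Vecchia [1988] and Tresp [2000] extensions), the sketch below partitions the data into blocks, treats the blocks as independent, and evaluates the per-block log-likelihoods on separate processor cores. The exponential covariance function, block size and synthetic data are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def exp_cov(X, lengthscale=1.0, variance=1.0, nugget=1e-6):
    """Exponential covariance matrix for coordinates X (n x d)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return variance * np.exp(-d / lengthscale) + nugget * np.eye(len(X))

def block_loglik(args):
    """Exact Gaussian log-likelihood of one block (O(m^3) for block size m)."""
    Xb, yb, lengthscale, variance = args
    K = exp_cov(Xb, lengthscale, variance)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yb))
    return -0.5 * yb @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(yb) * np.log(2 * np.pi)

def approx_loglik(X, y, lengthscale, variance, block_size=500, workers=4):
    """Independent-blocks approximation: sum of per-block log-likelihoods, run in parallel."""
    idx = np.array_split(np.arange(len(y)), max(1, len(y) // block_size))
    tasks = [(X[i], y[i], lengthscale, variance) for i in idx]
    with Pool(workers) as pool:
        return sum(pool.map(block_loglik, tasks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(2000, 2))   # synthetic spatial locations
    y = rng.standard_normal(2000)            # synthetic observations
    print(approx_loglik(X, y, lengthscale=2.0, variance=1.0))
```

The exact log-likelihood of n observations costs roughly cubic time and quadratic memory; the blocked approximation costs on the order of n·m² with block size m and parallelises trivially over blocks, which is the behaviour the abstract exploits.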

Relevance:

100.00%

Publisher:

Abstract:

Fast pyrolysis of biomass produces a liquid bio-oil that can be used for electricity generation. Bio-oil can be stored and transported, so it is possible to decouple the pyrolysis process from the generation process. This allows each process to be separately optimised. It is necessary to have an understanding of the transport costs involved in order to carry out techno-economic assessments of combinations of remote pyrolysis plants and generation plants. Published fixed and variable costs for freight haulage have been used to calculate the transport cost for trucks running between field stores and a pyrolysis plant. It was found that the key parameter for estimating these costs was the number of round trips a day a truck could make, rather than the distance covered. This zone-costing approach was used to estimate the transport costs for a range of pyrolysis plant sizes for willow woodchips and baled miscanthus. The possibility of saving transport costs by producing bio-oil near to the field stores and transporting the bio-oil to a central plant was investigated, and it was found that this would only be cost effective for large generation plants.
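A minimal sketch of the zone-costing idea, with illustrative fixed and variable haulage costs that are assumptions rather than the published figures used in the study: because only whole round trips fit into a working day, the delivered cost per tonne moves in steps with the achievable number of trips rather than smoothly with distance.

```python
def cost_per_tonne(round_trip_km, fixed_cost_per_day=400.0, variable_cost_per_km=1.2,
                   payload_tonnes=24.0, working_hours=9.0, avg_speed_kmh=60.0,
                   load_unload_hours=1.0):
    """Zone-costing style estimate of haulage cost per tonne of feedstock.

    All cost and operating figures are illustrative assumptions, not values from the study.
    """
    trip_hours = round_trip_km / avg_speed_kmh + load_unload_hours
    trips_per_day = max(1, int(working_hours // trip_hours))   # only whole trips count
    daily_cost = fixed_cost_per_day + variable_cost_per_km * round_trip_km * trips_per_day
    return daily_cost / (payload_tonnes * trips_per_day)

# The step changes in trips per day dominate the cost, not the raw distance:
for km in (20, 60, 120, 200):
    print(km, "km round trip:", round(cost_per_tonne(km), 2), "per tonne")
```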

Relevance:

100.00%

Publisher:

Abstract:

A large number of studies have been devoted to modelling user-generated content and the interactions between users on Twitter. In this paper, we propose a method inspired by Social Role Theory (SRT), which assumes that a user behaves differently in different roles in the generation process of Twitter content. We consider the two most distinctive social roles on Twitter: originator and propagator, who respectively post original messages and retweet or forward messages from others. In addition, we also consider role-specific social interactions, especially implicit interactions between users who share some common interests. All the above elements are integrated into a novel regularized topic model. We evaluate the proposed method on real Twitter data. The results show that our method is more effective than existing ones which do not distinguish social roles. Copyright 2013 ACM.
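The regularized topic model itself is not specified in the abstract; as a much simplified stand-in for the role-splitting idea, the sketch below separates originator content (original posts) from propagator content (retweets) and fits an ordinary topic model to each with scikit-learn. The retweet heuristic and the placeholder tweets are assumptions, and no role-specific regularisation is attempted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # placeholder data: (user, text) pairs
    ("alice", "new results on solar thermal storage"),
    ("bob",   "RT @alice: new results on solar thermal storage"),
    ("carol", "our lab released a pyrolysis dataset"),
    ("dave",  "RT @carol: our lab released a pyrolysis dataset"),
]

# Crude role split: retweets mark propagator behaviour, everything else originator.
originator_docs = [t for _, t in tweets if not t.startswith("RT @")]
propagator_docs = [t[t.index(":") + 1:] for _, t in tweets if t.startswith("RT @")]

def fit_topics(docs, n_topics=2):
    """Fit a plain LDA topic model to one role's documents."""
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    return vec, lda

originator_model = fit_topics(originator_docs)
propagator_model = fit_topics(propagator_docs)
```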

Relevance:

100.00%

Publisher:

Abstract:

The project “Reference in Discourse” deals with the selection of a specific object from a visual scene in a natural-language situation. The goal of this research is to explain this everyday discourse reference task in terms of a concept generation process based on subconceptual visual and verbal information. The system OINC (Object Identification in Natural Communicators) aims at solving this problem in a psychologically adequate way. The difficulties the system encounters with incomplete and deviant descriptions correspond to the data from experiments with human subjects. The results of these experiments are reported.

Relevance:

40.00%

Publisher:

Abstract:

The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next-generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software through the selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component-based software that is flexible, extensible, and expressive. We introduce a data-driven, object-oriented programming methodology for component-based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated, in order to develop a visualization application for geospatial data. In particular we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross-platform applicability. © The Eurographics Association 2007.
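The Fluid framework's API is not given in the abstract; the sketch below is only a generic illustration of composition-by-configuration, in which collaborating components are selected and wired from a declarative description rather than constructed in code. All class names, keys and the tiny registry are hypothetical.

```python
# Minimal data-driven component composition: components are selected and wired
# from a declarative description rather than hard-coded construction logic.

class GeoTiffLoader:          # hypothetical component implementations
    def load(self): return "elevation grid"

class TerrainRenderer:
    def __init__(self, source): self.source = source
    def draw(self): return f"rendering {self.source.load()}"

REGISTRY = {"GeoTiffLoader": GeoTiffLoader, "TerrainRenderer": TerrainRenderer}

composition = {                      # the 'composition language' document
    "loader":   {"type": "GeoTiffLoader"},
    "renderer": {"type": "TerrainRenderer", "refs": {"source": "loader"}},
}

def assemble(description):
    """Instantiate components and resolve the references between them."""
    instances = {}
    for name, spec in description.items():
        refs = {k: instances[v] for k, v in spec.get("refs", {}).items()}
        instances[name] = REGISTRY[spec["type"]](**refs)
    return instances

app = assemble(composition)
print(app["renderer"].draw())    # -> "rendering elevation grid"
```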

Relevance:

40.00%

Publisher:

Abstract:

Operators can become confused while diagnosing faults in a process plant during operation. This may prevent remedial actions being taken before hazardous consequences occur. The work in this thesis proposes a method to aid plant operators in systematically finding the causes of any fault in the process plant. A computer-aided fault diagnosis package has been developed for use on the widely available IBM PC compatible microcomputer. The program displays a coloured diagram of a fault tree on the VDU of the microcomputer, so that the operator can see the link between the fault and its causes. The consequences of the fault and the causes of the fault are also shown to provide a warning of what may happen if the fault is not remedied. The cause and effect data needed by the package are obtained from a hazard and operability (HAZOP) study on the process plant. The result of the HAZOP study is recorded as cause and symptom equations, which are translated into a data structure and stored in the computer as a file for the package to access. Probability values are assigned to the events that constitute the basic causes of any deviation. From these probability values, the a priori probabilities of occurrence of other events are evaluated. A top-down recursive algorithm, called TDRA, for evaluating the probability of every event in a fault tree has been developed. From the a priori probabilities, the conditional probabilities of the causes of the fault are then evaluated using Bayes' conditional probability theorem. The posterior probability values can then be used by the operators to check in an orderly manner for the cause of the fault. The package has been tested using the results of a HAZOP study on a pilot distillation plant. The results from the test show how easy it is to trace the chain of events that leads to the primary cause of a fault. This method could be applied in a real process environment.
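The TDRA algorithm is not reproduced in the abstract, but the pattern it describes can be sketched as follows under an independence assumption for basic events: a top-down recursive evaluation of AND/OR gate probabilities gives the a priori probability of each event, and Bayes' theorem then ranks the basic causes given that the top-level fault has been observed. The example tree and probability values are hypothetical.

```python
# Recursive evaluation of event probabilities in a fault tree (independent events),
# then Bayes' theorem to rank basic causes given the observed top-level fault.

from math import prod

fault_tree = {                          # hypothetical HAZOP-derived tree
    "high_column_pressure": ("OR",  ["blocked_vent", "cooling_failure"]),
    "cooling_failure":      ("AND", ["pump_trip", "standby_pump_unavailable"]),
}
priors = {"blocked_vent": 0.01, "pump_trip": 0.05, "standby_pump_unavailable": 0.1}

def probability(event):
    """Top-down recursive evaluation of P(event)."""
    if event in priors:                              # basic event
        return priors[event]
    gate, children = fault_tree[event]
    p = [probability(c) for c in children]
    if gate == "AND":
        return prod(p)
    return 1 - prod(1 - pi for pi in p)              # OR gate

def posterior(cause, top_event):
    """P(cause | top_event) = P(top_event | cause) * P(cause) / P(top_event)."""
    saved = priors[cause]
    priors[cause] = 1.0                              # condition on the cause occurring
    p_top_given_cause = probability(top_event)
    priors[cause] = saved
    return p_top_given_cause * saved / probability(top_event)

top = "high_column_pressure"
for basic in priors:
    print(basic, round(posterior(basic, top), 4))
```

Printing the posteriors in descending order would give the kind of ordered checklist of candidate causes that the abstract describes operators working through.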

Relevance:

40.00%

Publisher:

Abstract:

This study of concentrating solar thermal power generation sets out to evaluate the main existing collection technologies using the framework of the Analytical Hierarchy Process (AHP). It encompasses parabolic troughs, heliostat fields, linear Fresnel reflectors, parabolic dishes, compound parabolic concentrators and linear Fresnel lenses. These technologies are compared based on technical, economic and environmental criteria. Within these three categories, numerous sub-criteria are identified; similarly sub-alternatives are considered for each technology. A literature review, thermodynamic calculations and an expert workshop have been used to arrive at quantitative and qualitative assessments. The methodology is applied principally to a case study in Gujarat in north-west India, though case studies based on the Sahara Desert, Southern Spain and California are included for comparison. A sensitivity analysis is carried out for Gujarat. The study concludes that the linear Fresnel lens with a secondary compound parabolic collector, or the parabolic dish reflector, is the preferred technology for north-west India.
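As a compact illustration of the AHP machinery applied in the study, the sketch below derives criteria weights from a pairwise comparison matrix via its principal eigenvector and ranks alternatives by their weighted scores. The comparison judgements, criterion scores and the three technologies shown are illustrative assumptions, not the study's data.

```python
import numpy as np

# Pairwise comparison of criteria (technical, economic, environmental) on Saaty's
# 1-9 scale; the judgements here are illustrative, not taken from the study.
criteria = ["technical", "economic", "environmental"]
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                      # criteria priority weights

# Scores of each alternative against each criterion (again illustrative).
scores = {
    "parabolic trough":    [0.7, 0.8, 0.6],
    "heliostat field":     [0.8, 0.5, 0.6],
    "linear Fresnel lens": [0.6, 0.9, 0.7],
}
ranking = sorted(scores, key=lambda t: -np.dot(weights, scores[t]))
for tech in ranking:
    print(tech, round(float(np.dot(weights, scores[tech])), 3))
```

A sensitivity analysis of the kind reported for Gujarat would amount to perturbing the entries of A (or the scores) and re-running the ranking.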

Relevance:

40.00%

Publisher:

Abstract:

This research addressed the question: "Which factors predict the effectiveness of healthcare teams?" It was addressed by assessing the psychometric properties of a new measure of team functioning with the use of data collected from 797 team members in 61 healthcare teams. This new measure is the Aston Team Performance Inventory (ATPI), developed by West, Markiewicz and Dawson (2005) and based on the IPO (input-process-output) model. The ATPI was pilot tested in order to examine the reliability of this measure in the Jordanian cultural context. A sample of five teams comprising 3-6 members each was randomly selected from the Jordan Red Crescent health centers in Amman. Factors that predict team effectiveness were explored in a Jordanian sample (comprising 1622 members in 277 teams with 255 leaders from healthcare teams in hospitals in Amman) using self-report and leader-rating measures adapted from work by West, Borrill et al. (2000) to determine team effectiveness and innovation from the leaders' point of view. The results demonstrate the validity and reliability of the measures for use in healthcare settings. Team effort and skills and leader managing had the strongest association with team processes in terms of team objectives, reflexivity, participation, task focus, creativity and innovation. Team inputs in terms of task design, team effort and skills, and organizational support were associated with team effectiveness and innovation, whereas team resources were associated only with team innovation. Team objectives had the strongest mediated and direct association with team effectiveness, whereas task focus had the strongest mediated and direct association with team innovation. Finally, among the leadership variables, leader managing had the strongest association with team effectiveness and innovation. The theoretical and practical implications of this thesis are that team effectiveness and innovation are influenced by multiple factors that must all be taken into account. The key factors managers need to ensure are in place for effective teams are team effort and skills, organizational support and team objectives. To conclude, the application of these findings to healthcare teams in Jordan will help improve their team effectiveness, and thus the healthcare services that they provide.

Relevance:

40.00%

Publisher:

Abstract:

Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
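As a toy illustration of generating configuration artefacts from an architectural model under environmental fluctuation (not the authors' tooling), the sketch below holds a model with one variation point, selects a variant when a monitored environment property changes, and emits the corresponding configuration; the component, variant names and thresholds are assumptions.

```python
import json

# Architectural model with a variation point bound at runtime.
model = {
    "component": "ImageProcessor",
    "variants": {
        "high_quality": {"resolution": "4k", "min_bandwidth_mbps": 50},
        "low_power":    {"resolution": "720p", "min_bandwidth_mbps": 5},
    },
}

def reconfigure(model, bandwidth_mbps):
    """Pick the richest variant whose bandwidth requirement the environment meets."""
    feasible = {name: v for name, v in model["variants"].items()
                if bandwidth_mbps >= v["min_bandwidth_mbps"]}
    chosen = max(feasible, key=lambda n: feasible[n]["min_bandwidth_mbps"])
    return {"component": model["component"], "variant": chosen, **feasible[chosen]}

# Environment fluctuation drives regeneration of the configuration artefact.
for bw in (80, 10):
    print(json.dumps(reconfigure(model, bw)))
```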

Relevance:

40.00%

Publisher:

Abstract:

This study examined an integrated model of the antecedents and outcomes of organisational and overall justice using a sample of Indian Call Centre employees (n = 458). Results of structural equation modelling (SEM) revealed that the four organisational justice dimensions relate to overall justice. Further, work group identification mediated the influence of overall justice on counterproductive work behaviors, such as presenteeism and social loafing, while conscientiousness was a significant moderator between work group identification and presenteeism and social loafing. Theoretical and practical implications are discussed.

Relevance:

40.00%

Publisher:

Abstract:

We argue that, for certain constrained domains, elaborate model transformation technologies, implemented from scratch in general-purpose programming languages, are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
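A minimal sketch of the form-to-conformant-document step described above, using lxml for XML Schema validation: submitted form values are turned into an XML document and checked against a small schema. The schema fragment and field names are invented for illustration; the CancerGrid metamodel itself is not reproduced here.

```python
from lxml import etree

# Hypothetical fragment of a data-collection model expressed as an XML Schema.
XSD = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="observation">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="patientId" type="xs:string"/>
        <xs:element name="systolicBP" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

def observation_from_form(form):
    """Turn submitted form values into an XML document instance."""
    root = etree.Element("observation")
    for field in ("patientId", "systolicBP"):
        etree.SubElement(root, field).text = str(form[field])
    return root

schema = etree.XMLSchema(etree.fromstring(XSD))
doc = observation_from_form({"patientId": "P-001", "systolicBP": 128})
assert schema.validate(doc), schema.error_log     # conformance check against the schema
print(etree.tostring(doc, pretty_print=True).decode())
```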

Relevance:

40.00%

Publisher:

Abstract:

A scalable synthetic muscle has been constructed that transduces nanoscale molecular shape changes into macroscopic motion. The working material, which deforms affinely in response to a pH stimulus, is a self-assembled block copolymer comprising nanoscopic hydrophobic domains in a weak polyacid matrix. A device has been assembled in which the muscle does work on a cantilever, and the force generated has been measured. When coupled to a chemical oscillator this provides a free-running chemical motor that generates a peak power of 20 mW kg⁻¹ by the serial addition of 10 nm shape changes, a mechanism that scales over 5 orders of magnitude. It is the nanostructured nature of the gel that gives rise to the affine deformation and results in a robust working material for the construction of scalable muscle devices.