843 results for “Hybrid” implementation model
Abstract:
Due to complex field/tissue interactions, high-field magnetic resonance (MR) images suffer significant image distortions that compromise diagnostic quality. This paper proposes a new method for removing these distortions based on the use of transceive phased arrays. The proposed system uses, in the examples presented herein, a shielded four-element transceive phased-array head coil and involves performing two separate scans of the same slice, with each scan using a different excitation during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining the two images the image distortion can be reduced severalfold. A combined hybrid method of moments (MoM)/finite element method (FEM) and finite-difference time-domain (FDTD) technique is proposed and used to elucidate the concept of the new method and to accurately evaluate the electromagnetic field (EMF) in a human head model. In addition, the proposed method is used in conjunction with the generalized auto-calibrating partially parallel acquisitions (GRAPPA) reconstruction technique to enable rapid imaging of the two scans. Simulation results reported herein for 11-T (470-MHz) brain imaging applications show that the new method with GRAPPA reconstruction theoretically results in improved image quality and that the proposed combined hybrid MoM/FEM and FDTD technique is suitable for high-field magnetic resonance imaging (MRI) numerical analysis.
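As a rough illustration of the combination step described above, the sketch below (Python, using synthetic 1-D profiles; the function name and the root-sum-of-squares rule are assumptions, not taken from the paper) shows how two scans with complementary intensity profiles can be merged so that the dark region of one scan is compensated by the other:

```python
# Minimal sketch (not the authors' code): combining two scans whose
# transmit-field intensity profiles are complementary ("antipodal"),
# so that bright regions in one scan compensate dark regions in the other.
import numpy as np

def combine_scans(scan_a: np.ndarray, scan_b: np.ndarray) -> np.ndarray:
    """Root-sum-of-squares combination of two complex images of the
    same slice, acquired with different transmit amplitude/phase
    settings (hypothetical inputs)."""
    return np.sqrt(np.abs(scan_a) ** 2 + np.abs(scan_b) ** 2)

# Example: two synthetic 1-D profiles that darken in opposite halves.
x = np.linspace(-1.0, 1.0, 256)
profile_a = 1.0 - 0.4 * np.clip(x, 0, None)   # darkens on the right
profile_b = 1.0 + 0.4 * np.clip(x, None, 0)   # darkens on the left
combined = combine_scans(profile_a + 0j, profile_b + 0j)
print(combined.std() < profile_a.std())        # combined profile is flatter
```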
Abstract:
Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high-performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
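A minimal sketch of how a macroscopic friction value of the kind compared above can be extracted from a simulation is given below (Python; the quantities and the synthetic time series are hypothetical stand-ins, not LSM output): friction is taken as the time-averaged ratio of transmitted shear force to applied normal force.

```python
# Illustrative sketch (assumed quantities, not LSM code): the macroscopic
# friction of a sheared gouge layer estimated as the ratio of the shear
# force on the driving plates to the applied normal force.
import numpy as np

def macroscopic_friction(shear_force: np.ndarray, normal_force: float) -> float:
    """Time-averaged friction coefficient over a steady-sliding window."""
    return float(np.mean(np.abs(shear_force)) / normal_force)

# Hypothetical force time series from a 3-D run and a 2-D run.
rng = np.random.default_rng(0)
f3d = 0.35 + 0.05 * rng.standard_normal(1000)
f2d = 0.25 + 0.05 * rng.standard_normal(1000)
print(macroscopic_friction(f3d, 1.0) > macroscopic_friction(f2d, 1.0))  # True
```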
Abstract:
This work presents a new approach to the problem of simultaneous localization and mapping (SLAM), inspired by computational models of the hippocampus of rodents. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
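The sketch below illustrates, in Python and on a toy scale, the kind of competitive attractor dynamics the abstract refers to (all parameters are assumptions; this is not the RatSLAM code): activity in a ring of cells is shifted by odometry, sharpened by local excitation, and suppressed by global inhibition.

```python
# Minimal sketch of a competitive attractor network of the kind described
# above (assumed parameters; not the RatSLAM implementation). A packet of
# activity in a ring of pose cells is moved by odometry and kept compact
# by local excitation plus global inhibition.
import numpy as np

N = 60
activity = np.zeros(N); activity[N // 2] = 1.0
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)  # local excitation

def attractor_step(a: np.ndarray, odo_shift: int, inhibition: float = 0.02):
    a = np.roll(a, odo_shift)                             # path integration
    a = np.convolve(np.concatenate([a[-3:], a, a[:3]]), kernel, "valid")
    a = np.maximum(a - inhibition, 0.0)                   # global inhibition
    return a / a.sum()                                    # normalise energy

for _ in range(10):
    activity = attractor_step(activity, odo_shift=1)
print(int(np.argmax(activity)))  # activity packet has moved ~10 cells
```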
Abstract:
Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programmer interface (API). In this paper, we use an industrially based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, makes it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.
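To make the testgraph idea concrete, here is a hedged Python sketch (the data structure and function names are hypothetical, not the authors' tool): the same bounded traversal of the graph yields operation sequences that can drive both the specification animator and the implementation, whose observable results are then compared.

```python
# Hedged sketch of the testgraph idea (names are hypothetical): the same
# edge traversal drives both the specification animator and the
# implementation, and their observable results are compared.
from typing import Callable, Dict, List, Tuple

# testgraph: node -> list of (operation name, next node)
TestGraph = Dict[str, List[Tuple[str, str]]]

def sequences(graph: TestGraph, start: str, depth: int) -> List[List[str]]:
    """Enumerate operation sequences up to a bounded depth."""
    if depth == 0:
        return [[]]
    out = []
    for op, nxt in graph.get(start, []):
        for tail in sequences(graph, nxt, depth - 1):
            out.append([op] + tail)
    return out or [[]]

def conforms(seq: List[str], animate: Callable, execute: Callable) -> bool:
    """Implementation conforms on this sequence if outputs agree."""
    return all(animate(op) == execute(op) for op in seq)

g: TestGraph = {"init": [("push", "loaded")], "loaded": [("pop", "init")]}
print(sequences(g, "init", 2))  # [['push', 'pop']]
```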
Abstract:
Because organizations are making large investments in information systems (IS), efficient IS project management has been found critical to success. This study examines how the use of incentives can improve project success. Agency theory is used to identify motivational factors of project success and to help IS owners understand to what extent management incentives can improve IS development and implementation (ISD/I). The outcomes will help practitioners and researchers build a theoretical model of the project management elements which lead to project success. Given the principal-agent nature of most significant-scale IS development, insights that allow for greater alignment of the agent’s goals with those of the principal through incentive contracts will serve to make ISD/I both more efficient and more effective, leading to more successful IS projects.
Abstract:
Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to the users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.
Abstract:
Since the Object Management Group (OMG) commenced its Model Driven Architecture (MDA) initiative, there has been considerable activity proposing and building automatic model transformation systems to help implement the MDA concept. Much less attention has been given to the need to ensure that model transformations generate the intended results. This paper explores one aspect of validation and verification for MDA: coverage of the source and/or target metamodels by a set of model transformations. The paper defines the property of metamodel coverage and some corresponding algorithms. This property helps the user assess which parts of a source (or target) metamodel are referenced by a given model transformation set. Some results are presented from a prototype implementation built on the Eclipse Modeling Framework (EMF).
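The coverage property lends itself to a very small illustration. The Python sketch below (sets of element names stand in for EMF metamodel elements; this is not the paper's algorithm) computes the fraction of a metamodel referenced by a transformation set and reports the elements never touched:

```python
# Sketch of the metamodel-coverage property (the real algorithms operate
# on EMF models; sets of element names stand in here).
def metamodel_coverage(metamodel: set, referenced: set):
    """Return the coverage ratio and the elements never referenced
    by any transformation rule."""
    covered = metamodel & referenced
    return len(covered) / len(metamodel), metamodel - referenced

uml = {"Class", "Attribute", "Operation", "Association"}
refs = {"Class", "Attribute"}          # elements the rule set touches
ratio, missed = metamodel_coverage(uml, refs)
print(ratio, sorted(missed))           # 0.5 ['Association', 'Operation']
```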
Abstract:
This thesis presents the formal definition of a novel Mobile Cloud Computing (MCC) extension of the Networked Autonomic Machine (NAM) framework, a general-purpose conceptual tool which describes large-scale distributed autonomic systems. The introduction of autonomic policies in the MCC paradigm has proved to be an effective technique for increasing the robustness and flexibility of MCC systems. In particular, autonomic policies based on continuous resource and connectivity monitoring help automate context-aware decisions for computation offloading. We have also provided NAM with a formalization in terms of a transformational operational semantics in order to fill the gap between its existing Java implementation, NAM4J, and its conceptual definition. Moreover, we have extended NAM4J by adding several components for managing large-scale autonomic distributed environments. In particular, the middleware allows for the implementation of peer-to-peer (P2P) networks of NAM nodes. NAM mobility actions have also been implemented to enable the migration of code, execution state and data. Within NAM4J, we have designed and developed a component, denoted as context bus, which is particularly useful in collaborative applications in that, if replicated on each peer, it instantiates a virtual shared channel allowing nodes to notify and be notified about context events. Regarding the management of autonomic policies, we have provided NAM4J with a rule engine, whose purpose is to allow a system to autonomously determine when offloading is convenient. We have also provided NAM4J with trust and reputation management mechanisms to make the middleware suitable for applications in which such aspects are of great interest. For this purpose, we have designed and implemented a distributed framework, denoted as DARTSense, where no central server is required, as reputation values are stored and updated by participants in a subjective fashion. We have also investigated the literature regarding MCC systems. The analysis pointed out that all MCC models focus on mobile devices and consider the Cloud as a system with unlimited resources. To contribute to filling this gap, we defined a modeling and simulation framework for the design and analysis of MCC systems, encompassing both the mobile and the Cloud sides. We have also implemented a modular and reusable simulator of the model. We have applied the NAM principles to two different application scenarios. First, we have defined a hybrid P2P/cloud approach where components and protocols are autonomically configured according to specific target goals, such as cost-effectiveness, reliability and availability. Merging the P2P and cloud paradigms brings together the advantages of both: high availability, provided by the Cloud presence, and low cost, obtained by exploiting inexpensive peer resources. As an example, we have shown how the proposed approach can be used to design NAM-based collaborative storage systems based on an autonomic policy that decides how to distribute data chunks among peers and Cloud, according to cost minimization and data availability goals. As a second application, we have defined an autonomic architecture for decentralized urban participatory sensing (UPS) which bridges sensor networks and mobile systems to improve effectiveness and efficiency. The developed application allows users to retrieve and publish different types of sensed information by using the features provided by NAM4J's context bus. Trust and reputation are managed through the application of DARTSense mechanisms. The application also includes an autonomic policy that detects areas characterized by few contributors and tries to recruit new providers by migrating the code necessary for sensing through NAM mobility actions.
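As a hedged illustration of the kind of rule the offloading rule engine might evaluate, the Python sketch below (NAM4J itself is Java; all thresholds and field names here are invented) combines monitored battery, connectivity and cost estimates into a single offload/no-offload decision:

```python
# Illustrative sketch (hypothetical thresholds; not NAM4J, which is Java):
# an autonomic offloading rule combining continuously monitored resource
# and connectivity readings with cost estimates.
from dataclasses import dataclass

@dataclass
class Context:
    battery: float      # fraction remaining, 0..1
    bandwidth: float    # Mbit/s to the cloud/peer
    local_cost: float   # estimated seconds to run locally
    remote_cost: float  # estimated seconds to transfer + run remotely

def should_offload(ctx: Context) -> bool:
    """Offload when connectivity is good enough and doing so saves
    either time or scarce battery."""
    if ctx.bandwidth < 1.0:            # too slow to ship code and state
        return False
    if ctx.battery < 0.2:              # preserve the device
        return True
    return ctx.remote_cost < ctx.local_cost

print(should_offload(Context(battery=0.5, bandwidth=8.0,
                             local_cost=12.0, remote_cost=4.0)))  # True
```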
Abstract:
The ERS-1 satellite carries a scatterometer which measures the amount of radiation scattered back toward the satellite by the ocean's surface. These measurements can be used to infer wind vectors. The implementation of a neural network based forward model which maps wind vectors to radar backscatter is addressed. Input noise cannot be neglected. To account for this noise, a Bayesian framework is adopted. However, Markov Chain Monte Carlo sampling is too computationally expensive. Instead, gradient information is used with a non-linear optimisation algorithm to find the maximum a posteriori probability values of the unknown variables. The resulting models are shown to compare well with the current operational model when visualised in the target space.
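A minimal sketch of the MAP estimation step described above is given below in Python (the forward model and noise levels are toy stand-ins for the trained network and sensor model): the unknown inputs are found by gradient-based minimisation of the negative log posterior.

```python
# Minimal sketch of MAP estimation (toy forward model, not the trained
# network): find the inputs that maximise the posterior by gradient-based
# minimisation of the negative log posterior.
import numpy as np
from scipy.optimize import minimize

def forward(w):                      # stand-in for the neural forward model
    return np.array([w[0] ** 2 + 0.5 * w[1], np.sin(w[0]) + w[1]])

observed = np.array([1.2, 0.8])
sigma_obs, sigma_prior = 0.1, 1.0    # assumed output noise and Gaussian prior

def neg_log_posterior(w):
    resid = forward(w) - observed
    return (resid @ resid) / (2 * sigma_obs**2) + (w @ w) / (2 * sigma_prior**2)

map_est = minimize(neg_log_posterior, x0=np.zeros(2))  # gradient-based (BFGS)
print(map_est.x)                     # MAP values of the unknown variables
```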
Abstract:
Public policy becomes managerial practice through a process of implementation. There is an established literature within Implementation Studies which explains the variables and some of the processes involved in implementation, but less attention has been focused upon how public service managers convert new policy initiatives into practice. The research proposes that managers and their organisations have to go through a process of learning in order to achieve the implementation of public policy. Data were collected over a five-year period from four case studies of capital investment appraisal in the British National Health Service. Further data were collected from taped interviews with key actors within the case studies. The findings suggest that managers do learn to implement policy and that four factors are important in this learning process. These are: (i) the nature of bureaucratic responsibility; (ii) the motivation of actors towards learning; (iii) the passage of time, which allows for the development of competence; and (iv) the use of project team structures. The research has demonstrated that the conversion of policy into practice occurs through the operationalisation of solutions to policy problems via job tasks. As such, it suggests that in understanding how policy is implemented, technical learning is more important than cultural learning in this context. In conclusion, a "Model of Learned Implementation" is presented, together with a discussion of some of the implications of the research. These include the possible use of more pilot projects for new policy initiatives and the more systematic diffusion of knowledge about implementation solutions.
Abstract:
Conventional project management techniques are not always sufficient to ensure time, cost and quality achievement of large-scale construction projects, due to complexity in planning, design and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, underestimation and improper estimation. Projects that are exposed to such an uncertain environment can be effectively managed with the application of risk management throughout the project's life cycle. However, the effectiveness of risk management depends on the technique through which the effects of risk factors are analysed/quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple-attribute decision-making technique, as a tool for risk analysis, because it can handle subjective as well as objective factors in a decision model, even when they are conflicting in nature. This provides a decision support system (DSS) to project management for making the right decision at the right time, ensuring project success in line with organisation policy, project objectives and a competitive business environment. The whole methodology is explained through a case application of a cross-country petroleum pipeline project in India, and its effectiveness in project management is demonstrated.
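The core AHP computation is compact enough to sketch. In the Python fragment below (the 3x3 judgement matrix is an invented example on Saaty's 1-9 scale, not data from the case study), the risk-factor weights are the principal eigenvector of the pairwise comparison matrix, and a consistency index flags incoherent judgements:

```python
# Sketch of the core AHP computation (illustrative 3x3 judgement matrix):
# priorities are the principal eigenvector of the pairwise comparison
# matrix, and a consistency index checks the judgements.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # risk factor 1 vs 2 vs 3 (Saaty scale)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()              # relative importance of each factor

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index (0 = consistent)
print(weights.round(3), round(ci, 4))
```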
Abstract:
The role of technology management in achieving improved manufacturing performance has been receiving increased attention as enterprises become more exposed to competition from around the world. In the modern market for manufactured goods the demand is now for more product variety, better quality, shorter delivery and greater flexibility, while the financial and environmental cost of resources has become an urgent concern to manufacturing managers. This issue of the International Journal of Technology Management addresses the question of how the diffusion, implementation and management of technology can improve the performance of manufacturing industries. The authors come from a large number of different countries and their contributions cover a wide range of topics within this general theme. Some papers are conceptual, others report on research carried out in a range of different industries including steel production, iron founding, electronics, robotics, machinery, precision engineering, metal working and motor manufacture. In some cases they describe situations in specific countries. Several are based on presentations made at the UK Operations Management Association's Sixth International Conference, held at Aston University, at which the conference theme was 'Achieving Competitive Edge: Getting Ahead Through Technology and People'. The first two papers deal with questions of advanced manufacturing technology implementation and management. Firstly, Beatty describes a three-year longitudinal field study carried out in ten Canadian manufacturing companies using CAD/CAM and CIM systems. Her findings relate to speed of implementation, choice of system type, the role of individuals in implementation, and organization and job design. This is followed by a paper by Bessant in which he argues that a more strategic approach should be taken towards the management of technology in the 1990s and beyond. Also considered in this paper are the capabilities necessary to deploy advanced manufacturing technology as a strategic resource and the way such capabilities might be developed within the firm. These two papers, which deal largely with the implementation of hardware, are supplemented by Samson and Sohal's contribution, in which they argue that a much wider perspective should be adopted, based on a new approach to manufacturing strategy formulation. Technology transfer is the topic of the following two papers. Pohlen again takes the case of advanced manufacturing technology and reports on his research into the factors contributing to the successful realisation of AMT transfer. The paper by Lee then provides a more detailed account of technology transfer in the foundry industry. Using a case study based on a firm which has implemented a number of transferred innovations, a model is illustrated in which the 'performance gap' can be identified and closed. The diffusion of technology is addressed in the next two papers. In the first of these, by Lowe and Sim, the managerial technologies of 'Just in Time' and 'Manufacturing Resource Planning' (MRP II) are examined. A study is described from which a number of factors, including rate of diffusion and size, are found to influence the adoption process. Dahlin then considers the case of a specific item of hardware technology, the industrial robot. Her paper reviews the history of robot diffusion since the early 1960s and then attempts to predict how the industry will develop in the future.
The following two papers deal with the future of manufacturing in a more general sense. The future implementation of advanced manufacturing technology is the subject explored by de Haan and Peters, who describe the results of their Dutch Delphi forecasting study conducted among a panel of experts including scientists, consultants, users and suppliers of AMT. Busby and Fan then consider a type of organisational model, 'the extended manufacturing enterprise', which would represent a distinct alternative to pure market-led and command structures by exploiting the shared knowledge of suppliers and customers. The three country-based papers consider some strategic issues relating to manufacturing technology. In a paper based on investigations conducted in China, He, Liff and Steward report their findings from strategy analyses carried out in the steel and watch industries, with a view to assessing technology needs and organizational change requirements. This is followed by Tang and Nam's paper, which examines the case of the machinery industry in Korea and its emerging importance as a key sector in the Korean economy. In his paper, which focuses on Venezuela, Ernst then considers the particular problem of how that country can address falling oil revenues. He sees manufacturing as an important contributor to Venezuela's future economy and proposes a means whereby government and private enterprise can co-operate in the development of the manufacturing sector. The last six papers all deal with specific topics relating to the management of manufacturing. Firstly, Youssef looks at the question of manufacturing flexibility, introducing and testing a conceptual model that relates computer-based technologies to flexibility. Dangerfield's paper, which follows, is based on research conducted in the steel industry. He considers the question of scale and proposes a modelling approach for determining the plant configuration necessary to meet market demand. Engstrom presents the results of a detailed investigation into the need for reorganising material flow where group assembly of products has been adopted. Sherwood, Guerrier and Dale then report the findings of a study into the effectiveness of Quality Circle implementation. Stillwagon and Burns consider how manufacturing competitiveness can be improved in individual firms, describing how the application of 'human performance engineering' can be used to motivate individual performance as well as to integrate organizational goals. Finally, Sohal, Lewis and Samson describe, using a case study example, how just-in-time control can be applied within the context of computer numerically controlled flexible machining lines. The papers in this issue of the International Journal of Technology Management cover a wide range of topics relating to the general question of improving manufacturing performance through the dissemination, implementation and management of technology. Although they differ markedly in content and approach, they have the collective aim of addressing the concepts, principles and practices which provide a better understanding of the technology of manufacturing and assist in achieving and maintaining a competitive edge.
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, providing fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
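For reference, the standard two-phase commit protocol that the research extends can be sketched in a few lines (Python; synchronous calls stand in for messages, and the fault-tolerant, real-time variants developed in the thesis additionally need timeouts and recovery logic):

```python
# Hedged sketch of standard two-phase commit (the baseline the thesis's
# robust variants build on). Synchronous calls stand in for messages.
from typing import Callable, List

def two_phase_commit(participants: List[Callable[[str], bool]]) -> str:
    # Phase 1: ask every participant to prepare, collecting votes.
    votes = [p("prepare") for p in participants]
    decision = "commit" if all(votes) else "abort"
    # Phase 2: broadcast the coordinator's decision to all participants.
    for p in participants:
        p(decision)
    return decision

ready = lambda msg: True             # participant that always votes yes
busy = lambda msg: msg != "prepare"  # participant that votes no
print(two_phase_commit([ready, ready]))  # commit
print(two_phase_commit([ready, busy]))   # abort
```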
Abstract:
The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method uses the machine geometry and the magnetic characteristics of the iron types used in the machine. From this information the permeance of each iron component of the machine is calculated, and by using the equivalent magnetic circuit of the machine the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor in addition to the static torque.
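The torque prediction described above rests on a standard magnetic-circuit result: for a singly excited circuit, the static torque is T = (1/2) F^2 dP/dθ, where F is the winding MMF and P(θ) the rotor-position-dependent permeance. The Python sketch below illustrates this relation with an assumed sinusoidal permeance (the actual method computes P(θ) from the machine geometry and iron characteristics):

```python
# Illustrative sketch of magnetic-circuit torque prediction (toy permeance
# function; real inputs are machine geometry and iron B-H data). For a
# singly excited circuit, T = 0.5 * F**2 * dP/dtheta.
import numpy as np

def permeance(theta):                # assumed sinusoidal variation with angle
    return 2e-6 + 0.5e-6 * np.cos(theta)

def static_torque(theta, mmf, dtheta=1e-6):
    # Central-difference estimate of dP/dtheta at the given rotor angle.
    dP = (permeance(theta + dtheta) - permeance(theta - dtheta)) / (2 * dtheta)
    return 0.5 * mmf**2 * dP

angles = np.linspace(0, 2 * np.pi, 9)
print(static_torque(angles, mmf=500.0).round(4))  # static torque characteristic
```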