924 results for Input and outputs
Abstract:
This article looks at learner initiative in teacher-fronted activities and how it can influence classroom interaction. Extracts from lesson transcripts of adult evening classes in Italy are used to give a precise definition of what is meant by learner initiative and to illustrate how it can change interaction patterns. It is suggested that learner initiative could have an important role to play in promoting comprehensible input and output, and therefore language learning. It will be seen how, by giving learners more space and time, initiative can be actively encouraged. There are, however, direct implications for teacher training, since traditional interaction patterns must change if learner initiative is to become more effective.
Abstract:
This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice, which may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. It has been accepted by a growing number of organisations that the precise handling of information and knowledge is a significant factor in facilitating their success, but that there is a challenge in how to implement a strategy and processes for this handling. It is here, in the particular area of knowledge process handling, that we can see the contributions of operational researchers most clearly, as is illustrated in the papers included in this journal edition. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein both concentrate on knowledge transfer. Edwards and Kidd discuss the issues involved in transferring knowledge across borders of various kinds, from those within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community, and the other from outside it. Hafeez and Abdelmeguid present the nearest to a 'hard' OR contribution of the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum.
They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account. Perhaps more interesting was that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry—integrated circuit design and fabrication. Hasan and Crawford's interest lies in the holistic approach to knowledge management. They demonstrate their argument—that there is no simple IT solution for organisational knowledge management efforts—through two case study investigations. These case studies, in Australian universities, are investigated through cultural historical activity theory, which focuses the study on the activities carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS and, in this particular context, KMS. It is this argument that places the available technology into the knowledge activity process and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice. Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs—the operations function—and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, and names this operational method 'knowledge logistics'. The method of transformation he calls learning. This theoretical paper discusses the operational management of four types of knowledge objects (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. In her case the concern is the effective use of real estate (property and buildings) within a Fortune 100 company.
In order to manage the knowledge about this property and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
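To make the estimation strategy concrete, the following is a minimal sketch of a Heckman two-stage procedure on synthetic data; the variable names, coefficients and data are invented for illustration and are not the paper's actual specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500

# Synthetic plant data: scale and market structure drive the selection into
# internal R&D; scale also drives internal R&D unit cost (all invented).
scale = rng.normal(size=n)
market = rng.normal(size=n)
u = rng.normal(size=n)  # common shock, the source of the selection effect

# Stage 1: probit for the internal/external R&D choice.
select = (0.8 * scale + 0.5 * market + u > 0).astype(float)
Z = sm.add_constant(np.column_stack([scale, market]))
probit = sm.Probit(select, Z).fit(disp=False)

# Inverse Mills ratio from the fitted selection index.
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: unit-cost regression on the selected plants only, with the IMR
# correcting for sample selection; quadratic scale term mimics the
# non-linear scale/unit-cost relationship the abstract reports.
cost = 1.0 - 0.6 * scale + 0.3 * scale**2 + 0.5 * u + rng.normal(size=n)
mask = select == 1
X = sm.add_constant(np.column_stack([scale[mask], scale[mask]**2, imr[mask]]))
print(sm.OLS(cost[mask], X).fit().params)
```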
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis for assessing them. As an application we compare the results of the dynamic and static models for a set of UK universities. It is suggested that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
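As a point of reference, here is a minimal sketch of the 'static' input-oriented DEA model (the constant-returns CCR envelopment form) that the paper contrasts with its dynamic variant; the units and figures are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: columns are units, rows are inputs/outputs (made-up figures).
X = np.array([[5.0, 8.0, 7.0, 4.0],      # input 1, e.g. staff
              [6.0, 3.0, 5.0, 9.0]])     # input 2, e.g. funding
Y = np.array([[60.0, 70.0, 80.0, 50.0]]) # output, e.g. graduates

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of unit k: min theta s.t. the
    composite unit sum_j lam_j uses at most theta * inputs of k and
    produces at least the outputs of k."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)       # variables: [theta, lam_1..lam_n]
    c[0] = 1.0                # minimise theta
    A_in = np.hstack([-X[:, [k]], X])           # sum lam*x <= theta*x_k
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum lam*y >= y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, k]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(X.shape[1]):
    print(f"unit {k}: efficiency {ccr_efficiency(k):.3f}")
```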
Abstract:
In some contexts data envelopment analysis (DEA) gives poor discrimination on the performance of units. While this may reflect genuine uniformity of performance between units, it may also reflect lack of sufficient observations or other factors limiting discrimination on performance between units. In this paper, we present an overview of the main approaches that can be used to improve the discrimination of DEA. This includes simple methods, such as the aggregation of inputs or outputs and the use of longitudinal data; more advanced methods, such as the use of weight restrictions, production trade-offs and unobserved units; and a relatively new method based on the use of selective proportionality between the inputs and outputs. © 2007 Springer Science+Business Media, LLC.
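One of the listed approaches, weight restrictions, can be sketched in the multiplier form of DEA: adding a constraint linking the input weights removes some of each unit's freedom to choose flattering weights, and so sharpens discrimination. The data and the particular restriction below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up inputs X (m x n) and outputs Y (s x n); columns are units.
X = np.array([[5.0, 8.0, 7.0, 4.0],
              [6.0, 3.0, 5.0, 9.0]])
Y = np.array([[60.0, 70.0, 80.0, 50.0]])

def ccr_multiplier(k, ratio=None):
    """CCR multiplier form for unit k; optionally restrict v1 >= ratio * v2."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables: [u (s outputs), v (m inputs)]; maximise u.y_k => minimise -u.y_k
    c = np.concatenate([-Y[:, k], np.zeros(m)])
    A_ub = np.hstack([Y.T, -X.T])   # u.y_j - v.x_j <= 0 for every unit j
    b_ub = np.zeros(n)
    if ratio is not None:
        # Weight restriction v1 >= ratio * v2, i.e. -v1 + ratio*v2 <= 0.
        row = np.zeros(s + m)
        row[s], row[s + 1] = -1.0, ratio
        A_ub = np.vstack([A_ub, row])
        b_ub = np.append(b_ub, 0.0)
    A_eq = np.concatenate([np.zeros(s), X[:, k]])[None, :]  # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

for k in range(X.shape[1]):
    print(k, round(ccr_multiplier(k), 3), round(ccr_multiplier(k, ratio=2.0), 3))
```

Units that were efficient only by weighting one input very lightly lose that option under the restriction, which is exactly how discrimination improves.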
Abstract:
This work investigated the purification of phosphoric acid using a suitable organic solvent, followed by re-extraction of the acid from the solvent using water. The work consisted of practical batch and continuous studies and the economics and design of a full scale plant, based on the experimental data. A comprehensive literature survey on the purification of wet process phosphoric acid by organic solvents is presented, and the literature describing the design and operation of mixer-settlers has also been reviewed. In batch studies, the equilibrium and distribution curves for the water-phosphoric acid-solvent systems for benzaldehyde, cyclohexanol and methyl isobutyl ketone (MIBK) were determined, together with hydrodynamic characteristics for both pure and impure systems. The settling time increased with acid concentration, but power input had no effect. Drop size was found to decrease with acid concentration and power input. For the continuous studies a novel horizontal mixer-settler cascade was designed, constructed and operated using pure and impure acid with MIBK as the solvent. The cascade incorporates three air turbine agitated, cylindrical 900 ml mixers, and three cylindrical 200 ml settlers with air-lift solvent interstage transfer. Mean drop size in the fully baffled mixer was correlated. Drop size distributions were log-normal; size decreased with acid concentration and power input, and increased with dispersed phase hold-up. Phase inversion studies showed that the width of the ambivalent region depended upon rotor speed, hold-up and acid concentration. Settler characteristics were investigated by measuring wedge length. Distribution coefficients of impurities and acid were also investigated. The following optimum extraction conditions were found: initial acid concentration 63%, phase ratio of solvent to acid 1:1 (v/v), recommended impeller speed 900 r.p.m. In the washing step the maximum phase ratio of solvent to water was 8:1 (v/v). Work on phosphoric acid concentration involved constructing distillation equipment consisting of a 10& spherical still. A detailed process design for a 100 t/d plant, including capital cost, operating cost and profitability, was also completed. A profit model for phosphoric acid extraction was developed and maximised. Recommendations are made both for the application of the results to a practical design and for extensions of the study.
Abstract:
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
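As a rough illustration of the kind of XML such a language produces, here is a sketch that encodes a Gaussian distribution; the element names and namespace below are assumptions based on the abstract's description, not the normative UncertML schema.

```python
import xml.etree.ElementTree as ET

# Assumed namespace and element names, for illustration only; consult the
# actual UncertML specification for the real vocabulary.
NS = "http://www.uncertml.org/2.0"
ET.register_namespace("un", NS)

def gaussian(mean, variance):
    """Encode a one-dimensional Gaussian as an XML element (illustrative)."""
    dist = ET.Element(f"{{{NS}}}GaussianDistribution")
    ET.SubElement(dist, f"{{{NS}}}mean").text = str(mean)
    ET.SubElement(dist, f"{{{NS}}}variance").text = str(variance)
    return dist

# e.g. a sensor temperature reading of 21.3 degC with variance 0.25
print(ET.tostring(gaussian(21.3, 0.25), encoding="unicode"))
```

A payload like this could then travel as the input or output of a WPS process, which is the integration pattern the INTAMAP example describes.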
Abstract:
This study investigates concreteness effects in tasks requiring short-term retention. Concreteness effects were assessed in serial recall, matching span, order reconstruction, and free recall. Each task was carried out both in a control condition and under articulatory suppression. Our results show no dissociation between tasks that do and do not require spoken output. This argues against the redintegration hypothesis, according to which lexical-semantic effects in short-term memory arise only at the point of production. In contrast, concreteness effects were modulated by task demands that stressed retention of item versus order information. Concreteness effects were stronger in free recall than in serial recall. Suppression, which weakens phonological representations, enhanced the concreteness effect with item scoring. In a matching task, positive effects of concreteness occurred with open sets but not with closed sets of words. Finally, concreteness effects reversed when the task asked only for recall of word positions (as in the matching task), when phonological representations were weak (because of suppression), and when lexical-semantic representations were overactivated (because of closed sets). We interpret these results as consistent with a model in which phonological representations are crucial for the retention of order, while lexical-semantic representations support maintenance of item identity in both input and output buffers.
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis would help cope with this problem. Although extensions to Petri nets that allow module construction exist, generally the modularisation is restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
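For readers unfamiliar with commit protocols, the following is a minimal sketch of the standard two-phase commit decision logic that the thesis extends. The fault-tolerance and real-time extensions themselves are not reproduced; missed vote deadlines are crudely modelled here as abort votes.

```python
from enum import Enum

class Vote(Enum):
    COMMIT = 1
    ABORT = 2

class Participant:
    """A hypothetical participant that votes and then applies the decision."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.state = name, healthy, "ready"

    def prepare(self):
        # Phase 1: vote; a failed process votes to abort.
        return Vote.COMMIT if self.healthy else Vote.ABORT

    def apply(self, decision):
        # Phase 2: every participant applies the coordinator's decision.
        self.state = "committed" if decision is Vote.COMMIT else "aborted"

def two_phase_commit(participants, missed_deadlines=0):
    """Coordinator: commit only if every participant votes commit.

    missed_deadlines stands in for real-time deadline handling: a vote
    that misses its deadline is counted as an abort.
    """
    votes = [p.prepare() for p in participants]
    votes += [Vote.ABORT] * missed_deadlines
    decision = Vote.COMMIT if all(v is Vote.COMMIT for v in votes) else Vote.ABORT
    for p in participants:
        p.apply(decision)
    return decision

procs = [Participant("valve"), Participant("pump"), Participant("sensor", healthy=False)]
print(two_phase_commit(procs))              # Vote.ABORT: one process failed
print([(p.name, p.state) for p in procs])   # all abort together (atomicity)
```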
Abstract:
Using current software engineering technology, the robustness required for safety critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies, such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods, such as Petri nets, can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies `deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are presented to show how the tool works in the early design phase for fault prevention before the program is ever run.
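The reachability-tree analysis at the heart of such a tool can be sketched as follows: a toy Petri net models two Occam-style processes that acquire two shared channels in opposite orders, and a breadth-first search over markings finds the states from which no transition can fire, i.e. the deadlock potential. The net and all names are invented for illustration.

```python
from collections import deque

# A Petri net as transitions with input (pre) and output (post) places; a
# marking is a tuple of token counts. Process P takes channel a then b;
# process Q takes b then a -- the classic circular-wait deadlock.
PLACES = ["p1", "p2", "q1", "q2", "a", "b"]
TRANSITIONS = {
    "P_takes_a": ({"p1": 1, "a": 1}, {"p2": 1}),
    "P_takes_b": ({"p2": 1, "b": 1}, {"p1": 1, "a": 1, "b": 1}),  # P releases both
    "Q_takes_b": ({"q1": 1, "b": 1}, {"q2": 1}),
    "Q_takes_a": ({"q2": 1, "a": 1}, {"q1": 1, "a": 1, "b": 1}),  # Q releases both
}

def enabled(marking, pre):
    return all(marking[PLACES.index(p)] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = list(marking)
    for p, n in pre.items():
        m[PLACES.index(p)] -= n
    for p, n in post.items():
        m[PLACES.index(p)] += n
    return tuple(m)

def deadlocks(initial):
    """Breadth-first reachability tree; return markings with no enabled transition."""
    seen, frontier, dead = {initial}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        succs = [fire(m, pre, post)
                 for pre, post in TRANSITIONS.values() if enabled(m, pre)]
        if not succs:
            dead.append(m)
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return dead

m0 = (1, 0, 1, 0, 1, 1)  # both processes idle, both channels free
print(deadlocks(m0))     # the marking where P holds a and Q holds b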
Abstract:
Geometric information relating to most engineering products is available in the form of orthographic drawings or 2D data files. For many recent computer based applications, such as Computer Integrated Manufacturing (CIM), these data are required in the form of a sophisticated model based on Constructive Solid Geometry (CSG) concepts. A recent novel technique in this area transfers 2D engineering drawings directly into a 3D solid model called `the first approximation'. In many cases, however, this does not represent the real object. In this thesis, a new method is proposed and developed to enhance this model. This method uses the notion of expanding an object in terms of other solid objects, which are either primitive or first approximation models. To achieve this goal, in addition to the subroutine prepared to calculate the first approximation model of the input data, two other wireframe models are found for the extraction of sub-objects. One is the wireframe representation of the input, and the other is the wireframe of the first approximation model. A new fast method is developed for the latter special case wireframe, which is named the `first approximation wireframe model'. This method avoids the use of a solid modeller. Detailed descriptions of the algorithms and implementation procedures are given. In these techniques the utilisation of dashed line information is also considered in improving the model. Different practical examples are given to illustrate the functioning of the program. Finally, a recursive method is employed to automatically modify the output model towards the real object. Some suggestions for further work are made to increase the domain of objects covered, and to provide a commercially usable package. It is concluded that the current method promises the production of accurate models for a large class of objects.
Abstract:
The wear rates of sliding surfaces are significantly reduced if mild oxidational wear can be encouraged. It is hence of prime importance, in the interest of component life and material conservation, to understand the factors necessary to promote mild oxidational wear. The present work investigates the fundamental mechanism of the running-in wear of BS EN 31/EN 8 steel couples under various conditions of load, speed and test duration. Unidirectional sliding experiments were carried out on a pin-on-disc wear machine where frictional force, wear rate, temperature and contact resistance were continuously monitored during each test. Physical methods of analysis (X-ray, scanning electron microscopy etc.) were used to examine the wear debris and worn samples. The wear rate versus load curves revealed mild wear transitions which, under long durations of running, categorized mild wear into four distinct regions. α-Fe2O3, Fe3O4, FeO and an oxide mixture were the predominant oxides in the four regions of oxidational wear which were identified above the Welsh T2 transition. The wear curves were strongly affected by the speed and test duration. A surface model was used to calculate the surface parameters, and the results were found to be comparable with the experimentally observed parameters. Oxidation was responsible for the transition from severe to mild wear at a load corresponding to the Welsh T2 transition. In the running-in period, sufficient energy input and surface hardness enabled the oxide growth rate to increase until it eventually exceeded the rate of removal, whereupon mild wear ensued. A model was developed to predict the wear volume up to the transition. Remarkable agreement was found between the theoretical prediction and the experimentally measured values. The oxidational mechanism responsible for the transition to mild wear under equilibrium conditions was related to the formation of thick homogeneous oxide plateaux on subsurface hardened layers. FeO was the oxide formed initially at the onset of mild wear, but the oxide type changed during the total running period to give an equilibrium oxide whose nature depended on the loads applied.
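The thesis's own wear-volume model is not reproduced in the abstract; as a generic stand-in, the classical Archard relation V = KWs/H illustrates the kind of calculation involved. All numbers below are purely illustrative.

```python
# Classical Archard wear relation, V = K * W * s / H, used here only as a
# stand-in for the thesis's own transition model (not given in the abstract).
K = 1e-4        # dimensionless wear coefficient (severe-wear order of magnitude)
W = 40.0        # normal load, N
H = 2.0e9       # hardness of the softer surface, Pa (roughly 200 HV steel)
v = 1.0         # sliding speed, m/s
t = 600.0       # running-in duration, s

s = v * t                # sliding distance, m
V = K * W * s / H        # worn volume, m^3
print(f"wear volume: {V * 1e9:.2f} mm^3")   # 1.20 mm^3 for these figures
```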
Abstract:
This study has been conceived with the primary objective of identifying and evaluating the financial aspects of the transformation in country/company relations of the international oil industry from the traditional concessionary system to the system of governmental participation in the ownership and operation of oil concessions. The emphasis of the inquiry was placed on assembling a case study of the oil exploitation arrangements of Libya. Through a comprehensive review of the literature, the sociopolitical factors surrounding the international oil business were identified and examined in an attempt to see their influence on contractual arrangements and particularly to gauge the impact of any induced contractual changes on the revenue benefit accruing to the host country from its oil operations. Some comparative analyses were made in the study to examine the viability of the Libyan participation deals both as an investment proposal and as a system of conducting oil activities in the country. The analysis was carried out in the light of specific hypotheses to assess the relative impact of the participation scheme in comparison with the alternative concessionary model on the net revenue resulting to the government from oil operations and the relative effect on the level of research and development within the industry. A discounted cash flow analysis was conducted to measure inputs and outputs of the comparative models and judge their revenue benefits. Then an empirical analysis was carried out to detect any significant behavioural changes in the exploration and development effort associated with the different oil exploitation systems. Results of the investigation of revenues support the argument that the mere introduction of the participation system has not resulted in a significant revenue benefit to the host government. Though there has been a significant increase in government revenue, associated with the period following the emergence of the participation agreements, this increase was mainly due to socio-economic factors other than the participation scheme. At the same time the empirical results have shown an association of the participation scheme with a decline of the oil industry's research and development efforts.
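The discounted cash flow comparison rests on the standard net present value calculation; here is a minimal sketch with invented cash flow figures, not the Libyan data.

```python
# Minimal discounted-cash-flow comparison of two hypothetical exploitation
# regimes; all figures are invented for illustration.
def npv(rate, cashflows):
    """Net present value, where cashflows[t] is the net cash flow in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

concession    = [-100.0] + [30.0] * 10  # up-front outlay, then level revenues
participation = [-120.0] + [34.0] * 10  # higher outlay, higher government take

rate = 0.10
print(f"concession NPV:    {npv(rate, concession):7.1f}")
print(f"participation NPV: {npv(rate, participation):7.1f}")
```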
Abstract:
The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present, very little is known about these mechanisms, and it is therefore instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution the particle centroid forces can be calculated. Then, by dividing the particle forces by their respective masses to obtain accelerations, particle centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for this method, allows data input and output, and provides testing control. Using this numerical apparatus, tests have been carried out on disc assemblies and many new and interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests. These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
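The computation cycle described here (contact stiffness to contact force, force over mass to acceleration, then numerical integration to velocity and displacement) can be sketched for the simplest possible case of two discs on a line; all parameter values are illustrative.

```python
import numpy as np

# One-dimensional two-disc sketch of a distinct-element time step: a linear
# contact spring gives the contact force, forces are summed per particle,
# divided by mass, and integrated explicitly.
k = 1.0e5        # contact (normal) stiffness, N/m
radius = 0.01    # disc radius, m
mass = 0.05      # disc mass, kg
dt = 1.0e-5      # time step, s (well below the contact oscillation period)

x = np.array([0.0, 0.019])   # centres slightly overlapping (2r = 0.02 m)
v = np.array([0.5, -0.5])    # approaching velocities, m/s

for _ in range(2000):
    overlap = 2 * radius - (x[1] - x[0])
    f = k * overlap if overlap > 0 else 0.0   # repulsive force while in contact
    forces = np.array([-f, f])                # equal and opposite on the discs
    v += (forces / mass) * dt   # acceleration -> velocity (explicit integration)
    x += v * dt                 # velocity -> displacement

print(v)  # discs rebound: velocities reverse sign after the elastic contact
```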
Abstract:
Under ideal conditions ion plating produces finely grained dense coatings with excellent adhesion. The ion bombardment induced damage initiates a large number of small nuclei. Simultaneous coating and sputtering stimulates high rates of diffusion and forms an interfacial region of graded composition responsible for good adhesion. To obtain such coatings on components for industrial applications, the design and construction of an ion plater with a 24" (0.6 m) diameter chamber were investigated and modifications of the electron beam gun were proposed. A 12" (0.3 m) diameter ion plater was designed and constructed. The equipment was used to develop surfaces for solar energy applications. The conditions required to give extended surfaces by sputter etching were studied. Austenitic stainless steel was sputter etched at 20 and 30 mTorr working pressure and at 3, 4 and 5 kV. Uniform etching was achieved by redesigning the specimen holder to give a uniform electrostatic field over the surfaces of the specimens. Surface protrusions were observed after sputter etching. They were caused by the sputter process and were independent of grain boundaries, surface contaminants and inclusions. The sputtering rate of stainless steel was highly dependent on the background pressure, which should be kept below 10⁻⁵ Torr. Sputter etching improved the performance of stainless steel used as a solar selective surface. A twofold improvement was achieved on sputter etching bright annealed stainless steel. However, there was only slight improvement after sputter etching stainless steel which had been mechanically polished to a mirror finish. Cooling curves were used to measure the thermal emittance of specimens. The deposition rate of copper was measured at different levels of power input and was found to be a maximum at 9.5 kW. The diameter of the copper feed rod was found to be critical for the maintenance of a uniform evaporation rate.