911 results for Human-Centred Design


Relevance:

30.00%

Publisher:

Abstract:

The correction of presbyopia and restoration of true accommodative function to the ageing eye is the focus of much ongoing research and clinical work. A range of accommodating intraocular lenses (AIOLs) implanted during cataract surgery has been developed; these lenses are designed to change either their position or shape in response to ciliary muscle contraction to generate an increase in dioptric power. Two main design concepts exist. First, axial shift concepts rely on anterior axial movement of one or two optics to create accommodative ability. Second, curvature change designs aim to provide significant amplitudes of accommodation with little physical displacement. Single-optic devices have been used most widely, although the true accommodative ability provided by forward shift of the optic appears limited, and recent findings indicate that alternative factors, such as flexing of the optic to alter ocular aberrations, may be responsible for the enhanced near vision reported in published studies. Techniques for analysing the performance of AIOLs have not been standardised, and clinical studies have reported findings using a wide range of both subjective and objective methods, making it difficult to gauge the success of these implants. There is a need for longitudinal studies using objective methods to assess the long-term performance of AIOLs and to determine whether true accommodation is restored by the designs available. While dual-optic and curvature change IOLs are designed to provide greater amplitudes of accommodation than is possible with single-optic devices, several of these implants are in the early stages of development and require significant further work before human use is possible. A number of challenges remain and must be addressed before the ultimate goal of restoring youthful levels of accommodation to the presbyopic eye can be achieved.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of the paper is to examine the kind of HRM practices being implemented by overseas firms in their Indian subsidiaries and also to analyze the linkage between HRM practices and organizational performance. Design/methodology/approach: The paper utilizes a mixture of quantitative and qualitative techniques via personal interviews in 76 subsidiaries. Findings: The results show that while the introduction of HRM practices from the foreign parent organization is negatively associated with performance, local adaptation of HRM practices is positively related to the performance of foreign firms operating in India. Research limitations/implications: The main limitations include data being collected from only one respondent in each firm, and the relatively small sample size. Practical implications: The key message for practitioners is that HRM systems do improve organizational performance in the Indian subsidiaries of foreign firms, and an emphasis on the localization of HRM practices can further contribute in this regard. Originality/value: This is perhaps the very first investigation of its kind in the Indian context. © Emerald Group Publishing Limited.

Relevance:

30.00%

Publisher:

Abstract:

The research agenda for the field of international human resource management (IHRM) is clear. For a better understanding and to benefit substantially, management scholars must study IHRM in context (Jackson, S.E. and Schuler, R.S. 1995. Understanding human resource management in the context of organizations and their environment. Annual Review of Psychology, 46: 237–264; Geringer, J.M., Frayne, C.A. and Milliman, J.F. 2002. In search of 'best practices' in international human resource management: research design and methodology. Human Resource Management, forthcoming). IHRM should be studied within the context of changing economic and business conditions. The dynamics of both the local/regional and international/global business context in which the firm operates should be given serious consideration. Further, it could be beneficial to study IHRM within the context of the industry and the firm's strategy and its other functional areas and operations. In taking these perspectives, one needs to use multiple levels of analysis when studying IHRM: the external social, political, cultural and economic environment; the industry; the firm; the sub-unit; the group; and the individual. Research in contextual isolation is misleading: it fails to advance understanding in any significant way (Adler, N.J. and Ghadar, E. 1990. Strategic human resource management: a global perspective. Human Resource Management in International Comparison. Berlin: de Gruyter; Locke, R. and Thelen, K. 1995. Apples and oranges revisited: contextualized comparisons and the study of comparative labor politics. Politics & Society, 23: 337–367). In this paper, we attempt to review the existing state of academic work in IHRM and illustrate how it incorporates the context and how it might be expanded to do so.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this study is to explore the nature of human resource management in publicly listed finance sector companies in Nepal. In particular, it explores the extent to which HR practice is integrated into organisational strategy and devolved to line management. Design/methodology/approach: A structured interview was conducted with the senior executive responsible for human resource management in 26 commercial banks and insurance companies in Nepal. Findings: The degree of integration of HR practice appears to be increasing within this sector, but this is dependent on the maturity of the organisations. The devolvement of responsibility to line managers is at best partial, and in the case of the insurance companies, it is more out of necessity due to the absence of a strong central HR function. Research limitations/implications: The survey is inevitably based on a small sample; however, this represents 90 per cent of the relevant population. The data suggest that Western HR is making inroads into more developed aspects of Nepalese business. Compared with Nepalese business as a whole, the financial sector appears relatively Westernised, although Nepal still lags India in its uptake of HR practices. Practical implications: It appears unlikely from a cultural perspective that the devolvement of responsibility will be achieved as a result of HR strategy. National cultural, political and social factors continue to be highly influential in shaping the Nepalese business environment. Originality/value: Few papers have explored HR practice in Nepal. This paper contributes to the overall assessment of HR uptake globally and highlights emic features impacting on that uptake. © Emerald Group Publishing Limited.

Relevance:

30.00%

Publisher:

Abstract:

This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice that may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. It has been accepted by a growing number of organisations that the precise handling of information and knowledge is a significant factor in facilitating their success, but that there is a challenge in how to implement a strategy and processes for this handling. It is here, in the particular area of knowledge process handling, that we can see the contributions of operational researchers most clearly, as is illustrated in the papers included in this journal edition. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein both concentrate on knowledge transfer.
Edwards and Kidd discuss the issues involved in transferring knowledge across borders of various kinds, from those within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community, and the other from outside it. Hafeez and Abdelmeguid present the nearest to a 'hard' OR contribution among the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to any other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum. They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes into account scale effects.
Perhaps more interestingly, many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry: integrated circuit design and fabrication. Hasan and Crawford's interest lies in the holistic approach to knowledge management. They demonstrate their argument, that there is no simple IT solution for organisational knowledge management efforts, through two case study investigations. These case studies, in Australian universities, are investigated through cultural historical activity theory, which focuses the study on the activities that are carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS, and in this particular context, KMS. It is this argument that places the available technology into the knowledge activity process and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice.
Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs (the operations function) and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, terms the operational method knowledge logistics, and calls the method of transformation learning. This theoretical paper discusses the operational management of four types of knowledge objects (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. Her case concerns the effective use of real estate (property and buildings) within a Fortune 100 company. In order to manage the knowledge about this property, and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.
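The kind of stock-flow reasoning behind Hafeez and Abdelmeguid's system dynamics treatment of a skills gap can be sketched in a few lines. The single-stock structure, the function name and every parameter below are hypothetical illustrations of the general technique, not a reconstruction of their models:

```python
def simulate_skills_gap(target, initial, hire_fraction, attrition, steps):
    """Minimal stock-flow model: the skill stock rises by hiring/training
    proportional to the current gap and falls by attrition each period."""
    stock, history = initial, []
    for _ in range(steps):
        gap = target - stock
        stock += hire_fraction * gap - attrition * stock
        history.append(stock)
    return history

# Hypothetical parameters: close 30% of the gap per period, lose 5% of
# the current skill stock to staff turnover each period.
path = simulate_skills_gap(target=100.0, initial=20.0,
                           hire_fraction=0.3, attrition=0.05, steps=40)
```

One property such a model makes visible immediately: with any non-zero attrition the stock settles below the target (here at 0.3 x 100 / 0.35, roughly 85.7), so the gap is never fully closed by proportional hiring alone.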

Relevance:

30.00%

Publisher:

Abstract:

Study Design. The influence of mechanical load on pleiotrophin (PTN) and aggrecan expression by intervertebral disc (IVD) cells, and the effects of disc cell conditioned medium on endothelial cell migration, was investigated. Objective. To examine possible interactions of mechanical loads and known pro- and antiangiogenic factors, which may regulate disc angiogenesis during degeneration. Summary of Background Data. Pleiotrophin expression can be influenced by mechanical stimulation and has been associated with disc vascularization. Disc aggrecan inhibits endothelial cell migration, suggesting an antiangiogenic role. A possible interplay between these factors is unknown. Methods. The influence of the respective predominant load (cyclic strain for anulus fibrosus and hydrostatic pressure for nucleus pulposus cells) on PTN and aggrecan expression by IVD cells was determined by real-time RT-PCR and Western blotting (PTN only). The effects of IVD cell conditioned medium on endothelial cell migration were analyzed in a bioassay using human microvascular endothelial (HMEC-1) cells. Results. Application of both mechanical loads resulted in significant alterations of gene expression of PTN (+67%, P = 0.004 in anulus cells; +29%, P = 0.03 in nucleus cells) and aggrecan (+42%, P = 0.03 in anulus cells; -25%, P = 0.03 in nucleus cells). These effects depended on the cell type, the applied load and the timescale. Conditioned media of nucleus pulposus cells enhanced HMEC-1 migration, but this effect was diminished after 2.5 MPa hydrostatic pressure, when aggrecan expression was diminished, but not after 0.25 MPa, when expression levels were unchanged. Conclusion. Mechanical loading influences PTN expression by human IVD cells. Conditioned media from nucleus pulposus cell cultures stimulated HMEC-1 endothelial cell migration.
This study demonstrates that the influence of mechanical loads on vascularization of the human IVD is likely to be complex and does not correlate simply with altered expression of known pro- and antiangiogenic factors.

Relevance:

30.00%

Publisher:

Abstract:

There have been two main approaches to feature detection in human and computer vision: luminance-based and energy-based. Bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of elements in a 3-element contour-alignment task? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and Fourier components in a given image have a common phase. Observers judged whether the centre element (e.g. ±45° phase) was to the left or right of the flanking pair (e.g. 0° phase). Lateral offset of the centre element was varied to find the point of subjective alignment from the fitted psychometric function. This point shifted systematically to the left or right according to the sign of the centre phase, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks and other derivative-based features, but not by energy peaks, which (by design) predicted no shift at all. These results on contour alignment agree well with earlier ones from a more explicit feature-marking task, and strongly suggest that human vision does not use local energy peaks to locate basic first-order features. [Supported by the Wellcome Trust (ref: 056093)]
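The step of reading a point of subjective alignment off a fitted psychometric function can be sketched as follows. The logistic form, the grid-search fit and the synthetic left/right counts are all assumptions for illustration; the abstract does not specify which function or fitting method was used:

```python
import math

def fit_pse(offsets, n_right, n_total):
    """Fit a logistic psychometric function P('right') = 1/(1+exp(-(x-pse)/s))
    to binary judgments by a coarse maximum-likelihood grid search, and
    return the point of subjective alignment (the 50% point, pse)."""
    best = (None, None, float("inf"))
    for pse in (i / 100 for i in range(-500, 501)):   # candidate PSEs
        for s in (j / 10 for j in range(1, 31)):      # candidate slopes
            nll = 0.0
            for x, r, n in zip(offsets, n_right, n_total):
                p = 1.0 / (1.0 + math.exp(-(x - pse) / s))
                p = min(max(p, 1e-9), 1 - 1e-9)       # guard log(0)
                nll -= r * math.log(p) + (n - r) * math.log(1 - p)
            if nll < best[2]:
                best = (pse, s, nll)
    return best[0]

# Synthetic observer whose subjective alignment is shifted by about +1 unit:
offsets = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
n_right = [0, 0, 1, 2, 4, 10, 18, 20, 20]
n_total = [20] * 9
pse = fit_pse(offsets, n_right, n_total)
```

A systematic shift of `pse` away from zero with the sign of the centre phase is exactly the pattern the abstract reports.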

Relevance:

30.00%

Publisher:

Abstract:

Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. For achieving high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies, such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems.
Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are given to show how the tool works in the early design phase for fault prevention before the program is ever run.
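The core of such a simulator, exploring the reachability graph of a Petri net and flagging markings with no enabled transition, can be sketched in miniature. The two-process net below is a hypothetical crossed-resource example (names and structure are illustrative, not taken from the thesis):

```python
from collections import deque

def reachability(places, transitions, initial):
    """Breadth-first search of a Petri net's reachability graph.
    transitions maps a name to (consume, produce) dicts of place -> tokens.
    Markings with no enabled transition are collected as terminal states:
    these include both proper completion and genuine deadlocks, which the
    user then distinguishes by inspection."""
    idx = {p: i for i, p in enumerate(places)}
    start = tuple(initial.get(p, 0) for p in places)
    seen, terminal, queue = {start}, [], deque([start])
    while queue:
        marking = queue.popleft()
        enabled = False
        for consume, produce in transitions.values():
            if all(marking[idx[p]] >= n for p, n in consume.items()):
                enabled = True
                nxt = list(marking)
                for p, n in consume.items():
                    nxt[idx[p]] -= n
                for p, n in produce.items():
                    nxt[idx[p]] += n
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if not enabled:
            terminal.append(marking)
    return seen, terminal

# Two processes each acquire their own resource, then need the other's:
# one interleaving completes, another reaches the classic circular wait.
places = ["r1", "r2", "p1_start", "p2_start",
          "p1_has_r1", "p2_has_r2", "p1_done", "p2_done"]
transitions = {
    "p1_takes_r1": ({"r1": 1, "p1_start": 1}, {"p1_has_r1": 1}),
    "p1_takes_r2": ({"r2": 1, "p1_has_r1": 1},
                    {"p1_done": 1, "r1": 1, "r2": 1}),
    "p2_takes_r2": ({"r2": 1, "p2_start": 1}, {"p2_has_r2": 1}),
    "p2_takes_r1": ({"r1": 1, "p2_has_r2": 1},
                    {"p2_done": 1, "r1": 1, "r2": 1}),
}
markings, terminal = reachability(
    places, transitions, {"r1": 1, "r2": 1, "p1_start": 1, "p2_start": 1})
```

Here the search visits nine markings and reports two terminal ones: the completed run (both `done` places marked) and the deadlocked marking in which each process holds one resource and waits for the other, which is precisely the 'deadlock potential' the tool is meant to expose before the program is run.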

Relevance:

30.00%

Publisher:

Abstract:

A series of antioxidants was used to explore the cytotoxicity of one particularly toxic antimycobacterial 2-pyridylcarboxamidrazone anti-tuberculosis agent against human mononuclear leucocytes (MNL), in comparison with isoniazid (INH), to aid future compound design. INH caused a significant reduction of nearly 40% in cell recovery compared with control (P < 0.0001), although co-incubation with either glutathione (GSH, 1 mM) or N-acetylcysteine (NAC, 1 mM) abolished INH toxicity. In contrast, the addition of GSH or NAC 1 h after INH failed to protect the cells from INH toxicity (P < 0.0001). The 2-pyridylcarboxamidrazone 'Compound 1' caused a 50% reduction in cell recovery compared with control (P < 0.001), although this was abolished by the presence of either GSH or NAC. A 1 h post-incubation with either NAC or GSH after Compound 1 addition failed to protect the cells from toxicity (P < 0.001). Co-administration of lipoic acid (LA) abolished Compound 1-mediated toxicity, although again, this effect did not occur after LA addition 1 h post-incubation with Compound 1 (P < 0.001). However, co-administration of dihydrolipoic acid (DHLA) prevented Compound 1-mediated cell death when incubated with the compound and also after 1 h of Compound 1 alone. Pre-treatment with GSH, then removal of the antioxidant, resulted in abolition of Compound 1 toxicity (vehicle control, 63.6 ± 16.7 versus Compound 1 alone, 26.1 ± 13.6% versus GSH pre-treatment, 65.7 ± 7.3%). In a cell-free incubation, NMR analysis revealed that GSH does not react with Compound 1, indicating that this agent is not likely to deplete membrane thiols directly. Compound 1's MNL toxicity is more likely to be linked with changes in cell membrane conformation, which may induce consequent thiol depletion that is reversible by exogenous thiols. © 2004 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This research was concerned with identifying factors which may influence human reliability within chemical process plants - these factors are referred to as Performance Shaping Factors (PSFs). Following a period of familiarization within the industry, a number of case studies were undertaken covering a range of basic influencing factors. Plant records and site 'lost time incident reports' were also used as supporting evidence for identifying and classifying PSFs. In parallel to the investigative research, the available literature appertaining to human reliability assessment and PSFs was considered in relation to the chemical process plant environment. As a direct result of this work, a PSF classification structure has been produced with an accompanying detailed listing. Phase two of the research considered the identification of important individual PSFs for specific situations. Based on the experience and data gained during phase one, it emerged that certain generic features of a task influenced PSF relevance. This led to the establishment of a finite set of generic task groups and response types. Similarly, certain PSFs influence some human errors more than others. The result was a set of error type keywords, plus the identification and classification of error causes with their underlying error mechanisms. By linking all these aspects together, a comprehensive methodology has been put forward as the basis of a computerized aid for system designers. To recapitulate, the major results of this research have been: one, the development of a comprehensive PSF listing specifically for the chemical process industries, with a classification structure that facilitates future updates; and two, a method for identifying relevant PSFs and their order of priority. Future requirements are the evaluation of the PSF listing and of the identification method. The latter must be considered both in terms of 'usability' and its success as a design enhancer, in terms of an observable reduction in important human errors.
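The linkage at the heart of such a computerized aid, from a task group and error type to a priority-ordered list of PSFs, amounts to a small lookup structure. The table entries, names and weights below are entirely hypothetical; only the shape of the mapping follows the methodology described:

```python
# Hypothetical fragment of a PSF knowledge base: each (generic task group,
# error type keyword) pair maps to candidate PSFs with a priority weight.
PSF_TABLE = {
    ("monitoring", "omission"): [("display layout", 0.9),
                                 ("alarm salience", 0.8),
                                 ("fatigue", 0.6)],
    ("manual control", "commission"): [("control-display compatibility", 0.9),
                                       ("time pressure", 0.7)],
}

def relevant_psfs(task_group, error_type, top=3):
    """Return the PSFs judged relevant to this task/error combination,
    ordered by priority weight (highest first)."""
    entries = PSF_TABLE.get((task_group, error_type), [])
    return [name for name, _ in sorted(entries, key=lambda e: -e[1])[:top]]

ranked = relevant_psfs("monitoring", "omission")
```

A designer querying the aid for a monitoring task prone to omission errors would see display layout first, reflecting the thesis's point that PSF relevance, and its order of priority, is conditioned on the generic features of the task.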

Relevance:

30.00%

Publisher:

Abstract:

Owing to the rise in the volume of literature, problems arise in the retrieval of required information. Various retrieval strategies have been proposed, but most of them are not flexible enough for their users. Specifically, most of these systems assume that users know exactly what they are looking for before approaching the system, and that users are able to express their information needs precisely, according to laid-down specifications. A retrieval program, THOMAS, has however been described which aims at satisfying incompletely-defined user needs through a man-machine dialogue that does not require any rigid queries. Unlike most systems, THOMAS attempts to satisfy the user's needs from a model which it builds of the user's area of interest. This model is a subset of the program's 'world model' - a database in the form of a network whose nodes represent concepts. Since various concepts have varying degrees of similarity and association, this thesis contends that instead of models which assume equal levels of similarity between concepts, the links between the concepts should have values assigned to them to indicate the degree of similarity between the concepts. Furthermore, the world model of the system should be structured such that concepts which are related to one another are clustered together, so that a user interaction involves only the relevant clusters rather than the entire database, such clusters being determined by the system, not the user. This thesis also attempts to link the design work with the current notion in psychology centred on the use of the computer to simulate human cognitive processes. In this case, an attempt has been made to model a dialogue between two people - the information seeker and the information expert. The system, called THOMAS-II, has been implemented and found to require less effort from the user than THOMAS.
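The proposed world-model structure, weighted similarity links between concept nodes, with related concepts grouped so that a dialogue touches only the relevant cluster, can be sketched as follows. The concept names, weights and threshold are hypothetical, and the clustering here is simply connected components of the thresholded network, one plausible reading of "clusters determined by the system":

```python
def clusters(weights, threshold):
    """Group concepts into clusters by linking any two whose similarity
    weight meets the threshold, then taking connected components, so a
    user interaction need only involve the relevant cluster."""
    # weights: dict of frozenset({a, b}) -> similarity value in [0, 1]
    nodes, adj = set(), {}
    for pair, w in weights.items():
        a, b = tuple(pair)
        nodes |= {a, b}
        if w >= threshold:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, out = set(), []
    for n in sorted(nodes):
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:                       # depth-first component walk
            c = stack.pop()
            if c not in comp:
                comp.add(c)
                stack.extend(adj.get(c, ()))
        seen |= comp
        out.append(comp)
    return out

# Hypothetical similarity weights between concepts in the world model.
w = {
    frozenset({"retrieval", "indexing"}): 0.8,
    frozenset({"retrieval", "dialogue"}): 0.6,
    frozenset({"cooking", "recipes"}): 0.9,
    frozenset({"cooking", "retrieval"}): 0.1,
}
groups = clusters(w, threshold=0.5)
```

With these weights the weak cooking/retrieval link falls below threshold, so a query about retrieval would engage only the three-concept cluster and never touch the cooking material, which is the efficiency argument the thesis makes against interacting with the entire database.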

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to possible problems of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions alone. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration by experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied using text; the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition practices, rather than by error-handling dialogues.
This method of error correction is held to be non-intrusive to primary command and control operations. This thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.

Relevance:

30.00%

Publisher:

Abstract:

This research thesis is concerned with the human factors aspects of industrial alarm systems within human supervisory control tasks. Typically such systems are located in central control rooms, and the information may be presented via visual display units. The thesis develops a human-centred, rather than engineering-centred, approach to the assessment, measurement and analysis of the situation. A human factors methodology was employed to investigate the human requirements through interviews, questionnaires, observation and controlled experiments. Based on the analysis of current industrial alarm systems in a variety of domains (power generation, manufacturing and coronary care), it is suggested that designers often do not pay due consideration to the human requirements, and that most alarm systems have severe shortcomings in human factors terms. The interviews, questionnaires and observations led to the proposal of 'alarm initiated activities' as a framework for the research to proceed. The framework comprises six main stages: observe, accept, analyse, investigate, correct and monitor. This framework served as a basis for laboratory research into alarm media. Under consideration were speech-based alarm displays and visual alarm displays; non-speech auditory displays were the subject of a literature review. The findings suggest that care needs to be taken when selecting the alarm media. Ideally the medium should be chosen to support the task requirements of the operator, rather than being arbitrarily assigned. It was also indicated that there may be some interference between the alarm initiated activities and the alarm media, i.e. information that supports one particular stage of alarm handling may interfere with another.
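The six-stage 'alarm initiated activities' framework can be sketched as a simple ordered state machine. Only the stage names come from the thesis; the sequential transitions, and the loop from monitor back to observe when an alarm condition recurs, are assumptions for illustration:

```python
# The six alarm-initiated activity stages, modelled as an ordered cycle.
STAGES = ["observe", "accept", "analyse", "investigate", "correct", "monitor"]

class AlarmHandler:
    def __init__(self):
        self.stage = 0  # start at 'observe'

    def advance(self):
        """Move to the next stage; after 'monitor' the handler loops back
        to 'observe', ready for a recurring alarm condition."""
        self.stage = (self.stage + 1) % len(STAGES)
        return STAGES[self.stage]

h = AlarmHandler()
visited = [STAGES[0]] + [h.advance() for _ in range(5)]
```

Making the stages explicit like this is one way to check a candidate alarm medium against the framework: each stage becomes a point at which to ask whether the chosen medium supports, or interferes with, that step of alarm handling.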

Relevance:

30.00%

Publisher:

Abstract:

Tuberculosis is one of the most devastating diseases in the world primarily due to several decades of neglect and an emergence of multidrug-resitance strains (MDR) of M. tuberculosis together with the increased incidence of disseminated infections produced by other mycobacterium in AIDS patients. This has prompted the search for new antimycobacterial drugs. A series of pyridine-2-, pyridine-3-, pyridine-4-, pyrazine and quinoline-2-carboxamidrazone derivatives and new classes of carboxamidrazone were prepared in an automated fashion and by traditional synthesis. Over nine hundred synthesized compounds were screened for their anti mycobacterial activity against M. fortutium (NGTG 10394) as a surrogate for M. tuberculosis. The new classes of amidrazones were also screened against tuberculosis H37 Rv and antimicrobial activities against various bacteria. Fifteen tested compounds were found to provide 90-100% inhibition of mycobacterium growth of M. tuberculosis H37 Rv in the primary screen at 6.25 μg mL-1. The most active compound in the carboxamidrazone amide series had an MIG value of 0.1-2 μg mL-1 against M. fortutium. The enzyme dihydrofolate reductase (DHFR) has been a drug-design target for decades. Blocking of the enzymatic activity of DHFR is a key element in the treatment of many diseases, including cancer, bacterial and protozoal infection. The x-ray structure of DHFR from M. tuberculosis and human DHFR were found to have differences in substrate binding site. The presence of glycerol molecule in the Xray structure from M. tuberculosis DHFR provided opportunity to design new antifolates. The new antifolates described herein were designed to retain the pharmcophore of pyrimethamine (2,4- diamino-5(4-chlorophenyl)-6-ethylpyrimidine), but encompassing a range of polar groups that might interact with the M. tuberculosis DHFR glycerol binding pockets. 
Finally, the research described in this thesis contributes to the preparation of molecularly imprinted polymers (MIPs) for the recognition of 2,4-diaminopyrimidine as the binding target. The formation of hydrogen bonds between the model functional monomer 5-(4-tert-butylbenzylidene)-pyrimidine-2,4,6-trione and 2,4-diaminopyrimidine in the pre-polymerisation stage was verified by 1H-NMR studies. Having proven that 2,4-diaminopyrimidine interacts strongly with the model 5-(4-tert-butylbenzylidene)-pyrimidine-2,4,6-trione, 2,4-diaminopyrimidine-imprinted polymers were prepared using a novel cyclobarbital-derived functional monomer, acrylic acid 4-(2,4,6-trioxo-tetrahydro-pyrimidin-5-ylidenemethyl)phenyl ester, capable of multiple hydrogen bond formation with 2,4-diaminopyrimidine. The recognition properties of the respective polymers toward the template and other test compounds were evaluated by fluorescence. The results demonstrate that the polymers showed dose-dependent enhancement of fluorescence emission, and also indicate that the synthesised MIPs have higher 2,4-diaminopyrimidine binding ability than the corresponding non-imprinted polymers.

Resumo:

The recent explosive growth in advanced manufacturing technology (AMT) and continued development of sophisticated information technologies (IT) is expected to have a profound effect on the way we design and operate manufacturing businesses. Furthermore, the escalating capital requirements associated with these developments have significantly increased the level of risk associated with initial design, ongoing development and operation. This dissertation has examined the integration of two key sub-elements of the Computer Integrated Manufacturing (CIM) system, namely the manufacturing facility and the production control system. The research has concentrated on the interactions between production control (MRP) and an AMT-based production facility. The disappointing performance of such systems is discussed in the context of a number of potential technological and performance incompatibilities between these two elements. It is argued that the design and selection of operating policies for both are the key to successful integration, and policy decisions are shown to play an important role in matching the performance of the total system to the demands of the marketplace. It is demonstrated that a holistic approach to policy design must be adopted if successful integration is to be achieved, and that the complexity of the issues resulting from such an approach required the formulation of a structured design methodology. Such a methodology was subsequently developed and discussed; it combined a first-principles approach to the behaviour of system elements with the specification of a detailed holistic model for use in the policy design environment. The methodology aimed to make full use of the 'low inertia' characteristics of AMT, whilst adopting a JIT configuration of MRP and re-coupling the total system to market demands.
This dissertation discusses the application of the methodology to an industrial case study and the subsequent design of operational policies. A novel approach to production control resulted, a central feature of which was a move toward reduced manual intervention in the MRP processing and scheduling logic, with increased human involvement and motivation in the management of work-flow on the shopfloor. Experimental results indicated that significant performance advantages would result from the adoption of the recommended policy set.