33 results for "techniques of interview"


Relevance:

90.00%

Publisher:

Abstract:

The aim of the research is to develop an e-business selection framework for small and medium enterprises (SMEs) by integrating established planning techniques. The research is case based, comprising four case studies carried out in the printing industry to evaluate the framework. Two of the companies are from Singapore, while the other two are from Guangzhou and Jinan, China. To establish the need for an e-business selection framework for SMEs, extensive literature reviews were carried out in the areas of e-business, business planning frameworks, SMEs and the printing industry. An e-business selection framework is then proposed by integrating three established techniques: the Balanced Scorecard (BSC), Value Chain Analysis (VCA) and Quality Function Deployment (QFD). The newly developed selection framework was pilot tested on a published case study before actual evaluation was carried out in the four case study companies. The case study methodology was chosen for its ability to integrate the diverse data collection techniques required to generate the BSC, VCA and QFD for the selection framework. The findings of the case studies revealed that the three techniques of BSC, VCA and QFD can be integrated seamlessly to complement each other's strengths in e-business planning. The eight-step methodology of the selection framework provides SMEs with a step-by-step approach to e-business through structured planning. The project has also provided a better understanding of, and deeper insights into, SMEs in the printing industry.

Relevance:

90.00%

Publisher:

Abstract:

Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties, mainly: (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as of a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined using the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness do not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
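The perturb-and-update idea in the abstract above can be illustrated in a few lines. What follows is a hedged sketch, not the thesis's algorithm or data: a hypothetical two-degree-of-freedom lumped mass-spring model with a known diagonal mass matrix, whose single stiffness parameter is recovered from "measured" eigenvalues by iterative first-order sensitivity corrections.

```python
import numpy as np

M = np.diag([1.0, 2.0])               # known diagonal mass matrix

def K(k):
    # hypothetical chain of two springs of stiffness k: ground-m1-m2
    return np.array([[2 * k, -k], [-k, k]])

def eigvals(k):
    # eigenvalues of M^-1 K, sorted, taken as the model eigen-data
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K(k))).real)

k_true = 5.0
lam_meas = eigvals(k_true)            # stands in for vibration test data

k = 3.0                               # initial parameter estimate
for _ in range(20):
    lam = eigvals(k)
    dk = 1e-6
    S = (eigvals(k + dk) - lam) / dk  # sensitivity d(lambda)/dk
    # least-squares correction from the eigenvalue residuals
    k += (S @ (lam_meas - lam)) / (S @ S)

print(round(k, 4))
```

Because the eigenvalues of this toy model are linear in k, the update converges immediately; in the multi-parameter finite element case the same loop uses a sensitivity matrix and, per the abstract, Bayesian regularisation against the initial estimates.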

Relevance:

90.00%

Publisher:

Abstract:

This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for this model was demonstrated by the growth of local economic initiatives during the 1970s and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of the research involved confirming that the potential for micro-economic and spatial initiatives existed. This was done by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap, providing a low-budget model, capable of implementation at the small area level, that generates a vast array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
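The thesis does not publish its coefficients, but the "final demand to industrially disaggregated employment" logic it describes is of the input-output kind. A textbook Leontief calculation, with entirely illustrative two-industry numbers, looks like this:

```python
import numpy as np

# Illustrative inter-industry technical coefficients (assumed, not thesis data)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
f = np.array([100.0, 50.0])            # assumed final demand by industry

# Gross output needed to satisfy final demand: x = (I - A)^-1 f
x = np.linalg.solve(np.eye(2) - A, f)

e = np.array([0.05, 0.08])             # assumed employment per unit of output
jobs = e * x                           # employment dependent on final demand

print(np.round(x, 2), np.round(jobs, 2))
```

Scaling the final demand vector f then simulates the small-area employment impact of an exogenous shock, which is the kind of exercise the abstract describes.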

Relevance:

90.00%

Publisher:

Abstract:

This thesis attempts a psychological investigation of hemispheric functioning in developmental dyslexia. Previous work using neuropsychological methods with developmental dyslexics is reviewed, and original work is presented, both of a conventional psychometric nature and also utilising a new means of intervention. At the inception of inquiry into dyslexia, comparisons were drawn between developmental dyslexia and acquired alexia, promoting a model of brain damage as the common cause. Subsequent investigators found developmental dyslexics to be neurologically intact, and so an alternative hypothesis was offered, namely that language is abnormally localized (not in the left hemisphere). Research in the last decade, using the advanced techniques of modern neuropsychology, has indicated that developmental dyslexics are probably left hemisphere dominant for language. The development of a new type of pharmaceutical preparation (that appears to have a left hemisphere effect) offers an opportunity to test the experimental hypothesis. This hypothesis propounds that most dyslexics are left hemisphere language dominant, but that some of these language-related operations are dysfunctioning. The methods utilised are those of psychological assessment of cognitive function, both in a traditional psychometric situation and with a new form of intervention (Piracetam). The information resulting from intervention is judged on its therapeutic validity and its contribution to the understanding of hemispheric functioning in dyslexics. The experimental studies using conventional psychometric evaluation revealed a dyslexic profile of poor sequencing and name coding ability, with adequate spatial and verbal reasoning skills. Neuropsychological information would tend to suggest that this profile was indicative of adequate right hemisphere abilities and deficits in some left hemisphere abilities.
When an intervention agent (Piracetam) was used with young adult dyslexics, there were improvements in both the rate of acquisition and the conservation of verbal learning. An experimental study with dyslexic children revealed that Piracetam appeared to improve reading, writing and sequencing, but did not influence spatial abilities. This would seem to accord with other recent findings that developmental dyslexics may have left hemisphere language localisation, although some of these language-related abilities are dysfunctioning.

Relevance:

90.00%

Publisher:

Abstract:

Phosphonoformate and phosphonoacetate are effective antiviral agents; however, they are charged at physiological pH, and as such their penetration into cells and diffusion across the blood-brain barrier is limited. In an attempt to increase the lipophilicity and improve the transport properties of these molecules, prodrugs were synthesised and their stabilities and reconversion to the parent compound subsequently investigated by the techniques of 31P nuclear magnetic resonance spectroscopy and high performance liquid chromatography. A series of 4-substituted dibenzyl (methoxycarbonyl)phosphonates were prepared and found to be hydrolytically unstable, giving predominantly the diesters, benzyl (methoxycarbonyl)phosphonates. This instability arose from the electron-withdrawing effect of the carbonyl group promoting nucleophilic attack at phosphorus. It was possible to influence the mechanism and, to some extent, the rate of hydrolysis of the phosphonoformate triesters to the diesters by varying the electronic nature of the substituent in the 4-position of the aromatic ring. Strongly electron-withdrawing groups increased the sensitivity of phosphorus to nucleophilic attack, thus promoting P-O bond cleavage and rapid hydrolysis. Conversely, weakly electron-withdrawing substituents encouraged C-O bond fission, presumably through resonance stabilisation of the benzyl carbonium ion. The loss of the protecting group on phosphorus was in competition with nucleophilic attack at the carbonyl group, resulting in P-C bond cleavage with dibenzyl phosphite formation. The high instability and P-C bond fission make triesters unsuitable prodrug forms of phosphonoformate. A range of chemically stable triesters of phosphonoacetate were synthesised and their bioactivation investigated. Di(benzoyloxymethyl) (methoxycarbonylmethyl)phosphonates degraded to the relevant benzoyloxymethyl (methoxycarbonylmethyl)phosphonate in the presence of esterase.
The enzymatic activation was restricted to the removal of only one protecting group from phosphorus, most likely due to the close proximity of the benzoyloxy ester function to the anionic charge on the diester. However, in similar systems, di(4-alkanoyloxybenzyl) (methoxycarbonylmethyl)phosphonates degraded in the presence of esterase with the loss of both protecting groups on phosphorus to give the monoester, (methoxycarbonylmethyl)phosphonate, via the intermediary of the unstable 4-hydroxybenzyl esters. The methoxycarbonyl function remained intact. The rate of enzymatic hydrolysis and subsequent removal of the protecting groups on phosphorus was dependent on the nature of the alkanoyl group and was most rapid for the 4-n-butanoyloxybenzyl and 4-iso-butanoyloxybenzyl esters of phosphonoacetate. This provides a strategy for the design of a prodrug with sufficient stability in plasma to reach the central nervous system in high concentration, wherein rapid metabolism to the active drug by brain-associated enzymes occurs.

Relevance:

90.00%

Publisher:

Abstract:

The ability of Escherichia coli to express the K88 fimbrial adhesin was satisfactorily indicated by the combined techniques of ELISA, haemagglutination and latex agglutination. Detection of expression by electron microscopy and by the ability to metabolize raffinose was unsuitable. Quantitative expression of the K88 adhesin was determined by ELISA. Expression was found to vary according to the E. coli strain examined and the media type and form. In general, it was found that the total amount was greater, while the amount/cfu was less, on agar than in broth cultures. Expression of the K88 adhesin during unshaken batch culture was related to the growth rate and was maximal during late logarithmic to early stationary phase. A combination of heat extraction, ammonium sulphate and isoelectric precipitation was found suitable for both large and small scale preparation of purified K88ab adhesin. Extraction of the K88 adhesin was sensitive to pH, and it was postulated that this may affect the site of colonisation by ETEC in vivo. Results of haemagglutination experiments were consistent with the hypothesis that the K88 receptor present on erythrocytes is composed of two elements, one responsible for the binding of K88ab and K88ac and a second responsible for the binding of the K88ad adhesin. Comparison of the haemagglutinating properties of cell-free and cell-bound K88 adhesin revealed some differences, probably indicating a minor conformational change in the K88 adhesin on its isolation. The K88ab adhesin was found to bind to erythrocytes over a wide pH range (pH 4-9) and was inhibited by αK88ab and αK88b antisera. Inhibition of haemagglutination was noted with crude heparin, mannan, porcine gastric mucin, chondrosine and several hexosamines, glucosamine in particular. The most potent inhibitor of haemagglutination was n-dodecyl-β-D-glucopyranoside, one of a series of glucosides found to have inhibitory properties.
Correlation between the hydrophobicity of the glucosides tested and the degree of inhibition observed suggested that hydrophobic forces were important in the interaction of the K88 adhesin with its receptor. The results of Scatchard and Hill plots indicated that binding of the K88ab adhesin to porcine enterocytes is, in the majority of cases, a two-step, three-component system. The first K88 receptor (or site) had a Ka of 1.59 × 10¹⁴ M⁻¹ and a minimum of 4.3 × 10⁴ sites/enterocyte. The second receptor (or site) had a Ka of 4.2 × 10¹² M⁻¹, with a calculated 1.75 × 10⁵ sites/enterocyte. Attempts to inhibit binding of cell-free K88 adhesin to porcine enterocytes by lectins were unsuccessful. However, several carbohydrates, including trehalose, lactulose, galactose 1→4 mannopyranoside, chondrosine, galactosamine, stachyose and mannan, were inhibitory. The most potent inhibitor was found to be porcine gastric mucin. Inhibition observed with n-octyl-α-D-glucopyranose was difficult to interpret in isolation because of interference with the assay; however, it agreed with the results of the haemagglutination inhibition experiments.

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. 
Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
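The Dempster-Shafer combination used above can be shown concretely. This is a minimal sketch of Dempster's rule over a two-class frame; the class names and mass values are illustrative, not the study's actual biological classes or elicited measures:

```python
from itertools import product

def dempster(m1, m2):
    # m1, m2: basic probability assignments, mapping frozenset focal
    # elements to mass. Returns their combination under Dempster's rule.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass committed to the empty set
    # normalise away the conflicting mass
    return {s: w / (1 - conflict) for s, w in combined.items()}

good, poor = frozenset({'good'}), frozenset({'poor'})
frame = good | poor                      # the frame of discernment

# evidence from two hypothetical taxon "sensors"
m1 = {good: 0.6, frame: 0.4}             # simple support for 'good'
m2 = {good: 0.5, poor: 0.2, frame: 0.3}

m = dempster(m1, m2)
print(round(m[good], 3), round(m[poor], 3))
```

The normalisation step is where evidential conflict is absorbed, which is why the behaviour of the calculus under conflict, examined in the study, matters for classification performance.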

Relevance:

90.00%

Publisher:

Abstract:

The work utilising a new material for contact lenses has fallen into three parts. Physiological considerations: since the cornea is devoid of blood vessels, its oxygen is derived from the atmosphere. Early hydrophilic gel contact lenses interrupted the flow of oxygen, and corneal insult resulted. Three techniques of fenestration were tried to overcome this problem. High-speed drilling with 0.1 mm diameter twist drills was found to be mechanically successful, but under clinical conditions mucous blockage of the fenestrations occurred. An investigation was made into the amount of oxygen arriving at the corneal interface, related to gel lens thickness. The results indicated an improvement in corneal oxygen as lens thickness was reduced. The mechanism is thought to be a form of mechanical pump. A series of clinical studies confirmed the experimental work, the use of thin lenses removing the symptoms of corneal hypoxia. Design: the parameters of lens back curvature, lens thickness and lens diameter have been isolated and related to three criteria of vision: (a) visual acuity, (b) visual stability and (c) induced astigmatism. From the results achieved, a revised and basically successful design of lens has been developed. Comparative study: the developed form of lens was compared with traditional lenses in a controlled survey. Twelve factors were assessed over a twenty-week period of wear using a total of eighty-four patients. The results of this study indicate that, whilst the expected changes were noted with the traditional lens wearers, gel lens wearers showed no discernible change in any of the factors measured, with the exception of one parameter. In addition to a description of the completed work, further investigations are suggested which, it is hoped, would further improve the optical performance of gel lenses.

Relevance:

90.00%

Publisher:

Abstract:

The Report of the Robens Committee (1972), the Health and Safety at Work Act (1974) and the Safety Representatives and Safety Committees Regulations (1977) provide the framework within which this study of certain aspects of health and safety is carried out. The philosophy of self-regulation is considered and its development is set within an historical and an industrial relations perspective. The research uses a case study approach to examine the effectiveness of self-regulation in health and safety in a public sector organisation. Within this approach, methodological triangulation employs the techniques of interviews, questionnaires, observation and documentary analysis. The work is based in four departments of a Scottish Local Authority, and particular attention is given to three of the main 'agents' of self-regulation - safety representatives, supervisors and safety committees - and to their interactions, strategies and effectiveness. A behavioural approach is taken in considering the attitudes, values, motives and interactions of safety representatives and management. Major internal and external factors, which interact and which influence the effectiveness of joint self-regulation of health and safety, are identified. It is emphasised that an organisation cannot be studied without consideration of the context within which it operates, both locally and in the wider environment. One of these factors, organisational structure, is described as bureaucratic, and the model of a Representative Bureaucracy described by Gouldner (1954) is compared with findings from the present study. An attempt is made to ascertain how closely the Local Authority fits Gouldner's model. This research contributes both to knowledge and to theory in the subject area by providing an in-depth study of self-regulation in a public sector organisation, which, when compared with such studies as those of Beaumont (1980, 1981, 1982), highlights some of the differences between the public and private sectors.
Both empirical data and hypothetical models are used to provide description and explanation of the operation of the health and safety system in the Local Authority. As data were collected during a dynamic period in economic, political and social terms, the research discusses some of the effects of the current economic recession upon safety organisation.

Relevance:

90.00%

Publisher:

Abstract:

This thesis addresses the question of how business schools established as public private partnerships (PPPs) within a regional university in the English-speaking Caribbean survived for over twenty-one years and achieved legitimacy in their environment. The aim of the study was to examine how public and private sector actors contributed to the evolution of the PPPs. A social network perspective provided a broad relational focus from which to explore the phenomenon and engage disciplinary and middle-range theories to develop explanations. Legitimacy theory provided an appropriate performance dimension from which to assess PPP success. An embedded multiple-case research design, with three case sites analysed at three levels, including the country and university environment, the PPP as a firm and the subgroup level, constituted the methodological framing of the research process. The analysis techniques included four methods but relied primarily on discourse and social network analysis of interview data from 40 respondents across the three sites. A staged analysis of the evolution of the firm provided the 'time and effects' antecedents which formed the basis for sense-making to arrive at explanations of the public-private relationship-influenced change. A conceptual model guided the study, and explanations from the cross-case analysis were used to refine the process model and develop a dynamic framework and set of theoretical propositions that would underpin explanations of PPP success and legitimacy in matched contexts through analytical generalisation. The study found that PPP success was based on different models of collaboration and partner resource contribution that arose from a confluence of variables, including the development of shared purpose, private voluntary control in corporate governance mechanisms and boundary-spanning leadership.
The study contributes a contextual theory that explains how PPPs work and a research agenda of ‘corporate governance as inspiration’ from a sociological perspective of ‘liquid modernity’. Recommendations for policy and management practice were developed.

Relevance:

90.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to examine the quality of evidence collected during interview. Current UK national guidance on the interviewing of victims and witnesses recommends a phased approach, allowing the interviewee to deliver their free report before any questioning takes place, and stipulating that during this free report the interviewee should not be interrupted. Interviewers, therefore, often find it necessary during questioning to reactivate parts of the interviewee's free report for further elaboration. Design/methodology/approach: The first section of this paper draws on a collection of police interviews with women reporting rape, and discusses one method by which this is achieved - the indirect quotation of the interviewee by the interviewer - exploring the potential implications for the quality of evidence collected during this type of interview. The second section of the paper draws on the same data set and concerns itself with a particular method by which information provided by an interviewee has its meaning "fixed" by the interviewer. Findings: It is found that "formulating" is a recurrent practice arising from the need to clarify elements of the account for the benefit of what is termed the "overhearing audience" - in this context, the police scribe, CPS, and potentially the Court. Since the means by which this "fixing" is achieved necessarily involves the foregrounding of elements of the account deemed to be particularly salient at the expense of other elements which may be entirely deleted, formulations are rarely entirely neutral. Their production, therefore, has the potential to exert undue interviewer influence over the negotiated "final version" of interviewees' accounts. 
Originality/value: The paper highlights the fact that accurate re-presentations of interviewees' accounts are a crucial tool in ensuring smooth progression of interviews and that re-stated speech and formulation often have implications for the quality of evidence collected during significant witness interviews. © Emerald Group Publishing Limited.

Relevance:

90.00%

Publisher:

Abstract:

Completing projects faster than the normal duration is always a challenge to the management of any project, as it often demands many paradigm shifts. Opportunities of globalization and competition from private-sector and multinational companies force the management of public sector organizations in the Indian petroleum sector to adopt various aggressive strategies to maintain their profitability. Constructing infrastructure for handling petroleum products is one of them. Moreover, these projects are required to be completed faster than normal schedules to remain competitive, to get a faster return on investment and to give a longer project life. However, using the conventional tools and techniques of project management, it is impossible to reduce the project duration from its normal period. This study proposes the use of concurrent engineering in managing projects to radically reduce project duration. The phases of the project are accomplished concurrently/simultaneously instead of in series. The complexities that arise in managing projects are tackled through restructuring the project organization, improving management commitment, strengthening project-planning activities, ensuring project quality, managing project risk objectively and integrating project activities through management information systems. These measures would not only ensure completion of projects on a fast track, but also improve project effectiveness in terms of quality, cost effectiveness, team building, etc.; in turn, the overall productivity of the project organization would improve.

Relevance:

90.00%

Publisher:

Abstract:

To carry out stability studies on more electric systems in which there is a preponderance of motor drive equipment, input admittance expressions are required for the individual pieces of equipment. In this paper the techniques of averaging and small-signal linearisation will be used to derive a simple input admittance model for a low voltage, trapezoidal back EMF, brushless, DC motor drive system.
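The small-signal linearisation step can be illustrated numerically. This is a hedged sketch, not the paper's derivation: a tightly regulated motor drive is often approximated, at low frequency, as a constant-power load on the DC bus, and linearising its averaged input current about the operating point yields the negative incremental admittance that motivates such stability studies. The power and bus voltage below are assumed values.

```python
P = 500.0       # assumed drive load power, W
V0 = 28.0       # assumed DC bus operating voltage, V

def i_in(v):
    # averaged input current of an ideal constant-power load
    return P / v

# small-signal linearisation: numerical derivative di/dv at the operating point
dv = 1e-4
Y = (i_in(V0 + dv) - i_in(V0 - dv)) / (2 * dv)

print(Y)        # incremental admittance is negative: Y = -P / V0**2
```

In the paper's framework this scalar admittance generalises to a frequency-dependent input admittance Y(s) obtained by averaging the converter waveforms and linearising the resulting state equations.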

Relevance:

90.00%

Publisher:

Abstract:

Background - MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results - A large dataset comprising MHC-peptide structural complexes was created by re-modelling pre-determined x-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion - The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) algorithms were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally-determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.
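The QSAR step above, stripped of the genetic-algorithm term selection, reduces to regressing a binding measure onto per-complex interaction-energy descriptors. A hedged sketch with synthetic data (standing in for the energy terms and BL50 values; plain least squares replaces GFA/G-PLS):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(30, 3))             # synthetic descriptors: e.g. vdW,
                                         # electrostatic, total non-bonded energy
w_true = np.array([0.8, -1.2, 0.4])      # assumed "true" descriptor weights
y = E @ w_true                           # synthetic binding measure (noiseless)

# linear QSAR fit by least squares; GFA/G-PLS would additionally select
# which descriptor terms enter the model
w, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.round(w, 3))
```

The fitted weights then indicate which energy partitions (and, in the real study, which groove positions) contribute most to the measured binding.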

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computer-intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimize this data processing and rendering time.
These processing techniques include standard-processing methods, comprising a set of algorithms to process the raw (interference) data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterization and adjustment/fine tuning of the operating conditions of the OCT system, and investigations are under way to characterize OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, the extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
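The "standard-processing" chain named above (process the raw interference data into an A-scan) can be sketched on the CPU with NumPy; the GPU versions run the same steps in parallel across A-scans. The interferogram below is synthetic and the parameters illustrative, not taken from the thesis system:

```python
import numpy as np

N = 1024
k = np.arange(N)                 # sample index in (linearised) wavenumber
depth_bin = 100                  # assumed single-reflector depth, in FFT bins

# synthetic spectral interferogram: DC background plus one fringe component
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * depth_bin * k / N)

# standard FD-OCT processing: background (DC) subtraction, spectral
# windowing, then a Fourier transform of the spectrum into depth
a_scan = np.abs(np.fft.fft((fringe - fringe.mean()) * np.hanning(N)))

peak = int(np.argmax(a_scan[: N // 2]))   # keep the positive-depth half
print(peak)
```

The reflector reappears as a peak at its depth bin; a real pipeline adds wavenumber resampling and dispersion compensation before the FFT, and it is these per-A-scan steps that the GPU implementation accelerates.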