Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
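As a concrete illustration of the macroscopics mentioned in this abstract, the sketch below computes the leading cumulants of fitness within a GA population together with the mean pairwise overlap (the Hamming-distance correlation). It is a minimal, illustrative calculation in Python, not the statistical-mechanics formalism itself; the function name and the toy additive fitness are assumptions made for the example.

```python
import numpy as np

def population_macroscopics(pop, fitness):
    """Macroscopic statistics of one GA population.

    pop     : (P, N) array of +/-1 (or 0/1) alleles, one row per genotype.
    fitness : (P,) array of fitness values for the same population.
    Returns the first four cumulants of fitness and the mean pairwise
    overlap (a proxy for the Hamming-distance correlation).
    """
    f = np.asarray(fitness, dtype=float)
    m1 = f.mean()                       # mean (1st cumulant)
    c = f - m1
    k2 = (c**2).mean()                  # variance (2nd cumulant)
    k3 = (c**3).mean()                  # 3rd cumulant
    k4 = (c**4).mean() - 3 * k2**2      # 4th cumulant
    # Mean pairwise overlap between distinct genotypes (spins in +/-1);
    # the per-site Hamming distance is (1 - overlap) / 2.
    s = np.where(pop > 0, 1.0, -1.0)
    P, N = s.shape
    G = s @ s.T / N                     # (P, P) matrix of normalised overlaps
    overlap = (G.sum() - np.trace(G)) / (P * (P - 1))
    return (m1, k2, k3, k4), overlap

# Example: a random population on a simple additive (one-max-like) problem.
rng = np.random.default_rng(0)
pop = rng.choice([-1, 1], size=(100, 64))
fit = pop.sum(axis=1)                   # additive phenotype
cumulants, corr = population_macroscopics(pop, fit)
```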
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an `interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
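The magnification factor described above can be made concrete for the usual RBF parametrisation of the GTM mapping y(x) = W phi(x): the local areal magnification at a latent point is sqrt(det(J^T J)), where J is the Jacobian of the mapping. The sketch below is a minimal illustration under that assumption; the function name and argument layout are invented for the example and do not come from the thesis.

```python
import numpy as np

def gtm_magnification_factor(x, centres, sigma, W):
    """Areal magnification factor of an RBF mapping y(x) = W @ phi(x)
    at a single 2-D latent point x.

    x       : (2,)   latent-space point.
    centres : (M, 2) RBF basis-function centres in latent space.
    sigma   : float  common RBF width.
    W       : (D, M) weight matrix mapping basis activations to data space.
    """
    diff = x - centres                                          # (M, 2)
    phi = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))     # (M,)
    # d phi_m / d x_k = -phi_m * (x_k - c_mk) / sigma^2  ->  (M, 2)
    dphi = -phi[:, None] * diff / sigma**2
    J = W @ dphi                                                # (D, 2) Jacobian
    # Data-space area per unit latent-space area:
    return np.sqrt(np.linalg.det(J.T @ J))

# Example on a toy 2-D latent grid mapped into 3-D data space.
rng = np.random.default_rng(0)
centres = np.array([[i, j] for i in np.linspace(0, 1, 4) for j in np.linspace(0, 1, 4)])
W = rng.standard_normal((3, len(centres)))
mf = gtm_magnification_factor(np.array([0.5, 0.5]), centres, sigma=0.3, W=W)
```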
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is made with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector, or BV, set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
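For the first (pre-sparsification) step described here, the Gaussian-likelihood case reduces to a compact parametric update: the posterior mean is a kernel expansion over the seen inputs and the covariance correction is a matrix over the same inputs, both updated as each example arrives. The sketch below is a minimal regression-only illustration of that parametrisation, assuming a squared-exponential kernel; the class and parameter names are invented, and the KL-based BV-set pruning of the second step is deliberately omitted.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two scalar inputs (illustrative choice)."""
    return np.exp(-0.5 * (a - b) ** 2 / ell ** 2)

class OnlineGPRegressor:
    """Minimal online GP for regression with Gaussian noise.

    The posterior after t points is parametrised by a weight vector `alpha`
    and a matrix `C`, so that
        mean(x) = sum_i alpha_i k(x_i, x)
        var(x)  = k(x, x) + k_x^T C k_x.
    Every seen input is kept, so the parameters grow quadratically with the
    data (the scaling problem the BV-set sparsification addresses).
    """

    def __init__(self, kernel=rbf, noise=0.1):
        self.k, self.noise = kernel, noise
        self.X, self.alpha, self.C = [], np.zeros(0), np.zeros((0, 0))

    def predict(self, x):
        kx = np.array([self.k(xi, x) for xi in self.X])
        mean = kx @ self.alpha
        var = self.k(x, x) + kx @ self.C @ kx
        return mean, var

    def update(self, x, y):
        mean, var = self.predict(x)
        q = (y - mean) / (var + self.noise ** 2)   # first-order update term
        r = -1.0 / (var + self.noise ** 2)         # second-order update term
        kx = np.array([self.k(xi, x) for xi in self.X])
        s = np.append(self.C @ kx, 1.0)            # update direction
        self.alpha = np.append(self.alpha, 0.0) + q * s
        self.C = np.pad(self.C, ((0, 1), (0, 1))) + r * np.outer(s, s)
        self.X.append(x)

# Toy usage: noisy samples of sin(x), processed one at a time.
rng = np.random.default_rng(0)
gp = OnlineGPRegressor(noise=0.1)
for x in rng.uniform(-3, 3, 50):
    gp.update(x, np.sin(x) + 0.1 * rng.standard_normal())
mean, var = gp.predict(0.5)
```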
Abstract:
This field work study furthers understanding about expatriate management, in particular, the nature of cross-cultural management in Hong Kong involving Anglo-American expatriate and Chinese host national managers, the important features of adjustment for expatriates living and working there, and the type of training which will assist them to adjust and to work successfully in this Asian environment. Qualitative and quantitative data on each issue was gathered during in-depth interviews in Hong Kong, using structured interview schedules, with 39 expatriate and 31 host national managers drawn from a cross-section of functional areas and organizations. Despite the adoption of Western technology and the influence of Western business practices, micro-level management in Hong Kong retains a cultural specificity which is consistent with the norms and values of Chinese culture. There are differences in how expatriates and host nationals define their social roles, and Hong Kong's recent colonial history appears to influence cross-cultural interpersonal interactions. The inability of the spouse and/or family to adapt to Hong Kong is identified as a major reason for expatriate assignments to fail, though the causes have less to do with living away from family and friends, than with Hong Kong's highly urbanized environment and the heavy demands of work. Culture shock is not identified as a major problem, but in Hong Kong micro-level social factors require greater adjustment than macro-level societal factors. The adjustment of expatriate managers is facilitated by a strong orientation towards career development and hard work, possession of technical/professional expertise, and a willingness to engage in a process of continuous 'active learning' with respect to the host national society and culture. A four-part model of manager training suitable for Hong Kong is derived from the study data. It consists of a pre-departure briefing, post-arrival cross-cultural training, language training in basic Cantonese and in how to communicate more effectively in English with non-native speakers, and the assignment of a mentor to newly arrived expatriate managers.
Abstract:
The point of departure for this study was a recognition of the differences in suppliers' and acquirers' judgements of the value of technology when transferred between the two, and the significant impacts of technology valuation on the establishment of technology partnerships and effectiveness of technology collaborations. The perceptions, transfer strategies and objectives, perceived benefits and assessed technology contributions as well as associated costs and risks of both suppliers and acquirers were seen to be the core to these differences. This study hypothesised that the capability embodied in technology to yield future returns makes technology valuation distinct from the process of valuing manufacturing products. The study hence has gone beyond the dimensions of cost calculation and price determination that have been discussed in the existing literature, by taking a broader view of how to achieve and share future added value from transferred technology. The core of technology valuation was argued to be the evaluation of the 'quality' of the capability (technology) in generating future value and the effectiveness of the transfer arrangement for best use of such a capability. A dynamic approach comprising future value generation and realisation within the context of specific forms of collaboration was therefore adopted. The research investigations focused on the UK and China machine tool industries, where there are many technology transfer activities and the value issue has already been recognised in practice. Data were gathered from three groups: machine tool manufacturing technology suppliers in the UK and acquirers in China, and machine tool users in China. Data collection methods included questionnaire surveys and case studies within all three groups. The study has focused on identifying and examining the major factors affecting value as well as their interactive effects on technology valuation from both the supplier's and acquirer's point of view. The survey results showed the perceptions and the assessments of the owner's value and transfer value from the supplier's and acquirer's point of view respectively. Benefits, costs and risks related to the technology transfer were the major factors affecting the value of technology. The impacts of transfer payment on the value of technology, through the sharing of financial benefits, costs and risks between partners, were assessed. The close relationship between technology valuation and transfer arrangements was established, whereby technical requirements and strategic implications were considered. The case studies reflected the research propositions and revealed that benefits, costs and risks in the financial, technical and strategic dimensions interacted in the process of technology valuation within the context of technology collaboration. Further to the assessment of factors affecting value, a technology valuation framework was developed which suggests that technology attributes for the enhancement of contributory factors and their contributions to the realisation of transfer objectives need to be measured and compared with the associated costs and risks. The study concluded that technology valuation is a dynamic process including the generation and sharing of future value and the interactions between financial, technical and strategic achievements.
Abstract:
The thesis investigated progression of the central 10° visual field with structural changes at the macula in a cross-section of patients with varying degrees of age-related macular degeneration (AMD). The relationships between structure and function were investigated for both standard and short-wavelength automated perimetry (SWAP). Factors known to influence the measure of visual field progression were considered, including the accuracy of the refractive correction on SWAP thresholds and the learning effect. Techniques of assessing the structure to function relationships between fundus images and the visual field were developed with computer programming and evaluated for repeatability. Drusen quantification of fundus photographs and retro-mode scanning laser ophthalmoscopic images was performed. Visual field progression was related to structural changes derived from both manual and automated methods. Principal Findings:
• Visual field sensitivity declined with advancing stage of AMD. SWAP showed greater sensitivity to progressive changes than standard perimetry.
• Defects were confined to the central 5°. SWAP defects occurred at similar locations but were deeper and wider than corresponding standard perimetry defects.
• The central field became less uniform as severity of AMD increased. SWAP visual field indices of focal loss were of more importance when detecting early change in AMD than indices of diffuse loss.
• The decline in visual field sensitivity over stage of severity of AMD was not uniform, whereas a linear relationship was found between the automated measure of drusen area and visual field parameters.
• Perimetry exhibited a stronger relationship with drusen area than other measures of visual function.
• Overcorrection of the refraction for the working distance in SWAP should be avoided in subjects with insufficient accommodative facility.
• The perimetric learning effect in the 10° field did not differ significantly between normal subjects and AMD patients.
• Subretinal deposits appeared more numerous in retro-mode imaging than in fundus photography.
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represents the principal background motive behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional mechanisms of the brain are derived from the recent mathematical formalism of dynamical systems in complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain’s complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain’s anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which alludes to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms of functional connectivity, which are designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow for indirect assessment of synchronisation in the local network from a single time series. This approach is useful in inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies. They are helpful to test the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson’s disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterize underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson’s and dementia subjects with the aim of distinguishing patterns of pathological from physiological connectivity.
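The direction-of-coupling idea builds on Granger causality; the thesis develops a machine learning extension, but the classical linear form already conveys the principle: the past of y "Granger-causes" x if adding it to an autoregressive model of x reduces the prediction error. The sketch below implements that classical linear test only, as a hedged illustration; the function name, lag order and toy data are assumptions.

```python
import numpy as np

def granger_ratio(x, y, p=5):
    """Linear Granger-causality statistic for 'y -> x' with model order p.

    Fits two least-squares AR models for x[t]: one on x's own past only, and
    one on the joint past of x and y, then returns log(var_restricted / var_full).
    Values above zero suggest that y's past improves the prediction of x.
    This is the classical linear formulation, not the machine-learning
    extension developed in the thesis.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)
    # Lagged design matrices for t = p .. T-1 (lags 1..p).
    Xp = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
    Yp = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
    target = x[p:]

    def resid_var(design):
        design = np.column_stack([np.ones(len(target)), design])
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    return np.log(resid_var(Xp) / resid_var(np.hstack([Xp, Yp])))

# Toy example: y drives x with a one-step delay, plus noise.
rng = np.random.default_rng(1)
y = rng.standard_normal(2000)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.standard_normal(2000)
print(granger_ratio(x, y), granger_ratio(y, x))   # first value should be larger
```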
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit to receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability and the level of sidetone heard by the handset user is by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A theory has also been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power, it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
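The core echo-cancellation idea, an adaptive filter that learns the loudspeaker-to-microphone coupling and subtracts its estimate from the microphone signal, can be illustrated with the standard normalised LMS algorithm, whose power normalisation keeps the adaptation rate roughly constant over a wide range of input levels (the distributed normalisation mentioned above serves a similar purpose in fixed-point hardware). The sketch below is that textbook algorithm in floating point, not the thesis's finite-precision architecture; the names and parameter values are illustrative.

```python
import numpy as np

def nlms_echo_canceller(loudspeaker, microphone, taps=128, mu=0.5, eps=1e-6):
    """Normalised LMS adaptive filter cancelling loudspeaker-to-microphone coupling.

    loudspeaker : reference signal fed to the loudspeaker.
    microphone  : microphone signal = echo of the loudspeaker + near-end speech.
    Returns the error signal (echo-cancelled microphone) and the filter weights.
    The division by the input power is what keeps the adaptation rate roughly
    constant as the signal level varies.
    """
    w = np.zeros(taps)
    x_buf = np.zeros(taps)
    err = np.zeros(len(microphone))
    for n in range(len(microphone)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = loudspeaker[n]
        y_hat = w @ x_buf                    # estimated echo
        err[n] = microphone[n] - y_hat       # residual sent to the far end
        w += mu * err[n] * x_buf / (eps + x_buf @ x_buf)
    return err, w

# Toy example: a synthetic decaying echo path plus weak near-end noise.
rng = np.random.default_rng(2)
far = rng.standard_normal(5000)
echo_path = rng.standard_normal(64) * np.exp(-np.arange(64) / 10)
mic = np.convolve(far, echo_path)[:5000] + 0.01 * rng.standard_normal(5000)
residual, weights = nlms_echo_canceller(far, mic)
```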
Abstract:
This thesis presents a comparison of integrated biomass to electricity systems on the basis of their efficiency, capital cost and electricity production cost. Four systems are evaluated: combustion to raise steam for a steam cycle; atmospheric gasification to produce fuel gas for a dual fuel diesel engine; pressurised gasification to produce fuel gas for a gas turbine combined cycle; and fast pyrolysis to produce pyrolysis liquid for a dual fuel diesel engine. The feedstock in all cases is wood in chipped form. This is the first time that all three thermochemical conversion technologies have been compared in a single, consistent evaluation. The systems have been modelled from the transportation of the wood chips through pretreatment, thermochemical conversion and electricity generation. Equipment requirements during pretreatment are comprehensively modelled and include reception, storage, drying and comminution. The de-coupling of the fast pyrolysis system is examined, where the fast pyrolysis and engine stages are carried out at separate locations. Relationships are also included to allow learning effects to be studied. The modelling is achieved through the use of multiple spreadsheets, where each spreadsheet models part of the system in isolation and the spreadsheets are combined to give the cost and performance of a whole system. The use of the models has shown that on current costs the combustion system remains the most cost-effective generating route, despite its low efficiency. The novel systems only produce lower cost electricity if learning effects are included, implying that some sort of subsidy will be required during the early development of the gasification and fast pyrolysis systems to make them competitive with the established combustion approach. The use of de-coupling in fast pyrolysis systems is a useful way of reducing system costs if electricity is required at several sites, because a single pyrolysis site can be used to supply all the generators, offering economies of scale at the conversion step. Overall, costs are much higher than conventional electricity generating costs for fossil fuels, due mainly to the small scales used. Biomass to electricity opportunities remain restricted to niche markets where electricity prices are high or feed costs are very low. It is highly recommended that further work examines possibilities for combined heat and power, which is suitable for small scale systems and could increase revenues and thereby reduce electricity prices.
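The learning effects mentioned above are commonly represented with an experience curve, in which each doubling of cumulative installed capacity multiplies capital cost by a fixed progress ratio, and the resulting capital cost is annualised into an electricity production cost. The sketch below shows that generic calculation only; it is not the thesis's spreadsheet model, and every number in it (progress ratio, discount rate, load factor, costs) is an illustrative assumption.

```python
import math

def learned_capital_cost(first_unit_cost, units_built, progress_ratio=0.85):
    """Capital cost of the Nth plant under a standard experience curve:
    each doubling of cumulative build multiplies cost by the progress ratio.
    The 0.85 progress ratio is purely an illustrative assumption."""
    b = math.log(progress_ratio, 2)
    return first_unit_cost * units_built ** b

def electricity_cost(capital, feed_cost_per_mwh, capacity_mw,
                     load_factor=0.85, life_years=20, discount=0.10):
    """Very simplified production cost (currency per MWh): annualised capital
    plus feedstock, ignoring the O&M and efficiency detail handled in the
    thesis models."""
    annuity = discount / (1 - (1 + discount) ** -life_years)  # capital recovery factor
    annual_mwh = capacity_mw * 8760 * load_factor
    return capital * annuity / annual_mwh + feed_cost_per_mwh

# Illustrative comparison of a first-of-a-kind and a tenth-of-a-kind plant.
first = electricity_cost(capital=30e6, feed_cost_per_mwh=20.0, capacity_mw=10)
tenth = electricity_cost(capital=learned_capital_cost(30e6, 10),
                         feed_cost_per_mwh=20.0, capacity_mw=10)
```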
Abstract:
The primary objective of this research was to examine the concepts of the chemical modification of polymer blends by reactive processing using interlinking agents (multi-functional, activated vinyl compounds; trimethylolpropane triacrylates {TRIS} and divinylbenzene {DVB}) to target in-situ interpolymer formation between immiscible polymers in PS/EPDM blends via peroxide-initiated free radical reactions during melt mixing. From a comprehensive survey of previous studies of compatibility enhancement in polystyrene blends, it was recognised that reactive processing offers opportunities for technological success that have not yet been fully realised; learning from this study is expected to assist in the development and application of this potential. In an experimental-scale operation for the simultaneous melt blending and reactive processing of both polymers, involving manual injection of precise reactive agent/free radical initiator mixtures directly into molten polymer within an internal mixer, torque changes were distinct, quantifiable and rationalised by ongoing physical and chemical effects. EPDM content of PS/EPDM blends was the prime determinant of torque increases on addition of TRIS, itself liable to self-polymerisation at high additions, with little indication of PS reaction in initial reactively processed blends with TRIS, though blend compatibility, from visual assessment of morphology by SEM, was nevertheless improved. Suitable operating windows were defined for the optimisation of reactive blending, for use once routes to encourage PS reaction could be identified. The effectiveness of PS modification by reactive processing with interlinking agents was increased by the selection of process conditions to target specific reaction routes, assessed by spectroscopy (FT-IR and NMR) and thermal analysis (DSC) coupled with dichloromethane extraction and fractionation of PS. Initiator concentration was crucial in balancing desired PS modification and interlinking agent self-polymerisation, most particularly with TRIS. Pre-addition of initiator to PS was beneficial in the enhancement of TRIS binding to PS and minimisation of modifier polymerisation; believed to arise from direct formation of polystyryl radicals for addition to active unsaturation in TRIS. DVB was found to be a "compatible" modifier for PS, but its efficacy was not quantified. Application of routes for PS reaction in PS/EPDM blends was successful for in-situ formation of interpolymer (shown by sequential solvent extraction combined with FT-IR and DSC analysis); the predominant outcome depending on the degree of reaction of each component, with optimum "between-phase" interpolymer formed under conditions selected for equalisation of differing component reactivities and avoidance of competitive processes. This was achieved for combined addition of TRIS+DVB at optimum initiator concentrations with initiator pre-addition to PS. Improvements in blend compatibility (by tensiles, SEM and thermal analysis) were shown in all cases with significant interpolymer formation, though physical benefits were not; morphology and other reactive effects were also important factors. Interpolymer from specific "between-phase" reaction of blend components and interlinking agent was vital for the realisation of positive performance on compatibilisation by the chemical modification of polymer blends by reactive processing.
Aspects of the learner's dictionary with special reference to advanced Pakistani learners of English
Abstract:
The present work is an empirical investigation into the `reference skills' of Pakistani learners and their language needs on semantic, phonetic, lexical and pragmatic levels in the dictionary. The introductory chapter discusses the relatively problematic nature of lexis in comparison with the other aspects in EFL learning and spells out the aim of this study. Chapter two provides an analytical survey of the various types of research undertaken in different contexts of the dictionary and explains the eclectic approach adopted in the present work. Chapter three studies the `reference skills' of this category of learners against the background of the highly sophisticated information structure of the learners' dictionaries under evaluation and suggests some measures for improvement in this context. Chapter four considers various criteria, e.g. pedagogic, linguistic and sociolinguistic, for determining the macro-structure of a learner's dictionary with a focus on specific L1 speakers. Chapter five is concerned with various aspects of the semantic information provided in the dictionaries matched against the needs of Pakistani learners with regard to both comprehension and production. The type, scale and presentation of grammatical information in the dictionary is analysed in chapter six with the object of discovering their role and utility for the learner. Chapter seven explores the rationale for providing phonological information, the extent to which this guidance is vital and the problems of phonetic symbols employed in the dictionaries. Chapter eight brings into perspective the historical background of English-Urdu bilingual lexicography and evaluates the currently popular bilingual dictionaries among the student community, with the aim of discovering the extent to which they have taken account of the modern tenets of lexicography and investigating their validity as a useful reference tool in the learning of the English language. The final chapter draws together the findings on the individual aspects in a coherent fashion to assess the viability of the original hypothesis that learners' dictionaries, if compiled with a specific set of users in mind, would be more useful.
Abstract:
This comparative study considers the main causative factors for change in recent years in the teaching of modern languages in England and France and seeks to contribute, in a general sense, to the understanding of change in comparable institutions. In England by 1975 the teaching of modern languages in the comprehensive schools was seen to be inappropriate to the needs of children of the whole ability-range. A combination of the external factor of the Council of Europe initiative in devising a needs-based learning approach for adult learners, and the internal factor of teacher-based initiatives in developing a graded-objectives learning approach for the less-able, has reversed this situation to some extent. The study examines and evaluates this reversal, and, in addition, assesses teachers' attitudes towards, and understanding of, the changes involved. In France the imposition of `la reforme Haby' in 1977 and the creation of `le college unique' were the main external factors for change. The subsequent failure of the reform and the socialist government's support of decentralisation policies returning the initiative for renewal to schools are examined and evaluated, as are the internal factors for changes in language-teaching - `groupes de niveau' and the creation of `equipes pedagogiques'. In both countries changes in the function of examinations at 15/16 plus are examined. The final chapter compares the changes in both education systems.
Abstract:
This study aimed firstly to investigate current patterns of language use amongst young bilinguals in Birmingham and secondly to examine the relationship between this language use and educational achievement. The research then focussed on various practices, customs and attitudes which would favour the attrition or survival of minority languages in the British situation. The data necessary to address this question was provided by a sample of three hundred and seventy-four 16-19 year olds, studying in Birmingham schools and colleges during the period 1987-1990 and drawn from the main linguistic minority communities in Birmingham. The research methods chosen were both quantitative and qualitative. The study found evidence of ethnolinguistic vitality amongst many of the linguistic minority communities in Birmingham: a number of practices and a range of attitudes indicate that linguistic diversity may continue and that a stable diglossic situation may develop in some instances, particularly where demographical and religious factors lead to closeness of association. Where language attrition is occurring it is often because of the move from a less prestigious minority language or dialect to a more prestigious minority language in addition to pressures from English. The educational experience of the sample indicates that literacy and formal language study are of key importance if personal bilingualism is to be experienced as an asset; high levels of oral proficiency in the L1 and L2 do not, on their own, necessarily correlate with positive educational benefit. The intervening variable associated with educational achievement appears to be the formal language learning process and literacy. A number of attitudes and practices, including the very close associations maintained with some of the countries of origin of the families, were seen to aid or hinder first language maintenance and second language acquisition.
Abstract:
Information systems are corporate resources, therefore information systems development must be aligned with corporate strategy. This thesis proposes that effective strategic alignment of information systems requires information systems development, information systems planning and strategic management to be united. Literature in these areas is examined, breaching the academic boundaries which separate these areas, to contribute a synthesised approach to the strategic alignment of information systems development. Previous work in information systems planning has extended information systems development techniques, such as data modelling, into strategic planning activities, neglecting techniques of strategic management. Examination of strategic management in this thesis, identifies parallel trends in strategic management and information systems development; the premises of the learning school of strategic management are similar to those of soft systems approaches to information systems development. It is therefore proposed that strategic management can be supported by a soft systems approach. Strategic management tools and techniques frame individual views of a strategic situation; soft systems approaches can integrate these diverse views to explore the internal and external environments of an organisation. The information derived from strategic analysis justifies the need for an information system and provides a starting point for information systems development. This is demonstrated by a composite framework which enables each information system to be justified according to its direct contribution to corporate strategy. The proposed framework was developed through action research conducted in a number of organisations of varying types. This suggests that the framework can be widely used to support the strategic alignment of information systems development, thereby contributing to organisational success.
Abstract:
Collaborative working with the aid of computers is increasing rapidly due to the widespread use of computer networks, geographic mobility of people, and small powerful personal computers. For the past ten years research has been conducted into this use of computing technology from a wide variety of perspectives and for a wide range of uses. This thesis adds to that previous work by examining the area of collaborative writing amongst groups of people. The research brings together a number of disciplines, namely sociology for examining group dynamics, psychology for understanding individual writing and learning processes, and computer science for database, networking, and programming theory. The project initially looks at groups and how they form, communicate, and work together, progressing on to look at writing and the cognitive processes it entails for both composition and retrieval. The thesis then details a set of issues which need to be addressed in a collaborative writing system. These issues are then followed by developing a model for collaborative writing, detailing an iterative process of co-ordination, writing and annotation, consolidation, and negotiation, based on a structured but extensible document model. Implementation issues for a collaborative application are then described, along with various methods of overcoming them. Finally the design and implementation of a collaborative writing system, named Collaborwriter, is described in detail, which concludes with some preliminary results from initial user trials and testing.
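To make the structured but extensible document model and the co-ordination, annotation, consolidation and negotiation cycle more concrete, the sketch below shows one possible (hypothetical) data structure: a tree of sections with per-section locking for co-ordination and attached annotations for the negotiation and consolidation phases. It is an illustration of the kind of model described, not Collaborwriter's actual design; all class and method names are invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    author: str
    text: str
    resolved: bool = False        # cleared during the consolidation phase

@dataclass
class Section:
    title: str
    body: str = ""
    locked_by: Optional[str] = None                              # co-ordination: one writer at a time
    annotations: List[Annotation] = field(default_factory=list)
    subsections: List["Section"] = field(default_factory=list)   # extensible document tree

    def checkout(self, author: str) -> bool:
        """Claim the section for writing; fails if another author holds it."""
        if self.locked_by is None:
            self.locked_by = author
            return True
        return self.locked_by == author

    def annotate(self, author: str, text: str) -> None:
        """Attach a comment for the negotiation/consolidation rounds."""
        self.annotations.append(Annotation(author, text))

    def release(self) -> None:
        self.locked_by = None

# Example: two authors working on a shared document tree.
doc = Section("Report", subsections=[Section("Introduction"), Section("Method")])
intro = doc.subsections[0]
assert intro.checkout("alice")
intro.body = "Draft introduction..."
intro.annotate("bob", "Please cite the earlier user trials here.")
intro.release()
```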