Abstract:
We have studied soliton propagation through a segment containing random pointlike scatterers. In the limit of small scatterer concentration, when the mean distance between scatterers is larger than the soliton width, a method has been developed for obtaining the statistical characteristics of soliton transmission through the segment. The method is applicable to any classical particle traversing a disordered segment with a given velocity transformation after each act of scattering. In the case of weak scattering and a relatively short disordered segment, the transmission time delay of a fast soliton is mostly determined by the shifts of the soliton center after each act of scattering. For sufficiently long segments, the main contribution to the delay is due to the shifts of the amplitude and velocity of a fast soliton after each scatterer. The corresponding crossover lengths for both light and heavy solitons have been obtained. We have also calculated the exact probability density function of the soliton transmission time delay for a sufficiently long segment. In the case of weak identical scatterers the latter is a universal function which depends on a single parameter: the mean number of scatterers in the segment.
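To make the mechanism concrete, here is a minimal Monte Carlo sketch of the kind of process the abstract describes: a classical particle crossing a segment of random point scatterers, with an assumed toy velocity reduction and center shift at each scattering event. The function name, parameter values and the specific velocity transformation are illustrative assumptions, not the authors' model.

```python
import random

def transit_delay(n_scatterers=20, length=100.0, v0=2.0, dv=0.01, dx=0.05):
    """Toy model: a particle crosses a segment of random point scatterers;
    each scattering slightly reduces the velocity (dv) and shifts the
    particle's center backwards (dx). Returns the time delay relative to
    free flight. The transformation rule here is an assumption."""
    positions = sorted(random.uniform(0.0, length) for _ in range(n_scatterers))
    t, x, v = 0.0, 0.0, v0
    for p in positions:
        t += (p - x) / v      # free flight to the next scatterer
        x = p - dx            # shift of the center after scattering
        v -= dv               # velocity change after scattering
    t += (length - x) / v     # final free-flight stretch
    return t - length / v0    # delay relative to the unscattered particle

# Averaging over many random realisations gives the delay statistics
# whose exact probability density the abstract refers to.
delays = [transit_delay() for _ in range(10000)]
print(sum(delays) / len(delays))
```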
Abstract:
Health and safety policies may be regarded as the cornerstone of positive prevention of occupational accidents and diseases. The Health and Safety at Work, etc Act 1974 makes it a legal duty for employers to prepare and revise a written statement of their general policy with respect to the health and safety at work of employees, as well as the organisation and arrangements for carrying out that policy. Despite their importance and the legal requirement to prepare them, health and safety policies have been found, in a large number of plastics processing companies (particularly small companies), to be poorly prepared and inadequately implemented and monitored. An important cause of these inadequacies is the lack of the health and safety knowledge and expertise necessary to prepare, implement and monitor policies. One possible remedy is to investigate the feasibility of using computers to develop expert system programs that simulate the health and safety (HS) experts' task of preparing the policies and assist companies in implementing and monitoring them. Such programs use artificial intelligence (AI) techniques to solve this sort of problem, which is heuristic in nature and requires symbolic reasoning. Expert systems have been used successfully in a variety of fields such as medicine and engineering. An important phase in assessing the feasibility of developing such systems is knowledge engineering, which consists of identifying the knowledge required and eliciting, structuring and representing it in an appropriate computer programming language.
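As an illustration of the kind of expert system program the abstract proposes, the sketch below shows a tiny forward-chaining rule base that maps company facts to policy recommendations. The rules, facts and function are hypothetical examples, not drawn from the thesis.

```python
def advise(facts, rules):
    """Forward-chaining sketch: fire every rule whose conditions are all
    present in the company's facts and collect its recommendation."""
    return [advice for conditions, advice in rules if conditions <= facts]

# Hypothetical knowledge base for drafting a health and safety policy.
rules = [
    ({"uses_injection_moulding"}, "cover machine guarding arrangements"),
    ({"uses_solvents"}, "include chemical handling and storage arrangements"),
    ({"five_or_more_employees"}, "a written policy statement is legally required"),
]
print(advise({"uses_solvents", "five_or_more_employees"}, rules))
```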
Abstract:
This work concerns the development of a proton-induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting rate capability of 25, 18 and 10 kPPS corresponding to 2, 4 and 8 µs main amplifier time constants respectively, enables thick targets to be analysed more readily. The reproducibility of the on-demand beam pulsing system was checked by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by characteristic X-ray fluorescence when the characteristic X-ray energy of one element lies above the absorption edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique for obtaining the depth profiles of the elements in the upper micron of the sample is discussed.
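For orientation, a first-order sketch of binary-alloy quantification from measured intensity ratios is given below; the secondary fluorescence enhancement studied in the thesis would enter as a further correction to this simple ratio. The intensities and sensitivity factors are made-up numbers.

```python
def concentration_ratio(i1, i2, s1, s2):
    """First-order PIXE quantification: the concentration ratio of two
    elements in a binary target from their measured characteristic X-ray
    intensities and per-element sensitivity factors. Matrix effects and
    the fluorescence enhancement treated in the thesis are ignored."""
    return (i1 / s1) / (i2 / s2)

# Hypothetical intensities and sensitivities for an Fe/Ni alloy.
r = concentration_ratio(i1=52000, i2=48000, s1=1.10, s2=0.95)
print(r / (1 + r), 1 / (1 + r))   # normalised mass fractions, C_Fe + C_Ni = 1
```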
Abstract:
Collaborative working with the aid of computers is increasing rapidly due to the widespread use of computer networks, the geographic mobility of people, and small powerful personal computers. For the past ten years research has been conducted into this use of computing technology from a wide variety of perspectives and for a wide range of uses. This thesis adds to that previous work by examining the area of collaborative writing amongst groups of people. The research brings together a number of disciplines, namely sociology for examining group dynamics, psychology for understanding individual writing and learning processes, and computer science for database, networking, and programming theory. The project initially looks at groups and how they form, communicate, and work together, progressing on to look at writing and the cognitive processes it entails for both composition and retrieval. The thesis then details a set of issues which need to be addressed in a collaborative writing system. This is followed by the development of a model for collaborative writing, detailing an iterative process of co-ordination, writing and annotation, consolidation, and negotiation, based on a structured but extensible document model. Implementation issues for a collaborative application are then described, along with various methods of overcoming them. Finally, the design and implementation of a collaborative writing system named Collaborwriter is described in detail, concluding with some preliminary results from initial user trials and testing.
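A minimal sketch of what a structured but extensible document model might look like is given below; the field names and the locking and annotation conventions are assumptions for illustration, not Collaborwriter's actual design.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Section:
    """One node of a hypothetical structured document model: a section
    owns its text, an author lock for co-ordination, reviewer
    annotations, and may nest subsections (extensibility)."""
    title: str
    text: str = ""
    locked_by: str | None = None                      # one writer at a time
    annotations: list[str] = field(default_factory=list)
    subsections: list[Section] = field(default_factory=list)

doc = Section("Report", subsections=[Section("Intro"), Section("Methods")])
doc.subsections[0].locked_by = "alice"                          # writing phase
doc.subsections[0].annotations.append("bob: needs a citation")  # annotation phase
```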
Abstract:
The national systems of innovation (NIS) approach focuses on the patterns and determinants of innovation processes from the perspective of nation-states. This paper reports on continuing work on the application of an NIS model to the development of technological capability in Turkey. An initial assessment of the literature shows that there are a number of alternative conceptualisations of NIS. An attempt by the Government to identify an NIS for Turkey shows the main actors in the system but does not pay sufficient attention to the processes of interaction between agents within the system. An operational model should be capable of representing these processes and interactions and of assessing the strengths and weaknesses of the NIS. For industrialising countries, it is also necessary to incorporate learning mechanisms into the model. Further, there are different levels of innovation and capability in different sectors which the national perspective may not reflect. This paper is arranged in three sections. The first briefly explains the basics of the national innovation and learning system. Although there is no single accepted definition of NIS, the alternative definitions reviewed share some common characteristics. In the second section, an NIS model is applied to Turkey in order to identify the elements which characterise the country’s NIS. This section explains knowledge flows and defines the relations between the actors within the system. The final section draws on the “from imitation to innovation” model apparently so successful in East Asia and assesses its applicability to Turkey. In assessing Turkey’s NIS, the focus is on the automotive and textile sectors.
Abstract:
There is currently no ideal system for studying nasal drug delivery in vitro. Existing techniques such as the Ussing chamber and cell culture all have major disadvantages. Most importantly, none of the existing techniques accurately represents the interior of the nasal cavity, with its airflow and humidity; neither do they allow the investigation of solid dosage forms. The work in this thesis represents the development of an in vitro model system in which the interior characteristics of the nasal cavity are closely represented, and solid or minimal-volume dosage forms can be investigated. The complete nasal chamber consists of two sections: a lower tissue viability chamber and an upper nasal chamber. The lower tissue viability chamber has been shown, using existing tissue viability monitoring techniques, to maintain the viability of a number of epithelial tissues, including porcine and rabbit nasal tissue, and rat ileal and Peyer's patch tissue. The complete chamber, including the upper nasal chamber, has been shown to provide tissue viability for porcine and rabbit nasal tissue above that available using the existing Ussing chamber techniques. Adaptation of the complete system, and the development of the necessary experimental protocols to allow aerosol particle sizing together with videography, has shown that the new factors investigated, humidity and airflow, have a measurable effect on the delivered dose from a typical nasal pump. Similarly, adaptation of the chamber to fit under a confocal microscope, and the development of the necessary protocols, has shown the effect of surface and size on the penetration of microparticulate materials into nasal epithelial tissues. The system developed in this thesis has been shown to be flexible, in allowing the development of the confocal and particle-sizing systems. For future nasal drug delivery studies, the ability to measure such factors as the size of the delivered system in the nasal cavity and the depth of penetration of the formulation into the tissue is essential. Additionally, having access to other data, such as that obtained from drug transport in the same system, and having the tissue available for histological examination, represents a significant advance in the usefulness of such an in vitro technique for nasal delivery.
Abstract:
The in vivo and in vitro characteristics of the I2 binding site were probed using the techniques of drug discrimination and receptor autoradiography. Data presented in this thesis indicate that the I2 ligand 2-BFI generates a cue in drug discrimination. Further studies indicated that agmatine, a proposed endogenous imidazoline ligand, and a number of imidazoline and imidazole analogues of 2-BFI substitute significantly for 2-BFI. In addition to specific I2 ligands, the administration of NRIs (noradrenaline reuptake inhibitors), the sympathomimetic d-amphetamine and the α1-adrenoceptor agonist methoxamine, but not the β1 agonist dobutamine or the β2 agonist salbutamol, gave rise to significant levels of substitution for the 2-BFI cue. The administration of the α1-adrenoceptor antagonist WB4101 prior to 2-BFI itself significantly reduced levels of 2-BFI-appropriate responding. Administration of the reversible MAO-A inhibitors moclobemide and Ro41-1049, but not the reversible MAO-B inhibitors lazabemide and Ro16-6491, gave rise to potent dose-dependent levels of substitution for the 2-BFI cue. Further studies indicated that the administration of a number of β-carbolines and the structurally related indole alkaloid ibogaine also gave rise to dose-dependent significant levels of substitution. Because of the relationship of indole alkaloids to serotonin, the 5-HT releaser fenfluramine and a number of SSRIs (selective serotonin reuptake inhibitors) were also administered, and these compounds gave rise to significant partial (20-80% responses on the 2-BFI lever) levels of substitution. The autoradiographical studies reported here indicate that [3H]2-BFI labels I2 sites within the rat arcuate nucleus, area postrema, pineal gland, interpeduncular nucleus and subfornical organ. Subsequent experiments confirmed that the drug discrimination dosing schedule significantly increases levels of [3H]2-BFI I2 binding within two of these nuclei. However, levels of [3H]2-BFI specific binding were significantly reduced within four of these nuclei after chronic treatment with the irreversible MAO inhibitors deprenyl and tranylcypromine, but not pargyline, which reduced levels significantly in only two. Further autoradiographical studies indicated that the distribution of [3H]2-BFI within the C57/B mouse compares favourably with that within the rat. Comparison of these levels of binding with those from transgenic mice which over-express MAO-B indicates that two possibly distinct populations of [3H]2-BFI I2 sites exist in mouse brain. The data presented here indicate that the 2-BFI cue is associated with the selective activation of α1-adrenoceptors and possibly 5-HT receptors. 2-BFI-trained rats recognise reversible MAO-A but not MAO-B inhibitors. However, data within this thesis indicate that the autoradiographical distribution of I2 sites bears a closer resemblance to that of MAO-B than MAO-A, and further studies using transgenic mice that over-express MAO-B suggest that a non-MAO-B I2 site exists in mouse brain.
Abstract:
The human NT2.D1 cell line was differentiated to form both a 1:2 co-culture of post-mitotic NT2 neuronal and NT2 astrocytic (NT2.N/A) cells and a pure NT2.N culture. The respective sensitivities to several test chemicals of the NT2.N/A, the NT2.N and the NT2.D1 cells were evaluated and compared with the CCF-STTG1 astrocytoma cell line, using a combination of basal cytotoxicity and biochemical endpoints. Using the MTT assay, the basal cytotoxicity data estimated the comparative toxicities of the test chemicals (the chronic neurotoxin 2,5-hexanedione, the cytotoxins 2,3- and 3,4-hexanedione and the acute neurotoxins tributyltin- and trimethyltin-chloride) and also provided the non-cytotoxic concentration range for each compound. Biochemical endpoints examined over the non-cytotoxic range included assays for ATP levels, oxidative status (H2O2 and GSH levels) and caspase-3 levels as an indicator of apoptosis. Although the endpoints did not demonstrate the known neurotoxicants to be consistently more toxic to the cell systems with the greatest number of neuronal properties, the NT2 astrocytes appeared to contribute positively to NT2 neuronal health following exposure to all the test chemicals. The NT2.N/A co-culture generally maintained superior ATP and GSH levels and reduced H2O2 levels in comparison with the NT2.N mono-culture. In addition, the pure NT2.N culture showed a significantly lower level of caspase-3 activation compared with the co-culture, suggesting NT2 astrocytes may be important in modulating the mode of cell death following toxic insult. Overall, these studies provide evidence that an in vitro integrated population of post-mitotic human neurons and astrocytes may offer significant relevance to the heterogeneous human nervous system in vivo when initially screening compounds for acute neurotoxic potential.
Abstract:
Case studies in copper-alloy rolling mill companies showed that existing planning systems suffer from numerous shortcomings. Where computerised systems are in use, these tend to simply emulate older manual systems and still rely heavily on modification by experienced planners on the shopfloor. As the size and number of orders increase, the task of process planners, while seeking to optimise the manufacturing objectives and keep within the production constraints, becomes extremely complicated because of the number of options for mixing or splitting the orders into batches. This thesis develops a modular approach to computerisation of the production management and planning functions. The full functional specification of each module is discussed, together with practical problems associated with their phased implementation. By adapting the Distributed Bill of Material concept from Material Requirements Planning (MRP) philosophy, the production routes generated by the planning system are broken down to identify the rolling stages required. Then to optimise the use of material at each rolling stage, the system generates an optimal cutting pattern using a new algorithm that produces practical solutions to the cutting stock problem. It is shown that the proposed system can be accommodated on a micro-computer, which brings it into the reach of typical companies in the copper-alloy rolling industry, where profit margins are traditionally low and the cost of widespread use of mainframe computers would be prohibitive.
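The thesis's cutting-pattern algorithm is not reproduced in the abstract; as a point of reference, the sketch below shows a standard greedy heuristic (first-fit decreasing) for the one-dimensional cutting stock problem that the new algorithm improves upon.

```python
def first_fit_decreasing(orders, stock_width):
    """Greedy cutting-stock heuristic: place each ordered width into the
    first stock strip with room, widest first; returns one cutting
    pattern (list of widths) per strip used."""
    patterns = []
    for width in sorted(orders, reverse=True):
        for pattern in patterns:
            if sum(pattern) + width <= stock_width:
                pattern.append(width)
                break
        else:
            patterns.append([width])   # open a new strip
    return patterns

# Hypothetical order widths (mm) cut from 1000 mm wide stock.
print(first_fit_decreasing([300, 450, 120, 600, 210, 330], stock_width=1000))
```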
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties: (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined from the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness do not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
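The following is a minimal numerical sketch of sensitivity-based eigenvalue updating in the spirit the abstract describes, on an assumed two-degree-of-freedom spring-mass model with unit masses; the finite-difference sensitivities and the plain Gauss-Newton step stand in for the thesis's analytical sensitivity analysis and Bayesian weighting.

```python
import numpy as np

def model_eigvals(k):
    """Eigenvalues of a 2-DOF spring-mass chain with unit masses and
    stiffnesses k = [k1, k2] (a stand-in for the theoretical model)."""
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    return np.sort(np.linalg.eigvalsh(K))

def update_parameters(k0, lam_measured, steps=20, h=1e-6):
    """Iteratively correct the stiffness estimates from measured
    eigenvalues using a finite-difference sensitivity matrix."""
    k = np.asarray(k0, dtype=float)
    for _ in range(steps):
        residual = lam_measured - model_eigvals(k)
        S = np.column_stack([(model_eigvals(k + h * e) - model_eigvals(k)) / h
                             for e in np.eye(len(k))])   # eigenvalue sensitivities
        k = k + np.linalg.lstsq(S, residual, rcond=None)[0]
    return k

lam_measured = model_eigvals([2.0, 1.0])             # synthetic "measured" data
print(update_parameters([1.5, 1.5], lam_measured))   # recovers ~[2.0, 1.0]
```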
Abstract:
More-electric vehicle technology is becoming prevalent in a number of transportation systems because of its ability to improve efficiency and reduce costs. This paper examines the specific case of an Uninhabited Autonomous Vehicle (UAV), and the system topology and control elements required to achieve adequate dc distribution voltage bus regulation. Voltage control methods are investigated and a droop control scheme is implemented on the system. Simulation results are also presented.
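The abstract does not spell out the droop law used, but the generic V-I droop scheme it refers to is simple enough to sketch; the 270 V bus level and the gain below are illustrative numbers only, not the paper's values.

```python
def droop_voltage(v_nominal, i_out, r_droop):
    """Basic V-I droop for a dc bus: each source lowers its voltage
    set-point in proportion to its output current, so paralleled
    sources share load without a communication link."""
    return v_nominal - r_droop * i_out

# Hypothetical 270 V dc bus: the set-point sags as the source picks up load.
for i_out in (0.0, 10.0, 20.0):
    print(i_out, droop_voltage(270.0, i_out, r_droop=0.5))
```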
Abstract:
Faced with a future of rising energy costs, there is a need for industry to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensive manner in organisations where the responsibility for energy efficiency is spread over a large number of personnel who each see only small energy costs. In relation to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means of motivating line management and personnel to save energy. The energy saving objective for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of this strategy was receiving a poor level of commitment from line management and performance was some 5% - 10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high pressure hot water system posed technical difficulties and expenditure was also limited. This led to an 'in-house' design being installed for a price less than the commercial equivalent. The timespan of work to achieve an operational heat metering system was 3 years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential is in the larger non-energy-intensive sites, from which some 30 PT savings could be expected in the UK.
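Monitoring and targeting schemes of this kind commonly track a cumulative sum (CUSUM) of deviations between actual and target consumption; the sketch below illustrates that standard device with made-up weekly figures, not data from the Whetstone site.

```python
def cusum(actual, target):
    """Cumulative sum of (actual - target) energy use per period: a
    steadily drifting CUSUM flags loss of control of, for example, an
    intermittent heating strategy."""
    total, trace = 0.0, []
    for a, t in zip(actual, target):
        total += a - t
        trace.append(total)
    return trace

# Hypothetical weekly heat use (GJ) against target.
print(cusum([105, 98, 110, 120], [100, 100, 100, 100]))
```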
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals are cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
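A standard first step in the dynamical-systems treatment of a single unaveraged channel is time-delay embedding, which reconstructs a state-space trajectory from one observable; the sketch below uses a synthetic noisy signal as a stand-in for MEG data, and the embedding parameters are arbitrary choices for illustration.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay embedding of a scalar series x: each row is
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)], reconstructing a
    state-space trajectory from a single observable."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic stand-in for one unaveraged MEG channel: a noisy
# amplitude-modulated oscillation plus dynamic noise.
t = np.linspace(0.0, 60.0, 3000)
x = np.sin(t) * np.sin(0.31 * t) + 0.05 * np.random.randn(t.size)
print(delay_embed(x, dim=3, tau=25).shape)   # (2950, 3) embedded points
```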
Abstract:
The work described in this thesis is an attempt to elucidate the relationships between the pore system and a number of engineering properties of hardened cement paste, particularly tensile strength and resistance to carbonation and ionic penetration. By examining aspects such as the rate of carbonation, the pore size distribution, the concentration of ions in the pore solution and the phase composition of cement pastes, relationships between the pore system (pores and pore solution) and the resistance to carbonation were investigated. The study was carried out in two parts. First, cement pastes with different pore systems were compared; secondly, comparisons were made between the pore systems of cement pastes with different degrees of carbonation. Relationships between the pore structure and ionic penetration were studied by comparing kinetic data relating to the diffusion of various ions in cement pastes with different pore systems. Diffusion coefficients and activation energies for the diffusion of Cl- and Na+ ions in carbonated and non-carbonated cement pastes were determined by a quasi-steady-state technique. The effect of pore geometry on ionic diffusion was studied by comparing the mechanisms of ionic diffusion for ions with different radii. In order to investigate the possible relationship between tensile strength and macroporosity, cement paste specimens with cross-sectional areas less than 1 mm2 were produced so that the chance of a macropore existing within them was low. The tensile strengths of such specimens were then compared with those of larger specimens.
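For reference, the quasi-steady-state determination of a diffusion coefficient and the Arrhenius estimate of an activation energy can be written in a few lines; the numerical values below are invented for illustration, not the thesis's measurements.

```python
import math

def diffusion_coefficient(flux, thickness, delta_c):
    """Quasi-steady state via Fick's first law, J = D * dC / L, so
    D = J * L / dC (J in mol/(m^2 s), L in m, dC in mol/m^3)."""
    return flux * thickness / delta_c

def activation_energy(d1, t1, d2, t2):
    """Arrhenius estimate from diffusion coefficients at two absolute
    temperatures: D = D0 * exp(-Ea / (R * T)), hence
    Ea = R * ln(d1 / d2) / (1 / t2 - 1 / t1)."""
    R = 8.314  # gas constant, J / (mol K)
    return R * math.log(d1 / d2) / (1.0 / t2 - 1.0 / t1)

print(diffusion_coefficient(flux=2.0e-9, thickness=3.0e-3, delta_c=500.0))
print(activation_energy(3.0e-12, 298.0, 9.0e-12, 318.0))  # ~43 kJ/mol
```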
Abstract:
The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, thus allowing the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, covering macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared with that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
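The disaggregation step lends itself to a short sketch: a daily total is spread over 24 hours in proportion to a normalised storm profile, standing in here for the intensity/duration curves the model actually uses. The profile shape is a hypothetical single-peak storm.

```python
def disaggregate_daily(daily_mm, profile):
    """Spread a daily rainfall total (mm) over 24 hourly amounts in
    proportion to a storm profile (a stand-in for intensity/duration
    curves)."""
    total = float(sum(profile))
    return [daily_mm * p / total for p in profile]

# Hypothetical single-peak storm profile over 24 hours.
profile = [0] * 8 + [1, 2, 4, 6, 4, 2, 1] + [0] * 9
hourly = disaggregate_daily(12.0, profile)
print(len(hourly), round(sum(hourly), 6))   # 24 hourly values totalling 12.0 mm
```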