34 results for Neural networks (Computer science) - Design and construction
Abstract:
Advances in both computer technology and the necessary mathematical models capable of capturing the geometry of arbitrarily shaped objects have led to the development in this thesis of a surface generation package called 'IBSCURF', aimed at providing a more economically viable solution to free-form surface manufacture. A suite of computer programs written in FORTRAN 77 has been developed to provide computer aids for every aspect of work in designing and machining free-form surfaces. A vector-valued parametric method was used for shape description, and a lofting technique was employed for the construction of the surface. The development of the package 'IBSCURF' consists of two phases. The first deals with CAD. The design process commences with the definition of the cross-sections, which are represented by uniform B-spline curves as approximations to given polygons. The order of the curve and the position and number of the polygon vertices can be used as parameters for modification to achieve the required curves. When the definition of the sectional curves is complete, the surface is interpolated over them by cubic cardinal splines. To exercise the CAD function of the package, a mathematical model was developed to design a mould for a plastic handle. To facilitate the integration of design and machining using the mathematical representation of the surface, the second phase of the package is concerned with CAM: it enables the generation of tool offset positions for ball-nosed cutters, and a general post-processor has been developed which automatically generates NC tape programs for any CNC milling machine. The two phases have been successfully implemented as a CAD/CAM package for free-form surfaces on the VAX 11/750 super-minicomputer, with graphics facilities for displaying drawings interactively on the terminal screen. The development of this package has been beneficial in all aspects of the design and machining of free-form surfaces.
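As an illustration of the curve-definition step described above, the sketch below evaluates a uniform B-spline over a control polygon via the Cox-de Boor recursion. It is a minimal Python reconstruction under stated assumptions, not the FORTRAN 77 'IBSCURF' code; the order k and the polygon vertices are the design parameters the abstract refers to, and the example polygon is hypothetical.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of order k."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    d2 = knots[i + k] - knots[i + 1]
    if d1 > 0.0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    if d2 > 0.0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def uniform_bspline(control_pts, k=3, samples=50):
    """Evaluate a uniform B-spline of order k approximating a control polygon."""
    n = len(control_pts)
    knots = np.arange(n + k, dtype=float)            # uniform knot vector
    ts = np.linspace(knots[k - 1], knots[n] - 1e-9, samples)  # valid parameter range
    pts = np.zeros((samples, 2))
    for j, t in enumerate(ts):
        for i in range(n):
            pts[j] += bspline_basis(i, k, t, knots) * np.asarray(control_pts[i], float)
    return pts

polygon = [(0, 0), (1, 2), (3, 3), (5, 1), (6, 0)]   # hypothetical cross-section polygon
curve = uniform_bspline(polygon, k=3)                # raise k, or move vertices, to reshape
```

Raising the order k smooths the curve away from the polygon, while adding or moving vertices gives local control, which is the modification mechanism the abstract describes.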
Abstract:
SINNMR (Sonically Induced Narrowing of the Nuclear Magnetic Resonance spectra of solids) is a novel technique that is being developed to enable the routine study of solids by nuclear magnetic resonance spectroscopy. SINNMR aims to narrow the broad resonances that are characteristic of solid-state NMR by inducing rapid incoherent motion of solid particles suspended in a support medium, using high-frequency ultrasound in the range 2-10 MHz. The width of the normal broad resonances from solids is due to incomplete averaging of several components of the total spin Hamiltonian, caused by the restrictions placed on molecular motion within a solid. At present Magic Angle Spinning (MAS) NMR is the classical solid-state technique used to reduce line broadening, but this has associated problems, not least of which is the appearance of many spinning sidebands which confuse the spectra. It is hoped that SINNMR will offer a simple alternative, particularly as it does not produce spinning sidebands. The fundamental question of whether the use of ultrasound within a cryo-magnet will cause quenching has been investigated with success: even under the most extreme conditions of power, frequency and irradiation time, the magnet does not quench. The objective of this work is to design and construct a SINNMR probe for use in a superconducting cryo-magnet NMR spectrometer. A cell for such a probe has been constructed and incorporated into an adapted high-resolution broadband probe. It has been proved that the cell is capable of causing cavitation at frequencies up to 10 MHz, by running a series of ultrasonic reactions within it and observing the reaction products. It was found that the ultrasound heated the sample to unacceptable temperatures, which necessitated the incorporation of temperature stabilisation devices. Work has been performed on the narrowing of the solid-state 23Na spectrum of tri-sodium phosphate using high-frequency ultrasound. Work has also been completed on the signal enhancement and T1 reduction of a liquid mixture and a pure compound using ultrasound. Some preliminary "bench" experiments have been completed on a novel ultrasonic device designed to help minimise sample heating. The concept involves passing the ultrasound through a temperature-stabilised, liquid-filled funnel terminated by a drum skin that allows the ultrasound to pass into the sample. Bench experiments have proved that acoustic attenuation is low and that cavitation in the liquid beyond the device is still possible.
Abstract:
The work reported in this thesis is concerned with improving and expanding the assistance given to the designer by the computer in the design of cold-formed sections. The main contributions have been in four areas, which have consequently led to a fifth: the development of a methodology to optimise designs. This methodology can be considered an 'Expert Design System' for cold-formed sections. A different method of determining the section properties of profiles was introduced, using the properties of line and circular elements. Graphics were introduced to show the outline of the profile on screen. The analysis of beam loading has been expanded to conditions in which the number of supports, point loads and uniformly distributed loads can be specified by the designer. The profile can then be checked for suitability for the specified type of loading. Artificial intelligence concepts have been introduced, in combination with the computer-aided design facilities, to give the designer decision support from the computer. The more complex decision support was implemented through the use of production rules. All the support was based on the British Standards. A method has been introduced by which the appropriate use of stiffeners can be determined, so that the stiffeners can then be designed by the designer. Finally, a methodology was developed by which the designer is given assistance from the computer without being constrained by it. This methodology gives advice to the designer on possible methods of improving the design, but allows the designer to reject that option and analyse the profile accordingly. The methodology enables optimisation to be achieved by the designer, who designs a variety of profiles for a particular loading and determines which one is best suited.
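The line-element method of determining section properties can be sketched briefly: a thin-walled profile is treated as a chain of straight elements of uniform thickness, whose areas and second moments are accumulated with the parallel-axis theorem. This Python sketch handles straight elements only (the thesis also uses circular elements), and the channel geometry is a hypothetical example.

```python
import numpy as np

def line_element_properties(p1, p2, t):
    """Area, centroid and self second moments of a thin straight element."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    L = np.linalg.norm(p2 - p1)
    A = L * t                                    # thin element: area = length x thickness
    c = (p1 + p2) / 2.0
    dx, dy = p2 - p1
    Ix = A * dy**2 / 12.0                        # moments of a thin line about its own centroid
    Iy = A * dx**2 / 12.0
    return A, c, Ix, Iy

def section_properties(points, t):
    """Accumulate properties over consecutive line elements of a profile."""
    elems = [line_element_properties(points[i], points[i + 1], t)
             for i in range(len(points) - 1)]
    A = sum(e[0] for e in elems)
    cx = sum(e[0] * e[1][0] for e in elems) / A
    cy = sum(e[0] * e[1][1] for e in elems) / A
    # Parallel-axis theorem moves each element's moment to the section centroid.
    Ixx = sum(e[2] + e[0] * (e[1][1] - cy)**2 for e in elems)
    Iyy = sum(e[3] + e[0] * (e[1][0] - cx)**2 for e in elems)
    return A, (cx, cy), Ixx, Iyy

# Hypothetical plain channel (mm): two flanges and a web, 2 mm thick
channel = [(30, 0), (0, 0), (0, 60), (30, 60)]
print(section_properties(channel, t=2.0))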
Abstract:
Cold roll forming of thin-walled sections is a very useful process in the sheet metal industry. However, the conventional method for the design and manufacture of form-rolls, the special tooling used in the cold roll forming process, is a very time-consuming and skill-demanding exercise. This thesis describes the establishment of a stand-alone, minicomputer-based CAD/CAM system for assisting the design and manufacture of form-rolls. The work was undertaken in collaboration with a leading manufacturer of thin-walled sections. A package of computer programs has been developed to provide computer aids for every aspect of work in form-roll design and manufacture. The programs have been successfully implemented, as an integrated CAD/CAM software system, on the ICL PERQ minicomputer with graphics facilities. The developed CAD/CAM system is thus a single-user workstation, with software facilities to help the user perform the conventional roll design activities, including the design of the finished section, the flower pattern and the form-rolls. A roll editor program can then be used to modify, if required, the computer-generated roll profiles. As far as manufacturing is concerned, a special-purpose roll machining program and postprocessor can be used in conjunction to generate the NC part-programs for the production of form-rolls by NC turning. Graphics facilities have been incorporated to display drawings interactively on the computer screen throughout all stages of execution of the CAD/CAM software. It has been found that computerisation can shorten the lead time in all activities dealing with the design and manufacture of form-rolls, and that small or medium-sized manufacturing companies can benefit from CAD/CAM technology by developing, to their own specifications, a tailor-made CAD/CAM software system on a low-cost minicomputer.
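To illustrate the postprocessing stage, the sketch below turns a list of roll-profile points into elementary ISO G-code for NC turning. It is a hedged stand-in, not the thesis's special-purpose roll machining program or postprocessor: the profile coordinates are invented, and only generic word addresses (N, G01, X, Z, F, M30) are used.

```python
def postprocess(profile, feed=0.15):
    """Emit simple linear-interpolation G-code from (diameter, axial) points."""
    lines = ["N10 G21 (metric units)", "N20 G95 (feed per revolution)"]
    n = 30
    for x, z in profile:
        lines.append(f"N{n} G01 X{x:.3f} Z{z:.3f} F{feed}")   # straight cut to next point
        n += 10
    lines.append(f"N{n} M30")                                 # end of program
    return "\n".join(lines)

# Hypothetical form-roll profile: (diameter mm, axial position mm)
roll_profile = [(80.0, 0.0), (80.0, -12.0), (64.0, -20.0), (64.0, -35.0)]
print(postprocess(roll_profile))
```

A real postprocessor would additionally handle roughing passes, tool-nose compensation and the dialect of the target controller; this sketch shows only the profile-to-part-program mapping.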
Abstract:
This study is concerned with the quality and productivity aspects of traditional house building. The research focuses on these issues by concentrating on the services and finishing stages of the building process, work stages which have not been fully investigated in previous productivity-related studies. The primary objective of the research is to promote an integrated, design-and-construction-led approach to traditional house building, based on an original concept of 'development cycles'. This process involves the following: site monitoring; the analysis of work operations; implementing design and construction changes founded on the unique information collected during site monitoring; and subsequent re-monitoring to measure and assess the effect of change. A volume house building firm was involved in this applied research and allowed access to its sites for production monitoring purposes. The firm also assisted in design detailing for a small group of 'experimental' production houses where various design and construction changes were implemented. Results from the collaborative research have shown certain quality and productivity improvements to be possible using this approach, albeit on a limited scale at this early experimental stage. The improvements have been possible because an improved activity sampling technique, developed for and employed by the study, has been able to explain why many quality- and productivity-related problems occur during site building work. Experience derived from the research has shown the following attributes to be important: positive attitudes towards innovation; effective communication; careful planning and organisation; and good coordination and control at site level. These are all essential aspects of quality-led management and determine to a large extent the overall success of this approach. Future work recommendations must include more widespread use of innovative practices so that further design and construction modifications can be made. By doing this, productivity can be improved, cost savings made and better quality afforded.
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. These data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Further experimentation addresses the identification of novel input samples, the classification of samples from different biotopes, and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation of the Upper Trent data, generated without reference to the class labels. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistics. Good indicators of quality class based on these analyses are found to be in close agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
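The core classification task can be sketched minimally: a single-layer softmax classifier maps taxa-abundance vectors to biological quality classes, trained by gradient descent on the cross-entropy. The data below are randomly generated stand-ins, not the 292 Upper Trent samples, and the class count and taxa count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows are samples, columns are abundances of indicator taxa.
X = rng.poisson(3.0, size=(200, 12)).astype(float)
y = rng.integers(0, 5, size=200)                 # 5 assumed quality classes (random here)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)         # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((X.shape[1], 5))
b = np.zeros(5)
Y = np.eye(5)[y]                                 # one-hot targets
for _ in range(500):                             # batch gradient descent on cross-entropy
    P = softmax(X @ W + b)
    G = (P - Y) / len(X)                         # gradient of mean cross-entropy
    W -= 0.01 * X.T @ G
    b -= 0.01 * G.sum(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

With real data, inspecting the magnitude of the learned weights per taxon is one crude route to the indicator-taxa question the abstract raises; the thesis uses more principled approaches.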
Abstract:
This thesis is a study of the generation of topographic mappings - dimension-reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, from which the concept of a 'subjective metric' is defined; this permits the exploitation of additional prior knowledge about the data in the mapping process, enabling the generation of more appropriate feature spaces for enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult, high-dimensional dataset, and it illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for the implementation of the classical multidimensional scaling (MDS) routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed MDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal local minima that badly affect the standard Sammon algorithm, and tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that, for ideal structure retention, the network transformation should be perfectly smooth for all inter-data directions in input space. Finally, there is a critique of optimisation techniques for topographic mappings, and a new training algorithm is proposed. A convergence proof is given, and the method is shown to produce lower-error mappings more rapidly than previous algorithms.
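For reference, the sketch below implements the standard Sammon mapping that the thesis takes as its starting point: gradient descent on the Sammon stress over the projected points directly. The thesis's transformational variant instead trains a radial basis function network to effect the projection; that extension, and the subjective metric, are omitted here, and the input data are random stand-ins.

```python
import numpy as np

def pairwise(Z):
    """Matrix of Euclidean distances between rows of Z."""
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

def sammon(X, dim=2, iters=500, lr=0.1, eps=1e-9):
    """Plain Sammon mapping: gradient descent on the Sammon stress
    E = (1/c) * sum_{i<j} (d*_ij - d_ij)^2 / d*_ij."""
    n = len(X)
    Dstar = pairwise(X)                           # inter-point distances in input space
    c = Dstar[np.triu_indices(n, 1)].sum()        # normalising constant
    Y = np.random.default_rng(0).normal(size=(n, dim))
    for _ in range(iters):
        D = pairwise(Y)
        ratio = (Dstar - D) / (Dstar * D + eps)   # eps guards the i = j entries
        np.fill_diagonal(ratio, 0.0)
        diff = Y[:, None, :] - Y[None, :, :]
        grad = (-2.0 / c) * (ratio[:, :, None] * diff).sum(axis=1)  # dE/dY
        Y -= lr * grad
    return Y

X = np.random.default_rng(1).normal(size=(60, 10))  # hypothetical high-dimensional data
Y = sammon(X)                                       # 2-D topographic projection
```

Because the optimisation is over the point positions themselves, this version cannot project unseen data; replacing Y by the output of a trained network, as the thesis does, is what makes the mapping transformational.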
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems, the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically, and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed, and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
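A compact illustration of a GA-NN hybrid follows: a genetic algorithm with one-point crossover and Gaussian mutation evolving the weights of a tiny feed-forward network, with XOR as a stand-in fitness task. This is a generic sketch only; the thesis's hierarchical coding of bumptree architectures, its geographic speciation model, and the LMX/ALMX operators are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in task: XOR (the thesis uses pole balancing, car parking, etc.).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0, 1, 1, 0], float)

def decode(g):
    """Genome of 9 genes -> weights of a 2-2-1 tanh network."""
    return g[:4].reshape(2, 2), g[4:6], g[6:8], g[8]

def fitness(g):
    W1, b1, w2, b2 = decode(g)
    h = np.tanh(X @ W1 + b1)
    y = np.tanh(h @ w2 + b2)
    return -np.mean((y - T) ** 2)                # higher is better

pop = rng.normal(size=(60, 9))
for gen in range(200):
    f = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(f)[::-1][:20]]      # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 20, 2)]
        cut = rng.integers(1, 9)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(scale=0.1, size=9)   # Gaussian mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
```

Note that the cut point here is position-based; linkage-aware operators such as LMX instead adapt which genes travel together, which is the abstract's point about genetic linkage.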
Abstract:
Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors in the discrimination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and we examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.
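The approach can be sketched as follows: a single-layer (logistic) classifier decides each bit from a window of neighbouring soft samples, so that simple distortions such as intersymbol interference can be learned per link. The channel model below is an invented toy, not data from a real optical link, and the window length is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy received signal: each bit leaks into its successor (ISI) plus Gaussian noise.
bits = rng.integers(0, 2, 5000)
signal = bits + 0.3 * np.roll(bits, 1) + rng.normal(scale=0.2, size=5000)

# Input representation: a sliding window of 5 soft samples per decision.
Xw = np.array([signal[i - 2:i + 3] for i in range(2, len(signal) - 2)])
yw = bits[2:-2]

w = np.zeros(5)
b = 0.0
for _ in range(300):                             # logistic regression = single-layer NN
    p = 1.0 / (1.0 + np.exp(-(Xw @ w + b)))
    g = (p - yw) / len(yw)                       # cross-entropy gradient
    w -= 1.0 * Xw.T @ g
    b -= 1.0 * g.sum()

accuracy = (((Xw @ w + b) > 0.0) == yw).mean()   # compare with hard thresholding signal > 0.5
```

Retraining w and b on traffic from a particular link is what makes such a classifier tunable, in contrast to a fixed, non-trainable corrector.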
Abstract:
We study the effect of two types of noise, data noise and model noise, in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network, also with an arbitrary number of hidden units. The data are then corrupted by Gaussian noise affecting either the output or the model itself. We examine the effect of both types of noise on the evolution of the order parameters and on the generalization error in the various phases of the learning process.
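A minimal simulation of this scenario is sketched below: a soft-committee student trained on-line by gradient descent on examples labelled by a soft-committee teacher, with Gaussian noise added to the teacher's output (the data-noise case only; model noise is not simulated). The tanh activation and the specific sizes are conveniences; analytic treatments in this literature typically use the error function.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 100, 3, 3                               # input dim, teacher and student hidden units

B = rng.normal(size=(M, N)) / np.sqrt(N)          # fixed teacher weights
W = rng.normal(size=(K, N)) * 0.01                # student weights, small initialisation

def output(w, x):
    """Soft-committee machine: sum of tanh hidden units, unit output weights."""
    return np.sum(np.tanh(w @ x))

eta, sigma = 0.1, 0.05                            # learning rate, output-noise std
for step in range(20000):                         # one fresh example per step: on-line learning
    x = rng.normal(size=N)
    y = output(B, x) + sigma * rng.normal()       # teacher label corrupted by data noise
    h = W @ x
    err = output(W, x) - y
    W -= (eta / N) * err * np.outer(1.0 - np.tanh(h) ** 2, x)  # SGD on squared error
    # The order parameters of the theory could be tracked here as
    # Q = W @ W.T / N (student-student) and R = W @ B.T / N (student-teacher).
```

Tracking Q and R over training exposes the phases the abstract refers to, including the symmetric plateau before the student's hidden units specialise to individual teacher units.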
Abstract:
Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
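The failure mode described above can be reproduced in a few lines: for a toy problem whose inverse is multi-valued, a naive least-squares fit approximates the conditional mean of the target and so falls between the valid solution branches. The sinusoidal forward map below is a standard illustration of this effect, assumed here rather than taken from the source.

```python
import numpy as np

rng = np.random.default_rng(4)

# Forward map t -> x is single-valued, but its inverse x -> t is multi-valued
# over the mid-range of x, because x(t) is non-monotonic.
t = rng.uniform(0, 1, 1000)
x = t + 0.3 * np.sin(2 * np.pi * t) + rng.normal(scale=0.05, size=1000)

# Naive least-squares regression of t on x (a polynomial stands in for a network
# trained with the standard sum-of-squares error).
coef = np.polyfit(x, t, deg=9)
t_hat = np.polyval(coef, 0.5)                     # near x = 0.5 several t values are valid

# t_hat approximates the conditional mean E[t|x], which lies between the true
# branches and is itself not a valid solution of the inverse problem.
```

Modelling the full conditional density of t given x, rather than only its mean, is one route to the more general formalism the abstract alludes to for handling such multi-modality.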