27 results for Large space structures (Astronautics)

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The aim of this work was to investigate the feasibility of detecting and locating damage in large frame structures where visual inspection would be difficult or impossible. The method is based on a vibration technique for non-destructively assessing the integrity of structures using measurements of changes in the natural frequencies. Such measurements can be made at a single point in the structure. The method requires that a comprehensive theoretical vibration analysis of the structure is first undertaken, and from it predictions are made of the changes in dynamic characteristics that will occur if each member of the structure is damaged in turn. The natural frequencies of the undamaged structure are measured, and then routinely remeasured at intervals. If a change in the natural frequencies is detected, a statistical method is used to find the best match between the measured changes in frequency and the family of theoretical predictions; this identifies the most likely damage site. The theoretical analysis was based on the finite element method. Many structures were studied extensively, and a computer model was used to simulate the effect of the extent and location of the damage on the natural frequencies. Only one such analysis is required for each structure to be investigated. The experimental study was conducted on small structures in the laboratory. Frequency changes were found from inertance measurements on various plane and space frames. The computational requirements of the location analysis are small, and a desktop microcomputer was used. The results of this work showed that the method was successful in detecting and locating damage in the test structures.
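The matching step lends itself to a compact illustration. The sketch below scores each candidate damage site by the normalised correlation between the measured pattern of frequency shifts and the pattern predicted for damage to that member; this is one plausible statistical matching criterion, not necessarily the one used in the thesis, and all names are illustrative.

    import numpy as np

    def locate_damage(measured_shifts, predicted_shifts):
        """Match measured natural-frequency shifts against per-member predictions.

        measured_shifts  : (m,) fractional changes in the m monitored frequencies
        predicted_shifts : (k, m) predicted fractional changes for damage to
                           each of the k members, from the one-off FE analysis
        Returns the index of the best-matching member and its score.
        """
        best, best_score = None, -np.inf
        for i, pred in enumerate(predicted_shifts):
            # Normalise both patterns so the match depends on the shape of the
            # frequency-shift pattern, not on the severity of the damage.
            score = np.dot(measured_shifts, pred) / (
                np.linalg.norm(measured_shifts) * np.linalg.norm(pred))
            if score > best_score:
                best, best_score = i, score
        return best, best_score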

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we develop a new family of graph kernels where the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix with each walk. With the density matrices for the pair of graphs to hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen–Shannon divergence between their density matrices. In order to cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and we demonstrate their effectiveness and efficiency.
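With the two density matrices in hand, the kernel itself is a few lines. A minimal sketch, assuming the density matrices have already been computed from the walks and share a common dimension (in practice the two graphs' state spaces must be aligned or padded); function names are illustrative.

    import numpy as np

    def von_neumann_entropy(rho):
        # S(rho) = -sum_i lam_i log lam_i over the eigenvalues of rho.
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]          # drop numerically zero eigenvalues
        return -np.sum(lam * np.log(lam))

    def qjsd_kernel(rho, sigma):
        """k(G1, G2) = exp(-QJSD) of the walks' density matrices rho, sigma."""
        qjsd = von_neumann_entropy(0.5 * (rho + sigma)) \
               - 0.5 * (von_neumann_entropy(rho) + von_neumann_entropy(sigma))
        return np.exp(-qjsd)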

Relevance:

40.00%

Publisher:

Abstract:

The finite element method is now well established among engineers as an extremely useful tool in the analysis of problems with complicated boundary conditions. One aim of this thesis has been to produce a set of computer algorithms capable of efficiently analysing complex three-dimensional structures. This set of algorithms has been designed for versatility: provisions such as the use of only those parts of the system which are relevant to a given analysis, and the facility to extend the system by the addition of new elements, are incorporated. Five element types have been programmed, including prismatic members, rectangular plates, triangular plates and curved plates. The in-plane and out-of-plane stiffness matrices for a curved plate element are derived using the finite element technique. The performance of this type of element is compared with two other theoretical solutions as well as with a set of independent experimental observations. Additional experimental work was then carried out by the author to further evaluate the acceptability of this element. Finally, the analyses of two large civil engineering structures, the shell of an electrical precipitator and a concrete bridge, are presented to investigate the performance of the algorithms. Comparisons are made between the computer time, core store requirements and accuracy of the proposed system and those of another program.
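The versatility described rests on the standard assembly step, in which each element's stiffness matrix is scattered into the global system through its degree-of-freedom map, whatever the element type. A minimal sketch (dense storage for brevity; names are illustrative):

    import numpy as np

    def assemble_global_stiffness(n_dof, elements):
        """Assemble the global stiffness matrix K from element contributions.

        elements : iterable of (dofs, k_e) pairs, where dofs maps the rows of
                   the element matrix k_e to global degree-of-freedom indices.
        """
        K = np.zeros((n_dof, n_dof))
        for dofs, k_e in elements:
            for a, i in enumerate(dofs):
                for b, j in enumerate(dofs):
                    K[i, j] += k_e[a, b]   # scatter-add into the global matrix
        return K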

Relevance:

30.00%

Publisher:

Abstract:

N-tuple recognition systems (RAMnets) are normally modelled using a small number of input lines to each RAM, because the address space grows exponentially with the number of inputs: it is impossible to implement an arbitrarily large address space as physical memory. But given modest amounts of training data, correspondingly modest numbers of bits will be set in that memory. Hash arrays can therefore be used instead of a direct implementation of the required address space. This paper describes some exploratory experiments using the hash array technique to investigate the performance of RAMnets with very large numbers of input lines. An argument is presented which concludes that performance should peak at a relatively small n-tuple size, but the experiments carried out so far contradict this. Further experiments are needed to confirm this unexpected result.
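A minimal sketch of the hash-array idea: a Python set of (RAM, address) pairs stands in for the physical memory, so storage grows with the training data rather than with the 2**n address space. Class and parameter names are illustrative, not the paper's implementation.

    import random

    class HashRAMnet:
        """N-tuple classifier whose RAM contents live in hash sets."""

        def __init__(self, input_bits, n, num_tuples, seed=0):
            rng = random.Random(seed)
            # Each RAM samples n of the input lines at fixed random positions.
            self.maps = [rng.sample(range(input_bits), n)
                         for _ in range(num_tuples)]
            self.memory = {}        # class label -> set of (ram, address)

        def _addresses(self, pattern):
            for ram, lines in enumerate(self.maps):
                addr = 0
                for bit in lines:
                    addr = (addr << 1) | pattern[bit]
                yield ram, addr

        def train(self, pattern, label):
            self.memory.setdefault(label, set()).update(self._addresses(pattern))

        def classify(self, pattern):
            addrs = set(self._addresses(pattern))
            # Score each class by how many RAMs respond to this pattern.
            return max(self.memory, key=lambda c: len(addrs & self.memory[c]))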

Relevance:

30.00%

Publisher:

Abstract:

Computer simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at time scales longer than nanoseconds) leading to a value of statistical complexity that slowly increases with time. A direct measure based on the notion of statistical complexity is introduced that describes how the trajectory explores the phase space, and that is independent of the particular molecular signal used as the observed time series. © 2008 The American Physical Society.
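The paper's statistical-complexity measure is its own; as a purely illustrative stand-in for the general idea of quantifying temporal patterns in a velocity time series, the sketch below computes the standard normalised permutation entropy, a simpler pattern-counting statistic that is low for strongly patterned signals and close to one for pattern-free ones.

    import numpy as np
    from collections import Counter
    from math import factorial, log

    def permutation_entropy(series, order=3):
        """Normalised permutation entropy of a scalar time series, in [0, 1]."""
        counts = Counter()
        for i in range(len(series) - order + 1):
            # The ordinal pattern of each window is its temporal 'shape'.
            counts[tuple(np.argsort(series[i:i + order]))] += 1
        probs = np.array(list(counts.values()), dtype=float)
        probs /= probs.sum()
        return float(-np.sum(probs * np.log(probs)) / log(factorial(order)))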

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a detailed investigation of UV-inscribed fibre grating based devices and novel developments in the applications of such devices in optical sensing and fibre laser systems. The major contribution of this PhD programme is the systematic study of the fabrication, spectral characteristics and applications of different types of UV-written in-fibre gratings, such as Type I and IA Fibre Bragg Gratings (FBGs), Chirped Fibre Bragg Gratings (CFBGs) and Tilted Fibre Gratings (TFGs) with small, large and 45° tilted structures inscribed in normal silica fibre. Three fabrication techniques, holographic, phase-mask and blank beam exposure scanning, which were employed to fabricate a range of gratings in standard single mode fibre, are fully discussed. The thesis reports the creation of smart structures with self-sensing capability by embedding FBG-array sensors in an Al matrix composite. In another part of this study, we demonstrate significant improvements in sensitising standard FBGs to the surrounding chemical medium by inducing microstructure in the grating using a femtosecond (fs) patterning assisted chemical etching technique. A major part of the work investigates the structures, inscription methods, spectral Polarisation Dependent Loss (PDL) and thermal characteristics of TFGs with different tilt angles. Finally, a novel application realising stable single-polarisation and multiwavelength switchable Erbium Doped Fibre Lasers (EDFLs), using intracavity polarisation selective filters based on TFG devices with tilted structures at small, large and exact 45° angles, forms another important contribution of this thesis.
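For reference, all of the grating devices above rest on the fibre Bragg condition, which fixes the wavelength reflected by a grating of period Λ in a fibre of effective index n_eff:

    \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda

Strain, temperature or, for etched microstructured gratings, the surrounding refractive index perturb n_eff or Λ and so shift λ_B, which is the basis of FBG sensing; chirping Λ along the fibre spreads the reflection into a band (CFBGs), while tilting the grating planes couples light into cladding or radiation modes in a polarisation-dependent way, which is what the TFG filters exploit.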

Relevance:

30.00%

Publisher:

Abstract:

Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using Molecular Dynamics (MD) simulated atomistic trajectories, we test the assumption of normal diffusion in the phase space of bulk water at ambient conditions by checking the equivalence of the transport to the random walk model. Contrary to common expectations, we have found that some statistical features of the transport in phase space differ from those of the normal diffusion models. This implies a non-random character of the path search process by the reacting complexes in water solutions. Our further numerical experiments show that a significant long period of non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of this non-stationarity constitute hundreds of nanoseconds, a much longer time scale than the typical lifetime of known liquid water molecular structures (several picoseconds).
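A standard first check of equivalence to the random-walk model is whether the mean squared displacement grows linearly with lag time, as it must for normal (Fickian) diffusion. A minimal sketch, assuming an unwrapped trajectory array; this illustrates the idea rather than the authors' exact diagnostics.

    import numpy as np

    def mean_squared_displacement(positions, max_lag):
        """MSD as a function of lag for positions of shape (frames, atoms, 3).

        For normal diffusion MSD(lag) is linear in lag; systematic curvature
        signals a departure from the random-walk picture."""
        msd = np.zeros(max_lag)
        for lag in range(1, max_lag):
            disp = positions[lag:] - positions[:-lag]
            msd[lag] = np.mean(np.sum(disp**2, axis=-1))
        return msd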

Relevance:

30.00%

Publisher:

Abstract:

A wire drive pulse echo method of measuring the spectrum of solid bodies is described. Using an 's' plane representation, a general analysis of the transient response of such solids has been carried out. This was used for the study of the stepped amplitude transient of high order modes of disks, and for the case where there are two adjacent resonant frequencies. The techniques developed have been applied to the measurement of the elasticities of refractory materials at high temperatures. In the experimental study of the high order in-plane resonances of thin disks it was found that the energy travelled at the edge of the disk, and this initiated the work on one-dimensional Rayleigh waves. Their properties were established for the straight edge condition by following an analysis similar to that of the two-dimensional case. Experiments were then carried out on the velocity dispersion of various circuits including the disk and a hole in a large plate - the negative curvature condition. Theoretical analysis established the phase and group velocities for these cases, and experimental tests on aluminium and glass gave good agreement with theory. At high frequencies all velocities approach that of the one-dimensional Rayleigh waves. When applied to crack detection it was observed that a signal burst travelling round a disk showed an anomalous amplitude effect: in certain cases the signal which travelled the greater distance had the greater amplitude. An experiment was designed to investigate the phenomenon and it was established that the energy travelled in two modes with different velocities. It was found by analysis that as well as the Rayleigh surface wave on the edge, a second mode travelling at about the shear velocity was excited, and the calculated results gave reasonable agreement with the experiments.
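The dispersion measurements above involve the two standard wave velocities, the phase velocity of a single frequency component and the group velocity at which a signal burst and its energy travel:

    v_p = \frac{\omega}{k}, \qquad v_g = \frac{\mathrm{d}\omega}{\mathrm{d}k}

On a curved edge the two differ; the observation that all velocities approach the one-dimensional Rayleigh velocity at high frequency corresponds to the dispersion curves flattening once the wavelength becomes small compared with the curvature of the path.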

Relevance:

30.00%

Publisher:

Abstract:

Using current software engineering technology, the robustness required for safety critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification, and performance analysis of the model, are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods such as Petri-nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri-net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies `deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are given to show how the tool works in the early design phase for fault prevention before the program is ever run.
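The simulator's core task, generating the reachability tree and flagging markings from which no transition can fire, reduces to a search over markings. A minimal sketch for a bounded place-transition net (representation and names are illustrative, not the tool's actual data structures):

    from collections import deque

    def enabled(marking, transitions):
        """Transitions whose input places all hold enough tokens."""
        return [name for name, (pre, post) in transitions.items()
                if all(m >= p for m, p in zip(marking, pre))]

    def find_deadlocks(initial, transitions):
        """Breadth-first reachability search; any reachable marking with no
        enabled transition is a 'deadlock potential' for the user to inspect.

        initial     : tuple of token counts, one per place
        transitions : dict name -> (pre, post) token-count tuples
        """
        seen, queue, deadlocks = {initial}, deque([initial]), []
        while queue:
            marking = queue.popleft()
            fireable = enabled(marking, transitions)
            if not fireable:
                deadlocks.append(marking)
            for name in fireable:
                pre, post = transitions[name]
                new = tuple(m - p + q for m, p, q in zip(marking, pre, post))
                if new not in seen:
                    seen.add(new)
                    queue.append(new)
        return deadlocks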

Relevance:

30.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
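The cost that motivates the sparse approach is visible in the exact kriging equations: the predictive distribution at unsampled locations requires factorising the full n × n covariance matrix of the samples. A minimal dense sketch (illustrative only; the article's contribution is precisely a sparse sequential scheme that avoids this O(n³) step on large data sets):

    import numpy as np

    def gp_predict(X, y, X_star, kernel, noise=1e-6):
        """Kriging (GP) predictive mean and variance at locations X_star."""
        K = kernel(X, X) + noise * np.eye(len(X))
        L = np.linalg.cholesky(K)               # the O(n^3) bottleneck
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        K_s = kernel(X, X_star)
        mean = K_s.T @ alpha
        v = np.linalg.solve(L, K_s)
        var = kernel(X_star, X_star) - v.T @ v
        return mean, np.diag(var)

    # Example squared-exponential covariance over 1-D locations.
    rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2)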

Relevance:

30.00%

Publisher:

Abstract:

Much research is currently centred on the detection of damage in structures using vibrational data. The work presented here examined several areas of interest in support of a practical technique for identifying and locating damage within bridge structures using apparent changes in their vibrational response to known excitation. The proposed goals of such a technique included the need for the measurement system to be operated on site by a minimum number of staff, and that the procedure should be as non-invasive to the bridge traffic-flow as possible. Initially the research investigated changes in the vibrational bending characteristics of two series of large-scale model bridge-beams in the laboratory; these included ordinary-reinforced and post-tensioned, prestressed designs. Each beam was progressively damaged at predetermined positions and its vibrational response to impact excitation was analysed. For the load-regime utilised, the results suggested that the induced damage manifested itself as a function of the span of a beam rather than as a localised area. A power-law relating apparent damage to the applied loading and prestress levels was then proposed, together with a qualitative vibrational measure of structural damage. In parallel with the laboratory experiments, a series of tests was undertaken at the sites of a number of highway bridges. The bridges selected had differing types of construction and geometric design, including composite-concrete, concrete slab-and-beam, and concrete-slab with supporting steel-troughing constructions, together with regular-rectangular, skewed and heavily-skewed geometries. Initial investigations were made of the feasibility and reliability of various methods of structure excitation, including traffic and impulse methods. It was found that localised impact using a sledge-hammer was ideal for the purposes of this work and that a cartridge `bolt-gun' could be used in some specific cases.
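A minimal sketch of the kind of single-point processing involved: estimating natural frequencies by peak-picking the spectrum of a hammer-impact acceleration record. Names and the simple peak rule are illustrative, not the study's actual analysis chain.

    import numpy as np

    def natural_frequencies(accel, fs, n_peaks=5):
        """Estimate natural frequencies from an impact-response record.

        accel : acceleration time series from a single measurement point
        fs    : sampling frequency in Hz
        Returns the frequencies (Hz) of the n_peaks largest spectral peaks."""
        accel = np.asarray(accel, dtype=float)
        spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
        freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
        # A local maximum larger than both neighbours is a candidate resonance.
        peaks = [i for i in range(1, len(spectrum) - 1)
                 if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
        peaks.sort(key=lambda i: spectrum[i], reverse=True)
        return sorted(freqs[i] for i in peaks[:n_peaks])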

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, details of a proposed method for the elastic-plastic failure load analysis of complete building structures are given. In order to handle the problem, a computer programme in Atlas Autocode is produced. The structures consist of a number of parallel shear walls and intermediate frames connected by floor slabs. The results of an experimental investigation are given to verify the theoretical results and to demonstrate various factors that may influence the behaviour of these structures. Large full-scale practical structures are also analysed by the proposed method, and suggestions are made for achieving design economy as well as for extending research in various aspects of this field. The existing programme for elastic-plastic analysis of large frames is modified to allow for the effect of composite action of structural members, i.e. reinforced concrete floor slabs and the supporting steel beams. This modified programme is used to analyse some framed-type structures with composite action, as well as those which incorporate plates and shear walls. The results obtained are studied to ascertain the influence of composite action and other factors on the load carrying capacity of both bare frames and complete building structures. The theoretical failure load presented in this thesis does not predict the overall failure load of the structure, nor the partial failure load of the shear walls and slabs; it merely predicts the partial failure load of a single frame and assumes that the loss of stiffness of such a frame renders the overall structure unusable. For most structures the analysis proposed in this thesis is likely to break down prematurely due to the failure of the slab and shear wall system, and this factor must be taken into account in any future work on such structures. The experimental work reported in this thesis is acknowledged to be unsatisfactory as a verification of the limited theory proposed. In particular, perspex was not found to be a suitable material for testing at high loads; micro-concrete may be more suitable.

Relevance:

30.00%

Publisher:

Abstract:

This thesis demonstrates that the use of finite elements need not be confined to space alone: they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
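As a concrete baseline, the conventional step-by-step methods that temporal finite element solutions are compared against include schemes such as central differences. A minimal sketch for a damped single-degree-of-freedom system (illustrative only; the scheme is conditionally stable, requiring dt well below 2/ω_n):

    import numpy as np

    def central_difference(m, c, k, force, dt, u0=0.0, v0=0.0):
        """Step-by-step solution of m*u'' + c*u' + k*u = f(t).

        force : array of f(t) sampled every dt.  Returns displacements u."""
        n = len(force)
        u = np.zeros(n)
        u[0] = u0
        a0 = (force[0] - c * v0 - k * u0) / m
        u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0     # fictitious u[-1]
        for i in range(n - 1):
            rhs = force[i] - (k - 2.0 * m / dt**2) * u[i] \
                  - (m / dt**2 - c / (2.0 * dt)) * u_prev
            u_prev, u[i + 1] = u[i], rhs / (m / dt**2 + c / (2.0 * dt))
        return u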

Relevance:

30.00%

Publisher:

Abstract:

This work is the result of an action-research-type study of the diversification effort of part of a major U.K. industrial company. Work in contingency theory concerning the impact of environmental factors on organizational design, and the systemic model of viable systems put forward by Stafford Beer, form the theoretical basis of the work. The two streams of thought are compared and found to offer similar conclusions about the design of effective organizations. These findings are taken as the framework for an analysis both of organization structures for promoting innovation described in the literature, and of those employed by the company for this purpose in recent years. Much attention is given to the use of venture groups, and conclusions are drawn on particular factors which may influence their success or failure. Both theoretical considerations and the examination of the company's recent experience suggested that the formation of the policy of diversification, as well as the method of implementation of the policy, might affect its outcome. Attention is therefore focused on the policy-making and planning process, and in particular on possible problems that this process could generate in a multi-division company. The view finally taken of diversification effort is that it should be regarded as a learning system. This view helps to expose some ambiguities in the concepts of success and failure in this area, and demonstrates considerable weaknesses in traditional project evaluation procedures.

Relevance:

30.00%

Publisher:

Abstract:

Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. The system allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD, and has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
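To give a flavour of the configuration files mentioned above, a helper of the following kind could emit a minimal NAMD input. The keywords are standard NAMD options, but the file names, parameter values and the omission of production settings (PME, rigid bonds, thermostat, equilibration) are illustrative assumptions, not the server's actual output.

    def namd_config(psf, pdb, prm, output, steps=500000):
        """Render a minimal NAMD input for an MHC:peptide complex.

        psf/pdb/prm/output are placeholder file names; values illustrative."""
        lines = [
            f"structure          {psf}",
            f"coordinates        {pdb}",
            "paraTypeCharmm     on",
            f"parameters         {prm}",
            "temperature        310",     # body temperature, in kelvin
            "cutoff             12.0",
            "switching          on",
            "switchdist         10.0",
            "timestep           2.0",     # fs; a real run would set rigidBonds
            f"outputname         {output}",
            f"run                {steps}",
        ]
        return "\n".join(lines)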