32 results for data structures
Abstract:
This thesis is concerned with the investigation, by nuclear magnetic resonance spectroscopy, of the molecular interactions occurring in mixtures of benzene and cyclohexane to which either chloroform or deutero-chloroform has been added. The effect of the added polar molecule on the liquid structure has been studied using spin-lattice relaxation time, 1H chemical shift, and nuclear Overhauser effect measurements. The main purpose of the work has been to validate a model for molecular interaction involving local ordering of benzene around chloroform. A chemical method for removing dissolved oxygen from samples has been developed to encompass a number of types of sample, including quantitative mixtures, and its superiority over conventional deoxygenation techniques is shown. A set of spectrometer conditions, the use of which produces the minimal variation in peak height in the steady state, is presented. To separate the general diluting effects of deutero-chloroform from its effects due to the production of local order, a series of mixtures involving carbon tetrachloride, instead of deutero-chloroform, has been used as a non-interacting reference. The effect of molecular interaction is shown to be explainable using a solvation model, whilst an approach involving 1:1 complex formation is shown not to account for the observations. It is calculated that each solvation shell, based on deutero-chloroform, contains about twelve molecules of benzene or cyclohexane. The equations produced to account for the T1 variations have been adapted to account for the 1H chemical shift variations in the same system. The shift measurements are shown to substantiate the solvent cage model with a cage capacity of twelve molecules around each chloroform molecule. Nuclear Overhauser effect data have been analysed quantitatively in a manner consistent with the solvation model. The results show that discrete shells only exist when the mole fraction of deutero-chloroform is below about 0.08.
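As a back-of-the-envelope illustration (generic notation, not the thesis's own), a solvation-cage model of this kind implies a fast-exchange weighted average for the observed shift, and the quoted cage capacity is consistent with the quoted mole-fraction threshold:

\[
\delta_{\mathrm{obs}} = p\,\delta_{\mathrm{cage}} + (1-p)\,\delta_{\mathrm{free}},
\qquad
p \approx \frac{n\,x}{1-x},
\]

where x is the mole fraction of deutero-chloroform, n ≈ 12 is the cage capacity, and p is the fraction of benzene (or cyclohexane) molecules resident in cages. With n = 12, p reaches unity at x = 1/13 ≈ 0.077, in agreement with the reported breakdown of discrete shells above a mole fraction of about 0.08.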
Abstract:
A series of ethylene propylene terpolymer vulcanizates, prepared by varying termonomer type, cure system, cure time and cure temperature, are characterized by determining the number and type of cross-links present. The termonomers used represent the types currently available in commercial quantities. Characterization is carried out by measuring the C1 constant of the Mooney-Rivlin-Saunders equation before and after treatment with the chemical probes propane-2-thiol/piperidine and n-hexanethiol/piperidine, thus making it possible to calculate the relative proportions of mono-sulphidic, di-sulphidic and poly-sulphidic cross-links. The cure systems used included both sulphur and peroxide formulations. Specific physical properties are determined for each network, and an attempt is made to correlate observed changes in these with variations in network structure. A survey of the economics of each formulation, based on a calculated efficiency parameter for each cure system, is included. Values of C1 are calculated from compression modulus data, the reliability of the technique when used with ethylene propylene terpolymers having first been established. This is carried out by comparing values from both compression and extension stress-strain measurements for natural rubber vulcanizates and by assessing the effects of sample dimensions and the degree of swelling. The compression modulus technique is shown to be much more widely applicable than previously thought. The basic structure of an ethylene propylene terpolymer network appears to be independent of the type of cure system used (sulphur-based systems only), the proportions of constituent cross-links being nearly constant.
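For reference, the Mooney-Rivlin(-Saunders) relation referred to above is conventionally written (generic notation) as

\[
\frac{F}{2A_{0}\left(\lambda - \lambda^{-2}\right)} = C_{1} + \frac{C_{2}}{\lambda},
\]

where F is the applied force, A_0 the undeformed cross-sectional area and λ the extension ratio. In the simplest statistical treatment C_1 is proportional to the concentration of elastically effective network chains, which is why changes in C_1 before and after the thiol/piperidine probe treatments can be read as changes in the density of the cross-link types each probe cleaves.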
Abstract:
Much research is currently centred on the detection of damage in structures using vibrational data. The work presented here examined several areas of interest in support of a practical technique for identifying and locating damage within bridge structures using apparent changes in their vibrational response to known excitation. The proposed goals of such a technique included the need for the measurement system to be operated on site by a minimum number of staff, and that the procedure should be as non-invasive to the bridge traffic-flow as possible. Initially the research investigated changes in the vibrational bending characteristics of two series of large-scale model bridge-beams in the laboratory; these included ordinary-reinforced and post-tensioned, prestressed designs. Each beam was progressively damaged at predetermined positions and its vibrational response to impact excitation was analysed. For the load-regime utilised, the results suggested that the induced damage manifested itself over the whole span of a beam rather than in a localised area. A power-law relating apparent damage to the applied loading and prestress levels was then proposed, together with a qualitative vibrational measure of structural damage. In parallel with the laboratory experiments, a series of tests was undertaken at the sites of a number of highway bridges. The bridges selected had differing types of construction and geometric design, including composite-concrete, concrete slab-and-beam, and concrete-slab with supporting steel-troughing constructions, together with regular-rectangular, skewed and heavily-skewed geometries. Initial investigations were made of the feasibility and reliability of various methods of structure excitation, including traffic and impulse methods. It was found that localised impact using a sledge-hammer was ideal for the purposes of this work and that a cartridge 'bolt-gun' could be used in some specific cases.
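The abstract does not reproduce the proposed power-law; purely as an illustration of its general shape (all symbols hypothetical), such a relation might take the form

\[
D \propto \left(\frac{W}{W_{0}}\right)^{\alpha},
\]

with D the vibration-derived measure of apparent damage, W the applied load normalised by a reference load W_0, and α an empirically fitted exponent that would depend on the prestress level.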
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. The techniques of identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as of a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by an iterative updating of the initial estimates, via sensitivity analysis, using either eigenvalues alone or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined using the frequency response data of the unmodified structure, by a structural modification technique; thus, mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
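A generic first-order sensitivity update of the kind described (notation illustrative, not the thesis's) solves, at each iteration k,

\[
\boldsymbol{\lambda}_{m} - \boldsymbol{\lambda}(\mathbf{p}_{k}) \approx \mathbf{S}_{k}\,\Delta\mathbf{p}_{k},
\qquad
S_{ij} = \frac{\partial \lambda_{i}}{\partial p_{j}},
\]

for the parameter change Δp_k in the least-squares sense, where λ_m are the measured eigenvalues (before and after perturbation) and p_k the current parameter estimates. A Bayesian variant of the same step additionally penalises departures from the initial parameters, e.g. minimising ‖λ_m − λ(p)‖² + ‖p − p_0‖² under suitable weighting, which is the role of the least-squares difference mentioned above.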
Abstract:
This thesis explores the interrelationships between the labour process, the development of technology and patterns of gender differentiation. The introduction of front office terminals into building society branches forms the focus of the research. Case studies were carried out in nine branches, three each from three building societies. Statistical data for the whole movement and a survey of ten of the top thirty societies provided the context for the studies. In the process of the research it became clear that it was not technology itself but the way it was used that was the main factor in determining outcomes. The introduction of new technologies is occurring at a rapid pace, facilitated by continuing high growth rates, although front office technology could seldom be cost-justified. There was great variety between societies in their operating philosophies and in their reasons for and approach to computerisation, but all societies foresaw an ultimate saving in staff. Computerisation has resulted in the deskilling of the cashiering role and increased control over work at all stages. Some branch managers experienced a decrease in autonomy and an increase in control over their work. Subsequent to this deskilling there has been a greatly increased use of part-time staff, which has enabled costs to be reduced. There has also been a polarisation between career and non-career staff which, like the use of part-time staff, has occurred along gender lines. There is considerable evidence that societies' policies, structures and managerial attitudes continue to discriminate, directly and indirectly, against women. It is these practices which confine women to lower grades and ensure their dependence on the family, and which create the pool of cheap skilled labour that societies so willingly exploit by increasing part-time work. Gender strategies enter management strategies throughout the operations of the organisation.
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and while they struggle to capture global non-linear structures in the data, they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialising the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss-roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
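As a minimal sketch of what initialising a GTM from an arbitrary projection can look like (all names hypothetical; this assumes data-space targets for the latent grid have already been obtained, e.g. by lifting an Isomap embedding back into data space):

    import numpy as np

    def rbf_basis(latent_grid, centres, width):
        """RBF basis matrix Phi: one row per latent grid point, one column per centre."""
        sq_dists = ((latent_grid[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * width ** 2))

    def init_gtm_weights(latent_grid, centres, width, targets):
        """Least-squares weights W so the GTM manifold Phi @ W matches the targets.

        targets: (n_grid_points, data_dim) data-space locations assigned to each
        latent grid point by the chosen projection (hypothetical pipeline).
        """
        phi = rbf_basis(latent_grid, centres, width)
        W, *_ = np.linalg.lstsq(phi, targets, rcond=None)
        return W

The usual GTM initialisation regresses the latent grid onto the leading principal components; the point of the sketch is that any embedding supplying targets, Isomap included, can be slotted in the same way.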
Abstract:
Background: Research into mental-health risks has tended to focus on epidemiological approaches and to consider pieces of evidence in isolation. Less is known about the particular factors and their patterns of occurrence that influence clinicians’ risk judgements in practice. Aims: To identify the cues used by clinicians to make risk judgements and to explore how these combine within clinicians’ psychological representations of suicide, self-harm, self-neglect, and harm to others. Method: Content analysis was applied to semi-structured interviews conducted with 46 practitioners from various mental-health disciplines, using mind maps to represent the hierarchical relationships of data and concepts. Results: Strong consensus between experts meant their knowledge could be integrated into a single hierarchical structure for each risk. This revealed contrasting emphases between data and concepts underpinning risks, including: reflection and forethought for suicide; motivation for self-harm; situation and context for harm to others; and current presentation for self-neglect. Conclusions: Analysis of experts’ risk-assessment knowledge identified influential cues and their relationships to risks. It can inform development of valid risk-screening decision support systems that combine actuarial evidence with clinical expertise.
Abstract:
This thesis proposes a novel graphical model for inference called the Affinity Network, which displays the closeness between pairs of variables and is an alternative to Bayesian Networks and Dependency Networks. The Affinity Network shares some similarities with Bayesian Networks and Dependency Networks but avoids their heuristic and stochastic graph construction algorithms by using a message passing scheme. A comparison with the above two instances of graphical models is given for sparse discrete and continuous medical data, and for data taken from the UCI machine learning repository. The experimental study reveals that the Affinity Network graphs tend to be more accurate, on the basis of an exhaustive search, for the small datasets; moreover, the graph construction algorithm is faster than the other two methods on large datasets. The Affinity Network is also applied to data produced by a synchronised system. A detailed analysis and numerical investigation into this dynamical system is provided, and it is shown that the Affinity Network can be used to characterise its emergent behaviour even in the presence of noise.
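The abstract does not detail the message-passing construction, so the sketch below is only a generic illustration of the underlying idea of linking variables by pairwise closeness, here using absolute Pearson correlation with a threshold (names hypothetical; this is not the thesis's algorithm):

    import numpy as np

    def affinity_graph(X, threshold=0.5):
        """Adjacency matrix linking variables whose |correlation| exceeds threshold.

        X: (n_samples, n_variables) data matrix.
        """
        corr = np.corrcoef(X, rowvar=False)   # pairwise variable correlations
        adj = np.abs(corr) >= threshold       # keep only strong affinities
        np.fill_diagonal(adj, False)          # no self-loops
        return adj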
Abstract:
This dissertation investigates the important and timely problem of modelling human expertise. This issue is apparent in any computer system emulating human decision making. It is prominent in Clinical Decision Support Systems (CDSS) due to the complexity of the induction process and, in most cases, the vast number of parameters. Other issues, such as human error and missing or incomplete data, present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental health clinical record management system with a top layer of decision support capabilities. It is currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing the parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide an insight both into the data relationships and into how experts make decisions themselves. The outcomes help further the understanding of human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted for other human knowledge engineering domains.
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
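In their standard forms (generic notation, not necessarily the paper's discretisation), the Tikhonov-regularized solve and the GCV choice of the regularization parameter read

\[
\mathbf{x}_{\mu} = \arg\min_{\mathbf{x}}\left\{\left\|\mathbf{A}\mathbf{x}-\mathbf{b}\right\|^{2} + \mu\left\|\mathbf{x}\right\|^{2}\right\},
\qquad
\mu^{*} = \arg\min_{\mu}\,
\frac{\left\|\mathbf{A}\mathbf{x}_{\mu}-\mathbf{b}\right\|^{2}}
{\left[\operatorname{trace}\!\left(\mathbf{I}-\mathbf{A}\left(\mathbf{A}^{\mathrm{T}}\mathbf{A}+\mu\mathbf{I}\right)^{-1}\mathbf{A}^{\mathrm{T}}\right)\right]^{2}},
\]

where, in this setting, A would be the MFS collocation matrix and b the (noisy) boundary data of the direct problem solved at each iteration.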
Abstract:
This paper provides an understanding of the current environmental decision structures within companies in the manufacturing sector. Through case study research, we explored the complexity, robustness and decision-making processes companies were using in order to cope with ever-increasing environmental pressures and the choice of environmental technologies. Our research included organisations in the UK, Thailand, and Germany. Our research strategy was a case study approach composed of different research methods, namely focus groups, interviews and environmental report analysis. The research methods and their data collection instruments also varied according to the access we had. Our unit of analysis was decision-making teams, and the scope of our investigation included product development, environment & safety, manufacturing, and supply chain management. This study finds that environmental decision making has been gaining importance, as well as complexity, over time as it starts to move from manufacturing to non-manufacturing activities. Most companies do not have a formal structure for taking environmental decisions; hence, they follow a path similar to that of other corporate decisions, being affected by organisational structures as well as by the technical competence of the teams. We believe our results will help improve structures for environmental decision making across the different departments, in both beginner and leader teams.
Abstract:
The O–O–N–N–O-type pentadentate ligands H3ed3a, H3pd3a and H3pd3p (H3ed3a stands for ethylenediamine-N,N,N′-triacetic acid; H3pd3a stands for 1,3-propanediamine-N,N,N′-triacetic acid; and H3pd3p stands for 1,3-propanediamine-N,N,N′-tri-3-propionic acid) and the corresponding novel octahedral or square-pyramidal/trigonal-bipyramidal copper(II) complexes have been prepared and characterized. The H3ed3a, H3pd3a and H3pd3p ligands coordinate to the copper(II) ion via five donor atoms (three deprotonated carboxylate oxygen atoms and two amine nitrogens), affording an octahedral structure in the case of ed3a3− and an intermediate square-pyramidal/trigonal-bipyramidal structure in the case of pd3a3− and pd3p3−. A six-coordinate, octahedral geometry has been established crystallographically for the [Mg(H2O)6][Cu(ed3a)(H2O)]2 · 2H2O complex, and a five-coordinate, square-pyramidal geometry for [Mg(H2O)5Cu(pd3a)][Cu(pd3a)] · 2H2O. Structural data correlating similar chelate Cu(II) complexes have been used to better understand the pathway octahedral → square-pyramidal ↔ trigonal-bipyramidal geometry. An extensive configuration analysis is discussed in relation to information obtained for similar complexes. The infra-red and electronic absorption spectra of the complexes are discussed in comparison with related complexes of known geometries. Molecular mechanics and density functional theory (DFT) programs have been used to model the most stable geometric isomer, yielding at the same time significant structural data. The results from the density functional studies have been compared with the X-ray data.
Abstract:
Five manganese complexes in an N4O2 donor environment have been prepared. Four of the compounds involve aroyl hydrazones as ligands, with manganese in the +2 oxidation state. The fifth compound was prepared using N,N′-o-phenylenebis(salicylideneimine) and imidazole as ligands, with manganese present in the +3 oxidation state. The X-ray crystal structures of one Mn+2 compound and of the Mn+3 compound were determined. The relative stabilities of the Mn+2 and Mn+3 oxidation states were analyzed using the structural data and MO calculations. Manganese(II) complexes of four aroyl hydrazone ligands were prepared and characterized by different physicochemical techniques. The complexes are of the type Mn(L)2, where L stands for the deprotonated hydrazone ligand. One of the compounds, Mn(pybzhz)2, was also characterized by single crystal structure determination. In all these complexes, the Mn(II) is in an N4O2 donor environment and the Mn(II) centre cannot be oxidized either chemically or electrochemically. However, when the ligand Ophsal is used instead, to give the compound [Mn(Ophsal)(imzH)2]ClO4, which was also characterized by X-ray crystal structure determination, manganese readily adopts the +3 oxidation state. The relative stabilities of the +2 and +3 oxidation states of manganese were analyzed, and it was concluded that the extent of distortion from perfect octahedral geometry is the main controlling factor in these cases.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimize this data processing and rendering time. These techniques include standard-processing methods comprising a set of algorithms that process the raw (interference) data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance: for example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
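As a minimal sketch of the standard-processing chain mentioned above (raw spectral interferograms to A-scans; names hypothetical, k-linear sampling assumed), a CPU version might look like the following; a GPU port could swap numpy for cupy, whose array API mirrors numpy's:

    import numpy as np

    def spectra_to_ascans(spectra, window=None):
        """Convert raw spectral interferograms (n_scans, n_pixels) to A-scans in dB."""
        background = spectra.mean(axis=0)             # estimate the DC/reference term
        fringes = spectra - background                # remove it from every spectrum
        if window is None:
            window = np.hanning(spectra.shape[1])     # apodise to suppress sidelobes
        depth = np.fft.ifft(fringes * window, axis=1) # spectrum -> depth profile
        half = depth.shape[1] // 2                    # keep the non-mirrored half
        return 20 * np.log10(np.abs(depth[:, :half]) + 1e-12)

Real pipelines typically add steps this sketch omits, such as resampling to linear k-space and dispersion compensation, and it is those per-spectrum transforms that parallelise so well on a GPU.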
Abstract:
Renewable energy forms have been widely used in the past decades, highlighting a "green" shift in energy production. A key driver behind this turn to renewable energy production is the EU directives which set the Union's targets for energy production from renewable sources, greenhouse gas emissions and increases in energy efficiency. All member countries are obligated to apply harmonized legislation and practices and to restructure their energy production networks in order to meet EU targets. Towards the fulfillment of the 20-20-20 EU targets, Greece promotes a specific strategy based on the construction of large-scale Renewable Energy Source plants. In this paper, we present an optimal design of the Greek renewable energy production network applying a 0-1 Weighted Goal Programming model, considering social, environmental and economic criteria. In the absence of a panel of experts, a Data Envelopment Analysis (DEA) approach is used to filter the best out of the possible network structures, seeking the maximum technical efficiency. A super-efficiency DEA model is also used to narrow the solution set and identify the best among all the possible structures. The results showed that in order to achieve maximum efficiency, the social and environmental criteria must be weighted more heavily than the economic ones.
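In generic form (notation not the paper's), a 0-1 weighted goal programming model of this kind minimises the weighted deviations from each criterion's target:

\[
\min \sum_{i} w_{i}\left(d_{i}^{+}+d_{i}^{-}\right)
\quad\text{subject to}\quad
f_{i}(\mathbf{x}) + d_{i}^{-} - d_{i}^{+} = g_{i},\qquad
x_{j}\in\{0,1\},\quad d_{i}^{+},\,d_{i}^{-}\ge 0,
\]

where x_j indicates whether candidate plant j enters the network, f_i evaluates criterion i (social, environmental or economic) for a configuration x, g_i is that criterion's target and w_i its weight; the DEA and super-efficiency screens are then applied over the resulting candidate configurations.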