34 results for lumière non structurée
in Aston University Research Archive
Abstract:
We have investigated the microstructure and bonding of two biomass-based porous carbon chromatographic stationary phase materials (alginic acid-derived Starbon® and calcium alginate-derived mesoporous carbon spheres (AMCS)) and a commercial porous graphitic carbon (PGC), using high resolution transmission electron microscopy, electron energy loss spectroscopy (EELS), N2 porosimetry and X-ray photoelectron spectroscopy (XPS). The planar carbon sp² content of all three material types is similar to that of traditional non-graphitizing carbon, although both biomass-based carbon types contain a greater percentage of fullerene character (i.e. curved graphene sheets) than a non-graphitizing carbon pyrolyzed at the same temperature. This is thought to arise during the pyrolytic breakdown of hexuronic acid residues into C5 intermediates. Energy dispersive X-ray and XPS analysis reveals a homogeneous distribution of calcium in the AMCS, and a calcium catalysis mechanism is discussed. That both Starbon® and AMCS, with high fullerene character, show chromatographic properties similar to those of a commercial PGC material with extended graphitic stacks suggests that, for separations at the molecular level, curved fullerene-like and planar graphitic sheets are equivalent in PGC chromatography. In addition, variation in the number of graphitic layers suggests that stack depth has minimal effect on the retention mechanism in PGC chromatography. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
This paper considers the problem of extracting the relationships between two time series in a non-linear, non-stationary environment with Hidden Markov Models (HMMs). We describe an algorithm which is capable of identifying associations between variables. The method is applied to both synthetic and real data. We show that HMMs are capable of modelling the oil drilling process and that they outperform existing methods.
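A minimal sketch of this style of analysis, using the off-the-shelf hmmlearn library rather than the authors' own algorithm; the synthetic series, the two-state model and the per-state correlation readout are all illustrative assumptions:

```python
# Fit a Gaussian HMM to a pair of series and read off state-dependent
# associations between them. Toy data: the coupling between x and y flips
# sign halfway through, so each hidden state should capture one regime.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n = 600
x = np.cumsum(rng.normal(size=n))
y = np.where(np.arange(n) < n // 2, 0.9, -0.9) * x + rng.normal(size=n)
X = np.column_stack([x, y])

# Full covariance matrices let each hidden state carry its own
# cross-correlation between the two series.
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
model.fit(X)

for k, cov in enumerate(model.covars_):
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    print(f"state {k}: cross-correlation {corr:+.2f}")
```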
Abstract:
Exploratory analysis of data in all sciences seeks to find common patterns to gain insights into the structure and distribution of the data. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data, nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this technical report we discuss a complementary approach based on a non-linear probabilistic model. The generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate far more structure than a two-dimensional principal components plot could, and deal at the same time with missing data. We show that using the generative topographic mapping provides us with an optimal method to explore the data while being able to replace missing values in a dataset, particularly where a large proportion of the data is missing.
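For readers unfamiliar with the model, a compact NumPy sketch of standard GTM training (after Bishop, Svensén and Williams) follows; grid sizes, basis width and the regulariser are illustrative choices, and the missing-data handling discussed above is omitted for brevity:

```python
# Minimal 2-D generative topographic mapping: EM over a grid of latent
# points mapped into data space through an RBF basis.
import numpy as np

def gtm_fit(T, grid=10, n_rbf=4, sigma=1.0, alpha=1e-3, n_iter=30, seed=0):
    """Fit a GTM to data T of shape (N, D); return (N, 2) latent means."""
    rng = np.random.default_rng(seed)
    N, D = T.shape
    g = np.linspace(-1, 1, grid)
    Z = np.array([[a, b] for a in g for b in g])      # latent grid, (K, 2)
    c = np.linspace(-1, 1, n_rbf)
    C = np.array([[a, b] for a in c for b in c])      # RBF centres, (M, 2)
    Phi = np.exp(-((Z[:, None] - C[None]) ** 2).sum(-1) / (2 * sigma**2))
    W = rng.normal(scale=0.1, size=(Phi.shape[1], D))
    beta = 1.0

    def e_step(W, beta):
        d2 = (((Phi @ W)[:, None] - T[None]) ** 2).sum(-1)   # sq. dists (K, N)
        logR = -0.5 * beta * d2
        logR -= logR.max(0)                                  # numerical stability
        R = np.exp(logR)
        return R / R.sum(0), d2

    for _ in range(n_iter):
        R, _ = e_step(W, beta)                               # E-step
        G = np.diag(R.sum(1))                                # M-step: solve for W
        W = np.linalg.solve(Phi.T @ G @ Phi + alpha * np.eye(Phi.shape[1]),
                            Phi.T @ (R @ T))
        beta = N * D / (R * e_step(W, beta)[1]).sum()        # then update noise
    R, _ = e_step(W, beta)
    return R.T @ Z                                           # posterior-mean positions
```

Each data point is then plotted at its posterior-mean latent position, giving the single two-dimensional summary plot described above.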
Abstract:
Visualising data for exploratory analysis is a big challenge in scientific and engineering domains where there is a need to gain insight into the structure and distribution of the data. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are used, but it is difficult to incorporate prior knowledge about the structure of the data into the analysis. In this technical report we discuss a complementary approach based on an extension of a well-known non-linear probabilistic model, the Generative Topographic Mapping. We show that by including prior information about the covariance structure in the model, we are able to improve both the data visualisation and the model fit.
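Schematically (in our notation, which need not match the report's), the extension replaces GTM's spherical noise model with a structured one:

$$\mathcal{N}\big(\mathbf{t}\mid \mathbf{W}\boldsymbol{\phi}(\mathbf{x}),\ \beta^{-1}\mathbf{I}\big)\ \longrightarrow\ \mathcal{N}\big(\mathbf{t}\mid \mathbf{W}\boldsymbol{\phi}(\mathbf{x}),\ \boldsymbol{\Sigma}\big),$$

where the covariance matrix \(\boldsymbol{\Sigma}\) carries the prior knowledge about how the observed variables co-vary.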
Abstract:
The paper examines the capital structure adjustment dynamics of listed non-financial corporations in seven east Asian countries before, during and after the crisis of 1997–1998. Our methodology allows speeds of adjustment to vary, not only among firms, but also over time, distinguishing between cases of sudden and smooth adjustment. Compared with firms in the least affected countries, average leverages were much higher in the worst affected countries, and generalized method-of-moments analysis of the Worldscope panel data suggests that average speeds of adjustment there were lower. This holds also for the severely financially distressed firms in some of the worst affected countries, though the trend reversed in the post-crisis period. These findings have important implications for the regulatory environment as well as access to market finance.
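For context, a standard partial-adjustment specification of the kind such studies estimate reads (the notation is ours; the paper's exact model may differ):

$$L_{it} - L_{i,t-1} = \lambda_{it}\,\big(L^{*}_{it} - L_{i,t-1}\big),$$

where \(L_{it}\) is the observed leverage of firm \(i\) at time \(t\), \(L^{*}_{it}\) its target leverage, and \(\lambda_{it}\) the speed of adjustment, here allowed to vary across both firms and periods as the abstract describes.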
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
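The smoothing machinery itself is beyond a short sketch, but the two test diffusions named above are easy to simulate; a hedged Euler-Maruyama sketch follows (the drift forms and all parameters are common textbook choices, assumed here rather than taken from the paper):

```python
# Euler-Maruyama simulation of the two test processes: a double-well
# potential drift and a sine drift.
import numpy as np

def euler_maruyama(drift, x0, dt, n_steps, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + drift(x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

double_well = lambda x: 4.0 * x * (1.0 - x**2)  # gradient of a double-well potential
sine = lambda x: np.sin(x)                      # "SINE" drift

path_dw = euler_maruyama(double_well, x0=0.0, dt=0.01, n_steps=5000, seed=0)
path_sine = euler_maruyama(sine, x0=0.0, dt=0.01, n_steps=5000, seed=1)
```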
Abstract:
The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate error probability characteristics of a linear receiver operating in additive impulsive noise. Firstly, the effect that non-Gaussian interference has on the performance of a receiver that has been optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise that fits well with the observed statistics of noise in radio channels below 100 MHz is developed. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector has been assessed and is found to depend on the signal shape, the time-bandwidth product, as well as the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention is then turned to the problem of threshold detection. Detector structure, large sample performance and robustness against errors in the detector parameters are examined. Finally, estimators of such parameters as the occurrence of an impulse and the parameters in an empirical noise model are developed for the case of an adaptive system with slowly varying conditions.
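As a flavour of the first part, a toy Monte-Carlo estimate of the error probability of a linear (integrate-and-threshold) receiver for antipodal signalling in Gaussian-plus-impulsive noise; the impulse rate, amplitudes and exponential filter below are assumptions for illustration, not the thesis's fitted model:

```python
# Error probability of a linear receiver in additive impulsive noise.
import numpy as np

rng = np.random.default_rng(1)
n_bits, sps = 20000, 8                            # bits, samples per bit

bits = rng.integers(0, 2, n_bits)
tx = np.repeat(2.0 * bits - 1.0, sps)             # antipodal rectangular pulses

# Background Gaussian noise plus filtered-Poisson-style impulses.
noise = 0.5 * rng.normal(size=tx.size)
impulses = (rng.random(tx.size) < 1e-3) * rng.normal(scale=20.0, size=tx.size)
h = np.exp(-np.arange(20) / 4.0)                  # impulse-shaping filter
noise += np.convolve(impulses, h)[: tx.size]

# Linear receiver: integrate each bit interval, threshold at zero.
decisions = (tx + noise).reshape(n_bits, sps).sum(1) > 0
print(f"estimated error probability: {np.mean(decisions != bits):.4f}")
```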
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analyse High Throughput Screening datasets, which may include thousands of data points with high dimensions. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has considerably increased in recent years. Traditional methods, looking at tables and graphical plots for analysing relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only several properties of compounds at one time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can gain insight into the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down). The user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters. It is very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
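A hedged sketch of the core computation in a single latent trait model for binary data, the discrete analogue of GTM's Gaussian responsibilities (grid, basis and weights are illustrative and randomly initialised; the EM loop, the hierarchy and the user-guided training described above are omitted):

```python
# Responsibilities of a Bernoulli latent trait model for binary data.
import numpy as np

rng = np.random.default_rng(0)
M, D, N = 9, 16, 100                      # RBFs, data dimensions, samples

Z = np.array([[a, b] for a in np.linspace(-1, 1, 5)
                     for b in np.linspace(-1, 1, 5)])   # latent grid (K=25, 2)
C = np.array([[a, b] for a in np.linspace(-1, 1, 3)
                     for b in np.linspace(-1, 1, 3)])   # RBF centres (M, 2)
Phi = np.exp(-((Z[:, None] - C[None]) ** 2).sum(-1))    # basis (K, M)
W = rng.normal(scale=0.5, size=(M, D))
P = 1.0 / (1.0 + np.exp(-(Phi @ W)))      # Bernoulli means per latent point

T = rng.integers(0, 2, size=(N, D))       # toy binary data

# Log-likelihood of each point under each latent point, normalised to
# responsibilities; each point is then plotted at its posterior mean.
logL = T @ np.log(P).T + (1 - T) @ np.log(1 - P).T      # (N, K)
logL -= logL.max(1, keepdims=True)
R = np.exp(logL)
R /= R.sum(1, keepdims=True)
latent_means = R @ Z                      # (N, 2) visualisation coordinates
```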
Abstract:
How effective are non-government organisations (NGOs) in their response to Third World poverty? That is the question which this thesis examines. The thesis begins with an overview of the problems facing Third World communities, and notes the way in which people in Britain have responded through NGOs. A second part of the thesis sets out the issues on which the analysis of NGOs has been made. These are: the ways in which NGOs analyse the process of development; the use of 'improving nutrition' and 'promoting self-reliance' as special objectives by NGOs; and the nature of rural change, and the implications for NGOs as agents of rural development. Kenya is taken as a case study. Firstly, the political and economic structure of the country is studied, and the natures of development, nutritional problems and self-reliance in the Kenyan context are noted. The study then focuses attention on Kitui District, an area of Kenya which at the time of the study was suffering from drought. However, it is argued that the problems of Kitui District and the constraints to change there are as much a consequence of Kenya's structural underdevelopment as of reduced rainfall. Against this background the programmes of some British NGOs in the country are examined, and it is concluded that much of their work has little relevance to the principal problems which have been identified. A final part of the thesis takes a wider look at the policies and practices of NGOs. Issues such as the choice of countries in which NGOs work, how they are represented overseas, and their educational role in Britain are considered. It is concluded that while all NGOs have a concern for the conditions in which the poorest communities of the Third World live, many NGOs take a quite narrow view of development problems, giving little recognition to the international and intranational political and economic systems which contribute to Third World poverty.
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and while they can capture non-linear structure locally, they struggle to do so globally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation of missing values. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Furthermore, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
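The imputation benchmark can be sketched generically: hide known entries, impute, and score the reconstruction. The `impute` argument below is a placeholder into which GTM's posterior-mean imputation (or any competing method) would slot; the column-mean imputer is only a baseline to make the sketch self-contained:

```python
# Mask-and-score benchmark for missing-data imputation.
import numpy as np

def benchmark_imputation(X, impute, frac_missing=0.2, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < frac_missing     # entries to hide
    X_obs = np.where(mask, np.nan, X)
    X_hat = impute(X_obs)
    return np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))   # RMSE on hidden entries

def mean_impute(X_obs):
    col_means = np.nanmean(X_obs, axis=0)
    return np.where(np.isnan(X_obs), col_means, X_obs)

X = np.random.default_rng(1).normal(size=(200, 8))
print("baseline RMSE:", benchmark_imputation(X, mean_impute))
```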
Abstract:
Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how using generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
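Schematically (our notation), GTM accommodates a partially observed point \(t_n\) by evaluating responsibilities over the observed dimensions only, then imputing the missing ones by their posterior expectation:

$$\hat{t}^{\,\mathrm{miss}}_{n} = \sum_{k=1}^{K} r_{kn}\, y^{\,\mathrm{miss}}_{k}, \qquad r_{kn} = \frac{\mathcal{N}\big(t^{\mathrm{obs}}_{n}\mid y^{\mathrm{obs}}_{k},\ \beta^{-1}\mathbf{I}\big)}{\sum_{k'} \mathcal{N}\big(t^{\mathrm{obs}}_{n}\mid y^{\mathrm{obs}}_{k'},\ \beta^{-1}\mathbf{I}\big)},$$

where the \(y_k\) are the latent grid points mapped into data space.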
Abstract:
The problem of separating structured information representing phenomena of differing natures is considered. A structure is assumed to be independent of the others if it can be represented in a complementary subspace. When the concomitant subspaces are well separated, the problem is readily solvable by a linear technique. Otherwise, the linear approach fails to correctly discriminate the required information. Hence, a non-extensive approach is proposed. The resulting nonlinear technique is shown to be suitable for dealing with cases that cannot be tackled by the linear one.
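In the well-separated case, the linear technique amounts to an oblique projection onto one subspace along the other; a minimal sketch with random stand-in bases:

```python
# Separate two additive structures by solving in the concatenated basis
# and keeping only the part represented in span(V).
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 50, 3, 3
V = rng.normal(size=(n, p))        # basis of the structure of interest
W = rng.normal(size=(n, q))        # basis of the complementary structure
a, b = rng.normal(size=p), rng.normal(size=q)
x = V @ a + W @ b                  # superposition of the two phenomena

B = np.hstack([V, W])
coef, *_ = np.linalg.lstsq(B, x, rcond=None)
x_V = V @ coef[:p]                 # recovered component in span(V)

print("recovery error:", np.linalg.norm(x_V - V @ a))
# As span(V) and span(W) close in on each other, B becomes ill-conditioned
# and the recovery degrades -- the failure mode motivating the non-linear
# approach proposed in the paper.
```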
Abstract:
An array of different structural probes has been used to define the effect of adding Zn and Ti to a sodium-calcium phosphate glass. X-ray absorption spectroscopy at the Zn K-edge suggests that the Zn atoms occupy mixed (4- and 6-fold) sites within the glass matrix. X-ray diffraction reveals a feature at 2.03 Å that develops with the addition of Zn and Ti and is consistent with Zn-O and Ti-O near-neighbour distances. Neutron diffraction is used to resolve two distinct P-O distances and highlights the decrease in P···P coordination number from 2.0 to 1.7 as the Ti metal concentration rises, which is attributed to the O/P fraction moving away from the metaphosphate value of 3.0 to 3.1 with the addition of Ti. Other correlations, such as those associated with CaOx and NaOx polyhedra, remain largely unaffected. These results suggest that the network-forming P···P correlation is most disrupted, with the disorder parameter rising from 0.07 to 0.10 Å with the additional modifiers. Zn appears to be introduced into the network as a direct replacement for Ca and causes no structural variation over the composition range studied.