71 results for SPARSE
Abstract:
It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial-type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120 degree partial journal bearing having a 5.0 in diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this popular type of bearing, the eight linearized coefficients based on the theory of small perturbations do not accurately describe the behaviour of the vibrating journal, because they are masked by the presence of nonlinearity. A method is developed using the second-order terms of a Taylor expansion, and design charts are provided which predict the twenty-eight force coefficients for both aligned journals and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method, from which the whirl trajectories are obtained, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
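For orientation, a minimal sketch of the expansion implied above, in assumed notation (not quoted from the thesis): expanding each oil-film force component about the static equilibrium position gives

    F_x(x, y, \dot{x}, \dot{y}) \approx F_{x0}
      + \frac{\partial F_x}{\partial x}x + \frac{\partial F_x}{\partial y}y
      + \frac{\partial F_x}{\partial \dot{x}}\dot{x} + \frac{\partial F_x}{\partial \dot{y}}\dot{y}
      + \frac{1}{2}\sum_{u,v \in \{x, y, \dot{x}, \dot{y}\}} \frac{\partial^{2} F_x}{\partial u \, \partial v}\, u v ,

and similarly for F_y. Truncating after the first-order terms yields the familiar eight stiffness and damping coefficients; keeping the second-order terms adds ten distinct second derivatives per force component, so the two components together carry 2 x (4 + 10) = 28 coefficients, consistent with the count quoted above.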
Abstract:
Satellite information, in combination with conventional point-source measurements, can be a valuable source of information. This thesis is devoted to the spatial estimation of areal rainfall over a region using both measurements from dense and sparse networks of rain-gauges and images from meteorological satellites. A primary concern is to study the effects of such satellite-assisted rainfall estimates on the performance of rainfall-runoff models. Low-cost image processing systems and peripherals are used to process and manipulate the data. Both secondary and primary satellite images were used in the analysis. The secondary data was obtained from the in-house satellite receiver and the primary data was obtained from an outside source. Ground truth data was obtained from the local Water Authority. A number of algorithms are presented that combine the satellite and conventional data sources to produce areal rainfall estimates, and the results are compared with some of the more traditional methodologies. The results indicate that the satellite cloud information is valuable in the assessment of the spatial distribution of areal rainfall, for both half-hourly and daily estimates of rainfall. It is also demonstrated how the performance of the simple multiple regression rainfall-runoff model is improved when satellite cloud information is used as a separate input in addition to rainfall estimates from conventional means. The use of low-cost equipment, from image processing systems to satellite imagery, makes it possible for developing countries to introduce such systems in areas where the benefits are greatest.
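A minimal sketch of a multiple regression rainfall-runoff model of the kind described, with a satellite cloud term as an additional input; all variable names and values below are illustrative assumptions, not data from the thesis.

    import numpy as np

    # Hypothetical series: gauge-derived areal rainfall, satellite cloud-cover index, observed runoff.
    rain_gauge  = np.array([2.1, 0.0, 5.3, 1.2, 0.4, 3.8])   # mm per interval
    cloud_index = np.array([0.6, 0.1, 0.9, 0.4, 0.2, 0.8])   # satellite-derived cover fraction
    runoff      = np.array([1.1, 0.2, 3.0, 0.8, 0.3, 2.2])   # observed runoff, mm per interval

    # Design matrix: intercept, conventional rainfall estimate, satellite cloud term.
    X = np.column_stack([np.ones_like(rain_gauge), rain_gauge, cloud_index])
    coeffs, *_ = np.linalg.lstsq(X, runoff, rcond=None)

    print("regression coefficients:", coeffs)
    print("fitted runoff:", X @ coeffs)

Dropping the cloud_index column and refitting gives the conventional-only baseline against which the reported improvement would be measured.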
Abstract:
Replacement of the traditional coil spring with one or more fibre-reinforced plastic sulcated springs is a future possibility. Designers of metallic coil springs have design formulae readily available, and software packages specific to coil spring design exist. However, the sulcated spring is at the prototype stage of development, so literature on these springs is very sparse. The thesis contains information on the market for sulcated springs, and their advantages and disadvantages. Literature on other types of fibre-reinforced plastic springs has also been reviewed. Design software has been developed for the sulcated spring along similar lines to coil spring design software. In order to develop the software, a theoretical model had to be developed which formed the mathematical basis for the software. The theoretical model is based on a choice of four methods for calculating the flexural rigidity: beam theory, plate theory, and lamination theory assuming either isotropic or orthotropic material properties. Experimental results for strain and spring stiffness have been compared with the theoretical model, and were in good agreement. Included in the design software are the results of experimental work on fatigue, and design limiting factors to prevent or warn against impractical designs. Finite element analysis has been used to verify the theoretical model and to identify the better approximation to the experimental results. Applications and types of assemblies for the sulcated spring are discussed. Sulcated spring designs for the automotive applications of a suspension, clutch and engine valve spring were found using the design software. These sulcated spring designs were within, or close to, the space of the existing coil spring and yielded the same performance. Finally, the commercial feasibility of manufacturing the sulcated spring was assessed and compared with the coil spring, to evaluate the plausibility of the sulcated spring eventually replacing the coil spring.
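For reference, a minimal sketch of the standard beam-theory and plate-theory flexural rigidity expressions for a rectangular section; material values and dimensions below are hypothetical, not the thesis's spring data.

    # Beam theory:  flexural rigidity = E * I, with I = b * t**3 / 12 for a rectangular section.
    # Plate theory: flexural rigidity per unit width D = E * t**3 / (12 * (1 - nu**2)).
    E  = 30e9     # Young's modulus, Pa (hypothetical composite value)
    nu = 0.3      # Poisson's ratio (hypothetical)
    b  = 0.05     # section width, m (hypothetical)
    t  = 0.004    # section thickness, m (hypothetical)

    I = b * t**3 / 12.0
    beam_rigidity  = E * I                                   # N*m^2
    plate_rigidity = E * t**3 / (12.0 * (1.0 - nu**2))       # N*m (per unit width)

    print(f"beam EI = {beam_rigidity:.2f} N*m^2")
    print(f"plate D = {plate_rigidity:.2f} N*m per unit width")

Lamination theory would replace these single-material constants with stacked-ply stiffness matrices, which is where the isotropic versus orthotropic assumptions enter.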
Abstract:
This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point, a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered; the optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
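For context on the branching-process formulation, the generic Galton-Watson criticality condition (a textbook result, not a statement of the thesis's specific analysis): if each node of the search tree spawns on average m child nodes, the expected tree size is

    \sum_{k \ge 0} m^{k} = \frac{1}{1 - m} \qquad (m < 1),

and the process is subcritical, critical or supercritical according to m < 1, m = 1 or m > 1, which is the sense in which critical properties of the algorithm can be analysed.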
Abstract:
This thesis is a study of low-dimensional visualisation methods for data visualisation under uncertainty of the input data. It focuses on two main feed-forward neural network algorithms, NeuroScale and the Generative Topographic Mapping (GTM), and on making both algorithms able to accommodate this uncertainty. The two models are shown not to work well under high levels of noise within the data and need to be modified. The modifications of both models, NeuroScale and GTM, are verified using synthetic data to show their ability to accommodate the noise. The thesis then turns to the controversy surrounding the non-uniqueness of predictive gene lists (PGL) for predicting the prognosis outcome of breast cancer patients from DNA microarray experiments. Many of these studies have ignored the uncertainty issue, resulting in random correlations from sparse model selection in high-dimensional spaces. The visualisation techniques are used to confirm that the patients involved in such medical studies are intrinsically unclassifiable on the basis of the provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.
Abstract:
Background: The controversy surrounding the non-uniqueness of predictive gene lists (PGL), small selected subsets of genes drawn from the very large pool of candidates available in DNA microarray experiments, is now widely acknowledged [1]. Many of these studies have focused on constructing discriminative semi-parametric models and as such are also subject to the issue of random correlations from sparse model selection in high-dimensional spaces. In this work we outline a different approach based around an unsupervised patient-specific nonlinear topographic projection of predictive gene lists. Methods: We construct nonlinear topographic projection maps based on inter-patient gene-list relative dissimilarities. The NeuroScale, Stochastic Neighbor Embedding (SNE) and Locally Linear Embedding (LLE) techniques are used to construct two-dimensional projective visualisation plots of the 70-dimensional PGLs per patient. Classifiers are also constructed to identify the prognosis indicator of each patient from the resulting projections, in order to investigate whether the two prognosis groups are separable a posteriori on the evidence of the gene lists. A literature-proposed predictive gene list for breast cancer is benchmarked against a separate gene list using the above methods. Generalisation ability is investigated by using the mapping capability of NeuroScale to visualise the follow-up study, but based on the projections derived from the original dataset. Results: The results indicate that small subsets of patient-specific PGLs have insufficient prognostic dissimilarity to permit a distinction between the two prognosis groups. Uncertainty and diversity across multiple gene expressions prevent unambiguous or even confident patient grouping. Comparative projections across different PGLs provide similar results. Conclusion: The random correlation to an arbitrary outcome, induced by small subset selection from very high-dimensional, interrelated gene expression profiles, leads to an outcome with associated uncertainty. This continuum and uncertainty preclude any attempt at constructing discriminative classifiers. However, a patient's gene expression profile could possibly be used in treatment planning, based on knowledge of other patients' responses. We conclude that many of the patients involved in such medical studies are intrinsically unclassifiable on the basis of the provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.
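A minimal sketch of the kind of two-dimensional projection used in the Methods, based on scikit-learn's LocallyLinearEmbedding and, as a stand-in for classical SNE, its t-SNE implementation; the patient matrix and parameter values are illustrative assumptions, not the study's data.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding, TSNE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(78, 70))            # hypothetical: 78 patients x 70-gene PGL values

    lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
    emb_lle = lle.fit_transform(X)           # 2-D coordinates per patient

    tsne = TSNE(n_components=2, perplexity=15, random_state=0)
    emb_tsne = tsne.fit_transform(X)         # t-SNE used here in place of classical SNE

    print(emb_lle.shape, emb_tsne.shape)     # (78, 2) (78, 2)

A downstream classifier fitted on emb_lle or emb_tsne, together with the prognosis labels, would then test whether the two groups are separable in the projected space.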
Abstract:
Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
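In standard notation (assumed here rather than quoted from the abstract), the dyadic wavelet basis and the denser-translation dictionary can be written as

    \psi_{j,k}(x) = 2^{j/2}\,\psi(2^{j}x - k), \qquad k \in \mathbb{Z},

for the usual basis, while keeping the scale j fixed and shrinking the translation step to b < 1 gives the redundant family

    \psi_{j,k}^{b}(x) = 2^{j/2}\,\psi(2^{j}x - k b),

whose mutual correlations are quantified by the coherence

    \mu(\mathcal{D}) = \max_{i \ne j} \frac{|\langle d_i, d_j \rangle|}{\|d_i\|\,\|d_j\|}.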
Abstract:
We consider a variation of the prototype combinatorial optimization problem known as graph colouring. Our optimization goal is to colour the vertices of a graph with a fixed number of colours so as to maximize the number of different colours present in the set of nearest neighbours of each given vertex. This problem, which we pictorially call palette-colouring, has recently been addressed as a basic example of a problem arising in the context of distributed data storage. Even though it has not been proved to be NP-complete, random search algorithms find the problem hard to solve. Heuristics based on a naive belief propagation algorithm are observed to work quite well in certain conditions. In this paper, we build upon that result, working out the correct belief propagation algorithm, which needs to take into account the many-body nature of the constraints present in this problem. This method improves on the naive belief propagation approach at the cost of increased computational effort. We also investigate the emergence of a satisfiable-to-unsatisfiable 'phase transition' as a function of the vertex mean degree, for different ensembles of sparse random graphs in the large size ('thermodynamic') limit.
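To make the optimization goal concrete, a minimal sketch that scores a candidate colouring against the palette-colouring objective; the graph, the colouring and the function name are illustrative assumptions.

    def palette_score(adjacency, colouring):
        # For each vertex, count the distinct colours among its neighbours;
        # palette-colouring seeks colourings that maximize these counts.
        return [len({colouring[v] for v in neighbours}) for neighbours in adjacency]

    # Hypothetical 5-vertex graph as neighbour lists, coloured with 3 colours.
    adjacency = [[1, 2], [0, 2, 3], [0, 1, 4], [1, 4], [2, 3]]
    colouring = [0, 1, 2, 0, 1]

    print(palette_score(adjacency, colouring))   # distinct neighbour colours per vertex

The belief propagation approaches discussed above search the space of colourings for assignments that push these per-vertex counts towards the full palette size.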
Abstract:
This thesis proposes a novel graphical model for inference called the Affinity Network, which displays the closeness between pairs of variables and is an alternative to Bayesian Networks and Dependency Networks. The Affinity Network shares some similarities with Bayesian Networks and Dependency Networks but avoids their heuristic and stochastic graph construction algorithms by using a message passing scheme. A comparison with the above two instances of graphical models is given for sparse discrete and continuous medical data and for data taken from the UCI machine learning repository. The experimental study reveals that the Affinity Network graphs tend to be more accurate, on the basis of an exhaustive search, for the small datasets. Moreover, the graph construction algorithm is faster than the other two methods on large datasets. The Affinity Network is also applied to data produced by a synchronised system. A detailed analysis and numerical investigation of this dynamical system is provided, and it is shown that the Affinity Network can be used to characterise its emergent behaviour even in the presence of noise.
Abstract:
Vision must analyze the retinal image over both small and large areas to represent fine-scale spatial details and extensive textures. The long-range neuronal convergence that this implies might lead us to expect that contrast sensitivity should improve markedly with the contrast area of the image. But this is at odds with the orthodox view that contrast sensitivity is determined merely by probability summation over local independent detectors. To address this puzzle, I aimed to assess the summation of luminance contrast without the confounding influence of area-dependent internal noise. I measured contrast detection thresholds for novel Battenberg stimuli that had identical overall dimensions (to clamp the aggregation of noise) but were constructed from either dense or sparse arrays of micro-patterns. The results unveiled a three-stage visual hierarchy of contrast summation involving (i) spatial filtering, (ii) long-range summation of coherent textures, and (iii) pooling across orthogonal textures. Linear summation over local energy detectors was spatially extensive (as much as 16 cycles) at Stage 2, but the resulting model is also consistent with earlier classical results of contrast summation (J. G. Robson & N. Graham, 1981), where co-aggregation of internal noise has obscured these long-range interactions.
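As background for the summation regimes contrasted above, the standard textbook relations in generic form (not figures from the paper): probability summation over N independent detectors with Weibull slope beta predicts only a shallow improvement in sensitivity, whereas linear summation of signal predicts a much steeper one, depending on whether internal noise is fixed or aggregates with area:

    S_{\text{prob}} \propto N^{1/\beta} \ (\beta \approx 3\text{--}4), \qquad
    S_{\text{lin}} \propto N \ (\text{fixed noise}), \qquad
    S_{\text{lin}} \propto \sqrt{N} \ (\text{noise aggregating with area}).

Clamping the overall stimulus dimensions, as the Battenberg stimuli do, holds the noise aggregation constant so that the remaining area dependence isolates the summation of signal.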
Abstract:
Code division multiple access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive as it provides robustness and flexibility in utilizing multi-access channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and an investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined with a focus on the high multi-access interference regime. We demonstrate scenarios, both in the large size limit and for finite systems, in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.
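A minimal sketch of what a composite spreading ensemble could look like, mixing a dense binary code with a sparse code under a power split; the construction below is an illustrative assumption, not the paper's exact ensemble.

    import numpy as np

    rng = np.random.default_rng(1)
    K, N = 6, 12        # users, chips (hypothetical sizes)
    gamma = 0.5         # fraction of power carried by the dense component (hypothetical)
    C = 3               # non-zero chips per user in the sparse component (hypothetical)

    # Dense part: every chip used, +/-1 entries, columns scaled to unit power.
    dense = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)

    # Sparse part: each user spreads over only C randomly chosen chips.
    sparse = np.zeros((N, K))
    for k in range(K):
        chips = rng.choice(N, size=C, replace=False)
        sparse[chips, k] = rng.choice([-1.0, 1.0], size=C) / np.sqrt(C)

    # Composite ensemble: the two components superposed with the power split.
    S = np.sqrt(gamma) * dense + np.sqrt(1.0 - gamma) * sparse
    print(S.shape)      # (N, K) spreading matrix, one column per user

Detection would then proceed on a received signal of the form y = S @ b + noise, with belief propagation run over the composite factor graph.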
Abstract:
Inference and optimisation of real-valued edge variables in sparse graphs are studied using tree-based Bethe approximation optimisation algorithms. Equilibrium states of general energy functions, involving a large set of real edge variables that interact at the network nodes, are obtained for networks in various cases. These include different cost functions, connectivity values, constraints on the edge bandwidth, and the case of multiclass optimisation.
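Schematically, in notation assumed for illustration, the energy functions in question take the form

    H(\{x_e\}) = \sum_{i} \phi_i\big(\{x_e : e \ni i\}\big), \qquad |x_e| \le w_e,

where the real variable x_e lives on edge e, each node cost \phi_i couples only the edges meeting at node i (different choices of \phi_i give the different cost functions mentioned above), and w_e is an optional bandwidth constraint on each edge.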
Abstract:
This thesis looks at the UK onshore oil and gas production industry and follows the history of a population of firms over a fifteen-year period following the industry's renaissance. It examines the linkage between firm survival, selection pressures and adaptation responses at the firm level, especially the role of discretionary adaptation, specifically exploration and exploitation strategies. Taking a Realist approach and using quantitative and qualitative methods for triangulation on a new database derived from archival data, as well as informant interviews, it tests seven hypotheses about post-entry survival of firms. The quantitative findings suggest that firm survival within this industry is linked to discretionary adaptation, when measured at the firm level, and to a mixture of selection and adaptation forces when measured for each firm for each individual year. The qualitative research suggests that selection factors dominate. This difference in views is unresolved. However, the small, sparse population and the nature of the oil and gas industry, compared with other common research contexts such as manufacturing or service firms, suggest that the results be treated with caution, as befits a preliminary investigation. The major findings include limited support for the theory that the external environment is the major determinant of firm survival, though environment components affect firms differentially; resolution of apparent differences in the literature relating to the sequencing of exploration and exploitation; and potential tangible evidence of coevolution. The research also finds that, though selection may be considered important by industry players, discretionary adaptation appears to play the key role, and that the key survival drivers for this population are intra-industry ties, exploitation experience and a learning/experience component. Selection has a place, however, in determining the life-cycle of the firm, returning as a key survival driver at certain ages of the firm inside the industry boundary.
Abstract:
While much has been discussed about the relationship between ownership and financial performance of banks in emerging markets, literature about cross-ownership differences in the credit market behaviour of banks in emerging economies is sparse. Using a portfolio choice model and bank-level data from India for 9 years (1995–96 to 2003–04), we examine banks' behaviour in the context of the credit markets of an emerging market economy. Our results indicate that, in India, the data for the domestic banks fit the aforementioned portfolio-choice model well, especially for private banks, but the model cannot explain the behaviour of foreign banks. In general, the allocation of assets between risk-free government securities and risky credit is affected by past allocation patterns, stock exchange listing (for private banks), the risk averseness of banks, regulations regarding the treatment of non-performing assets (NPAs), and the ability of banks to recover doubtful credit. It is also evident that banks deal with changing levels of systematic risk by altering the ratio of securitized to non-securitized credit.
Abstract:
A property of sparse representations in relation to their capacity for information storage is discussed. It is shown that this feature can be used for an application that we term Encrypted Image Folding. The proposed procedure is realizable through any suitable transformation. In particular, in this paper we illustrate the approach by recourse to the Discrete Cosine Transform and a combination of redundant Cosine and Dirac dictionaries. The main advantage of the proposed technique is that both storage and encryption can be achieved simultaneously using simple processing steps.
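A minimal sketch of the sparsity property being exploited: a smooth image block is typically captured by only a few large DCT coefficients, and it is this redundancy that leaves room for folding extra information into the representation. The code below only illustrates the sparse DCT representation on hypothetical data; it does not reproduce the folding or encryption steps.

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(2)
    # Hypothetical smooth 8x8 image block (low-frequency content, hence a sparse DCT).
    xx, yy = np.meshgrid(np.arange(8), np.arange(8))
    block = np.cos(0.3 * xx) + 0.5 * np.sin(0.2 * yy) + 0.01 * rng.normal(size=(8, 8))

    coeffs = dctn(block, norm='ortho')

    # Keep only the few largest-magnitude coefficients.
    k = 6
    thresh = np.sort(np.abs(coeffs).ravel())[-k]
    sparse_coeffs = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

    approx = idctn(sparse_coeffs, norm='ortho')
    print("kept coefficients:", int(np.count_nonzero(sparse_coeffs)))
    print("relative reconstruction error:", float(np.linalg.norm(block - approx) / np.linalg.norm(block)))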