954 results for Generalised Linear Modelling
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods such as principal components analysis, linked plots, or brushing are used. These methods cannot directly be employed when dealing with missing data, and while they struggle to capture global non-linear structures in the data, they can capture such structure locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Furthermore, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
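As an illustrative aside (not taken from the thesis itself), the sketch below shows the initialisation idea in miniature: a 2D latent start for a GTM-style model seeded from an Isomap embedding rather than the conventional PCA projection. The use of scikit-learn and the toy Swiss-roll dataset are assumptions for the sketch; the GTM itself is not implemented here.

```python
# Sketch: initialising a latent-variable visualisation with an arbitrary
# non-linear projection (here Isomap) instead of the conventional PCA start.
# Assumes scikit-learn; the GTM training step itself is omitted.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Conventional linear initialisation: the first two principal components.
pca_init = PCA(n_components=2).fit_transform(X)

# Alternative initialisation: a non-linear Isomap embedding, which can
# "unroll" the Swiss roll before GTM training even starts.
isomap_init = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Either 2D array could now seed the latent positions of a GTM-style model.
print(pca_init.shape, isomap_init.shape)  # (1000, 2) (1000, 2)
```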
Abstract:
Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with the Drude model for plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4,5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
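For flavour, here is a minimal serial sketch of the kind of propagation model being parallelised: split-step Fourier integration of a 1D NLSE with NumPy. The coefficients, the sech input pulse and the omission of the Drude plasma coupling are all simplifying assumptions, not the paper's model.

```python
# Minimal serial sketch of split-step Fourier propagation for a 1D NLSE:
#   dA/dz = -i (beta2/2) d^2A/dt^2 + i gamma |A|^2 A
# Illustrative only; the paper's full model also couples a Drude plasma
# term, which is omitted here.
import numpy as np

n, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])   # angular frequencies

beta2, gamma, dz, steps = -1.0, 1.0, 1e-3, 5000    # anomalous dispersion
A = 1.0 / np.cosh(t)                               # sech input pulse

half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)   # dispersion half-step
for _ in range(steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))     # linear half-step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)    # nonlinear full step
    A = np.fft.ifft(half_disp * np.fft.fft(A))     # linear half-step

print(np.max(np.abs(A)))   # ~1.0: the fundamental soliton is preserved
```

On a GPU, each frequency bin and each grid point in the two inner products is an independent thread, which is what makes the CUDA mapping effective.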
Abstract:
We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. The specific implementation has been performed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
Abstract:
The cell:cell bond between an immune cell and an antigen-presenting cell is a necessary event in the activation of the adaptive immune response. At the juncture between the cells, cell surface molecules on the opposing cells form non-covalent bonds and a distinct patterning is observed that is termed the immunological synapse. An important binding molecule in the synapse is the T-cell receptor (TCR), which is responsible for antigen recognition through its binding with a major-histocompatibility complex with bound peptide (pMHC). This bond leads to intracellular signalling events that culminate in the activation of the T-cell, and ultimately leads to the expression of the immune effector function. The temporal analysis of the TCR bonds during the formation of the immunological synapse presents a problem to biologists, due to the spatio-temporal scales (nanometres and picoseconds) that compare with experimental uncertainty limits. In this study, a linear stochastic model, derived from a nonlinear model of the synapse, is used to analyse the temporal dynamics of the bond attachments for the TCR. Mathematical analysis and numerical methods are employed to analyse the qualitative dynamics of the nonequilibrium membrane dynamics, with the specific aim of calculating the average persistence time for the TCR:pMHC bond. A single-threshold method, which has previously been used to successfully calculate the TCR:pMHC contact path sizes in the synapse, is applied to produce results for the average contact times of the TCR:pMHC bonds. This method is extended through the development of a two-threshold method, which produces results suggesting the average time persistence for the TCR:pMHC bond is of the order of 2–4 seconds, values that agree with experimental evidence for TCR signalling. The study reveals two distinct scaling regimes in the time-persistence survival probability density profile of these bonds, one dominated by thermal fluctuations and the other associated with TCR signalling. Analysis of the thermal fluctuation regime reveals a minimal contribution to the average time persistence calculation, which has an important biological implication when comparing the probabilistic models to experimental evidence: in cases where only a few statistics can be gathered from experimental conditions, the results are unlikely to match the probabilistic predictions. The results also identify a rescaling relationship between the thermal noise and the bond length, suggesting that a recalibration of the experimental conditions, to adhere to this scaling relationship, will enable biologists to identify the start of the signalling regime for previously unobserved receptor:ligand bonds. Also, the regime associated with TCR signalling exhibits a universal decay rate for the persistence probability that is independent of the bond length.
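To make the threshold idea concrete, the sketch below simulates a fluctuating coordinate (an Ornstein-Uhlenbeck process standing in for the membrane separation) and estimates the mean persistence time of excursions above a single threshold. All parameter values are arbitrary stand-ins rather than the thesis's membrane model, and the two-threshold refinement is not shown.

```python
# Illustrative single-threshold persistence-time estimate: simulate a
# fluctuating (Ornstein-Uhlenbeck) coordinate and measure how long
# excursions above a threshold last on average.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, tau, sigma = 1e-3, 500_000, 0.1, 1.0
x = np.zeros(n_steps)
noise = rng.normal(0.0, np.sqrt(dt), n_steps - 1)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - (x[i] / tau) * dt + sigma * noise[i]

threshold = 0.2
above = x > threshold
edges = np.diff(above.astype(int))      # +1 = excursion starts, -1 = ends
starts = np.where(edges == 1)[0]
ends = np.where(edges == -1)[0]
m = min(len(starts), len(ends))         # drop an unfinished final excursion
durations = (ends[:m] - starts[:m]) * dt
print(f"mean persistence above threshold: {durations.mean():.4f} s")
```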
Abstract:
This thesis presents the study of a two-degree-of-freedom (2 DOF) nonlinear system consisting of two grounded linear oscillators coupled to two separate lightweight nonlinear energy sinks of essentially nonlinear stiffness. The concepts of Targeted Energy Transfer (TET) and the nonlinear energy sink (NES) are introduced, previous studies of energy pumping and NESs are reviewed, and the characteristics of nonlinear energy pumping are set out at the start of the thesis. Since the aim is to design a tremor-reduction assessment device, background knowledge of tremor reduction is also covered. The research comprises two main parts: a theoretical study of nonlinear energy pumping, and experiments on a nonlinear vibration reduction model. Throughout, the NES is used as the core attachment. A new theoretical nonlinear vibration reduction model, in which two NESs are attached to a primary system, has been designed and tested using targeted energy transfer. Systems with series-connection and parallel-connection structures were designed for the tests. A genetic algorithm was used to search for suitable component values, and a further experiment was run with the final components. The results were compared to identify the most efficient structure and components for the theoretical model. A tremor-reduction experiment, aimed at designing a device for reducing human body tremor, was then designed using the earlier theoretical method and tested on a tremor-reduction model. The experiment comprises several tests: a system with a single attached NES, and systems with two attached NESs in different structures. The results of the theoretical and experimental models are compared and discussed. The thesis closes with suggestions for further work towards the design of the tremor-reduction device.
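A minimal numerical sketch of the core ingredient, assuming SciPy: one grounded linear oscillator coupled to a light NES through an essentially nonlinear (purely cubic) spring, driven by an impulsive initial velocity. The parameter values are illustrative, not the thesis's design.

```python
# Sketch: a linear primary oscillator with a light, essentially nonlinear
# energy sink (cubic coupling spring, no linear coupling stiffness).
import numpy as np
from scipy.integrate import solve_ivp

m1, k1, c1 = 1.0, 1.0, 0.01        # primary oscillator
m2, c2, k_nl = 0.05, 0.01, 10.0    # light NES, purely cubic coupling

def rhs(t, y):
    x1, v1, x2, v2 = y
    f_c = k_nl * (x1 - x2) ** 3 + c2 * (v1 - v2)   # coupling force
    a1 = (-k1 * x1 - c1 * v1 - f_c) / m1
    a2 = f_c / m2
    return [v1, a1, v2, a2]

# Impulsive loading: initial velocity on the primary mass only.
sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 1.0, 0.0, 0.0],
                max_step=0.01, rtol=1e-8)

# Energy remaining in the primary system: TET should pump it to the NES.
e1 = 0.5 * m1 * sol.y[1] ** 2 + 0.5 * k1 * sol.y[0] ** 2
print(f"primary energy: {e1[0]:.3f} -> {e1[-1]:.2e}")
```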
Abstract:
Popular dimension reduction and visualisation algorithms, for instance metric Multidimensional Scaling, t-distributed Stochastic Neighbour Embedding and the Gaussian Process Latent Variable Model, typically rely on the assumption that input dissimilarities are Euclidean. It is well known that this assumption does not hold for most datasets: high-dimensional data often sits upon a manifold of unknown global geometry. We present a method for improving the manifold charting process, coupled with Elastic MDS, such that we no longer assume that the manifold is Euclidean, or of any particular structure. We draw on the benefits of different dissimilarity measures, allowing their relative responsibilities, under a linear combination, to drive the visualisation process.
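As a rough illustration of the linear-combination idea (with scikit-learn, a fixed mixing weight and a geodesic approximation all assumed for the sketch, rather than learned responsibilities as in the abstract):

```python
# Sketch: drive an MDS embedding with a weighted combination of two
# dissimilarity measures - straight-line Euclidean and graph geodesics.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import kneighbors_graph

X, _ = make_swiss_roll(n_samples=300, random_state=0)

d_euclid = pairwise_distances(X)                        # straight-line
knn = kneighbors_graph(X, n_neighbors=8, mode='distance')
d_geo = shortest_path(knn, method='D', directed=False)  # graph geodesics
# (assumes the kNN graph is connected, so d_geo has no infinities)

alpha = 0.7                                             # fixed mixing weight
d_mix = alpha * d_geo + (1 - alpha) * d_euclid

emb = MDS(n_components=2, dissimilarity='precomputed',
          random_state=0).fit_transform(d_mix)
print(emb.shape)  # (300, 2)
```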
Abstract:
Acknowledgements: A.P. would like to acknowledge the support of the National Subsea Research Institute (NSRI), UK. E.P. and M.W. are grateful for partial support provided by the Italian Ministry of Education, University and Research (MIUR) through the PRIN-funded programme 2010/11 N. 2010MBJK5B.
Abstract:
Numerous works have been conducted on modelling basic compliant elements such as wire beams, and closed-form analytical models of most basic compliant elements have been well developed. However, the modelling of complex compliant mechanisms remains challenging. This paper proposes a constraint-force-based (CFB) modelling approach for compliant mechanisms, with a particular emphasis on modelling complex compliant mechanisms. The proposed CFB modelling approach can be regarded as an improved free-body-diagram (FBD) based modelling approach, and can be extended to a development of the screw-theory-based design approach. A compliant mechanism can be decomposed into rigid stages and compliant modules. A compliant module can offer elastic forces due to its deformation; such elastic forces are regarded as variable constraint forces in the CFB modelling approach. Additionally, the CFB modelling approach defines external forces applied on a compliant mechanism as constant constraint forces. If a compliant mechanism is at static equilibrium, all the rigid stages are also at static equilibrium under the influence of the variable and constant constraint forces. Therefore, the constraint-force equilibrium equations for all the rigid stages can be obtained, and the analytical model of the compliant mechanism can be derived from these equations. The CFB modelling approach can model a compliant mechanism both linearly and nonlinearly, can obtain the displacements of any points of the rigid stages, and allows external forces to be exerted at any positions on the rigid stages. Compared with the FBD-based modelling approach, the CFB modelling approach does not need to identify the possible deformed configuration of a complex compliant mechanism in order to obtain the geometric compatibility conditions and the force equilibrium equations. Additionally, the mathematical expressions in the CFB approach have an easily understood physical meaning. Using the CFB modelling approach, the variable constraint forces of three compliant modules, a wire beam, a four-beam compliant module and an eight-beam compliant module, are derived in this paper. Based on these variable constraint forces, the linear and non-linear models of a decoupled XYZ compliant parallel mechanism are derived and verified by FEA simulations and experimental tests.
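A toy illustration of the equilibrium idea, not the paper's formulation: a single rigid stage suspended by two nonlinear compliant modules, each supplying a variable constraint force of an assumed form f(x) = k1·x + k3·x³, loaded by a constant external force, with the equilibrium displacement found numerically via SciPy.

```python
# Toy constraint-force equilibrium: constant external force balanced by
# variable (displacement-dependent) constraint forces from two modules.
from scipy.optimize import fsolve

k1, k3 = 100.0, 5.0e4          # illustrative stiffness coefficients, N/m, N/m^3
f_ext = 2.5                    # constant constraint (external) force, N

def residual(x):
    # Variable constraint forces from two identical modules oppose the load.
    f_modules = 2 * (k1 * x + k3 * x**3)
    return f_ext - f_modules   # static equilibrium: external = elastic reaction

x_eq = fsolve(residual, 0.0)[0]
print(f"equilibrium displacement: {x_eq * 1e3:.3f} mm")
print(f"linear-only estimate:     {f_ext / (2 * k1) * 1e3:.3f} mm")
```

The gap between the two printed values is exactly the nonlinear stiffening effect that a linear-only model would miss.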
Abstract:
The predictive capability of high-fidelity finite element modelling, to accurately capture the damage and crush behaviour of composite structures, relies on the acquisition of accurate material properties, some of which have necessitated the development of novel approaches. This paper details the measurement of interlaminar and intralaminar fracture toughness and the non-linear shear behaviour of carbon fibre (AS4)/thermoplastic polyetherketoneketone (PEKK) composite laminates, and the utilisation of these properties for the accurate computational modelling of crush. Double-cantilever-beam (DCB), four-point end-notched flexure (4ENF) and mixed-mode bending (MMB) test configurations were used to determine the initiation and propagation fracture toughness in mode I, mode II and mixed-mode loading, respectively. Compact tension (CT) and compact compression (CC) test samples were employed to determine the intralaminar longitudinal tensile and compressive fracture toughness. V-notched rail shear tests were used to measure the highly non-linear shear behaviour associated with thermoplastic composites, and fracture toughness. Corresponding numerical models of these tests were developed for verification and yielded good correlation with the experimental response. This also confirmed the accuracy of the measured values, which were then employed as input material parameters for modelling the crush behaviour of a corrugated test specimen.
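For orientation, a minimal sketch of the standard DCB data reduction (the modified beam theory of ASTM D5528, G_I = 3Pδ / (2B(a + |Δ|))); the numbers passed in below are invented placeholders, not measured AS4/PEKK data.

```python
# Mode I strain energy release rate from DCB data, modified beam theory:
#   G_I = 3 * P * delta / (2 * B * (a + |Delta|))
# P: load, delta: opening displacement, B: width, a: crack length,
# Delta: crack-length correction from the compliance fit (optional here).
def g1_modified_beam_theory(load_N, opening_m, width_m, crack_m, delta_m=0.0):
    """Mode I strain energy release rate, in J/m^2."""
    return 3.0 * load_N * opening_m / (2.0 * width_m * (crack_m + abs(delta_m)))

# Placeholder example: P = 60 N, delta = 4 mm, B = 25 mm, a = 50 mm.
print(f"G_I ~ {g1_modified_beam_theory(60.0, 4e-3, 25e-3, 50e-3):.0f} J/m^2")
```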
Abstract:
Thermoplastic composites are likely to emerge as the preferred solution for meeting the high-volume production demands of passenger road vehicles. Substantial effort is currently being directed towards the development of new modelling techniques to reduce the extent of costly and time-consuming physical testing. Developing a high-fidelity numerical model to predict the crush behaviour of composite laminates is dependent on the accurate measurement of material properties as well as a thorough understanding of the damage mechanisms associated with crush events. This paper details the manufacture, testing and modelling of self-supporting corrugated-shaped thermoplastic composite specimens for crashworthiness assessment. These specimens demonstrated a 57.3% higher specific energy absorption compared to identical specimens made from thermoset composites. The corresponding damage mechanisms were investigated in situ using digital microscopy and post-analysed using Scanning Electron Microscopy (SEM). Splaying and fragmentation were the two primary failure modes, involving fibre breakage, matrix cracking and delamination. A mesoscale composite damage model, with new non-linear shear constitutive laws, which combines a range of novel techniques to accurately capture the material response under crushing, is presented. The force-displacement curves, damage parameter maps and dissipated energy obtained from the numerical analysis are shown to be in good qualitative and quantitative agreement with the experimental results. The proposed approach could significantly reduce the extent of physical testing required in the development of crashworthy structures.
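The specific energy absorption (SEA) metric quoted above is simply the area under the crush force-displacement curve divided by the crushed mass; a minimal sketch follows, with a synthetic placeholder curve and mass rather than test data.

```python
# Sketch of the specific energy absorption (SEA) metric for crush tests:
# absorbed energy = integral of F dx; SEA = energy / crushed mass.
import numpy as np

displacement = np.linspace(0.0, 0.05, 500)            # m
force = 20e3 * (1 - np.exp(-displacement / 0.002))    # N, synthetic plateau

# Trapezoidal integration of F dx gives the absorbed energy in joules.
energy_absorbed = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement))
crushed_mass = 0.040                                  # kg, placeholder
print(f"SEA ~ {energy_absorbed / crushed_mass / 1e3:.1f} kJ/kg")
```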
Abstract:
Robust joint modelling is an emerging field of research. Through advances in electronic patient healthcare records, the popularity of joint modelling approaches has grown rapidly in recent years, providing simultaneous analysis of longitudinal and survival data. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint model, that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact being down-weighted, thus providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and a growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the detrimental relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patients' survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with increasing contrast between the robust and standard joint models when a greater proportion of more extreme outliers is present. Through illustration of the gains in efficiency and reliability of parameters when outliers exist, the potential of robust joint modelling is evident. The research presented in this thesis highlights the benefits of, and stresses the need to utilise, a more robust approach to joint modelling in the presence of longitudinal outliers.
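The down-weighting mechanism behind a t-distributional assumption can be shown in a few lines: in an EM-style fit, each standardised residual r_i receives weight w_i = (ν + 1)/(ν + r_i²), so outlying longitudinal measurements contribute less. The sketch below illustrates only this mechanism on synthetic residuals; it is not the thesis's joint model.

```python
# Sketch: EM-style weights implied by a t-distribution with nu degrees
# of freedom; large residuals are automatically down-weighted.
import numpy as np

rng = np.random.default_rng(1)
residuals = np.concatenate([rng.normal(0, 1, 97), [8.0, -9.0, 10.0]])

nu = 4.0  # degrees of freedom of the assumed t-distribution
weights = (nu + 1.0) / (nu + residuals**2)

print(f"median weight (bulk):  {np.median(weights):.2f}")
print(f"weights of 3 outliers: {np.round(weights[-3:], 3)}")
```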
Abstract:
Near-surface air temperature is an important determinant of the surface energy balance of glaciers and is often represented by a constant linear temperature gradient (TG) in models. Spatiotemporal variability in 2 m air temperature was measured across the debris-covered Miage Glacier, Italy, over an 89 d period during the 2014 ablation season using a network of 19 stations. Air temperature was found to be strongly dependent upon elevation for most stations, even under varying meteorological conditions and at different times of day, and its spatial variability was well explained by a locally derived mean linear TG (MG–TG) of −0.0088 °C m⁻¹. However, local temperature depressions occurred over areas of very thin or patchy debris cover. The MG–TG, together with other air TGs extrapolated from both on- and off-glacier sites, was applied in a distributed energy-balance model. Compared with piecewise air temperature extrapolation from all on-glacier stations, modelled ablation using the MG–TG increased by <1%, increasing to >4% using the environmental 'lapse rate'. Ice melt under thick debris was relatively insensitive to air temperature, while the effects of different temperature extrapolation methods were strongest at high-elevation sites with thin and patchy debris cover.
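The linear-gradient extrapolation itself is a one-line formula, T(z) = T_ref + TG·(z − z_ref); a minimal sketch follows, using the paper's mean on-glacier gradient of −0.0088 °C m⁻¹, with the station reading and elevations as placeholder values.

```python
# Sketch: distributing air temperature with a constant linear gradient,
# T(z) = T_ref + TG * (z - z_ref).
def extrapolate_temperature(t_ref_c, z_ref_m, z_m, tg_c_per_m=-0.0088):
    """Air temperature (degC) at elevation z from a reference reading."""
    return t_ref_c + tg_c_per_m * (z_m - z_ref_m)

# e.g. 8.0 degC measured at 2000 m, extrapolated 500 m higher:
print(f"{extrapolate_temperature(8.0, 2000.0, 2500.0):.2f} degC")  # 3.60 degC
```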
Abstract:
This paper presents a three-dimensional, thermo-mechanical modelling approach to the cooling and solidification phases associated with the shape casting of metals, i.e. die, sand and investment casting. Novel vertex-based Finite Volume (FV) methods are described and employed with regard to the small-strain, non-linear Computational Solid Mechanics (CSM) capabilities required to model shape casting. The CSM capabilities include the non-linear material phenomena of creep and thermo-elasto-visco-plasticity at high temperatures and thermo-elasto-visco-plasticity at low temperatures, and also multi-body deformable contact, which can occur between the metal casting and the mould. The vertex-based FV methods, which can be readily applied to unstructured meshes, are included within a comprehensive FV modelling framework, PHYSICA. The additional heat transfer (by conduction and convection), filling, porosity and solidification algorithms existing within PHYSICA for the complete modelling of shape-casting processes employ cell-centred FV methods. The thermo-mechanical coupling is performed in a staggered incremental fashion, which addresses the possible gap formation between the component and the mould, and is ultimately validated against a variety of shape-casting benchmarks.
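A highly simplified sketch of the staggered incremental coupling pattern: each increment first advances the thermal field, then updates the mechanical response from the new temperatures. This toy uses 1D explicit finite differences and a fully constrained thermal-stress estimate, all invented for illustration; it is not the PHYSICA finite-volume implementation.

```python
# Toy staggered thermo-mechanical loop: thermal step, then mechanical step.
import numpy as np

n, dx, dt, alpha = 50, 1e-3, 1e-4, 1e-5      # grid, steps, thermal diffusivity
E, a_th, T0 = 70e9, 2.3e-5, 700.0            # modulus, expansion coeff, initial T
T = np.full(n, T0)
T[0] = T[-1] = 20.0                          # chilled mould walls

for step in range(2000):
    # 1) thermal increment (explicit finite differences, stable: a*dt/dx^2 << 0.5)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # 2) mechanical increment: fully constrained thermal stress estimate
    sigma = -E * a_th * (T - T0)

print(f"peak thermal stress estimate: {np.abs(sigma).max() / 1e6:.0f} MPa")
```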
Abstract:
This article is the third in a series working towards the construction of a realistic, evolving, non-linear force-free coronal-field model for the solar magnetic carpet. Here, we present preliminary results of 3D time-dependent simulations of the small-scale coronal field of the magnetic carpet. Four simulations are considered, each with the same evolving photospheric boundary condition: a 48-hour time series of synthetic magnetograms produced from the model of Meyer et al. (Solar Phys. 272, 29, 2011). Three simulations include a uniform, overlying coronal magnetic field of differing strength; the fourth simulation includes no overlying field. The build-up, storage, and dissipation of magnetic energy within the simulations is studied. In particular, we study their dependence upon the evolution of the photospheric magnetic field and the strength of the overlying coronal field. We also consider where energy is stored and dissipated within the coronal field. The free magnetic energy built up is found to be more than sufficient to power small-scale, transient phenomena such as nanoflares and X-ray bright points, with the bulk of the free energy found to be stored low down, between 0.5 and 0.8 Mm. The energy dissipated is currently found to be too small to account for the heating of the entire quiet-Sun corona. However, the form and location of energy-dissipation regions qualitatively agree with what is observed on small scales on the Sun. Future MHD modelling using the same synthetic magnetograms may lead to a higher energy release.
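The basic energy diagnostic used in such studies is the magnetic energy density B²/(2μ₀) integrated over the simulation box; a minimal sketch follows, evaluated for a synthetic field on a uniform grid (grid size, cell spacing and field values are all placeholders).

```python
# Sketch: total magnetic energy of a gridded field, E = sum B^2/(2 mu0) dV.
import numpy as np

mu0 = 4e-7 * np.pi                            # vacuum permeability, SI
nx = ny = nz = 32
dx = dy = dz = 0.1e6                          # 0.1 Mm cells, in metres

z = np.linspace(0.0, nz * dz, nz)
# Synthetic field decaying with height, ~10 G (1e-3 T) at the base:
Bx = 1e-3 * np.exp(-z / 1e6)[None, None, :] * np.ones((nx, ny, nz))
By = np.zeros_like(Bx)
Bz = 0.2e-3 * np.ones_like(Bx)

energy_density = (Bx**2 + By**2 + Bz**2) / (2 * mu0)   # J / m^3
total_energy = energy_density.sum() * dx * dy * dz     # J
print(f"total magnetic energy ~ {total_energy:.2e} J")
```

The free energy quoted in the abstract is then the difference between this total and the energy of the corresponding potential (current-free) field with the same boundary flux.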
Abstract:
Event extraction from texts aims to detect structured information such as what has happened, to whom, where and when. Event extraction and visualisation are typically considered as two different tasks. In this paper, we propose a novel approach based on probabilistic modelling to jointly extract and visualise events from tweets, where both tasks benefit from each other. We model each event as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualisation space. The manifold assumption, that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the high-dimensional space, is incorporated into the learning framework through a regularisation term. Experimental results show that the proposed approach can effectively deal with both event extraction and visualisation, and performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach for event extraction and visualisation.
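The manifold-regularisation ingredient can be sketched concretely: a k-NN graph Laplacian L over the documents gives the quadratic penalty tr(Yᵀ L Y), which is large when intrinsically close tweets are placed far apart in the visualisation coordinates Y. The sketch below uses random placeholder features in place of tweet representations and is not the paper's full learning framework.

```python
# Sketch: graph-Laplacian smoothness penalty for embedding coordinates.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))               # placeholder tweet features
Y = rng.normal(size=(200, 2))                # candidate 2D coordinates

W = kneighbors_graph(X, n_neighbors=10, mode='connectivity')
W = 0.5 * (W + W.T)                          # symmetrise the affinity graph
L = laplacian(W)

penalty = np.trace(Y.T @ (L @ Y))            # tr(Y' L Y) smoothness term
print(f"manifold penalty: {penalty:.2f}")
```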