953 results for Hierarchical Linear Modelling


Relevance: 30.00%

Publisher:

Abstract:

The cell:cell bond between an immune cell and an antigen-presenting cell is a necessary event in the activation of the adaptive immune response. At the junction between the cells, cell-surface molecules on the opposing cells form non-covalent bonds, and a distinct patterning is observed that is termed the immunological synapse. An important binding molecule in the synapse is the T-cell receptor (TCR), which is responsible for antigen recognition through its binding with a major histocompatibility complex with bound peptide (pMHC). This bond leads to intracellular signalling events that culminate in the activation of the T-cell, and ultimately to the expression of the immune effector function. The temporal analysis of the TCR bonds during the formation of the immunological synapse presents a problem to biologists, owing to the spatio-temporal scales (nanometres and picoseconds) that are comparable with experimental uncertainty limits. In this study, a linear stochastic model, derived from a nonlinear model of the synapse, is used to analyse the temporal dynamics of the bond attachments for the TCR. Mathematical analysis and numerical methods are employed to analyse the qualitative nonequilibrium membrane dynamics, with the specific aim of calculating the average persistence time for the TCR:pMHC bond. A single-threshold method, which has previously been used successfully to calculate the TCR:pMHC contact path sizes in the synapse, is applied to produce results for the average contact times of the TCR:pMHC bonds. This method is extended through the development of a two-threshold method, which produces results suggesting that the average time persistence for the TCR:pMHC bond is on the order of 2-4 seconds, values that agree with experimental evidence for TCR signalling.
The study reveals two distinct scaling regimes in the time-persistence survival probability density profile of these bonds, one dominated by thermal fluctuations and the other associated with TCR signalling. Analysis of the thermal-fluctuation regime reveals a minimal contribution to the average time-persistence calculation, which has an important biological implication when comparing the probabilistic models with experimental evidence: in cases where only a few statistics can be gathered under experimental conditions, the results are unlikely to match the probabilistic predictions. The results also identify a rescaling relationship between the thermal noise and the bond length, suggesting that recalibrating the experimental conditions to adhere to this scaling relationship will enable biologists to identify the start of the signalling regime for previously unobserved receptor:ligand bonds. Finally, the regime associated with TCR signalling exhibits a universal decay rate for the persistence probability that is independent of the bond length.
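The threshold idea behind these persistence-time estimates can be sketched generically. The toy below is not the thesis model: it uses a discretised Ornstein-Uhlenbeck process as a stand-in for the membrane-separation signal, and the threshold, noise and time-step values are invented for illustration. The two-threshold (hysteresis) rule is what suppresses spurious events from rapid thermal fluctuations around a single threshold.

```python
import random

def persistence_times(signal, dt, low, high):
    """Two-threshold (hysteresis) persistence times: an event starts
    when the signal drops below `low` and ends only when it rises
    above `high`, so thermal jitter around one threshold is not
    counted as many separate bond events."""
    times, start, bound = [], None, False
    for i, x in enumerate(signal):
        if not bound and x < low:
            bound, start = True, i
        elif bound and x > high:
            bound = False
            times.append((i - start) * dt)
    return times

# Toy stand-in for the membrane separation: a discretised
# Ornstein-Uhlenbeck process (all parameters illustrative).
random.seed(1)
dt, theta, sigma = 0.01, 1.0, 1.0
x, signal = 0.0, []
for _ in range(50_000):
    x += -theta * x * dt + sigma * dt**0.5 * random.gauss(0.0, 1.0)
    signal.append(x)

bond_times = persistence_times(signal, dt, low=-0.5, high=0.5)
```

Averaging `bond_times` gives the analogue of the average persistence time studied in the abstract, here for synthetic data only.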

Relevance: 30.00%

Publisher:

Abstract:

This thesis studies a two-degree-of-freedom (2-DOF) nonlinear system consisting of two grounded linear oscillators coupled to two separate lightweight nonlinear energy sinks (NESs) of essentially nonlinear stiffness. The concepts of Targeted Energy Transfer (TET) and the NES are introduced, previous research on energy pumping and NESs is reviewed, and the characteristics of nonlinear energy pumping are presented at the start of the thesis. Because the aim is to design a tremor-reduction assessment device, background on tremor reduction is also covered. The research comprises two main parts: a dynamical-theoretic study of nonlinear energy pumping, and experiments on a nonlinear vibration-reduction model. The NES is used as the core attachment throughout. A new theoretic method of nonlinear vibration reduction, in which two NESs are attached to a primary system, is designed and tested using targeted energy transfer; both series-connection and parallel-connection structures are examined. A genetic algorithm is used to search for suitable component values, and a further experiment is run with the final components. The results are compared to identify the most efficient structure and components for the theoretic model. A tremor-reduction experiment, aimed at an application for reducing human body tremor, is then designed using the earlier theoretic method and tested with a tremor-reduction model; it includes several tests, with a single attached NES and with two NESs in different structural configurations. The results of the theoretic and experimental models are compared and discussed.
The thesis closes with suggestions for further work towards the design of the tremor-reduction device.
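The core ingredient can be sketched in minimal form: a linear primary oscillator coupled to a light NES through an essentially nonlinear (cubic) stiffness and a small damper. This is not the thesis's 2-DOF two-NES system, and every parameter value below is invented for illustration; it only shows the mechanism by which energy leaves the primary mass.

```python
import math

def simulate_nes(steps=50_000, dt=1e-3, m=1.0, k=1.0,
                 m_nes=0.05, k3=50.0, c=0.2):
    """Semi-implicit Euler integration of a linear oscillator (x)
    coupled to a light NES (u) via a cubic 'essentially nonlinear'
    stiffness k3 and a small damper c. The damper on the relative
    velocity is the only dissipation, so any energy lost is energy
    that was first pumped into the NES motion."""
    x, v = 1.0, 0.0   # primary mass released from rest at x = 1
    u, w = 0.0, 0.0   # NES initially at rest
    for _ in range(steps):
        rel, rvel = x - u, v - w
        f_c = k3 * rel**3 + c * rvel      # coupling force on the primary
        v += (-k * x - f_c) / m * dt      # update velocities first
        w += f_c / m_nes * dt
        x += v * dt                       # then positions (symplectic)
        u += w * dt
    return x, v, u, w

def total_energy(state, m=1.0, k=1.0, m_nes=0.05, k3=50.0):
    """Kinetic + potential energy of both masses and the cubic spring."""
    x, v, u, w = state
    return (0.5 * m * v**2 + 0.5 * k * x**2
            + 0.5 * m_nes * w**2 + 0.25 * k3 * (x - u)**4)
```

Comparing `total_energy` at the end of a run against the initial value (0.5 here) gives a crude measure of how much energy the NES has extracted and dissipated.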

Relevance: 30.00%

Publisher:

Abstract:

Popular dimension reduction and visualisation algorithms rely on the assumption that input dissimilarities are Euclidean; examples include Metric Multidimensional Scaling, t-distributed Stochastic Neighbour Embedding and the Gaussian Process Latent Variable Model. It is well known that this assumption does not hold for most datasets, and high-dimensional data often sit upon a manifold of unknown global geometry. We present a method for improving the manifold charting process, coupled with Elastic MDS, such that we no longer assume that the manifold is Euclidean, or of any particular structure. We draw on the benefits of different dissimilarity measures, allowing the relative responsibilities, under a linear combination, to drive the visualisation process.
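The linear-combination idea can be sketched generically: several dissimilarity matrices are blended under relative responsibilities before embedding. Classical (Torgerson) MDS is used below as a simple stand-in for the Elastic MDS of the abstract, and the weights are illustrative.

```python
import numpy as np

def blended_dissimilarity(D_list, weights):
    """Linear combination of dissimilarity matrices; `weights` play the
    role of relative responsibilities and are normalised to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    out = np.zeros_like(D_list[0], dtype=float)
    for wi, Di in zip(w, D_list):
        out += wi * Di
    return out

def classical_mds(D, dim=2):
    """Embed a dissimilarity matrix with classical (Torgerson) MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # top-`dim` eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

For genuinely Euclidean input the classical embedding reproduces the pairwise distances exactly; the blending step is what lets non-Euclidean measures contribute.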

Relevance: 30.00%

Publisher:

Abstract:

Acknowledgements: A.P. would like to acknowledge the support of the National Subsea Research Institute (NSRI) UK. E.P. and M.W. are grateful for partial support provided by the Italian Ministry of Education, University and Research (MIUR) through the PRIN-funded program 2010/11 N.2010MBJK5B.

Relevance: 30.00%

Publisher:

Abstract:

Numerous works have been conducted on modelling basic compliant elements such as wire beams, and closed-form analytical models of most basic compliant elements are well developed. However, the modelling of complex compliant mechanisms remains a challenging task. This paper proposes a constraint-force-based (CFB) modelling approach to model compliant mechanisms, with a particular emphasis on complex compliant mechanisms. The proposed CFB modelling approach can be regarded as an improved free-body-diagram (FBD) based modelling approach, and can be extended to a development of the screw-theory-based design approach. A compliant mechanism can be decomposed into rigid stages and compliant modules. A compliant module offers elastic forces due to its deformation; such elastic forces are regarded as variable constraint forces in the CFB modelling approach. Additionally, the CFB modelling approach defines external forces applied to a compliant mechanism as constant constraint forces. If a compliant mechanism is at static equilibrium, all the rigid stages are also at static equilibrium under the influence of the variable and constant constraint forces. Therefore, the constraint-force equilibrium equations for all the rigid stages can be obtained, and the analytical model of the compliant mechanism can be derived from these equations. The CFB modelling approach can model a compliant mechanism both linearly and nonlinearly, can obtain the displacements of any points on the rigid stages, and allows external forces to be exerted at any positions on the rigid stages. Compared with the FBD-based modelling approach, the CFB modelling approach does not need to identify the possible deformed configuration of a complex compliant mechanism in order to obtain the geometric compatibility conditions and the force equilibrium equations. Additionally, the mathematical expressions in the CFB approach have an easily understood physical meaning.
Using the CFB modelling approach, the variable constraint forces of three compliant modules, a wire beam, a four-beam compliant module and an eight-beam compliant module, have been derived in this paper. Based on these variable constraint forces, the linear and non-linear models of a decoupled XYZ compliant parallel mechanism are derived, and verified by FEA simulations and experimental tests.
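The constraint-force balance can be shown in its simplest linear form. For a single rigid stage, each compliant module contributes a variable constraint force -K_i x and the external load is a constant constraint force f, so equilibrium gives (sum_i K_i) x = f. The sketch below handles only this one-stage linear case, with invented stiffness values; the paper's approach covers multiple stages and nonlinear modules.

```python
import numpy as np

def stage_displacement(module_stiffnesses, f_ext):
    """Solve the static constraint-force balance for one rigid stage:
    sum_i(-K_i @ x) + f_ext = 0   =>   (sum_i K_i) @ x = f_ext."""
    K = np.zeros_like(module_stiffnesses[0], dtype=float)
    for K_i in module_stiffnesses:
        K += K_i                      # modules act in parallel on the stage
    return np.linalg.solve(K, np.asarray(f_ext, dtype=float))
```

For example, two diagonal module stiffnesses diag(2, 2) and diag(3, 1) under a load (10, 3) give a displacement of (2, 1).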

Relevance: 30.00%

Publisher:

Abstract:

Biotic interactions can have large effects on species distributions, yet their role in shaping species ranges is seldom explored due to historical difficulties in incorporating biotic factors into models without a priori knowledge of interspecific interactions. Improved species distribution models (SDMs), which account for biotic factors and do not require a priori knowledge of species interactions, are needed to fully understand species distributions. Here, we model the influence of abiotic and biotic factors on species distribution patterns and explore the robustness of distributions under future climate change. We fit hierarchical spatial models using Integrated Nested Laplace Approximation (INLA) for lagomorph species throughout Europe and test the predictive ability of models containing only abiotic factors against models containing both abiotic and biotic factors. We account for residual spatial autocorrelation using a conditional autoregressive (CAR) model. Model outputs are used to estimate the areas in which abiotic and biotic factors determine species' ranges. INLA models containing both abiotic and biotic factors had substantially better predictive ability than models containing abiotic factors only, for all but one of the four species. In models containing abiotic and biotic factors, both appeared equally important as determinants of lagomorph ranges, but their influences were spatially heterogeneous. Parts of widespread lagomorph ranges highly influenced by biotic factors will be less robust to future changes in climate, whereas parts of more localised species ranges highly influenced by the environment may be less robust to future climate change. SDMs that do not explicitly include biotic factors are potentially misleading and omit a very important source of variation. For the field of species distribution modelling to advance, biotic factors must be taken into account in order to improve the reliability of predictions of species distribution patterns, both presently and under future climate change.
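The CAR component used for residual spatial autocorrelation has a simple closed-form precision matrix. The sketch below builds the proper-CAR version Q = tau * (D - rho * W) for a binary neighbourhood matrix W; the tau and rho defaults are illustrative, and this is a generic construction rather than the paper's fitted model.

```python
import numpy as np

def car_precision(W, tau=1.0, rho=0.9):
    """Precision matrix of a proper conditional autoregressive (CAR)
    model, Q = tau * (D - rho * W), where W is a symmetric binary
    neighbourhood matrix and D holds each area's neighbour count on
    the diagonal. For |rho| < 1, Q is symmetric positive definite."""
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)
```

The resulting Q can be used directly as the precision of a Gaussian spatial random effect, which is how CAR effects enter hierarchical spatial models.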

Relevance: 30.00%

Publisher:

Abstract:

The predictive capability of high-fidelity finite element modelling, to accurately capture the damage and crush behaviour of composite structures, relies on the acquisition of accurate material properties, some of which have necessitated the development of novel approaches. This paper details the measurement of the interlaminar and intralaminar fracture toughness and the non-linear shear behaviour of carbon fibre (AS4)/thermoplastic polyetherketoneketone (PEKK) composite laminates, and the utilisation of these properties for the accurate computational modelling of crush. Double-cantilever-beam (DCB), four-point end-notched flexure (4ENF) and mixed-mode bending (MMB) test configurations were used to determine the initiation and propagation fracture toughness in mode I, mode II and mixed-mode loading, respectively. Compact tension (CT) and compact compression (CC) test samples were employed to determine the intralaminar longitudinal tensile and compressive fracture toughness. V-notched rail shear tests were used to measure the highly non-linear shear behaviour associated with thermoplastic composites, and the fracture toughness. Corresponding numerical models of these tests were developed for verification and yielded good correlation with the experimental response. This also confirmed the accuracy of the measured values, which were then employed as input material parameters for modelling the crush behaviour of a corrugated test specimen.
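As a worked example of the mode-I measurement, the simple-beam-theory data reduction for a DCB test is G_I = 3*P*delta / (2*b*a), the uncorrected form of the ASTM D5528 expression. The numbers used in the snippet are illustrative, not values from the paper.

```python
def dcb_mode_i_toughness(load_N, opening_m, width_m, crack_len_m):
    """Mode-I strain energy release rate from a double-cantilever-beam
    test via simple beam theory (no crack-length correction):
        G_I = 3 * P * delta / (2 * b * a),  in J/m^2."""
    return 3.0 * load_N * opening_m / (2.0 * width_m * crack_len_m)

# Illustrative values: P = 100 N, delta = 4 mm, b = 25 mm, a = 50 mm
g1 = dcb_mode_i_toughness(100.0, 0.004, 0.025, 0.050)
```

In practice the modified beam theory of ASTM D5528 adds a crack-length correction term, but the proportionality above is the core of the data reduction.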

Relevance: 30.00%

Publisher:

Abstract:

Thermoplastic composites are likely to emerge as the preferred solution for meeting the high-volume production demands of passenger road vehicles. Substantial effort is currently being directed towards the development of new modelling techniques to reduce the extent of costly and time-consuming physical testing. Developing a high-fidelity numerical model to predict the crush behaviour of composite laminates is dependent on the accurate measurement of material properties as well as a thorough understanding of the damage mechanisms associated with crush events. This paper details the manufacture, testing and modelling of self-supporting corrugated-shaped thermoplastic composite specimens for crashworthiness assessment. These specimens demonstrated a 57.3% higher specific energy absorption compared to identical specimens made from thermoset composites. The corresponding damage mechanisms were investigated in situ using digital microscopy and post-analysed using scanning electron microscopy (SEM). Splaying and fragmentation were the two primary failure modes, involving fibre breakage, matrix cracking and delamination. A mesoscale composite damage model, with new non-linear shear constitutive laws, which combines a range of novel techniques to accurately capture the material response under crushing, is presented. The force-displacement curves, damage parameter maps and dissipated energy obtained from the numerical analysis are shown to be in good qualitative and quantitative agreement with the experimental results. The proposed approach could significantly reduce the extent of physical testing required in the development of crashworthy structures.
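Specific energy absorption, the metric behind the 57.3% comparison, is the area under the force-displacement curve divided by the mass of crushed material. A minimal trapezoidal implementation, with invented numbers in the check:

```python
def specific_energy_absorption(forces_N, displ_m, crushed_mass_kg):
    """SEA = (energy absorbed, i.e. area under the force-displacement
    curve, by trapezoidal integration) / (crushed mass), in J/kg."""
    energy = 0.0
    for i in range(1, len(forces_N)):
        energy += 0.5 * (forces_N[i] + forces_N[i - 1]) \
                      * (displ_m[i] - displ_m[i - 1])
    return energy / crushed_mass_kg
```

For a flat 1 kN crush plateau over 100 mm and 50 g of crushed material, this gives 2000 J/kg.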

Relevance: 30.00%

Publisher:

Abstract:

Robust joint modelling is an emerging field of research. Through advancements in electronic patient healthcare records, the popularity of joint modelling approaches has grown rapidly in recent years, providing simultaneous analysis of longitudinal and survival data. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint model: that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact down-weighted, thus providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and a growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patients' survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with increasing contrast between the robust and standard joint models when a greater proportion of more extreme outliers is present. Through illustration of the gains in efficiency and reliability of parameters when outliers exist, the potential of robust joint modelling is evident.
The research presented in this thesis highlights the benefits and stresses the need to utilise a more robust approach to joint modelling in the presence of longitudinal outliers.
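The down-weighting mechanism of the t-distributional assumption can be illustrated in one dimension: under a Student-t likelihood, an observation with standardised residual r receives IRLS weight (nu + 1) / (nu + r^2), so extreme values contribute little. The toy below estimates only a robust location, standing in for the full robust joint model; nu, the fixed scale and the data are all illustrative.

```python
def t_robust_mean(y, nu=4.0, scale=1.0, iters=100):
    """Location estimate under a Student-t likelihood via iteratively
    reweighted least squares. Each observation gets weight
    (nu + 1) / (nu + r**2) for standardised residual r, so gross
    outliers are down-weighted rather than dominating the fit."""
    mu = sum(y) / len(y)
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + ((yi - mu) / scale) ** 2) for yi in y]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return mu

y = [0.2, -0.1, 0.0, 0.1, -0.2, 100.0]   # one gross longitudinal outlier
```

The plain mean of `y` is dragged to about 16.7 by the single outlier, while the t-weighted estimate stays near zero, which is exactly the behaviour that yields more reliable estimates in the robust joint model.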

Relevance: 30.00%

Publisher:

Abstract:

Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers are modelled as a time series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured here through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.

Relevance: 30.00%

Publisher:

Abstract:

1. Genome-wide association studies (GWAS) enable detailed dissections of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS includes the process of repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model and at the same time is computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method we present here was successful in estimating permanent environmental effects from simulated repeated measures data. Additionally, we found that especially for variable phenotypes having large variation between years, the repeated measurements model has a substantial increase in power compared to a model using average phenotypes as a response. 4. The method is available in the R package RepeatABEL. It increases the power in GWAS having repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
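The two-stage fit described in point 2 can be sketched as: given a covariance matrix V estimated once from the polygenic, permanent-environment and residual components, each SNP's fixed effect is obtained by generalised least squares. The code below is generic GLS in Python, not the RepeatABEL internals, and the toy data are invented.

```python
import numpy as np

def gls_snp_effect(y, snp, V):
    """GLS estimate of a single SNP's fixed effect, given a
    pre-estimated phenotypic covariance matrix V:
        beta = (X' V^-1 X)^-1 X' V^-1 y,   with X = [1, snp].
    Repeated measures on the same individual enter through the
    off-diagonal structure of V."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(snp, dtype=float)])
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    return beta[1]   # beta[0] is the intercept
```

Because V is estimated only once, the per-SNP cost is a single small linear solve, which is what makes the genome-wide scan computationally feasible.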

Relevance: 30.00%

Publisher:

Abstract:

Near-surface air temperature is an important determinant of the surface energy balance of glaciers and is often represented in models by a constant linear temperature gradient (TG). Spatiotemporal variability in 2 m air temperature was measured across the debris-covered Miage Glacier, Italy, over an 89-day period during the 2014 ablation season using a network of 19 stations. Air temperature was found to be strongly dependent upon elevation for most stations, even under varying meteorological conditions and at different times of day, and its spatial variability was well explained by a locally derived mean linear TG (MG–TG) of −0.0088°C m⁻¹. However, local temperature depressions occurred over areas of very thin or patchy debris cover. The MG–TG, together with other air TGs extrapolated from both on- and off-glacier sites, was applied in a distributed energy-balance model. Compared with piecewise air temperature extrapolation from all on-glacier stations, modelled ablation using the MG–TG increased by <1%, rising to >4% using the environmental 'lapse rate'. Ice melt under thick debris was relatively insensitive to air temperature, while the effects of different temperature extrapolation methods were strongest at high-elevation sites with thin and patchy debris cover.
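The temperature forcing of such a distributed model reduces to a one-line linear extrapolation. The default gradient below is the locally derived MG–TG of −0.0088°C per metre reported in the abstract; the reference-station values used in the check are invented.

```python
def extrapolate_air_temperature(t_ref_C, z_ref_m, z_m, tg_C_per_m=-0.0088):
    """Extrapolate 2 m air temperature from a reference station using a
    constant linear temperature gradient. The default gradient is the
    Miage Glacier mean (MG-TG), -0.0088 degC per metre of elevation."""
    return t_ref_C + tg_C_per_m * (z_m - z_ref_m)
```

For example, a station reading of 10°C at 2000 m extrapolates to 9.12°C at 2100 m under the MG–TG, versus about 9.35°C under a standard environmental lapse rate of −0.0065°C m⁻¹, which is the kind of difference driving the <1% versus >4% ablation results.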

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a three-dimensional, thermo-mechanical modelling approach to the cooling and solidification phases associated with the shape casting of metals, i.e. die, sand and investment casting. Novel vertex-based Finite Volume (FV) methods are described and employed with regard to the small-strain, non-linear Computational Solid Mechanics (CSM) capabilities required to model shape casting. The CSM capabilities include the non-linear material phenomena of creep and thermo-elasto-visco-plasticity at high temperatures and thermo-elasto-visco-plasticity at low temperatures, and also the multi-body deformable contact which can occur between the metal casting and the mould. The vertex-based FV methods, which can be readily applied to unstructured meshes, are included within a comprehensive FV modelling framework, PHYSICA. The additional heat transfer (by conduction and convection), filling, porosity and solidification algorithms existing within PHYSICA for the complete modelling of the shape casting process employ cell-centred FV methods. The thermo-mechanical coupling is performed in a staggered incremental fashion, which addresses the possible gap formation between the component and the mould, and is ultimately validated against a variety of shape casting benchmarks.

Relevance: 30.00%

Publisher:

Abstract:

This article is the third in a series working towards the construction of a realistic, evolving, non-linear force-free coronal-field model for the solar magnetic carpet. Here, we present preliminary results of 3D time-dependent simulations of the small-scale coronal field of the magnetic carpet. Four simulations are considered, each with the same evolving photospheric boundary condition: a 48-hour time series of synthetic magnetograms produced from the model of Meyer et al. (Solar Phys. 272, 29, 2011). Three simulations include a uniform, overlying coronal magnetic field of differing strength; the fourth includes no overlying field. The build-up, storage and dissipation of magnetic energy within the simulations is studied. In particular, we study their dependence upon the evolution of the photospheric magnetic field and the strength of the overlying coronal field. We also consider where energy is stored and dissipated within the coronal field. The free magnetic energy built up is found to be more than sufficient to power small-scale, transient phenomena such as nanoflares and X-ray bright points, with the bulk of the free energy stored low down, between 0.5 and 0.8 Mm. The energy dissipated is currently found to be too small to account for the heating of the entire quiet-Sun corona. However, the form and location of the energy-dissipation regions qualitatively agree with what is observed on small scales on the Sun. Future MHD modelling using the same synthetic magnetograms may lead to a higher energy release.

Relevance: 30.00%

Publisher:

Abstract:

Doctorate (Doutoramento) in Forestry and Natural Resources Engineering - Instituto Superior de Agronomia - UL