32 results for CHD Prediction, Blood Serum Data Chemometrics Methods
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative, classification methods, but these are now giving way to quantitative regression methods. We review three methods. The first two, a 2D-QSAR additive-partial least squares (PLS) method and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets. The third method is an iterative self-consistent (ISC) PLS-based additive method, which is a recently developed extension to the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to building these models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, are made freely available online at the URL http://www.jenner.ac.uk/MHCPred.
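A minimal sketch of the additive 2D-QSAR idea summarised above: binding affinity (pIC50) is modelled as a sum of position-specific amino-acid contributions fitted by PLS regression. The one-hot encoding, the example peptides and the affinities are invented for illustration, and this is not the authors' MHCPred implementation.

```python
# Sketch of the additive (2D-QSAR) method: pIC50 modelled as a constant plus a
# sum of position-specific amino-acid contributions, fitted by PLS regression.
# Peptides and affinities below are made up for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    """One-hot (binary) encoding of a 9-mer: 9 positions x 20 amino acids."""
    x = np.zeros(9 * len(AMINO_ACIDS))
    for pos, aa in enumerate(peptide):
        x[pos * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return x

# Hypothetical training data: 9-mer peptides with measured pIC50 = -log10(IC50).
peptides = ["ALAKAAAAM", "GLFGAIAGF", "ILKEPVHGV", "KLNEPVLLL"]
pic50 = np.array([6.2, 7.1, 7.8, 6.9])

X = np.array([encode(p) for p in peptides])
model = PLSRegression(n_components=2, scale=False)  # latent variables chosen by cross-validation in practice
model.fit(X, pic50)

# Predicted affinity for a new peptide; model.coef_ holds the per-position,
# per-amino-acid contributions that the additive method interprets.
print(model.predict(encode("ALNEPVHGF").reshape(1, -1)))
```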
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time. These techniques include standard processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
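A minimal sketch of the "standard processing" chain mentioned above (background subtraction, apodisation, Fourier transform to produce an A-scan), written with NumPy on synthetic data; a GPU implementation of the same steps would typically substitute a drop-in array library such as CuPy. The fringe model and parameters are hypothetical.

```python
# Turning a raw spectral interferogram into an A-scan: background subtraction,
# windowing and an FFT. All values are synthetic.
import numpy as np

def a_scan(spectrum, background):
    """Compute the depth profile (A-scan) from one spectral interferogram."""
    fringe = spectrum - background              # remove the DC/reference term
    fringe *= np.hanning(fringe.size)           # apodisation to reduce side lobes
    depth_profile = np.fft.ifft(fringe)         # Fourier transform to depth space
    return 20 * np.log10(np.abs(depth_profile[: fringe.size // 2]) + 1e-12)  # dB scale

# Synthetic example: a single reflector produces a cosine fringe on the spectrum.
k = np.linspace(0, 2 * np.pi, 2048)
background = np.full_like(k, 100.0)
spectrum = background + np.cos(120 * k)         # hypothetical fringe frequency
print(a_scan(spectrum, background).argmax())    # index of the peak ~ reflector depth
```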
Abstract:
Ageing is accompanied by many visible characteristics. Other biological and physiological markers are also well described, e.g. loss of circulating sex hormones and increased inflammatory cytokines. Biomarkers for healthy ageing studies are presently predicated on existing knowledge of ageing traits. The increasing availability of data-intensive methods enables deep analysis of biological samples for novel biomarkers. We have adopted two discrete approaches in MARK-AGE Work Package 7 for biomarker discovery: (1) microarray analyses and/or proteomics in cell systems, e.g. endothelial progenitor cells or T cell ageing including a stress model; and (2) investigation of cellular material and plasma directly from tightly defined proband subsets of different ages using proteomic, transcriptomic and miR arrays. The first approach provided longitudinal insight into endothelial progenitor and T cell ageing. This review describes the strategy and use of hypothesis-free, data-intensive approaches to explore cellular proteins, miR, mRNA and plasma proteins as healthy ageing biomarkers, using ageing models and directly within samples from adults of different ages. It considers the challenges associated with integrating multiple models and pilot studies as rational biomarkers for a large cohort study. From this approach, a number of high-throughput methods were developed to evaluate novel, putative biomarkers of ageing in the MARK-AGE cohort.
Abstract:
The point of departure for this study was a recognition of the differences in suppliers' and acquirers' judgements of the value of technology when transferred between the two, and the significant impacts of technology valuation on the establishment of technology partnerships and the effectiveness of technology collaborations. The perceptions, transfer strategies and objectives, perceived benefits and assessed technology contributions, as well as the associated costs and risks of both suppliers and acquirers, were seen to be at the core of these differences. This study hypothesised that the capability embodied in technology to yield future returns makes technology valuation distinct from the process of valuing manufacturing products. The study has hence gone beyond the dimensions of cost calculation and price determination that have been discussed in the existing literature, by taking a broader view of how to achieve and share future added value from transferred technology. The core of technology valuation was argued to be the evaluation of the 'quality' of the capability (technology) in generating future value and the effectiveness of the transfer arrangement for the best use of such a capability. A dynamic approach comprising future value generation and realisation within the context of specific forms of collaboration was therefore adopted. The research investigations focused on the UK and China machine tool industries, where there are many technology transfer activities and the value issue has already been recognised in practice. Data were gathered from three groups: machine tool manufacturing technology suppliers in the UK and acquirers in China, and machine tool users in China. Data collection methods included questionnaire surveys and case studies within all three groups. The study focused on identifying and examining the major factors affecting value, as well as their interactive effects on technology valuation, from both the supplier's and acquirer's points of view. The survey results showed the perceptions and the assessments of the owner's value and transfer value from the supplier's and acquirer's points of view respectively. Benefits, costs and risks related to the technology transfer were the major factors affecting the value of technology. The impacts of transfer payment on the value of technology, through the sharing of financial benefits, costs and risks between partners, were assessed. A close relationship between technology valuation and transfer arrangements was established, in which technical requirements and strategic implications were considered. The case studies reflected the research propositions and revealed that benefits, costs and risks in the financial, technical and strategic dimensions interacted in the process of technology valuation within the context of technology collaboration. Further to the assessment of factors affecting value, a technology valuation framework was developed which suggests that technology attributes for the enhancement of contributory factors, and their contributions to the realisation of transfer objectives, need to be measured and compared with the associated costs and risks. The study concluded that technology valuation is a dynamic process involving the generation and sharing of future value and the interactions between financial, technical and strategic achievements.
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal background motive behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional mechanisms of the brain are described using the recent mathematical formalism of dynamical systems in complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which refers to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, which are designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow for indirect assessment of synchronisation in the local network from a single time series. This approach is useful for inferring the coupling within a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies. They are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterise the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects with the aim of distinguishing pathological from physiological patterns of connectivity.
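For illustration, the sketch below implements the classical linear form of Granger causality (comparing prediction-error variances with and without the other signal's past) on simulated data; the thesis describes a machine learning extension of this concept, so this is only a baseline sketch.

```python
# Linear Granger causality: does the past of x improve prediction of y?
import numpy as np

def granger_ratio(x, y, order=2):
    """log(var of y's own-history residuals / var of residuals when x's past is added).
    Values > 0 suggest x helps predict y."""
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    full = np.column_stack([own] + [x[order - k: n - k] for k in range(1, order + 1)])
    res_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(res_own.var() / res_full.var())

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(2, 1000):                 # y is driven by the past of x
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_ratio(x, y))   # clearly positive: x "Granger-causes" y
print(granger_ratio(y, x))   # near zero: no influence in the reverse direction
```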
Abstract:
A ten-stage laboratory mixer-settler has been designed, constructed and operated with efficiencies of up to 90%. The phase equilibrium data of the system acetic acid-toluene-water at different temperatures have been determined and correlated. Methods for predicting these data have been investigated, and good agreement between the experimental data and the predictions obtained by the NRTL equation has been found. Extraction processes have been analysed. A model for determining the time needed for a countercurrent stage-wise process to reach steady state has been derived; the experimental data were in reasonable agreement with this model. The discrete maximum principle has been applied to optimize the countercurrent extraction process and proved highly successful in predicting the optimum operating conditions, which were confirmed by the experimental results. Temperature proved to act as a pro-solvent for mass transfer in both directions, but the temperature profile functioned as an anti-solvent.
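For reference, the binary form of the NRTL activity-coefficient equations used for this kind of correlation can be sketched as follows; the interaction parameters shown are hypothetical, and the actual acetic acid-toluene-water correlation is a ternary fit regressed from the measured equilibrium data at each temperature.

```python
# Binary NRTL activity coefficients with hypothetical interaction parameters.
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture at mole fraction x1."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# In practice tau12 and tau21 are regressed from the measured data at each temperature.
print(nrtl_binary(0.25, tau12=1.8, tau21=0.9))
```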
Abstract:
This book is aimed primarily at microbiologists who are undertaking research and who require a basic knowledge of statistics to analyse their experimental data. Computer software employing a wide range of data analysis methods is widely available to experimental scientists. The availability of this software, however, makes it essential that investigators understand the basic principles of statistics. Statistical analysis of data can be complex, with many different methods of approach, each of which applies in a particular experimental circumstance. Hence, it is possible to apply an incorrect statistical method to data and to draw the wrong conclusions from an experiment. The purpose of this book, which has its origin in a series of articles published in the Society for Applied Microbiology journal ‘The Microbiologist’, is to present the basic logic of statistics as clearly as possible and, therefore, to dispel some of the myths that often surround the subject. The 28 ‘Statnotes’ deal with various topics that are likely to be encountered, including the nature of variables, the comparison of means of two or more groups, non-parametric statistics, analysis of variance, correlating variables, and more complex methods such as multiple linear regression and principal components analysis. In each case, the relevant statistical method is illustrated with examples drawn from experiments in microbiological research. The text incorporates a glossary of the most commonly used statistical terms and there are two appendices designed to aid the investigator in the selection of the most appropriate test.
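As a small example in the spirit of the 'Statnotes', comparing the means of two groups of bacterial colony counts with an unpaired (Welch's) t-test might look like this; the counts are invented.

```python
# Unpaired comparison of two group means on invented colony counts.
from scipy import stats

control   = [112, 98, 105, 120, 101, 99]
treatment = [85, 92, 78, 88, 95, 81]

t, p = stats.ttest_ind(control, treatment, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")   # a small p-value suggests a real difference in means
```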
Abstract:
Excessive consumption of dietary fat is acknowledged to be a widespread problem linked to a range of medical conditions. Despite this, little is known about the specific sensory appeal held by fats, and no previous published research exists concerning human perception of non-textural taste qualities in fats. This research aimed to address whether a taste component can be found in sensory perception of pure fats. It also examined whether individual differences exist in human taste responses to fat, using both aggregated data analysis methods and multidimensional scaling. Results indicated that individuals were able both to detect the primary taste qualities of sweet, salty, sour and bitter in pure processed oils and to reliably ascribe their own individually generated taste labels, suggesting that a taste component may be present in human responses to fat. Individual variation appeared to exist, both in the perception of given taste qualities and in perceived intensity and preferences. A number of factors were examined in relation to such individual differences in taste perception, including age, gender, genetic sensitivity to 6-n-propylthiouracil, body mass, dietary preferences and intake, dieting behaviours and restraint. Results revealed that, to varying extents, gender, age, sensitivity to 6-n-propylthiouracil, dietary preferences, habitual dietary intake and restraint all appeared to be related to individual variation in taste responses to fat. However, in general, these differences appeared to exist in the form of differing preferences and levels of intensity with which taste qualities detected in fat were perceived, as opposed to the perception of specific taste qualities being associated with given traits or states. Equally, each of these factors appeared to exert only a limited influence upon variation in sensory responses, and thus the potential for using taste responses to fats as a marker for issues such as over-consumption, obesity or eating disorders is at present limited.
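The multidimensional scaling mentioned above can be sketched as follows, projecting a hypothetical matrix of pairwise dissimilarities between oil samples (derived from individual taste ratings) into two dimensions with scikit-learn; the numbers are illustrative only.

```python
# Metric MDS on a hypothetical dissimilarity matrix between four oil samples.
import numpy as np
from sklearn.manifold import MDS

dissimilarities = np.array([
    [0.0, 0.4, 0.7, 0.9],
    [0.4, 0.0, 0.5, 0.8],
    [0.7, 0.5, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)   # one 2-D point per sample
print(coords)
```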
Abstract:
This paper investigates the environmental sustainability and competitiveness perceptions of small farmers in a region in northern Brazil. The main data collection instruments included a survey questionnaire and an analysis of the region's strategic plan. In total, ninety-nine goat and sheep breeding farmers were surveyed. Data analysis methods included descriptive statistics, cluster analysis, and chi-squared tests. The main results relate to the impact of education, land size, and location on the farmers' perceptions of competitiveness and environmental issues. Farmers with longer periods of education have higher perception scores about business competitiveness and environmental sustainability than those with less formal education. Farmers who are working larger land areas also have higher scores than those with smaller farms. Lastly, location can yield factors that impact on farmers' perceptions. In our study, farmers located in Angicos and Lajes had higher perception scores than Pedro Avelino and Afonso Bezerra, despite the geographical proximity of these municipalities. On the other hand, three other profile variables did not impact on farmers' perceptions, namely: family income, dairy production volume, and associative condition. The authors believe the results and insights can be extended to livestock farming in other developing countries and contribute generally to fostering effective sustainable development policies, mainly in the agribusiness sector. © 2013 Elsevier Ltd. All rights reserved.
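The kinds of analyses reported above (a chi-squared test of association and a cluster analysis of perception scores) can be sketched as follows; the contingency table and scores are invented and do not reproduce the study's data.

```python
# Chi-squared test of association plus k-means clustering on invented survey data.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

# Contingency table: rows = education level, columns = perception (low/high).
table = np.array([[30, 15],
                  [12, 42]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Cluster farmers on two perception scores (competitiveness, sustainability).
scores = np.array([[2.1, 3.0], [2.4, 2.8], [4.2, 4.5], [4.0, 4.8], [3.9, 4.1]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(labels)
```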
Abstract:
This thesis is about the discretionary role of the line manager in inspiring the work engagement of staff and their resulting innovative behaviour, examined through the lens of Social Exchange Theory (Blau, 1964) and the Job Demands-Resources theory (Bakker, Demerouti, Nachreiner & Schaufeli, 2001). The study is focused on a large British public sector organisation undergoing a major organisational shift in the way in which it operates as part of the public sector. It is often claimed that people do not leave organisations; they leave line managers (Kozlowski & Doherty, 1989). Despite the knowledge in the literature concerning the importance of the line manager in organisations (Purcell, 2003), the engagement literature in particular lacks consideration of such a fundamental figure in organisational life. Further, the understanding of the black box of managerial discretion and its relationship to employee- and organisation-related outcomes would benefit from greater exploration (Purcell, 2003; Gerhart, 2005; Scott et al., 2009). The purpose of this research is to address these gaps in relation to the innovative behaviour of employees in the public sector – behaviour that is not typically associated with the public sector (Bhatta, 2003; McGuire, Stoner & Mylona, 2008; Hughes, Moore & Kataria, 2011). The study is a CASE Award PhD thesis, requiring academic and practical elements to the research. The study is of one case organisation, focusing on one service characterised by a high level of adoption of Strategic Human Resource Management (SHRM) activities and operating in a rather unusual manner for the public sector, having private sector competition for work. The study involved a mixed methods approach to data collection. Preliminary focus groups with 45 participants were conducted, followed by an ethnographic period of five months embedded in the service conducting interviews and observations. This culminated in a quantitative survey delivered within the wider directorate to approximately 500 staff members. The study used aspects of the Grounded Theory (Glaser & Strauss, 1967) approach to analyse the data and developed results that highlight the importance of the line manager, in an area characterised by SHRM and organisational change, for engaging employees and encouraging innovative behaviour. This survey was completed on behalf of the organisation and its findings are presented in Appendix 1, in order to keep the focus of the PhD on theory development. Implications for theory and practice are discussed alongside the core finding: line managers' discretion surrounding the provision of job resources (in particular trust, autonomy and the implementation and interpretation of combined bundles of SHRM policies and procedures) influenced the exchange process by which employees responded with work engagement and innovative behaviour. Limitations of the research are those commonly attributed to cross-sectional data collection methods and those surrounding the generalisability of the qualitative findings beyond the contextual factors characterising the service area. Suggestions for future research involve addressing these limitations and further exploring the discretionary role, extending our understanding of line manager discretion.
Abstract:
Adopting a grounded theory methodology, the study describes how an event and pressure impact upon a process of deinstitutionalization and institutional change. Three case studies were theoretically sampled in relation to each other. They yielded mainly qualitative data from methods that included interviews, observations, participant observations, and document reviews. Each case consisted of a boundaried cluster of small enterprises that were not industry specific and were geographically dispersed. Overall findings describe how an event, i.e. a stimulus, causes disruption, which in turn may cause pressure. Pressure is then translated as a tension within the institutional environment, which is characterized by opposing forces that encourage institutional breakdown and institutional maintenance. Several contributions are made: Deinstitutionalization as a process is inextricable from the formation of institutions – both are needed to make sense of institutional change on a conceptual level but are also inseparable experientially in the field; stimuli are conceptually different to pressures; the historical basis of a stimulus may impact on whether pressure and institutional change occur; pressure exists in a more dynamic capacity rather than only as a catalyst; institutional breakdown is a non-linear, irregular process; ethical and survival pressures were identified as new types of pressure; institutional current, as an underpinning mechanism, influences how the tension between institutional breakdown and maintenance plays out.
Abstract:
Listening is typically the first language skill to develop in first language (L1) users and has been recognized as a basic and fundamental tool for communication. Despite the importance of listening, aural abilities are often taken for granted, and many people overlook their dependency on listening and the complexities that combine to enable this multi-faceted skill. When second language (L2) students are learning their new language, listening is crucial, as it provides access to oral input and facilitates social interaction. Yet L2 students find listening challenging, and L2 teachers often lack sufficient pedagogy to help learners develop listening abilities that they can use in and beyond the classroom. In an effort to provide a pedagogic alternative to more traditional and limited L2 listening instruction, this thesis investigated the viability of listening strategy instruction (LSI) over three semesters at a private university in Japan through a qualitative action research (AR) intervention. An LSI program was planned and implemented with six classes over the course of three AR phases. Two teachers used the LSI with 121 learners throughout the project. Following each AR phase, student and teacher perceptions of the methodology were investigated via questionnaires and interviews, which were the primary data collection methods. Secondary research methods (class observations, pre/post-semester test scores, and a research journal) supplemented the primary methods. Data were analyzed and triangulated for emerging themes related to participants’ perceptions of LSI and the viability thereof. These data showed consistent positive perceptions of LSI on the parts of both learners and teachers, although some aspects of LSI required additional refinement. This project provided insights on LSI specific to the university context in Japan and also produced principles for LSI program planning and implementation that can inform the broader L2 education community.
Abstract:
This paper draws on contributions to and discussions at a recent MRC HSRC-sponsored workshop 'Researching users' experiences of health care: the case of cancer'. We focus on the methodological and ethical challenges that currently face researchers who use self-report methods to investigate experiences of cancer and cancer care. These challenges relate to: the theoretical and conceptual underpinnings of research; participation rates and participant profiles; data collection methods (the retrospective nature of accounts, description and measurement, and data collection as intervention); social desirability considerations; relationship considerations; the experiences of contributing to research; and the synthesis and presentation of findings. We suggest that methodological research to tackle these challenges should be integrated into substantive research projects to promote the development of a strong knowledge base about experiences of cancer and cancer care.
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods, in which log-transformations are used at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both synthetic data and an electrostatic potential dataset of MHC class-I. We also propose to extend the latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation, by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
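One of the quality metrics listed above, trustworthiness, is available directly in scikit-learn and can be sketched as follows; PCA stands in here for the GTM/LTM-style projections, and the data are synthetic.

```python
# Trustworthiness checks whether neighbours in the low-dimensional plot were
# also neighbours in the original data space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))           # synthetic high-dimensional data
X_2d = PCA(n_components=2).fit_transform(X)  # any 2-D visualisation model fits here

print(trustworthiness(X, X_2d, n_neighbors=5))  # 1.0 = neighbourhoods fully preserved
```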
Abstract:
Quantitative analysis of solid-state processes from isothermal microcalorimetric data is straightforward if data for the total process have been recorded and problematic (in the more likely case) when they have not. Data are usually plotted as a function of fraction reacted (α); for calorimetric data, this requires knowledge of the total heat change (Q) upon completion of the process. Determination of Q is difficult in cases where the process is fast (initial data missing) or slow (final data missing). Here we introduce several mathematical methods that allow the direct calculation of Q by selection of data points when only partial data are present, based on analysis with the Pérez-Maqueda model. All methods in addition allow direct determination of the reaction mechanism descriptors m and n and from this the rate constant, k. The validity of the methods is tested with the use of simulated calorimetric data, and we introduce a graphical method for generating solid-state power-time data. The methods are then applied to the crystallization of indomethacin from a glass. All methods correctly recovered the total reaction enthalpy (16.6 J) and suggested that the crystallization followed an Avrami model. The rate constants for crystallization were determined to be 3.98 × 10⁻⁶, 4.13 × 10⁻⁶, and 3.98 × 10⁻⁶ s⁻¹ with methods 1, 2, and 3, respectively. © 2010 American Chemical Society.
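A rough sketch of recovering Q, k and n by fitting Avrami-model power-time data, in the spirit of the analysis above but not the authors' Pérez-Maqueda-based procedure; the "measured" data are simulated and the parameter values are illustrative.

```python
# Fit simulated calorimetric power-time data to an Avrami model:
# alpha(t) = 1 - exp(-(k t)^n), power = Q * d(alpha)/dt.
import numpy as np
from scipy.optimize import curve_fit

def avrami_power(t, Q, k, n):
    """Heat flow for Avrami kinetics."""
    return Q * n * k**n * t**(n - 1) * np.exp(-(k * t)**n)

t = np.linspace(1.0, 4e5, 2000)                        # time / s
true_Q, true_k, true_n = 16.6, 4e-6, 2.0               # J, 1/s, dimensionless
power = avrami_power(t, true_Q, true_k, true_n)
power += np.random.default_rng(0).normal(0, 1e-7, t.size)   # measurement noise

# Rough initial guesses are needed; here they are taken close to the true values.
(Q, k, n), _ = curve_fit(avrami_power, t, power, p0=(15.0, 5e-6, 1.8))
print(f"Q = {Q:.1f} J, k = {k:.2e} 1/s, n = {n:.2f}")
```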