948 results for MULTIVARIATE CALIBRATION


Relevance: 20.00%

Abstract:

This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% for gain, 0.3 mm for position and 0.6° for orientation. Practical results with a 19-channel second-order gradiometer-based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
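The adaptive reference noise cancellation described above can be sketched with a standard least-mean-squares (LMS) filter. This is an illustrative reconstruction, not the thesis's implementation: the function name `lms_reference_cancel`, the tap count and the fixed step size `mu` are assumptions, and the fixed step size stands in for the optimal time-constant progression the thesis actually proposes.

```python
import numpy as np

def lms_reference_cancel(primary, reference, mu=0.01, n_taps=4):
    """Clean a signal channel by adaptively subtracting filtered reference noise.

    primary   : measured channel = evoked signal + noise correlated with reference
    reference : noise-only reference channel
    Returns the error sequence, which doubles as the cleaned signal estimate.
    (Illustrative sketch; parameters are assumptions, not the thesis's values.)
    """
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)                              # adaptive FIR weights
    cleaned = np.array(primary, dtype=float)          # first samples pass through
    for n in range(n_taps - 1, len(cleaned)):
        x = reference[n - n_taps + 1:n + 1][::-1]     # newest reference sample first
        noise_est = w @ x                             # reference-derived noise estimate
        e = cleaned[n] - noise_est                    # error = output of the canceller
        w += 2.0 * mu * e * x                         # LMS weight update
        cleaned[n] = e
    return cleaned
```

In the same spirit, the reported 15-20% gain from reference time-derivatives corresponds to appending a differenced copy of the reference channel as extra filter inputs.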

Relevance: 20.00%

Abstract:

Optical coherence tomography (OCT) is a non-invasive three-dimensional imaging technique capable of producing high-resolution in-vivo images. OCT is approved for use in clinical trials in Japan, the USA and Europe. For OCT to be used effectively in clinical diagnosis, a method of standardisation is required to assess performance across different systems. This standardisation can be implemented using highly accurate and reproducible artefacts for calibration, both at installation and throughout the lifetime of a system. Femtosecond lasers can write highly reproducible and highly localised micro-structured calibration artefacts within transparent media. We report on the fabrication of high-quality OCT calibration artefacts in fused silica using a femtosecond laser. The calibration artefacts were written in fused silica because of its high purity and its ability to withstand high-energy femtosecond pulses. An Amplitude Systemes s-Pulse Yb:YAG femtosecond laser with an operating wavelength of 1026 nm was used to inscribe three-dimensional patterns within the highly optically transmissive substrate. Four unique artefacts have been designed to measure a wide variety of parameters, including the point spread function (PSF), modulation transfer function (MTF), sensitivity, distortion and resolution, key parameters that define the performance of an OCT system. The calibration artefacts have been characterised using an optical microscope and tested on a swept-source OCT system. The results demonstrate that the femtosecond laser inscribed artefacts have the potential to validate, quantitatively and qualitatively, the performance of any OCT system.
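As an illustration of one parameter such artefacts are designed to measure, the width of the PSF can be estimated from the imaged intensity profile of a point-like structure via its full width at half maximum. This is a generic sketch, not the authors' characterisation method; the helper name `fwhm` and the linear-interpolation approach are assumptions.

```python
import numpy as np

def fwhm(profile, dz):
    """Estimate the full width at half maximum of a sampled PSF profile.

    profile : 1-D intensity samples across a point-like calibration structure
    dz      : sample spacing (e.g. micrometres per pixel)
    (Illustrative sketch; linear interpolation between samples is an assumption.)
    """
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                      # remove the background floor
    half = p.max() / 2.0
    above = np.where(p >= half)[0]       # indices at or above half maximum
    left, right = above[0], above[-1]

    def crossing(i0, i1):
        # linear interpolation of the half-maximum crossing between two samples
        return i0 + (half - p[i0]) / (p[i1] - p[i0])

    zl = crossing(left - 1, left) if left > 0 else float(left)
    zr = crossing(right, right + 1) if right < len(p) - 1 else float(right)
    return (zr - zl) * dz
```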

Relevance: 20.00%

Abstract:

The recent expansion of clinical applications for optical coherence tomography (OCT) is driving the development of approaches for consistent image acquisition. There is a simultaneous need for time-stable, easy-to-use imaging targets for calibration and standardization of OCT devices. We present calibration targets consisting of three-dimensional structures etched into nanoparticle-embedded resin. Spherical iron oxide nanoparticles with a predominant particle diameter of 400 nm were homogeneously dispersed in a two-part polyurethane resin and allowed to harden overnight. These samples were then etched using a precision micromachining femtosecond laser with a center wavelength of 1026 nm, a 100 kHz repetition rate and a 450 fs pulse duration. A series of lines in depth were etched, varying the percentage of inscription energy and the speed of the translation stage moving the target with respect to the laser. Samples were imaged with a dual-wavelength spectral-domain OCT system, and the point-spread function of nanoparticles within the target was measured.

Relevance: 20.00%

Abstract:

Insights from the stream of research on knowledge calibration, which refers to the correspondence between accuracy and confidence in knowledge, enable a better understanding of the consequences of managers' inaccurate perceptions. This paper examines the consequences of inaccurate managerial knowledge through the lens of knowledge calibration. Specifically, the paper examines the antecedent role of miscalibration of knowledge in strategy formation. It is postulated that miscalibrated managers who overestimate external factors and display a high level of confidence in their estimates are likely to enact strategies that are relatively more evolutionary and incremental in nature, whereas miscalibrated managers who overestimate internal factors and display a high level of confidence in their estimates are likely to enact strategies that are relatively more discontinuous and disruptive in nature. Perspectives from social cognitive theory provide support for the underlying processes. The paper, in part, explains the paradox of the prevalence of inaccurate managerial perceptions alongside efficacious performance. It also advances the literature on strategy formation through the application of the construct of knowledge calibration.

Relevance: 20.00%

Abstract:

Calibration of consumer knowledge of the web refers to the correspondence between accuracy and confidence in knowledge of the web. Being well-calibrated means that a person is realistic in his or her assessment of the level of knowledge that he or she possesses. This study finds that involvement leads to better calibration and that calibration is higher for procedural knowledge and common knowledge than for declarative knowledge and specialized knowledge. Neither usage nor experience has any effect on calibration of knowledge of the web. No difference in calibration is observed between genders, but, in agreement with previous findings, this study finds that males are more confident in their knowledge of the web. The results indicate that calibration may be more a function of knowledge-specific factors and less of individual-specific factors. The study also identifies flow and frustration with the web as consequences of calibration of knowledge of the web and draws the attention of future researchers to these aspects.

Relevance: 20.00%

Abstract:

This paper presents the results of a multivariate spatial analysis of 38 vowel formant variables in the language of 402 informants from 236 cities across the contiguous United States, based on the acoustic data from the Atlas of North American English (Labov, Ash & Boberg, 2006). The results of the analysis both confirm and challenge the results of the Atlas. Most notably, while the analysis identifies patterns in the West and the Southeast similar to those of the Atlas, it finds that the Midwest and the Northeast are distinct dialect regions that are considerably stronger than the traditional Midland and Northern dialect regions identified in the Atlas. The analysis also finds evidence that a western vowel shift is actively shaping the language of the Western United States.

Relevance: 20.00%

Abstract:

The article explores the possibilities of formalizing and explaining the mechanisms that support spatial and social perspective alignment sustained over the duration of a social interaction. The basic proposed principle is that in social contexts the mechanisms for sensorimotor transformations and multisensory integration (learn to) incorporate information relative to the other actor(s), similar to the "re-calibration" of visual receptive fields in response to repeated tool use. This process aligns or merges the co-actors' spatial representations and creates a "Shared Action Space" (SAS) supporting key computations of social interactions and joint actions; for example, the remapping between the coordinate systems and frames of reference of the co-actors, including perspective taking, the sensorimotor transformations required for jointly lifting an object, and the predictions of the sensory effects of such joint action. The social re-calibration is proposed to be based on common basis function maps (BFMs) and could constitute an optimal solution to sensorimotor transformation and multisensory integration in joint action or, more generally, in social interaction contexts. However, certain situations, such as discrepant postural and viewpoint alignment and associated differences in perspectives between the co-actors, could constrain the process quite differently. We discuss how alignment is achieved in the first place, and how it is maintained over time, providing a taxonomy of various forms and mechanisms of space alignment and overlap based, for instance, on automaticity vs. control of the transformations between the two agents. Finally, we discuss the link between low-level mechanisms for the sharing of space and high-level mechanisms for the sharing of cognitive representations. © 2013 Pezzulo, Iodice, Ferraina and Kessler.

Relevance: 20.00%

Abstract:

The recent expansion of clinical applications for optical coherence tomography (OCT) is driving the development of approaches for consistent image acquisition. There is a simultaneous need for time-stable, easy-to-use imaging targets for calibration and standardization of OCT devices. We present calibration targets consisting of three-dimensional structures etched into nanoparticle-embedded resin. Spherical iron oxide nanoparticles with a predominant particle diameter of 400 nm were homogeneously dispersed in a two-part polyurethane resin and allowed to harden overnight. These samples were then etched using a precision micromachining femtosecond laser with a center wavelength of 1026 nm, a 100 kHz repetition rate and a 450 fs pulse duration. A series of lines in depth were etched, varying the percentage of inscription energy and the speed of the translation stage moving the target with respect to the laser. Samples were imaged with a dual-wavelength spectral-domain OCT system, and the point-spread function of nanoparticles within the target was measured.

Relevance: 20.00%

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 20.00%

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 20.00%

Abstract:

The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics, and it has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein. A prerequisite for T-cell recognition, therefore, is that an epitope is also a good MHC binder; T-cell epitope prediction thus overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for the definition of quantitative matrices for binding affinity prediction. We apply these methods to peptides which bind the well-studied human MHC allele HLA-A*0201. A matrix combining the results of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it was able to recognize 135 (84%) of them.
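The multiple-linear-regression half of such a scheme can be sketched as follows: each 9-mer is one-hot encoded over 20 amino acids at 9 positions, and the least-squares coefficients form an additive quantitative matrix, so a peptide's predicted affinity is the sum of its per-position contributions. This is an illustrative assumption about the setup, not the study's actual pipeline, and the discriminant-analysis component is not reproduced.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"   # standard one-letter amino acid codes

def fit_qm(peptides, affinities, length=9):
    """Fit an additive quantitative matrix by multiple linear regression.

    Each peptide is one-hot encoded (length x 20 indicator features) and
    the matrix entries are the least-squares coefficients.
    (Illustrative sketch; encoding and solver choice are assumptions.)
    """
    X = np.zeros((len(peptides), 20 * length))
    for i, pep in enumerate(peptides):
        for pos, aa in enumerate(pep):
            X[i, pos * 20 + AMINO.index(aa)] = 1.0
    coef, *_ = np.linalg.lstsq(X, np.asarray(affinities, dtype=float), rcond=None)
    return coef.reshape(length, 20)   # row = position, column = amino acid

def score(matrix, pep):
    """Predicted affinity = sum of per-position matrix contributions."""
    return sum(matrix[pos, AMINO.index(aa)] for pos, aa in enumerate(pep))
```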

Relevance: 20.00%

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity; once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms q², SEP, and NC ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r² and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
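The leave-one-out statistics quoted above can be computed generically for any fit/predict pair, using one common set of definitions: q² = 1 - PRESS/SS and SEP = sqrt(PRESS/N), where PRESS is the sum of squared leave-one-out prediction errors. This helper is an illustrative sketch, not the ISC-PLS implementation.

```python
import numpy as np

def loo_stats(X, y, fit, predict):
    """Leave-one-out cross-validation statistics q2 and SEP.

    fit(X, y)      : trains a model on all-but-one samples
    predict(m, x)  : predicts the held-out sample from the trained model
    (Illustrative sketch; definitions q2 = 1 - PRESS/SS, SEP = sqrt(PRESS/N).)
    """
    y = np.asarray(y, dtype=float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out sample i
        model = fit(X[mask], y[mask])
        preds[i] = predict(model, X[i])
    press = np.sum((y - preds) ** 2)           # predictive residual sum of squares
    q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
    sep = np.sqrt(press / len(y))              # standard error of prediction
    return q2, sep
```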

Relevance: 20.00%

Abstract:

The study examined the effect of the range of a confidence scale on consumer knowledge calibration, specifically whether a restricted-range scale (25%-100%) leads to a difference in calibration compared with a full-range scale (0%-100%) for multiple-choice questions. A quasi-experimental study using student participants (N = 434) was employed. Data were collected from two samples; in the first sample (N = 167) a full-range confidence scale was used, and in the second sample (N = 267) a restricted-range scale was used. No differences were found between the two scales on knowledge calibration. Results from studies of knowledge calibration employing restricted-range and full-range confidence scales are thus comparable. © Psychological Reports 2014.
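The calibration construct used across these knowledge-calibration studies compares mean per-question confidence with the proportion of correct answers; a positive gap indicates overconfidence. (The restricted 25%-100% scale reflects the 25% chance level of four-option items.) A minimal generic sketch, not the study's own analysis:

```python
def calibration_stats(confidences, correct):
    """Summarise confidence-accuracy correspondence for a set of questions.

    confidences : per-question confidence judgements in [0, 1]
    correct     : matching booleans, True where the answer was right
    Returns (mean confidence, accuracy, overconfidence bias).
    (Illustrative sketch of the generic over/underconfidence measure.)
    """
    n = len(confidences)
    mean_conf = sum(confidences) / n       # average stated confidence
    accuracy = sum(correct) / n            # proportion of correct answers
    return mean_conf, accuracy, mean_conf - accuracy
```

For example, a respondent averaging 60% confidence while answering 50% of items correctly carries a +0.10 overconfidence bias.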

Relevance: 20.00%

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. To select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use in characterising many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed both to support the characterisation of an OCT system from phantom images and to improve the quality of OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing may take several minutes to a few hours, making data processing a significant bottleneck. An alternative is expensive hardware-based processing such as field-programmable gate arrays (FPGAs); however, graphics processing unit (GPU) based data processing methods have recently been developed to minimise processing and rendering time. These techniques include standard-processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier-domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. The OCT-phantoms have been heavily used for the qualitative characterisation and fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using these phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
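In its simplest spectral-domain form, the standard-processing chain from raw interference data to an A-scan reduces to background subtraction, spectral apodisation and a Fourier transform. The sketch below is a CPU-side numpy illustration of those steps, not the thesis's GPU pipeline, and omits resampling and dispersion compensation; the function name and windowing choice are assumptions.

```python
import numpy as np

def spectrum_to_ascan(fringes, background=None):
    """Minimal spectral-domain OCT processing: one spectrum -> one A-scan.

    fringes    : sampled interference spectrum from the detector
    background : reference spectrum to subtract (mean is used if omitted)
    Returns the magnitude of the depth profile (non-mirrored half).
    (Illustrative sketch; real pipelines also resample to linear wavenumber
    and compensate dispersion.)
    """
    f = np.asarray(fringes, dtype=float)
    f = f - (background if background is not None else f.mean())  # remove DC term
    f = f * np.hanning(len(f))          # apodisation suppresses FFT side lobes
    depth = np.abs(np.fft.fft(f))       # Fourier transform encodes depth
    return depth[: len(f) // 2]         # discard the complex-conjugate mirror
```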

Relevance: 20.00%

Abstract:

Objective: In this study, we used a chemometrics-based method to correlate key liposomal adjuvant attributes with in-vivo immune responses based on multivariate analysis. Methods: The liposomal adjuvant composed of the cationic lipid dimethyldioctadecylammonium bromide (DDA) and trehalose 6,6-dibehenate (TDB) was modified with 1,2-distearoyl-sn-glycero-3-phosphocholine at a range of mol% ratios, and the main liposomal characteristics (liposome size and zeta potential) were measured along with their immunological performance as an adjuvant for the novel, post-exposure fusion tuberculosis vaccine Ag85B-ESAT-6-Rv2660c (H56 vaccine). Partial least squares regression analysis was applied to correlate and cluster the particle characteristics of the liposomal adjuvants with their in-vivo immunological performance (IgG, IgG1, IgG2b, spleen proliferation, IL-2, IL-5, IL-6, IL-10, IFN-γ). Key findings: Although a range of factors varied across the formulations, decreasing 1,2-distearoyl-sn-glycero-3-phosphocholine content (and the resulting change in zeta potential) formed the strongest variables in the model. Enhanced DDA and TDB content (and the resulting zeta potential) stimulated a response skewed towards cell-mediated immunity, with the model identifying correlations with IFN-γ, IL-2 and IL-6. Conclusion: This study demonstrates the application of chemometrics-based correlation and clustering, which can inform liposomal adjuvant design.
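Partial least squares regression of the kind applied here extracts latent components that maximise the covariance between a block of formulation attributes X and a block of immune responses Y. A minimal single-component NIPALS sketch, purely illustrative; the study used full PLS regression with multiple components and responses, and the function name is an assumption:

```python
import numpy as np

def pls_one_component(X, Y, max_iter=100, tol=1e-10):
    """One-component PLS via the NIPALS iteration.

    X : (n_samples, n_attributes) formulation attribute block
    Y : (n_samples, n_responses) response block
    Returns (X-weights w, X-scores t, Y-weights q) after mean-centring.
    (Illustrative sketch; a full PLS model deflates and repeats per component.)
    """
    X = np.asarray(X, dtype=float); X = X - X.mean(axis=0)
    Y = np.asarray(Y, dtype=float); Y = Y - Y.mean(axis=0)
    u = Y[:, [0]]                          # initial Y-score guess
    for _ in range(max_iter):
        w = X.T @ u; w /= np.linalg.norm(w)   # X weights
        t = X @ w                              # X scores (latent variable)
        q = Y.T @ t; q /= np.linalg.norm(q)    # Y weights
        u_new = Y @ q                          # updated Y scores
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return w, t, q
```

The signs of the loadings in `w` and `q` then indicate which attributes (e.g. lipid content, zeta potential) move together with which responses.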