903 results for COMPLEX METHOD
Abstract:
Computer-simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at time scales longer than a nanosecond), leading to a value of statistical complexity that slowly increases with time. We introduce a direct measure, based on the notion of statistical complexity, that describes how the trajectory explores the phase space and is independent of the particular molecular signal used as the observed time series. © 2008 The American Physical Society.
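As a rough, self-contained illustration of pattern-based complexity estimation (not the authors' implementation), the sketch below computes a Jensen-Shannon statistical complexity from Bandt-Pompe ordinal patterns of a velocity-like series; the pattern order and the white-noise input are assumptions.

```python
# A minimal sketch of pattern-based complexity estimation for a velocity
# time series: Bandt-Pompe ordinal patterns plus the Jensen-Shannon
# statistical complexity. The paper's own estimator may differ in detail,
# and the input series here is synthetic white noise.
import itertools
import numpy as np

def ordinal_distribution(x, order=4):
    """Empirical distribution of ordinal (permutation) patterns."""
    patterns = list(itertools.permutations(range(order)))
    index = {p: i for i, p in enumerate(patterns)}
    counts = np.zeros(len(patterns))
    for i in range(len(x) - order + 1):
        counts[index[tuple(np.argsort(x[i:i + order]))]] += 1
    return counts / counts.sum()

def statistical_complexity(p):
    """Jensen-Shannon complexity C = H_norm * Q_J(p, uniform)."""
    n = len(p)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    u = np.full(n, 1.0 / n)
    js = h((p + u) / 2) - h(p) / 2 - h(u) / 2
    js_max = -0.5 * ((n + 1) / n * np.log(n + 1) + np.log(n) - 2 * np.log(2 * n))
    return (h(p) / np.log(n)) * (js / js_max)

rng = np.random.default_rng(0)
velocity = rng.standard_normal(20000)          # stand-in for a simulated series
p = ordinal_distribution(velocity, order=4)
print(f"complexity = {statistical_complexity(p):.4f}")  # ~0 for white noise
```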
Abstract:
A recently proposed colour-based tracking algorithm has been shown to track objects in real circumstances [Zivkovic, Z., Krose, B. 2004. An EM-like algorithm for color-histogram-based object tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 798-803]. To improve the performance of this technique in complex scenes, in this paper we propose a new algorithm for optimally adapting the ellipse outlining the objects of interest. This paper presents a Lagrangian-based method to integrate a regularising component into the covariance matrix to be computed. Technically, we intend to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method has favourable performance in shape adaptation and object localisation.
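As a loose sketch of the underlying idea (not the paper's Lagrangian derivation), the following fragment re-estimates an outlining ellipse from colour-likelihood weights and blends the covariance with a regularising prior; the blending weight `lam`, the prior covariance, and the toy likelihood are illustrative assumptions.

```python
# A minimal sketch of regularised ellipse adaptation for colour-based
# tracking: the ellipse (mean, covariance) is re-estimated from pixel
# weights derived from a colour likelihood, and the covariance is blended
# with a regulariser. The paper's Lagrangian formulation is more involved.
import numpy as np

def adapt_ellipse(coords, weights, cov_prior, lam=0.2):
    """coords: (N,2) pixel positions; weights: (N,) colour likelihoods."""
    w = weights / weights.sum()
    mean = w @ coords                            # weighted centroid
    d = coords - mean
    cov = (w[:, None] * d).T @ d                 # weighted sample covariance
    cov_reg = (1 - lam) * cov + lam * cov_prior  # regularising component
    return mean, cov_reg

rng = np.random.default_rng(1)
coords = rng.uniform(0, 50, size=(500, 2))
weights = np.exp(-np.sum((coords - 25) ** 2, axis=1) / 100)  # toy likelihood
mean, cov = adapt_ellipse(coords, weights, cov_prior=np.eye(2) * 40)
print(mean, np.linalg.eigvalsh(cov))             # ellipse centre and axes^2
```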
Abstract:
Experiments combining different groups or factors and which use ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions; information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the sample size required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than the number of replicates. A good rule is to ensure, where possible, that sufficient replication is present to achieve 15 DF for the error term of the ANOVA testing effects of particular interest. Finally, it is important always to consider the design of the experiment, because this determines the appropriate ANOVA to use. Hence, it is necessary to be able to identify the different forms of ANOVA appropriate to different experimental designs and to recognise when a design is a split-plot or incorporates a repeated measure. If there is any doubt about which ANOVA to use in a specific circumstance, the researcher should seek advice from a statistician with experience of research in applied microbiology.
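The 15-DF guideline is easy to check for a given design. The sketch below does so for a hypothetical completely randomised two-factor factorial, using the standard error-DF formula a*b*(n-1); the design and numbers are illustrative, not from the article.

```python
# A quick check of the abstract's ~15 error DF guideline for a completely
# randomised two-factor factorial design: error DF = a*b*(n-1), where a and
# b are the numbers of factor levels and n the replicates per cell.
def error_df(levels_a, levels_b, replicates):
    return levels_a * levels_b * (replicates - 1)

for n in range(2, 6):
    df = error_df(levels_a=3, levels_b=2, replicates=n)
    flag = "ok" if df >= 15 else "low"
    print(f"3x2 factorial, n={n}: error DF = {df} ({flag})")
# n=4 already gives 18 DF, comfortably above the suggested 15.
```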
Abstract:
Quantum dots (Qdots) are fluorescent nanoparticles that have great potential as detection agents in biological applications. Their optical properties, including photostability and narrow, symmetrical emission bands with large Stokes shifts, and the potential for multiplexing of many different colours, give them significant advantages over traditionally used fluorescent dyes. Here, we report the straightforward generation of stable, covalent quantum dot-protein A/G bioconjugates that are able to bind to almost any IgG antibody and can therefore be used in many applications. An additional advantage is that the requirement for a secondary antibody is removed, simplifying experimental design. To demonstrate their use, we show their application in multiplexed western blotting. The sensitivity of Qdot conjugates is found to be superior to fluorescent dyes, and comparable to, or potentially better than, enhanced chemiluminescence. We show a true biological validation using a four-colour multiplexed western blot against a complex cell lysate background, and have significantly reduced the previously reported non-specific binding of Qdots to cellular proteins.
Abstract:
In this paper we present a novel method for emulating a stochastic (random-output) computer model and show its application to a complex rabies model. The method is evaluated in terms of both accuracy and computational efficiency on synthetic data and on the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared with a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian-process-based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and a better understanding of the complex behaviour of the rabies model.
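As a minimal illustration of the validation step (not the emulator itself), the sketch below computes the Mahalanobis error D^2 = (y - mu)^T V^{-1} (y - mu) for a set of held-out runs; the predictive mean and covariance used here are synthetic placeholders.

```python
# A minimal sketch of the Mahalanobis error used to validate emulator
# predictions against held-out model runs. For a well-calibrated emulator,
# D^2 should be close to its reference mean, the number of validation
# points. The GP predictions below are placeholders, not a fitted emulator.
import numpy as np

def mahalanobis_error(y_valid, mean_pred, cov_pred):
    resid = y_valid - mean_pred
    return resid @ np.linalg.solve(cov_pred, resid)

rng = np.random.default_rng(2)
n = 25                                   # validation runs
cov = np.eye(n) * 0.5                    # emulator predictive covariance
y = rng.multivariate_normal(np.zeros(n), cov)
d2 = mahalanobis_error(y, np.zeros(n), cov)
print(f"D^2 = {d2:.1f}, reference mean = {n}")  # well-calibrated: D^2 ~ n
```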
Abstract:
Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means for analyzing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack-tip mixed-mode problems involving partial crack closure. The crack tip core element was refined and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in a segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems, involving fine element detail, to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
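As a small, self-contained taste of the element mentioned above (not code from the package itself), this sketch evaluates the standard shape functions of the eight-node isoparametric (serendipity) quadrilateral at a point in natural coordinates; the node numbering is an assumption.

```python
# Shape functions of the eight-node isoparametric (serendipity)
# quadrilateral, evaluated at a natural-coordinate point (xi, eta).
# Corner/midside formulas are the textbook ones.
import numpy as np

# node order: 4 corners then 4 midsides
XI  = np.array([-1,  1, 1, -1,  0, 1, 0, -1])
ETA = np.array([-1, -1, 1,  1, -1, 0, 1,  0])

def shape_functions(xi, eta):
    N = np.empty(8)
    for a in range(4):                       # corner nodes
        N[a] = 0.25 * (1 + xi * XI[a]) * (1 + eta * ETA[a]) \
                    * (xi * XI[a] + eta * ETA[a] - 1)
    for a in (4, 6):                         # midside nodes with xi_a = 0
        N[a] = 0.5 * (1 - xi ** 2) * (1 + eta * ETA[a])
    for a in (5, 7):                         # midside nodes with eta_a = 0
        N[a] = 0.5 * (1 + xi * XI[a]) * (1 - eta ** 2)
    return N

N = shape_functions(0.3, -0.2)
print(N.sum())   # partition of unity: should print 1.0
```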
Abstract:
We present experimental studies and numerical modeling, based on a combination of the Bidirectional Beam Propagation Method and Finite Element Modeling, that completely describes the wavelength spectra of point-by-point femtosecond-laser-inscribed fiber Bragg gratings, showing excellent agreement with experiment. We have investigated the dependence of different spectral parameters, such as insertion loss, all dominant cladding and ghost modes, and their shape, relative to the position of the fiber Bragg grating in the core of the fiber. Our model is validated by comparing model predictions with experimental data and allows for predictive modeling of the gratings. We expand our analysis to more complicated structures, where we introduce symmetry breaking; this highlights the importance of centered gratings and how maintaining symmetry contributes to the overall spectral quality of the inscribed Bragg gratings. Finally, the numerical modeling is applied to superstructure gratings, and a comparison with experimental results reveals a capability for dealing with complex grating structures that can be designed with particular wavelength characteristics.
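The paper's bidirectional-BPM/FEM model is far richer, but as a simple point of reference the sketch below computes a uniform fibre Bragg grating reflection spectrum with the standard coupled-mode transfer-matrix result; all grating parameters are illustrative assumptions.

```python
# Uniform FBG reflection spectrum from standard coupled-mode theory
# (transfer-matrix form). This is a generic textbook model, not the
# paper's BPM/FEM model; parameters below are illustrative.
import numpy as np

def fbg_reflectivity(wl, L=5e-3, n_eff=1.445, period=535.5e-9, kappa=800.0):
    """Reflectivity of a uniform grating of length L at wavelengths wl [m]."""
    delta = 2 * np.pi * n_eff / wl - np.pi / period      # detuning
    gamma = np.sqrt(kappa ** 2 - delta ** 2 + 0j)
    t11 = np.cosh(gamma * L) - 1j * (delta / gamma) * np.sinh(gamma * L)
    t21 = -1j * (kappa / gamma) * np.sinh(gamma * L)
    return np.abs(t21 / t11) ** 2

wl = np.linspace(1547e-9, 1549e-9, 2001)
R = fbg_reflectivity(wl)
print(f"peak R = {R.max():.3f} at {wl[R.argmax()] * 1e9:.3f} nm")
# at the Bragg wavelength this reduces to the familiar tanh^2(kappa * L)
```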
Abstract:
Thirteen experiments investigated the dynamics of stream segregation. Experiments 1-6b used a similar method, whereby a same-frequency induction sequence (usually 10 repetitions of an identical pure tone) promoted segregation in a subsequent, briefer test sequence (of alternating low- and high-frequency tones). Experiments 1-2 measured streaming using a direct report of perception and a temporal-discrimination task, respectively. Creating a single deviant by altering the final inducer (e.g. by a change in level, or by replacement with silence) reduced segregation, often substantially. As the prior inducers remained unaltered, it is proposed that the single change actively reset build-up. The extent of resetting varied gradually with the size of a frequency change, once noticeable (experiments 3a-3b). By manipulating the serial position of a change, experiments 4a-4b demonstrated that resetting only occurred when the final inducer was replaced with silence, as build-up is very rapid during a same-frequency induction sequence. Therefore, the observed resetting cannot be explained by fewer inducers being presented. Experiment 5 showed that resetting caused by a single deviant did not increase when prior inducers were made unpredictable in frequency (four-semitone range). Experiments 6a-6b demonstrated that actual and perceived continuity have a similar effect on subsequent streaming judgements, promoting either integration or segregation depending on listening context. Experiment 7 found that same-frequency inducers were considerably more effective at promoting segregation than an alternating-frequency inducer, and that a trend for deviant-tone resetting was only apparent for the same-frequency case. Using temporal-order judgments, experiments 8-9 demonstrated the stream segregation of pure-tone-like percepts, evoked by sudden changes in amplitude or interaural time difference for individual components of a complex tone. Active resetting was observed when a deviant was inserted into a sequence of these percepts (experiment 10). Overall, these experiments offer new insight into the segregation-promoting effect of induction sequences, and the factors which can reset this effect.
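As a concrete illustration of the stimulus paradigm (with assumed frequencies, durations, and ramps rather than the study's actual values), the following sketch synthesises a same-frequency induction sequence followed by an alternating low-high test sequence.

```python
# Synthesises the kind of stimulus described: a same-frequency induction
# sequence (10 identical pure tones) followed by a brief test sequence of
# alternating low/high tones. Frequencies, durations and the raised-cosine
# ramps are illustrative assumptions.
import numpy as np

FS = 44100  # sample rate (Hz)

def tone(freq, dur=0.1, ramp=0.01):
    t = np.arange(int(FS * dur)) / FS
    x = np.sin(2 * np.pi * freq * t)
    n = int(FS * ramp)                       # raised-cosine on/off ramps
    env = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    x[:n] *= env
    x[-n:] *= env[::-1]
    return x

low, high = 500.0, 707.0                     # ~6 semitones apart
induction = np.concatenate([tone(low)] * 10)
test = np.concatenate([tone(f) for f in (low, high, low, high, low, high)])
sequence = np.concatenate([induction, test])
print(f"{len(sequence) / FS:.2f} s of audio")
```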
Abstract:
The DNA binding fusion protein LacI-His6-GFP, together with the conjugate PEG-IDA-Cu(II) (10 kDa), was evaluated as a dual affinity system for pUC19 plasmid extraction from an alkaline bacterial cell lysate in poly(ethylene glycol) (PEG)/dextran (DEX) aqueous two-phase systems (ATPS). In a PEG 600-DEX 40 ATPS containing 0.273 nmol of LacI fusion protein and 0.14% (w/w) of the functionalised PEG-IDA-Cu(II), more than 72% of the plasmid DNA partitioned to the PEG phase, without RNA or genomic DNA contamination as evaluated by agarose gel electrophoresis. In a second extraction stage, the elution of pDNA from the LacI binding complex proved difficult using either dextran or phosphate buffer as the second phase, though more than 75% of the overall protein was removed in both systems. A maximum recovery of approximately 27% of the pUC19 plasmid was achieved using the PEG-dextran system as a second extraction system, with 80-90% of pDNA partitioning to the bottom phase. This represents about 7.4 μg of pDNA extracted per mL of pUC19 desalted lysate.
Abstract:
Background - Lung cancer is the commonest cause of cancer in Scotland and is usually advanced at diagnosis. Median time between symptom onset and consultation is 14 weeks, so an intervention to prompt earlier presentation could support earlier diagnosis and enable curative treatment in more cases. Aim - To develop and optimise an intervention to reduce the time between onset of symptoms that might indicate lung cancer and first consultation. Design and setting - Iterative development of a complex healthcare intervention according to the MRC Framework, conducted in Northeast Scotland. Method - The study produced a complex intervention to promote early presentation of lung cancer symptoms. An expert multidisciplinary group developed the first draft of the intervention based on theory and existing evidence. This was refined following focus groups with health professionals and high-risk patients. Results - First-draft intervention components included: information communicated persuasively, demonstrations of early consultation and its benefits, behaviour change techniques, and involvement of spouses/partners. Focus groups identified patient engagement, achieving behavioural change, and conflict at the patient–general practice interface as challenges, and measures were incorporated to tackle these. Final intervention delivery included a detailed self-help manual and an extended consultation with a trained research nurse at which specific action plans were devised. Conclusion - The study has developed an intervention that appeals to patients and health professionals and has theoretical potential for benefit. It now requires evaluation.
Abstract:
Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the methods currently employed by decision-makers do not necessarily accommodate this. Levelised energy costs (LEC) are one such commonly applied measure, utilised within the energy industry to assess the viability of potential projects and inform policy. This research proposes a method for accommodating such uncertainty by enhancing the traditional discounted LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for project viability, optimal capital structure and key variable sensitivity analysis decision-making. The proposed method contributes by incorporating uncertain and approximate information into the widely utilised LEC measure and by being applicable to a wide range of energy project viability decisions. © 2013 Elsevier Ltd. All rights reserved.
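As an illustrative sketch of how fuzziness can enter the LEC calculation (not the paper's F-LEC formulation, which also models debt and equity financing), the fragment below discounts triangular fuzzy costs and energy yields and divides them with interval rules; all project figures and the discount rate are assumptions.

```python
# A fuzzified levelised energy cost sketch: annual costs and energy yields
# are triangular fuzzy numbers (low, mode, high), discounted and divided
# with the usual interval rules, giving a triangular approximation of the
# F-LEC. Project figures and the discount rate are illustrative.
import numpy as np

def pv(cashflows, rate):
    """Present value of a series of annual cashflows (year 0 first)."""
    t = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows) / (1 + rate) ** t))

def fuzzy_lec(costs, energy, rate=0.08):
    """costs/energy: (years, 3) arrays of (low, mode, high) per year."""
    c = [pv(costs[:, i], rate) for i in range(3)]
    e = [pv(energy[:, i], rate) for i in range(3)]
    # interval division for positive quantities: low/high, mode/mode, high/low
    return c[0] / e[2], c[1] / e[1], c[2] / e[0]

years = 20
costs = np.tile([0.9e6, 1.0e6, 1.2e6], (years, 1))      # GBP per year
costs[0] += 8e6                                          # capital outlay, year 0
energy = np.tile([9.0e3, 10.0e3, 10.5e3], (years, 1))    # MWh per year
low, mode, high = fuzzy_lec(costs, energy)
print(f"F-LEC ~ ({low:.0f}, {mode:.0f}, {high:.0f}) GBP/MWh")
```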
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, which can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; and an iterative self-consistent (ISC) PLS-based additive method, a recently developed extension of the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(s), I-E(d), and I-E(k). In this chapter we provide a step-by-step guide to making such predictions; in terms of reliability, the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, are freely available online at http://www.jenner.ac.uk/MHCPred.
Abstract:
MEG beamformer algorithms work by making the assumption that correlated and spatially distinct local field potentials do not develop in the human brain. Despite this assumption, images produced by such algorithms concur with those from other non-invasive and invasive estimates of brain function. In this paper we set out to develop a method that could be applied to raw MEG data to explicitly test this assumption. We show that a promax rotation of MEG channel data can be used as an approximate estimator of the number of spatially distinct correlated sources in any frequency band.
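As a toy illustration of the rotate-and-count idea on synthetic channel data, the sketch below factor-analyses a covariance matrix and rotates the loadings; varimax is used here as an orthogonal stand-in for the paper's promax rotation, and the counting threshold is an assumption.

```python
# Factor-analyse band-limited channel data and count components with
# non-trivial rotated loadings. Varimax stands in for the promax rotation
# described in the paper; the data, dimensions and threshold are synthetic.
import numpy as np

def varimax(L, tol=1e-8, max_iter=200):
    """Orthogonal varimax rotation of a loading matrix (SVD form)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return L @ R

rng = np.random.default_rng(3)
n_channels, n_samples, n_true = 30, 5000, 2
mixing = rng.standard_normal((n_channels, n_true))   # two correlated sources
sources = rng.standard_normal((n_true, n_samples))
data = mixing @ sources + 0.3 * rng.standard_normal((n_channels, n_samples))

# PCA loadings of the channel covariance, then rotate and count
evals, evecs = np.linalg.eigh(np.cov(data))
loadings = evecs[:, ::-1][:, :6] * np.sqrt(evals[::-1][:6])
rotated = varimax(loadings)
strength = (rotated ** 2).sum(axis=0)
print((strength > 0.1 * strength.max()).sum())  # estimated source count (~2)
```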
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms - q2, SEP, and NC - ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
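As a schematic illustration of an additive-method-style model (omitting the ISC reselection of binding subsequences), the sketch below fits PLS regression to one-hot position-by-residue features of synthetic peptides; the peptide length, component count, and placeholder affinities are assumptions.

```python
# An additive-method-style model: each peptide position contributes a term
# for the amino acid it carries, fitted with PLS regression on one-hot
# position features to predict log binding affinity. The ISC step is
# omitted, and the peptides/affinities are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptides):
    """One-hot encode equal-length peptides, one block per position."""
    n, L = len(peptides), len(peptides[0])
    X = np.zeros((n, L * len(AA)))
    for i, pep in enumerate(peptides):
        for j, aa in enumerate(pep):
            X[i, j * len(AA) + AA.index(aa)] = 1.0
    return X

rng = np.random.default_rng(4)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(200)]
y = rng.normal(6.0, 1.0, 200)          # stand-in for -log10(IC50) values

pls = PLSRegression(n_components=4)
pls.fit(encode(peptides), y)
print(pls.predict(encode(peptides[:3])).ravel())  # predicted affinities
```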
Abstract:
Quantitative structure–activity relationship (QSAR) analysis is a cornerstone of modern informatics disciplines. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity, based on QSAR technology, have now become a vital component of modern computational immunovaccinology. Historically, such approaches have been built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide–protein affinity, was used here to identify the sequence dependence of peptide-binding specificity for three mouse class I MHC alleles: H2–Db, H2–Kb and H2–Kk. As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online (http://www.jenner.ac.uk/MHCPred).