55 results for Simulation-based methods
Abstract:
Although many examples exist for shared neural representations of self and other, it is unknown how such shared representations interact with the rest of the brain. Furthermore, do high-level inference-based shared mentalizing representations interact with lower level embodied/simulation-based shared representations? We used functional magnetic resonance imaging (fMRI) and a functional connectivity approach to assess these questions during high-level inference-based mentalizing. Shared mentalizing representations in ventromedial prefrontal cortex, posterior cingulate/precuneus, and temporo-parietal junction (TPJ) all exhibited identical functional connectivity patterns during mentalizing of both self and other. Connectivity patterns were distributed across low-level embodied neural systems such as the frontal operculum/ventral premotor cortex, the anterior insula, the primary sensorimotor cortex, and the presupplementary motor area. These results demonstrate that identical neural circuits implement processes involved in mentalizing of both self and other, and that the nature of such processes may be the integration of low-level embodied processes within higher level inference-based mentalizing.
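The functional connectivity approach described above is, at its core, a correlation between time series from different brain regions. A minimal seed-based sketch on synthetic data follows; the region names in the comments are placeholders for illustration, not the study's actual regions of interest or analysis pipeline:

```python
import numpy as np

def seed_connectivity(seed_ts, region_ts):
    """Pearson correlation between a seed time series and each region's
    time series: a basic seed-based functional connectivity map."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    regions = (region_ts - region_ts.mean(axis=0)) / region_ts.std(axis=0)
    return (regions * seed[:, None]).mean(axis=0)

rng = np.random.default_rng(0)
t = rng.standard_normal(200)                 # seed signal (e.g. a "TPJ" series)
regions = np.column_stack([
    t + 0.5 * rng.standard_normal(200),      # region coupled to the seed
    rng.standard_normal(200),                # uncoupled region
])
r = seed_connectivity(t, regions)
print(r)  # first value high, second near zero
```

Regions whose correlation with the seed is similar across the self and other conditions would, in this simplified picture, count as exhibiting shared connectivity patterns.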
Abstract:
The infrared spectrum of the stretching fundamentals of SiF2 has been obtained at a resolution of ≈ 0.1 cm−1 using an FTIR spectrometer. The spectrum has been analysed using computer simulation based on a coupled Hamiltonian for ν1 and ν3, giving ν1 = 855.01 cm−1 and ν3 = 870.40 cm−1. The relative magnitude and sign of the vibrational transition moments have been determined from the ζ13 Coriolis coupling.
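The coupled-Hamiltonian treatment of two interacting vibrational levels can be sketched as the diagonalization of a 2×2 effective Hamiltonian whose diagonal holds the unperturbed band origins. The coupling strength W below is an invented value for illustration only, not the constant fitted in the paper:

```python
import numpy as np

# Two interacting vibrational levels in a 2x2 effective Hamiltonian.
# nu1, nu3 are the reported band origins; W is an assumed coupling (cm^-1).
nu1, nu3, W = 855.01, 870.40, 5.0
H = np.array([[nu1, W],
              [W,   nu3]])
eigvals = np.linalg.eigvalsh(H)   # perturbed level positions, ascending
print(eigvals)                    # levels pushed apart, trace preserved
```

The eigenvalues show the characteristic level repulsion: the lower level is pushed below ν1 and the upper above ν3, while their sum (the trace) is unchanged.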
Abstract:
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model, machine-learning-based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/.
Abstract:
Most newly sequenced proteins are likely to adopt a similar structure to one which has already been experimentally determined. For this reason, the most successful approaches to protein structure prediction have been template-based methods. Such prediction methods attempt to identify and model the folds of unknown structures by aligning the target sequences to a set of representative template structures within a fold library. In this chapter, I discuss the development of template-based approaches to fold prediction, from the traditional techniques to the recent state-of-the-art methods. I also discuss the recent development of structural annotation databases, which contain models built by aligning the sequences from entire proteomes against known structures. Finally, I run through a practical step-by-step guide for aligning target sequences to known structures and contemplate the future direction of template-based structure prediction.
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post-filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post-filters for re-ranking the few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
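The clustering idea behind consensus MQAPs can be illustrated with a toy sketch: score each model by its mean structural similarity to every other model in the set, so that models resembling the consensus rank highly. The similarity measure below (agreement of intra-model Cα-Cα distance maps) is a simplified stand-in, not the actual scoring function of ModFOLDclust or ModFOLD:

```python
import numpy as np

def consensus_scores(models):
    """Clustering-style quality score: each model's mean similarity to the
    other models, using agreement of pairwise-distance maps as a toy metric."""
    dmaps = [np.linalg.norm(m[:, None] - m[None, :], axis=-1) for m in models]
    n = len(models)
    scores = np.zeros(n)
    for i in range(n):
        sims = [1.0 / (1.0 + np.abs(dmaps[i] - dmaps[j]).mean())
                for j in range(n) if j != i]
        scores[i] = np.mean(sims)
    return scores

rng = np.random.default_rng(1)
native = rng.standard_normal((30, 3)) * 5                 # hypothetical "true" fold
models = [native + 0.3 * rng.standard_normal((30, 3)) for _ in range(4)]
models.append(rng.standard_normal((30, 3)) * 5)           # one unrelated, bad model
scores = consensus_scores(models)
print(scores)   # the outlier receives the lowest consensus score
```

This also illustrates the benchmarking caveat in the conclusion: with only one or two models available, there is no consensus to exploit, which is where single-model ("true") MQAPs retain their value.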
Recent developments in genetic data analysis: what can they tell us about human demographic history?
Abstract:
Over the last decade, a number of new methods of population genetic analysis based on likelihood have been introduced. This review describes and explains the general statistical techniques that have recently been used, and discusses the underlying population genetic models. Experimental papers that use these methods to infer human demographic and phylogeographic history are reviewed. It appears that the use of likelihood has hitherto had little impact in the field of human population genetics, which is still primarily driven by more traditional approaches. However, with the current uncertainty about the effects of natural selection, population structure and ascertainment of single-nucleotide polymorphism markers, it is suggested that likelihood-based methods may have a greater impact in the future.
Abstract:
DIGE is a protein labelling and separation technique allowing quantitative proteomics of two or more samples by optical fluorescence detection of differentially labelled proteins that are electrophoretically separated on the same gel. DIGE is an alternative to quantitation by MS-based methodologies and can circumvent their analytical limitations in areas such as intact protein analysis, (linear) detection over a wide range of protein abundances and, theoretically, applications where extreme sensitivity is needed. Thus, in quantitative proteomics DIGE is usually complementary to MS-based quantitation and has some distinct advantages. This review describes the basics of DIGE and its unique properties and compares it to MS-based methods in quantitative protein expression analysis.
Abstract:
Objective: To test the hypothesis that measles vaccination was involved in the pathogenesis of autism spectrum disorders (ASD) as evidenced by signs of a persistent measles infection or abnormally persistent immune response shown by circulating measles virus or raised antibody titres in children with ASD who had been vaccinated against measles, mumps and rubella (MMR) compared with controls. Design: Case-control study, community based. Methods: A community sample of vaccinated children aged 10-12 years in the UK with ASD (n = 98) and two control groups of similar age, one with special educational needs but no ASD (n = 52) and one typically developing group (n = 90), were tested for measles virus and antibody response to measles in the serum. Results: No difference was found between cases and controls for measles antibody response. There was no dose-response relationship between autism symptoms and antibody concentrations. Measles virus nucleic acid was amplified by reverse transcriptase-PCR in peripheral blood mononuclear cells from one patient with autism and two typically developing children. There was no evidence of a differential response to measles virus or the measles component of the MMR in children with ASD, with or without regression, and controls who had either one or two doses of MMR. Only one child from the control group had clinical symptoms of possible enterocolitis. Conclusion: No association between measles vaccination and ASD was shown.
Abstract:
In situ analysis has become increasingly important for contaminated land investigation and remediation. At present, portable techniques are used mainly as scanning tools to assess the spread and magnitude of the contamination, and are an adjunct to conventional laboratory analyses. A site in Cornwall, containing naturally occurring radioactive material (NORM), provided an opportunity for Reading University PhD student Anna Kutner to compare analytical data collected in situ with data generated by laboratory-based methods. The preliminary results in this paper extend the author's poster presentation at last September's GeoSpec2010 conference held in Lancaster.
Abstract:
An algorithm for tracking multiple feature positions in a dynamic image sequence is presented. This is achieved using a combination of two trajectory-based methods, with the resulting hybrid algorithm exhibiting the advantages of both. An optimizing exchange algorithm is described which enables short feature paths to be tracked without prior knowledge of the motion being studied. The resulting partial trajectories are then used to initialize a fast predictor algorithm which is capable of rapidly tracking multiple feature paths. As this predictor algorithm becomes tuned to the feature positions being tracked, it is shown how the location of occluded or poorly detected features can be predicted. The results of applying this tracking algorithm to data obtained from real-world scenes are then presented.
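The predictor stage described above can be sketched in a few lines: extrapolate each feature from its last two positions (a constant-velocity prediction) and match each prediction to the nearest new detection. This is a toy illustration of the general idea, not the published hybrid exchange/predictor algorithm:

```python
import numpy as np

def predict_and_match(prev, curr, detections):
    """Constant-velocity predictor: extrapolate each feature from its last
    two positions, then assign each prediction to the nearest detection."""
    predicted = 2 * curr - prev                 # x + (x - x_prev)
    matches = []
    for p in predicted:
        d = np.linalg.norm(detections - p, axis=1)
        matches.append(int(d.argmin()))
    return predicted, matches

prev = np.array([[0.0, 0.0], [10.0, 0.0]])      # positions at frame t-1
curr = np.array([[1.0, 0.0], [10.0, 1.0]])      # positions at frame t
dets = np.array([[2.1, 0.0], [10.0, 2.05]])     # detections at frame t+1
pred, m = predict_and_match(prev, curr, dets)
print(pred)   # [[2, 0], [10, 2]]
print(m)      # [0, 1]
```

The occlusion handling mentioned in the abstract follows naturally from this scheme: if no detection lies within some gating distance of a prediction, the predicted position itself can be carried forward in place of a measurement.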
Abstract:
The structure of the Arctic stratospheric polar vortex in three chemistry–climate models (CCMs) taken from the CCMVal-2 intercomparison is examined using zonal mean and geometric-based methods. The geometric methods are employed by taking 2D moments of potential vorticity fields that are representative of the polar vortices in each of the models. This allows the vortex area, centroid location and ellipticity to be determined, as well as a measure of vortex filamentation. The first part of the study uses these diagnostics to examine how well the mean state, variability and extreme variability of the polar vortices are represented in CCMs compared to ERA-40 reanalysis data, and in particular for the UMUKCA-METO, NIWA-SOCOL and CCSR/NIES models. The second part of the study assesses how the vortices are predicted to change in terms of the frequency of sudden stratospheric warmings and their general structure over the period 1960–2100. In general, it is found that the vortices are climatologically too far poleward in the CCMs and produce too few large-scale filamentation events. Only a small increase is observed in the frequency of sudden stratospheric warming events from the mean of the CCMVal-2 models, but the distribution of extreme variability throughout the winter period is shown to change towards the end of the twenty-first century.
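The 2D-moment diagnostics mentioned above (area, centroid, ellipticity) can be sketched directly: threshold the field, then compute zeroth, first and second moments, with the eigenvalues of the second-moment matrix giving the major/minor axes of the equivalent ellipse. The Gaussian "vortex" below is synthetic test data, not a reanalysis field:

```python
import numpy as np

def vortex_moments(field, x, y, threshold):
    """Geometric (2D moment) diagnostics of a thresholded field:
    area, centroid, and an aspect-ratio (ellipticity) measure."""
    w = np.where(field > threshold, field, 0.0)
    X, Y = np.meshgrid(x, y)
    m00 = w.sum()
    cx, cy = (w * X).sum() / m00, (w * Y).sum() / m00
    mu20 = (w * (X - cx) ** 2).sum() / m00
    mu02 = (w * (Y - cy) ** 2).sum() / m00
    mu11 = (w * (X - cx) * (Y - cy)).sum() / m00
    # Eigenvalues of the second-moment matrix give the ellipse axes.
    half = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + half
    lam2 = (mu20 + mu02) / 2 - half
    aspect = np.sqrt(lam1 / lam2)
    area = (w > 0).sum() * (x[1] - x[0]) * (y[1] - y[0])
    return area, (cx, cy), aspect

x = np.linspace(-5, 5, 201)
y = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(x, y)
pv = np.exp(-((X - 1) ** 2 / 4 + Y ** 2))   # elongated synthetic "vortex" at x=1
area, centroid, aspect = vortex_moments(pv, x, y, 0.1)
print(centroid, aspect)                     # centroid near (1, 0), aspect near 2
```

Tracking these quantities in time is what allows the climatological displacement and ellipticity of the modelled vortices to be compared against reanalysis.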
Abstract:
Diaminofluoresceins are widely used probes for detection and intracellular localization of NO formation in cultured/isolated cells and intact tissues. The fluorinated derivative, 4-amino-5-methylamino-2′,7′-difluorofluorescein (DAF-FM), has gained increasing popularity in recent years due to its improved NO-sensitivity, pH-stability, and resistance to photo-bleaching compared to the first-generation compound, DAF-2. Detection of NO production by either reagent relies on conversion of the parent compound into a fluorescent triazole, DAF-FM-T and DAF-2-T, respectively. While this reaction is specific for NO and/or reactive nitrosating species, it is also affected by the presence of oxidants/antioxidants. Moreover, the reaction with other molecules can lead to the formation of fluorescent products other than the expected triazole. Thus additional controls and structural confirmation of the reaction products are essential. Using human red blood cells as an exemplary cellular system we here describe robust protocols for the analysis of intracellular DAF-FM-T formation using an array of fluorescence-based methods (laser-scanning fluorescence microscopy, flow cytometry and fluorimetry) and analytical separation techniques (reversed-phase HPLC and LC-MS/MS). When used in combination, these assays afford unequivocal identification of the fluorescent signal as being derived from NO and are applicable to most other cellular systems without or with only minor modifications.
Abstract:
An automated cloud band identification procedure is developed that captures the meteorology of such events over southern Africa. This “metbot” is built upon a connected component labelling method that enables blob detection in various atmospheric fields. Outgoing longwave radiation is used to flag candidate cloud band days by thresholding the data and requiring detected blobs to have sufficient latitudinal extent and exhibit positive tilt. The Laplacian operator is used on gridded reanalysis variables to highlight other features of meteorological interest. The ability of this methodology to capture the significant meteorology and rainfall of these synoptic systems is tested in a case study. Usefulness of the metbot in understanding event to event similarities of meteorological features is demonstrated, highlighting features previous studies have noted as key ingredients to cloud band development in the region. Moreover, this allows the presentation of a composite cloud band life cycle for southern Africa events. The potential of metbot to study multiscale interactions is discussed, emphasising its key strength: the ability to retain details of extreme and infrequent events. It automatically builds a database that is ideal for research questions focused on the influence of intraseasonal to interannual variability processes on synoptic events. Application of the method to convergence zone studies and atmospheric river descriptions is suggested. In conclusion, a relation-building metbot can retain details that are often lost with object-based methods but are crucial in case studies. Capturing and summarising these details may be necessary to develop deeper process-level understanding of multiscale interactions.
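The core of the "metbot" pipeline, thresholding a field and applying connected component labelling to find blobs with sufficient latitudinal extent, can be sketched with a minimal 4-connected breadth-first labeller. The outgoing longwave radiation (OLR) grid and the 220 W m⁻² threshold below are synthetic illustrative values, not the study's data or chosen threshold:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labelling via breadth-first search:
    a minimal version of blob detection on a thresholded field."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                                and mask[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = current
                            q.append((na, nb))
    return labels, current

olr = np.full((10, 10), 260.0)    # synthetic OLR field (W m^-2); rows ~ latitude
olr[1:8, 2:4] = 180.0             # deep convective band with large latitudinal extent
olr[8, 7] = 185.0                 # small isolated cold blob
mask = olr < 220.0                # threshold flags candidate cloud
labels, n = label_blobs(mask)
extents = [np.ptp(np.where(labels == k)[0]) + 1 for k in range(1, n + 1)]
print(n, extents)                 # 2 blobs; only the first spans many latitudes
```

Filtering the labelled blobs by latitudinal extent (and, in the full method, by tilt) is what turns raw thresholded pixels into candidate cloud band days.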
Abstract:
Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified 'M' tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
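The simulation-based retrieval of critical values works by simulating the test statistic many times under the null hypothesis and reading the desired quantile off the resulting distribution. The sketch below does this for a plain (unbounded, no-drift) Dickey–Fuller statistic for simplicity; in the paper's setting the simulated process would additionally respect the estimated bounds:

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic for the regression dy_t = rho * y_{t-1} + e_t
    (no constant, no trend)."""
    ylag, dy = y[:-1], np.diff(y)
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return rho / se

def simulated_critical_value(T=200, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo critical value: simulate the statistic under the
    unit-root null and take the alpha-quantile of its distribution."""
    rng = np.random.default_rng(seed)
    stats = [df_tstat(np.cumsum(rng.standard_normal(T))) for _ in range(reps)]
    return np.quantile(stats, alpha)

cv = simulated_critical_value()
print(cv)   # close to the tabulated 5% Dickey-Fuller value of about -1.95
```

Replacing the random-walk generator with one that reflects at the series' bounds is what lets the same quantile machinery deliver critical values that are valid for bounded series.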
Abstract:
A great number of studies on wind conditions in passages between slab-type buildings have been conducted in the past. However, wind conditions under different building structures and configurations remain unclear, and existing studies still cannot provide guidance on urban planning and design, owing to the complexity of buildings and aerodynamics. The aim of this paper is to provide more insight into the mechanisms governing wind conditions in passages. In this paper, a simplified passage model with non-parallel buildings is developed on the basis of the wind tunnel experiments conducted by Blocken et al. (2008). Numerical simulation based on CFD is employed for a detailed investigation of the wind environment in passages between two long, narrow buildings with different orientations, and model validation is performed by comparing numerical results with the corresponding wind tunnel measurements.