903 results for Low resolution brain tomography (LORETA)
Abstract:
Introduction. The dimensions of the thoracic intervertebral foramen in adolescent idiopathic scoliosis (AIS) have not previously been quantified. During posterior-approach scoliosis correction surgery, pedicle screws may occasionally breach into the foramen. A better understanding of the dimensions of the foramen may be useful in surgical planning. This study describes a reproducible method for measuring the thoracic foramen in AIS using computed tomography (CT). Methods. In 23 pre-operative female patients with Lenke type 1 AIS and right-side-convexity major curves confined to the thoracic spine, the foraminal height (FH), foraminal width (FW), pedicle to superior articular process distance (P-SAP) and cross-sectional foraminal area (FA) were measured using multiplanar reconstructed CT. Measurements were made at the entrance, midpoint and exit of the thoracic foramina from T1/T2 to T11/T12. Results were correlated with potential explanatory variables: major curve Cobb angle measured on X-ray and CT, age, weight, Lenke classification subtype, Risser grade and number of spinal levels in the major curve. Results. The FH, FW, P-SAP and FA dimensions and ratios are all significantly larger on the convexity of the major curve and maximal at or close to the apex. Mean thoracic foraminal dimensions change in a predictable manner relative to position on the major thoracic curve. There was no significant correlation between the measured foraminal dimensions or ratios and the potential explanatory variables. The average convexity-to-concavity ratios at the apex foramina for the entrance, midpoint and exit, respectively, are FH (1.50, 1.38, 1.25), FW (1.28, 1.30, 0.98), FA (2.06, 1.84, 1.32) and P-SAP (1.61, 1.47, 1.30). Conclusion. Foraminal dimensions of the thoracic spine are significantly affected by AIS. Foraminal dimensions have a predictable convexity-to-concavity ratio relative to the proximity to the major curve apex.
Surgeons should be aware of these anatomical differences during scoliosis correction surgery.
Abstract:
Study Design: Retrospective review of prospectively collected data. Objectives: To analyze intervertebral (IV) fusion after thoracoscopic anterior spinal fusion (TASF) and explore the relationship between fusion scores and key clinical variables. Summary of Background Information: TASF provides comparable correction with some advantages over posterior approaches, but mechanical complications have been reported, and their relationship to non-union and graft material is unclear. Similarly, the optimal combination of graft type and implant stiffness for effecting successful radiologic union remains undetermined. Methods: A subset of patients from a large single-center series who had TASF for progressive scoliosis underwent low-dose computed tomographic scans 2 years after surgery. The IV fusion mass in the disc space was assessed using the 4-point Sucato scale, where 1 indicates <50% and 4 indicates 100% bony fusion of the disc space. The effects of rod diameter, rod material, graft type, fusion level, and mechanical complications on fusion scores were assessed. Results: Forty-three patients with right thoracic major curves (mean age 14.9 years) participated in the study. Mean fusion scores for patient subgroups ranged from 1.0 (IV levels with rod fractures) to 2.2 (4.5-mm rod with allograft), with scores tending to decrease with increasing rod size and stiffness. Graft type (autograft vs. allograft) did not affect fusion scores. Fusion scores were highest in the middle levels of the rod construct (mean 2.52), dropping off by 20% to 30% toward the upper and lower ends of the rod. IV levels where a rod fractured had lower overall mean fusion scores than levels without a fracture. Mean total Scoliosis Research Society (SRS) questionnaire scores were 98.9 out of a possible 120, indicating a good level of patient satisfaction.
Conclusions: Results suggest that 100% radiologic fusion of the entire disc space is not necessary for successful clinical outcomes following thoracoscopic anterior selective thoracic fusion.
Abstract:
Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly. Hence detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times a number of attempts have been reported that combine low resolution cryo-EM data with higher resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure, in terms of a trace of the C alpha atoms modeled with modest accuracy. Methodology/Principal Findings: In this paper we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of C alpha atoms and the amino acid sequence, using residue type-dependent thresholds for the accessible surface areas of C alpha atoms. We extend the method further to recognize potential protein-protein interface residues. Conclusions/Significance: Our approach to identify buried and exposed residues solely from the positions of C alpha atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from C alpha positions and from all-atom models suggested that recognition of interfacial residues using C alpha atoms only corresponds better with the intuitive notion of what an interfacial residue is.
Our method should be useful in the objective analysis of structures of protein assemblies when only C alpha positions are available, as, for example, in the cases of integration of cryo-EM data with high resolution structures of the components of the assembly.
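As a rough illustration of the thresholding idea described in this abstract, the sketch below classifies residues as buried or exposed by comparing a precomputed C alpha accessible surface area against a residue-type-dependent cutoff. The threshold values and inputs are illustrative placeholders, not the values derived in the study.

```python
# Burial/exposure classification from C alpha accessible surface areas (ASA)
# using residue-type-dependent thresholds. All numbers here are hypothetical
# placeholders, not the thresholds derived in the paper.

ASA_THRESHOLD = {   # cutoff in squared angstroms per residue type (illustrative)
    "ALA": 12.0,
    "LYS": 20.0,
    "TRP": 15.0,
}

def classify_residue(res_type: str, calpha_asa: float) -> str:
    """Label a residue 'exposed' if its C alpha ASA exceeds the
    residue-type-dependent threshold, otherwise 'buried'."""
    return "exposed" if calpha_asa > ASA_THRESHOLD[res_type] else "buried"

labels = [classify_residue(t, a)
          for t, a in [("ALA", 5.0), ("LYS", 30.0), ("TRP", 14.0)]]
```

A full implementation would first compute the C alpha accessible surface areas from coordinates (e.g. with a rolling-probe algorithm) before applying the per-type cutoffs.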
Abstract:
Aims: We develop and validate tools to estimate the residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques for producing low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ℓ > 2N_side, where N_side is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy. Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
Abstract:
Background: Dengue virus, along with other members of the Flaviviridae family, has reemerged as a deadly human pathogen. Understanding the mechanistic details of these infections can be highly rewarding in developing effective antivirals. During maturation of the virus inside the host cell, the coat proteins E and M undergo conformational changes, altering the morphology of the viral coat. However, due to the low resolution of the available 3-D structures of viral assemblies, the atomic details of these changes are still elusive. Results: In the present analysis, starting from the C alpha positions of low resolution cryo-electron microscopic structures, the residue-level details of the protein-protein interaction interfaces of the dengue virus coat proteins have been predicted. By comparing the preexisting structures of the virus in different phases of its life cycle, the changes taking place in these predicted protein-protein interaction interfaces were followed as a function of the maturation process of the virus. Besides changing the current notion about the presence of only homodimers in the mature viral coat, the present analysis indicated the presence of a proline-rich motif at the protein-protein interaction interface of the coat protein. Investigating the conservation status of these seemingly functionally crucial residues across other members of the Flaviviridae family enabled dissecting common mechanisms used for infection by these viruses. Conclusions: Thus, using a computational approach, the present analysis has provided better insights into the preexisting low resolution structures of virus assemblies, findings which can be made use of in designing effective antivirals against these deadly human pathogens.
Abstract:
We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environments. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance that would have been obtained had both images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching costs from a few reference images. Experimental evaluation on challenging real-world databases, and comparison with state-of-the-art super-resolution, classifier-based and cross-modal synthesis techniques, shows the effectiveness of the proposed algorithm.
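The reference-based representation described in this abstract can be sketched as follows. A sum-of-absolute-differences cost stands in for the much more expensive stereo matching cost, and the images, reference set and gallery are toy placeholders; only the idea of matching cost vectors rather than images is taken from the abstract.

```python
import numpy as np

def matching_cost(img_a, img_b):
    # Placeholder for the (expensive) stereo matching cost:
    # a simple sum of absolute pixel differences.
    return float(np.abs(img_a - img_b).sum())

def encode(img, references):
    # Represent an image by its vector of costs against each reference image.
    return np.array([matching_cost(img, ref) for ref in references])

# Toy reference images and gallery (uniform patches for determinism).
refs = [np.zeros((4, 4)), np.ones((4, 4))]
gallery = {"alice": np.full((4, 4), 0.2), "bob": np.full((4, 4), 0.8)}
probe = gallery["alice"] + 0.01          # slightly perturbed view of alice

# Recognition compares cost vectors, not images: pick the gallery entry
# whose encoding is closest to the probe's encoding.
probe_code = encode(probe, refs)
best = min(gallery,
           key=lambda n: float(np.abs(encode(gallery[n], refs) - probe_code).sum()))
```

The cost vectors need only be computed once per gallery image, which is the source of the speed-up the abstract describes.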
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m−2 K−1, a value intermediate in the range 30–70 mW m−2 K−1 previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m−2 K−1). Another 13 mW m−2 K−1 is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m−2 K−1. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m−2 K−1, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m−2 K−1 or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but a similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first-century climate change, it would be valuable if similar analyses were carried out for other such models.
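As a quick consistency check, the budget components quoted in this abstract can be summed; the values below are taken directly from the abstract, though it does not state exactly which terms are counted in the headline ∼50 mW m−2 K−1 figure.

```python
# Material entropy production terms for HadCM3 as quoted in the abstract,
# all in mW m^-2 K^-1.
sensible_latent = 38.0   # sensible and latent heat transport
ke_dissipation = 13.0    # frictional/Reynolds-stress dissipation in the atmosphere
numerical = 0.7          # numerical entropy production in the dynamical core
ocean_mixing = 1.0       # turbulent mixing within the ocean

total = sensible_latent + ke_dissipation + numerical + ocean_mixing
# total is 52.7, broadly consistent with the quoted ~50 mW m^-2 K^-1 and well
# inside the 30-70 range reported across models.
```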
Abstract:
The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain field and a time dependence that results in a cascade in tracer scales. Here, a quantitative study of the accuracy of such simulations is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of air-mass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest "trustworthy" scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3, while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity of the TSF for the two flows is striking, and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spin-up time (2–3 days), after which wind field truncation errors take over from initialization errors. It is also shown that false detail from the proliferation of finescale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds.
The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated, and the implications for the modeling of atmospheric chemistry are briefly discussed.
Abstract:
Sting jets are transient coherent mesoscale strong wind features that can cause damaging surface wind gusts in extratropical cyclones. Currently, we have only limited knowledge of their climatological characteristics. Numerical weather prediction models require enough resolution to represent slantwise motions with horizontal scales of tens of kilometres and vertical scales of just a few hundred metres in order to capture sting jets. Hence, the climatological characteristics of sting jets and the associated extratropical cyclones cannot be determined by searching for sting jets in low-resolution datasets such as reanalyses. A diagnostic is presented and evaluated for the detection, in low-resolution datasets, of atmospheric regions from which sting jets may originate. Previous studies have shown that conditional symmetric instability (CSI) is present in all storms studied with sting jets, while other rapidly developing storms of a similar character but without CSI do not develop sting jets. Therefore, we assume that the release of CSI is needed for sting jets to develop. While this instability will not be released in a physically realistic way in low-resolution models (and hence sting jets are unlikely to occur in them), it is hypothesized that the signature of this instability, combined with other criteria that restrict analysis to moist mid-tropospheric regions in the neighbourhood of a secondary cold front, can be used to identify cyclones in which sting jets occurred in reality. The diagnostic is evaluated, and appropriate parameter thresholds are defined, by applying it to three case studies simulated at two resolutions (with CSI release resolved only in the higher-resolution simulation).
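A minimal sketch of such a threshold-combination diagnostic is given below: it flags grid points where a CSI signature (here, negative saturated moist potential vorticity) coincides with moist, mid-tropospheric air. Variable names, thresholds and pressure bounds are illustrative assumptions, not the published criteria.

```python
import numpy as np

def sting_jet_precursor_mask(saturated_mpv, rel_humidity, pressure_hpa,
                             mpv_thresh=0.0, rh_thresh=0.8):
    """Return True at points where a negative saturated moist PV (one common
    CSI indicator) coincides with moist, mid-tropospheric air. Thresholds
    are placeholders for illustration only."""
    csi = saturated_mpv < mpv_thresh                 # CSI signature
    moist = rel_humidity > rh_thresh                 # near-saturated air
    mid_trop = (pressure_hpa > 400) & (pressure_hpa < 800)
    return csi & moist & mid_trop

# Three toy grid points: only the first satisfies all criteria.
mask = sting_jet_precursor_mask(
    saturated_mpv=np.array([-0.2, 0.1, -0.3]),
    rel_humidity=np.array([0.9, 0.95, 0.5]),
    pressure_hpa=np.array([600.0, 650.0, 700.0]),
)
```

In practice the published diagnostic also restricts attention to the neighbourhood of a secondary cold front, which a gridded mask like this would need additional storm-relative criteria to capture.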
Abstract:
Brain-Computer Interfacing (BCI) has previously been demonstrated to restore patient communication, meeting with varying degrees of success. Due to the nature of the equipment traditionally used in BCI experimentation (the electroencephalograph), it is mostly confined to clinical and research environments. The required medical safety standards, the subsequent cost of equipment and its application/training times are all issues that need to be resolved if BCIs are to be taken out of the lab/clinic and delivered to the home market. The results in this paper demonstrate a system developed with a low-cost medical-grade EEG amplifier unit in conjunction with the open source BCI2000 software suite, thus constructing the cheapest per-electrode system available that meets rigorous clinical safety standards. A discussion of the future of this technology and of future work concerning this platform is also included.
Abstract:
The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structure prediction programs are now capable of generating at least low resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. In order to highlight effective site detection in low resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
Abstract:
We present an approach for dealing with coarse-resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse-scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, then the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse-resolution observations with temporally sparse fine-resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero-order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO and improves in spatial resolution if observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality.
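The aggregation bias that motivates the disaggregation approach in this abstract can be illustrated with a toy calculation: for a nonlinear response over a spatially heterogeneous field, the response of the coarse-cell mean differs from the mean of the fine-scale responses. The saturating uptake curve and the field values below are purely illustrative, not the paper's model or data.

```python
import numpy as np

def uptake(lai):
    """Hypothetical saturating carbon-uptake response to leaf area index."""
    return 1.0 - np.exp(-0.5 * lai)

fine_lai = np.array([0.2, 0.4, 3.0, 5.0])   # heterogeneous fine-scale pixels

truth = float(uptake(fine_lai).mean())    # aggregate of fine-scale responses
biased = float(uptake(fine_lai.mean()))   # response of the coarse-cell mean
# For this concave response the coarse-cell estimate exceeds the truth
# (Jensen's inequality); the sign and size of the bias depend on the
# curvature of the response and the heterogeneity of the field.
```

This is the effect the disaggregation approach is designed to remove: by combining coarse observations with sparse fine-resolution measurements, the assimilation can operate on the fine-scale field rather than on biased coarse-cell aggregates.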