928 results for corrected
Abstract:
Executive function (EF) emerges in infancy and continues to develop throughout childhood. Executive dysfunction is believed to contribute to learning and attention problems in children at school age. Children born very preterm are more prone to these problems than their full-term peers.
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption were limited to initially uniform populations and ignored spatial variations; by ignoring spatial variations, those studies were greatly simplified by the translational invariance of the lattice. Such corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion wave type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption, and that the corrected mean-field models give an improved match to the simulation data in all cases considered.
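To make the setting concrete, the sketch below simulates a minimal 1-D exclusion process with unbiased motility of the kind the corrected models describe; the lattice size, initial condition, and step counts are illustrative assumptions, not parameters from the paper.

```python
# A minimal sketch of a 1-D simple exclusion process: agents hop left or
# right only into empty sites. Under the independence assumption the
# ensemble-averaged occupancy obeys a linear diffusion equation; nearest-
# neighbour correlations make simulations deviate from that prediction.
import numpy as np

rng = np.random.default_rng(0)
L, steps = 200, 2000
lattice = np.zeros(L, dtype=int)
lattice[80:120] = 1                        # initially occupied central block

for _ in range(steps):
    for _ in range(lattice.sum()):         # one Monte Carlo step = N move attempts
        occupied = np.flatnonzero(lattice)
        i = rng.choice(occupied)
        j = i + rng.choice([-1, 1])        # unbiased target site
        if 0 <= j < L and lattice[j] == 0:
            lattice[i], lattice[j] = 0, 1  # exclusion: move only if target empty

# Averaging many such realizations yields C(x, t) for comparison with the
# traditional mean-field solution of dC/dt = D d^2C/dx^2.
print(lattice.sum(), "agents conserved")
```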
Abstract:
Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but currently suffers from inadequate incorporation of long-range dispersion, or Van der Waals (VdW), interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, yields a reasonable physisorption energy and equilibrium distance for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8, 8) nanotube bundle accurately reproduces the experimental lattice constant. For H2 inside or outside an (8, 8) SWCNT, we find binding energies respectively higher and lower than that on a graphite surface, correctly predicting the well-known curvature effect. We conclude that the VdW correction is a very effective means of augmenting DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed or do not capture the necessary range of both dispersion and bonding interactions.
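The correction described above is typically a damped pairwise -C6/r^6 sum added to the DFT total energy. The sketch below illustrates that general form; the C6 coefficients, cutoff radii r0, and damping steepness d are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of an empirical London dispersion correction: a damped
# -C6/r^6 pairwise sum added to the DFT energy. The damping function
# switches the term off at short range where DFT already describes bonding.
import numpy as np

def e_dispersion(coords, c6, r0, d=20.0):
    """Pairwise damped London term: E = -sum_ij f_damp(r_ij) * C6_ij / r_ij^6.

    coords: (n, 3) atomic positions; c6, r0: (n, n) pairwise coefficient
    and cutoff-radius matrices (illustrative placeholders).
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            f = 1.0 / (1.0 + np.exp(-d * (r / r0[i, j] - 1.0)))  # short-range damping
            e -= f * c6[i, j] / r**6
    return e
```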
Abstract:
Purpose: The goal of this work was to set out a methodology for measuring and reporting small field relative output, and to assess the application of published correction factors across a population of linear accelerators. Methods and materials: Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM at the 50% isodose level. An effective field size, defined as $FS_{eff} = \sqrt{A \cdot B}$, was calculated and is presented as a field size metric. $FS_{eff}$ was used to linearly interpolate between published Monte Carlo (MC) calculated $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ values to correct for the diode over-response in small fields. Results: The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, reporting against the effective field size showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ at the nominal and effective field sizes produced output factors that were not identical but differed by much less than the reported experimental and/or MC statistical uncertainties. Conclusions: In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
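As a concrete illustration of the correction step, the sketch below interpolates published-style $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ values at the effective field size and applies them to a measured output ratio; the tabulated numbers are placeholders, not the published MC data.

```python
# A minimal sketch of the small-field correction: compute the effective
# field size from measured FWHMs, interpolate the diode correction factor,
# and apply it to the measured output ratio. Table values are illustrative.
import numpy as np

fs_table = np.array([0.5, 0.6, 0.8, 1.0])      # field size side (cm), assumed grid
k_table  = np.array([0.96, 0.97, 0.985, 1.0])  # placeholder correction factors

def corrected_output(reading_ratio, a_cm, b_cm):
    fs_eff = np.sqrt(a_cm * b_cm)              # effective field size = sqrt(A*B)
    k = np.interp(fs_eff, fs_table, k_table)   # linear interpolation in FS_eff
    return reading_ratio * k

print(corrected_output(0.70, 0.52, 0.55))
```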
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) were collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values were corrected (N_c) for parameters such as overburden stress, borehole size, sampler type, and connecting rod length. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of the point corresponding to an N_c value, is approximated so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses a least-squares support vector machine (LSSVM), which is related to a ridge regression type of support vector machine. The second algorithm uses a relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationship between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
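Since an LSSVM is closely related to kernel ridge regression, the sketch below illustrates the general idea of fitting N_c = N_c(X, Y, Z) with scikit-learn's KernelRidge on synthetic borehole data; the kernel choice, hyperparameters, and data are assumptions for illustration only, not the paper's model.

```python
# A minimal sketch of kernel-based site characterization: regress corrected
# SPT values N_c on borehole coordinates (X, Y, Z) and predict at unsampled
# points. Kernel ridge regression stands in for the LSSVM formulation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
XYZ = rng.uniform(0, 1, size=(500, 3))             # stand-in borehole coordinates
N_c = 20 + 30 * XYZ[:, 2] + rng.normal(0, 2, 500)  # synthetic corrected SPT values

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=5.0)
model.fit(XYZ, N_c)
print(model.predict([[0.5, 0.5, 0.5]]))            # N_c estimate at a new point
```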
Abstract:
The space group of the low thermal expansion phosphates with divalent cations, belonging to the NASICON structural family, has been reassigned as R3̄ based on powder X-ray diffraction studies in the system M0.5Ti2P3O12. This implies a further ordered distribution of M2+ cations and vacancies along the hexagonal 'c' direction of the NASICON structure.
Abstract:
Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. In order to address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics", Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An Optimally Corrected Form of Acceleration Correction Algorithm within SPH-based Simulations of Solid Mechanics", submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high frequency oscillations in the computed response whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we gather further insights into the acceleration correction algorithm by exploring its application to problems in impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics. (C) 2011 Elsevier Ltd. All rights reserved.
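For context, the sketch below implements the standard Monaghan artificial viscosity term whose user-defined parameters (alpha, beta) the acceleration correction algorithm is designed to temper; this is the textbook form, not the authors' corrected scheme.

```python
# A minimal sketch of the standard Monaghan artificial viscosity Pi_ij
# inserted into the SPH momentum equation near shocks. It is active only
# for approaching particle pairs and vanishes for receding ones.
import numpy as np

def monaghan_pi(v_ij, r_ij, h, c_bar, rho_bar, alpha=1.0, beta=2.0, eps=0.01):
    """Artificial viscosity for a particle pair.

    v_ij, r_ij: relative velocity and position vectors; h: smoothing length;
    c_bar, rho_bar: pair-averaged sound speed and density;
    alpha, beta: the user-defined parameters discussed above.
    """
    vr = np.dot(v_ij, r_ij)
    if vr >= 0.0:                      # particles receding: no dissipation
        return 0.0
    mu = h * vr / (np.dot(r_ij, r_ij) + eps * h**2)
    return (-alpha * c_bar * mu + beta * mu**2) / rho_bar
```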
Abstract:
Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them base sequence-directed features such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or ab initio optimized structures, lacking the variation in geometry needed to comment on the rather unusual large roll values observed at the AU/AU base pair step in crystal structures of RNA double helices. We have generated a stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence dependent. The corresponding energy contours were constructed by several quantum chemical methods, including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems, despite the limitation that the number of atoms in a base pair step places on employing very high levels of theory. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric clash based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3′-endo sugars, and this demands a C1′-C1′ distance of about 5.4 angstrom along the chains. Adding an energy penalty term for deviation of the C1′-C1′ distance from this mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to predict a reliable energy contour for the AU/AU step. Such a distance-based penalty also improves the energy contours for the other purine-pyrimidine sequences. (c) 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.
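The distance-based penalty can be pictured as a simple harmonic term added to the quantum-chemical stacking energy, as in the sketch below; the force constant k is an illustrative assumption, not a value from the paper.

```python
# A minimal sketch of a distance-based energy penalty: stacking geometries
# whose C1'-C1' distance deviates from the backbone-imposed mean (~5.4 A)
# are penalized harmonically before contour construction.
def penalized_energy(e_stack, c1_c1_dist, d0=5.4, k=1.0):
    """E_total = E_stack + k * (d - d0)^2  (energy in kcal/mol, d in angstrom;
    k is a placeholder force constant)."""
    return e_stack + k * (c1_c1_dist - d0) ** 2

print(penalized_energy(-10.0, 6.0))  # penalty applied to a stretched backbone
```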
Abstract:
A method to weakly correct the solutions of stochastically driven nonlinear dynamical systems, herein numerically approximated through the Euler-Maruyama (EM) time-marching map, is proposed. An essential feature of the method is a change of measures that aims at rendering the EM-approximated solution measurable with respect to the filtration generated by an appropriately defined error process. Using Itô's formula and adopting a Monte Carlo (MC) setup, it is shown that the correction term may be additively applied to the realizations of the numerically integrated trajectories. Numerical evidence, presently gathered via applications of the proposed method to a few nonlinear mechanical oscillators and a semi-discrete form of the 1-D Burgers' equation, lends credence to the remarkably improved numerical accuracy of the corrected solutions even with relatively large time step sizes. (C) 2015 Elsevier Inc. All rights reserved.
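For reference, the sketch below shows the plain Euler-Maruyama time-marching map for a stochastically driven nonlinear oscillator of the kind studied; the paper's measure-change correction term would be applied additively to realizations generated this way and is not reproduced here. The oscillator and its coefficients are illustrative assumptions.

```python
# A minimal sketch of the Euler-Maruyama map for the SDE
# dX = f(X) dt + g(X) dW, here a Duffing-type oscillator written in
# first-order form: dx = v dt, dv = (-x - x^3 - c v) dt + sigma dW.
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 1e-2, 10_000
x, v = 1.0, 0.0                            # assumed initial state

for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))      # Brownian increment over dt
    x, v = x + v * dt, v + (-x - x**3 - 0.2 * v) * dt + 0.5 * dw

print(x, v)                                # one uncorrected EM realization
```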
Abstract:
Estimating the abundance of cetaceans from aerial survey data requires careful attention to survey design and analysis. Once an aerial observer perceives a marine mammal or group of marine mammals, he or she has only a few seconds to identify and enumerate the individuals sighted, determine the distance to the sighting, and record this information. In line-transect survey analyses, it is assumed that the observer has correctly identified and enumerated the group or individual. We describe methods used to test this assumption and how survey data should be adjusted to account for observer errors. Harbor porpoises (Phocoena phocoena) were censused during aerial surveys in the summer of 1997 in Southeast Alaska (9844 km of survey effort), in the summer of 1998 in the Gulf of Alaska (10,127 km), and in the summer of 1999 in the Bering Sea (7849 km). Sightings of harbor porpoise during a beluga whale (Delphinapterus leucas) survey in 1998 (1355 km) provided data on harbor porpoise abundance in Cook Inlet for the Gulf of Alaska stock. Sightings by primary observers at side windows were compared with those of an independent observer at a belly window to estimate the probability of misidentification, underestimation of group size, and the probability that porpoise on the surface at the trackline were missed (perception bias, g(0)). There were 129, 96, and 201 sightings of harbor porpoises in the three stock areas, respectively. Both g(0) and the effective strip width (the realized width of the survey track) depended on survey year, and g(0) also depended on the visibility reported by observers. Harbor porpoise abundance in 1997-99 was estimated at 11,146 animals for the Southeast Alaska stock, 31,046 animals for the Gulf of Alaska stock, and 48,515 animals for the Bering Sea stock.
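The corrections described feed into the standard line-transect abundance estimator, sketched below with purely illustrative numbers rather than the survey's values.

```python
# A minimal sketch of the standard line-transect estimator:
# N = A * n * s_bar / (2 * L * esw * g0), where esw is the effective strip
# width and g0 the trackline detection probability estimated from the
# belly-window comparisons described above.
def line_transect_abundance(area_km2, n_sightings, mean_group, L_km, esw_km, g0):
    density = n_sightings * mean_group / (2.0 * L_km * esw_km * g0)  # animals/km^2
    return area_km2 * density

# Illustrative placeholder inputs, not the published survey parameters:
print(line_transect_abundance(100_000, 129, 1.8, 9844, 0.2, 0.7))
```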
Abstract:
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with three different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus non-scatter-corrected images for any density. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
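The scatter-correction step described above amounts to subtracting the measured 2D scatter estimate from each raw projection before reconstruction, as in the minimal sketch below; the array names, shapes, and clipping floor are assumptions for illustration.

```python
# A minimal sketch of projection-domain scatter correction: subtract the
# measured 2D scatter map from each raw cone-beam projection, clipping to
# keep intensities positive before iterative reconstruction.
import numpy as np

def scatter_correct(projections, scatter_maps, floor=1e-6):
    """projections, scatter_maps: (n_angles, rows, cols) arrays."""
    corrected = projections - scatter_maps   # per-angle 2D subtraction
    return np.clip(corrected, floor, None)   # avoid non-physical negatives

# The corrected stack would then feed the iterative ordered subsets convex
# reconstruction used in the study.
```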
Abstract:
Previously, we demonstrated that alemtuzumab induction with rapamycin as sole maintenance therapy is associated with an increased incidence of humoral rejection in human kidney transplant patients. To investigate the role of rapamycin in posttransplant humoral responses after T cell depletion, fully MHC-mismatched hearts were transplanted into hCD52Tg mice, followed by alemtuzumab treatment with or without a short course of rapamycin. While untreated hCD52Tg recipients acutely rejected B6 hearts (n = 12), hCD52Tg recipients treated with alemtuzumab alone or in conjunction with rapamycin showed no acute rejection (MST > 100). However, grafts in recipients given additional rapamycin showed reduced beating quality over time and an increased incidence of vasculopathy. Furthermore, rapamycin supplementation produced increased serum donor-specific antibody (DSA) levels compared to alemtuzumab alone at postoperative days 50 and 100. Surprisingly, additional rapamycin treatment significantly reduced CD4+ CD25+ FoxP3+ Treg cell numbers during treatment, whereas ICOS+ PD-1+ CD4+ follicular helper T (Tfh) cells in the lymph nodes were significantly increased. Interestingly, CTLA4-Ig supplementation in conjunction with rapamycin corrected the rapamycin-induced accelerated posttransplant humoral response by directly modulating Tfh cells but not Treg cells. This suggests that rapamycin after T cell depletion affects Treg cells, leading to an increase in Tfh cells and DSA production that can be reversed by CTLA4-Ig.