965 results for "Error in substance"
Abstract:
The concentrations of chironomid remains in lake sediments vary widely and, therefore, chironomid stratigraphies often include samples with a low number of counts. The effect of low count sums on reconstructed temperatures is thus an important issue when applying chironomid-temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface-sediment samples with a high number of specimens. Subsequently, a chironomid-temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low-count samples may consistently infer too low or too high temperatures and, therefore, produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy to produce samples of a more reliable size, it is possible to assess whether low counts cause a systematic error in inferred temperatures.
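The subsampling experiment described above can be sketched as a small Monte Carlo routine. The assemblage and the one-line "inference model" below are purely hypothetical stand-ins for the real chironomid-temperature transfer function; the point is only to show variability growing as the count sum shrinks:

```python
import random
import statistics

def subsample_variability(assemblage, count_sums, inference, n_trials=1000, seed=42):
    """Simulate low count sums by randomly drawing subsets of head capsules
    from a large sample, then applying a temperature inference function."""
    rng = random.Random(seed)
    results = {}
    for n in count_sums:
        temps = []
        for _ in range(n_trials):
            subset = rng.sample(assemblage, n)  # pick n capsules without replacement
            temps.append(inference(subset))
        results[n] = (statistics.mean(temps), statistics.stdev(temps))
    return results

# Hypothetical assemblage: each capsule tagged with the temperature optimum of
# its taxon; the toy "inference model" is just the mean optimum in the subset.
assemblage = [10.0] * 60 + [14.0] * 80 + [18.0] * 60   # 200 capsules, 3 taxa
toy_model = lambda s: sum(s) / len(s)
for n, (mean_t, sd_t) in subsample_variability(assemblage, [25, 50, 100], toy_model).items():
    print(n, round(mean_t, 2), round(sd_t, 2))
```

With this setup, the standard deviation of inferred temperatures shrinks as the count sum grows, mirroring the stabilization the abstract reports above ~50 specimens.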
Abstract:
The goal of this study was to investigate the properties of human acid α-glucosidase with respect to: (i) the molecular heterogeneity of the enzyme and (ii) the synthesis, post-translational modification, and transport of acid α-glucosidase in human fibroblasts. The initial phase of these investigations involved the purification of acid α-glucosidase from human liver. Human hepatic acid α-glucosidase was characterized by isoelectric focusing and native and sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Four distinct charge forms of hepatic acid α-glucosidase were separated by chromatofocusing and characterized individually. Charge heterogeneity was demonstrated to result from differences in the polypeptide components of each charge form. The second aspect of this research focused on the biosynthesis and the intracellular processing and transport of acid α-glucosidase in human fibroblasts. These experiments were accomplished by immunoprecipitation of the biosynthetic intermediates of acid α-glucosidase from radioactively labeled fibroblasts with polyclonal and monoclonal antibodies raised against human hepatic acid α-glucosidase. The immunoprecipitated biosynthetic forms of acid α-glucosidase were analyzed by SDS-PAGE and autoradiography. The pulse-chase experiments demonstrated the existence of several transient, high-molecular-weight precursors of acid α-glucosidase. These precursors were demonstrated to be intermediates of acid α-glucosidase at different stages of transport and processing in the Golgi apparatus. Other experiments were performed to examine the role of co-translational glycosylation of acid α-glucosidase in the transport and processing of precursors of this enzyme. A specific immunological assay for detecting acid α-glucosidase was developed using the monoclonal antibodies described above.
This method was modified to increase the sensitivity of the assay by utilizing the biotin-avidin amplification system. The modified method was demonstrated to be more sensitive for detecting human acid α-glucosidase than the currently used biochemical assay for acid α-glucosidase activity. It was also demonstrated that the biotin-avidin immunoassay could discriminate between normal and acid α-glucosidase-deficient fibroblasts, thus providing an alternative approach to detecting this inborn error of metabolism. (Abstract shortened with permission of author.)
Abstract:
Radiation therapy for patients with intact cervical cancer is frequently delivered using primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, the prescription and delivery must account for previously delivered dose. This project aims to address the major issues of this complex process in order to improve treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given prior to the completion of ICBT, assumptions must be made about the dose to positive lymph nodes from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing dose to the normal tissues during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which often represents only the worst-case scenario. However, the worst case is often an exaggeration when highly conformal therapy methods such as intensity modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method.
The mean errors in the deformable image registration results over all voxels within the bladder and rectum were 5 mm and 6 mm, respectively. Finally, the third aim explored the potential use of proton therapy to reduce normal tissue dose. A major physical advantage of protons over photons is that protons stop after delivering dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determining population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]), with additional 0.8 cm and 0.9 cm margins, respectively. For external iliac nodes, lateral and PA beams required additional 0.4 cm and 0.9 cm margins, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
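The contrast between the traditional max-point summation and the volumetric summation described in the second aim can be shown in a few lines. The arrays below are a hypothetical 1-D stand-in for dose grids already mapped to a common frame (the deformable registration step itself is not shown):

```python
import numpy as np

def max_point_sum(dose_a, dose_b):
    """Traditional summation: add the maximum point dose from each modality.
    A worst-case estimate; the two maxima may lie in different voxels."""
    return dose_a.max() + dose_b.max()

def volumetric_sum(dose_a, dose_b_registered):
    """Voxel-by-voxel summation, assuming the second dose grid has been
    deformably registered onto the first (registration not shown)."""
    return dose_a + dose_b_registered

# Toy 1-D "organ": EBRT hot spot on the left, ICBT hot spot on the right.
ebrt = np.array([45.0, 40.0, 30.0, 20.0])
icbt = np.array([5.0, 10.0, 20.0, 35.0])
total = volumetric_sum(ebrt, icbt)
print(total.max())                 # true voxel-wise maximum: 55.0
print(max_point_sum(ebrt, icbt))   # worst-case estimate: 80.0
```

Because the hot spots of the two modalities fall in different voxels, the max-point method overstates the combined maximum, which is exactly the exaggeration the abstract attributes to it under conformal techniques such as IMRT.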
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the latter, transforming the n-cycle raw data into n−1 cycles of difference data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections, namely the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, including threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error of subtracting an unknown background and is thus theoretically more accurate and reliable.
This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
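A minimal sketch of the taking-difference idea, assuming the usual exponential-phase model F_n = F0·E^n plus a constant background B; the synthetic data and constants below are illustrative, not taken from the study:

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression for raw qPCR data.

    If F_n = F0 * E**n + B with constant background B, the consecutive
    difference D_n = F_{n+1} - F_n = F0 * (E - 1) * E**n no longer contains B,
    so no background subtraction is needed before log-linear fitting."""
    f = np.asarray(fluorescence, dtype=float)
    d = np.diff(f)                               # n raw cycles -> n-1 differences
    cycles = np.arange(len(d))
    slope, intercept = np.polyfit(cycles, np.log(d), 1)
    efficiency = np.exp(slope)                   # per-cycle amplification efficiency E
    f0 = np.exp(intercept) / (efficiency - 1.0)  # initial signal, proportional to copies
    return efficiency, f0

# Synthetic exponential-phase data with an unknown constant background.
E_true, F0_true, background = 1.9, 2.0, 100.0
cycles = np.arange(12)
raw = F0_true * E_true**cycles + background
E_est, F0_est = taking_difference_fit(raw)
print(round(E_est, 3), round(F0_est, 3))   # recovers 1.9 and 2.0
```

On noise-free data the background term cancels exactly, so the fit recovers both the efficiency and the initial amount without ever estimating B.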
Abstract:
The rate at which hydrothermal precipitates accumulate, as measured by the accumulation rate of manganese, can be used to identify periods of anomalous hydrothermal activity in the past. From a preliminary study of Sites 597 and 598, four periods prior to 6 Ma of anomalously high hydrothermal activity have been identified: 8.5 to 10.5 Ma, 12 to 16 Ma, 17 to 18 Ma, and 23 to 27 Ma. The 18-Ma anomaly is the largest and is associated with the jump in spreading from the fossil Mendoza Ridge to the East Pacific Rise, whereas the 23-to-27-Ma anomaly is correlated with the birth of the Galapagos Spreading Center and the resultant ridge reorganization. The 12-to-16-Ma and 8.5-to-10.5-Ma anomalies are correlated with periods of anomalously high volcanism around the rim of the Pacific Basin and may be related to other periods of ridge reorganization along the East Pacific Rise. There is no apparent correlation between periods of fast spreading at 19°S and periods of high hydrothermal activity. We thus suggest that periods when hydrothermal activity and crustal alteration at mid-ocean ridges are the most pronounced may be periods of large-scale ridge reorganization.
Abstract:
We present a high-resolution magnetostratigraphy and relative paleointensity (RPI) record derived from the upper 85 meters of IODP Site U1336, an equatorial Pacific early to middle Miocene succession recovered during Expedition 320/321. The magnetostratigraphy is well resolved, with reversals typically located to within a few centimeters, resulting in a well-constrained age model. The lowest normal polarity interval, from 85 to 74.87 meters, is interpreted as the upper part of Chron C6n (18.614-19.599 Ma). Another 33 magnetozones occur from 74.87 to 0.85 m, which are interpreted to represent the continuous sequence of chrons from Chron C5Er (18.431-18.614 Ma) up to the top of Chron C5An.1n (12.014 Ma). We identify three possible new subchrons within Chrons C5Cn.1n, C5Bn.1r, and C5ABn. Sedimentation rates vary from about 7 to 15 m/Myr with a mean of about 10 m/Myr. We observe rapid, apparent changes in the sedimentation rate at geomagnetic reversals between ~16 and 19 Ma that indicate a calibration error in the geomagnetic polarity timescale (ATNTS2004). The remanence is carried mainly by non-interacting particles of fine-grained magnetite, which have FORC distributions characteristic of biogenic magnetite. Given the relative homogeneity of the remanence carriers throughout the 85-m-thick succession and the quality with which the remanence is recorded, we have constructed an RPI record that provides new insights into middle Miocene geomagnetic field behavior. The RPI record indicates a gradual decline in field strength between 18.5 Ma and 14.5 Ma, and no discernible link between RPI and either chron duration or polarity state.
Abstract:
Peridotites (diopside-bearing harzburgites) found at 13°N on the Mid-Atlantic Ridge fall into two compositional groups. Peridotites P1 are plagioclase-free rocks with minerals of uniform composition and Ca-pyroxene strongly depleted in highly incompatible elements. Peridotites P2 bear evidence of interaction with basic melt: mafic veinlets; wide variations in mineral composition; enrichment of minerals in highly incompatible elements (Na, Zr, and LREE); enrichment of minerals in moderately incompatible elements (Ti, Y, and HREE) from the P1 level to abundances 4-10 times higher toward the contacts with mafic aggregates; and exotic mineral assemblages Cr-spinel + rutile and Cr-spinel + ilmenite in peridotite and pentlandite + rutile in mafic veinlets. Anomalous incompatible-element enrichment of minerals from peridotites P2 occurred at the spinel-plagioclase facies boundary, which corresponds to a pressure of about 0.8-0.9 GPa. Temperature and oxygen fugacity were estimated from spinel-orthopyroxene-olivine equilibria. Peridotites P1 with uniform mineral composition record a temperature of last complete recrystallization of 940-1050°C and an oxygen fugacity at the FMQ buffer within the calculation error. In peridotites P2, local assemblages have different compositions of coexisting minerals, which reflects repeated partial recrystallization during heating to magmatic temperatures (above 1200°C) and subsequent reequilibration at temperatures decreasing to 910°C and oxygen fugacity significantly higher than the FMQ buffer (delta log fO2 = 1.3-1.9). Mafic veins are considered to be a crystallization product of basic melt enriched in Mg and Ni via interaction with peridotite. The geochemical type of melt reconstructed from the equilibrium with Ca-pyroxene is defined as T-MORB, with (La/Sm)_N ~ 1.6 and (Ce/Yb)_N ~ 2.3, which is consistent with the compositional variations of modern basaltic lavas in this segment of the Mid-Atlantic Ridge, including new data on quenched basaltic glasses.
Abstract:
We present modern B/Ca core-top calibrations for the epifaunal benthic foraminifer Nuttallides umbonifera and the infaunal Oridorsalis umbonatus to test whether B/Ca values in these species can be used for the reconstruction of paleo-Δ[CO₃²⁻]. O. umbonatus originated in the Late Cretaceous and remains extant, whereas N. umbonifera originated in the Eocene and is the closest extant relative of Nuttallides truempyi, which ranges from the Late Cretaceous through the Eocene. We measured B/Ca in both species in 35 Holocene sediment samples from the Atlantic, Pacific and Southern Oceans. B/Ca values in epifaunal N. umbonifera (~85-175 µmol/mol) are consistently lower than values reported for epifaunal Cibicidoides (Cibicides) wuellerstorfi (130-250 µmol/mol), though the sensitivity of B/Ca to Δ[CO₃²⁻] in N. umbonifera (1.23 ± 0.15) is similar to that in C. wuellerstorfi (1.14 ± 0.048). In addition, we show that B/Ca values of paired N. umbonifera and its extinct ancestor, N. truempyi, from Eocene cores are indistinguishable within error. In contrast, both the B/Ca values (35-85 µmol/mol) and the Δ[CO₃²⁻] sensitivity (0.29 ± 0.20) of core-top O. umbonatus are considerably lower (as in other infaunal species), and this offset extends into the Paleocene. Thus the B/Ca of N. umbonifera and its ancestor can be used to reconstruct bottom-water Δ[CO₃²⁻], whereas O. umbonatus B/Ca appears to be buffered by porewater [CO₃²⁻] and suited for constraining long-term drift in seawater B/Ca.
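A core-top calibration of this kind amounts to a linear regression of B/Ca against bottom-water Δ[CO₃²⁻], whose slope is the quoted "sensitivity". A sketch with hypothetical, noise-free numbers chosen only to match the ranges above:

```python
import numpy as np

def calibrate_sensitivity(delta_co3, b_ca):
    """Fit B/Ca = a + s * delta_co3; the slope s is the 'sensitivity'
    quoted per species (e.g. ~1.23 for N. umbonifera)."""
    s, a = np.polyfit(delta_co3, b_ca, 1)
    return s, a

def reconstruct_delta_co3(b_ca, s, a):
    """Invert the calibration to estimate paleo-delta_co3 from measured B/Ca."""
    return (np.asarray(b_ca) - a) / s

# Hypothetical core-top data consistent with the ranges in the abstract.
delta = np.array([-10.0, 0.0, 15.0, 30.0, 45.0])   # Δ[CO₃²⁻], µmol/kg
bca = 110.0 + 1.23 * delta                          # B/Ca, µmol/mol, noise-free
s, a = calibrate_sensitivity(delta, bca)
print(round(s, 2))   # slope recovers the assumed sensitivity
```

Once calibrated on core tops, the same line is inverted downcore: a measured fossil B/Ca value is converted back to a paleo-Δ[CO₃²⁻] estimate.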
Abstract:
Drilling at Sites 534 and 603 of the Deep Sea Drilling Project recovered thick sections of Berriasian through Aptian white limestones to dark gray marls, interbedded with claystone and clastic turbidites. Progressive thermal demagnetization removed a normal-polarity overprint carried by goethite and/or pyrrhotite. The resulting characteristic magnetization is carried predominantly by magnetite. Directions and reliability of the characteristic magnetization of each sample were computed using least-squares line-fits of magnetization vectors. The corrected true mean inclinations of the sites suggest that the western North Atlantic underwent approximately 6° of steady southward motion between the Berriasian and Aptian stages. The magnetic polarity patterns of the two sites, when plotted on stratigraphic columns of the pelagic sediments without turbidite beds, display a fairly consistent magnetostratigraphy through most of the Hauterivian-Barremian interval, using dinoflagellate and nannofossil events and facies changes in pelagic sediment as controls on the correlations. The composite magnetostratigraphy appears to include most of the features of the M-sequence block model of magnetic anomalies from M1 to M10N (Barremian-Hauterivian) and from M16 to M23 (Berriasian-Tithonian). The Valanginian magnetostratigraphy of the sites does not exhibit reversed-polarity intervals corresponding to M11 to M13 of the M-sequence model; this may be the result of poor magnetization, of a major unrecognized hiatus in the early to middle Valanginian in the western North Atlantic, or of an error in the standard block model. Based on these tentative polarity-zone correlations, the Hauterivian/Barremian boundary occurs in or near the reversed-polarity Chron M7 or M5, depending upon whether the dinoflagellate or nannofossil zonation, respectively, is used; the Valanginian/Hauterivian boundary, as defined by the dinoflagellate zonation, is near the reversed-polarity Chron M10N.
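The least-squares line-fits mentioned above are conventionally performed as a principal-component fit to the demagnetization vectors (in the style of Kirschvink, 1980), with the off-axis scatter giving a reliability measure. A sketch with synthetic demagnetization steps (the data and noise level are illustrative):

```python
import numpy as np

def line_fit_direction(vectors):
    """Least-squares line fit to demagnetization vectors, sketched as a
    principal-component analysis. Returns the unit direction of the best-fit
    line and a maximum-angular-deviation (MAD) style reliability estimate."""
    v = np.asarray(vectors, dtype=float)
    centered = v - v.mean(axis=0)
    # SVD: the first right-singular vector spans the best-fit line.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Angle of scatter off the line relative to scatter along it.
    mad = np.degrees(np.arctan(np.sqrt((s[1]**2 + s[2]**2) / s[0]**2)))
    return direction, mad

# Hypothetical demag steps decaying along a single direction, plus small noise.
rng = np.random.default_rng(0)
true_dir = np.array([0.6, 0.6, 0.52915])   # approximately unit length
steps = np.outer(np.linspace(10, 1, 8), true_dir) + rng.normal(0, 0.05, (8, 3))
d, mad = line_fit_direction(steps)
print(mad < 5.0)   # a tight fit gives a small MAD
```

A low MAD marks a well-defined characteristic direction; samples whose vectors do not decay along a single line yield large MAD values and would be down-weighted or rejected.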
Abstract:
Millennial-scale dry events in the Northern Hemisphere monsoon regions during the last glacial period are commonly attributed to southward shifts of the Intertropical Convergence Zone (ITCZ) associated with an intensification of the northeasterly (NE) trade wind system during intervals of reduced Atlantic meridional overturning circulation (AMOC). Using high-resolution last-deglaciation pollen records from the continental slope off Senegal, we show that one of the longest and most extreme droughts in western Sahel history, which occurred during North Atlantic Heinrich Stadial 1 (HS1), displayed a succession of three major phases. These phases progressed from an interval of maximum representation of Saharan pollen elements between ~19 and 17.4 kyr BP, indicating the onset of aridity and intensified NE trade winds, through a millennial interlude of reduced input of Saharan pollen and increased input of Sahelian pollen, to a final phase between ~16.2 and 15 kyr BP characterized by a second maximum of Saharan pollen abundances. This change in the pollen assemblage indicates a mid-HS1 interlude of NE trade wind relaxation, occurring between two distinct trade wind maxima, along with an intensified mid-tropospheric African Easterly Jet (AEJ), indicating a substantial change in West African atmospheric processes. The pollen data thus suggest that although the NE trades had weakened, the Sahel drought remained severe during this interval. Therefore, a simple strengthening of the trade winds and a southward shift of the West African monsoon trough alone cannot fully explain millennial-scale Sahel droughts during periods of AMOC weakening. Instead, we suggest that an intensification of the AEJ is needed to explain the persistence of the drought during HS1.
Simulations with the Community Climate System Model indicate that an intensified AEJ during periods of reduced AMOC affected the North African climate by enhancing moisture divergence over the West African realm, thereby extending the Sahel drought for about 4000 years.
Abstract:
Greenland ice core records indicate that the last deglaciation (~7-21 ka) was punctuated by numerous abrupt climate reversals involving temperature changes of up to 5°C-10°C within decades. However, the cause behind many of these events is uncertain. A likely candidate may have been the input of deglacial meltwater, from the Laurentide ice sheet (LIS), to the high-latitude North Atlantic, which disrupted ocean circulation and triggered cooling. Yet direct evidence of meltwater input for many of these events has so far remained undetected. In this study, we use the geochemistry (paired Mg/Ca–δ18O) of planktonic foraminifera from a sediment core south of Iceland to reconstruct the input of freshwater to the northern North Atlantic during abrupt deglacial climate change. Our record can be placed on the same timescale as ice cores and therefore provides a direct comparison between the timing of freshwater input and climate variability. Meltwater events coincide with the onset of numerous cold intervals, including the Older Dryas (14.0 ka), two events during the Allerød (at ~13.1 and 13.6 ka), the Younger Dryas (12.9 ka), and the 8.2 ka event, supporting a causal link between these abrupt climate changes and meltwater input. During the Bølling-Allerød warm interval, we find that periods of warming are associated with an increased meltwater flux to the northern North Atlantic, which in turn induces abrupt cooling, a cessation in meltwater input, and eventual climate recovery. This implies that feedback between climate and meltwater input produced a highly variable climate. A comparison with published data sets suggests that this feedback likely included fluctuations in the southern margin of the LIS causing rerouting of LIS meltwater between southern and eastern drainage outlets, as proposed by Clark et al. (2001, doi:10.1126/science.1062517).
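Paired Mg/Ca–δ18O measurements yield a seawater δ18O estimate, and hence a meltwater signal, in two steps: temperature from the Mg/Ca ratio, then inversion of a palaeotemperature equation using the measured calcite δ18O. The calibration constants below are one commonly used set, assumed here purely for illustration and not taken from this study:

```python
import math

def temp_from_mgca(mgca, A=0.09, B=0.38):
    """Mg/Ca palaeothermometry, assuming Mg/Ca = B * exp(A * T);
    the constants are an illustrative planktonic calibration."""
    return math.log(mgca / B) / A

def d18o_seawater(d18o_calcite, temp_c):
    """Rearranged linear palaeotemperature equation T = 16.5 - 4.8*(dc - dw):
    a drop in d18o_sw at constant temperature signals added meltwater."""
    return d18o_calcite + (temp_c - 16.5) / 4.8

# Hypothetical measurements on a single foraminiferal sample.
mgca, d18o_c = 1.2, 1.0
t = temp_from_mgca(mgca)
print(round(t, 1), round(d18o_seawater(d18o_c, t), 2))
```

Because temperature is constrained independently by Mg/Ca, any residual depletion in the computed seawater δ18O can be read as freshwater (meltwater) input rather than warming.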
Abstract:
Multibeam data were collected during R/V Polarstern cruise ARK-XXII/2 to the central Arctic Ocean. The multibeam sonar system was an ATLAS HYDROSWEEP DS2. The data are unprocessed and may contain outliers and blunders. Because of an error in the installation of the transducers, the data are affected by large systematic errors and must not be used for grid calculations or charting projects.
Abstract:
The Tokai-to-Kamioka (T2K) neutrino experiment measures neutrino oscillations using an almost pure muon neutrino beam produced at the J-PARC accelerator facility. The T2K muon monitor was installed to measure the direction and stability of the muon beam that is produced together with the muon neutrino beam. The systematic error in the muon beam direction measurement was estimated, using data and MC simulation, to be 0.28 mrad. During beam operation, the proton beam has been controlled using measurements from the muon monitor, and the direction of the neutrino beam has been tuned to within 0.3 mrad with respect to the designed beam axis. In order to understand the muon beam properties, a measurement of the absolute muon yield at the muon monitor was conducted with an emulsion detector. The number of muon tracks was measured to be (4.06 ± 0.05) × 10⁴ cm⁻², normalized to 4 × 10¹¹ protons on target with 250 kA horn operation. The result is in agreement with the prediction, which is corrected based on hadron production data.
Abstract:
This Doctoral Thesis deals with the application of meshless methods to eigenvalue problems, particularly free vibrations and buckling. The analysis is focused on aspects such as the numerical solution of the eigenvalue problem with these methods, their computational cost, and the feasibility of using non-consistent mass or geometric stiffness matrices. Furthermore, the error is analyzed in detail, with the aim of identifying its main sources and obtaining the key factors that enable faster convergence. Although a wide variety of apparently independent meshless methods can currently be found in the literature, the relationships among them have been analyzed; the outcome of this assessment is that those methods can be grouped into a limited number of categories, and that the Element-Free Galerkin Method (EFGM) is representative of the most important one. The EFGM has therefore been selected as the reference for the numerical analyses. Many of the error sources of a meshless method stem from its interpolation/approximation algorithm. In the EFGM, this algorithm is known as Moving Least Squares (MLS), a particular case of the Generalized Moving Least Squares (GMLS). The accuracy of the MLS depends on the following factors: the order of the polynomial basis p(x), the features of the weight function w(x), and the shape and size of the support domain of this weight function. The individual contribution of each of these factors, along with the interactions among them, has been studied in both regular and irregular arrangements of nodes, by reducing each contribution to a single quantifiable parameter. This assessment is applied to a range of one- and two-dimensional structural benchmark cases, and considers the error not only in terms of eigenvalues (natural frequencies or buckling loads, as appropriate), but also in terms of eigenvectors.
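The MLS approximation underlying the EFGM can be sketched in one dimension: at each evaluation point a weighted least-squares fit of a polynomial basis is solved locally, with the weight function and support radius controlling which nodes contribute. The Gaussian weight and support size below are illustrative choices, not those of the thesis:

```python
import numpy as np

def mls_approximate(x_eval, nodes, values, support=0.5):
    """1-D Moving Least Squares (MLS) sketch with linear basis p = [1, x]
    and a Gaussian weight function truncated at the given support radius."""
    u = np.zeros_like(np.atleast_1d(x_eval), dtype=float)
    for k, x in enumerate(np.atleast_1d(x_eval)):
        r = np.abs(nodes - x) / support
        w = np.where(r < 1.0, np.exp(-(r / 0.4) ** 2), 0.0)  # weight function w(x)
        P = np.column_stack([np.ones_like(nodes), nodes])     # basis p(x_i)
        A = P.T @ (w[:, None] * P)                            # moment matrix
        b = P.T @ (w * values)
        a = np.linalg.solve(A, b)                             # local coefficients a(x)
        u[k] = a[0] + a[1] * x                                # p(x)^T a(x)
    return u

# Consistency check: MLS with a linear basis reproduces a linear field exactly.
nodes = np.linspace(0.0, 1.0, 11)
vals = 2.0 + 3.0 * nodes
x = np.array([0.25, 0.5, 0.73])
print(np.allclose(mls_approximate(x, nodes, vals), 2.0 + 3.0 * x))
```

The exact reproduction of any field spanned by the basis is the consistency property on which MLS convergence rests; the basis order p(x), the weight w(x), and the support size are precisely the three accuracy factors the abstract isolates.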
Abstract:
Several issues concerning the current use of speech interfaces are discussed, and the design and development of a speech interface that enables air traffic controllers to command and control their terminals by voice is presented. Special emphasis is placed on the comparison between laboratory experiments and field experiments, in which a set of ergonomics-related effects is detected that cannot be observed in controlled laboratory experiments. The paper presents both objective and subjective performance measures obtained in the field evaluation of the system with student controllers at an air traffic control (ATC) training facility. The system exhibits high word recognition rates (0.4% error in Spanish and 1.5% in English) and low command error rates (6% in Spanish and 10.6% in English in the field tests). Subjective impressions have also been positive, encouraging future development and integration phases in the Spanish ATC terminals designed by Aeropuertos Españoles y Navegación Aérea (AENA).