914 results for RESIDUAL-BASED TESTS
Abstract:
The objective of this dissertation was to determine the initiation and completion rates of adjuvant chemotherapy, its toxicity, and the compliance rates of post-treatment surveillance for elderly patients with colon cancer, using the linked Surveillance, Epidemiology, and End Results – Medicare database. The first study assessed the initiation and completion rates of 5-fluorouracil-based adjuvant chemotherapy and their relationship with patient characteristics. Of the 12,265 patients diagnosed with stage III colon adenocarcinoma in 1991-2005, 64.4% received adjuvant chemotherapy within 3 months after tumor resection, and 40% of them completed the treatment. Age, marital status, and comorbidity score were significant predictors of chemotherapy initiation and completion. The second study estimated the incidence rates of toxicity-related endpoints among stage III colon adenocarcinoma patients treated with chemotherapy in 1991-2005. Of the 12,099 patients, 63.9% underwent chemotherapy; their toxicities included volume depletion disorder (3-month cumulative incidence rate [CIR]=9.1%), agranulocytosis (CIR=3.4%), diarrhea (CIR=2.4%), and nausea and vomiting (CIR=2.3%). Cox regression analysis confirmed the association between chemotherapy and these toxicities (HR=2.76; 95% CI=2.42-3.15). The risk of ischemic heart disease was only slightly associated with chemotherapy overall (HR=1.08), but significantly so among patients aged <75 with no comorbidity (HR=1.70). The third study determined the adherence rates to follow-up care among patients diagnosed with stage I-III colon adenocarcinoma from 2000 through June 2002. We identified 7,348 patients with a median follow-up of 59 months. The adherence rate was 83.9% for office visits, 29.4% for CEA tests, and 74.3% for colonoscopy. Overall, 25.2% met the recommended post-treatment care. Younger age at diagnosis, white race, being married, advanced stage, fewer comorbidities, and chemotherapy use were significantly associated with guideline adherence. In conclusion, not all colon cancer patients received chemotherapy. Receiving chemotherapy was associated with an increased risk of developing gastrointestinal, hematological, and cardiac toxicities. Patients were more likely to comply with the recommended schedule for office visits and colonoscopy than with CEA testing.
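The toxicity analysis above rests on cumulative incidence and Cox proportional hazards regression. As a hedged illustration only (the dissertation's actual SEER-Medicare variables and model specification are not given in this abstract), a minimal sketch of fitting a Cox model with a chemotherapy indicator using the lifelines library might look like this; all file and column names are hypothetical.

```python
# Minimal sketch: Cox proportional hazards model for time to a toxicity
# endpoint, with chemotherapy as the exposure of interest.
# All file and column names are hypothetical; the dissertation's actual
# SEER-Medicare covariates and coding are not reproduced here.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("colon_cohort.csv")  # hypothetical analysis file

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "event", "chemo", "age", "comorbidity_score"]],
    duration_col="time_to_event",  # follow-up time in months
    event_col="event",             # 1 = toxicity endpoint observed
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```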
Abstract:
The clinical advantage of protons over conventional high-energy x-rays stems from their unique depth-dose distribution, which delivers essentially no dose beyond the end of range. To achieve this advantage, accurate localization of the tumor volume relative to the proton beam is necessary. For tumors that move with respiration, the resultant dose distribution is sensitive to that motion. One way to reduce the uncertainty caused by respiratory motion is to use gated beam delivery. The main goal of this dissertation is to evaluate the respiratory gating technique in both passive scattering and scanning delivery modes. Our hypothesis was that optimization of the parameters of synchrotron operation and respiratory gating could lead to greater efficiency and accuracy of respiratory gating for all modes of synchrotron-based proton treatment delivery. The hypothesis was tested in two specific aims. Specific aim 1 was to assess the efficiency of respiratory-gated proton beam delivery and to optimize synchrotron operation for gated proton therapy. A simulation study was performed that introduced an efficient synchrotron operation pattern, called variable Tcyc; the simulation also estimated the efficiency of the respiratory-gated scanning beam delivery mode. Specific aim 2 was to assess the accuracy of beam delivery in respiratory-gated proton therapy. The simulation study was extended to the passive scattering mode to estimate the quality of pulsed beam delivery to the residual motion for several synchrotron operation patterns combined with gating. The results showed that variable Tcyc operation can offer reproducible beam delivery to the residual motion at a given phase of the motion. For respiratory-gated scanning beam delivery, the impact of motion on the dose distributions delivered by scanned beams was investigated by measurement. The results established motion thresholds for a variety of scan patterns and the appropriate number of paintings for normal and respiratory-gated beam deliveries. Together, the results of specific aims 1 and 2 provide supporting data for implementing respiratory-gated beam delivery in both passive scattering and scanning modes and validate the hypothesis.
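The efficiency question in specific aim 1 hinges on how well the synchrotron's beam-available periods overlap the respiratory gate windows. The dissertation's actual simulation is not reproduced here; the following is only a toy sketch, under assumed parameters (a fixed breathing period, a synchrotron cycle with an extraction flat-top, and a gate open around end-exhale for 30% of each breath), of estimating that delivery duty factor.

```python
# Toy sketch: duty-factor estimate for respiratory-gated synchrotron delivery.
# All parameters are assumptions, NOT the dissertation's: 4 s breathing
# period, 3.5 s synchrotron cycle with a 1.5 s extraction flat-top, and a
# gate open around end-exhale for 30% of each breathing cycle.
import numpy as np

t = np.arange(0.0, 600.0, 0.001)           # 10 min of simulated time, 1 ms steps
T_resp, T_cyc, T_flat = 4.0, 3.5, 1.5      # assumed periods (seconds)

gate_open = (t % T_resp) < 0.3 * T_resp    # respiratory gate window (assumed)
beam_avail = (t % T_cyc) < T_flat          # extraction flat-top of each cycle

efficiency = np.mean(gate_open & beam_avail)  # fraction of time beam is deliverable
print(f"Delivery duty factor: {efficiency:.1%}")
```

Varying T_cyc relative to T_resp in such a model shows why synchronizing the synchrotron cycle to the breathing pattern (the idea behind a variable Tcyc) can raise the duty factor.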
Abstract:
Purpose: School districts in the U.S. regularly offer foods that compete with the USDA reimbursable meal, known as 'a la carte' foods. These foods must adhere to state nutritional regulations; however, implementation of these regulations often differs across districts. The purpose of this study is to compare two methods of offering a la carte foods in terms of students' lunch intake: 1) an extensive a la carte program, in which schools have a separate area for a la carte food sales that includes non-reimbursable entrees; and 2) a moderate a la carte program, which offers a la carte foods on the same serving line as reimbursable meals. Methods: Direct observation was used to assess children's lunch consumption in six schools across two districts in Central Texas (n=373 observations). Schools were matched on socioeconomic status. Data collectors were randomly assigned to students and recorded foods obtained, foods consumed, source of food, gender, grade, and ethnicity. Observations were entered into a nutrient database program, FIAS Millennium Edition, to obtain nutritional information. Differences in energy and nutrient intake across lunch sources and districts were assessed using ANOVA and independent t-tests. A linear regression model was applied to control for potential confounders. Results: Students at schools with extensive a la carte programs consumed significantly more calories, carbohydrates, total fat, saturated fat, calcium, and sodium than students at schools with moderate a la carte offerings (p<.05). Students in the extensive a la carte program consumed approximately 94 more calories than students in the moderate program. There was no significant difference in energy consumption between students who consumed any amount of a la carte food and those who consumed none. In both districts, students who consumed a la carte offerings were more likely to consume sugar-sweetened beverages, sweets, chips, and pizza than students who consumed no a la carte foods. Conclusion: The amount, type, and method of offering a la carte foods can significantly affect students' dietary intake. This pilot study indicates that when a la carte foods are more available, students consume more calories. The findings underscore the need for further investigation of how the availability of a la carte foods affects children's diets. Guidelines for school a la carte offerings should be strengthened to encourage the consumption of healthful foods and appropriate energy intake.
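As a hedged illustration of the comparison named in the Methods (not the study's actual analysis code; all file and column names are hypothetical), a between-district comparison of caloric intake with an independent t-test plus a covariate-adjusted linear model could be sketched with scipy and statsmodels:

```python
# Minimal sketch: compare lunch energy intake between the two program
# types and adjust for potential confounders. Column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

obs = pd.read_csv("lunch_observations.csv")  # hypothetical observation file

extensive = obs.loc[obs["program"] == "extensive", "kcal"]
moderate = obs.loc[obs["program"] == "moderate", "kcal"]
t, p = stats.ttest_ind(extensive, moderate)  # independent-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# Linear regression controlling for grade, gender, and ethnicity (assumed
# confounders; the study's actual covariate set may differ).
model = smf.ols("kcal ~ program + grade + gender + ethnicity", data=obs).fit()
print(model.summary())
```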
Abstract:
Objective: The objective of this study is to investigate the association between processed and unprocessed red meat consumption and prostate cancer (PCa) stage in a homogeneous Mexican-American population. Methods: This population-based case-control study had a total of 582 participants (287 cases with histologically confirmed adenocarcinoma of the prostate gland and 295 age- and ethnicity-matched controls), all residing in the Southeast region of Texas from 1998 to 2006. All questionnaire information was collected using a validated data collection instrument. Statistical Analysis: Descriptive analyses included Student's t-tests and Pearson's chi-square tests. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated to quantify the association between nutritional factors and PCa stage, using a multivariable unconditional logistic regression model. Results: After adjusting for relevant covariates, those who consumed high amounts of processed red meat had non-significantly increased odds of being diagnosed with localized PCa (OR = 1.60, 95% CI: 0.85-3.03) and total PCa (OR = 1.43, 95% CI: 0.81-2.52), but not advanced PCa (OR = 0.91, 95% CI: 1.37-2.23). Interestingly, high consumption of carbohydrates showed a significant reduction in the odds of being diagnosed with total PCa and advanced PCa (OR = 0.43, 95% CI: 0.24-0.77; OR = 0.27, 95% CI: 0.10-0.71, respectively). However, consuming high amounts of energy from protein and fat increased the odds of being diagnosed with advanced PCa (OR = 4.62, 95% CI: 1.69-12.59; OR = 2.61, 95% CI: 1.04-6.58, respectively). Conclusion: Mexican-Americans who consumed high amounts of energy from protein and fat had increased odds of being diagnosed with advanced PCa, while high carbohydrate consumption reduced the odds of being diagnosed with total and advanced PCa.
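The reported ORs and CIs come from unconditional logistic regression; as a hedged sketch of that computation (not the study's actual model; file and variable names are hypothetical), statsmodels exponentiates the fitted coefficients to put them on the odds-ratio scale:

```python
# Minimal sketch: odds ratios from unconditional logistic regression.
# Variable names are hypothetical; the study's questionnaire-derived
# covariates are not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("pca_case_control.csv")  # hypothetical analysis file

X = sm.add_constant(data[["high_processed_meat", "age", "energy_kcal"]])
fit = sm.Logit(data["case"], X).fit()       # case = 1, control = 0

odds_ratios = np.exp(fit.params)            # OR = exp(beta)
conf_int = np.exp(fit.conf_int())           # 95% CI on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```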
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power for discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly growing completeness of gene-network and protein-protein interaction databases open avenues for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model with the Guilt-By-Association score from the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, improving our understanding of the biology underlying complex diseases and related natural-selection phenotypes.
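The combination rule described above lends itself to a short sketch. Assuming, hypothetically, that each gene carries a posterior log odds of selection from a two-component normal mixture fit to a selection statistic, plus a Guilt-By-Association (GBA) log-odds score from its network neighbors, a naïve Bayes combination simply adds the two log-odds contributions; the thesis's exact scoring and parameter estimates may differ.

```python
# Minimal sketch: combine mixture-model evidence with a network
# Guilt-By-Association score under a naive (conditional independence)
# Bayes assumption. All parameter values are hypothetical.
import numpy as np
from scipy import stats

def posterior_log_odds(z, pi1, mu1, sigma1):
    """Log odds that a gene's selection statistic z comes from the
    'selected' normal component rather than the null N(0, 1) component."""
    p1 = pi1 * stats.norm.pdf(z, mu1, sigma1)        # selected component
    p0 = (1.0 - pi1) * stats.norm.pdf(z, 0.0, 1.0)   # null component
    return np.log(p1 / p0)

def combined_score(z, gba_log_odds, pi1=0.05, mu1=2.5, sigma1=1.0):
    """Naive Bayes: log odds add when the two evidence sources are
    treated as conditionally independent given selection status."""
    return posterior_log_odds(z, pi1, mu1, sigma1) + gba_log_odds

# Toy example: a gene with a modest statistic but strong network support.
print(combined_score(z=1.8, gba_log_odds=1.2))
```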
Abstract:
The copepod Calanus finmarchicus is the dominant species of the meso-zooplankton in the Norwegian Sea and constitutes an important link between the phytoplankton and the higher trophic levels of the Norwegian Sea food chain. An individual-based model for C. finmarchicus, based on super-individuals with evolving traits for behaviour, stages, etc., is two-way coupled to the NORWegian ECOlogical Model system (NORWECOM). One year of modelled C. finmarchicus spatial distribution, production, and biomass is found to represent observations reasonably well. High C. finmarchicus abundance is found along the Norwegian shelf break in early summer, while the overwintering population is found along the slope and in the deeper Norwegian Sea basins. The timing of the spring bloom is generally later than in the observations. Annual Norwegian Sea production is found to be 29 million tonnes of carbon, and a production-to-biomass (P/B) ratio of 4.3 emerges. Sensitivity tests show that the modelling system is robust to the initial values of the behavioural traits and to the number of super-individuals simulated, provided this number exceeds about 50,000. Experiments with the model system indicate that it provides a valuable tool for studying ecosystem responses to causative forces such as prey density or overwintering population size. For example, introducing food limitation for C. finmarchicus reduces the stock dramatically, but a reduced stock may rebuild within one year under normal conditions. The NetCDF file contains model grid coordinates and bottom topography.
Abstract:
Predicting species' potential and future distributions has become a relevant tool in biodiversity monitoring and conservation. In this data article we present the suitability map of a virtual species generated from two bioclimatic variables, and a dataset containing more than 700,000 random observations at the extent of Europe. The dataset includes spatial attributes such as distance to roads, protected areas, country codes, and the habitat suitability of two spatially clustered species (a grassland species and a forest species) and one widespread species.
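As a hedged sketch of how such a suitability surface can be generated (the article's actual response curves and bioclimatic variables are not specified here), one common approach multiplies bell-shaped responses to each variable:

```python
# Minimal sketch: virtual-species habitat suitability as the product of
# bell-shaped responses to two bioclimatic variables. All optima and
# breadths are assumptions, not the article's actual parameters.
import numpy as np

def gaussian_response(x, optimum, breadth):
    """Suitability in (0, 1], peaking where x equals the optimum."""
    return np.exp(-((x - optimum) ** 2) / (2.0 * breadth ** 2))

# Hypothetical gridded bioclimatic variables over the study extent.
temperature = np.random.uniform(-5.0, 25.0, size=(100, 100))       # deg C
precipitation = np.random.uniform(200.0, 2000.0, size=(100, 100))  # mm/yr

suitability = (gaussian_response(temperature, optimum=12.0, breadth=4.0)
               * gaussian_response(precipitation, optimum=900.0, breadth=300.0))
print(suitability.shape, suitability.max())
```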
Abstract:
The sediment sequence at Ocean Drilling Program (ODP) Site 910 (556 m water depth) on the Yermak Plateau in the Arctic Ocean features a remarkable "overconsolidated section" from ~19 to 70-95 m below sea floor (m bsf), marked by large increases in bulk density and sediment strength. The ODP Leg 151 Shipboard Scientific Party interpreted the overconsolidated section to be caused by (1) grounding of a marine-based ice sheet, derived from Svalbard and perhaps the Barents Sea ice sheet, and/or (2) coarser-grained glacial sedimentation, which allowed increased compaction. Here I present planktonic foraminiferal δ18O data based on Neogloboquadrina pachyderma (sinistrally coiling) that date the termination of overconsolidation near the boundary between isotope stages 16 and 17 (ca. 660 ka). No evidence is found for coarser-grained sedimentation, because lithic fragments >150 µm exhibit similar mean concentrations throughout the upper 24.5 m bsf. The overconsolidated section may reflect more extensive ice-sheet grounding prior to ca. 660 ka, suggesting a major change in the state of the Svalbard ice sheets during the mid-Quaternary. Furthermore, continuous sedimentation since that time argues against a pervasive Arctic ice shelf impinging on the Yermak Plateau during the past 660 k.y. These findings suggest that Svalbard ice-sheet history was largely independent of circum-Arctic ice-sheet history during the middle to late Quaternary.
Abstract:
To reconstruct export productivity (Pexp), 27 taxonomic categories of the planktonic foraminifera census data were used with the modern analog technique SIMMAX 28 (Pflaumann et al., 1996, doi:10.1029/95PA01743; 2003, doi:10.1029/2002PA000774). To the 26 taxonomic groups widely used and listed in Kucera et al. (2005, doi:10.1016/j.quascirev.2004.07.014), Turborotalita humilis was added in our calibration, as it is associated with the PCC source region (Meggers et al., 2002, doi:10.1016/S0967-0645(02)00103-0). The modern analog file is based on the Iberian margin database (Salgueiro et al., 2008, doi:10.1016/j.marmicro.2007.09.003) combined with the North Atlantic surface samples used by the MARGO project (Kucera et al., 2005), giving a total of 999 analogs for Pexp. Modern oceanic primary productivity (PP) was obtained for each site by averaging 12 monthly primary productivity values over an 8-year period (1978-1986), estimated from satellite color data (CZCS) and gridded at 0.5° latitude-longitude fields (Antoine et al., 1996, doi:10.1029/95GB02832). Export productivity was calculated from the PP values following the empirical relationship Pexp = PP²/400 for primary production below 200 gC/m²/yr and Pexp = PP/2 for primary production above 200 gC/m²/yr (Eppley and Peterson, 1979, doi:10.1038/282677a0; Sarnthein et al., 1988, doi:10.1029/PA003i003p00361). The residuals give the differences between the satellite-based Pexp and the foraminiferal Pexp.
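The piecewise Eppley-Peterson relationship quoted above is straightforward to encode; a minimal sketch (PP and Pexp in gC/m²/yr), noting that the two branches meet continuously at PP = 200:

```python
# Minimal sketch: piecewise export-productivity relationship of
# Eppley and Peterson (1979) as used above. PP in gC/m^2/yr.
import numpy as np

def export_productivity(pp):
    """Pexp = PP^2/400 for PP < 200 gC/m^2/yr, else Pexp = PP/2."""
    pp = np.asarray(pp, dtype=float)
    return np.where(pp < 200.0, pp ** 2 / 400.0, pp / 2.0)

print(export_productivity([100.0, 200.0, 300.0]))  # -> [ 25. 100. 150.]
```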
Abstract:
The paper presents a consistent set of results showing the ability of Laser Shock Processing (LSP) to modify the overall properties of Friction Stir Welded (FSW) joints made of AA 2024-T351. Based on laser beam intensities above 10⁹ W/cm², with pulse energies of several joules and pulse durations of nanoseconds, LSP is able to induce a compressive residual stress field, improving wear and fatigue resistance by slowing crack propagation and stress corrosion cracking, and improving the overall behaviour of the structure. After the FSW and LSP procedures are briefly presented, the results of micro-hardness measurements and of transverse tensile tests, together with the corrosion resistance of native vs. LSP-treated joints, are discussed. The ability of LSP to generate compressive residual stresses and to improve the behaviour of the FSW joints is underscored.
Abstract:
Freezing of water or salt solution in concrete pores is a main cause of severe damage and a significant reduction in service life. Most freeze-thaw (F-T) accelerated tests measure the scaling of concrete by weighing. This paper presents complementary procedures, based on the use of strain gauges and ultrasonic pulse velocity (UPV), for measuring the deterioration of concrete due to freezing and thawing. These non-destructive testing (NDT) procedures are applied to two types of concrete, one susceptible to F-T damage and one that is not. The results show a good correlation between scaling and the measurements obtained with NDT, with the added advantage that NDT can detect deterioration before damage occurs and permits continuous measurement.
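UPV readings are commonly converted into a damage index; as a hedged sketch only (the paper's exact metric is not specified in this abstract), one standard convention expresses deterioration as a relative dynamic elastic modulus, proportional to the square of the pulse velocity:

```python
# Hedged sketch: freeze-thaw deterioration from UPV readings expressed as a
# relative dynamic modulus (E_dyn proportional to velocity squared). This is
# a common convention, not necessarily the paper's exact damage metric.
def relative_dynamic_modulus(v_after_cycles, v_initial):
    """Percent of the initial dynamic elastic modulus remaining after
    F-T cycling, using E_dyn ~ (pulse velocity)^2."""
    return 100.0 * (v_after_cycles / v_initial) ** 2

# Example: UPV dropping from 4500 m/s to 4100 m/s after cycling.
print(f"{relative_dynamic_modulus(4100.0, 4500.0):.1f} %")  # ~83.0 %
```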
Abstract:
EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of an FPSC [1] (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all related issues, including scientific data archiving. Additionally, a new data archiving solution has been developed to demonstrate the achievable performance and possible bottlenecks of scientific data archiving in Fast Plant System Control. The presented system implements a fault-tolerant architecture over a GEthernet network, where FPSC data are reliably archived remotely while remaining accessible for redistribution within the duration of a pulse. The storage service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration may be simplified and a unique view of all archived data provided. All the involved components have been integrated under EPICS [2] (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines, and configuration process variables. The prototyped solution is based on the NetCDF-4 [3], [4] (Network Common Data Format) file format in order to incorporate important features such as support for scientific data models, management of very large files, platform-independent encoding, and single-writer/multiple-reader concurrency. In this contribution, a complete description of the above-mentioned solution is presented, together with the most relevant results of the tests performed, focusing on the benefits and limitations of the applied technologies.
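As a hedged sketch of the archiving pattern named above (not the prototype's actual code; channel names, shapes, and the pulse file name are hypothetical), a NetCDF-4 writer streaming acquired samples along an unlimited time dimension might look like this in Python:

```python
# Minimal sketch: append acquired samples to a NetCDF-4 file along an
# unlimited time dimension. Variable names and shapes are hypothetical.
import numpy as np
from netCDF4 import Dataset

with Dataset("pulse_0001.nc", "w", format="NETCDF4") as nc:
    nc.createDimension("time", None)               # unlimited: grows per write
    t = nc.createVariable("time", "f8", ("time",))
    sig = nc.createVariable("adc_ch0", "f4", ("time",), zlib=True)

    # Simulated acquisition loop: archive one block of samples per iteration.
    for block in range(10):
        samples = np.random.randn(1000).astype("f4")
        start = len(t)
        t[start:start + 1000] = np.arange(start, start + 1000) * 1e-6
        sig[start:start + 1000] = samples
```

The unlimited dimension is what allows a single writer to keep appending during a pulse while readers open the growing file concurrently.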
Abstract:
An important goal in the field of intelligent transportation systems (ITS) is to provide driving aids aimed at preventing accidents and reducing the number of traffic victims. The most common traffic accidents in urban areas are due to sudden braking, which demands a very fast response on the part of drivers. Attempts to solve this problem have motivated many ITS advances, including the detection of the intentions of surrounding cars using lasers, radars, or cameras. However, this might not be enough to increase safety when there is a danger of collision. Vehicle-to-vehicle (V2V) communications are needed to ensure that the intentions of other cars are also available. The article describes the development of a controller to perform an emergency stop via an electro-hydraulic braking system, employed on dry asphalt. An original V2V communication scheme based on WiFi cards has been used for broadcasting positioning information to other vehicles. The reliability of the scheme has been theoretically analyzed to estimate its performance when the number of vehicles involved is much higher. This controller has been incorporated into the AUTOPIA control program for automatic cars. The system has been implemented in a Citroën C3 Pluriel, and various tests were performed to evaluate its operation.
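The broadcasting of positioning information over WiFi suggests a simple illustration. As a hedged sketch only (the article's actual message format, rate, and protocol stack are not given in this abstract), a vehicle might periodically broadcast its position and braking state over a UDP socket on the local network segment:

```python
# Minimal sketch: periodic UDP broadcast of a vehicle state message over a
# WiFi network. The message fields, port, and 10 Hz rate are assumptions,
# not the article's actual V2V protocol.
import json
import socket
import time

BROADCAST = ("255.255.255.255", 50000)  # hypothetical broadcast port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    state = {
        "vehicle_id": "car_01",          # hypothetical identifier
        "lat": 40.4168, "lon": -3.7038,  # would come from the GPS unit
        "speed_mps": 8.3,
        "braking": False,
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(state).encode(), BROADCAST)
    time.sleep(0.1)                      # assumed 10 Hz broadcast rate
```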
Abstract:
We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or at run-time (i.e., dynamically). We provide techniques that use information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests into the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
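The run-time checking transformation can be illustrated by analogy. The framework itself operates on constraint logic programs, so the following Python decorator is only a hedged sketch of the general idea: procedures whose assertions cannot be discharged statically are wrapped so that pre- and postconditions are tested dynamically, and every flagged error is a definite violation of the stated specification.

```python
# Analogy-only sketch of the run-time checking transformation: wrap a
# procedure so that assertions not proved at compile-time are tested
# dynamically. The actual framework operates on constraint logic programs.
from functools import wraps

def check(pre, post):
    """Instrument a function with run-time pre/postcondition tests."""
    def transform(fn):
        @wraps(fn)
        def checked(*args):
            assert pre(*args), f"precondition violated in {fn.__name__}"
            result = fn(*args)
            assert post(*args, result), f"postcondition violated in {fn.__name__}"
            return result
        return checked
    return transform

@check(pre=lambda xs: all(isinstance(x, int) for x in xs),
       post=lambda xs, ys: sorted(xs) == ys)
def qsort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return qsort([x for x in rest if x < pivot]) + [pivot] + \
           qsort([x for x in rest if x >= pivot])

print(qsort([3, 1, 2]))  # [1, 2, 3]
```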