9 results for setup time reduction

in DigitalCommons@The Texas Medical Center


Relevance:

80.00%

Publisher:

Abstract:

The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction. The VOI-based reconstruction produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image in a study designed to simulate target motion. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
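To illustrate the kind of error metric behind the reported 43% and 84% reductions, here is a minimal sketch of a least-squares error comparison restricted to a VOI mask. The volumes, noise levels, and mask are entirely hypothetical stand-ins, not the dissertation's data or reconstruction algorithms:

```python
import numpy as np

def ls_error(recon, truth, mask=None):
    """Sum-of-squares error between a reconstruction and ground truth,
    optionally restricted to a volume-of-interest (VOI) mask."""
    diff = recon - truth
    if mask is not None:
        diff = diff[mask]
    return float(np.sum(diff ** 2))

# Toy 3-D volumes standing in for one 4D-CBCT frame and its ground truth.
rng = np.random.default_rng(0)
truth = rng.random((16, 16, 16))
recon_std = truth + 0.10 * rng.standard_normal(truth.shape)  # "standard" recon
recon_voi = truth + 0.05 * rng.standard_normal(truth.shape)  # "VOI-based" recon

voi = np.zeros(truth.shape, dtype=bool)
voi[4:12, 4:12, 4:12] = True  # cubic VOI around the moving target

err_std = ls_error(recon_std, truth, voi)
err_voi = ls_error(recon_voi, truth, voi)
```

The same metric evaluated with `mask=None` gives the whole-image error figure.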

Relevance:

40.00%

Publisher:

Abstract:

Because the goal of radiation therapy is to deliver a lethal dose to the tumor, the tumor's location must be known accurately. Margins are placed around the tumor to account for variations in its daily position. If tumor motion and patient setup uncertainties can be reduced, the margins that account for these uncertainties in tumor location can also be reduced, allowing dose escalation, which in turn could potentially improve survival rates. In the first part of this study, we monitored the location of fiducials implanted in the periphery of lung tumors to determine the extent of non-gated and gated fiducial motion, and to quantify patient setup uncertainties. In the second part, we determined where the tumor is when different methods of image-guided patient setup and respiratory gating are employed. In the final part, we developed, validated, and implemented a technique in which patient setup uncertainties are reduced by aligning patients based upon fiducial locations in projection images. Results from the first part indicate that respiratory gating reduces fiducial motion relative to motion during normal respiration, and that setup uncertainties are large when patients are aligned each day using externally placed skin marks. The results from the second part indicate that current margins accounting for setup uncertainty and tumor motion leave less than 2% of the tumor outside the planning target volume (PTV) when the patient is aligned using skin marks. In addition, we found that if respiratory gating is to be used, it is most effective in conjunction with image-guided patient setup. In the third part, we successfully developed, validated, and implemented on a patient a technique for aligning a moving target prior to treatment to reduce the uncertainties in tumor location. In conclusion, setup uncertainties and tumor motion are a significant problem when treating tumors located within the thoracic region. Image-guided patient setup in conjunction with treatment delivery using respiratory gating reduces these uncertainties in tumor location. In doing so, the margins around the tumor used to generate the PTV can be reduced, which may allow for dose escalation to the tumor.

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE. To determine the effectiveness of active surveillance cultures and associated infection control practices on the incidence of methicillin-resistant Staphylococcus aureus (MRSA) in the acute care setting. DESIGN. A historical analysis of existing clinical data utilizing an interrupted time series design. SETTING AND PARTICIPANTS. Patients admitted to a 260-bed tertiary care facility in Houston, TX between January 2005 and December 2010. INTERVENTION. Infection control practices, including enhanced barrier precautions, compulsive hand hygiene, disinfection and environmental cleaning, and executive ownership and education, were simultaneously introduced during a 5-month implementation period culminating with the implementation of active surveillance screening. Beginning June 2007, all high-risk patients were cultured for MRSA nasal carriage within 48 hours of admission. Segmented Poisson regression was used to test the significance of the difference in incidence of healthcare-associated MRSA between the 29-month pre-intervention period and the 43-month post-intervention period. RESULTS. A total of 9,957 of 11,095 high-risk patients (89.7%) were screened for MRSA carriage during the intervention period. Active surveillance cultures identified 1,330 MRSA-positive patients (13.4%), contributing to an admission prevalence of 17.5% in high-risk patients. The mean rate of healthcare-associated MRSA infection and colonization decreased from 1.1 per 1,000 patient-days in the pre-intervention period to 0.36 per 1,000 patient-days in the post-intervention period (P < 0.001). The intervention and the percentage of S. aureus isolates susceptible to oxacillin were each significantly associated with the incidence of MRSA infection and colonization (IRR = 0.50, 95% CI = 0.31-0.80 and IRR = 0.004, 95% CI = 0.00003-0.40, respectively). CONCLUSIONS. Aggressively targeting patients at high risk for MRSA colonization with active surveillance cultures and associated infection control practices, as part of a multifaceted, hospital-wide intervention, is effective in reducing the incidence of healthcare-associated MRSA.
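The segmented Poisson regression named above can be sketched as a level-change ("step") model fitted by iteratively reweighted least squares. Everything below is simulated for illustration (monthly counts, uniform patient-days, no trend or covariate terms); the study's actual model was richer:

```python
import numpy as np

def fit_poisson(X, y, offset, n_iter=50):
    """Poisson regression with log link and offset, via IRLS (Newton's method)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = eta - offset + (y - mu) / mu          # working response
        W = mu                                    # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated monthly infection counts: 29 pre- and 43 post-intervention months,
# with rates per patient-day matching the reported 1.1 and 0.36 per 1,000.
rng = np.random.default_rng(1)
months = np.arange(72)
post = (months >= 29).astype(float)
patient_days = np.full(72, 6000.0)               # hypothetical monthly census
true_rate = np.where(post == 1, 0.36e-3, 1.1e-3)
y = rng.poisson(true_rate * patient_days).astype(float)

X = np.column_stack([np.ones(72), post])         # intercept + level change
beta = fit_poisson(X, y, np.log(patient_days))
irr = np.exp(beta[1])                            # incidence rate ratio, post vs. pre
```

An `irr` below 1 indicates a drop in incidence after the intervention, analogous to the reported IRR of 0.50.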

Relevance:

30.00%

Publisher:

Abstract:

Much of the literature on disparities in access to health care among children has focused on measuring absolute and relative differences experienced by race/ethnic groups and, to a lesser extent, socioeconomic groups. However, it is not clear from the existing literature how disparities in access to care have changed over time for children, especially following implementation of the State Children's Health Insurance Program (SCHIP). The primary objective of this research was to determine whether disparities in access to care for children decreased across socioeconomic and race/ethnicity groups after SCHIP implementation. Methods commonly used to measure health inequalities were applied to disparities in access to care, including the population-attributable risk (PAR) and the relative index of inequality (RII). Using these measures, there is evidence of a substantial decrease in socioeconomic disparities in health insurance coverage and, to a lesser extent, in having a usual source of care since the SCHIP program began. There is also evidence of a considerable decrease in disparities in access to care for non-Hispanic Black children. However, there appears to be a slight increase in disparities in access to care among Hispanic children compared with non-Hispanic White children. While disparities in access to care improved greatly with the introduction of SCHIP, continued progress may depend on continuation of the SCHIP program or similar targeted health policy programs.
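The two inequality measures named above can be computed from grouped rates. The sketch below uses made-up uninsurance rates for three ordered income groups (definitions of PAR and RII vary in the literature; this follows one common convention: PAR as the excess over the best-off group's rate, RII as the ratio of regression-predicted rates at the bottom versus top of the cumulative population ranking):

```python
import numpy as np

# Hypothetical uninsurance rates and population shares for three income groups,
# ordered from most to least disadvantaged (illustrative numbers only).
rates = np.array([0.22, 0.12, 0.05])   # proportion of children uninsured
shares = np.array([0.30, 0.40, 0.30])  # population share of each group

overall = float(np.sum(rates * shares))

# Population-attributable risk: excess of the overall rate over the rate in
# the best-off group, as a fraction of the overall rate.
par = (overall - rates[-1]) / overall

# Relative index of inequality: regress group rates on each group's midpoint
# cumulative population rank (ridit score), then take the ratio of predicted
# rates at rank 0 (most disadvantaged) and rank 1 (least disadvantaged).
cum = np.cumsum(shares)
ridit = cum - shares / 2                      # [0.15, 0.50, 0.85]
slope, intercept = np.polyfit(ridit, rates, 1)
rii = intercept / (intercept + slope)         # predicted bottom / predicted top
```

With these numbers PAR is about 0.61 (61% of the overall rate is "excess" relative to the best-off group) and RII is well above 1, signaling inequality that disadvantages the lower-ranked groups.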

Relevance:

30.00%

Publisher:

Abstract:

Objective. Congenital limb defects are common birth defects, occurring in approximately 2-7/10,000 live births. Because congenital limb defects are pervasive throughout all populations, and the conditions profoundly affect quality of life, they represent a significant public health concern. Currently there is a paucity of etiologic information in the literature regarding congenital limb reduction defects (CLRDs), which represents a major limitation in developing treatment strategies as well as in identifying high-risk pregnancies. Additionally, despite the fact that the majority of CLRDs are isolated, most previous studies have not separated them from those occurring as part of a known syndrome or with multiple additional congenital anomalies of unknown etiology. It stands to reason that factors responsible for multiple congenital anomalies that happen to include limb reduction defects may be quite different from those leading to an isolated CLRD. As a first step toward gaining etiologic understanding, this cross-sectional study was undertaken to determine the birth prevalence of, and obtain demographic information about, non-syndromic (isolated) CLRDs that occurred in Texas from 1999-2001. Methods. The study population included all infants/fetuses with isolated CLRDs born in Texas during 1999-2001; the comparison population was all infants born to mothers who were residents of Texas during the same period. The overall birth prevalence of limb reduction defects was determined and adjusted for ethnicity, gender, site of defect (upper limb versus lower limb), county of residence, maternal age, and maternal education. Results. In Texas, the overall birth prevalence of isolated CLRDs was 2.1/10,000 live births (1.5 and 0.6/10,000 live births for upper limb and lower limb, respectively). The risk of isolated lower limb CLRDs in Texas was significantly lower in females when gender was examined individually (crude prevalence odds ratio of 0.57, 95% CI: 0.36-0.91) as well as in relation to all other variables used in the analysis (adjusted prevalence odds ratio of 0.58, 95% CI: 0.36-0.93). Harris County (which includes the Houston metropolitan area) had significantly lower risks of all (upper limb and lower limb combined) isolated CLRDs when examined in relation to other counties in Texas, with a crude prevalence odds ratio of 0.4 (95% CI: 0.29-0.72) and an adjusted prevalence odds ratio of 0.50 (95% CI: 0.31-0.80). The risk of isolated upper limb CLRDs was significantly lower in Harris County (crude prevalence odds ratio of 0.45, CI: 0.26-0.76, and adjusted prevalence odds ratio of 0.49, CI: 0.28-0.84). This trend toward decreased risk in Harris County was not observed for isolated lower limb reduction defects (adjusted prevalence odds ratio of 0.50, 95% confidence interval: 0.22-1.12). Conclusions. The birth prevalence of isolated CLRDs in Texas is at the lower end of the range of rates reported by other authors for other states (Alabama, Arkansas, California, Georgia, Hawaii, Iowa, Maryland, Massachusetts, North Carolina, Oklahoma, Utah, Washington) and other countries (Argentina, Australia, Austria, Bolivia, Brazil, Canada, Chile, China, Colombia, Costa Rica, Croatia, Denmark, Ecuador, England, Finland, France, Germany, Hungary, Ireland, Israel, Italy, Lithuania, Mexico, Norway, Paraguay, Peru, Spain, Scotland, Sweden, Switzerland, Uruguay, and Venezuela). In Texas, the birth prevalence of isolated congenital lower limb reduction defects was greater for males than for females, while the birth prevalence of isolated congenital upper limb reduction defects was not significantly different between males and females. The reduced rates of limb reduction defects in Harris County warrant further investigation. This study provides an important first step toward etiologic understanding of isolated congenital limb reduction defects.
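The crude prevalence odds ratios and Wald confidence intervals quoted above come from standard 2x2-table arithmetic, which can be sketched as follows. The counts in the example call are invented for illustration and are not the Texas study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: cases and non-cases by exposure group.
or_, lo, hi = odds_ratio_ci(40, 160, 70, 130)
```

An interval that excludes 1 (as here, and as in the reported 0.57 with CI 0.36-0.91) indicates a statistically significant difference in risk.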

Relevance:

30.00%

Publisher:

Abstract:

The relationship between degree of diastolic blood pressure (DBP) reduction and mortality was examined among hypertensives, ages 30-69, in the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center community-based trial that followed 10,940 hypertensive participants for five years. One-year survival was required for inclusion in this investigation, since the one-year annual visit was the first occasion at which change in blood pressure could be measured on all participants. During the subsequent four years of follow-up on 10,052 participants, 568 deaths occurred. For levels of change in DBP and for categories of variables related to mortality, the crude mortality rate was calculated. Time-dependent life tables were also calculated so as to utilize the available blood pressure data over time. In addition, the Cox life table regression model, extended to take into account both time-constant and time-dependent covariates, was used to examine the relationship between change in blood pressure over time and mortality. The results of the time-dependent life table and time-dependent Cox regression analyses supported a quadratic function modeling the relationship between DBP reduction and mortality, even after adjusting for other risk factors. The minimum mortality hazard ratio, based on a particular model, occurred at a DBP reduction of 22.6 mm Hg (standard error = 10.6) in the whole population and 8.5 mm Hg (standard error = 4.6) in the baseline DBP stratum 90-104. Beyond this reduction, there was a small increase in the risk of death. There was no evidence of a quadratic function when the same model was fitted using systolic blood pressure. Methodologic issues involved in studying a particular degree of blood pressure reduction were considered. The confidence interval around the change corresponding to the minimum hazard ratio was wide, and the obtained blood pressure level should not be interpreted as a goal for treatment. Blood pressure reduction was attributed not only to pharmacologic therapy but also to regression to the mean and to other unknown factors unrelated to treatment. Therefore, the surprising results of this study do not provide direct implications for treatment, but strongly suggest replication in other populations.
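A quadratic term in a Cox model's log hazard produces exactly the J-shaped risk curve described above, with the minimum-hazard reduction given in closed form by the two coefficients. The coefficients below are hypothetical, chosen only so the minimum falls near the reported 22.6 mm Hg; they are not the HDFP estimates:

```python
import numpy as np

# Hypothetical quadratic log-hazard: log HR(d) = b1*d + b2*d**2,
# where d is DBP reduction in mm Hg. b2 > 0 gives a J-shaped curve.
b1, b2 = -0.045, 0.001

d_star = -b1 / (2 * b2)            # reduction minimizing the hazard ratio
d = np.linspace(0, 40, 401)        # 0.1 mm Hg grid
hr = np.exp(b1 * d + b2 * d ** 2)  # hazard ratio relative to no reduction
```

With these values the hazard ratio declines until d_star = 22.5 mm Hg, then rises slightly, mirroring the "small increase in the risk of death" past the minimum.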

Relevance:

30.00%

Publisher:

Abstract:

The infant mortality rate (IMR) is considered one of the most important indices of a country's well-being. Countries around the world and health organizations such as the World Health Organization are dedicating their resources, knowledge, and energy to reducing infant mortality rates. The well-known Millennium Development Goal 4 (MDG 4), whose aim is to achieve a two-thirds reduction of the under-five mortality rate between 1990 and 2015, is an example of this commitment. In this study our goal is to model the trends in IMR from the 1950s to the 2010s for selected countries. We would like to know how the IMR changes over time and how it differs across countries. IMR data collected over time form a time series, and the repeated observations in an IMR time series are not statistically independent, so in modeling the trend of IMR it is necessary to account for these correlations. We propose to use the generalized least squares method, in a general linear model setting, to deal with the variance-covariance structure in our model. To estimate the variance-covariance matrix, we refer to time-series models, especially autoregressive and moving-average models. Furthermore, we compare results from the general linear model with a correlation structure to those from the ordinary least squares method, which ignores the correlation structure, to check how much the estimates change.
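The GLS approach described above can be sketched for the simplest case of AR(1) errors with a known autocorrelation, using the Prais-Winsten transformation. The declining IMR series below is simulated; the abstract's study would instead estimate the correlation structure from real country data:

```python
import numpy as np

def gls_ar1(X, y, rho):
    """GLS under AR(1) errors with known rho, via the Prais-Winsten transform:
    quasi-difference rows 1..n-1, rescale row 0, then run OLS."""
    Xs = X.astype(float).copy()
    ys = y.astype(float).copy()
    Xs[1:] = X[1:] - rho * X[:-1]
    ys[1:] = y[1:] - rho * y[:-1]
    Xs[0] *= np.sqrt(1 - rho ** 2)
    ys[0] *= np.sqrt(1 - rho ** 2)
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Simulated IMR-like series: linear decline plus AR(1) noise, 1950-2010.
rng = np.random.default_rng(2)
years = np.arange(1950, 2011)
t = (years - years.min()).astype(float)
e = np.zeros(len(t))
for i in range(1, len(t)):
    e[i] = 0.6 * e[i - 1] + rng.standard_normal()
imr = 120.0 - 1.5 * t + 2.0 * e    # deaths per 1,000 live births (toy scale)

X = np.column_stack([np.ones_like(t), t])
beta_gls = gls_ar1(X, imr, rho=0.6)                    # correlation-aware fit
beta_ols, *_ = np.linalg.lstsq(X, imr, rcond=None)     # ignores correlation
```

Comparing `beta_gls` with `beta_ols` mirrors the abstract's planned comparison: the point estimates are often similar, but the GLS fit yields valid standard errors under serial correlation.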

Relevance:

30.00%

Publisher:

Abstract:

This study evaluated a modified home-based model of family preservation services, the long-term community case management model, as operationalized by a private child welfare agency that serves as the last resort for hard-to-serve families with children at severe risk of out-of-home placement. The evaluation used a one-group pretest-posttest design with a modified time-series component to determine whether the intervention produced a change over time in the composite score of each family's Child Well-Being Scales (CWBS). A comparison of the mean CWBS scores of the 208 families, and of subsets of these families, at the pretest and various posttests showed a statistically significant decrease in CWBS scores, indicating reduced risk factors. The longer the duration of services, the greater the statistically significant risk reduction. The results support the conclusion that families who participate in empowerment-oriented community case management, with the option to extend service duration to resolve or ameliorate chronic family problems, experience effective strengthening of family functioning.

Relevance:

30.00%

Publisher:

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these values are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy over the baseline multivariate model, but diminished classification accuracy compared to when just the trend analysis features were added (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
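One simple instance of the "trend analysis as latent feature" idea described across these manuscripts is a windowed least-squares slope of a vital sign. The sketch below is a generic illustration with made-up SpO2 readings, not the dissertation's actual feature set or preprocessing pipeline:

```python
import numpy as np

def trend_feature(values, times):
    """Least-squares slope of a time-series window, usable as a latent
    feature summarizing deterioration (e.g., falling SpO2 before an arrest)."""
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    t = t - t.mean()
    return float(np.sum(t * (v - v.mean())) / np.sum(t * t))

# A 20-minute window of hypothetical 1-minute SpO2 readings drifting downward
# (a steady decline plus small alternating measurement noise).
minutes = np.arange(20)
spo2 = 98.0 - 0.3 * minutes + np.array([0.2, -0.1] * 10)

slope = trend_feature(spo2, minutes)  # negative slope flags deterioration
```

The scalar `slope` (here about -0.3 %/minute) can then join the multivariate snapshot features as a candidate input to the predictive model, which is precisely what a single point-in-time value of SpO2 cannot convey.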