14 results for Design time
in DigitalCommons@The Texas Medical Center
Abstract:
Objective: To investigate hemodynamic responses to lateral rotation. Design: Time-series analysis within a pilot randomized controlled trial. Setting: A medical intensive care unit (ICU) and a medical-surgical ICU in two tertiary care hospitals. Patients: Adult patients receiving mechanical ventilation. Interventions: Two-hourly manual or continuous automated lateral rotation. Measurements and Main Results: Heart rate (HR) and arterial pressure were sampled every 6 seconds for more than 24 hours, and pulse pressure (PP) was computed. Turn data were obtained from a turning flow sheet (manual turn) or with an angle sensor (automated turn). Within-subject ensemble averages were computed for HR, mean arterial pressure (MAP), and PP across turns. Sixteen patients were randomized to either the manual (n = 8) or automated (n = 8) turn. Three patients did not complete the study due to hemodynamic instability, bed malfunction, or extubation, leaving 13 patients (n = 6 manual turn, n = 7 automated turn) for analysis. Seven patients (54%) had an arterial line. Statistically significant increases in hemodynamic variables were observed (p < .05), but few changes were clinically important (defined as ≥ 10 bpm for HR or ≥ 10 mmHg for MAP and PP), and these occurred only in the manual-turn group. All manual-turn patients had prolonged recovery to baseline in HR, MAP, and PP of up to 45 minutes (p ≤ .05). No significant turning-related periodicities were found for HR, MAP, or PP. Cross-correlations between variables showed variable lead-lag relations in both groups. A statistically, but not clinically, significant increase in HR of 3 bpm was found for the manual-turn group in the back compared with the right lateral position (F = 14.37, df = 1, 11, p = .003). Conclusions: Mechanically ventilated critically ill patients experience modest hemodynamic changes with manual lateral rotation. A clinically inconsequential increase in HR, MAP, and PP may persist for up to 45 minutes. Automated lateral rotation has negligible hemodynamic effects.
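The within-subject ensemble averaging used in this study can be sketched as follows; the 6-second sampling matches the abstract, but the window lengths and the toy signal are illustrative assumptions, not the study's actual data.

```python
def ensemble_average(samples, turn_indices, pre=10, post=50):
    """Average a signal across repeated events (turns), aligned at each event.

    samples:      list of equally spaced measurements (e.g. HR every 6 s)
    turn_indices: sample index at which each turn began
    pre, post:    samples to keep before/after each turn
    Returns one averaged trace of length pre + post.
    """
    window = pre + post
    sums = [0.0] * window
    count = 0
    for t in turn_indices:
        if t - pre < 0 or t + post > len(samples):
            continue  # skip turns without a complete window
        for k in range(window):
            sums[k] += samples[t - pre + k]
        count += 1
    return [s / count for s in sums]

# Toy example: constant HR of 80 bpm with a +12 bpm bump after each "turn"
hr = [80.0] * 200
for turn in (50, 120):
    for j in range(10):
        hr[turn + j] += 12
avg = ensemble_average(hr, [50, 120])
```

Averaging across turns cancels turn-unrelated variability, so the turn-locked response stands out in the averaged trace.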
Abstract:
Utilizing advanced information technology, Intensive Care Unit (ICU) remote monitoring allows highly trained specialists to oversee a large number of patients at multiple sites on a continuous basis. In the current research, we conducted a time-motion study of registered nurses’ work in an ICU remote monitoring facility. Data were collected on seven nurses through 40 hours of observation. The results showed that nurses’ essential tasks were centered on three themes: monitoring patients, maintaining patients’ health records, and managing technology use. In monitoring patients, nurses spent 52% of the time assimilating information embedded in a clinical information system and 15% on monitoring live vitals. System-generated alerts frequently interrupted nurses in their task performance and redirected them to manage suddenly appearing events. These findings provide insight into nurses’ workflow in a new, technology-driven critical care setting and have important implications for system design, work engineering, and personnel selection and training.
Abstract:
Although we have amassed extensive catalogues of signalling network components, our understanding of the spatiotemporal control of emergent network structures has lagged behind. Dynamic behaviour is starting to be explored throughout the genome, but analysis of spatial behaviours is still confined to individual proteins. The challenge is to reveal how cells integrate temporal and spatial information to determine specific biological functions. Key findings are the discovery of molecular signalling machines such as Ras nanoclusters, spatial activity gradients and flexible network circuitries that involve transcriptional feedback. They reveal design principles of spatiotemporal organization that are crucial for network function and cell fate decisions.
Abstract:
The usage of intensity modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research has investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verifications, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. Precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2mm accuracy for single-beam fluence map verifications and to 5%/2mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and percent of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site. 
Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
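The dose comparisons above rely on point-by-point metrics such as the gamma index. As an illustration only (not the dissertation's NAT implementation), here is a minimal 1D gamma computation; the profiles and the 2%/2 mm criteria below are hypothetical.

```python
import math

def gamma_1d(measured, calculated, spacing_mm, dose_tol, dta_mm):
    """1D gamma index: each measured point passes (gamma <= 1) if some
    calculated point lies within the combined dose-difference /
    distance-to-agreement acceptance ellipse."""
    gammas = []
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dc in enumerate(calculated):
            dist = (i - j) * spacing_mm              # spatial separation
            g = math.sqrt((dist / dta_mm) ** 2 +
                          ((dm - dc) / dose_tol) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

# Hypothetical normalized dose profiles, 1 mm spacing, 2%/2 mm criteria
gammas = gamma_1d([1.0, 1.05, 0.9], [1.0, 1.0, 1.0],
                  spacing_mm=1.0, dose_tol=0.02, dta_mm=2.0)
```

The "percent of pixels failing the gamma index" mentioned in the abstract is then just the fraction of points with gamma > 1.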
Abstract:
Background. The high prevalence of obesity among children has spurred creation of a list of possible causative factors, including the advertising of foods of minimal nutritional value, a decrease in physical activity, and increased media use. Few studies report prevalence rates of these factors among large cohorts of children. Methods. Using data from the 2004-2005 School Physical Activity and Nutrition (SPAN) project, a secondary analysis of 7907 4th-grade children (mean age 9.74 years) was conducted. In addition, a comic-book-based intervention addressing advertised food consumption, physical activity, and media use was developed and evaluated using a pre-post test design among 4th-grade children in an urban school district. Results. Among a cohort of 4th-grade children across the state of Texas, children who had more than 2 hours of video game or computer time the previous day were more than twice as likely to drink soda and eat candy or pastries. In addition, children who watched more than 2 hours of TV the previous day were more than three times as likely to consume chips, punch, soda, candy, frozen desserts, or pastries (AOR 3.41, 95% CI: 1.58, 7.37). The comic-book-based intervention showed great promise and acceptance among 4th-grade children. Outcome evaluation showed that while results moved in a positive direction, they were not statistically significant. Conclusion. Statistically significant associations were found between screen time and consumption of various types of advertised food. The comic book intervention was widely accepted by the children exposed to it, and pre-post surveys indicated movement of the targeted constructs in a positive direction. Further research is needed to examine more specific ways in which children are exposed to TV, and the relationship of TV viewing time with consumption of advertised foods. In addition, researchers should examine comic book interventions more closely and use them in more studies with longer follow-up times.
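For illustration only: the adjusted odds ratios (AORs) reported above come from regression models, but the underlying measure is the 2x2-table odds ratio sketched here, with a Wald confidence interval and entirely hypothetical counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/90 exposed with/without outcome, 5/95 unexposed
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

An adjusted OR additionally conditions on covariates (e.g. via logistic regression), which is why the study's AORs differ from crude table ratios.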
Abstract:
Cross-sectional designs, longitudinal designs in which a single cohort is followed over time, and mixed-longitudinal designs in which several cohorts are followed for a shorter period are compared by their precision, potential for bias due to age, time, and cohort effects, and feasibility. Mixed-longitudinal studies have two advantages over longitudinal studies: isolation of time and age effects, and shorter completion time. Though the advantages of mixed-longitudinal studies are clear, choosing an optimal design is difficult, especially given the number of possible combinations of the number of cohorts and the number of overlapping intervals between cohorts. The purpose of this paper is to determine the optimal design for detecting differences in group growth rates. The type of mixed-longitudinal study appropriate for modeling both individual and group growth rates is called a "multiple-longitudinal" design. A multiple-longitudinal study typically requires uniform or simultaneous entry of subjects, each of whom is observed until the end of the study. While recommendations for designing pure-longitudinal studies have been made by Schlesselman (1973b), Lefant (1990), and Helms (1991), design recommendations for multiple-longitudinal studies have never been published. It is shown that by using power analyses to determine the minimum number of occasions per cohort and the minimum number of overlapping occasions between cohorts, in conjunction with a cost model, an optimal multiple-longitudinal design can be determined. An example of systolic blood pressure values for cohorts of males and cohorts of females, ages 8 to 18 years, is given.
Abstract:
Mixed longitudinal designs are important study designs for many areas of medical research. Mixed longitudinal studies have several advantages over cross-sectional or pure longitudinal studies, including shorter study completion time and the ability to separate time and age effects, making them an attractive choice. Statistical methodology for general longitudinal studies has developed rapidly within the last few decades. A common approach to statistical modeling in studies with mixed longitudinal designs has been the linear mixed-effects model incorporating an age or time effect. The general linear mixed-effects model is considered an appropriate choice for analyzing repeated measurements in longitudinal studies. However, common use of the linear mixed-effects model in mixed longitudinal studies often incorporates age as the only random effect and fails to account for the cohort effect when conducting statistical inferences on age-related trajectories of outcome measurements. We believe special attention should be paid to cohort effects when analyzing data from mixed longitudinal designs with multiple overlapping cohorts; this has therefore become an important statistical issue to address. This research aims to address statistical issues related to mixed longitudinal studies. The proposed study examined existing statistical analysis methods for mixed longitudinal designs and developed an alternative analytic method to incorporate effects from multiple overlapping cohorts as well as from subjects of different ages. The proposed study used simulation to evaluate the performance of the proposed analytic method by comparing it with the commonly used model. Finally, the study applied the proposed analytic method to data collected by an existing study, Project HeartBeat!, which had been evaluated using traditional analytic techniques. Project HeartBeat! is a longitudinal study of cardiovascular disease (CVD) risk factors in childhood and adolescence using a mixed longitudinal design. The proposed model was used to evaluate four blood lipids adjusting for age, gender, race/ethnicity, and endocrine hormones. The results of this dissertation suggest the proposed analytic model could be a more flexible and reliable choice than the traditional model in terms of fitting the data to provide more accurate estimates in mixed longitudinal studies. Conceptually, the proposed model described in this study has useful features, including consideration of effects from multiple overlapping cohorts, and is an attractive approach for analyzing data from mixed longitudinal design studies.
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome, and investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms of the study, giving a hazard ratio of one. Power of the log-rank test at specific values of the hazard ratio (≠ 1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the Weibull hazard function are also considered to assess the effect of hazard structure on the size and power of the log-rank test.
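A minimal sketch of the size-estimation step described above, assuming exponential survival (Weibull shape = 1), no censoring, simultaneous entry, and illustrative sample sizes; the study's full simulation also modeled staggered accrual via a non-homogeneous Poisson process, which is omitted here.

```python
import math
import random

def weibull(shape, scale):
    """Inverse-CDF draw from a Weibull distribution."""
    return scale * (-math.log(random.random())) ** (1.0 / shape)

def logrank_z(times_a, times_b):
    """Log-rank statistic for two arms, assuming no censoring or ties."""
    events = sorted((t, g) for g, ts in ((0, times_a), (1, times_b))
                    for t in ts)
    n_a, n_b = len(times_a), len(times_b)
    o_minus_e, var = 0.0, 0.0
    for t, g in events:
        n = n_a + n_b
        if n > 1:
            e_a = n_a / n                 # expected events in arm A
            var += n_a * n_b / (n * n)    # hypergeometric variance, 1 event
            o_minus_e += (1 - g) - e_a    # observed minus expected in arm A
        if g == 0:
            n_a -= 1
        else:
            n_b -= 1
    return o_minus_e / math.sqrt(var)

# Size: equal hazards in both arms (hazard ratio = 1), two-sided 5% test
random.seed(1)
n_sim, rejections = 500, 0
for _ in range(n_sim):
    a = [weibull(1.0, 10.0) for _ in range(30)]
    b = [weibull(1.0, 10.0) for _ in range(30)]
    if abs(logrank_z(a, b)) > 1.96:
        rejections += 1
size = rejections / n_sim   # should be close to the nominal 0.05
```

Power at a given hazard ratio follows the same pattern with different scales in the two arms; varying the Weibull shape changes the hazard from constant (shape = 1) to decreasing (< 1) or increasing (> 1).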
Abstract:
OBJECTIVE. To determine the effectiveness of active surveillance cultures and associated infection control practices on the incidence of methicillin-resistant Staphylococcus aureus (MRSA) in the acute care setting. DESIGN. A historical analysis of existing clinical data utilizing an interrupted time series design. SETTING AND PARTICIPANTS. Patients admitted to a 260-bed tertiary care facility in Houston, TX from January 2005 through December 2010. INTERVENTION. Infection control practices, including enhanced barrier precautions, compulsive hand hygiene, disinfection and environmental cleaning, and executive ownership and education, were simultaneously introduced during a 5-month intervention implementation period culminating in the implementation of active surveillance screening. Beginning June 2007, all high-risk patients were cultured for MRSA nasal carriage within 48 hours of admission. Segmented Poisson regression was used to test the significance of the difference in incidence of healthcare-associated MRSA between the 29-month pre-intervention period and the 43-month post-intervention period. RESULTS. A total of 9,957 of 11,095 high-risk patients (89.7%) were screened for MRSA carriage during the intervention period. Active surveillance cultures identified 1,330 MRSA-positive patients (13.4%), contributing to an admission prevalence of 17.5% in high-risk patients. The mean rate of healthcare-associated MRSA infection and colonization decreased from 1.1 per 1,000 patient-days in the pre-intervention period to 0.36 per 1,000 patient-days in the post-intervention period (P < 0.001). Both the intervention and the percentage of S. aureus isolates susceptible to oxacillin were statistically significantly associated with the incidence of MRSA infection and colonization (IRR = 0.50, 95% CI = 0.31-0.80 and IRR = 0.004, 95% CI = 0.00003-0.40, respectively). CONCLUSIONS. Aggressively targeting patients at high risk for MRSA colonization with active surveillance cultures and associated infection control practices, as part of a multifaceted, hospital-wide intervention, is effective in reducing the incidence of healthcare-associated MRSA.
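The incidence-rate comparison above can be illustrated with the simplest special case of segmented Poisson regression: a pure level-change ("step") model with a person-time offset, for which the maximum-likelihood estimate has a closed form. Full segmented models, like the one used in the study, also include secular-trend and slope-change terms fit by maximum likelihood; the counts below are hypothetical.

```python
def level_change_irr(pre_events, pre_days, post_events, post_days):
    """Closed-form MLE for a Poisson level-change model:
    log(rate) = b0 + b2 * post, with a log(person-time) offset.
    Returns (baseline rate, incidence rate ratio)."""
    rate_pre = pre_events / pre_days
    rate_post = post_events / post_days
    return rate_pre, rate_post / rate_pre

# Hypothetical counts: 110 events over 100,000 pre-intervention patient-days,
# 36 events over 100,000 post-intervention patient-days
rate_pre, irr = level_change_irr(110, 100000.0, 36, 100000.0)
```

In this special case the IRR is just the ratio of the two observed rates; the regression framework adds confidence intervals and adjustment for trends and covariates.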
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, short scan times are desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms that reduce undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, VOI-based reconstruction reduced least-squares error by 43% inside the VOI and by 84% throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements, represented by multiple values per variable, which constitute the measured observations that are typically available to end users when they review time series data; these are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished classification accuracy compared with adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
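As one concrete example of the trend-analysis latent features described in these manuscripts, a least-squares slope can be computed over a fixed-duration window of samples; the choice of statistic and the toy vital-sign trace below are illustrative assumptions, not the dissertation's exact feature set.

```python
def trend_slope(values):
    """Least-squares slope of an equally spaced time series segment --
    one example of a 'trend analysis' result used as a latent feature
    instead of (or alongside) the raw point-in-time values."""
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(range(n), values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical falling SpO2-like trace sampled at fixed intervals
slope = trend_slope([120, 118, 115, 111, 108])  # negative: deteriorating
```

A single slope per window collapses many raw samples into one candidate feature, which keeps the feature-to-example ratio manageable and directly encodes deterioration rather than a point-in-time snapshot.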
Abstract:
The purpose of this study was to design, synthesize, and develop novel transporter-targeting agents for image-guided therapy and drug delivery. Two novel agents, N4-guanine (N4amG) and glycopeptide (GP), were synthesized for tumor cell proliferation assessment and as a cancer theranostic platform, respectively. N4amG and GP were synthesized and radiolabeled with 99mTc and 68Ga. The chemical and radiochemical purities as well as the radiochemical stabilities of radiolabeled N4amG and GP were tested. In vitro stability assessment showed both 99mTc-N4amG and 99mTc-GP were stable up to 6 hours, whereas 68Ga-GP was stable up to 2 hours. Cell culture studies confirmed radiolabeled N4amG and GP could penetrate the cell membrane through nucleoside transporters and amino acid transporters, respectively. Up to 40% of intracellular 99mTc-N4amG and 99mTc-GP was found within the cell nucleus following 2 hours of incubation. Flow cytometry analysis revealed 99mTc-N4amG to be a cell cycle S-phase-specific agent. There was a significant difference in the uptake of 99mTc-GP between pre- and post-paclitaxel-treated cells, which suggests that 99mTc-GP may be useful in chemotherapy treatment monitoring. Moreover, radiolabeled N4amG and GP were tested in vivo using tumor-bearing animal models. 99mTc-N4amG showed an increase in tumor-to-muscle count density ratios of up to 5 at 4-hour imaging. Both 99mTc-labeled agents showed decreased tumor uptake after paclitaxel treatment. Immunohistochemistry analysis demonstrated that the uptake of 99mTc-N4amG was correlated with Ki-67 expression. Both 99mTc-N4amG and 99mTc-GP could differentiate between tumor and inflammation in animal studies. Furthermore, 68Ga-GP was compared to 18F-FDG in rabbit PET imaging studies. 68Ga-GP had lower tumor standardized uptake values (SUV), but similar uptake dynamics and different biodistribution compared with 18F-FDG.
Finally, to demonstrate that GP can be a potential drug carrier for cancer theranostics, several drugs, including doxorubicin, were conjugated to GP. Imaging studies demonstrated that tumor uptake of GP-drug conjugates increased as a function of time. GP-doxorubicin (GP-DOX) showed a slow-release pattern in an in vitro cytotoxicity assay and exhibited anti-cancer efficacy with reduced toxicity in an in vivo tumor growth delay study. In conclusion, both N4amG and GP are transporter-based targeting agents. Radiolabeled N4amG can be used for tumor cell proliferation assessment. GP is a potential agent for image-guided therapy and drug delivery.
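For reference, the standardized uptake value (SUV) compared above is conventionally computed as tissue activity concentration normalized by injected dose per unit body weight; the sketch below uses the common body-weight definition with a 1 g/mL tissue-density assumption, and the numbers are hypothetical.

```python
def suv(tissue_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by
    injected dose per gram of body weight (assumes 1 g/mL tissue)."""
    return tissue_bq_per_ml / (injected_dose_bq / body_weight_g)

# Hypothetical: 1 kBq/mL in tissue, 70 MBq injected, 70 kg rabbit-scaled dose
value = suv(1000.0, 70000000.0, 70000.0)
```

An SUV of 1 means uptake equal to a uniform whole-body distribution; tumors with avid transporter-mediated uptake show SUVs well above 1.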
Abstract:
The Phase I clinical trial is considered the "first in human" study in medical research to examine the toxicity of a new agent. It determines the maximum tolerable dose (MTD) of a new agent, i.e., the highest dose at which toxicity is still acceptable. Several phase I clinical trial designs have been proposed in the past 30 years. The well-known standard method, the so-called 3+3 design, is widely accepted by clinicians since it is the easiest to implement and requires no statistical calculation. The continual reassessment method (CRM), a Bayesian design, has risen in popularity over the last two decades, and several variants of the CRM design have been suggested in the statistical literature. Rolling six is a method introduced in pediatric oncology in 2008 that claims to shorten trial duration compared with the 3+3 design. The goal of the present research was to simulate clinical trials and compare these phase I clinical trial designs. The patient population was created by the discrete event simulation (DES) method. The characteristics of the patients were generated from several distributions with parameters derived from a review of historical phase I clinical trial data. Patients were then selected and enrolled in clinical trials, each of which used the 3+3 design, the rolling six, or the CRM design. Five scenarios of dose-toxicity relationship were used to compare the performance of the phase I clinical trial designs. One thousand trials were simulated per design per dose-toxicity scenario. The results showed the rolling six design was not superior to the 3+3 design in terms of trial duration: the time to trial completion was comparable between the two, though both were shorter than the two CRM designs. Both CRMs were superior to the 3+3 design and the rolling six in accuracy of MTD estimation. The 3+3 design and rolling six tended to assign more patients to undesirably low dose levels, while toxicities were slightly greater in the CRMs.
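The 3+3 escalation rule compared above can be sketched as follows, using the common textbook convention (escalate on 0/3 dose-limiting toxicities, expand to 6 on 1/3, declare the MTD one level below the first dose with >= 2 DLTs); the toxicity probabilities below are illustrative, not the study's five scenarios.

```python
import random

def run_3plus3(true_tox, seed=None):
    """Simulate one 3+3 trial over dose levels with the given true DLT
    probabilities. Returns the MTD level index, or -1 if even the lowest
    level is too toxic."""
    rng = random.Random(seed)
    level = 0
    while level < len(true_tox):
        dlt = sum(rng.random() < true_tox[level] for _ in range(3))
        if dlt == 0:
            level += 1                     # 0/3: escalate
        elif dlt == 1:
            dlt += sum(rng.random() < true_tox[level] for _ in range(3))
            if dlt <= 1:
                level += 1                 # 1/6: escalate
            else:
                return level - 1           # >=2/6: MTD is one level below
        else:
            return level - 1               # >=2/3: MTD is one level below
    return len(true_tox) - 1               # all levels cleared

# Hypothetical dose-toxicity scenario with four levels
mtd = run_3plus3([0.05, 0.10, 0.25, 0.50], seed=42)
```

Repeating this over many seeded trials, and timing cohort accrual, is how duration and MTD-accuracy comparisons like those above are built.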
Abstract:
A phase I clinical trial is mainly designed to determine the maximum tolerated dose (MTD) of a new drug. Optimizing phase I trial design is crucial to minimize the number of enrolled patients exposed to unsafe dose levels and to provide reliable information to the later phases of clinical trials. Although it has been criticized for inefficient MTD estimation, the traditional 3+3 method remains dominant in practice due to its simplicity and conservative estimation. Many new designs have been shown to generate more credible MTD estimates, such as the continual reassessment method (CRM). Despite its accepted better performance, the CRM design is still not widely used in real trials, for several reasons. First, CRM is not widely accepted by regulatory agencies such as the FDA in terms of safety: it is considered less conservative and tends to expose more patients to doses above the MTD than the traditional design. Second, CRM is relatively complex and not intuitive for clinicians to fully understand. Third, the CRM method takes much more time and requires statistical experts and computer programs throughout the trial. The current situation is that clinicians still tend to follow the trial process they are comfortable with, and this is not likely to change in the near future. This motivated us to improve the accuracy of MTD selection while following the procedure of the traditional design to maintain simplicity. We found that in the 3+3 method, the dose transition and the MTD determination are relatively independent, so we proposed to separate the two stages. The dose transition rule remained the same as in the 3+3 method. After obtaining the toxicity information from the dose transition stage, we applied an isotonic transformation to enforce the monotonically increasing order before selecting the optimal MTD. To compare the operating characteristics of the proposed isotonic method with other designs, we carried out 10,000 simulated trials under different dose-setting scenarios, comparing the design characteristics of the isotonic modified method with the standard 3+3 method, CRM, the biased coin design (BC), and the k-in-a-row design (KIAW). The isotonic modified method improved on the MTD estimation of the standard 3+3 in 39 of 40 scenarios, with much greater improvement when the target toxicity level was 0.3 rather than 0.25. The modified design was also competitive with the other selected methods: a CRM method performed better in general but was not as stable as the isotonic method across the different dose settings. The results demonstrate that our proposed isotonic modified method is not only easily conducted using the same procedure as the 3+3 but also outperforms the conventional 3+3 design, and it can be applied to determine the MTD for any given target toxicity level (TTL). These features make the isotonic modified method of practical value in phase I clinical trials.
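The isotonic transformation described above is typically computed with the pool-adjacent-violators algorithm (PAVA); here is a minimal sketch, with hypothetical per-level DLT rates and a hypothetical target toxicity level of 0.25.

```python
def pava(rates, weights):
    """Pool-adjacent-violators: weighted monotone non-decreasing fit.
    Pools adjacent dose levels whenever an observed DLT rate decreases,
    restoring the monotone dose-toxicity order assumed by the design."""
    merged = []  # each block: [pooled rate, total weight, levels covered]
    for r, w in zip(rates, weights):
        merged.append([r, w, 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            r2, w2, n2 = merged.pop()
            r1, w1, n1 = merged.pop()
            merged.append([(r1 * w1 + r2 * w2) / (w1 + w2),
                           w1 + w2, n1 + n2])
    out = []
    for r, _, n in merged:
        out.extend([r] * n)
    return out

# Hypothetical observed DLT rates at four dose levels (3 patients each);
# level 2's observed rate dips below level 1's, violating monotonicity
iso = pava([0.0, 0.5, 0.2, 0.6], [3, 3, 3, 3])

# Select the level whose isotonic estimate is closest to the target TTL
ttl = 0.25
mtd = min(range(len(iso)), key=lambda i: abs(iso[i] - ttl))
```

Because the dose-transition stage is the unchanged 3+3 procedure, only this final selection step differs from the conventional design.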