12 results for Single-subject design
in DigitalCommons@The Texas Medical Center
Abstract:
People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in a user's mind, and external information, represented by the interface of a tool, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays; participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory was adopted as a theoretical framework for analyzing and predicting the search performance over relational data. The results show that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. In particular, the more external representations are used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
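As a concrete reading of the experimental design above, the sketch below enumerates the task/display conditions as a full factorial crossing. The labels come from the abstract; treating the crossing as fully factorial is an assumption, since the abstract describes the combinations only informally.

```python
from itertools import product

# Labels from the abstract; the full-factorial crossing is assumed.
task_scales = ["nominal", "ordinal", "interval", "ratio"]
displays = ["table", "graph"]
dimensions = ["one-dimensional", "two-dimensional"]

conditions = list(product(dimensions, task_scales, displays))
for dim, scale, disp in conditions:
    print(f"{dim} {scale} search over a {disp} display")
print(len(conditions), "within-subject conditions per participant")
```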
Abstract:
Background. A review of the literature suggests that hypertension (HTN) in older adults is associated with sympathetic stimulation that results in increased blood pressure (BP) reactivity. If clinical assessment of BP captured sympathetic stimulation, it would be valuable for hypertension management. Objectives. The study examined whether reactive change scores from a short blood pressure reactivity (BPR) protocol, resting BP, or resting pulse pressure (PP) better predicts 24-hour ambulatory BP (ABP) and BP load in cardiac patients. Method. The study used a single-group design, with both an experimental clinical component and an observational field component. Both components used repeated measurement methods. The study population consisted of 45 adult patients with a mean age of 64.6 ± 8.5 years who were diagnosed with cardiac disease and who were taking anti-hypertensive medication. Blood pressure reactivity was operationalized with a speech protocol. During the speech protocol, BP was measured with an automatic device (Dinamap 825XT) while subjects talked about their health and about their usual day. Twenty-four-hour ambulatory BP measurement (Spacelabs 90207 monitor) followed the speech protocol. Results. Resting systolic BP (SBP) and resting PP were significant predictors of 24-hour SBP, and resting SBP was a significant predictor of SBP load. No predictors of 24-hour diastolic BP (DBP) or DBP load were significant. Conclusions. Initial resting BP and PP may be used in clinical settings to assess hypertension management. Future studies are necessary to confirm the ability of resting BP to predict ABP and BP load in older, medicated hypertensives.
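A minimal sketch of the predictor comparison described above, run on synthetic data: each candidate predictor is regressed against 24-hour SBP one at a time. All numbers are illustrative assumptions, not the study's measurements or its exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in data (n = 45, as in the study); values are
# illustrative assumptions only.
n = 45
resting_sbp = rng.normal(135, 15, n)
resting_pp = rng.normal(55, 10, n)
reactive_change = rng.normal(10, 8, n)
ambulatory_sbp = 0.7 * resting_sbp + rng.normal(40, 8, n)

# Compare candidate predictors of 24-hour SBP one at a time, as a
# simple-regression analogue of the study's question.
for name, x in [("resting SBP", resting_sbp),
                ("resting PP", resting_pp),
                ("reactive change", reactive_change)]:
    fit = stats.linregress(x, ambulatory_sbp)
    print(f"{name}: slope={fit.slope:.2f}, r^2={fit.rvalue**2:.2f}, "
          f"p={fit.pvalue:.3f}")
```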
Abstract:
OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we also used semantic measures to group queries into sessions and studied the addition and removal of terms across consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that one large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
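As a rough illustration of this kind of query-log analysis, the sketch below groups a toy log into sessions and computes terms per query. The study grouped sessions with semantic measures; the 30-minute time-gap rule and the log format here are simplifying assumptions.

```python
from collections import defaultdict

# Hypothetical log format: (user_id, timestamp_seconds, query_string).
# The study used semantic measures to delimit sessions; a time-gap
# rule is used here as a simplified stand-in.
SESSION_GAP = 30 * 60  # seconds

def sessionize(log):
    """Group each user's queries into time-ordered sessions."""
    by_user = defaultdict(list)
    for user, ts, query in log:
        by_user[user].append((ts, query))
    sessions = []
    for events in by_user.values():
        events.sort()
        current, last_ts = [], None
        for ts, query in events:
            if last_ts is not None and ts - last_ts > SESSION_GAP:
                sessions.append(current)
                current = []
            current.append(query)
            last_ts = ts
        if current:
            sessions.append(current)
    return sessions

log = [
    ("u1", 0, "breast cancer BRCA1"),
    ("u1", 120, "breast cancer BRCA1 prognosis"),
    ("u2", 60, "asthma"),
]
sessions = sessionize(log)
terms_per_query = [len(q.split()) for s in sessions for q in s]
print(len(sessions), "sessions;",
      sum(terms_per_query) / len(terms_per_query), "mean terms/query")
```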
Abstract:
A wealth of genetic associations for cardiovascular and metabolic phenotypes in humans has accumulated over the last decade, in particular a large number of loci derived from recent genome-wide association studies (GWAS). True complex disease-associated loci often exert modest effects, so their delineation currently requires integration of diverse phenotypic data from large studies to ensure robust meta-analyses. We have designed a gene-centric 50K single nucleotide polymorphism (SNP) array to assess potentially relevant loci across a range of cardiovascular, metabolic, and inflammatory syndromes. The array utilizes a "cosmopolitan" tagging approach to capture the genetic diversity across approximately 2,000 loci in populations represented in the HapMap and SeattleSNPs projects. The array content is informed by GWAS of vascular and inflammatory disease, expression quantitative trait loci implicated in atherosclerosis, pathway-based approaches, and comprehensive literature searching. The custom flexibility of the array platform facilitated interrogation of loci at differing stringencies, according to a gene prioritization strategy that allows saturation of high-priority loci with a greater density of markers than existing GWAS tools, particularly in African HapMap samples. We also demonstrate that the IBC array can be used to complement GWAS, increasing coverage in high-priority CVD-related loci across all major HapMap populations. DNA from over 200,000 extensively phenotyped individuals will be genotyped with this array, with a significant portion of the generated data being released into the academic domain, facilitating in silico replication attempts, analyses of rare variants, and cross-cohort meta-analyses in diverse populations. These datasets will also facilitate more robust secondary analyses, such as explorations of alternative genetic models, epistasis, and gene-environment interactions.
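The "cosmopolitan" tagging approach is not specified in detail here, so the sketch below shows the classic greedy tag-SNP selection over a pairwise r² matrix as a simplified stand-in; the toy LD values and the 0.8 threshold are assumptions.

```python
import numpy as np

def greedy_tag_snps(r2, threshold=0.8):
    """Greedy tag-SNP selection over a pairwise r^2 LD matrix.

    Repeatedly picks the SNP that covers the most not-yet-tagged SNPs
    (r^2 >= threshold; a SNP always tags itself). A simplified stand-in
    for cross-population tagging, which would apply a rule like this
    per population and pool the resulting tags.
    """
    n = r2.shape[0]
    covered = np.zeros(n, dtype=bool)
    tags = []
    while not covered.all():
        gain = ((r2 >= threshold) & ~covered).sum(axis=1)
        best = int(np.argmax(gain))
        tags.append(best)
        covered |= r2[best] >= threshold
    return tags

# Toy LD matrix for 4 SNPs: SNPs 0-2 form one block, SNP 3 is independent.
r2 = np.array([
    [1.0, 0.9, 0.85, 0.1],
    [0.9, 1.0, 0.95, 0.1],
    [0.85, 0.95, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
])
print(greedy_tag_snps(r2))  # one tag for the block, one for SNP 3
```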
Abstract:
Treatment for cancer often involves combination therapies, used both in medical practice and in clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism, 2) differential susceptibility of tumor cells to different agents, and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even when the toxicity profile of each individual agent is known, the toxicity profile of the agents used in combination must still be established. Thus, caution is required when designing and evaluating trials of combination therapies. Traditional clinical design is based on the consideration of a single drug; a trial of drugs in combination requires a dose-selection procedure vastly different from that needed for a single-drug trial. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that includes high doses of drug A with lower doses of drug B, another for high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows efficient identification of more than one MTD within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway, which is fundamental for tumor growth and metastasis. One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia-inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus reduces levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate, and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled "A Parallel Phase I/II Clinical Trial Design for Combination Therapies." His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating greater numbers of dose levels for each drug. Additional dose levels are being examined because experience from phase I trials shows that targeted agents given in combination are often effective at dose levels below the FDA-approved doses of those drugs. A total of thirteen dose levels, including representative high, intermediate, and low dose levels of temsirolimus combined with representative high, intermediate, and low dose levels of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate identification of more than one MTD, if multiple MTDs exist, efficiently and within the context of a single protocol. Doses gleaned from this approach could allow a more personalized approach to dose selection from among the MTDs obtained, based upon a patient's specific co-morbid conditions or anticipated toxicities.
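To make the claim about multiple MTDs concrete, the sketch below places assumed DLT probabilities on a two-drug dose grid and reports every admissible combination that no other admissible combination dominates in both doses; each such maximal combination is an MTD in the sense defined above. The probabilities and the 0.33 target are illustrative assumptions, not the trial's model.

```python
import numpy as np

# Hypothetical DLT probabilities over a grid of temsirolimus (rows) and
# bevacizumab (columns) dose levels; all values are assumptions.
p_dlt = np.array([
    [0.05, 0.10, 0.20, 0.35],
    [0.10, 0.18, 0.30, 0.45],
    [0.22, 0.31, 0.42, 0.60],
    [0.36, 0.48, 0.58, 0.72],
])
TARGET = 0.33  # roughly the 2-of-6 DLT threshold defining the MTD

admissible = p_dlt <= TARGET
mtds = []
for i in range(p_dlt.shape[0]):
    for j in range(p_dlt.shape[1]):
        if not admissible[i, j]:
            continue
        # An MTD is an admissible combination that no other admissible
        # combination dominates in both dose levels.
        dominated = any(
            admissible[k, l] and k >= i and l >= j and (k, l) != (i, j)
            for k in range(p_dlt.shape[0])
            for l in range(p_dlt.shape[1])
        )
        if not dominated:
            mtds.append((i, j))
print(mtds)  # several maximal combinations -> more than one MTD
```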
Abstract:
Early Employee Assistance Programs (EAPs) had their origin in humanitarian motives, and there was little concern for their cost/benefit ratios; however, as some programs began accumulating data and analyzing it over time, even with single variables such as absenteeism, it became apparent that the humanitarian reasons for a program could be reinforced by cost savings, particularly when the existence of the program was subject to justification. Today there is general agreement that cost/benefit analyses of EAPs are desirable, but specific models for such analyses, particularly those making use of sophisticated yet simple computer-based data management systems, are few. The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting information about EAPs. This scheme provides information retrieval and analyses relevant to such aspects of EAP operations as: (1) EAP personnel activities, (2) supervisory training effectiveness, (3) client population demographics, (4) assessment and referral effectiveness, (5) treatment network efficacy, and (6) economic worth of the EAP. The scheme has been implemented and operational at The University of Texas Employee Assistance Programs for more than three years. Application of the scheme in the various programs has identified certain variables that remain necessary in all programs. Depending on the degree of aggressiveness in data acquisition maintained by program personnel, other program-specific variables are also defined.
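As one way to picture the scheme's data management side, the sketch below defines a hypothetical minimal case record covering the six analysis areas listed above, plus a crude absenteeism-based savings calculation. Field names and the day-cost figure are illustrative assumptions, not the variables of the Texas implementation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical minimal record layout; fields are illustrative only.
@dataclass
class EapCase:
    case_id: int
    opened: date
    referral_source: str             # e.g. supervisor, self, union
    client_age: int                  # demographics (aspect 3)
    client_job_class: str
    assessed_problem: str            # assessment and referral (aspect 4)
    referred_to: str                 # treatment network (aspect 5)
    counselor_hours: float           # EAP personnel activity (aspect 1)
    absenteeism_days_before: float   # inputs to economic worth (aspect 6)
    absenteeism_days_after: float

def absenteeism_saving(cases, day_cost=150.0):
    """Crude cost-savings estimate from before/after absenteeism."""
    saved = sum(c.absenteeism_days_before - c.absenteeism_days_after
                for c in cases)
    return saved * day_cost

c = EapCase(1, date(2024, 1, 5), "supervisor", 42, "clerical",
            "alcohol", "outpatient counseling", 3.5, 12.0, 4.0)
print(absenteeism_saving([c]))
```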
Abstract:
The use of intensity-modulated radiotherapy (IMRT) necessitates a significant amount of patient-specific quality assurance (QA). This research investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verification, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, the length of time between exposure and processing, and phantom material. The precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2 mm accuracy for single-beam fluence map verifications and to 5%/2 mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and the percentage of pixels failing the gamma index were exponentially distributed and depended on the measurement phantom but not the treatment site. Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
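The abstract scores clinical verifications partly by the percentage of pixels failing the gamma index. The NAT index is specific to this work, but the gamma index is the standard published dose comparison metric (dose difference combined with distance to agreement), so a brute-force 2D version can be sketched; the toy dose distributions, grid spacing, and 2%/2 mm criteria below are assumptions for illustration.

```python
import numpy as np

def gamma_index(ref, meas, spacing_mm, dd_pct=2.0, dta_mm=2.0):
    """Brute-force 2D gamma index; gamma <= 1 passes.

    ref, meas: 2D dose arrays on the same grid; spacing_mm: pixel size.
    dd_pct is relative to the maximum reference dose (global normalization).
    """
    dd = dd_pct / 100.0 * ref.max()
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    gamma = np.empty_like(ref, dtype=float)
    for i in range(ny):
        for j in range(nx):
            # Search all reference points for the best combined agreement.
            dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing_mm ** 2
            dose2 = (ref - meas[i, j]) ** 2
            gamma[i, j] = np.sqrt(np.min(dist2 / dta_mm**2 + dose2 / dd**2))
    return gamma

ref = np.outer(np.hanning(32), np.hanning(32))   # toy reference dose
meas = ref * 1.03                                 # 3% measurement offset
g = gamma_index(ref, meas, spacing_mm=1.0)
print("fail rate:", float((g > 1.0).mean()))
```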
Abstract:
Cross-sectional designs, longitudinal designs in which a single cohort is followed over time, and mixed-longitudinal designs in which several cohorts are followed for a shorter period are compared with respect to precision, potential for bias due to age, time, and cohort effects, and feasibility. Mixed-longitudinal studies have two advantages over longitudinal studies: isolation of time and age effects, and shorter completion time. Though the advantages of mixed-longitudinal studies are clear, choosing an optimal design is difficult, especially given the number of possible combinations of the number of cohorts and the number of overlapping intervals between cohorts. The purpose of this paper is to determine the optimal design for detecting differences in group growth rates. The type of mixed-longitudinal study appropriate for modeling both individual and group growth rates is called a "multiple-longitudinal" design. A multiple-longitudinal study typically requires uniform or simultaneous entry of subjects, who are each observed until the end of the study. While recommendations for designing pure-longitudinal studies have been made by Schlesselman (1973b), Lefant (1990), and Helms (1991), design recommendations for multiple-longitudinal studies have never been published. It is shown that power analyses determining the minimum number of occasions per cohort and the minimum number of overlapping occasions between cohorts, used in conjunction with a cost model, yield an optimal multiple-longitudinal design. An example of systolic blood pressure values for cohorts of males and cohorts of females, ages 8 to 18 years, is given.
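A simulation-based version of the power analysis described above can make the "minimum number of occasions" question concrete. The sketch below fits a per-subject OLS slope and compares group mean growth rates with a t-test; the linear growth model, effect size, and variance components are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n_per_group, occasions, slope_diff,
                   sd_slope=0.5, sd_noise=2.0, n_sim=500, alpha=0.05):
    """Power to detect a group difference in linear growth rates.

    Fits an OLS slope per subject across measurement occasions, then
    compares group mean slopes with a two-sample t-test.
    """
    t = np.arange(occasions, dtype=float)
    hits = 0
    for _ in range(n_sim):
        slopes = []
        for mean_slope in (0.0, slope_diff):
            b = rng.normal(mean_slope, sd_slope, n_per_group)
            y = b[:, None] * t + rng.normal(0, sd_noise, (n_per_group, occasions))
            slopes.append(np.polyfit(t, y.T, 1)[0])  # per-subject slopes
        if stats.ttest_ind(slopes[0], slopes[1]).pvalue < alpha:
            hits += 1
    return hits / n_sim

# e.g. scan occasions per cohort for ~80% power at a fixed effect size
for k in (3, 5, 8):
    print(k, simulate_power(n_per_group=30, occasions=k, slope_diff=0.4))
```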
Abstract:
Objective. This research study had two goals: (1) to describe resource consumption patterns for Medi-Cal children with cystic fibrosis (CF), and (2) to explore the feasibility, from a rate design perspective, of developing specialized managed care plans for such a special needs population. Background. Children with special health care needs (CSHN) comprise about 2% of the California Medicaid pediatric population. CSHN have rare but serious health problems, such as cystic fibrosis. Medicaid programs, including Medi-Cal, are enrolling more and more beneficiaries in managed care to control costs. CSHN, however, do not fit the wellness model underlying most managed care plans. Child health advocates believe that both efficiency and quality will suffer if CSHN are removed from regionalized special care centers and scattered among general purpose plans. They believe that CSHN should be "carved out" from enrollment in general plans. One alternative is the Specialized Managed Care Plan, tailored for CSHN. Methods. The study population consisted of children under age 21 with CF who were eligible for Medi-Cal and the California Children's Services (CCS) program during 1991. Health Care Financing Administration (HCFA) Medicaid Tape-to-Tape data were analyzed as part of a California Children's Hospital Association (CCHA) project. Results. Mean Medi-Cal expenditures per month enrolled were $2,302 for the 457 CF children, compared to about $1,270 for all 47,000 CCS special needs children and roughly $60 for almost 2.6 million "regular needs" children. For CF children, inpatient care (80%) and outpatient drugs (9%) were the major cost drivers, with all outpatient visits comprising only 2% of expenditures. About one-third of CF children were eligible due to AFDC (Aid to Families with Dependent Children). Age group explained about 17% of all expenditure variation. Regression analysis was used to select the best capitation rate structure (rate cells by age and eligibility group). Sensitivity analysis estimated moderate financial risk for a statewide plan (360 enrollees) but severe risk for single-county implementation due to small numbers of children. Conclusions. Study results support the carve-out of CSHN due to their unique expenditure patterns. The Specialized Managed Care Plan concept appears feasible from a rate design perspective given sufficient enrollees.
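The sensitivity-analysis finding that a 360-enrollee statewide pool carries moderate risk while single-county pools carry severe risk is essentially a small-numbers effect, which a Monte Carlo sketch can illustrate. The lognormal cost model and its spread are assumptions; only the roughly $2,302 monthly mean comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def capitation_risk(n_enrollees, n_sim=10_000):
    """Monte Carlo sketch of plan-level risk versus enrollment size.

    Draws skewed per-child monthly costs (lognormal, centered near the
    $2,302 mean reported for CF children; the spread is an assumption)
    and returns the coefficient of variation of the plan's average
    cost. Small pools -> high variation -> severe financial risk.
    """
    mu, sigma = np.log(2302) - 0.5 * 1.0**2, 1.0  # lognormal mean ~= $2,302
    means = rng.lognormal(mu, sigma, (n_sim, n_enrollees)).mean(axis=1)
    return means.std() / means.mean()

for n in (360, 40):   # statewide pool vs. a hypothetical small county
    print(n, round(capitation_risk(n), 3))
```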
Abstract:
High-resolution, small-bore PET systems suffer from a tradeoff between system sensitivity and image quality. In these systems, long crystals are necessary for high system sensitivity, but they permit mispositioning of the line of response due to parallax error, and this mispositioning blurs resolution. One means of allowing long crystals without introducing parallax errors is to determine the depth of interaction (DOI) of the gamma-ray interaction within the detector module. While DOI has been investigated previously, newly available solid-state photomultipliers (SSPMs) are well suited to PET applications and enable new modules for investigation. Depth of interaction in full modules is a relatively new field, so even where high-performance DOI-capable modules are available, appropriate means to characterize and calibrate those modules are not. This work presents an investigation of DOI-capable arrays and techniques for characterizing and calibrating those modules. The methods introduced here accurately and reliably characterize and calibrate energy, timing, and event interaction positioning. Also presented are a characterization of the spatial resolution of DOI-capable modules and a measurement of DOI effects for different angles between detector modules. These arrays have been built into a prototype PET system that delivers better than 2.0 mm resolution with a single-sided stopping power in excess of 95% for 511 keV gamma rays. The noise properties of SSPMs scale with the active area of the detector face, so the best signal-to-noise ratio is achieved with parallel readout of each SSPM photodetector pixel rather than by multiplexing signals together. This work additionally investigates several algorithms for improving timing performance using timing information from multiple SSPM pixels when light is distributed among several photodetectors.
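The abstract does not state how DOI or multi-pixel timing are computed, so the sketch below shows two common textbook approaches as stand-ins: DOI from a dual-ended light-sharing ratio, and an amplitude-weighted combination of per-pixel timestamps. The function names, the linear light-sharing model, and all numbers are illustrative assumptions.

```python
import numpy as np

def doi_from_ratio(front_signal, back_signal, crystal_len_mm=20.0):
    """Estimate depth of interaction from a dual-ended light-sharing ratio.

    Assumes the fraction of light reaching the front photodetector
    falls off linearly with depth from the front face; a real module
    would calibrate this against a measured lookup table.
    """
    r = front_signal / (front_signal + back_signal)
    return (1.0 - r) * crystal_len_mm  # depth from the front face

def weighted_timestamp(times_ns, amplitudes):
    """Combine per-pixel timestamps, weighting by signal amplitude.

    With light shared over several SSPM pixels, an amplitude-weighted
    average can improve on a single-pixel time estimate; these weights
    are a simple assumption, not an optimized estimator.
    """
    w = np.asarray(amplitudes, dtype=float)
    return float(np.average(np.asarray(times_ns, dtype=float), weights=w))

print(doi_from_ratio(320.0, 480.0))   # more light at the back -> deeper
print(weighted_timestamp([1.10, 1.18, 1.25], [900, 300, 120]))
```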
Abstract:
The development of targeted therapy involves many challenges. Our study addresses some of the key issues in biomarker identification and clinical trial design. We propose two biomarker selection methods and then apply them in two different clinical trial designs for targeted therapy development. In particular, we propose a Bayesian two-step lasso procedure for biomarker selection in the proportional hazards model in Chapter 2. In the first step of this strategy, we use the Bayesian group lasso to identify the important marker groups, wherein each group contains the main effect of a single marker and its interactions with treatments. In the second step, we zoom in to select each individual marker and the interactions between markers and treatments, in order to identify prognostic or predictive markers using the Bayesian adaptive lasso. In Chapter 3, we propose a Bayesian two-stage adaptive design for targeted therapy development that implements the variable selection method given in Chapter 2. In Chapter 4, we propose an alternative frequentist adaptive randomization strategy for situations where a large number of biomarkers must be incorporated in the study design. We also propose a new adaptive randomization rule that takes into account the variation associated with the point estimates of survival times. In all of our designs, we seek to identify the key markers that are either prognostic or predictive with respect to treatment. We use extensive simulations to evaluate the operating characteristics of our methods.
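The abstract does not give the form of its randomization rule, so the sketch below shows one generic way an outcome-adaptive rule can discount noisy point estimates of survival; the shrinkage form and all numbers are assumptions, not the dissertation's proposal.

```python
import numpy as np

def randomization_probs(surv_est, surv_se, shrink=1.0):
    """Illustrative outcome-adaptive randomization for two or more arms.

    Allocates in proportion to estimated survival, but discounts arms
    with noisy estimates by shrinking each point estimate toward the
    overall mean in proportion to its standard error. A generic sketch,
    not the specific rule proposed in the dissertation.
    """
    est = np.asarray(surv_est, dtype=float)
    se = np.asarray(surv_se, dtype=float)
    w = 1.0 / (1.0 + shrink * se)             # precision-style discount
    score = w * est + (1.0 - w) * est.mean()  # shrink noisy arms to the mean
    return score / score.sum()

# Two arms: similar point estimates, very different uncertainty.
print(randomization_probs([14.0, 18.0], [1.0, 6.0]))
```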
Abstract:
We designed and synthesized a novel daunorubicin (DNR) analogue that effectively circumvents P-glycoprotein (P-gp)-mediated drug resistance. The fully protected carbohydrate intermediate 1,2-dibromoacosamine was prepared from acosamine and effectively coupled to daunomycinone in high yield. Deprotection under alkaline conditions yielded 2′-bromo-4′-epidaunorubicin (WP401). The in vitro cytotoxicity and the cellular and molecular pharmacology of WP401 were compared with those of DNR in a panel of wild-type cell lines (KB-3-1, P388S, and HL60S) and their multidrug-resistant (MDR) counterparts (KB-V1, P388/DOX, and HL60/DOX). Fluorescence spectrophotometry, flow cytometry, and confocal laser scanning microscopy were used to measure intracellular accumulation, retention, and subcellular distribution of these agents. All MDR cell lines exhibited reduced DNR uptake that was restored, upon incubation with either verapamil (VER) or cyclosporin A (CSA), to the level found in the sensitive cell lines. In contrast, the uptake of WP401 was essentially the same in the absence or presence of VER or CSA in all tested cell lines. The in vitro cytotoxicity of WP401 was similar to that of DNR in the sensitive cell lines but significantly higher in the resistant cell lines (resistance index (RI) of 2-6 for WP401 vs. 75-85 for DNR). To ascertain whether drug-mediated cytotoxicity and retention were accompanied by DNA strand breaks, DNA single- and double-strand breaks were assessed by alkaline elution. High levels of such breaks were obtained using 0.1-2 µg/mL of WP401 in both sensitive and resistant cells. In contrast, DNR caused strand breaks in sensitive cells but few in resistant cells. We also compared drug-induced DNA fragmentation: in sensitive cells, WP401 induced fragmentation similar to that induced by DNR; however, in P-gp-positive cells, WP401 induced 2- to 5-fold more DNA fragmentation than DNR. This increased DNA strand breakage by WP401 correlated with its increased uptake and cytotoxicity in these cell lines. Overall, these results indicate that WP401 is more cytotoxic than DNR in MDR cells and that this phenomenon might be related to the reduced basicity of the amino group and the increased lipophilicity of WP401.