4 results for accounting model design
in DigitalCommons@The Texas Medical Center
Abstract:
Objective: To determine how a clinician's background knowledge, their tasks, and displays of information interact to affect the clinician's mental model. Design: Repeated measures nested experimental design. Population, Sample, Setting: The populations were gastrointestinal/internal medicine physicians and nurses in the greater Houston area; a purposeful sample of 24 physicians and 24 nurses was studied in 2003. Methods: Subjects were randomized to two different displays of two different mock medical records: one that contained highlighted patient information and one that contained non-highlighted patient information. They were asked to read the records and summarize their understanding of the patients aloud. Propositional analysis was used to characterize their comprehension of the patients. Findings: Physicians and nurses formed different mental models when given the same display of information; the information shared across their models was small compared to the variance between them. There was also more variance within the nursing mental models than within the physician mental models when different displays of the same information were given. Statistically, there was no interaction effect between display of information and clinician type; only clinician type accounted for the differences in clinician comprehension and thus in their mental models of the cases. Conclusion: The factors that may explain the variance within and between the clinician models are clinician type and, in the nursing group only, the use of highlighting.
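As an illustration of the kind of interaction test reported above, a two-way analysis of variance can ask whether display and clinician type interact in their effect on a comprehension score. The sketch below is a hypothetical illustration only; the data frame, column names, and scores are invented and do not reproduce the dissertation's propositional-analysis scoring or its actual statistical model.

```python
# Hypothetical sketch: testing a display x clinician-type interaction on a
# comprehension score. All names and values below are invented for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One comprehension score per subject, with the subject's clinician type and
# the display condition they were randomized to.
comprehension_df = pd.DataFrame({
    "prop_score": [42, 38, 51, 47, 30, 28, 35, 33],
    "clinician":  ["physician"] * 4 + ["nurse"] * 4,
    "display":    ["highlighted", "plain"] * 4,
})

# Two-way ANOVA: main effects of clinician type and display plus their interaction.
model = ols("prop_score ~ C(clinician) * C(display)", data=comprehension_df).fit()
print(sm.stats.anova_lm(model, typ=2))
```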
Abstract:
Objective. To evaluate the HEADS UP Virtual Molecular Biology Lab, a computer-based simulated laboratory designed to teach advanced high school biology students how to create a mouse model. Design. A randomized controlled design with forty-four students from two science magnet high schools in Mercedes, Texas was used to assess knowledge and skills in molecular laboratory procedures, attitudes toward science and toward computers as a learning tool, and usability of the program. Measurements. Data were collected using five paper-and-pencil questionnaires and an internal "lab notebook." Results. The Virtual Lab significantly increased student knowledge over time (p < 0.005) and with each use (p < 0.001), and it also improved attitudes toward computers (p < 0.001) and skills (p < 0.005). No significant differences were seen in science attitude scores. Conclusion. These results provide evidence that the HEADS UP Virtual Molecular Biology Lab is a potentially effective educational tool for high school molecular biology education.
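The reported gains in knowledge with each use would typically be checked with a paired comparison of pre- and post-use scores. The following is a minimal sketch of such a comparison; the score values are invented, and the dissertation's actual instruments and repeated-measures models are not reproduced here.

```python
# Hypothetical sketch: paired comparison of knowledge scores before and after
# a session with the Virtual Lab. The score values are invented for illustration.
from scipy import stats

pre_scores  = [12, 15, 9, 14, 11, 13, 10, 16]
post_scores = [18, 19, 14, 20, 15, 17, 13, 21]

# Paired t-test on the within-student change in knowledge score.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```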
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. Several core concepts are presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrest, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.
Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase, including selecting the proper candidate features on which to base the model and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.
In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm, are widely employed in a host of clinical models and tools, and are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The remaining two classes are unique to the time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations that are typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis; this class represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
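As a concrete illustration of producing a time series analysis result to serve as a latent candidate feature, the sketch below resamples a raw vital-sign series to a fixed resolution over a fixed duration and computes a least-squares slope over that window. The 60-minute duration, 1-minute resolution, heart-rate values, and choice of slope are assumptions made for illustration; the manuscript leaves the specific operations to the modeler.

```python
# Illustrative sketch: deriving a trend-analysis latent feature from raw time
# series observations. The duration, resolution, values, and slope statistic
# are assumed choices for illustration only.
import numpy as np
import pandas as pd

# Raw data elements: irregularly timed heart-rate observations ("dots on a graph").
raw = pd.Series(
    [128.0, 131.0, 127.0, 122.0, 118.0],
    index=pd.to_datetime([
        "2023-01-01 10:02", "2023-01-01 10:17", "2023-01-01 10:29",
        "2023-01-01 10:44", "2023-01-01 10:58",
    ]),
)

# Fix the resolution (1 minute) and impute values onto that grid,
# then keep the chosen duration (the most recent 60 minutes).
grid = raw.resample("1min").mean().interpolate(method="time")
window = grid[grid.index >= grid.index.max() - pd.Timedelta("60min")]

# One explicit mathematical operation producing a latent candidate feature:
# the slope (per minute) of a least-squares line fit over the window.
minutes = np.arange(len(window))
trend_feature = np.polyfit(minutes, window.to_numpy(), 1)[0]
print(f"trend feature (beats/min per minute): {trend_feature:.3f}")
```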
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data and to build and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after the complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) raise issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.
The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%; the area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond what was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
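A minimal sketch of the preprocessing issues named above, under assumed choices: each case is anchored at a reference time, the preceding time series is reduced by imputation to a predefined fixed-length structure, and discrimination is summarized with the area under the ROC curve. The 30-minute window, 5-minute resolution, forward-fill imputation, logistic regression model, and all data are invented for illustration; this is not the dissertation's seventeen-step pipeline or its reported results.

```python
# Illustrative sketch: reference-time alignment, imputation/reduction to a fixed
# structure, and ROC-AUC evaluation. All data and modeling choices are assumed.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def reduce_case(series: pd.Series, reference_time: pd.Timestamp) -> np.ndarray:
    """Slice the 30 minutes preceding the reference time, resample to a 5-minute
    resolution, impute gaps by filling, and return a fixed-length vector."""
    window = series[(series.index > reference_time - pd.Timedelta("30min"))
                    & (series.index <= reference_time)]
    grid = window.resample("5min").mean().ffill().bfill()
    structure = pd.date_range(reference_time - pd.Timedelta("25min"),
                              reference_time, freq="5min")
    return grid.reindex(structure).ffill().bfill().to_numpy()

def make_case(is_arrest: bool) -> np.ndarray:
    """Synthesize one case; arrest cases drift downward before the reference time."""
    ref = pd.Timestamp("2023-01-01 12:00")
    times = pd.date_range(ref - pd.Timedelta("30min"), ref, freq="3min")
    drift = -0.8 if is_arrest else 0.0
    values = 95 + drift * np.arange(len(times)) + rng.normal(0, 1.5, len(times))
    return reduce_case(pd.Series(values, index=times), ref)

labels = np.array([i % 3 == 0 for i in range(60)], dtype=int)  # ~1/3 arrest cases
X = np.vstack([make_case(bool(lbl)) for lbl in labels])

# Fit on the first 40 cases, evaluate discrimination on the remaining 20.
train, test = slice(0, 40), slice(40, 60)
clf = LogisticRegression(max_iter=1000).fit(X[train], labels[train])
print("AUC:", roc_auc_score(labels[test], clf.predict_proba(X[test])[:, 1]))
```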
Abstract:
Conventional designs of animal bioassays allocate the same number of animals to the control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purpose of such bioassays is (a) to determine whether or not the substance exhibits carcinogenic properties and (b) if so, to estimate the human response at relatively low doses. In this study, it was found that the optimal allocation to the experimental groups which, in some sense, minimizes the error of the estimated response for low-dose extrapolation is associated with the dose level and tumor risk. The number of dose levels was investigated under the affordable experimental cost. The administered dose pattern of 1 MTD, 1/2 MTD, 1/4 MTD, ..., plus a control gives the most reasonable arrangement for the low-dose extrapolation purpose. An arrangement of five dose groups may render the highest dose trivial; a four-dose design circumvents this problem and also leaves one degree of freedom for testing the goodness of fit of the response model. An example using the data on liver tumors induced in mice in a lifetime study of dieldrin feeding (Walker et al., 1973) is implemented with the methodology, and the results are compared with conclusions drawn from other studies.
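To illustrate the arithmetic behind the four-group design, the sketch below fits a three-parameter multistage-type dose-response model to hypothetical tumor counts at control, 1/4 MTD, 1/2 MTD, and 1 MTD, leaving 4 - 3 = 1 degree of freedom for a goodness-of-fit test, and then extrapolates the risk at a low dose. The incidence numbers, the specific model form, and the extrapolation dose are invented for illustration; they are not the dieldrin data or the dissertation's estimates.

```python
# Hypothetical sketch: fitting a three-parameter model
# P(d) = 1 - exp(-(b0 + b1*d + b2*d^2)) to four dose groups (control, 1/4,
# 1/2, and 1 MTD) by binomial maximum likelihood, then extrapolating to a
# low dose. All counts below are invented, not the dieldrin data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

doses   = np.array([0.0, 0.25, 0.5, 1.0])   # fraction of MTD
animals = np.array([50, 50, 50, 50])        # animals per group (hypothetical)
tumors  = np.array([2, 6, 14, 30])          # tumor-bearing animals (hypothetical)

def prob(beta, d):
    b0, b1, b2 = beta
    return 1.0 - np.exp(-(b0 + b1 * d + b2 * d ** 2))

def neg_log_lik(beta):
    p = np.clip(prob(beta, doses), 1e-9, 1 - 1e-9)
    return -np.sum(tumors * np.log(p) + (animals - tumors) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.05, 0.5, 0.5], method="L-BFGS-B",
               bounds=[(0, None)] * 3)

# With 4 dose groups and 3 parameters, 1 degree of freedom remains for a
# Pearson chi-square goodness-of-fit test of the response model.
p_hat = prob(fit.x, doses)
expected = animals * p_hat
pearson = np.sum((tumors - expected) ** 2 / (expected * (1 - p_hat)))
print("goodness-of-fit p-value:", 1 - chi2.cdf(pearson, df=1))

# Low-dose extrapolation: estimated extra risk at 1/1000 of the MTD.
d_low = 1e-3
extra_risk = (prob(fit.x, d_low) - prob(fit.x, 0.0)) / (1 - prob(fit.x, 0.0))
print("estimated extra risk at d = 0.001 MTD:", extra_risk)
```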