971 results for statistical science
Abstract:
The purpose of this study was to evaluate the effect of cooperative learning strategies on students' attitudes toward science and achievement in BSC 1005L, a non-science majors' general biology laboratory course at an urban community college. Data on the participants' attitudes toward science and cognitive biology level were gathered before and after the treatment in BSC 1005L. Elements of the Learning Together model developed by Johnson and Johnson and the Student Team-Achievement Divisions model created by Slavin were incorporated into the experimental sections of BSC 1005L. Four sections of BSC 1005L participated in this study. Participants were enrolled in the 1998 spring (January) term. Students met weekly in a two-hour laboratory session. The treatment was administered to the experimental group over a ten-week period. A quasi-experimental pretest-posttest control group design was used. Students in the cooperative learning group (n₁ = 27) were administered the Test of Science-Related Attitudes (TOSRA) and the cognitive biology test at the same time as the control group (n₂ = 19), at the beginning and end of the term. Statistical analyses confirmed that both groups were equivalent regarding ethnicity, gender, college grade point average, and number of absences. Independent-sample t-tests performed on pretest mean scores indicated no significant differences in TOSRA scale two or biology knowledge between the cooperative learning group and the control group. Scores on TOSRA scales one, three, four, five, six, and seven were significantly lower in the cooperative learning group. Independent-sample t-tests of the mean score differences did not show any significant differences in posttest attitudes toward science or biology knowledge between the two groups. Paired t-tests did not indicate any significant differences on the TOSRA or biology knowledge within the cooperative learning group. Paired t-tests did show significant differences within the control group on TOSRA scale two and biology knowledge. ANCOVAs did not indicate any significant differences on the posttest mean scores of the TOSRA or biology knowledge adjusted for differences in the pretest mean scores. Analysis of the research data did not show any significant correlation between attitudes toward science and biology knowledge.
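The analyses described above (baseline-equivalence t-tests followed by ANCOVA with pretest scores as covariate) follow a standard pattern. The sketch below illustrates that pattern on synthetic data with scipy and statsmodels; it is not the study's actual analysis, and the scores are invented.

```python
# Illustrative ANCOVA on synthetic data: compare posttest scores between a
# cooperative-learning group and a control group while adjusting for pretest
# scores. Group sizes mirror the abstract (27 vs. 19); the scores are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n1, n2 = 27, 19
pre = np.concatenate([rng.normal(50, 10, n1), rng.normal(50, 10, n2)])
post = pre + np.concatenate([rng.normal(2, 5, n1), rng.normal(2, 5, n2)])
group = ["coop"] * n1 + ["control"] * n2
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Independent-samples t-test on pretest means (baseline equivalence check).
t, p = stats.ttest_ind(df.loc[df.group == "coop", "pre"],
                       df.loc[df.group == "control", "pre"])
print(f"pretest t = {t:.2f}, p = {p:.3f}")

# ANCOVA: posttest regressed on group with pretest as covariate.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.summary().tables[1])
```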
Abstract:
The contextual demands of language in the content areas are difficult for English language learners (ELLs). Content in the native language furthers students' academic development and native-language skills while they are learning English. Content in English integrates pedagogical strategies for English acquisition with subject-area instruction. The following models of curriculum content are provided in most Miami-Dade County Public Schools: (a) mathematics instruction in the native language with science instruction in English or (b) science instruction in the native language with mathematics instruction in English. The purpose of this study was to investigate which model of instruction is more contextually supportive for mathematics and science achievement. A pretest and posttest, nonequivalent group design was used with 94 fifth-grade ELLs who received instruction in curriculum model (a) or (b). This allowed for statistical analysis capable of detecting a difference in means of .5 standard deviations with a power of .80 at the .05 level of significance. Pretreatment and post-treatment assessments of mathematics, reading, and science achievement were obtained through the administration of the Aprenda-Segunda Edición and the Florida Comprehensive Assessment Test. The results indicated that students receiving mathematics in English and science in Spanish scored higher on achievement tests in both mathematics and science than the students who received mathematics in Spanish and science in English. In addition, the mean score of students on the FCAT mathematics examination was higher than their mean score on the FCAT science examination regardless of the language of instruction.
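The power statement above corresponds to a standard two-sample power calculation. The following is a minimal sketch of such a calculation with statsmodels, shown only for illustration; the study's own computation (e.g., one- versus two-sided testing, group ratio) is not specified in the abstract.

```python
# Illustrative power calculation for a two-group comparison: effect size of
# 0.5 standard deviations, alpha = .05, target power = .80. Solves for the
# required per-group sample size under a two-sided independent-samples t-test;
# the study's own assumptions may have differed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"required n per group: {n_per_group:.1f}")
```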
Abstract:
The microarray technology provides a high-throughput technique to study gene expression. Microarrays can help us diagnose different types of cancers, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data. Thus, effective data processing and analysis are critical for making reliable inferences from the data. The first part of the dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify a set of genes that best differentiate between samples. A comparative study of different classification tools was performed and the best combinations of gene selection methods and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed other gene selection methods. The classifiers based on Random Forests, neural network ensembles, and K-nearest neighbors (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior. The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples. In such situations, it is important to combine results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (the Fisher inverse chi-square method, the Logit method, Stouffer's Z-transform method, and the Liptak-Stouffer weighted Z-method) were investigated in this dissertation. These pooling techniques were applied to the problem of identifying cell cycle-regulated genes in two different yeast species. As a result, improved sets of cell cycle-regulated genes were identified. The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
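The pooling techniques named here combine per-gene p-values from independent experiments. A minimal illustrative sketch of two of them (Fisher's inverse chi-square and the weighted Stouffer Z) using scipy follows, with invented p-values rather than the dissertation's data.

```python
# Combine p-values for one gene measured in three independent experiments,
# using two of the pooling methods named in the abstract. Values are invented.
import numpy as np
from scipy import stats

p_values = np.array([0.04, 0.20, 0.01])

# Fisher's inverse chi-square method: -2 * sum(log p) ~ chi-square with 2k df.
fisher_stat, fisher_p = stats.combine_pvalues(p_values, method="fisher")

# Stouffer's Z-transform method, optionally weighted (Liptak-Stouffer).
weights = np.array([1.0, 1.0, 2.0])  # e.g., proportional to sample size
stouffer_stat, stouffer_p = stats.combine_pvalues(p_values, method="stouffer",
                                                  weights=weights)

print(f"Fisher:   stat = {fisher_stat:.2f}, pooled p = {fisher_p:.4f}")
print(f"Stouffer: stat = {stouffer_stat:.2f}, pooled p = {stouffer_p:.4f}")
```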
Abstract:
Poor informational reading and writing skills in early grades and the need to provide students more experience with informational text have been identified by research as areas of concern. Wilkinson and Son (2011) support future research in dialogic approaches to investigate the impact dialogic teaching has on comprehension. This study (N = 39) examined the gains in reading comprehension, science achievement, and metacognitive functioning of individual second-grade students interacting with instructors using dialogue journals alongside their textbook. The 38-week study consisted of two instructional phases and three assessment points. After a period of oral metacognitive strategy instruction, one class, taught by two teachers following the co-teaching method, formed the treatment group (n = 17), and two classes formed the comparison group (n = 22). The dialogue journal intervention for the treatment group embraced the transactional theory of instruction through the use of dialogic interaction between teachers and students. Students took notes on the assigned lesson after an oral discussion. Teachers responded to students' entries with scaffolding using reading strategies (prior knowledge, skim, slow down, mental integration, and diagrams) modeled after Schraw's (1998) strategy evaluation matrix, to enhance students' comprehension. The comparison group utilized text-based, teacher-led whole-group discussion. Data were collected using different measures: (a) Florida Assessments for Instruction in Reading (FAIR) Broad Diagnostic Inventory; (b) Scott Foresman end-of-chapter tests; (c) Metacomprehension Strategy Index (Schmitt, 1990); and (d) a researcher-made metacognitive scaffolding rubric. Statistical analyses were performed using paired-sample t-tests, regression analysis of covariance, and two-way analysis of covariance. Findings from the study revealed that experimental participants performed significantly better on the linear combination of reading comprehension, science achievement, and metacognitive function than their comparison-group counterparts while controlling for pretest scores. Overall, results from the study established that teacher scaffolding using metacognitive strategies can potentially develop students' reading comprehension, science achievement, and metacognitive awareness. This suggests that early childhood students gain from the integration of reading and writing when using authentic materials (science textbooks) in science classrooms. A replication of this study with more students across more schools and different grade levels would improve the generalizability of these results.
Abstract:
Research endeavors on spoken dialogue systems in the 1990s and 2000s have led to the deployment of commercial spoken dialogue systems (SDS) in microdomains such as customer service automation, reservation/booking, and question answering systems. Recent research in SDS has focused on the development of applications in different domains (e.g. virtual counseling, personal coaches, social companions) which require more sophistication than the previous generation of commercial SDS. The focus of this research project is the delivery of behavior change interventions based on the brief intervention counseling style via spoken dialogue systems. Brief interventions (BI) are evidence-based, short, well-structured, one-on-one counseling sessions. Many challenges are involved in delivering BIs to people in need, such as finding the time to administer them in busy doctors' offices, obtaining the extra training that helps staff become comfortable providing these interventions, and managing the cost of delivering the interventions. Fortunately, recent developments in spoken dialogue systems make the development of systems that can deliver brief interventions possible. The overall objective of this research is to develop a data-driven, adaptable dialogue system for brief interventions for problematic drinking behavior, based on reinforcement learning methods. The implications of this research project include, but are not limited to, assessing the feasibility of delivering structured brief health interventions with a data-driven spoken dialogue system. Furthermore, while the experimental system focuses on harmful alcohol drinking as a target behavior in this project, the produced knowledge and experience may also lead to the implementation of similarly structured health interventions and assessments in domains other than alcohol (e.g. obesity, drug use, lack of exercise), using statistical machine learning approaches. In addition to designing a dialogue system, the semantic and emotional meanings of user utterances have a high impact on the interaction. To perform domain-specific reasoning and recognize concepts in user utterances, a named-entity recognizer and an ontology are designed and evaluated. To understand affective information conveyed through text, lexicons and a sentiment analysis module are developed and tested.
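As a rough illustration of what a reinforcement-learning-driven dialogue policy involves, the toy sketch below performs a single tabular Q-learning update over hypothetical dialogue states and counseling actions; none of these state or action names come from the project described above, and the project's actual learning method is not reproduced here.

```python
# A toy tabular Q-learning update for selecting dialogue actions, included
# only to illustrate the general idea of a data-driven dialogue policy; the
# states, actions, and rewards are hypothetical, not taken from the project.
import random
from collections import defaultdict

states = ["greeting", "assess_drinking", "give_feedback", "closing"]
actions = ["ask_open_question", "reflect", "summarize", "end_session"]
q_table = defaultdict(float)          # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning backup."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next
                                         - q_table[(state, action)])

# Example transition: the user responds positively after an open question.
update("assess_drinking", "ask_open_question", reward=1.0,
       next_state="give_feedback")
print(choose_action("assess_drinking"))
```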
Abstract:
A review of the literature reveals that little research has attempted to demonstrate whether a relationship exists between the type of teacher training a science teacher has received and the perceived attitudes of his/her students. Considering that a great deal of time and energy has been devoted by university colleges, school districts, and educators to refining the teacher education process, it would be more efficient for all parties involved if research were available that could discern whether certain pathways to that education promote certain teacher behaviors in the classroom while other pathways lead to different behaviors. Some of the teacher preparation factors examined in this study include the college major chosen by the science teacher, the highest degree earned, the number of years of teaching experience, the type of science course taught, and the grade level taught by the teacher. This study examined how these factors could influence the behaviors that are characteristic of the teacher, and how these behaviors could be reflected in the classroom environment experienced by the students. The instrument used in the study was the Classroom Environment Scale (CES), Real Form. The measured classroom environment was broken down into three separate dimensions, with three components within each dimension in the CES. Multiple regression analyses examined how components of the teachers' education influenced the students' perceptions of the dimensions of the classroom environment. The study occurred in Miami-Dade County, Florida, with a predominantly urban high school student population. There were 40 secondary science teachers involved, each with an average of 30 students; the total number of students sampled in the study was 1200. The teachers who participated in the study taught the entire range of secondary science courses offered at this large school district. All teachers were selected by the researcher so that the sample was balanced between teachers who were education majors and teachers who were science majors. Additionally, the researcher selected teachers so that a balance occurred with regard to the different levels of college degrees earned among those involved in the study. Several research questions sought to determine if there was a significant difference between the type of educational background obtained by secondary science teachers and the students' perception of the classroom environment. Other research questions sought to determine if there were significant differences in the students' perceptions of the classroom environment for secondary science teachers who taught biological or non-biological content sciences. An additional research question sought to evaluate whether the grade level taught would affect the students' perception of the classroom environment. Multiple regression analyses were run for each of four scores from the CES, Real Form. For Score 1, involvement of students, the results showed that teachers with the highest number of years of experience, with master's or master's-plus degrees, who were education majors, and who taught twelfth-grade students had more students who were attentive and interested in class activities, participated in discussions, and did additional work on their own, as compared with teachers who had less experience, a bachelor's degree, were science majors, and taught a grade lower than twelfth.
For Score 2, task orientation, which emphasized completing the required activities and staying on task, the results showed that teachers with the highest and intermediate levels of experience, a science major, and the highest college degree had higher scores compared with teachers with less experience, an education major, and a bachelor's degree. For Score 3, competition, which indicated how difficult it was to achieve high grades in the class, the results showed that teachers who taught non-biology content subjects had the greatest effect on the regression; teachers with a master's degree, low levels of experience, and who taught twelfth-grade students were also factored into the regression equation. For Score 4, innovation, which indicated the extent to which the teachers used new and innovative techniques to encourage diverse and creative thinking, an education major was the first entry into the regression equation; teachers with the least experience (0 to 3 years) and teachers who taught twelfth- and eleventh-grade students were also included in the regression equation.
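A minimal sketch of the kind of regression just described, fitting one classroom-environment score on teacher-background predictors with statsmodels; the data frame is synthetic and the variable names are hypothetical, not the study's.

```python
# Illustrative multiple regression of a classroom-environment score on teacher
# background factors, in the spirit of the analysis described above. The data
# are synthetic and the column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_teachers = 40
df = pd.DataFrame({
    "involvement": rng.normal(6, 2, n_teachers),          # CES Score 1 stand-in
    "years_exp": rng.integers(0, 25, n_teachers),
    "major": rng.choice(["education", "science"], n_teachers),
    "degree": rng.choice(["bachelors", "masters"], n_teachers),
    "grade": rng.choice([9, 10, 11, 12], n_teachers),
})

model = smf.ols(
    "involvement ~ years_exp + C(major) + C(degree) + C(grade)", data=df
).fit()
print(model.summary())
```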
Abstract:
A certain type of bacterial inclusion, known as a bacterial microcompartment, was recently identified and imaged through cryo-electron tomography. A 3D object reconstructed from single-axis, limited-angle tilt-series cryo-electron tomography contains missing regions; this is known as the missing wedge problem. Because of the missing regions in the reconstructed images, analyzing the objects' 3D structures is challenging. Existing methods overcome this problem by aligning and averaging several similarly shaped objects. These schemes work well if the objects are symmetric and several objects with almost similar shapes and sizes are available. Since the bacterial inclusions studied here are not symmetric, are deformed, and show a wide range of shapes and sizes, the existing approaches are not appropriate. This research develops new statistical methods for analyzing geometric properties, such as volume, symmetry, aspect ratio, and polyhedral structure, of these bacterial inclusions in the presence of missing data. These methods work with deformed, non-symmetric objects of varied shape and do not require multiple objects for handling the missing wedge problem. The developed methods and contributions include: (a) an improved method for manual image segmentation, (b) a new approach to 'complete' the segmented and reconstructed incomplete 3D images, (c) a polyhedral structural distance model to predict the polyhedral shapes of these microstructures, (d) a new shape descriptor for polyhedral shapes, named the polyhedron profile statistic, and (e) Bayes, linear discriminant analysis, and support vector machine classifiers for supervised classification of incomplete polyhedral shapes. Finally, the predicted 3D shapes for these bacterial microstructures belong to the Johnson solids family, and these shapes, along with their other geometric properties, are important for a better understanding of their chemical and biological characteristics.
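The supervised classification step in (e) compares several standard classifier families. The sketch below runs that kind of comparison on synthetic shape-descriptor features with scikit-learn; it stands in for, rather than reproduces, the dissertation's descriptors and data.

```python
# Illustrative comparison of the classifier families named in the abstract
# (Bayes, linear discriminant analysis, SVM) on synthetic shape-descriptor
# feature vectors; this is not the dissertation's descriptor or data set.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in: 200 "shapes", 12 descriptor features, 3 shape classes.
X, y = make_classification(n_samples=200, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)

for name, clf in [("Gaussian naive Bayes", GaussianNB()),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF)", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```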
Abstract:
Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans. This radiotracer provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed as a linear convolution between the arterial time-course and the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how this relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined, and the improvement of this model relative to the compartmental model is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics generated in comparing the models is also studied. We explore these models further with simulated studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
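The convolution relationship described above can be made concrete with a short numerical sketch: below, a hypothetical arterial input is convolved with a single-exponential residue to produce a tissue time-course. The exponential residue illustrates only the compartmental special case, not the nonparametric estimator developed in the thesis, and all rate constants are invented.

```python
# Minimal sketch of the convolution model: the tissue time-activity curve is
# the arterial input convolved with a residue (survival) function, scaled by
# flow. All values here are illustrative.
import numpy as np

dt = 1.0                                   # sampling interval (seconds)
t = np.arange(0, 3600, dt)                 # one-hour scan

# Hypothetical arterial input function: rapid bolus followed by washout.
arterial = (t / 30.0) * np.exp(-t / 30.0)

# One-compartment residue: fraction of delivered tracer still in tissue.
flow, k = 0.01, 0.002                      # illustrative rate constants
residue = np.exp(-k * t)

# Tissue time-course = flow * (arterial input convolved with residue).
tissue = flow * np.convolve(arterial, residue)[: len(t)] * dt

# Volume of distribution = flow * integral of the residue function.
v_d = flow * residue.sum() * dt
print(f"volume of distribution (illustrative): {v_d:.2f}")
```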
Abstract:
With the growing pressure of eutrophication in tropical regions, the Mauritian shelf provides a natural situation in which to understand the variability in mesotrophic assemblages. Site-specific dynamics occur throughout the 1200 m depth gradient. The shallow assemblages divide into three types of warm-water mesotrophic foraminiferal assemblages, which is not only a consequence of high primary productivity restricting light to the benthos but also of low pore-water oxygenation, shelf geomorphology, and sediment partitioning. At intermediate depth (approx. 500 m), the increase in foraminiferal diversity is due to the cold-water coral habitat providing a greater range of micro-niches. Planktonic species characterise the lower bathyal zone, which emphasises the reduced benthic carbonate production at depth. However, due to the strong hydrodynamics within the Gulf, planktonic species occur in notable abundance throughout the whole depth gradient. Overall, this study can readily be compared to other tropical marine settings investigating the long-term effects of tropical eutrophication and the biogeographic distribution of carbonate-producing organisms.
Abstract:
Metacognition is the understanding and control of cognitive processes. Students with high levels of metacognition achieve greater academic success. The purpose of this mixed-methods study was to examine elementary teachers’ beliefs about metacognition and integration of metacognitive practices in science. Forty-four teachers were recruited through professional networks to complete a questionnaire containing open-ended questions (n = 44) and Likert-type items (n = 41). Five respondents were selected to complete semi-structured interviews informed by the questionnaire. The selected interview participants had a minimum of three years teaching experience and demonstrated a conceptual understanding of metacognition. Statistical tests (Pearson correlation, t-tests, and multiple regression) on quantitative data and thematic analysis of qualitative data indicated that teachers largely understood metacognition but had some gaps in their understanding. Participants’ reported actions (teaching practices) and beliefs differed according to their years of experience but not gender. Hierarchical multiple regression demonstrated that the first block of gender and experience was not a significant predictor of teachers' metacognitive actions, although experience was a significant predictor by itself. Experience was not a significant predictor once teachers' beliefs were added. The majority of participants indicated that metacognition was indeed appropriate for elementary students. Participants consistently reiterated that students’ metacognition developed with practice, but required explicit instruction. A lack of consensus remained around the domain specificity of metacognition. More specifically, the majority of questionnaire respondents indicated that metacognitive strategies could not be used across subject domains, whereas all interviewees indicated that they used strategies across subjects. Metacognition was integrated frequently into Ontario elementary classrooms; however, metacognition was integrated less frequently in science lessons. Lastly, participants used a variety of techniques to integrate metacognition into their classrooms. Implications for practice include the need for more professional development aimed at integrating metacognition into science lessons at both the Primary and Junior levels. Further, teachers could benefit from additional clarification on the three main components of metacognition and the need to integrate all three to successfully develop students’ metacognition.
Abstract:
Collecting data via a questionnaire and analyzing them while preserving respondents’ privacy may increase the number of respondents and the truthfulness of their responses. It may also reduce the systematic differences between respondents and non-respondents. In this paper, we propose a privacy-preserving method for collecting and analyzing survey responses using secure multi-party computation (SMC). The method is secure under the semi-honest adversarial model. The proposed method computes a wide variety of statistics. Total and stratified statistical counts are computed using the secure protocols developed in this paper. Then, additional statistics, such as a contingency table, a chi-square test, an odds ratio, and logistic regression, are computed within the R statistical environment using the statistical counts as building blocks. The method was evaluated on a questionnaire dataset of 3,158 respondents sampled for a medical study and simulated questionnaire datasets of up to 50,000 respondents. The computation time for the statistical analyses scales linearly with the number of respondents. The results show that the method is efficient and scalable for practical use. It can also be used for other applications in which categorical data are collected.
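Once the secure protocols have produced aggregate counts, the downstream statistics need nothing beyond those counts. As an illustration (in Python with scipy rather than the paper's R environment, and with invented counts), a chi-square test and odds ratio can be computed directly from a 2×2 contingency table:

```python
# Illustrative downstream analysis from aggregate counts only: given a 2x2
# contingency table produced by secure counting, a chi-square test and an odds
# ratio require nothing beyond the counts themselves. The counts are invented.
import numpy as np
from scipy import stats

# Rows: exposed / not exposed; columns: outcome present / absent.
table = np.array([[30, 70],
                  [15, 85]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Odds ratio, with a Fisher exact p-value as a small-sample check.
odds_ratio, fisher_p = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {fisher_p:.4f}")
```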
Abstract:
Shape-based registration problems are frequently encountered in the domains of computer vision, image processing, and medical imaging. The registration problem is to find an optimal transformation/mapping between sets of rigid or nonrigid objects and to automatically solve for correspondences. In this paper we present a comparison of two different probabilistic methods, entropy and the growing neural gas network (GNG), as general feature-based registration algorithms. With entropy, shape modelling is performed by connecting the points with the highest probability of curvature information, while with GNG the point sets are connected using nearest-neighbour relationships derived from competitive Hebbian learning. In order to compare performances we use different levels of shape deformation, starting with a simple shape (2D MRI brain ventricles) and moving to more complicated shapes like hands. Quantitative and qualitative results are given for both sets.
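Both approaches ultimately rely on establishing point correspondences. The toy sketch below computes nearest-neighbour correspondences between two synthetic 2D point sets with scipy's KD-tree; it is not the entropy or GNG implementation compared in the paper.

```python
# Toy nearest-neighbour correspondence between two 2D point sets, the basic
# building block that feature-based registration methods refine; the point
# sets here are synthetic and the snippet is not the paper's code.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
source = rng.random((100, 2))                      # moving point set
target = source + rng.normal(0, 0.01, (100, 2))    # slightly deformed copy

tree = cKDTree(target)
distances, indices = tree.query(source, k=1)       # closest target point each
print(f"mean correspondence distance: {distances.mean():.4f}")
```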
Abstract:
Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data that are gathered, and outline the challenges in extracting information. Common strategies employed are very similar to industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry. Speaker biography: Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Abstract not available