881 results for large course design


Relevance:

30.00%

Publisher:

Abstract:

In an effort to improve instruction and better accommodate the needs of students, community colleges are offering courses in a variety of delivery formats that require students to have some level of technology fluency to be successful. This study investigated the effects of student socioeconomic status (SES), course delivery method, and course type on enrollment, final course grades, course completion status, and course passing status at a state college. A dataset of 20,456 students of low and not-low SES enrolled in science, technology, engineering, and mathematics (STEM) courses delivered in traditional, online, blended, and web-enhanced formats at Miami Dade College, a large open-access 4-year state college located in Miami-Dade County, Florida, was analyzed. A factorial ANOVA using course type, course delivery method, and student SES found no significant differences in final course grades when it was used to determine whether course delivery methods were equally effective for students of low and not-low SES taking STEM courses. Additionally, three chi-square goodness-of-fit tests were used to investigate differences in enrollment, course completion, and course passing status by SES, course type, and course delivery method. The chi-square tests indicated that (a) there were significant differences in enrollment by SES and course delivery method for the Engineering/Technology, Math, and overall course types but not for the Natural Science course type, and (b) there were no significant differences in course completion status or course passing status by SES and course type overall or by SES and course delivery method overall. However, there were statistically significant but weak relationships between course passing status, SES, and the math course type, as well as between course passing status, SES, and the online and traditional course delivery methods. The mixed findings indicate that strides have been made in closing the theoretical gap in education and technology skills that may exist for students of different SES levels. MDC's course delivery and student support models may assist other institutions in addressing student success in courses that require some level of technology fluency.
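
To make the analytic design concrete, the sketch below shows, under stated assumptions, how a three-factor ANOVA on final grades and a chi-square goodness-of-fit test on enrollment might be set up with statsmodels and SciPy. The file name and column names (ses, delivery, course_type, final_grade) are hypothetical placeholders, not the study's data or code.

```python
# Illustrative sketch only, not the study's analysis code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("stem_enrollment.csv")  # hypothetical file

# Factorial ANOVA: SES x delivery method x course type on final course grade.
model = smf.ols("final_grade ~ C(ses) * C(delivery) * C(course_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square goodness-of-fit: enrollment of low-SES students by delivery method,
# compared against proportions expected from the overall enrollment distribution.
expected_prop = df["delivery"].value_counts(normalize=True).sort_index()
observed = (df.loc[df["ses"] == "low", "delivery"]
              .value_counts().reindex(expected_prop.index, fill_value=0))
chi2, p = stats.chisquare(observed, f_exp=expected_prop * observed.sum())
print(f"chi2={chi2:.2f}, p={p:.4f}")
```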

Relevance:

30.00%

Publisher:

Abstract:

Understanding the language of one's cultural environment is important for effective communication and functioning. As such, students entering U.S. schools from foreign countries are given access to English to Speakers of Other Languages (ESOL) programs and are referred to as English Language Learner (ELL) students. This dissertation examined the correlation of ELL ACCESS Composite Performance Level (CPL) scores with the End of Course Tests (EOCTs) and the Georgia High School Graduation Tests (GHSGTs) in the four content courses (language arts, mathematics, science, and social studies). A premise of this study was that English language proficiency is critical in meeting or exceeding state and county assessment standards. A quantitative descriptive research design was used with cross-sectional archival data from a secondary source. There were 148 participants from school years 2011-2012 to 2013-2014 in Grades 9-12. A Pearson product-moment correlation was run to assess the relationship between the ACCESS CPL (independent variable) and the EOCT and GHSGT scores (dependent variables). The findings showed a positive correlation between ACCESS CPL scores and EOCT scores, with language arts showing a strong positive correlation and mathematics a weak positive correlation. There was also a positive correlation between ACCESS CPL scores and GHSGT scores, with language arts showing a weak positive correlation. The results indicated that there is a relationship among the stated variables (ACCESS CPL, EOCT, and GHSGT) and that there were positive correlations of varying degrees at each grade level. While the null hypotheses for Research Questions 1 and 2 were rejected, the relationships between the variables were slight.
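
A minimal sketch of the core correlational step described above, using SciPy's Pearson correlation; the score arrays are invented placeholders, not the dissertation's data.

```python
# Toy illustration of the Pearson product-moment correlation step.
import numpy as np
from scipy import stats

access_cpl = np.array([3.8, 4.2, 5.0, 2.9, 4.6, 3.1])  # hypothetical ACCESS CPL scores
eoct_la = np.array([410, 425, 462, 388, 447, 395])      # hypothetical Language Arts EOCT scores

r, p = stats.pearsonr(access_cpl, eoct_la)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # sign and magnitude indicate direction and strength
```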

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to evaluate the incidence of corrosion and fretting in 48 retrieved titanium-6aluminum-4vanadium and/or cobalt-chromium-molybdenum modular total hip prostheses with respect to alloy microstructure and design parameters. The results revealed vastly different performance across the wide array of microstructures examined. Severe corrosion/fretting was seen in 100% of as-cast, 24% of low-carbon wrought, 9% of high-carbon wrought, and 5% of solution heat-treated cobalt-chrome components. Severe corrosion/fretting was observed in 60% of Ti-6Al-4V components. Design features that allow fluid entry and stagnation, amplification of contact pressure, and/or increased micromotion were also shown to play a role: 75% of prostheses with a high femoral head-trunnion offset exhibited poor performance, compared with 15% of those with a low offset. Large femoral heads (>32 mm) did not exhibit poor corrosion or fretting. Implantation time alone was not sufficient to cause poor performance; 54% of prostheses with more than 10 years in vivo demonstrated no or only mild corrosion/fretting.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to determine hope's unique role, if any, in predicting persistence in a developmental writing course. Perceived academic self-efficacy was also included as a variable of interest for comparison because self-efficacy has been more widely studied than hope in terms of its non-cognitive role in predicting academic outcomes. A significant body of research indicates that self-efficacy influences academic motivation to persist and academic performance. Hope, however, is an emerging psychological construct in the study of non-cognitive factors that influence college outcomes and warrants further exploration in higher education. This study examined the predictive value of hope and self-efficacy for persistence in a developmental writing course. The research sample was obtained from a community college in the southeastern United States. Participants were 238 students enrolled in developmental writing courses during their first year of college. Participants were given a questionnaire that included measures of perceived academic self-efficacy and hope. The self-efficacy scale asked participants to self-report on their beliefs about how they cope with different academic tasks in order to be successful. The hope scale asked students to self-report on their beliefs about their capability to initiate action toward a goal ("agency") and to create a plan to attain those goals ("pathways"). The study used a correlational research design. A statistical association was estimated between hope and self-efficacy, as well as the unique variance each contributed to course persistence. Correlational analysis confirmed a significant relationship between hope and perceived academic self-efficacy, and a Fisher's z-transformation confirmed a stronger relationship with perceived academic self-efficacy for the agency component of hope than for the pathways component. A series of multinomial logistic regression analyses was conducted to assess whether (a) perceived self-efficacy and hope predict course persistence, (b) hope independent of self-efficacy predicts course persistence, and (c) the interaction of perceived self-efficacy and hope predicts course persistence. Hope was found to be a significant predictor only independent of self-efficacy. Implications for future research are drawn for those who lead and coordinate academic support initiatives in student and academic affairs.
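
The sketch below illustrates, under flagged assumptions, the two analytic steps named above: a Fisher r-to-z comparison of two correlations (simplified here by treating them as independent) and a multinomial logistic regression of persistence on hope and self-efficacy. The file, column names, and correlation values are hypothetical, not the study's.

```python
# Rough sketch, not the study's code; all names and values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fisher_z_diff(r1, r2, n1, n2):
    """Compare two correlations via Fisher's r-to-z transformation
    (treating them as independent; a simplification of the study's test)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

z_stat = fisher_z_diff(r1=0.55, r2=0.35, n1=238, n2=238)  # illustrative r values
print(f"z = {z_stat:.2f}")

# Multinomial logistic regression: persistence category ~ hope + self-efficacy.
df = pd.read_csv("persistence.csv")  # hypothetical file with categorical 'persistence'
X = sm.add_constant(df[["hope", "self_efficacy"]])
fit = sm.MNLogit(df["persistence"], X).fit()
print(fit.summary())
```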

Relevance:

30.00%

Publisher:

Abstract:

This study reports one of the first controlled investigations to examine the impact of a school-based positive youth development program (Lerner, Fisher, & Weinberg, 2000) on promoting qualitative change in life course experiences as a positive intervention outcome. The study built on a recently proposed relational developmental methodological metanarrative (Overton, 1998) and advances in the use of qualitative research methods (Denzin & Lincoln, 2000). The study investigated the use of the Life Course Interview (Clausen, 1998) and an integrated qualitative and quantitative data analytic strategy (IQDAS) to provide empirical documentation of the impact of the Changing Lives Program on qualitative change in positive identity in a multicultural population of troubled youth in an alternative public high school. The psychosocial life course intervention approach used in this study draws its developmental framework from both psychosocial developmental theory (Erikson, 1968) and life course theory (Elder, 1998), and its intervention strategies from Freire's (1983/1970) transformative pedagogy. Using the 22 participants in the Intervention Condition and the 10 participants in the Control Condition, RMANOVAs found significantly more positive qualitative change in personal identity for program participants relative to the non-intervention control condition. In addition, a 2x2x2x3 mixed-design RMANOVA, in which Time (pre, post) was the repeated factor and Condition (Intervention versus Control), Gender, and Ethnicity were the between-group factors, found significant Time by Gender and Time by Ethnicity interactions. Moreover, the direction of the basic pattern of change was positive for participants of both genders and all three ethnic groups. The pattern of the moderation effects also indicated a marked tendency for participants in the intervention group to characterize their sense of self as more secure and less negative at the end of their first semester in the intervention, a tendency that was stable across both genders and all three ethnicities. The basic differential pattern, an increase in positive characterization of sense of self in the intervention condition relative to both pretest and to the direction of change among the non-intervention controls, was stable across both genders and all three ethnic groups.
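
As a rough illustration only: in a two-occasion (pre/post) design, the Time by Condition, Time by Gender, and Time by Ethnicity interactions can be examined as between-subjects effects on the pre-to-post change score. The sketch below assumes hypothetical column names and is not the study's analysis code.

```python
# Simplified change-score sketch of the moderation analysis described above.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("life_course_ratings.csv")  # hypothetical file
df["change"] = df["post_identity"] - df["pre_identity"]

# Between-subjects effects on the change score stand in for Time x factor interactions.
model = smf.ols("change ~ C(condition) + C(gender) + C(ethnicity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```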

Relevance:

30.00%

Publisher:

Abstract:

Internship report submitted for the degree of Master in Education and Multimedia Communication.

Relevance:

30.00%

Publisher:

Abstract:

Current interest in measuring quality of life is driving the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer every item to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests with some common items across subtests, administering each subtest to a different sample, and finally running the estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternative configurations of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) the balance of common and unique items per subtest, (c) the characteristics of the common items, and (d) the criteria for the distribution of unique items across subtests. The results indicate that maximizing accuracy in item parameter recovery requires subtests with the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distributing unique items do not affect accuracy.
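
To show the structure the study manipulates, the toy sketch below builds an aggregated response array for a concurrent anchor-item design: a few common (anchor) items answered by every sample, unique items answered only within a subtest, and everything else missing. All sizes are arbitrary illustrations, not the simulated conditions of the study.

```python
# Toy anchor-item design: mostly-missing aggregated response matrix.
import numpy as np

n_per_group, n_groups = 100, 3
n_common, n_unique = 5, 15                     # anchor items vs. unique items per subtest
n_items = n_common + n_groups * n_unique

data = np.full((n_groups * n_per_group, n_items), np.nan)
rng = np.random.default_rng(0)

for g in range(n_groups):
    rows = slice(g * n_per_group, (g + 1) * n_per_group)
    unique_cols = slice(n_common + g * n_unique, n_common + (g + 1) * n_unique)
    # Simulated 5-category Likert responses; only administered items are filled in.
    data[rows, :n_common] = rng.integers(1, 6, size=(n_per_group, n_common))
    data[rows, unique_cols] = rng.integers(1, 6, size=(n_per_group, n_unique))

print(np.isnan(data).mean())  # proportion of missing responses in the aggregated array
```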

Relevance:

30.00%

Publisher:

Abstract:

Developing new nanomaterials, switches, and nanoscale machines that respond to small, specific temperature variations should be of great value to the many fields working in nanotechnology. A further objective is to convince the reader that DNA-based nanotechnologies offer enormous potential for real-time temperature monitoring at the nanoscale. In the Results section, we exploit the properties of DNA to create versatile, robust, and easy-to-use thermometers. Using a series of new strategies inspired by nature, we create DNA nanothermometers capable of measuring temperatures from 25 to 95°C with a precision of <0.1°C. By building new multimeric DNA complexes, we develop ultrasensitive thermometers whose fluorescence increases 20-fold over a 7°C interval. By combining several DNA strands with different dynamic ranges, we can form thermometers showing a linear transition over 50°C. Finally, the response speed, precision, and reversibility of the thermometers are illustrated with a temperature-monitoring experiment inside a single well of a qPCR instrument. In conclusion, the potential applications of such nanothermometers in synthetic biology, cellular thermal imaging, DNA nanomachines, and controlled delivery are considered.

Relevance:

30.00%

Publisher:

Abstract:

Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high-quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogeneous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality-by-design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and a large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction, and the volume median diameter was taken as the response. DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had with droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets, followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence that smaller droplets formed thinner, more uniform and less porous film coats.
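
A hedged sketch of the kind of response-surface model the DOE analysis implies: linear, quadratic and interaction terms for the three critical process parameters, fitted by ordinary least squares. The data file and column names are assumptions for illustration; this is not the published model or dataset.

```python
# Illustrative quadratic response-surface fit, not the paper's DOE software output.
import pandas as pd
import statsmodels.formula.api as smf

doe = pd.read_csv("droplet_doe.csv")  # hypothetical DOE results

formula = ("d50 ~ pump + pressure + conc"
           " + I(pump**2) + I(pressure**2) + I(conc**2)"
           " + pump:pressure")         # interaction reported between pump rate and pressure
fit = smf.ols(formula, data=doe).fit()
print(fit.summary())                   # R-squared plays the role of the reported model fit
```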

Relevance:

30.00%

Publisher:

Abstract:

UK engineering standards are regulated by the Engineering Council (EC) using a set of generic threshold competence standards which all professionally registered Chartered Engineers in the UK must demonstrate, underpinned by a separate academic qualification at Masters level. As part of an EC-led national project for the development of work-based learning (WBL) courses leading to Chartered Engineer registration, Aston University has started an MSc Professional Engineering programme, a development of a model originally designed by Kingston University, built around a set of generic modules which map onto the competence standards. The learning pedagogy of these modules conforms to a widely recognised experiential learning model, with refinements incorporated from a number of other learning models. In particular, the use of workplace mentoring to support the development of critical reflection and to overcome barriers to learning is being incorporated into the learning space. This discussion paper explains the work that was done in collaboration with the EC and a number of Professional Engineering Institutions to design a course structure and curricular framework that optimises the engineering learning process for engineers already working across a wide range of industries, and to address issues of engineering sustainability. It also explains the thinking behind the work that has been started to provide an international version of the course, built around a set of globalised engineering competences. © 2010 W J Glew, E F Elsworth.

Relevance:

30.00%

Publisher:

Abstract:

Measurement and verification of products and processes during early design is attracting increasing interest from high value manufacturing industries. Measurement planning is deemed an effective means of facilitating the integration of metrology activity into a wider range of production processes. However, the literature reveals very few research efforts in this field, especially regarding large volume metrology. This paper presents a novel approach to instrument selection, the first stage of the measurement planning process, by mapping measurability characteristics between specific measurement assignments and instruments.
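
As a toy illustration of mapping measurability characteristics to instruments (not the paper's method or data), the sketch below filters a list of candidate instruments against the requirements of a hypothetical measurement assignment.

```python
# Invented placeholder instruments and requirements, for illustration only.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    max_range_m: float       # largest measurable dimension
    uncertainty_um: float    # stated measurement uncertainty
    portable: bool

def select(instruments, required_range_m, max_uncertainty_um, needs_portable):
    return [i for i in instruments
            if i.max_range_m >= required_range_m
            and i.uncertainty_um <= max_uncertainty_um
            and (i.portable or not needs_portable)]

candidates = [
    Instrument("laser tracker", 40.0, 25.0, True),
    Instrument("photogrammetry system", 10.0, 50.0, True),
    Instrument("CMM", 3.0, 2.0, False),
]
print([i.name for i in select(candidates, required_range_m=15.0,
                              max_uncertainty_um=30.0, needs_portable=True)])
```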

Relevance:

30.00%

Publisher:

Abstract:

Dimensional and form inspections are key to the manufacturing and assembly of products. Product verification can involve a number of different measuring instruments operated using their dedicated software. Typically, each of these instruments with its associated software is more suitable for the verification of a pre-specified quality characteristic of the product than the others. The number of different systems and software applications needed to perform a complete measurement of products and assemblies within a manufacturing organisation is therefore expected to be large, and it becomes even larger as advances in measurement technologies are made. The idea of a universal software application for any instrument still appears to be only a theoretical possibility, so a need for information integration is apparent. In this paper, the design of an information system to consistently manage (store, search, retrieve, secure) measurement results from various instruments and software applications is introduced. The first of the two main ideas underlying the proposed system is abstracting the structures and formats of measurement files from the data, so that complexity and incompatibility between different approaches to measurement data modelling are avoided. The second is enriching the information within a file with meta-information to facilitate its consistent storage and retrieval. To demonstrate the designed information system, a web application is implemented. © Springer-Verlag Berlin Heidelberg 2010.
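
A minimal sketch of the second idea: wrapping a raw measurement file in a record whose meta-information (part, characteristic, instrument, units, date) supports consistent storage and retrieval regardless of the file's native format. The record fields and example values are assumptions for illustration, not the paper's schema.

```python
# Hypothetical measurement record enriched with meta-information.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MeasurementRecord:
    part_id: str
    characteristic: str       # e.g. "flatness", "hole diameter"
    instrument: str
    units: str
    raw_file: str             # path to the original, format-specific file
    created: datetime = field(default_factory=datetime.utcnow)
    tags: dict = field(default_factory=dict)

record = MeasurementRecord(
    part_id="WING-RIB-042", characteristic="hole diameter",
    instrument="CMM-7", units="mm", raw_file="results/rib042_cmm.dml",
    tags={"operator": "A. Smith", "batch": "B17"},
)
print(record.part_id, record.characteristic, record.units)
```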

Relevance:

30.00%

Publisher:

Abstract:

Metrology processes used in the manufacture of large products include tool setting, product verification and flexible metrology-enabled automation. The range of applications and instruments available makes the selection of the appropriate instrument for a given task highly complex. Since metrology is a key manufacturing process, it should be considered in the early stages of design. This paper provides an overview of the important selection criteria for typical measurement processes and presents some novel selection strategies. Metrics which can be used to assess measurability are also discussed. A prototype instrument selection and measurability analysis application is presented, with discussion of how this can be used as the basis for development of a more sophisticated measurement planning tool. © Springer-Verlag Berlin Heidelberg 2010.

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n grows dramatically in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
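
As a small numeric illustration of the latent-structure view referred to above, the sketch below builds a two-variable joint probability mass function as a weighted sum of rank-one tables (a PARAFAC/latent-class representation). The numbers are arbitrary and only demonstrate the factorization.

```python
# Toy latent-class representation of a joint pmf for two categorical variables.
import numpy as np

weights = np.array([0.6, 0.4])              # latent class probabilities
a = np.array([[0.7, 0.2, 0.1],              # P(X1 = i | class k)
              [0.1, 0.3, 0.6]])
b = np.array([[0.5, 0.5],                   # P(X2 = j | class k)
              [0.2, 0.8]])

# pmf[i, j] = sum_k weights[k] * a[k, i] * b[k, j]
pmf = np.einsum("k,ki,kj->ij", weights, a, b)
print(pmf, pmf.sum())                       # sums to 1; nonnegative rank at most 2
```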

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
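
To make the central quantity concrete, the sketch below computes waiting times between exceedances of a high threshold for a simulated series; the data are placeholder noise, and the inferential framework of the chapter is not reproduced here.

```python
# Waiting times between threshold exceedances in a time-indexed series.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)              # placeholder for an observed process
threshold = np.quantile(x, 0.99)             # high threshold

exceed_times = np.flatnonzero(x > threshold) # time indices of exceedances
waiting_times = np.diff(exceed_times)        # gaps encode strength/structure of tail dependence
print(waiting_times[:10], waiting_times.mean())
```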

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
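
The sketch below implements a standard truncated-Normal (Albert-Chib) data augmentation Gibbs sampler for a probit model on simulated rare-event data and reports a lag-1 autocorrelation, the kind of mixing diagnostic discussed above. It is a generic textbook sampler under an assumed flat prior, not the chapter's code, and the data are simulated.

```python
# Albert-Chib truncated-Normal data augmentation sampler for probit regression.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, p = 2000, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-2.5, 0.5])                     # intercept chosen so successes are rare
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)
print("successes:", y.sum())

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(p)
draws = []
for _ in range(2000):
    # 1) Latent utilities truncated to match the observed 0/1 outcomes.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)               # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)                # z < 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) Conditional Normal draw for beta given z (flat prior).
    beta_hat = XtX_inv @ (X.T @ z)
    beta = beta_hat + chol @ rng.standard_normal(p)
    draws.append(beta.copy())

draws = np.array(draws)[500:]                         # drop burn-in
ac1 = np.corrcoef(draws[:-1, 0], draws[1:, 0])[0, 1]  # lag-1 autocorrelation of the intercept
print("posterior mean:", draws.mean(axis=0), "lag-1 autocorrelation:", round(ac1, 3))
```

High lag-1 autocorrelation of the chain when successes are rare is consistent with the slow-mixing behaviour described above.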

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on the development of algorithms that will allow protein design calculations to incorporate more realistic modeling assumptions. Protein design algorithms search large sequence spaces for protein sequences that are biologically and medically useful. Better modeling could improve the chance of success in designs and expand the range of problems to which these algorithms are applied. I have developed algorithms to improve modeling of backbone flexibility (DEEPer) and of more extensive continuous flexibility in general (EPIC and LUTE). I’ve also developed algorithms to perform multistate designs, which account for effects like specificity, with provable guarantees of accuracy (COMETS), and to accommodate a wider range of energy functions in design (EPIC and LUTE).