7 results for Boolean Computations
in Digital Commons at Florida International University
Abstract:
This dissertation derived hypotheses from the theories of Piaget, Bruner, and Dienes regarding the effects of using Algebra Tiles and other manipulative materials to teach remedial algebra to community college students. The dependent variables measured were achievement and attitude towards mathematics. The Piagetian cognitive level of the students was measured and used as a concomitant factor in the study.

The population for the study comprised remedial algebra students at a large urban community college. The sample consisted of 253 students enrolled in 10 sections of remedial algebra at three of the six campuses of the college. Pretests included an achievement pre-measure, Aiken's Mathematics Attitude Inventory (MAI), and the Group Assessment of Logical Thinking (GALT). Posttest measures included a course final exam and a second administration of the MAI.

The GALT results revealed that 161 students (63.6%) were concrete operational, 65 (25.7%) were transitional, and 27 (10.7%) were formal operational. For the purpose of analyzing the data, the transitional and formal operational students were grouped together.

Univariate factorial analyses of covariance ($\alpha = .05$) were performed on the achievement posttest (covariate: achievement pretest) and the MAI posttest (covariate: MAI pretest). The factors used in the analysis were method of teaching (manipulative vs. traditional) and cognitive level (concrete operational vs. transitional/formal operational).

The achievement analyses revealed a significant difference in favor of the manipulatives groups in the computations by campus. No significant differences were noted in the analyses by individual instructor.

The results for attitude towards mathematics showed a significant difference in favor of the manipulatives groups for the college-wide analysis and for one campus; the analysis by individual instructor was not significant. In addition, the college-wide analysis was significant in favor of the transitional/formal operational stage of cognitive development, although this result was not supported by the analyses by campus or by individual instructor.
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on accommodating various sophisticated modern language features; however, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Moreover, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow.

We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started with a small, simple imperative language and added only the features we deemed most important for practical use. One language feature we addressed is arrays. Because of the various leakage channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually concerned only with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally based on feedback from type checking and inference. Core computations interact with code modules from the involved parties only through well-defined interfaces, and all code modules are digitally signed to ensure their authenticity and integrity.
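The core idea of checking information flow against security levels can be illustrated with a toy two-level lattice. This sketch is not the dissertation's type system or its algorithm B; it only shows the basic rule that an assignment is legal when the join of the operand labels flows to the target's label.

```python
# Illustrative sketch (not the dissertation's algorithm B): a minimal
# two-level information-flow check for straight-line assignments.
# Labels form the lattice LOW <= HIGH; LOW may flow to HIGH, not vice versa.
LOW, HIGH = 0, 1

def check_flow(env, program):
    """env: dict var -> label; program: list of (target, [operand vars]).
    An assignment x := e is legal iff join(labels of e) <= label(x)."""
    for target, operands in program:
        expr_label = max((env[v] for v in operands), default=LOW)  # lattice join
        if expr_label > env[target]:  # HIGH flowing into LOW is illegal
            return False, f"illegal flow into {target}"
    return True, "ok"

env = {"secret": HIGH, "pub": LOW, "out": HIGH}
ok, _ = check_flow(env, [("out", ["secret", "pub"])])   # join is HIGH -> HIGH: fine
bad, msg = check_flow(env, [("pub", ["secret"])])       # HIGH -> LOW: rejected
print(ok, bad, msg)
```

A real system, as the abstract notes, must also track implicit flows through control structures and the leakage channels of features such as arrays, which is where the complexity arises.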
Abstract:
This research establishes new optimization methods for pattern recognition and classification of different white blood cells in actual patient data to enhance the diagnostic process. Beckman-Coulter Corporation supplied flow cytometry data from numerous patients, which were used as training sets to exploit the different physiological characteristics of the samples provided. Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were used as promising pattern classification techniques to identify different white blood cell samples and to provide medical doctors with diagnostic references for specific disease states of leukemia. The results show that a neural network classifier, when well configured and trained with cross-validation, can outperform support vector classifiers alone on this type of data. Furthermore, a new unsupervised learning algorithm, the Density-based Adaptive Window Clustering (DAWC) algorithm, was designed to process large volumes of data and locate high-density data clusters in real time. It reduces the computational load to ~O(N) computations, making the algorithm faster and more attractive than current hierarchical algorithms.
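The SVM-versus-ANN comparison with cross-validation can be sketched with scikit-learn. The Beckman-Coulter flow cytometry data is not public, so this uses a synthetic stand-in; the feature counts, architectures, and scores are assumptions for illustration, not the study's configuration or results.

```python
# Hedged sketch of the classifier comparison above, on synthetic data
# standing in for flow-cytometry features (the real data is proprietary).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Three classes as a stand-in for distinct white-blood-cell populations.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0))

# 5-fold cross-validation, the kind of training/validation the abstract credits
# for the neural network's performance.
svm_acc = cross_val_score(svm, X, y, cv=5).mean()
ann_acc = cross_val_score(ann, X, y, cv=5).mean()
print(f"SVM: {svm_acc:.3f}  ANN: {ann_acc:.3f}")
```

Which classifier wins depends on the data; the abstract's claim is specific to the well-tuned ANN on the cytometry data, not a general rule.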
Abstract:
Purpose. The goal of this study was to improve the favorable molecular interactions between starch and PPC by adding the grafting monomers MA and ROM as compatibilizers, which would advance the mechanical properties of starch/PPC composites.

Methodology. Calculations based on DFT and semi-empirical methods were performed on three systems: (a) starch/PPC, (b) starch/PPC-MA, and (c) starch-ROM/PPC. The theoretical computations involved determining the optimal geometries, binding energies, and vibrational frequencies of the blended polymers.

Findings. Calculations performed on five starch/PPC composites revealed hydrogen bond formation as the driving force behind stable composite formation, confirmed by the negative relative energies of the composites, which indicate binding forces between the constituent co-polymers. The interaction between starch and PPC is also confirmed by the computed decrease in the stretching frequencies of the CO and OH groups participating in hydrogen bonding, which agree qualitatively with the experimental values.

A three-step mechanism for grafting MA onto PPC was proposed to improve the compatibility of PPC with starch. Nine types of 'blends' produced by covalent bond formation between starch and MA-grafted PPC were found to be energetically stable; blends with MA grafted at the 'B' and 'C' positions of PPC showed binding-energy increases of 6.8 and 6.2 kcal/mol, respectively, compared to the non-grafted starch/PPC composites. A similar increase in binding energies was observed for three types of 'composites' formed by hydrogen bonding between starch and MA-grafted PPC.

Next, grafting of ROM onto starch and subsequent blend formation with PPC was studied. All four types of blends formed by the reaction of ROM-grafted starch with PPC were more energetically stable than the starch/PPC composite and the starch/PPC-MA composites and blends. A blend of PPC and ROM grafted at the ' a&d12; ' position on amylose exhibited a maximal increase of 17.1 kcal/mol compared with the starch/PPC-MA blend.

Conclusions. ROM was found to be a more effective compatibilizer than MA in improving the favorable interactions between starch and PPC. The ' a&d12; ' position was found to be the most favorable attachment point of ROM to amylose for stable blend formation with PPC.
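The binding energies compared throughout the abstract follow the standard definition: the energy of the optimized complex minus the energies of the isolated fragments. The sketch below shows only that arithmetic; the hartree values are made up for illustration and are not the dissertation's DFT results.

```python
# Illustrative arithmetic only (energies below are invented, not the study's):
# E_bind = E(complex) - E(fragment A) - E(fragment B),
# so a more negative value indicates a more stable composite.
HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

def binding_energy(e_complex, e_a, e_b):
    """All inputs in hartree; returns the binding energy in kcal/mol."""
    return (e_complex - e_a - e_b) * HARTREE_TO_KCAL

# Hypothetical total energies for a starch fragment, a PPC fragment,
# and their hydrogen-bonded composite:
e_bind = binding_energy(-1525.020, -915.000, -610.005)
print(f"{e_bind:.1f} kcal/mol")  # negative => attractive interaction
```

On this definition, the reported 6.8 and 17.1 kcal/mol figures are increases in the magnitude of the binding energy of the grafted systems relative to the ungrafted ones.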
Abstract:
This study investigated the influence that receiving instruction in two languages, English and Spanish, had on the performance of students enrolled in the International Studies (IS) Program (a delayed partial immersion model) of Miami-Dade County Public Schools (MDCPS) on three sections of a standardized test in English, the Stanford Achievement Test (SAT), eighth edition: Reading Comprehension, Mathematics Computation, and Mathematics Applications.

The performance of the selected IS program/Spanish section cohort (N = 55) on the SAT Reading Comprehension, Mathematics Computation, and Mathematics Applications sections over four consecutive years was contrasted with that of a control group of comparable students selected within the same feeder pattern where the IS program is implemented (N = 21). The group's performance was also compared to the cross-sectional achievement patterns of the school's corresponding feeder pattern, region, and district.

The research model for the study was a variation of the "causal-comparative" or "ex post facto" design, sometimes referred to as "prospective". After data were collected from MDCPS, t-tests were performed to compare the IS-Spanish students' SAT performance for grades 3 to 6 (1994 to 1997) with the control group, feeder pattern, region, and district norms for each year on the three measures. Repeated-measures ANOVA and Tukey's tests were calculated to compare the mean percentiles of the groups under study and the possible interactions of the different variables. All tests were performed at the 5% significance level.

The analyses showed that the IS group performed significantly better than the control group on all three measures across the four years. The IS group's mean percentiles on the three measures were also significantly higher than those of the feeder pattern, region, and district.

The null hypotheses were rejected, and it was concluded that receiving instruction in two languages did not negatively affect the performance of IS program students on tests taken in English. It was also concluded that the particular design of the IS program enhances the general performance of participant students on standardized tests. The quantitative analyses were coupled with interviews of teachers and administrators of the IS program to gain additional insight into different aspects of the program's implementation at each school.
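One of the comparisons above, an IS cohort (N = 55) versus a control group (N = 21) on mean percentile scores at the 5% level, can be sketched as a two-sample t-test. The scores here are simulated, not the MDCPS data, and the group means are invented for the example.

```python
# Hedged sketch of one group comparison from the study design above:
# independent-samples t-test, IS cohort (n=55) vs. control (n=21).
# Percentile scores are simulated; the real MDCPS data is not available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
is_group = rng.normal(70, 12, 55)  # invented mean percentile for IS cohort
control = rng.normal(55, 12, 21)   # invented mean percentile for control

# Welch's t-test, which does not assume equal group variances.
t, p = stats.ttest_ind(is_group, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```

The study layers repeated-measures ANOVA and Tukey's tests on top of such pairwise comparisons to handle the four yearly measurements and their interactions.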
Abstract:
A class of lifetime distributions that has received considerable attention in the modelling and analysis of lifetime data, because of its extensive applications, is the class of lifetime distributions with bath-tub shaped failure rate functions. The purpose of this thesis was to introduce a new class of bivariate lifetime distributions with bath-tub shaped failure rates (BTFRFs). In this research, we first reviewed univariate lifetime distributions with bath-tub shaped failure rates and several multivariate extensions of a univariate failure rate function. We then introduced a new class of bivariate distributions with bath-tub shaped failure rates (hazard gradients). Specifically, the new class of bivariate lifetime distributions was developed using Morgenstern's method of defining a bivariate class of distributions with given marginals. Computer simulations and numerical computations were used to investigate the properties of these distributions.
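Morgenstern's construction (the Farlie-Gumbel-Morgenstern family) builds a bivariate CDF from two given marginals F and G as H(x, y) = F(x) G(y) [1 + a (1 - F(x)) (1 - G(y))] with |a| <= 1. The sketch below evaluates this for two exponential marginals; it illustrates the generic construction, not the specific bath-tub failure-rate marginals used in the thesis.

```python
# Generic Morgenstern (FGM) bivariate CDF from given marginals F and G:
#   H(x, y) = F(x) G(y) [1 + a (1 - F(x)) (1 - G(y))],  |a| <= 1.
# Exponential marginals are used here purely for illustration.
import math

def fgm_cdf(x, y, a, F, G):
    fx, gy = F(x), G(y)
    return fx * gy * (1.0 + a * (1.0 - fx) * (1.0 - gy))

F = lambda x: 1.0 - math.exp(-x)        # Exp(1) marginal
G = lambda y: 1.0 - math.exp(-0.5 * y)  # Exp(0.5) marginal

h = fgm_cdf(1.0, 2.0, 0.5, F, G)
indep = F(1.0) * G(2.0)  # setting a = 0 recovers the independent joint CDF
print(h, indep)
```

The thesis's contribution is choosing marginals with bath-tub shaped failure rates and studying the resulting hazard gradients; the dependence parameter a controls how far the joint distribution departs from independence.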
Abstract:
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are becoming increasingly important for delivering the user-desired Quality of Service (QoS).

First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-sourced online indexing and querying system for big geospatial data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing top-k spatial Boolean queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis.

Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running on top of the TerraFly map that efficiently supports many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share analysis results. TerraFly GeoCloud also provides the MapQL technology to customize map visualization using SQL-like statements [10].

Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation, and autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly comprises techniques to predict the demand of map workloads online and to optimize resource allocations, considering both response time and data freshness as the QoS targets. The proposed v-TerraFly system was prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly can predict workload demands 18.91% more accurately, and can efficiently allocate resources to meet the QoS target, improving QoS by 26.19% and saving 20.83% in resource usage compared to traditional peak-load-based resource allocation.
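A top-k spatial Boolean query of the kind sksOpen processes combines a distance ranking with a Boolean predicate over each object's keywords. The toy version below is illustrative only and bears no relation to sksOpen's actual index structures; the point-of-interest data and keyword predicate are invented.

```python
# Illustrative toy version (not sksOpen's index): return the k objects
# nearest a query point whose keyword sets satisfy a Boolean predicate
# (all "must" terms present, no "must_not" terms present).
import heapq

def topk_spatial_boolean(objects, q, k, must=frozenset(), must_not=frozenset()):
    """objects: list of ((x, y), keyword_set); q: (x, y) query point."""
    hits = (((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2, p, kw)
            for p, kw in objects
            if must <= kw and not (must_not & kw))  # Boolean keyword filter
    return [(p, kw) for _, p, kw in heapq.nsmallest(k, hits)]

pois = [((1, 1),     {"cafe", "wifi"}),
        ((2, 2),     {"cafe"}),
        ((0.5, 0.5), {"bar", "wifi"}),
        ((5, 5),     {"cafe", "wifi"})]
result = topk_spatial_boolean(pois, (0, 0), k=2, must={"cafe"}, must_not={"bar"})
print(result)
```

A production engine replaces the linear scan with a spatial index (e.g. an R-tree variant) interleaved with keyword filtering so that top-k results are found without touching the whole dataset.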