12 results for mixed verification methods

in Digital Commons at Florida International University


Relevance:

40.00%

Publisher:

Abstract:

Purpose. The Internet has provided an unprecedented opportunity for psychotropic medication consumers, a traditionally silenced group in clinical trial research, to have a voice by contributing to the construction of drug knowledge in an immediate, direct manner. Currently, there are no systematic appraisals of the potential of online consumer drug reviews to contribute to drug knowledge. The purpose of this research was to explore the content of drug information on various websites representing themselves as consumer- and expert-constructed and, as a practical consideration, to examine how each source may help and hinder treatment decision-making.

Methodology. A mixed-methods research strategy utilizing a grounded theory approach was used to analyze drug information on 5 exemplar websites (3 consumer- and 2 expert-constructed) for 2 popularly prescribed psychotropic drugs (escitalopram and quetiapine). A stratified simple random sample was used to select 1,080 consumer reviews from the websites (N = 7,114) through February 2009. Text was coded using QDA Miner 3.2 software by Provalis Research. A combination of frequency tables, descriptive excerpts from text, and chi-square tests for association was used throughout the analyses.

Findings. The effects most frequently mentioned by consumers taking either drug were related to psychological/behavioral symptoms and sleep. Consumers reported many of the same effects found on expert health sites, but provided more descriptive language and situational examples. Expert labels of "less serious" for certain effects were not congruent with the sometimes tremendous burden described by consumers. Consumers mentioned more than double the themes mentioned in expert text, and demonstrated a diversity and range of discourses around those themes.

Conclusions. Drug effects from each source were complete relative to the information provided in the other, but each also offered distinct advantages. Expert health sites provided concise summaries of medications' effects, while consumer reviews had the added advantage of concrete descriptions and greater context. In short, consumer reviews better prepared potential consumers for what it is like to take psychotropic drugs. Both sources of information benefit clinicians and consumers in making informed treatment-related decisions. Social work practitioners are encouraged to thoughtfully utilize online consumer drug reviews as a legitimate additional source for assisting clients in learning about treatment options.
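
As a sketch of the sampling and association-testing workflow described above (a stratified simple random sample followed by a chi-square test), the code below uses simulated review records. The website names, the "mentions sleep" flag, and all proportions are invented for illustration; only the corpus size (N = 7,114) and the target sample size (1,080) come from the abstract.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical corpus: each review is (website, drug, mentions_sleep).
# Site names and proportions are illustrative, not the study's data.
corpus = ([("siteA", "escitalopram", random.random() < 0.45) for _ in range(2000)]
          + [("siteB", "escitalopram", random.random() < 0.40) for _ in range(1500)]
          + [("siteA", "quetiapine", random.random() < 0.60) for _ in range(2000)]
          + [("siteB", "quetiapine", random.random() < 0.55) for _ in range(1614)])

# Stratified simple random sample: sample within each (website, drug)
# stratum in proportion to its share of the corpus.
strata = defaultdict(list)
for r in corpus:
    strata[(r[0], r[1])].append(r)
target = 1080
sample = []
for key, items in strata.items():
    k = round(target * len(items) / len(corpus))
    sample.extend(random.sample(items, k))

# 2x2 table: drug vs. whether a sleep-related effect is mentioned.
drugs, flags = ("escitalopram", "quetiapine"), (True, False)
table = {(d, m): 0 for d in drugs for m in flags}
for _, drug, mentions in sample:
    table[(drug, mentions)] += 1

# Pearson chi-square statistic for the 2x2 table (df = 1).
row = {d: table[(d, True)] + table[(d, False)] for d in drugs}
col = {m: table[(drugs[0], m)] + table[(drugs[1], m)] for m in flags}
n = len(sample)
chi2 = sum((table[(d, m)] - row[d] * col[m] / n) ** 2 / (row[d] * col[m] / n)
           for d in drugs for m in flags)
print(f"sample size: {n}, chi-square: {chi2:.2f}")
```

With df = 1, a statistic above 3.841 would indicate an association at the 0.05 level; the study applied such tests to coded themes rather than simulated flags.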

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a heuristic investigation of mixed methods organized around three pairs of opposing standpoints: methods (qualitative vs. quantitative), paradigms (constructivist vs. postpositivist), and inquiry approaches (dialectical vs. pragmatic).

Relevance:

40.00%

Publisher:

Abstract:

This sequential explanatory, mixed-methods research design examines the role teachers should play in developing the teacher evaluation system in Louisiana. These insights will help ensure that teachers act as catalysts in the classroom to significantly increase student achievement, and will allow policymakers, practitioners, and instructional leaders to act as informed decision makers.

Relevance:

40.00%

Publisher:

Abstract:

English has been taught as a core and compulsory subject in China for decades. Recently, the demand for English in China has increased dramatically. China now has the world's largest English-learning population. The traditional English-teaching method cannot continue to be the only approach, because it focuses merely on reading, grammar, and translation, which cannot meet English learners' and users' needs (i.e., communicative competence and skills in speaking and writing).

This study was conducted to investigate whether the Picture-Word Inductive Model (PWIM), a new pedagogical method using pictures and inductive thinking, would benefit English learners in China in terms of potentially higher output in speaking and writing. Using Cognitive Load Theory (CLT), specifically its redundancy effect, as a gauge, I investigated whether processing words and a picture concurrently would present a cognitive overload for English learners in China.

I conducted a mixed-methods research study. A quasi-experiment (pretest, a seven-week intervention, and posttest) was conducted with 234 students in four groups in Lianyungang, China (58 fourth graders and 57 seventh graders as an experimental group using PWIM, and 59 fourth graders and 60 seventh graders as a control group using the traditional method). No significant difference in the effects of PWIM on vocabulary acquisition was found at either grade level. Observations, questionnaires with open-ended questions, and interviews were deployed to answer the three remaining research questions. A few students felt cognitively overloaded when they encountered too many writing samples, too many new words at one time, repeated words, mismatches between words and pictures, and so on. Many students listed and exemplified numerous strengths of PWIM, while a few mentioned weaknesses. The students indicated that PWIM had a positive effect on their English learning.

As integrated inferences, qualitative findings were used to explain, from four contextual aspects, the quantitative result that the effects of PWIM did not differ significantly between the experimental and control groups at either grade level: time constraints on PWIM implementation, teachers' resistance, uncertainty about how to use PWIM, and implementation of PWIM in classrooms of over 55 students.
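
The quasi-experimental comparison above (pretest-posttest gains between a PWIM group and a traditional-method group) can be sketched as a two-sample test on gain scores. The simulated scores and the choice of Welch's t below are illustrative assumptions; the abstract does not name the statistical test the study used. Only the group sizes (58 and 59 fourth graders) come from the abstract.

```python
import random
from math import sqrt
from statistics import mean, variance

random.seed(1)

# Simulated pretest-to-posttest vocabulary gains; sizes mirror the study's
# fourth-grade groups, but the score distributions are invented.
pwim_gain = [random.gauss(5.0, 2.0) for _ in range(58)]
trad_gain = [random.gauss(4.7, 2.0) for _ in range(59)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(pwim_gain, trad_gain)
print(f"Welch t = {t:.2f}")  # |t| below ~1.98 -> no significant difference at 0.05
```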


Relevance:

30.00%

Publisher:

Abstract:

Modern software systems are often large and complicated. To better understand, develop, and manage large software systems, researchers have, over the last decade, studied software architectures, which provide the top-level overall structural design of software systems. One major research focus in software architecture is formal architecture description languages, but most existing research concentrates primarily on descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop a correct software architecture design.

Refinement is a general approach of adding detail to a software design, and a formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using the Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement, and high-level Petri net refinement; these three levels of refinement patterns are applicable to overall system interaction, architectural components, and the underlying formal language, respectively. Third, verification after modeling is discussed as a complementary technique to specification refinement. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied: a method for security enforcement in SAM is proposed, the Role-Based Access Control model is formalized using predicate transition nets and Z notation, and patterns for enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system is used to demonstrate how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model.

The results of this dissertation demonstrate that a refinement method is an effective way to develop a high-assurance system. The method developed in this dissertation extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
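
The idea of a property-preserving connector refinement over Petri nets can be sketched minimally. The class and the two nets below are illustrative toys, not SAM's notation or any of the dissertation's actual patterns: an abstract connector transition is replaced by a subnet routing through a buffer place, and a token still reaches the same output place.

```python
# Minimal place/transition Petri net: a marking maps places to token counts;
# each transition has a set of input places and a set of output places.
class PetriNet:
    def __init__(self, transitions):
        # transitions: {name: (input_places, output_places)}
        self.transitions = transitions

    def enabled(self, marking, t):
        ins, _ = self.transitions[t]
        return all(marking.get(p, 0) >= 1 for p in ins)

    def fire(self, marking, t):
        assert self.enabled(marking, t), f"{t} not enabled"
        ins, outs = self.transitions[t]
        m = dict(marking)
        for p in ins:
            m[p] -= 1
        for p in outs:
            m[p] = m.get(p, 0) + 1
        return m

# Abstract net: one connector transition moves a token from 'send' to 'recv'.
abstract = PetriNet({"transfer": ({"send"}, {"recv"})})

# Connector refinement: 'transfer' is replaced by a subnet routing through a
# buffer place, preserving the observable send -> recv behavior.
refined = PetriNet({
    "put": ({"send"}, {"buffer"}),
    "get": ({"buffer"}, {"recv"}),
})

ma = abstract.fire({"send": 1}, "transfer")
m = refined.fire(refined.fire({"send": 1}, "put"), "get")
print(ma.get("recv"), m.get("recv"))  # -> 1 1
```

A real refinement pattern would also carry a proof obligation that the substitution preserves the properties of interest; here only the reachability of 'recv' is checked.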

Relevance:

30.00%

Publisher:

Abstract:

Because past research has shown that faculty are the driving force affecting students' academic library use, librarians have tried for decades to engage classroom faculty in library activities. Nevertheless, a low rate of library use by faculty on behalf of their students persists. This study investigated the organizational culture dimensions affecting faculty demand for the library at a community college. The study employed a sequential quantitative-qualitative research design. A random sample of full-time faculty at a large urban community college responded to a 46-item survey. The survey data showed strong espoused support (84%) for the use of library-based materials but a much lower incidence of putting this construct into practice (46%). Interviews were conducted with 11 full-time faculty from two academic groups, English-Humanities and Engineering-Math-Science. These groups were selected because the survey data showed statistically significant differences between them on several key variables: the professors' perceptions of the importance of library research in their discipline, the amount of time spent on the course textbook during a term, the frequency of conversations about the library in the academic department, and the professors' ratings of the librarians' skill in instruction related to the academic discipline. All interviewees described the student culture as the predominant organizational culture at Major College. Although most interview subjects held to high information-literacy standards in their courses, others were less convinced these could be realistically practiced, citing a perception of students' poor academic skills, students' lack of time to complete assignments due to their commuter and family responsibilities, and the need to focus on textbook content.

Recommended future research would investigate methods to bridge the gap between the high espoused value placed on information literacy and the implementation of information-literate coursework.

Relevance:

30.00%

Publisher:

Abstract:

Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in recent years, with several methods, techniques, and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic.

This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
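
At its core, the Spin-based verification step amounts to exhaustive exploration of a model's state space plus a property check. The sketch below shows that core idea as explicit-state reachability with an invariant check; the toy mutual-exclusion state space is invented for illustration and is not the dissertation's SAM-to-Promela translation.

```python
from collections import deque

# Toy state: (loc0, loc1, lock). Each process cycles idle -> wait -> crit ->
# idle, entering 'crit' only when the shared lock is free. Illustrative only.
def successors(state):
    loc, lock = list(state[:2]), state[2]
    out = []
    for i in (0, 1):
        l = list(loc)
        if loc[i] == "idle":
            l[i] = "wait"; out.append((l[0], l[1], lock))
        elif loc[i] == "wait" and not lock:
            l[i] = "crit"; out.append((l[0], l[1], True))
        elif loc[i] == "crit":
            l[i] = "idle"; out.append((l[0], l[1], False))
    return out

def check_invariant(init, invariant):
    """Breadth-first exploration of all reachable states. Returns a violating
    state, or None if the invariant (a safety property) holds everywhere."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# Mutual exclusion: the two processes are never in 'crit' simultaneously.
bad = check_invariant(("idle", "idle", False),
                      lambda s: not (s[0] == "crit" and s[1] == "crit"))
print("invariant holds" if bad is None else f"violation: {bad}")
```

Spin performs the same kind of exhaustive search over a Promela model, with far more sophisticated state storage and support for temporal-logic properties beyond simple invariants.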

Relevance:

30.00%

Publisher:

Abstract:

In an attempt to improve students' functional understanding of plagiarism, a variety of approaches were tried within the context of a more comprehensive information literacy program. Sessions were taught as a one-hour "module" inside a required communication-skills class at a small private university. Approaches taken included control sessions (a straightforward PowerPoint presentation of the material), direct instruction sessions (featuring mostly direct lecture but with some seatwork as well), and student-centered sessions (utilizing role playing and group exercises). Students were taught basic content and definitions regarding plagiarism, what circumstances or instances constitute plagiarism, where to go for help in avoiding plagiarism, and what constitutes appropriate paraphrasing and citation. Pre-test and post-test scores determined students' functional understanding primarily by their ability to recognize properly and improperly paraphrased text, their content understanding by their combined total score on a multiple-choice quiz, and their attitude and conceptual understanding by their ability to recognize circumstances that would constitute plagiarism. While students improved across all methods, the study was unable to identify one that performed significantly better than the others. The results supported the need for more education with regard to plagiarism and suggested a need for more time on task and/or a mixed approach to conveying the content.

Relevance:

30.00%

Publisher:

Abstract:

Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial-order models are automatically extracted from instrumented concurrent program executions, and potential atomicity-violation bugs are automatically verified against the partial-order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to automatically mine Petri net models from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the tradeoff between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity-violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
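
The pair-of-threads, single-variable analysis described above can be sketched as pattern matching over an access trace. The trace format and the detection loop below are illustrative assumptions, not McPatom's actual algorithm; the unserializable read/write interleaving patterns come from the classic atomicity-violation literature.

```python
# Interleaving patterns (local, remote, local) on one shared variable that
# are not serializable: e.g. a remote write between a local read and the
# local write it feeds is the classic lost-update bug.
UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"), ("W", "R", "W"), ("R", "W", "W")}

def find_violations(trace):
    """trace: list of (thread_id, op, var) with op in {'R', 'W'}. Reports each
    place where one remote access interleaves two consecutive accesses by the
    same thread to the same variable in an unserializable pattern. Simplified:
    only the first interleaved remote access is considered."""
    violations = []
    for i, (t1, op1, v) in enumerate(trace):
        for j in range(i + 1, len(trace)):
            t2, op2, v2 = trace[j]
            if v2 != v:
                continue
            if t2 == t1:
                break  # next same-thread access to v: no remote interleaving
            for k in range(j + 1, len(trace)):
                t3, op3, v3 = trace[k]
                if v3 == v and t3 == t1:
                    if (op1, op2, op3) in UNSERIALIZABLE:
                        violations.append((i, j, k))
                    break
            break
    return violations

# Classic lost-update interleaving on a shared counter x:
trace = [(1, "R", "x"), (2, "W", "x"), (1, "W", "x")]
print(find_violations(trace))  # -> [(0, 1, 2)]
```

A real tool works on instrumented executions and, as the abstract notes, must also predict violations in interleavings other than the one observed, which is where model checking over the extracted partial-order model comes in.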
