10 results for processing effects

in Digital Commons at Florida International University


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to investigate the effects of direct instruction in story grammar on the reading and writing achievement of second graders. Three aspects of story grammar (character, setting, and plot) were taught with direct instruction using the concept-development technique of deep processing. Deep processing, which included (a) visualization (the drawing of pictures), (b) verbalization (the writing of sentences), (c) the attachment of physical sensations, and (d) the attachment of emotions to concepts, was used to help students make the mental connections necessary for recall and application of character, setting, and plot when constructing meaning in reading and writing. Four existing classrooms consisting of seventy-seven second-grade students were randomly assigned to two treatments, experimental and comparison. Both groups were pretested and posttested for reading achievement using the Gates-MacGinitie Reading Tests. Pretest and posttest writing samples were collected and evaluated. Writing achievement was measured using (a) a primary-trait scoring scale (an adapted version of the Glazer Narrative Composition Scale) and (b) a holistic scoring scale by R. J. Pritchard. ANCOVAs were performed on the posttests, adjusted for the pretests, to determine whether the methods differed. There was no significant improvement in reading after the eleven-day experimental period for either group, nor did the two groups differ. There was significant improvement in writing for the experimental group over the comparison group. Pretreatment and posttreatment interviews were selectively collected to evaluate qualitatively whether the students were able to identify and manipulate elements of story grammar and to determine patterns in metacognitive processing.
Interviews provided evidence that most students in the experimental group gained, while most students in the comparison group did not gain, in their ability to manipulate, with understanding, the concepts of character, setting, and plot.
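For readers unfamiliar with the design, the pretest-adjusted comparison described above amounts to an ANCOVA F-test on a group indicator after regressing posttest on pretest. The sketch below is a minimal numpy-only illustration, not the study's analysis code, and the scores are invented:

```python
import numpy as np

def ancova_group_f(pre, post, group):
    """F-statistic for a group effect on posttest scores, adjusting for
    pretest (posttest ~ pretest + group), fit by ordinary least squares."""
    n = len(post)
    # Full model: intercept, pretest covariate, group indicator.
    X_full = np.column_stack([np.ones(n), pre, group])
    # Reduced model: drop the group indicator.
    X_red = X_full[:, :2]
    rss = lambda X: float(np.sum((post - X @ np.linalg.lstsq(X, post, rcond=None)[0]) ** 2))
    rss_full, rss_red = rss(X_full), rss(X_red)
    df_full = n - X_full.shape[1]
    return ((rss_red - rss_full) / 1.0) / (rss_full / df_full)

# Invented data: two groups of five with comparable pretests; the
# "experimental" group gains more from pre to post.
pre = np.array([10, 12, 11, 13, 9, 10, 12, 11, 13, 9], float)
post = np.array([14, 16, 16, 17, 13, 11, 13, 12, 15, 10], float)
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], float)
F = ancova_group_f(pre, post, group)
```

A large F relative to its F(1, n−3) reference distribution indicates the groups' adjusted posttest means differ.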

Relevance:

30.00%

Publisher:

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data-processing phenomenon known as "histogram binning" inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of the approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic-range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data and consequently undermines the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field contended with this dilemma for many years, either resorting to hardware approaches that are rather costly and carry inherent calibration and noise effects, or developing software techniques that filter the binning effect without successfully preserving the statistical content of the original data. The mathematical approach introduced in this dissertation is sufficiently novel that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed.
This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation that lends itself to real-time use via lookup tables, a task that is also introduced in this dissertation.
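The dissertation's exact derivations are not reproduced here, but the general idea of non-integer, multi-channel accumulation through a lookup table can be sketched as follows. All parameters (channel counts, decades of dynamic range) and the linear-to-log mapping are assumptions for illustration, not the patented method:

```python
import numpy as np

# Sketch: map each linear ADC channel to a fractional position on a
# log-scale axis via a precomputed lookup table, then split each event's
# unit count between the two adjacent output bins, so the log histogram
# accumulates in a non-integer, multi-channel fashion.
N_LINEAR = 1024    # linear ADC channels (assumed)
N_LOG = 256        # log-scale output bins (assumed)
DECADES = 4.0      # dynamic range in decades (assumed)

chans = np.arange(1, N_LINEAR + 1)
# Fractional log-bin position of each linear channel (the lookup table).
lut = (np.log10(chans / N_LINEAR) + DECADES) / DECADES * (N_LOG - 1)
lut = np.clip(lut, 0.0, N_LOG - 1)

def accumulate(events, hist=None):
    """Add events (0-based linear channel indices) to a log histogram,
    depositing fractional counts in the two bins straddling each event."""
    if hist is None:
        hist = np.zeros(N_LOG)
    pos = lut[events]                 # table lookup: O(1) per event
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, N_LOG - 1)
    frac = pos - lo
    np.add.at(hist, lo, 1.0 - frac)   # weight toward the lower bin
    np.add.at(hist, hi, frac)         # remainder to the upper bin
    return hist

hist = accumulate(np.random.default_rng(0).integers(0, N_LINEAR, 10_000))
```

Because each event deposits weights summing to one, the total count is preserved exactly, which is the sense in which such a scheme avoids distorting the data's statistics while still permitting lookup-table (real-time) implementation.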

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the effects of word prediction and text-to-speech on the narrative composition writing skills of six fifth-grade Hispanic boys with specific learning disabilities (SLD). A multiple-baseline design across subjects was used to explore the efficacy of word prediction and text-to-speech, alone and in combination, on four dependent variables: writing fluency (words per minute), syntax (T-units), spelling accuracy, and overall organization (holistic scoring rubric). Data were collected and analyzed during baseline, assistive technology interventions, and at 2-, 4-, and 6-week maintenance probes. Participants were equally divided into Cohorts A and B, and two separate but related studies were conducted. Throughout all phases of the study, participants wrote narrative compositions in 15-minute sessions. During baseline, participants used word processing only. During the assistive technology intervention condition, Cohort A participants used word prediction followed by word prediction with text-to-speech. Concurrently, Cohort B participants used text-to-speech followed by text-to-speech with word prediction. The results of this study indicate that word prediction, alone or in combination with text-to-speech, has a positive effect on the narrative writing compositions of students with SLD. Overall, participants in Cohorts A and B wrote more words, produced more T-units, and spelled more words correctly. A sign test indicated that these observed effects were not likely due to chance. Additionally, the quality of writing improved as measured by holistic rubric scores. When participants in Cohort B used text-to-speech alone, inconsequential results were observed on all dependent variables except spelling accuracy. This study demonstrated that word prediction, alone or in combination with text-to-speech, helps students with SLD write longer, higher-quality narrative compositions.
These results suggest that word prediction, or word prediction with text-to-speech, be considered as a writing support to facilitate the production of a first draft of a narrative composition. However, caution should be exercised in using text-to-speech alone, as its effectiveness has not been established. Recommendations for future research include investigating the use of these technologies in other phases of the writing process, with other student populations, and with other writing genres. Further, these technologies should be investigated as integrated into classroom composition instruction.
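The sign test mentioned above can be computed exactly from the binomial distribution: under the null hypothesis, each non-tied participant is equally likely to improve or decline. A minimal sketch, with invented counts rather than the study's data:

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    """Two-sided exact sign test: probability of a split at least this
    lopsided among n_pos + n_neg non-tied pairs, under H0 p = 0.5."""
    n, k = n_pos + n_neg, max(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example (not the study's data): six participants, all of
# whom improved from baseline to intervention.
p = sign_test_p(6, 0)  # two-sided p = 2 * (1/64) = 0.03125
```

With only six participants, a unanimous direction of change is the smallest attainable two-sided p, which is why single-subject studies of this size typically pair such a test with visual analysis of the multiple-baseline data.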

Relevance:

30.00%

Publisher:

Abstract:

Menu analysis is the gathering and processing of key pieces of information to make them more manageable and understandable. Ultimately, menu analysis allows managers to make more informed decisions about prices, costs, and the items to be included on a menu. The author discusses whether labor as well as food costs need to be included in menu analysis, and whether managers need to categorize menu items differently when doing menu analysis based on customer eating patterns.
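As a toy illustration of the question the author raises, including labor cost in an item's contribution margin can reorder a menu's items. All names, prices, costs, and the labor rate below are invented for illustration:

```python
# Hypothetical menu items: (name, menu price, food cost, prep minutes).
LABOR_RATE = 0.30  # assumed labor cost per minute of preparation

items = [
    ("Grilled salmon", 24.00, 9.00, 12),
    ("Pasta primavera", 16.00, 3.50, 4),
    ("Prime rib", 30.00, 14.00, 20),
]

def margin(item, include_labor):
    """Contribution margin = price minus food cost, optionally minus labor."""
    name, price, food, minutes = item
    cost = food + (LABOR_RATE * minutes if include_labor else 0.0)
    return price - cost

# Rank items by margin under each costing convention.
food_only = sorted(items, key=lambda it: margin(it, False), reverse=True)
with_labor = sorted(items, key=lambda it: margin(it, True), reverse=True)
```

In this invented example the labor-intensive item ranks first on a food-cost-only basis but last once labor is charged, which is exactly the kind of decision reversal that makes the costing question consequential.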

Relevance:

30.00%

Publisher:

Abstract:

This study analyzes the qualitative and quantitative patterns of notetaking by learning disabled (LD) and nondisabled (ND) adolescents, and the effectiveness of notetaking and review as measured by the subjects' ability to recall information presented during a lecture. The study also examines relationships between certain learner characteristics and notetaking. The following notetaking variables were investigated: note completeness, number of critical ideas recorded, levels of processing information, organizational strategies, fluency of notes, and legibility of notes. The learner characteristics examined pertained to measures of achievement, short-term memory, listening comprehension, and verbal ability. Students from the 11th and 12th grades were randomly selected from four senior high schools in Dade County, Florida. Seventy learning disabled and 79 nondisabled subjects were shown a videotaped lecture and required to take notes. The lecture conditions controlled for presentation rate, prior knowledge, information density, and difficulty level. After 8 weeks, their notes were returned to the subjects for a review period, and a posttest was administered. Results of this study suggest significant differences (p ≤ .01) in the patterns of notetaking between the LD and ND groups that are not due to differences in the learner characteristics listed above. In addition, certain notetaking variables, such as processing levels, number of critical ideas, and note completeness, were found to be significantly correlated with learning outcome. Further, deficiencies in the spontaneous use of organizational strategies and abbreviations adversely affected the notetaking effectiveness of learning disabled students. Both LD and ND subjects recalled more information recorded in their notes than not recorded; this difference was significant only for the ND group.
By contrast, LD subjects compensated for their poor notetaking skills and recalled significantly more information not recorded in their notes than did ND subjects. The major implication of these findings is that LD and ND subjects exhibit very different entry behaviors when asked to perform a notetaking task; hence, teaching approaches to notetaking must differ as well.
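The reported correlations between notetaking variables and learning outcome are ordinary Pearson coefficients; a plain sketch follows. The scores below are invented and are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between a notetaking measure and a recall score."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical scores: note completeness (% of lecture ideas recorded)
# vs. posttest recall, showing the kind of positive association reported.
completeness = [40, 55, 60, 70, 85, 90]
recall = [12, 15, 14, 18, 22, 25]
r = pearson_r(completeness, recall)
```

With real data, the coefficient would of course be tested for significance against the sample size (n = 149 here), not merely inspected for sign.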

Relevance:

30.00%

Publisher:

Abstract:

Tall buildings are wind-sensitive structures and can experience large wind-induced effects. Aerodynamic boundary-layer wind tunnel testing has been the most commonly used method for estimating wind effects on tall buildings. Design wind effects on tall buildings are estimated through analytical processing of the data obtained from aerodynamic wind tunnel tests. Even though it is widely agreed that the data obtained from wind tunnel testing are fairly reliable, the post-test analytical procedures are still argued to carry considerable uncertainties. This research assessed in detail the uncertainties occurring at different stages of the post-test analytical procedures and suggests improved techniques for reducing them. Results of the study showed that traditionally used simplifying approximations, particularly in the frequency-domain approach, can cause significant uncertainties in estimating aerodynamic wind-induced responses. Based on the identified shortcomings, a more accurate dual aerodynamic data-analysis framework operating in both the frequency and time domains was developed. The comprehensive analysis framework allows more accurate estimation of modal, resultant, and peak values of various wind-induced responses of a tall building. Estimating design wind effects on tall buildings also requires synthesizing the wind tunnel data with local climatological data for the study site. After investigating the causes of significant uncertainties in currently used synthesizing techniques, a novel copula-based approach was developed for accurately synthesizing aerodynamic and climatological data. The improvement of the new approach over existing techniques was illustrated with a case study of a 50-story building. Finally, a practical dynamic optimization approach was suggested for tuning the structural properties of tall buildings toward optimum performance against wind loads with fewer design iterations.
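One standard ingredient of the frequency-domain approach discussed above is converting the RMS of the fluctuating response into an expected peak via Davenport's peak factor for a Gaussian process. The sketch below uses that well-known formula; the modal frequency and record length are assumed for illustration and are not taken from the study:

```python
from math import log, sqrt

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def davenport_peak_factor(nu, T):
    """Davenport's expected peak factor for a Gaussian process with mean
    upcrossing rate nu (Hz) observed over duration T (s):
    g = sqrt(2 ln(nu*T)) + gamma / sqrt(2 ln(nu*T))."""
    a = sqrt(2.0 * log(nu * T))
    return a + EULER_GAMMA / a

# Illustrative numbers (assumed): a dominant modal frequency of 0.2 Hz
# and a 1-hour record.
g = davenport_peak_factor(0.2, 3600.0)
# Expected peak response = mean response + g * RMS of the fluctuating part.
```

Peak factors for tall buildings typically land in the 3-to-4 range for hour-long records, which is one reason small uncertainties in the underlying spectral estimates propagate noticeably into design peak responses.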

Relevance:

30.00%

Publisher:

Abstract:

The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated by DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of such deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where the samples were deposited, or chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation to the initial inhibitor input in the sample, and its overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, owing to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify the locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts.
The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggested that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain overwhelming amounts of inhibitory substances.