24 results for requirement-based testing
in Digital Commons at Florida International University
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately.

Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment.

Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant.

Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years.
This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
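The extrapolation step described above can be sketched numerically. The sketch below assumes a Weibull time-to-failure model and a constant acceleration factor, both hypothetical illustrations; the study itself used Shapiro and Meeker's graphical analysis and Cox's general log-linear model, and the parameter values here are invented, not the study's estimates.

```python
import math

def weibull_reliable_life(eta, beta, reliability=0.80):
    """Time at which the given fraction of units still survives,
    under a Weibull model R(t) = exp(-(t/eta)**beta)."""
    return eta * (-math.log(reliability)) ** (1.0 / beta)

def to_normal_use(t_accelerated, acceleration_factor):
    """Extrapolate an accelerated-test failure time to normal use
    via a constant (assumed) acceleration factor."""
    return t_accelerated * acceleration_factor

# Hypothetical fitted parameters, for illustration only:
eta, beta = 2.5, 1.8   # Weibull scale (years under stress) and shape
af = 10.0              # assumed stress-to-field acceleration factor

t80_stress = weibull_reliable_life(eta, beta)  # 80%-reliable life under stress
t80_field = to_normal_use(t80_stress, af)      # extrapolated to normal use
```

The 80%-reliable life is simply the time by which no more than 20% of specimens are predicted to have failed, which is then scaled back from stress conditions to field conditions.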
Abstract:
The primary purpose of this study was to examine the influences of literacy variables on high-stakes test performance, including: (a) student achievement on the Metropolitan Achievement Test, Seventh Edition (MAT-7) as correlated with high-stakes tests such as the FCAT examination and (b) the English language proficiency attained by English Language Learner (ELL) students when participating in, or exiting from, an English for Speakers of Other Languages (ESOL) program as determined by the Limited English Proficient (LEP) committee.

Two one-sample Chi-square tests were conducted to investigate the relationship between passing the MAT-7 Reading and Language examinations and the FCAT-SSS Reading Comprehension and FCAT-NRT examinations. In addition, 2x2 Analyses of Variance (ANOVAs) were conducted to address the relationship between the time ELL students spent in the ESOL program and the level of achievement on the MAT-7 Reading and Language examinations and the FCAT-SSS Reading Comprehension and FCAT-NRT.

Findings of this study indicated that more ELL students exit the program based on LEP committee decisions than by passing the MAT-7. The majority of ELL students failed the 10th grade FCAT, the passing of which is needed for graduation. A significant number of ELL students failed even when passing the MAT-7 or being duly exited through the decision of the LEP committee. The data also indicated that ELL students who exited the ESOL program in six semesters or fewer had higher FCAT scores than those who exited the program in seven semesters or more. The MAT-7 and the decision of the LEP committee were shown to be ineffective as predictors of success on the FCAT.

Further research to determine the length of time a student in the ESOL program uses English to read, write, and speak should be conducted. Additionally, the development of a new assessment instrument to better predict student success should be considered.
However, it should be noted that the results of this study are limited to the context in which the study was conducted and do not warrant generalizations beyond that context.
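The one-sample chi-square test used above reduces to a short computation. The counts below are invented for illustration and are not the study's data; the statistic would be compared against a chi-square critical value with the appropriate degrees of freedom.

```python
def chi_square_1sample(observed, expected):
    """One-sample chi-square statistic: sum of (O - E)^2 / E
    over the categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical pass/fail counts versus counts expected if the MAT-7
# perfectly predicted FCAT success (illustrative values only):
obs = [30.0, 70.0]   # observed: passed FCAT, failed FCAT
exp = [50.0, 50.0]   # expected under the null hypothesis
stat = chi_square_1sample(obs, exp)  # compare to critical value, df = 1
```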
Abstract:
The purpose of this investigation was to develop and implement a general-purpose VLSI (Very Large Scale Integration) test module based on an FPGA (Field Programmable Gate Array) system to verify the mechanical behavior and performance of MEMS sensors, with associated corrective capabilities, and to make use of the evolving SystemC, a new open-source HDL (Hardware Description Language), for the design of the FPGA functional units. SystemC is becoming widely accepted as a platform for modeling, simulating, and implementing systems consisting of both hardware and software components. In this investigation, a dual-axis accelerometer (ADXL202E) and a temperature sensor (TMP03) were used for the test module verification. Results of the test module measurements were analyzed for repeatability and reliability, and then compared to the sensor datasheets. Ideas for further study were identified based on the results analysis. ASIC (Application Specific Integrated Circuit) design concepts were also pursued.
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. 
In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume contributed the least. The results from this research provide useful insights into the design of AID for arterial street applications.
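The preprocessing pipeline described above, a discrete wavelet transform followed by data normalization, can be sketched with a one-level Haar DWT and z-scores. This is illustrative only: the dissertation's exact wavelet, decomposition depth, and windowing choices are not specified here, and the detector readings below are invented.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficient lists. Assumes even length."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def z_normalize(values):
    """Standardize to zero mean and unit variance (z-scores)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n) or 1.0
    return [(v - mean) / sd for v in values]

# Simulated speed readings from one detector; the mid-series drop
# mimics an incident signature (hypothetical values):
speeds = [52.0, 51.0, 23.0, 20.0, 48.0, 50.0, 49.0, 51.0]
approx, detail = haar_dwt(speeds)        # smoothed trend + transient detail
features = z_normalize(approx + detail)  # normalized ANN input vector
```

The approximation coefficients carry the smoothed traffic trend while the detail coefficients highlight abrupt changes, which is why a DWT front end can help an ANN separate incidents from noise.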
Abstract:
In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistency of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process.

In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noisy component is no longer restricted to be zero. The resultant measure of price informativeness is defined as the informational variance divided by the total variance of the returns.

The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resultant seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
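The decomposition in essay 2 implies a simple formula for the informativeness measure. The sketch below assumes the time-varying variance components have already been estimated; the numbers are hypothetical, not the essays' results.

```python
def informativeness(var_info, var_noise, cov_in):
    """Price informativeness: informational variance divided by total
    return variance, where the total allows a non-zero covariance
    between the informational and noise components:
    total = var_info + var_noise + 2*cov."""
    total = var_info + var_noise + 2.0 * cov_in
    return var_info / total

# Hypothetical per-interval variance components (info, noise, covariance):
series = [(0.8, 0.3, 0.05), (0.5, 0.6, -0.02), (0.9, 0.2, 0.00)]
ratios = [informativeness(vi, vn, c) for vi, vn, c in series]
```

Because both components are time-varying, the ratio itself varies interval by interval, giving a dynamic rather than constant measure of how informative prices are.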
Abstract:
Over the last two decades social vulnerability has emerged as a major area of study, with increasing attention to the study of vulnerable populations. Generally, the elderly are among the most vulnerable members of any society, and widespread population aging has led to greater focus on elderly vulnerability. However, the absence of a valid and practical measure constrains the ability of policy-makers to address this issue in a comprehensive way. This study developed a composite indicator, The Elderly Social Vulnerability Index (ESVI), and used it to undertake a comparative analysis of the availability of support for elderly Jamaicans based on their access to human, material and social resources. The results of the ESVI indicated that while the elderly are more vulnerable overall, certain segments of the population appear to be at greater risk. Females had consistently lower scores than males, and the oldest-old had the highest scores of all groups of older persons. Vulnerability scores also varied according to place of residence, with more rural parishes having higher scores than their urban counterparts. These findings support the political economy framework which locates disadvantage in old age within political and ideological structures. The findings also point to the pervasiveness and persistence of gender inequality as argued by feminist theories of aging. Based on the results of the study it is clear that there is a need for policies that target specific population segments, in addition to universal policies that could make the experience of old age less challenging for the majority of older persons. Overall, the ESVI has displayed usefulness as a tool for theoretical analysis and demonstrated its potential as a policy instrument to assist decision-makers in determining where to target their efforts as they seek to address the issue of social vulnerability in old age. 
Data for this study came from the 2001 population and housing census of Jamaica, with multiple imputation for missing data. The index was derived from the linear aggregation of three equally weighted domains, comprised of eleven unweighted indicators which were normalized using z-scores. Indicators were selected based on theoretical relevance and data availability.
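The index construction described above, z-score normalization of unweighted indicators, grouped into equally weighted domains and linearly aggregated, can be sketched as follows. The indicator values and domain contents are invented for illustration; the actual ESVI uses eleven indicators drawn from the 2001 Jamaican census.

```python
import math

def z_scores(values):
    """Normalize one indicator across persons to z-scores."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n) or 1.0
    return [(v - mean) / sd for v in values]

def domain_scores(indicators):
    """Unweighted mean of the z-normalized indicators in one domain,
    computed per person."""
    normed = [z_scores(ind) for ind in indicators]
    n = len(normed[0])
    return [sum(ind[i] for ind in normed) / len(normed) for i in range(n)]

# Hypothetical indicators for five persons (each row is one indicator):
human    = [[0, 1, 1, 0, 1], [3, 7, 5, 2, 9]]  # e.g. literacy, schooling
material = [[1, 0, 0, 1, 0], [2, 1, 4, 1, 3]]  # e.g. housing, assets
social   = [[1, 1, 0, 0, 1]]                   # e.g. lives with family

# ESVI: linear aggregation of three equally weighted domain scores.
doms = [domain_scores(d) for d in (human, material, social)]
esvi = [sum(d[i] for d in doms) / 3.0 for i in range(5)]
```

Because every indicator is centered by the z-score step, the index is a relative measure: scores compare persons against the sample mean rather than against an absolute threshold.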
Abstract:
Ensuring the correctness of software has been a major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in past years, with several methods, techniques, and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic.

This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined, which includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level.
Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to the testing and verification of SAM models with the aid of the SAM tool.
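SAM behavioral models are expressed as Petri nets, and the token-game semantics that underlies both the model checking and the Petri net testing theory can be sketched minimally. The place and transition names below are hypothetical and do not follow SAM's actual syntax.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    pre, _ = transition
    return all(marking.get(p, 0) >= 1 for p in pre)

def fire(marking, transition):
    """Fire an enabled transition: consume one token from each input
    place and produce one token in each output place."""
    pre, post = transition
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

# A toy connector behavior: a request token flows from 'client'
# through 'channel' to 'server' (illustrative names only).
send    = (["client"], ["channel"])
receive = (["channel"], ["server"])

m0 = {"client": 1}
m1 = fire(m0, send)      # token now in 'channel'
m2 = fire(m1, receive)   # token now in 'server'
```

Exhaustively exploring the markings reachable by such firings is, in essence, what a model checker like Spin does over the translated model when verifying temporal properties.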
Abstract:
Damage during extreme wind events highlights the weaknesses of mechanical fasteners at the roof-to-wall connections in residential timber-frame buildings. The allowable capacity of the metal fasteners is based on results of unidirectional component testing that does not simulate realistic tri-axial aerodynamic loading effects. The first objective of this research was to simulate hurricane effects and study hurricane-structure interaction at full scale, facilitating better understanding of the combined impacts of wind, rain, and debris on inter-component connections at spatial and temporal scales. The second objective was to evaluate the performance of a non-intrusive roof-to-wall connection system using fiber-reinforced polymer (FRP) materials and compare its load capacity to that of an existing metal fastener under simulated aerodynamic loads.

Wall of Wind (WoW) testing, performed using FRP connections on a one-story gable-roof timber structure instrumented with a variety of sensors, was used to create a database of aerodynamic and aero-hydrodynamic loading on roof-to-wall connections tested under several parameters: angle of attack, wind-turbulence content, internal pressure conditions, and the presence or absence of rain. Based on the aerodynamic loading results obtained from the WoW tests, sets of three force components (tri-axial mean loads) were combined into a series of resultant mean forces, which were used to test the FRP and metal connections in the structures laboratory up to failure. A new component testing system and test protocol were developed for testing fasteners under simulated tri-axial rather than uni-axial loading. The tri-axial and uni-axial test results were compared for hurricane clips, and a comparison was made between the tri-axial load capacities of the FRP and metal connections.

The research findings demonstrate that the FRP connection is a viable option for use in timber roof-to-wall connection systems.
Findings also confirm that current testing methods for mechanical fasteners tend to overestimate the actual load capacity of a connector. Additionally, the research contributes to the development of a new testing protocol for fasteners using tri-axial simultaneous loads based on the aerodynamic database obtained from the WoW testing.
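Combining three mean force components into a resultant, as described above, is a straightforward vector computation. The component values below are illustrative only, not WoW measurements, and the unit choice is an assumption.

```python
import math

def resultant_force(fx, fy, fz):
    """Magnitude of the resultant of three orthogonal mean force
    components acting on a connection."""
    return math.sqrt(fx * fx + fy * fy + fz * fz)

def direction_cosines(fx, fy, fz):
    """Direction cosines locating the resultant relative to the axes,
    needed to orient the load in a component test rig."""
    r = resultant_force(fx, fy, fz)
    return (fx / r, fy / r, fz / r)

# Hypothetical mean loads on one roof-to-wall connection, with
# uplift (fz) dominant, as is typical under roof suction:
fx, fy, fz = 120.0, 80.0, 900.0   # lbf, illustrative values only
r = resultant_force(fx, fy, fz)   # magnitude applied in the lab test
```

Testing to the resultant rather than to each axis separately is what distinguishes the tri-axial protocol from conventional uni-axial component testing.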
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, three essential factors must be considered, listed here from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for lower- and upper-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing the 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
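The mean quartic error and its gradient, which drive the training improvements described above, can be sketched as follows. This is a generic formulation for illustration, not the dissertation's exact update rules.

```python
def mean_quartic_error(outputs, targets):
    """Mean quartic error: average of (y - t)**4 over the outputs,
    in place of the usual mean square error (y - t)**2."""
    n = len(outputs)
    return sum((y - t) ** 4 for y, t in zip(outputs, targets)) / n

def mqe_gradient(outputs, targets):
    """Derivative with respect to each output: 4*(y - t)**3 / n.
    Unlike the MSE gradient 2*(y - t)/n, the quartic loss has
    non-zero third and fourth derivatives, which higher-order
    training methods can exploit."""
    n = len(outputs)
    return [4.0 * (y - t) ** 3 / n for y, t in zip(outputs, targets)]
```

The cubic gradient penalizes large output errors far more heavily than small ones, so training pressure concentrates on the worst-classified characters.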
Abstract:
In Florida, third and tenth graders are required to take the Florida Comprehensive Assessment Test (FCAT), a high-stakes test. Third graders must score Level 2 or higher on the Sunshine State Standards portion of the FCAT, or fifty-one percent or higher on the norm-referenced portion of the FCAT, in order to be promoted. In 2003, the Florida Department of Education reported that 31 percent of third graders in Miami-Dade County were retained. The purpose of the study is to investigate how third-grade teachers can decrease the level of parental anxiety about this high-stakes test and to determine whether there are different levels of parental anxiety based upon certain variables. Parent workshops will be offered to families, focusing on FCAT strategies and relaxation techniques designed to increase parental efficacy and decrease parental anxiety related to high-stakes testing.
Abstract:
This qualitative study investigated factors that produced or perpetuated standardized test-based stereotype threat effects for a group of African American children. Findings revealed four themes: a perception of education as strictly test preparation, test-based stress and anxiety, racial salience, and stereotypes. Implications for practice and policy are discussed.
Abstract:
Detection canines represent the fastest and most versatile means of illicit material detection. In its simplest form, this research endeavor concerns the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the best possible training aid storage solution, one that minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed with which the canine is trained and for their reliability. As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system has been developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease the growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied.
Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines are trained or in their reliability in detecting illicit materials. Therefore, if the recommendations from this study are followed, the detection canine community will greatly benefit from the use of scientifically validated training techniques and training aids.
Abstract:
This study evaluated the early development and pilot testing of Project IMPACT, a case management intervention for victims of stalking. The Design and Development framework (Rothman & Thomas, 1994) was used as a guide for program development and evaluation. Nine research questions examined the processes and outcomes associated with program implementation.

The sample included all 36 clients who participated in Project IMPACT between February 2000 and June 2001, as well as the victim advocates who provided them with services. Quantitative and qualitative data were drawn from client case files, participant observation field notes, and interview transcriptions. Quantitative data were entered into three databases in which: (1) clients were the units of analysis (n = 36), (2) services were the units of analysis (n = 1146), and (3) goals were the units of analysis (n = 149). These data were analyzed using descriptive statistics, Pearson's chi-square, Spearman's rho, phi, Cramer's V, Wilcoxon's matched-pairs signed-rank test, and McNemar's test statistic. Qualitative data were reduced via open, axial, and selective coding methods. Grounded theory and case study frameworks were utilized to analyze these data.

Results showed that most clients noted an improved sense of well-being and safety, although residual symptoms of trauma remained for numerous individuals. Stalkers appeared to respond to criminal and civil justice-based interventions by reducing violent and threatening behaviors; however, covert behaviors continued. The study produced findings that provided preliminary support for the use of several intervention components, including support services, psycho-education, safety planning, and boundary spanning. The psycho-education and safety planning in particular seemed to help clients cognitively reframe their perceptions of the stalking experience and gain a sense of increased safety and well-being.
A 65% level of satisfactory goal achievement was observed overall, although goals involving justice-based organizations were associated with lower achievement. High service usage was associated with low-income clients and those lacking social support. Numerous inconsistencies in program implementation were found to be associated with the skills and experiences of the victim advocates. Thus, recommendations were made to further refine, develop, and evaluate the intervention.
Abstract:
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification of very large-scale integration (VLSI) designs. To accelerate the computation of a gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of the memory controller in the Verilog Hardware Description Language (Verilog HDL) that reduces memory consumption and the load on the processor; (2) hardwiring the testing part of the neural network algorithm to improve speed and performance; and (3) a design that takes only a few milliseconds to recognize a hand gesture, making it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).
Abstract:
Effective interaction with personal computers is a basic requirement for many of the functions that are performed in our daily lives. With the rapid emergence of the Internet and the World Wide Web, computers have become one of the premier means of communication in our society. Unfortunately, these advances have not become equally accessible to physically handicapped individuals. In reality, a significant number of individuals with severe motor disabilities, due to a variety of causes such as Spinal Cord Injury (SCI) and Amyotrophic Lateral Sclerosis (ALS), may not be able to use the computer mouse as a vital input device for computer interaction. The purpose of this research was to further develop and improve an existing alternative input device for computer cursor control for use by individuals with severe motor disabilities. This thesis describes the development of, and the underlying principle for, a practical hands-off human-computer interface based on electromyogram (EMG) signals and eye gaze tracking (EGT) technology, compatible with the Microsoft Windows operating system (OS). Results of the software developed in this thesis show a significant improvement in the performance and usability of the EMG/EGT cursor control HCI.