11 results for test cases generator
in Digital Commons at Florida International University
Abstract:
The aim of this work is to present a methodology for developing cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat while delivering a maximally uniform temperature distribution. The topological and geometrical characteristics of multiple-story, three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes solver and against analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
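A minimal sketch of the kind of particle-swarm loop the abstract describes, assuming a generic objective function and standard inertia/attraction coefficients; the parameter values and names are illustrative, not the authors' design tool.

```python
import random

def pso_minimize(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle-swarm optimizer: returns the best position found."""
    dim = len(bounds)
    # Initialize particle positions uniformly within bounds, velocities at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best-so-far
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm's best-so-far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 2-D sphere function over [-5, 5]^2.
best, value = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
print(best, value)
```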
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically for optimizing adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model cannot accurately capture the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is itself a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool for designing and evaluating different TSP strategies under both actuated and adaptive signal control.
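A hedged sketch of the genetic-algorithm core described above, with a toy surrogate delay function standing in for the dissertation's microscopic delay model. A true PGA would evolve several such populations concurrently and exchange individuals between them, which this serial sketch omits; all names and constants are illustrative.

```python
import random

CYCLE = 90.0                       # assumed cycle length in seconds (illustrative)
DEMAND = [0.35, 0.20, 0.30, 0.15]  # assumed flow ratio per phase (illustrative)

def delay(splits):
    """Toy surrogate delay: penalize phases whose green ratio trails demand.
    Stands in for the dissertation's microscopic delay estimation model."""
    total = sum(splits)
    return sum(max(0.0, d - g / total) ** 2 for g, d in zip(splits, DEMAND))

def evolve(pop_size=40, gens=200, mut=0.1):
    # Each chromosome is a list of candidate green times, one per phase.
    pop = [[random.uniform(5, 60) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=delay)
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < mut:                            # Gaussian mutation
                i = random.randrange(len(child))
                child[i] = max(5.0, child[i] + random.gauss(0, 3))
            children.append(child)
        pop = parents + children
    best = min(pop, key=delay)
    scale = CYCLE / sum(best)
    return [g * scale for g in best]   # green splits scaled to the cycle length

print(evolve())
```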
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Because of its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a great deal of attention in recent years, and several methods, techniques, and tools have been developed. However, more remains to be done, such as providing adequate formal analysis of software architectures. To this end, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical composition of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic.

This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for correctness with respect to temporal properties. For testing, an approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to the testing and verification of SAM models with the aid of the SAM tool.
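As a rough illustration of the Petri net behavioral model underlying SAM, here is a minimal place/transition net interpreter with exhaustive reachability, the kind of state-space exploration a model checker such as Spin performs over the translated model. The net itself is a made-up example, not SAM syntax.

```python
# Each transition maps input places to tokens consumed and output places to tokens produced.
NET = {
    "send":    ({"ready": 1},      {"in_transit": 1}),
    "receive": ({"in_transit": 1}, {"delivered": 1}),
}

def enabled(marking, transition):
    inputs, _ = NET[transition]
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    inputs, outputs = NET[transition]
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(initial):
    """Exhaustive reachability: the state space over which temporal
    properties would be checked."""
    seen, frontier = {frozenset(initial.items())}, [initial]
    while frontier:
        m = frontier.pop()
        for t in NET:
            if enabled(m, t):
                m2 = fire(m, t)
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    frontier.append(m2)
    return seen

print(len(reachable({"ready": 1})))   # number of reachable markings
```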
Abstract:
In a post-Cold War, post-9/11 world, the advent of US global supremacy resulted in the installation, perpetuation, and dissemination of an Absolutist Security Agenda (hereinafter, ASA). The US ASA explicitly and aggressively equates US national security interests with the security of all states in the international system, and it replaced the bipolar Cold War framework that defined international affairs from 1945 to 1992. Since the collapse of the USSR and the 11 September 2001 terrorist attacks, the US has unilaterally defined, implemented, and managed systemic security policy. The US ASA is indicative of a systemic category of knowledge (security) anchored in variegated conceptual and material components, such as morality, philosophy, and political rubrics. The US ASA is based on a logic that involves the following security components: (1) hyper-militarization, (2) intimidation, (3) coercion, (4) criminalization, (5) panoptic surveillance, (6) plenary security measures, and (7) unabashed US interference in the domestic affairs of select states. Such interference has produced destabilizing tensions and conflicts that have, in turn, produced resistance, revolutions, proliferation, cults of personality, and militarization. This is the case because the US ASA rests on the notion that the international system of states is an extension and instrument of US power, rather than a system and/or society of states composed of functionally sovereign entities. To analyze the US ASA, this study utilizes: (1) official government statements, legal doctrines, treaties, and policies pertaining to US foreign policy; (2) militarization rationales, budgets, and expenditures; and (3) case studies of rogue states. The data used in this study are drawn from publicly available information (academic journals, think-tank publications, government publications, and information provided by international organizations). The data support the contention that global security is effectuated via a discrete set of hegemonic/imperialistic US values and interests, finding empirical expression in legal acts (the USA PATRIOT Act of 2001) and in the concept of rogue states. Rogue states, therefore, provide test cases for clarifying the breadth, depth, and consequence of the US ASA in world affairs vis-à-vis the relationship between US security and global security.
Abstract:
Many classical as well as modern optimization techniques exist. One modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new optimization concept uses artificial ants whose behavior is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to improve computing time and robustness compared with other optimization algorithms. Objective function landscapes can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable and that the number of objective function evaluations is comparable to that of other optimization algorithms.
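A minimal sketch of one common way to carry ant colony optimization into continuous domains, in the spirit of archive-based variants such as ACO_R (sampling from Gaussian kernels centered on an archive of good solutions); this illustrates the general technique, not the thesis's specific algorithm.

```python
import random

def aco_continuous(objective, bounds, n_ants=20, archive=10, iters=200, sigma0=0.3):
    """Simplified continuous ACO: ants sample new points from Gaussian
    kernels centered on members of an archive of good solutions."""
    dim = len(bounds)
    span = [hi - lo for lo, hi in bounds]
    pts = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(archive)]
    pts.sort(key=objective)
    for it in range(iters):
        sigma = sigma0 * (1 - it / iters)            # shrink search radius over time
        for _ in range(n_ants):
            guide = random.choice(pts[: archive // 2])   # bias toward better solutions
            cand = [min(max(guide[d] + random.gauss(0, sigma * span[d]),
                            bounds[d][0]), bounds[d][1]) for d in range(dim)]
            pts.append(cand)
        pts.sort(key=objective)
        del pts[archive:]                            # keep only the best `archive` points
    return pts[0], objective(pts[0])

# Example: the Rosenbrock function, a classic test case with a curved valley.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(aco_continuous(rosen, [(-2, 2), (-2, 2)]))
```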
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function's topology. Hybrid optimization algorithms combine several optimization algorithms under a single meta-heuristic so that the hybrid is more robust, more computationally efficient, and/or more accurate than its constituent algorithms. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics code with novel boundary conditions, which characterizes pipelines better than previous codes. This code is coupled with the hybrid optimization algorithm to optimize the operation of real-world piston pumps.
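A toy sketch of the hybrid idea, under the simplifying assumption that the constituent algorithm is chosen by recent improvement rather than by the thesis's search-vector criterion; all function names and budgets are illustrative.

```python
import random

def hybrid_minimize(objective, bounds, constituents, rounds=20, probe=50):
    """Toy hybrid meta-heuristic: each round, every constituent gets a small
    evaluation budget and the incumbent moves to the best candidate found."""
    best = [random.uniform(lo, hi) for lo, hi in bounds]
    best_val = objective(best)
    for _ in range(rounds):
        results = []
        for alg in constituents:
            cand = alg(objective, bounds, best, probe)   # small probe budget
            results.append((objective(cand), cand))
        val, cand = min(results)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

def random_restart(objective, bounds, seed, budget):
    """Global exploration: sample fresh points anywhere in the domain."""
    pts = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(budget)]
    return min(pts, key=objective)

def local_perturb(objective, bounds, seed, budget):
    """Local refinement: small Gaussian steps around the incumbent."""
    best = seed
    for _ in range(budget):
        cand = [min(max(x + random.gauss(0, 0.1), lo), hi)
                for x, (lo, hi) in zip(best, bounds)]
        if objective(cand) < objective(best):
            best = cand
    return best

print(hybrid_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3,
                      [random_restart, local_perturb]))
```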
Abstract:
This dissertation describes the findings and implications of a correlational analysis. Scores earned on the Computerized Placement Test (CPT), sentence skills section, were compared with essay scores of advanced English as a Second Language (ESL) students. Because the CPT is designed for native speakers of English, it was hypothesized that it could be an invalid or unreliable instrument for non-native speakers. Florida community college students are mandated to take the CPT to determine preparedness, as are students at many other U.S. and Canadian colleges. If incoming students score low on the CPT, they may be required to take up to three semesters of remedial coursework. It is therefore essential that scores earned by non-native speakers of English accurately reflect their ability level; these students constitute a large and growing body of non-traditional students enrolled at community colleges.

The study was conducted at Miami-Dade Community College, Wolfson Campus, in fall 1997. Participants included 106 advanced ESL students who took the CPT sentence skills test and wrote final essay exams. The essay exams were holistically scored by trained readers. The participants also took the Placement Articulation Software Service (PASS) exam, an alternative form of the CPT. Scores on the CPT and the essays were compared by means of a Pearson product-moment correlation to assess the validity of the CPT; scores on the CPT and the PASS exam were compared in the same manner to assess reliability. A percentage of appropriate placements was determined by comparing essay scores to CPT cutoff score ranges. Finally, the instruments were evaluated by means of independent-samples t-tests for performance differences between gender, age, and first-language groups.

The results indicate that the CPT sentence skills test is a valid and reliable placement instrument for advanced-level ESL students who intend to pursue community college degrees. The correlations demonstrated a substantial relationship between CPT and essay scores and a marked relationship between CPT and PASS scores. Appropriate placements were made in 86% of the cases. Furthermore, the CPT was found to discriminate equally among the gender, age, and first-language groups included in this study.
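A brief sketch of the statistical machinery involved, using hypothetical paired scores and illustrative cutoff bands; the study's actual data and cutoff ranges are not reproduced here.

```python
from scipy.stats import pearsonr

# Hypothetical paired scores (illustrative, not the study's data):
cpt   = [78, 85, 62, 90, 71, 88, 55, 67, 94, 73]   # CPT sentence-skills scores
essay = [ 4,  5,  3,  5,  3,  5,  2,  3,  6,  4]   # holistic essay ratings

r, p = pearsonr(cpt, essay)   # validity: CPT scores vs. the essay criterion
print(f"r = {r:.2f}, p = {p:.3f}")

# Placement accuracy: fraction of students whose essay rating falls in the
# band implied by their CPT score (cutoffs below are purely illustrative).
def placed_correctly(cpt_score, essay_score):
    band = 0 if cpt_score < 65 else 1 if cpt_score < 85 else 2
    expected = 0 if essay_score <= 3 else 1 if essay_score <= 5 else 2
    return band == expected

rate = sum(placed_correctly(c, e) for c, e in zip(cpt, essay)) / len(cpt)
print(f"appropriate placements: {rate:.0%}")
```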
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Such damage is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two categories in most cases cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research and the resulting knowledge gaps are in large part due to the complexity of the damage phenomena during hurricanes and the lack of established measurement methodologies for quantifying rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and for measurement of rainwater intrusion, in order to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated on the basis of a tropical cyclone WDR study. The simulated WDR was then used to experimentally investigate the mechanisms of rainwater deposition and intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of direct impinging raindrops and of surface runoff rainwater over the building surface, the rain admittance factor (RAF) and the surface runoff coefficient (SRC), respectively, was developed using common shapes of low-rise buildings. The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model against experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
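As a hedged illustration of how RAF and SRC might enter an ingress estimate, a simple volume balance with hypothetical inputs; the dissertation's actual WDR estimation model is more detailed than this sketch.

```python
def rainwater_ingress(rain_rate_mm_hr, duration_hr, opening_area_m2,
                      raf, src, catchment_area_m2):
    """Illustrative ingress estimate (not the dissertation's exact model):
    direct impingement on the opening scaled by RAF, plus surface runoff
    routed past the opening from an upstream catchment, scaled by SRC."""
    rain_depth_m = rain_rate_mm_hr * duration_hr / 1000.0
    direct = raf * rain_depth_m * opening_area_m2     # m^3 via direct impingement
    runoff = src * rain_depth_m * catchment_area_m2   # m^3 via surface runoff
    return direct + runoff

# Hypothetical wall breach: 3 h of 50 mm/h wind-driven rain.
vol = rainwater_ingress(50, 3, opening_area_m2=0.05,
                        raf=0.8, src=0.3, catchment_area_m2=2.0)
print(f"estimated ingress: {vol * 1000:.0f} liters")
```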
Abstract:
Goodness-of-fit tests have been studied by many researchers. Among them, an alternative statistical test for uniformity was proposed by Chen and Ye (2009). The test was used by Xiong (2010) to test normality for the case in which both the location and scale parameters of the normal distribution are known. The purpose of the present thesis is to extend the result to the case in which the parameters are unknown. A table of critical values of the test statistic is obtained using Monte Carlo simulation. The performance of the proposed test is compared with that of the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Monte Carlo simulation results show that the proposed test performs better than the Kolmogorov-Smirnov test in many cases. The Shapiro-Wilk test remains the most powerful, although in some cases the test proposed in the present research performs better.
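A sketch of how such a critical-value table can be generated by Monte Carlo simulation when the parameters must be estimated from the sample; the statistic below is an illustrative uniformity-gap measure, not Chen and Ye's exact statistic.

```python
import numpy as np
from scipy.stats import norm

def gap_statistic(z):
    """Illustrative uniformity statistic (not Chen and Ye's exact form):
    mean absolute gap between transformed order statistics and i/(n+1)."""
    n = len(z)
    u = norm.cdf(np.sort(z))                    # probability-integral transform
    return np.abs(u - np.arange(1, n + 1) / (n + 1)).mean()

def mc_critical_value(statistic, n, alpha=0.05, reps=20000, seed=0):
    """Tabulate a critical value by Monte Carlo: simulate samples under H0
    (normality), standardize each with its *estimated* mean and standard
    deviation (parameters unknown), and take the (1 - alpha) quantile of
    the statistic's null distribution."""
    rng = np.random.default_rng(seed)
    vals = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal(n)
        z = (x - x.mean()) / x.std(ddof=1)      # mimics estimating mu and sigma
        vals[i] = statistic(z)
    return np.quantile(vals, 1 - alpha)         # reject H0 above this value

print(mc_critical_value(gap_statistic, n=30))
```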