916 results for automatic test case generation
Abstract:
Steady-state computational fluid dynamics (CFD) simulations are an essential tool in the design process of centrifugal compressors. Whilst global parameters, such as pressure ratio and efficiency, can be predicted with reasonable accuracy, the accurate prediction of detailed compressor flow fields is a much more significant challenge. Much of the inaccuracy is associated with the incorrect selection of the turbulence model. The need for a quick turnaround in simulations during the design optimisation process also demands that the selected turbulence model be robust and numerically stable, with short simulation times.
In order to assess the accuracy of a number of turbulence model predictions, the current study used an exemplar open CFD test case, the centrifugal compressor ‘Radiver’, to compare the results of three eddy-viscosity models and two Reynolds-stress-type models. The turbulence models investigated were (i) the Spalart-Allmaras (SA) model, (ii) the Shear Stress Transport (SST) model, (iii) a modification of the SST model denoted SST-curvature correction (SST-CC), (iv) the Reynolds stress model of Speziale, Sarkar and Gatski (RSM-SSG), and (v) the turbulence-frequency-formulated Reynolds stress model (RSM-ω). Each was found to be in good agreement with the experiments (below 2% discrepancy) with respect to total-to-total parameters at three different operating conditions. At the off-design conditions, however, local flow-field differences were observed between the models, with the SA model showing particularly poor prediction of local flow structures. The SST-CC model gave better predictions of curved rotating flows in the impeller, while the RSM-ω was better for the wake and separated flow in the diffuser. The SST model showed reasonably stable, robust and time-efficient capability to predict both global and local flow features.
Abstract:
The aim of this thesis was to investigate, using the real-time test case of the 2014 Commonwealth Games, whether the realist synthesis methodology could contribute to the making of health policy in a meaningful way. This was done by looking at two distinct research questions: first, whether realist synthesis could contribute new insights to the health policymaking process, and second, whether the 2014 Commonwealth Games volunteer programme was likely to have any significant, measurable impact on the health inequalities experienced by large sections of the host population. The 2014 Commonwealth Games legacy laid out ambitious plans for the event, which was anticipated to provide explicit opportunities to impact positively on health inequalities. By using realist synthesis to unpick the theories underpinning the volunteer programme, the review identifies the population subgroups for whom the programme was likely to be successful, how this could be achieved, and in what contexts. In answer to the first research question, the review found that while realist methods were able to provide a more nuanced exposition of the impacts of the Games volunteer programme on health inequalities than previous traditional reviews had been able to, there were several drawbacks to using the method: it was found to be resource-intensive and complex, encouraging the exploration of a much wider set of literatures at the expense of an in-depth grasp of the complexities of those literatures. In answer to the second research question, the review found that the Games were, if anything, likely to exacerbate health inequalities, because the programme was designed in such a way that the individuals recruited to it were most likely to be those in least need of the additional mental and physical health benefits that Games volunteering was designed to provide. The following thesis details the approach taken to investigate both the realist approach to evidence synthesis and the likelihood that the 2014 Games volunteer programme would yield the expected results.
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: over a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools’ lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from the code duplication, these tools will then “live” separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different environments, such as a web interface, within Eclipse, etc. It includes a novel text-based output language that makes it possible to present results graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) “highlight line number 10 of file ex.c” and “when the user clicks on line 10, open a dialog box with the text ...”. The environment interprets this output and converts it into the corresponding visual effects. The advantage of this approach is that the output is interpreted in the same way by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project, and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
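As an illustration of what such a structured output might look like, here is a minimal sketch assuming a JSON-like encoding; the command names and field names are invented for the example and are not the actual EasyInterface output language.

```python
import json

# A hypothetical structured version of the two example commands from the
# abstract; the action and field names are illustrative assumptions only.
output = {
    "commands": [
        {   # "highlight line number 10 of file ex.c"
            "action": "highlight",
            "file": "ex.c",
            "line": 10,
        },
        {   # "when the user clicks on line 10, open a dialog box with the text ..."
            "action": "on-click",
            "file": "ex.c",
            "line": 10,
            "then": {"action": "dialog", "text": "..."},
        },
    ]
}

# Each environment (web interface, Eclipse plugin, ...) would parse this same
# structure and render the corresponding visual effects.
print(json.dumps(output, indent=2))
```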
Abstract:
This work aims to explore actor-preparation methodology, centred on the theatrical theories of encounter/contact theatre, in the light of the theory of the present moment and the moment of encounter. The investigation consists of an exploratory case study that seeks to understand and deepen the affinities between the theatrical and psychological theories of encounter. Starting from a theoretical reflection, we sought to answer some of the questions raised through research with a group of actors. To this end, we sought to determine whether it is possible to identify and characterise “moments of encounter” during the actors’ training and performance. In parallel, we investigated whether the actor-preparation techniques identified might have a facilitating effect on personal growth. The themes of this investigation were explored through a semi-structured interview. This case study involved three university theatre actors from dISPArteatro, each with at least 2 years in the group. In the group’s practice, the suggested affinities were found between the conceptions of actor preparation through contact and the characteristics of the present moment/moment of encounter. All the actors experienced changes in their personal lives in the areas investigated (relationship with the body, relationship with themselves, and relationship with others). The actors experienced the process as personal growth and, overall, reported feeling more able to live in the moment, the here-and-now, and seeking out relationships with greater intimacy and sharing.
Abstract:
Scientific studies exploring the environmental and experiential elements that help boost human happiness have become a significant and expanding body of work. Some urban designers, architects and planners are looking to apply this knowledge through policy decisions and design, but there is a great deal of room for further study and exploration. This paper looks at definitions of happiness and the happiness measurements used in research. The paper goes on to introduce six environmental factors identified in a literature review that have design implications relating to happiness: Nature, Light, Surprise, Access, Identity, and Sociality. Architectural precedents are examined and design strategies are proposed for each factor, which are then applied to a test case site and building in Baltimore, Maryland. It is anticipated that these factors and strategies will be useful to architects, urban designers and planners as they endeavor to design positive user experiences and set city-shaping policy.
Abstract:
The constant need to improve helicopter performance requires the optimization of existing and future rotor designs. A crucial indicator of rotor capability is hover performance, which depends on the near-body flow as well as the structure and strength of the tip vortices formed at the trailing edge of the blades. Computational Fluid Dynamics (CFD) solvers must balance computational expense against preservation of the flow, and to limit computational expense the mesh is often coarsened in the outer regions of the computational domain. This can lead to degradation of the vortex structures that compose the rotor wake. The current work conducts three-dimensional simulations using OVERTURNS, a structured-grid solver that models the flow field using the Reynolds-Averaged Navier-Stokes equations. The S-76 rotor in hover was chosen as the test case for evaluating the OVERTURNS solver, focusing on methods to better preserve the rotor wake. Using the hover condition, various computational domains, spatial schemes, and boundary conditions were tested. Furthermore, a mesh adaption routine was implemented, allowing for increased refinement of the mesh in areas of turbulent flow without the need to add points to the mesh. The adapted mesh was employed to conduct a sweep of collective pitch angles, comparing the resolved wake and integrated forces to existing computational and experimental results. The integrated thrust values showed very close agreement across all tested pitch angles, while the power was slightly over-predicted, resulting in under-prediction of the Figure of Merit. Meanwhile, the tip vortices were preserved for multiple blade passages, indicating an improvement in vortex preservation compared with previous work. Finally, further results from a single collective-pitch case were presented to provide a more complete picture of the solver results.
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of basis vectors and coefficients, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
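The following is a minimal numerical sketch of the update step, assuming a toy linear forward model in place of the ConnectFlow solve; the array names, sizes, and the pseudo-inverse construction of the basis vectors are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frac, n_meas = 50, 4                    # illustrative sizes (assumptions)

# Toy linearized forward model: pressures p = A^T log_T (a stand-in for the
# finite element solve, which is non-linear in reality).
A = rng.normal(size=(n_frac, n_meas))
log_T = rng.normal(size=n_frac)           # current log fracture transmissivities
p_meas = np.array([1.0, 0.8, 1.2, 0.9])   # measured pressures (made up)

def pressures(log_T):
    return A.T @ log_T

# Basis vectors: column j of B changes log_T so that the pressure rises by one
# unit at measurement point j and stays constant at the others, i.e. A^T B = I.
# One such choice is the Moore-Penrose pseudo-inverse.
B = np.linalg.pinv(A.T)                   # shape (n_frac, n_meas)

# Coefficients minimizing the squared pressure misfit; with exact basis
# vectors the minimizer is simply the current residual at the measurement points.
alpha = p_meas - pressures(log_T)
log_T_cond = log_T + B @ alpha            # update: linear combination of basis vectors

print(np.round(pressures(log_T_cond) - p_meas, 10))   # ~ all zeros
```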
Abstract:
Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models in the literature are based on minimizing costs or maximizing cover, so as to cover as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered constant and known in advance. In a real-world situation, input data such as demand and travel costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. One way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility that stays within predefined limits, as well as possible, under all foreseeable circumstances. The deviation robustness concept is used as the basis for developing a new competitive deviation robustness model. Competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed with which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation. With the Simulated Annealing algorithm, the developed competitive deviation model is solved for three considered deviation norms. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The model appears to be multimodal and poses a challenge for further research.
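For concreteness, here is a minimal sketch of a Huff-type market-share calculation of the kind the competitive model builds on: the probability that a customer at demand point i patronizes facility j is that facility's distance-decayed attractiveness, normalized over all facilities. The coordinates, attractiveness values, and decay exponent below are made-up assumptions.

```python
import numpy as np

# Made-up demand points, facility locations, and attractiveness values.
demand_xy = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 3.0]])
demand_w  = np.array([100.0, 60.0, 40.0])        # demand weights
fac_xy    = np.array([[1.0, 1.0], [3.0, 2.0]])   # competitor + new facility
attract   = np.array([5.0, 8.0])                 # facility attractiveness
beta      = 2.0                                  # distance-decay exponent (assumed)

# Huff model: P(i chooses j) = A_j / d_ij^beta, normalized over facilities j.
d = np.linalg.norm(demand_xy[:, None, :] - fac_xy[None, :, :], axis=2)
u = attract / d**beta
P = u / u.sum(axis=1, keepdims=True)

# Market share of the new facility (index 1): demand-weighted choice probability.
share_new = (demand_w * P[:, 1]).sum() / demand_w.sum()
print(f"market share of new facility: {share_new:.1%}")
```

A robust variant would re-evaluate this share under perturbed demand weights and keep the location only if the minimum share stays above the predefined limit.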
Abstract:
Software product lines are families of closely related products, typically formed by combinations of a set of software features. It is generally not feasible to test all the products of the family, since the number of products is very large owing to the combinatorial explosion of features. For this reason, coverage criteria have been proposed that aim to test at least all interactions between features without testing every product, for example all pairs of features (pairwise coverage). In addition, it is desirable to test first the products composed of a set of high-priority features. This problem is known as Prioritized Pairwise Test Data Generation. In this work we propose a technique based on integer linear programming to generate this prioritized test suite. Our study reveals that the integer-linear-programming proposal achieves statistically better results, in both quality and computation time, than the existing techniques for this problem.
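The sketch below illustrates the prioritized pairwise objective with a simple greedy ordering rather than the integer-linear-programming formulation proposed in the work; the feature priorities, the candidate products, and the pair-weighting rule (sum of the two features' priorities) are assumptions made for the example.

```python
from itertools import combinations

# Made-up feature priorities and candidate products (sets of selected features).
priority = {"A": 3, "B": 2, "C": 1, "D": 1}
products = [{"A", "B"}, {"A", "C", "D"}, {"B", "C"}, {"A", "B", "D"}]

def pairs(product):
    """All feature pairs covered by a product."""
    return set(combinations(sorted(product), 2))

def weight(pair):
    # Assumed weighting: sum of the two features' priorities.
    return priority[pair[0]] + priority[pair[1]]

# Greedy prioritization: repeatedly pick the product covering the largest
# total weight of not-yet-covered pairs, so high-priority interactions
# are tested first.
covered, ordered = set(), []
remaining = list(products)
while remaining:
    best = max(remaining, key=lambda p: sum(weight(q) for q in pairs(p) - covered))
    remaining.remove(best)
    ordered.append(best)
    covered |= pairs(best)

for p in ordered:
    print(sorted(p))
```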
Abstract:
This paper proposes an optimal-sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the generators' automatic voltage regulators (AVR) are determined by the optimal-sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, called the power sensitivity mode. Test results are presented to show the good performance of this approach.
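As a hedged illustration of the kind of sensitivity relation such an approach rests on (not the paper's exact formulation): if the optimum satisfies the first-order conditions of the optimal power flow and the active constraint set does not change under the perturbation, differentiating those conditions with respect to the load parameter p yields a linear system that moves the optimum directly, without an iterative re-solve.

```latex
% First-order (KKT) conditions at the optimum for parameter p:
%   \nabla_x L(x^*, \lambda^*, p) = 0, \qquad g(x^*, p) = 0.
% Differentiating with respect to p gives the sensitivity system:
\begin{bmatrix}
  \nabla_{xx}^{2} L & \nabla_x g^{\top} \\
  \nabla_x g        & 0
\end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta \lambda \end{bmatrix}
= -
\begin{bmatrix} \nabla_{xp}^{2} L \\ \nabla_p g \end{bmatrix}
\Delta p
```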
Abstract:
This is a survey of CASE tools that currently support OCL in the automatic generation of Java code, studying and analysing them by means of a test model consisting of a class diagram of the UML static model and a varied sample of OCL instructions, with the aim of detecting their shortcomings by analysing the code obtained and determining whether or not each type of constraint is handled, and whether the constraints are implemented correctly in the code.
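As a hypothetical illustration of the kind of constraint being checked (not taken from the surveyed tools): an OCL invariant such as `context Account inv: self.balance >= 0` should surface in the generated code as a runtime guard. The sketch below mimics such a guard in Python, standing in for the generated Java.

```python
# Hypothetical example: the OCL invariant
#   context Account inv: self.balance >= 0
# rendered as a runtime guard of the kind a code generator might emit.
class Account:
    def __init__(self, balance: float) -> None:
        self.balance = balance
        self._check_invariants()

    def withdraw(self, amount: float) -> None:
        self.balance -= amount
        self._check_invariants()

    def _check_invariants(self) -> None:
        # Corresponds to the OCL constraint above; a test model would
        # exercise this path to see whether the tool emits the check.
        if not self.balance >= 0:
            raise ValueError("OCL invariant violated: balance >= 0")

acct = Account(100.0)
acct.withdraw(30.0)       # ok
# acct.withdraw(200.0)    # would raise: invariant violated
```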
Abstract:
In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student, with the results evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to give students a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion-code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of the mean and a paired t-test.
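A minimal sketch of the first exercise's generate-then-evaluate idea, written in Python for illustration (the actual system is built on Matlab and the Goodle GMS fusion-code architecture); the constraint ranges, noise level, and grading tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)      # per-student seed -> personalized data

# Randomized calibration data under simple constraints (assumed ranges).
conc = np.linspace(0.5, 5.0, 6)                        # standard concentrations
true_slope = rng.uniform(0.8, 1.2)
true_intercept = rng.uniform(0.0, 0.05)
noise_sd = 0.01
signal = true_slope * conc + true_intercept + rng.normal(0.0, noise_sd, conc.size)

# Quality parameters of the calibration line (ordinary least squares).
slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
lod = 3.3 * noise_sd / slope              # detection limit from blank noise (ICH rule)

print(f"slope={slope:.4f} intercept={intercept:.4f} R^2={r2:.4f} LOD={lod:.3f}")

# Automatic evaluation: compare a student's submitted value with the value
# recomputed from that student's own data set, within a tolerance.
def grade(submitted: float, reference: float, tol: float = 0.01) -> bool:
    return abs(submitted - reference) <= tol * max(1.0, abs(reference))

print(grade(round(slope, 2), slope))      # e.g. grading a submitted slope
```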
Abstract:
In this paper, we consider the classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM; previous work deals only with the case where the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with a lower, yet guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. To solve the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm allows a trade-off to be obtained between the length and the fault coverage of test suites.
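As a small illustration of the setting, the sketch below encodes a toy deterministic FSM and extends a user-defined test suite with a transition cover, a standard ingredient of complete test generation methods; it is not the algorithm or the completeness conditions proposed in the paper, and the machine itself is made up.

```python
# Deterministic FSM: (state, input) -> (next_state, output). Toy example only.
fsm = {
    ("s0", "a"): ("s1", 0),
    ("s0", "b"): ("s0", 1),
    ("s1", "a"): ("s0", 1),
    ("s1", "b"): ("s1", 0),
}
initial = "s0"
inputs = ["a", "b"]

def extend_with_transition_cover(user_suite):
    """Extend a user-defined suite until every specification transition is
    exercised (one classic ingredient of complete test generation)."""
    suite = list(user_suite)
    # Transitions already covered by the user-defined tests.
    covered, = (set(),)
    for seq in suite:
        state = initial
        for x in seq:
            covered.add((state, x))
            state = fsm[(state, x)][0]
    # Breadth-first state cover: a shortest access sequence to each state.
    access, frontier = {initial: ()}, [initial]
    while frontier:
        s = frontier.pop(0)
        for x in inputs:
            t = fsm[(s, x)][0]
            if t not in access:
                access[t] = access[s] + (x,)
                frontier.append(t)
    # Add one test per uncovered transition: access sequence + that input.
    for (s, x) in fsm:
        if (s, x) not in covered:
            suite.append(list(access[s]) + [x])
    return suite

print(extend_with_transition_cover([["a", "a"]]))
# -> [['a', 'a'], ['b'], ['a', 'b']]
```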
Abstract:
This paper describes a program for the automatic generation of code for Intel's 8051 microcontroller. The code is generated from a place-transition Petri net specification. Our goal is to minimize programming time. The code generated by our program has been observed to match the net model exactly, and no changes need to be made to the generated code for it to compile for the target architecture.
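A toy sketch of the semantics that such generated code has to implement: a place-transition net represented as input/output place lists, and a scan loop that fires enabled transitions by moving tokens. The net, the naming, and the Python rendering are illustrative assumptions; the paper's generator emits 8051 code and its net format is not reproduced here.

```python
# Place-transition net: each transition lists its input and output places.
# Toy net: t0 moves a token p0 -> p1, t1 moves it back.
transitions = {
    "t0": {"in": ["p0"], "out": ["p1"]},
    "t1": {"in": ["p1"], "out": ["p0"]},
}
marking = {"p0": 1, "p1": 0}   # initial token counts

def enabled(t):
    """A transition is enabled if every input place holds at least one token."""
    return all(marking[p] >= 1 for p in transitions[t]["in"])

def fire(t):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

# Generated microcontroller code would run an equivalent scan loop;
# here we just step the net a few times.
for _ in range(4):
    for t in transitions:
        if enabled(t):
            fire(t)
            print(f"fired {t}: {marking}")
            break
```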