954 results for Test Case Generator
Abstract:
In this dissertation, I offer a pedagogical proposal for learning the Christian Scriptures guided by respect for the nature of the reader and the integrity of the biblical text. Christian educators have profitably developed recent theoretical interest in the body’s role in human meaning with regard to worship and praxis methodologies, but the implications of this research for communal study of the biblical text merit further development. I make the case for adopting scriptural imagination as the goal of pedagogically constructed encounters with the Christian Scriptures. The argument proceeds through a series of questions addressing both sides of the text/reader encounter.
Chapter one considers the question “what is the nature of the reader and, subsequently, the shape of the reader’s ways of knowing?” This investigation into recent literature on the body’s involvement in human knowing includes related epistemological shifts within Christian education. On the basis of this survey, imagination emerges as a compelling designator of an incorporative, constructive creaturely capacity that gives rise to a way of being in the world. Teachers of Scripture who intend to participate in Christian formation should account for the imagination’s centrality for all knowing. After briefly situating this proposal within a theological account of creatureliness, I make the initial case for scriptural imagination as a pedagogical aim.
Imagination as creaturely capacity addresses the first guiding value, but does this proposal also respect the integrity and nature of the biblical text, and specifically of biblical narratives? In response, in chapter two I take up the Acts of the Apostles as a potential test case and exemplar for the dynamics pertinent to the formation of imagination. Drawing on secondary literature on the genre and literary features of Acts, I conclude that Acts coheres with this project’s explicit interest in imagination as a central component of the process of Christian formation in relationship to the Scriptures.
Chapters three and four each take up a pericope from Acts to assess whether the theoretical perspectives developed in prior chapters generate any interpretive payoff. In each of these chapters, a particular story within Acts functions as a test case for readings of biblical narratives guided by a concern for scriptural imagination. Each of these chapters begins with further theoretical development of some element of imaginal formation. Chapter three provides a theoretical account of practices as they relate to imagination, bringing that theory into conversation with Peter’s engagement in hospitality practices with Cornelius in Acts 10:1-11:18. Chapter four discusses the formative power of narratives, with implications for the analysis of Paul’s shipwreck in Acts 27:1-28:16.
In the final chapter, I offer a two-part constructive pedagogical proposal for reading scriptural narratives in Christian communities. First, I suggest adopting resonance above relevance as the goal of pedagogically constructed encounters with the Scriptures. Second, I offer three ways of reading with the body, including the physical, ecclesial, and social bodies that shape all learning. I conclude by identifying the importance of scriptural imagination for Christian formation and witness in the twenty-first century.
Abstract:
This study examines the impact of ambient temperature on emotional well-being in the U.S. population aged 18+. The U.S. is an interesting test case because of its resources, technology and variation in climate across different areas, which also allows us to examine whether adaptation to different climates could weaken or even eliminate the impact of heat on well-being. Using survey responses from 1.9 million Americans over the period from 2008 to 2013, we estimate the effect of temperature on well-being from exogenous day-to-day temperature variation within respondents’ area of residence and test whether this effect varies across areas with different climates. We find that increasing temperatures significantly reduce well-being. Compared to average daily temperatures in the 50–60 °F (10–16 °C) range, temperatures above 70 °F (21 °C) reduce positive emotions (e.g. joy, happiness), increase negative emotions (e.g. stress, anger), and increase fatigue (feeling tired, low energy). These effects are particularly strong among less educated and older Americans. However, there is no consistent evidence that heat effects on well-being differ across areas with mild and hot summers, suggesting limited variation in heat adaptation.
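The identification strategy described above (temperature-bin dummies plus area fixed effects, so that heat effects are identified from day-to-day variation within a respondent's area) can be sketched on synthetic data. Everything below is illustrative; it is not the study's actual data, bin choices beyond those quoted, or code:

```python
import numpy as np

# Toy sketch: regress a well-being score on temperature-bin dummies plus
# area fixed effects. The 50-60 F bin is the omitted reference category,
# as in the abstract. All data here are synthetic.
rng = np.random.default_rng(1)
n = 5000
area = rng.integers(0, 20, n)            # respondent's area of residence
temp = rng.uniform(30, 95, n)            # daily mean temperature, degrees F
bins = [30, 50, 60, 70, 95]
bin_id = np.digitize(temp, bins) - 1     # 0: <50, 1: 50-60 (ref), 2: 60-70, 3: >70

area_effect = rng.normal(size=20)[area]
true_bin_effect = np.array([-0.1, 0.0, -0.05, -0.3])   # heat lowers well-being
y = true_bin_effect[bin_id] + area_effect + rng.normal(scale=0.5, size=n)

# Design matrix: bin dummies with the reference bin dropped, plus area dummies
# (the area dummies span the intercept, so no separate constant is needed).
X_bins = np.eye(4)[bin_id][:, [0, 2, 3]]
X_area = np.eye(20)[area]
X = np.hstack([X_bins, X_area])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated effect of >70 F days vs 50-60 F days:", round(beta[2], 2))
```

Because identification comes from within-area variation, any time-invariant difference between hot- and mild-climate areas is absorbed by the area dummies.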
Abstract:
Steady-state computational fluid dynamics (CFD) simulations are an essential tool in the design process of centrifugal compressors. Whilst global parameters, such as pressure ratio and efficiency, can be predicted with reasonable accuracy, the accurate prediction of detailed compressor flow fields is a much more significant challenge. Much of the inaccuracy is associated with the incorrect selection of turbulence model. The need for a quick turnaround in simulations during the design optimisation process also demands that the selected turbulence model be robust and numerically stable with short simulation times.
In order to assess the accuracy of a number of turbulence model predictions, the current study used an exemplar open CFD test case, the centrifugal compressor ‘Radiver’, to compare the results of three eddy viscosity models and two Reynolds stress type models. The turbulence models investigated in this study were (i) Spalart-Allmaras (SA) model, (ii) the Shear Stress Transport (SST) model, (iii) a modification to the SST model denoted the SST-curvature correction (SST-CC), (iv) Reynolds stress model of Speziale, Sarkar and Gatski (RSM-SSG), and (v) the turbulence frequency formulated Reynolds stress model (RSM-ω). Each was found to be in good agreement with the experiments (below 2% discrepancy), with respect to total-to-total parameters at three different operating conditions. However, for the off-design conditions, local flow field differences were observed between the models, with the SA model showing particularly poor prediction of local flow structures. The SST-CC showed better prediction of curved rotating flows in the impeller. The RSM-ω was better for the wake and separated flow in the diffuser. The SST model showed reasonably stable, robust and time efficient capability to predict global and local flow features.
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, the same is not true of software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences that are shared between a number of faulty test cases for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers.
Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool suggestions, and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only to inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
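The common-subsequence idea above can be sketched as follows. This is a minimal illustration rather than the dissertation's actual algorithm: it approximates the common subsequence of many failing traces by folding a pairwise longest-common-subsequence (LCS) computation over them (exact multi-sequence LCS is NP-hard in the number of sequences), and the trace contents are invented:

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two code-block sequences (classic DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack through the DP table to recover one LCS.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def suspicious_subsequence(failing_traces):
    """Fold pairwise LCS over the failing traces: what survives is a
    candidate for the shared faulty execution path."""
    return reduce(lcs, failing_traces)

# Hypothetical sequence covers of three failing test cases.
traces = [
    ["init", "parse", "validate", "write", "close"],
    ["init", "load", "parse", "write", "flush", "close"],
    ["boot", "parse", "write", "close"],
]
print(suspicious_subsequence(traces))  # -> ['parse', 'write', 'close']
```

As the abstract hypothesises, adding more failing traces can only shrink this subsequence, narrowing the search space the developer must inspect.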
Abstract:
The aim of this thesis was to investigate, using the real-time test case of the 2014 Commonwealth Games, whether the realist synthesis methodology could contribute to the making of health policy in a meaningful way. This was done by looking at two distinct research questions: first, whether realist synthesis could contribute new insights to the health policymaking process, and second, whether the 2014 Commonwealth Games volunteer programme was likely to have any significant, measurable, impact on health inequalities experienced by large sections of the host population. The 2014 Commonwealth Games legacy laid out ambitious plans for the event, in which it was anticipated that it would provide explicit opportunities to impact positively on health inequalities. By using realist synthesis to unpick the theories underpinning the volunteer programme, the review identifies the population subgroups for whom the programme was likely to be successful, how this could be achieved and in what contexts. In answer to the first research question, the review found that while realist methods were able to provide a more nuanced exposition of the impacts of the Games volunteer programme on health inequalities than previous traditional reviews had been able to provide, there were several drawbacks to using the method. It was found to be resource-intensive and complex, encouraging the exploration of a much wider set of literatures at the expense of an in-depth grasp of the complexities of those literatures. In answer to the second research question, the review found that the Games were, if anything, likely to exacerbate health inequalities because the programme was designed in such a way that individuals recruited to it were most likely to be those in least need of the additional mental and physical health benefits that Games volunteering was designed to provide. 
The following thesis details the approach taken to investigate both the realist approach to evidence synthesis and the likelihood that the 2014 Games volunteer programme would yield the expected results.
Abstract:
Dissertation (Master's), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: during a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools’ lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will “live” separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different common environments such as a web interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) “highlight line number 10 of file ex.c” and “when the user clicks on line 10, open a dialog box with the text ...”. The environment interprets this output and converts it to the corresponding visual effects.
The advantage of using this approach is that the output is interpreted in the same way by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project, and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
Abstract:
This work aims to explore actor-preparation methodology, centred on the theatrical theories of encounter/contact theatre, in light of the theory of the present moment and the moment of encounter. The investigation consists of an exploratory case study that seeks to understand and deepen the affinities between theatrical and psychological theories of encounter. Starting from a theoretical reflection, we sought to answer some of the questions raised through research with a group of actors. To that end, we examined whether “moments of encounter” could be identified and characterised during the actors’ training and performance. In parallel, we investigated whether the actor-preparation techniques identified might facilitate personal growth. The themes of this investigation were explored through a semi-structured interview. This case study involved three actors from the university theatre group dISPArteatro, each with at least two years in the group. In the group’s practice, the suggested affinities between conceptions of actor preparation through contact and the characteristics of the present moment/moment of encounter were found. All the actors experienced changes in their personal lives in the areas investigated (relationship with the body, relationship with themselves, and relationship with others). The actors experienced the process as personal growth and, overall, reported feeling more able to live in the moment, the here-and-now, and seeking relationships with greater intimacy and sharing.
Abstract:
Scientific studies exploring the environmental and experiential elements that help boost human happiness have become a significant and expanding body of work. Some urban designers, architects and planners are looking to apply this knowledge through policy decisions and design, but there is a great deal of room for further study and exploration. This paper looks at definitions of happiness and happiness measurements used in research. The paper goes on to introduce six environmental factors identified in a literature review that have design implications relating to happiness: Nature, Light, Surprise, Access, Identity, and Sociality. Architectural precedents are examined and design strategies are proposed for each factor, which are then applied to a test case site and building in Baltimore, Maryland. It is anticipated that these factors and strategies will be useful to architects, urban designers and planners as they endeavor to design positive user experiences and set city shaping policy.
Abstract:
The constant need to improve helicopter performance requires the optimization of existing and future rotor designs. A crucial indicator of rotor capability is hover performance, which depends on the near-body flow as well as the structure and strength of the tip vortices formed at the trailing edge of the blades. Computational Fluid Dynamics (CFD) solvers must balance computational expenses with preservation of the flow, and to limit computational expenses the mesh is often coarsened in the outer regions of the computational domain. This can lead to degradation of the vortex structures which compose the rotor wake. The current work conducts three-dimensional simulations using OVERTURNS, a three-dimensional structured grid solver that models the flow field using the Reynolds-Averaged Navier-Stokes equations. The S-76 rotor in hover was chosen as the test case for evaluating the OVERTURNS solver, focusing on methods to better preserve the rotor wake. Using the hover condition, various computational domains, spatial schemes, and boundary conditions were tested. Furthermore, a mesh adaption routine was implemented, allowing for the increased refinement of the mesh in areas of turbulent flow without the need to add points to the mesh. The adapted mesh was employed to conduct a sweep of collective pitch angles, comparing the resolved wake and integrated forces to existing computational and experimental results. The integrated thrust values showed very close agreement across all tested pitch angles, while the power was slightly overpredicted, resulting in underprediction of the Figure of Merit. Meanwhile, the tip vortices have been preserved for multiple blade passages, indicating an improvement in vortex preservation when compared with previous work. Finally, further results from a single collective pitch case were presented to provide a more complete picture of the solver results.
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and they are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of basis vectors and coefficients, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. This algorithm is implemented in the existing finite element code ConnectFlow developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large scale test case.
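Under a linearised view of the method above, each basis vector changes the log transmissivities so that the computed pressure rises by one unit at one measurement point while the others stay fixed; the coefficients that minimise the misfit are then simply the pressure residuals at the measurement points. A toy numpy sketch follows, in which the linear response matrix `J` is a hypothetical stand-in for the actual ConnectFlow finite-element model, not part of the thesis:

```python
import numpy as np

# Hypothetical toy setup: a linear map J from changes in fracture
# log-transmissivities to pressure changes at the measurement points.
rng = np.random.default_rng(0)
n_frac, n_meas = 12, 3
J = rng.normal(size=(n_meas, n_frac))

# Basis vectors: column k perturbs the log-transmissivities so pressure at
# measurement point k increases by one unit while the rest stay constant.
# The minimum-norm choice is the Moore-Penrose pseudoinverse of J.
B = np.linalg.pinv(J)            # shape (n_frac, n_meas); J @ B ~ identity

log_T = np.zeros(n_frac)         # current log-transmissivity field
p_meas = np.array([1.5, -0.3, 0.8])
p_calc = J @ log_T               # pressures computed from the current field

# Coefficients minimising the pressure misfit: in the linear model they
# equal the residual at the measurement points.
coeff = p_meas - p_calc
log_T_conditioned = log_T + B @ coeff

print(np.allclose(J @ log_T_conditioned, p_meas))
```

In the real problem the pressures depend nonlinearly on all the fracture transmissivities, so the basis vectors must be recomputed and the update iterated, which is why the error-function minimisation in the thesis is nontrivial.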
Abstract:
Forecasting abrupt variations in wind power generation (the so-called ramps) helps achieve large scale wind power integration. One of the main issues to be confronted when addressing wind power ramp forecasting is the way in which relevant information is identified from large datasets to optimally feed forecasting models. To this end, an innovative methodology oriented to systematically relate multivariate datasets to ramp events is presented. The methodology comprises two stages: the identification of relevant features in the data and the assessment of the dependence between these features and ramp occurrence. As a test case, the proposed methodology was employed to explore the relationships between atmospheric dynamics at the global/synoptic scales and ramp events experienced in two wind farms located in Spain. The achieved results suggested different connection degrees between these atmospheric scales and ramp occurrence. For one of the wind farms, it was found that ramp events could be partly explained from regional circulations and zonal pressure gradients. To perform a comprehensive analysis of ramp underlying causes, the proposed methodology could be applied to datasets related to other stages of the wind-to-power conversion chain.
Abstract:
Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models that can be found in the literature are based on minimizing costs or maximizing cover, to cover as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered to be constant and known in advance. In a real-world situation, input data like demand and travelling costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. A way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility that stays within predefined limits for all expectable circumstances as well as possible. The deviation robustness concept is used as the basis to develop a new competitive deviation robustness model. The competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share, despite variations in demand. A test case is developed by which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation. With the Simulated Annealing algorithm, the developed competitive deviation model is solved for three considered norms of deviation. Finally, a grid search is also performed to illustrate the landscape of the objective function of the competitive deviation model. The model appears to be multimodal and presents a challenge for further research.
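The Huff-style market-share calculation at the heart of the competitive model can be sketched as follows. The distance-decay exponent, coordinates, and weights below are assumed for illustration only, not taken from the thesis:

```python
import math

def huff_share(candidate, competitors, demand_points, beta=2.0):
    """Expected market share of a candidate facility under a Huff-type model:
    each demand point patronises facilities with probability proportional to
    attractiveness / distance**beta. beta is an assumed decay exponent."""
    captured = total = 0.0
    for (x, y, w) in demand_points:          # w = demand weight at this point
        def utility(facility):
            fx, fy, attractiveness = facility
            d = math.hypot(x - fx, y - fy) or 1e-9   # guard against d == 0
            return attractiveness / d ** beta
        u_new = utility(candidate)
        denom = u_new + sum(utility(c) for c in competitors)
        captured += w * u_new / denom        # expected demand captured here
        total += w
    return captured / total

# Hypothetical demand points (x, y, weight) and one rival facility.
demand = [(0, 0, 100), (4, 0, 50), (2, 3, 80)]
rivals = [(4, 1, 1.0)]
print(huff_share((0, 1, 1.0), rivals, demand))
```

In the deviation-robustness setting, one would evaluate this share over a set of demand scenarios and judge a candidate location by its worst-case (minimum) share, which is the quantity the stochastic optimizers then maximise.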
Abstract:
The objective of the article is to investigate the dynamic capabilities developed and used by WEG in its internationalization process and to explain how these capabilities help the company build and sustain competitive advantage. The article presents an exploratory study of WEG's internationalization process in Argentina and China. Taking dynamic capabilities as its analytical approach, the article contributes to the international management literature in two respects. First, it adds an analytical perspective on internationalization based on dynamic capabilities, which is still quite limited in the literature. Second, by making dynamic capabilities the central element in the analysis of the internationalization process, it proposes an integrative framework combining the economic and behavioural theories used to explain firms' internationalization, which are usually treated independently and sometimes in antagonistic ways. The results show how dynamic capabilities were articulated within WEG in its internationalization to Argentina and the subsequent move to China. The dynamic capabilities developed in Argentina were absorbed by the Brazilian headquarters and could then be applied in the internationalization to China. However, no more complex organizational structure could be identified in which inter-subsidiary relationships would share dynamic capabilities as proposed in the framework.