894 results for test case optimization


Relevance:

90.00%

Publisher:

Abstract:

The world's population keeps growing, and thus the concept of smart and cognitive cities is becoming more important. Developed countries are aware of and working towards needed changes in city management. However, emerging countries require the optimization of their own city management. This chapter illustrates, based on a use case, how a city in an emerging country can quickly progress using the concept of smart and cognitive cities. Nairobi, the capital of Kenya, is chosen for the test case. More than half of the population of Nairobi lives in slums with poor sanitation, and many slum inhabitants share a single toilet, so the proper functioning and reliable maintenance of toilets are crucial. For this purpose, an approach for processing text messages based on cognitive computing (using soft computing methods) is introduced. Slum inhabitants can inform the responsible center via text message when a toilet is not functioning properly. Through cognitive computer systems, the responsible center can fix the problem quickly and efficiently by sending repair workers to the area. Focusing on the slum of Kibera, an easy-to-handle approach for slum inhabitants is presented, which can make the city more efficient, sustainable and resilient (i.e., cognitive).
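
The abstract does not specify the soft-computing pipeline itself. As a rough, hedged sketch only, the Python fragment below shows how an incoming text report might be matched against fault keywords with simple fuzzy string matching; the keyword list, threshold and function names are invented for illustration and are not taken from the chapter.

```python
from difflib import SequenceMatcher

# Hypothetical fault keywords a reporting centre might look for in SMS reports.
FAULT_KEYWORDS = {"blocked", "broken", "overflowing", "leaking"}

def fuzzy_score(word, keyword):
    """Similarity in [0, 1] between a message word and a fault keyword."""
    return SequenceMatcher(None, word.lower(), keyword).ratio()

def classify_report(message, threshold=0.8):
    """Return the fault keywords the message most likely refers to."""
    words = message.lower().split()
    return {kw for kw in FAULT_KEYWORDS
            if any(fuzzy_score(w, kw) >= threshold for w in words)}

# Example: a tolerant match despite typos in the report.
print(classify_report("Toilet at block 4 is overflwing and blockd"))
```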

Relevance:

90.00%

Publisher:

Abstract:

Data mining projects that use decision trees to classify test cases usually rank the classified cases by the probabilities the trees provide. A better method is needed for ranking test cases that have already been classified by a binary decision tree, because these probabilities are not always accurate and reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are identical for all cases that fall into the same leaf of the tree; this is only one of the reasons why such estimates cannot be used as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we present the results of a comparison of ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
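
As a minimal illustration of the baseline the paper argues against (ranking solely by the leaf probability estimate), the hedged sketch below ranks classified cases by the probability a scikit-learn decision tree assigns to them; all cases that fall in the same leaf receive the same score, which is exactly the limitation described above. The data and variable names are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training data (two features, binary class); purely illustrative.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# New, already-classified cases to be ranked.
X_test = rng.normal(size=(10, 2))
proba = tree.predict_proba(X_test)[:, 1]   # P(class = 1) from leaf frequencies
ranking = np.argsort(-proba)               # most confident positives first

# Cases landing in the same leaf share the same probability estimate,
# so ties in this ranking are unavoidable with the plain leaf-frequency score.
for i in ranking:
    leaf = tree.apply(X_test[[i]])[0]
    print(f"case {i}: leaf={leaf}, p={proba[i]:.3f}")
```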

Relevance:

90.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, analyzed, and presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select, from the set of all possible common subsequences, those with a high likelihood of containing the root cause. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to enable developers to use the approach, and it is integrated with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
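
The thesis's own algorithm is not reproduced here, but the core idea of intersecting the sequence covers of failing tests can be illustrated with a pairwise longest-common-subsequence reduction, as in the hedged sketch below; the statement identifiers and helper names are hypothetical.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two statement sequences (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # Recover one LCS by backtracking through the table.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Hypothetical sequence covers (statement IDs executed) of three failing test cases.
failing_covers = [
    ["s1", "s4", "s7", "s9", "s12"],
    ["s1", "s2", "s4", "s9", "s12"],
    ["s1", "s4", "s5", "s9", "s12", "s13"],
]

# Reducing with LCS narrows the search space toward a candidate faulty path.
candidate_path = reduce(lcs, failing_covers)
print(candidate_path)  # ['s1', 's4', 's9', 's12']
```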

Relevance:

90.00%

Publisher:

Abstract:

The constant need to improve helicopter performance requires the optimization of existing and future rotor designs. A crucial indicator of rotor capability is hover performance, which depends on the near-body flow as well as the structure and strength of the tip vortices formed at the trailing edge of the blades. Computational Fluid Dynamics (CFD) solvers must balance computational expense against preservation of the flow, and to limit that expense the mesh is often coarsened in the outer regions of the computational domain. This can lead to degradation of the vortex structures which compose the rotor wake. The current work conducts three-dimensional simulations using OVERTURNS, a three-dimensional structured grid solver that models the flow field using the Reynolds-Averaged Navier-Stokes equations. The S-76 rotor in hover was chosen as the test case for evaluating the OVERTURNS solver, focusing on methods to better preserve the rotor wake. Using the hover condition, various computational domains, spatial schemes, and boundary conditions were tested. Furthermore, a mesh adaptation routine was implemented, allowing for increased refinement of the mesh in areas of turbulent flow without the need to add points to the mesh. The adapted mesh was employed to conduct a sweep of collective pitch angles, comparing the resolved wake and integrated forces to existing computational and experimental results. The integrated thrust values showed very close agreement across all tested pitch angles, while the power was slightly over-predicted, resulting in under-prediction of the Figure of Merit. Meanwhile, the tip vortices were preserved for multiple blade passages, indicating an improvement in vortex preservation compared with previous work. Finally, further results from a single collective pitch case are presented to provide a more complete picture of the solver results.
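
The Figure of Merit trend noted above follows directly from the standard hover relation FM = C_T^{3/2} / (sqrt(2) C_P). The hedged sketch below evaluates this relation with invented coefficient values (not the S-76 data) to show how an over-predicted power coefficient lowers FM even when the thrust coefficient agrees.

```python
import math

def figure_of_merit(ct, cp):
    """Hover Figure of Merit: ideal induced power over actual power, FM = CT^1.5 / (sqrt(2) * CP)."""
    return ct ** 1.5 / (math.sqrt(2.0) * cp)

# Illustrative values only (not the S-76 results): the same thrust coefficient with
# a slightly over-predicted power coefficient lowers the Figure of Merit.
ct = 0.008
for cp, label in [(0.00062, "reference power"), (0.00066, "over-predicted power")]:
    print(f"{label}: FM = {figure_of_merit(ct, cp):.3f}")
```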

Relevance:

90.00%

Publisher:

Abstract:

Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models in the literature are based on minimizing costs or maximizing cover, i.e., covering as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered to be constant and known in advance. In a real-world situation, input data such as demand and travelling costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. A way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility whose performance stays within predefined limits, as far as possible, under all expected circumstances. The deviation robustness concept is used as the basis for developing a new competitive deviation robustness model. The competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed by which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation. With the Simulated Annealing algorithm, the developed competitive deviation model is solved for three considered norms of deviation. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The model appears to be multimodal and presents a challenge for further research.
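
The competitive part of the model rests on a Huff-type market-share calculation, which is the quantity the robustness criterion then bounds under demand variation. As a rough, hedged sketch (the attractiveness values, distances and decay exponent below are invented, not taken from the thesis), the expected market share of a candidate location can be computed as follows.

```python
import numpy as np

def huff_market_share(attractiveness, distances, decay=2.0):
    """
    Huff model: probability that each demand point patronises each facility,
    proportional to attractiveness / distance**decay.
    attractiveness: (n_facilities,), distances: (n_demand, n_facilities).
    Returns an (n_demand, n_facilities) matrix of patronage probabilities.
    """
    utility = attractiveness / distances ** decay
    return utility / utility.sum(axis=1, keepdims=True)

# Hypothetical data: 3 demand points, 2 existing facilities plus 1 new candidate (last column).
attractiveness = np.array([1.0, 1.2, 1.0])
distances = np.array([[2.0, 4.0, 1.5],
                      [3.0, 1.0, 2.5],
                      [5.0, 2.0, 4.0]])
demand = np.array([100.0, 80.0, 60.0])

share = huff_market_share(attractiveness, distances)
captured = (share[:, -1] * demand).sum() / demand.sum()
print(f"market share captured by the candidate facility: {captured:.1%}")
```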

Relevance:

80.00%

Publisher:

Abstract:

Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions. It covers the development of low-cost in-service heavy vehicle (HV) suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis. This should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics. These tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A “novel roughness” measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to some conclusions regarding suspension and wheel force frequencies, their transmission to the pavement and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of a HV to determine dynamic load sharing. This was validated against a mature dynamic load-sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different sized air lines. Dynamic load sharing at the air springs was shown to be improved for the test case of the large longitudinal air lines. The statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and “novel roughness” values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, from alterations to currently available damping characteristics combined with implementation of large longitudinal air lines.
In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11. OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
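
The improved dynamic load-sharing measure described above is based on the correlation coefficient between force signals at two elements of the vehicle. The sketch below is a minimal, hedged illustration of that idea using synthetic signals; the thesis's actual metric, instrumentation and validation against the dynamic load sharing coefficient are not reproduced here.

```python
import numpy as np

def load_sharing_correlation(force_a, force_b):
    """Pearson correlation between two force signals; values near 1 indicate that
    the two suspension elements rise and fall together (good dynamic load sharing)."""
    return np.corrcoef(force_a, force_b)[0, 1]

# Synthetic wheel-force signals (kN), for illustration only: a shared low-frequency
# body mode plus independent axle-hop noise at each wheel.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
body_mode = 2.0 * np.sin(2 * np.pi * 1.5 * t)
front = 45.0 + body_mode + 0.8 * rng.normal(size=t.size)
rear = 43.0 + body_mode + 0.8 * rng.normal(size=t.size)

print(f"dynamic load-sharing correlation: {load_sharing_correlation(front, rear):.2f}")
```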

Relevance:

80.00%

Publisher:

Abstract:

An experimental programme in 2007 used three air-suspended heavy vehicles travelling over typical urban roads to determine whether dynamic axle-to-chassis forces could be reduced by using larger-than-standard-diameter longitudinal air lines. This paper presents the methodology, interim analysis and partial results from that programme. Alterations to dynamic measures derived from axle-to-chassis forces for the case of standard-sized longitudinal air lines vs. the test case where larger longitudinal air lines were fitted are presented and discussed. This leads to conclusions regarding the possibility that dynamic loadings between heavy vehicle suspensions and chassis may be reduced by fitting larger longitudinal air lines to air-suspended heavy vehicles. Reductions in the shock and vibration loads on heavy vehicle suspension components could lead to lighter and more economical chassis and suspensions, and could therefore allow reduced tare and increased payloads without an increase in gross vehicle mass.

Relevance:

80.00%

Publisher:

Abstract:

Daylighting in tropical and sub-tropical climates presents a unique challenge that is generally not well understood by designers. In a sub-tropical region such as Brisbane, Australia, the majority of the year comprises sunny, clear skies with few overcast days, and as a consequence windows can easily become sources of overheating and glare. The main strategy for dealing with this issue is extensive shading of windows. However, this in turn prevents daylight penetration into buildings, often causing an interior to appear gloomy and dark even though there is more than sufficient daylight available. As a result, electric lighting is the main source of light, even during the day. Innovative daylighting devices which redirect light from windows offer a potential solution to this issue. These devices can potentially improve daylighting in buildings by increasing the illumination within the environment, decreasing the high contrast between the window and work regions, and deflecting potentially glare-causing sunlight away from the observer. However, the performance of such innovative daylighting devices is generally quantified under overcast skies (i.e., daylight factors) or skies without sun, which are typical of European climates, and such measures are misleading when considering these devices for tropical or sub-tropical climates. This study compared four innovative window daylighting devices in RADIANCE: light shelves, laser cut panels, micro-light guides and light redirecting blinds. These devices were simulated in RADIANCE under sub-tropical skies (for Brisbane) within the test case of a typical CBD office space. For each device, the quantity of light redirected and its distribution within the space was used as the basis for comparison. In addition, glare analysis of each device was conducted using Wienold and Christoffersen's evalglare. The analysis was conducted for selected hours of a day in each season. The majority of buildings that humans will occupy in their lifetime are already constructed, and extensive remodelling of most of these buildings is unlikely. Therefore, the most effective way to improve daylighting in the near future will be through the alteration of existing window spaces. Thus it is important to understand the performance of daylighting systems with respect to the climate in which they are to be used. This type of analysis is important to determine the applicability of a daylighting strategy so that designers can achieve energy efficiency as well as the health benefits of natural daylight.
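
The comparison basis (quantity of redirected light and its distribution) is not detailed in the abstract. As a hedged illustration only, the sketch below computes simple distribution statistics over a work-plane illuminance grid such as might be exported from a daylight simulation; the grid values and the 300 lux threshold are invented for the example.

```python
import numpy as np

def daylight_summary(illuminance_lux):
    """Simple distribution metrics over a work-plane illuminance grid (lux)."""
    grid = np.asarray(illuminance_lux, dtype=float)
    return {
        "mean_lux": grid.mean(),
        "min_to_mean_uniformity": grid.min() / grid.mean(),
        "fraction_above_300lux": (grid >= 300.0).mean(),
    }

# Hypothetical grids: rows run from near the window toward the back of a deep office.
without_device = np.array([[2500, 900, 250, 90], [2300, 850, 230, 80]])
with_device = np.array([[1800, 950, 420, 250], [1700, 900, 400, 230]])

for name, grid in [("no device", without_device), ("redirecting device", with_device)]:
    stats = daylight_summary(grid)
    print(name, {k: round(float(v), 2) for k, v in stats.items()})
```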

Relevance:

80.00%

Publisher:

Abstract:

The Libyan regime’s attacks on its own civilian population are a test case for the international community’s commitment to the notion of a “responsibility to protect” (R2P). The UN Security Council’s statement on 22 February 2011 explicitly invoked this concept by calling on “the Government of Libya to meet its responsibility to protect its population”. Yet, with Muammar Gaddafi encouraging further violence against protesters and threatening to fight “until the last drop of blood” it seems unlikely that the Security Council’s warning will be heeded. Greater pressure from the international community will be needed to bring an end to the atrocities in Libya. The international response to the Libyan crisis represents an opportunity to translate the theory of R2P into practice.

Relevance:

80.00%

Publisher:

Abstract:

Computational Fluid Dynamics (CFD) simulations are widely used in mechanical engineering. Although achieving a high level of confidence in numerical modelling is of crucial importance in the field of turbomachinery, verification and validation of CFD simulations are very tricky, especially for the complex flows encountered in radial turbines. Comprehensive studies of radial machines are available in the literature. Unfortunately, none of them includes enough detailed geometric data to be properly reproduced, and so they cannot be used for academic research and validation purposes. As a consequence, design improvements of such configurations are difficult. Moreover, it seems that well-developed analyses of radial turbines are used in commercial software but are not available in the open literature, especially at high pressure ratios. It is the purpose of this paper to provide a fully open set of data to reproduce the exact geometry of the high pressure ratio single-stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. First, preliminary one-dimensional meanline design and analysis are performed using the commercial software RITAL from Concepts-NREC in order to establish a complete reference test case available for turbomachinery code validation. The proposed design of the existing turbine is then carefully and successfully checked against the geometrical and experimental data partially published in the literature. Then, three-dimensional Reynolds-Averaged Navier-Stokes simulations are conducted by means of the Axcent-PushButton CFD software. The effect of the tip clearance gap is investigated in detail for a wide range of operating conditions. The results confirm that the 3D geometry is correctly reproduced. They also reveal that the turbine operates in a shocked condition despite being designed for high-subsonic flow, and highlight the importance of the diffuser.
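
As a small, hedged illustration of the one-dimensional meanline quantities involved (not the RITAL analysis itself, and with invented inlet conditions rather than the T-100 data), the isentropic spouting velocity and the blade speed ratio commonly used in preliminary radial-inflow turbine design can be computed as follows.

```python
import math

def spouting_velocity(t01, pressure_ratio, cp=1004.5, gamma=1.4):
    """Ideal velocity from expanding isentropically through the total-to-static pressure ratio."""
    return math.sqrt(2.0 * cp * t01 * (1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)))

# Illustrative inlet conditions only (assumed, not from the paper).
t01 = 1100.0              # rotor-inlet total temperature [K]
pr_ts = 5.0               # total-to-static pressure ratio [-]
rotor_tip_speed = 630.0   # assumed blade tip speed U [m/s]

c0 = spouting_velocity(t01, pr_ts)
print(f"spouting velocity C0 = {c0:.0f} m/s, velocity ratio U/C0 = {rotor_tip_speed / c0:.2f}")
```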

Relevance:

80.00%

Publisher:

Abstract:

This paper presents and discusses the use of different codes for the numerical simulation of a radial-inflow turbine. A radial-inflow turbine test case was selected from published literature [1], and commercial codes (Fluent and CFX) were used to perform the steady-state numerical simulations. An in-house compressible-flow simulation code, Eilmer3 [2], was also adapted to make it suitable for turbomachinery simulations, and preliminary results are presented and discussed. The code itself and its adaptation, which comprises the addition of terms for the rotating frame of reference, programmable boundary conditions for periodic boundaries, and a mixing-plane interface between the rotating and non-rotating blocks, are also discussed. Several cases with different orders of complexity in terms of geometry were considered and the results were compared across the different codes. The agreement between these results and published data is also discussed.
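
The rotating-frame terms mentioned above amount to adding centrifugal and Coriolis sources to the momentum equations. A hedged sketch of that source-term evaluation for a single cell is given below; it shows only the textbook form, not the Eilmer3 implementation.

```python
import numpy as np

def rotating_frame_momentum_source(rho, velocity, position, omega):
    """
    Momentum source per unit volume in a frame rotating at angular velocity omega:
    -rho * (2*omega x u + omega x (omega x r))   (Coriolis + centrifugal terms).
    """
    coriolis = 2.0 * np.cross(omega, velocity)
    centrifugal = np.cross(omega, np.cross(omega, position))
    return -rho * (coriolis + centrifugal)

# Illustrative cell state (all values invented): density, relative velocity [m/s],
# cell centre [m], rotation about the z-axis at 3000 rad/s.
src = rotating_frame_momentum_source(
    rho=1.2,
    velocity=np.array([150.0, 20.0, 0.0]),
    position=np.array([0.05, 0.0, 0.0]),
    omega=np.array([0.0, 0.0, 3000.0]),
)
print("momentum source [N/m^3]:", src)
```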

Relevance:

80.00%

Publisher:

Abstract:

A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
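
The evaluation step, comparing an estimated two-view geometry to the ground truth, typically reduces to pose-error measures such as the angular error in rotation and in the translation direction. A hedged sketch of that comparison with invented poses is shown below; it is a generic evaluation metric, not necessarily the one used in the paper.

```python
import numpy as np

def rotation_angle_error_deg(r_est, r_gt):
    """Angle of the residual rotation R_est * R_gt^T, in degrees."""
    residual = r_est @ r_gt.T
    cos_theta = np.clip((np.trace(residual) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def translation_angle_error_deg(t_est, t_gt):
    """Angle between estimated and ground-truth translation directions (scale is unobservable)."""
    cos_theta = np.clip(
        np.dot(t_est, t_gt) / (np.linalg.norm(t_est) * np.linalg.norm(t_gt)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def rot_y(deg):
    """Rotation matrix about the y-axis, used here only to build an example."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

# Invented example: ground truth is a 10-degree rotation about y; the estimate is slightly off.
print(rotation_angle_error_deg(rot_y(10.5), rot_y(10.0)))      # ~0.5 degrees
print(translation_angle_error_deg(np.array([1.0, 0.02, 0.0]),
                                  np.array([1.0, 0.0, 0.0])))  # ~1.1 degrees
```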

Relevance:

80.00%

Publisher:

Abstract:

Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost for this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera's OpenCL Software Development Kit and Xilinx's Vivado High-Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development for the test case of a tridiagonal linear system solver.
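
For reference, the computational kernel being accelerated is the classic Thomas algorithm for tridiagonal systems. A plain software sketch of it is given below (not the paper's OpenCL or Vivado HLS implementation), simply to make the benchmarked workload concrete.

```python
def thomas_solve(a, b, c, d):
    """
    Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c
    and right-hand side d using the Thomas algorithm (O(n)); a[0] and c[-1] are unused.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small check: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  ->  x = [1, 2, 3]
print(thomas_solve(a=[0.0, 1.0, 1.0], b=[2.0, 2.0, 2.0], c=[1.0, 1.0, 0.0], d=[4.0, 8.0, 8.0]))
```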

Relevance:

80.00%

Publisher:

Abstract:

Visual localization in outdoor environments is often hampered by the natural variation in appearance caused by weather phenomena, diurnal fluctuations in lighting, and seasonal changes. Such changes are global across an environment and, in the case of global light changes and seasonal variation, the change in appearance occurs in a regular, cyclic manner. Visual localization could be greatly improved if it were possible to predict the appearance of a particular location at a particular time, based on the appearance of the location in the past and knowledge of the nature of appearance change over time. In this paper, we investigate whether global appearance changes in an environment can be learned sufficiently well to improve visual localization performance. We use time of day as a test case, and generate transformations between morning and afternoon using sample images from a training set. We demonstrate that the learned transformation generalizes beyond the training data and show that the resulting visual localization on a test set improves relative to raw image comparison. The improvement in localization remains when the area is revisited several weeks later.
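
The paper's own transformation-learning method is not reproduced here. As a much-simplified, hedged sketch of the general idea, the fragment below fits a per-pixel linear (gain/offset) mapping from morning to afternoon training frames by least squares and applies it to a new morning image before matching; all data are synthetic and the function names are invented.

```python
import numpy as np

def fit_pixelwise_linear_transform(morning_stack, afternoon_stack):
    """
    Fit, per pixel, a gain and offset mapping morning intensity to afternoon intensity
    by least squares over the training pairs. Stacks: (n_images, height, width) in [0, 1].
    """
    x_mean = morning_stack.mean(axis=0)
    y_mean = afternoon_stack.mean(axis=0)
    cov = ((morning_stack - x_mean) * (afternoon_stack - y_mean)).mean(axis=0)
    var = ((morning_stack - x_mean) ** 2).mean(axis=0) + 1e-8
    gain = cov / var
    offset = y_mean - gain * x_mean
    return gain, offset

def apply_transform(image, gain, offset):
    """Predict the afternoon appearance of a morning image."""
    return np.clip(gain * image + offset, 0.0, 1.0)

# Synthetic training pairs: afternoon frames are a darker, lower-contrast version
# of the morning frames plus noise (illustration only).
rng = np.random.default_rng(2)
morning = rng.uniform(0.2, 1.0, size=(20, 32, 32))
afternoon = np.clip(0.6 * morning + 0.1 + 0.02 * rng.normal(size=morning.shape), 0, 1)

gain, offset = fit_pixelwise_linear_transform(morning, afternoon)
query = rng.uniform(0.2, 1.0, size=(32, 32))
predicted = apply_transform(query, gain, offset)
print(f"mean learned gain/offset: {gain.mean():.2f} / {offset.mean():.2f}")
```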