910 results for test case generation
Abstract:
This paper presents the development and application of a multi-objective optimization framework for the design of two-dimensional multi-element high-lift airfoils. An innovative and efficient optimization algorithm, Multi-Objective Tabu Search (MOTS), has been selected as the core of the framework. The flow field around the multi-element configuration is simulated using the commercial computational fluid dynamics (CFD) suite ANSYS CFX. Element shapes and deployment settings are considered as design variables in the optimization of the GARTEUR A310 airfoil presented here. A validation and verification process of the CFD simulation for the GARTEUR airfoil is performed using available wind-tunnel data. Two design examples are presented in this study: a single-point optimization aiming at concurrently improving the lift and drag performance of the test case at a fixed angle of attack, and a multi-point optimization. The latter aims at introducing operational robustness and off-design performance into the design process. Finally, the performance of the MOTS algorithm is assessed by comparison with the leading NSGA-II (Non-dominated Sorting Genetic Algorithm II) optimization strategy, using an equivalent framework developed by the authors within the industrial sponsor's environment. To eliminate CFD-solver dependencies, three optimum solutions from the Pareto-optimal set have been cross-validated. As a result of this study, MOTS is demonstrated to be an efficient and effective algorithm for aerodynamic optimization. Copyright © 2012 Tech Science Press.
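For readers unfamiliar with the multi-objective vocabulary used above, a minimal sketch of Pareto dominance for a two-objective lift/drag problem follows (illustrative only, with made-up numbers; this is not the MOTS implementation described in the paper):

```python
# Minimal sketch of Pareto dominance for a two-objective airfoil
# problem (maximize lift, minimize drag). Illustrative only; not the
# MOTS implementation described in the paper.

def dominates(a, b):
    """True if design a dominates design b: no worse in both
    objectives and strictly better in at least one."""
    lift_a, drag_a = a
    lift_b, drag_b = b
    no_worse = lift_a >= lift_b and drag_a <= drag_b
    strictly_better = lift_a > lift_b or drag_a < drag_b
    return no_worse and strictly_better

def pareto_front(designs):
    """Filter (lift, drag) tuples down to the non-dominated set."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

candidates = [(2.1, 0.050), (2.3, 0.060), (2.0, 0.045), (2.3, 0.055)]
# (2.3, 0.060) is dominated by (2.3, 0.055); the other three survive.
print(pareto_front(candidates))
```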
Abstract:
Hybrid numerical large eddy simulation (NLES) and detached eddy simulation (DES) methods are assessed on a labyrinth seal geometry. A high-order (sixth-order) discretization scheme is used and is validated using a test case of a two-dimensional vortex. The hybrid approach adopts a new blending function and, along with DES, is initially validated using a simple cavity flow. The NLES method is also validated outside of RANS zones. It is found that there is very little resolved turbulence in the cavity for the DES simulation. For the labyrinth seal calculations, the DES approach is problematic, giving virtually no resolved turbulence content. Over the tooth tips the extent of the LES region is small, which is likely to be a strong contributor to excessive flow damping in these regions. On the other hand, the zonal Hamilton-Jacobi approach did not suffer from this trait. In both cases the meshes used are considered adequate for hybrid RANS-LES. Fortunately (or perhaps unfortunately), the DES profiles are in agreement with the time-mean experimental measurements. It is concluded that for an inexperienced CFD practitioner this could have wider implications, particularly if transient results such as unsteady loading are desired. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc.
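For background (standard DES practice, not a result of this paper), the DES limiter replaces the RANS length scale with a hybrid one that switches to an LES scale away from walls:

\[
\tilde{\ell} = \min\left(d_w,\; C_{\mathrm{DES}}\,\Delta\right),
\qquad
\Delta = \max(\Delta x, \Delta y, \Delta z),
\]

where \(d_w\) is the wall distance and \(C_{\mathrm{DES}} \approx 0.65\). Wherever \(C_{\mathrm{DES}}\Delta > d_w\) the model stays in RANS mode, which is consistent with the small LES region and excessive damping reported over the tooth tips.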
Abstract:
Hybrid numerical large eddy simulation (NLES), detached eddy simulation (DES) and URANS methods are assessed on a cavity and a labyrinth seal geometry. A high-order (sixth-order) discretization scheme is used and is validated using the test case of a two-dimensional vortex. The hybrid approach adopts a new blending function. For the URANS simulations, the flow within the cavity remains steady, and the results show significant variation between models. Surprisingly, low levels of resolved turbulence are observed in the cavity for the DES simulation, and the cavity shear layer remains two-dimensional. The hybrid RANS-NLES approach does not suffer from this trait. For the labyrinth seal, both the URANS and DES approaches give low levels of resolved turbulence. The zonal Hamilton-Jacobi approach, on the other hand, gives significantly more resolved content. Both DES and hybrid RANS-NLES give good agreement with the experimentally measured velocity profiles. Again, there is significant variation between the URANS models, and swirl velocities are overpredicted. © 2013 John Wiley & Sons, Ltd.
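The paper's specific blending function is not given in the abstract; a generic blend of the kind commonly used in hybrid RANS-LES methods (an illustrative form only) interpolates the two length scales through a smooth function of wall distance \(d_w\) and filter width \(\Delta\):

\[
\ell_{\mathrm{hyb}} = (1 - F)\,\ell_{\mathrm{RANS}} + F\,\ell_{\mathrm{LES}},
\qquad
F = \tanh\!\left[\left(\frac{d_w}{C\,\Delta}\right)^{n}\right],
\]

so that \(F \to 0\) at the wall (RANS behaviour) and \(F \to 1\) in the interior (LES behaviour); \(C\) and \(n\) are calibration constants.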
Abstract:
Underground structures constitute crucial components of transportation networks. Considering their significance for modern societies, their proper seismic design is of great importance. However, this design may become very difficult given the lack of knowledge regarding their seismic behavior. Several issues that significantly affect this behavior (e.g., earth pressures on the structure, seismic shear stresses around the structure, and complex deformation modes of rectangular structures during shaking) remain open. The problem is greater for non-circular (i.e., rectangular) structures, where soil-structure interaction effects are expected to be maximized. The paper presents representative experimental results from one test case in a series of dynamic centrifuge tests performed on rectangular tunnels embedded in dry sand. The tests were carried out at the centrifuge facility of the University of Cambridge, within the Transnational Task of the SERIES EU research program. The presented test case is also numerically simulated and studied. Preliminary full dynamic time-history analyses of the coupled soil-tunnel system are performed using ABAQUS. Soil nonlinearity and soil-structure interaction are modeled, following relevant specifications for underground structures and tunnels. Numerical predictions are compared with experimental results and discussed. Based on this comprehensive experimental and numerical study, the seismic behavior of rectangular embedded structures is better understood and modeled, constituting an important step in the development of appropriate specifications for the seismic design of shallow rectangular tunnels.
Abstract:
Basis path testing is a very powerful structural testing criterion. The number of test paths equals the cyclomatic complexity of the program as defined by McCabe. Traditional test generation methods select the paths either without considering the constraints on variables or interactively. In this note, an efficient method is presented to generate a set of feasible basis paths. Experiments show that this method can automatically generate feasible basis paths for real-world C programs in acceptable time.
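McCabe's measure, which fixes the size of the basis path set, is straightforward to compute from the control flow graph; a minimal sketch follows (illustrative only; the paper's contribution is selecting feasible basis paths, not computing the count):

```python
# Minimal sketch of McCabe's cyclomatic complexity V(G) = E - N + 2P
# for a control flow graph. Illustrative only; the paper's method
# additionally selects *feasible* basis paths.

def cyclomatic_complexity(edges, num_components=1):
    """edges: list of (src, dst) pairs of the CFG."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * num_components

# CFG of `if (a) {...} else {...}; while (b) {...}`:
cfg = [(0, 1), (0, 2), (1, 3), (2, 3),   # if/else diamond
       (3, 4), (4, 3), (3, 5)]           # while loop and exit
print(cyclomatic_complexity(cfg))        # 7 - 6 + 2 = 3 basis paths
```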
Abstract:
Many testing methods are based on program paths. A well-known problem with them is that some paths are infeasible. To decide the feasibility of paths, we may solve a set of constraints. In this paper, we describe constraint-based tools that can be used for this purpose. They accept constraints expressed in a natural form, which may involve variables of different types such as integers, Booleans, reals and fixed-size arrays. The constraint solver is an extension of a Boolean satisfiability checker and it makes use of a linear programming package. The solving algorithm is described, and examples are given to illustrate the use of the tools. For many paths in the testing literature, their feasibility can be decided in a reasonable amount of time.
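As a rough illustration of the kind of path-feasibility query such tools decide, the following sketch uses the off-the-shelf Z3 solver standing in for the authors' combined Boolean-satisfiability/linear-programming solver (constraints are made up for the example):

```python
# Sketch of path feasibility checking with an SMT solver (here Z3,
# standing in for the paper's custom SAT/linear-programming solver).
# A path is feasible iff its branch conditions are simultaneously
# satisfiable; a model then yields concrete test inputs.
from z3 import Int, Real, ToReal, And, Solver, sat

x = Int('x')
y = Real('y')

# Branch conditions collected along one candidate path:
path = And(x > 0, x < 10, y == 2.5 * ToReal(x), y > 20)

s = Solver()
s.add(path)
if s.check() == sat:
    print('feasible, e.g.', s.model())   # e.g. x = 9, y = 22.5
else:
    print('infeasible path')
```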
Abstract:
Carrying out independent third-party security functional testing is an important step in the evaluation of information security products. For information security products typified by secure database management systems, testing against the system specification alone cannot fully reflect actual system behavior; the system security policy must also be satisfied. This paper proposes an automatic test case generation method for security functional testing based on a security policy model. The method comprises steps of syntax-based partitioning, rule-based partitioning and type-based partitioning, and generates, from the formal security model, a set of operational test cases that correctly describe system behavior. The method helps improve test quality, uncover defects that are difficult to find with manual testing, reduce repetitive work in the testing process, achieve test automation and improve testing efficiency.
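A schematic of the partition-then-generate idea might look as follows (the rule set, roles and names are hypothetical; the paper derives its partitions from a formal security policy model):

```python
# Schematic sketch of rule/type-based partitioning for security
# functional tests. The roles, objects and policy are hypothetical;
# the paper derives partitions from a formal security policy model.
from itertools import product

subjects   = ['admin', 'auditor', 'ordinary_user']   # type-based partition
objects    = ['system_table', 'user_table']
operations = ['select', 'insert', 'delete']

def expected(subject, obj, op):
    """Toy stand-in for the security policy: only admin may touch
    system tables; only select is universally allowed on user tables."""
    if obj == 'system_table':
        return subject == 'admin'
    return op == 'select' or subject == 'admin'

# One operational test case per cell of the partition product.
test_cases = [(s, o, op, expected(s, o, op))
              for s, o, op in product(subjects, objects, operations)]
for case in test_cases[:4]:
    print(case)
```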
Abstract:
Software testing is an important means of ensuring software quality. As software technology develops, software grows in scale and programs grow in complexity, and software testing has gradually moved from manual operation toward automation. Automated software testing has become a prominent research topic in software engineering worldwide. This thesis studies two problems in automated software testing, belonging respectively to automatic test data generation and fault finding. The main contributions are as follows. First, a method is proposed for automatically generating test data for C programs that contain strings and string function calls. Character variables in the C program are treated as integers ranging over 0-255, strings are represented as character arrays, and string functions are modeled as first-order logic formulas and assignment statements. Function call statements are described by preconditions and postconditions, so string function calls in the program can be replaced by logical formulas and assignments, after which path analysis techniques generate test data for the program automatically. An automated tool was also implemented that generates test data for real C programs. Second, a method is proposed for automatically checking whether a program contains infinite loops. The method is based on static code analysis and combines loop unrolling with path feasibility analysis. Checking paths for the loop under inspection are first generated by traversing the control flow graph; then, by analyzing the feasibility of these paths and the relations between them, the method determines whether the paths match an infinite-loop pattern. A prototype tool was implemented on this basis and evaluated on a set of benchmark programs. Experimental results show that the tool detects infinite loops in C programs efficiently and with high accuracy; it is highly automated and can handle complex control flow and nested loops.
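A toy version of the string encoding described above, for a single path condition, might look like this (all names are hypothetical; Z3 stands in for the thesis's own path-analysis tooling):

```python
# Toy sketch of the encoding described above: characters as integers
# in 0..255, a string as a fixed-size character array, and a
# strlen-style call replaced by first-order constraints. Z3 stands in
# for the thesis's path-analysis machinery; names are hypothetical.
from z3 import Int, IntVector, And, Implies, Solver, sat

N = 8
s = IntVector('c', N)            # char buffer c[0..N-1]
length = Int('len')

char_range = And([And(c >= 0, c <= 255) for c in s])
# Postcondition of len = strlen(s): s[len] is NUL, no earlier NUL.
strlen_post = And(length >= 0, length < N,
                  And([Implies(length == i, s[i] == 0) for i in range(N)]),
                  And([Implies(i < length, s[i] != 0) for i in range(N)]))

solver = Solver()
# Path condition from a branch such as `if (strlen(buf) == 3)`.
solver.add(char_range, strlen_post, length == 3)
if solver.check() == sat:
    m = solver.model()
    print([m.eval(c, model_completion=True).as_long() for c in s])
```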
Abstract:
Investigating the interplay between continental weathering and erosion, climate, and atmospheric CO2 concentrations is significant for understanding the mechanisms that forced Cenozoic global cooling and for predicting the future climatic and environmental response to increasing temperature and CO2 levels. The Miocene represents an ideal test case, as it encompasses two distinct extreme climate periods: the Miocene Climatic Optimum (MCO), the warmest interval of the past 35 Myr of Earth's history, and the transition to the Late Cenozoic icehouse mode with the establishment of the east Antarctic ice sheet. However, the precise role of continental weathering during this period of major climate change is poorly understood. Here we show changes in the rates of Miocene continental chemical weathering and physical erosion, which we tracked using the chemical index of alteration (CIA) and mass accumulation rate (MAR), respectively, from Ocean Drilling Program (ODP) Sites 1146 and 1148 in the South China Sea. Significantly increased CIA values and terrigenous MARs during the MCO (ca. 17-15 Ma), compared with earlier and later periods, suggest extreme continental weathering and erosion at that time. Similarly high rates are revealed in the early-middle Miocene of Asia, the European Alps, and offshore Angola. This suggests that rapid sedimentation during the MCO was a global erosion event triggered by climate rather than regional tectonic activity. The close coherence of our records with high temperature, strong precipitation, increased burial of organic carbon and elevated atmospheric CO2 concentration during the MCO argues for long-term, close coupling between continental silicate weathering, erosion, climate and atmospheric CO2 during the Miocene. Citation: Wan, S., W. M. Kurschner, P. D. Clift, A. Li, and T. Li (2009), Extreme weathering/erosion during the Miocene Climatic Optimum: Evidence from sediment record in the South China Sea, Geophys. Res. Lett., 36, L19706, doi:10.1029/2009GL040279.
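For reference, the chemical index of alteration is conventionally computed from molar oxide proportions (the standard Nesbitt and Young definition; the abstract does not restate it):

\[
\mathrm{CIA} = \frac{\mathrm{Al_2O_3}}{\mathrm{Al_2O_3} + \mathrm{CaO^*} + \mathrm{Na_2O} + \mathrm{K_2O}} \times 100,
\]

where \(\mathrm{CaO^*}\) is the CaO of the silicate fraction only; higher values indicate more intense chemical weathering.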
Abstract:
Comfort is, in essence, satisfaction with the environment; with respect to the indoor environment, it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building life-cycle, mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted into the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology, proposed by this research, that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, allowing it to be re-used in later stages of the building project and in future projects. It does this using the following. Comfort Performances – simplified numerical representations of the comfort of the indoor environment, quantifying the comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings – a means of classifying the comfort conditions of the indoor environment according to an appropriate standard; they are generated by comparing different Comfort Performances and provide additional information about the comfort conditions that is not readily determined from the individual Comfort Performances. Comfort History – a continuous descriptive record of the comfort throughout the project, focused on documenting the items and activities, proposed and implemented, that could potentially affect comfort; each aspect of the Comfort History is linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building life-cycle; they are stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS System was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system combining six components: Building Standards; Modelling & Simulation; Physical Measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; and Knowledge Storage and Access. Results from a test case application of the LCMS system – an existing office room at a research facility – highlighted that while some aspects of comfort were being maintained, the building's environment was not in compliance with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically only considered during early design, can be measured and managed through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
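A schematic data model for the three ComMet records might look as follows (a sketch only; the field names are hypothetical, inferred from the description above):

```python
# Schematic data model for ComMet's three comfort records. Field
# names are hypothetical, inferred from the abstract's description.
from dataclasses import dataclass, field

@dataclass
class ComfortPerformance:
    stage: str        # building life-cycle stage
    metric: str       # standard comfort metric used
    value: float      # simplified numerical representation

@dataclass
class ComfortRating:
    standard: str     # classification standard applied
    grade: str        # class derived by comparing performances
    performances: list = field(default_factory=list)

@dataclass
class ComfortHistory:
    entries: list = field(default_factory=list)   # descriptive record

    def log(self, item: str) -> None:
        """Document an item or activity that could affect comfort."""
        self.entries.append(item)
```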
Abstract:
© 2013 The Association for the Study of Animal Behaviour. Social complexity, often estimated by group size, is seen as driving the complexity of vocal signals, but its relation to olfactory signals, which arguably arose to function in nonsocial realms, remains underappreciated. That olfactory signals also may mediate within-group interaction, vary with social complexity and promote social cohesion underscores a potentially crucial link with sociality. To examine that link, we integrated chemical and behavioural analyses to ask whether olfactory signals facilitate reproductive coordination in a strepsirrhine primate, the Coquerel's sifaka, Propithecus coquereli. Belonging to a clade comprising primarily solitary, nocturnal species, the diurnal, group-living sifaka represents an interesting test case. Convergent with diurnal, group-living lemurids, sifakas expressed chemically rich scent signals, consistent with the social complexity hypothesis for communication. These signals minimally encoded the sex of the signaller and varied with female reproductive state. Likewise, sex and female fertility were reflected in within-group scent investigation, scent marking and overmarking. We further asked whether, within breeding pairs, the stability or quality of the pair's bond influences the composition of glandular signals and patterns of investigatory or scent-marking behaviour. Indeed, reproductively successful pairs tended to show greater similarity in their scent signals than did reproductively unsuccessful pairs, potentially through chemical convergence. Moreover, scent marking was temporally coordinated within breeding pairs and was influenced by past reproductive success. That olfactory signalling reflects social bondedness or reproductive history lends support to recent suggestions that the quality of relationships may be a more valuable proxy than group size for estimating social complexity. We suggest that olfactory signalling in sifakas is more complex than previously recognized and, as in other socially integrated species, can be a crucial mechanism for promoting group cohesion and maintaining social bonds. Thus, the evolution of sociality may well be reflected in the complexity of olfactory signalling.
Abstract:
The high energetic costs of building and maintaining large brains are thought to constrain encephalization. The 'expensive-tissue hypothesis' (ETH) proposes that primates (especially humans) overcame this constraint through reduction of another metabolically expensive tissue, the gastrointestinal tract. Small guts characterize animals specializing on easily digestible diets. Thus, the hypothesis may be tested via the relationship between brain size and diet quality. Platyrrhine primates present an interesting test case, as they are more variably encephalized than other extant primate clades (excluding Hominoidea). We find a high degree of phylogenetic signal in the data for diet quality, endocranial volume and body size. Controlling for phylogenetic effects, we find no significant correlation between relative diet quality and relative endocranial volume. Thus, diet quality fails to account for differences in platyrrhine encephalization. One taxon in particular, Brachyteles, violates predictions made by ETH in having a large brain and a low-quality diet. Dietary reconstructions of stem platyrrhines further indicate that a relatively high-quality diet was probably in place prior to increases in encephalization. Therefore, it is unlikely that a shift in diet quality was a primary constraint release for encephalization in platyrrhines and, by extrapolation, humans.
Abstract:
This work presents an accurate computational analysis of levitated liquid-droplet oscillations in AC and DC magnetic fields. The AC magnetic field, interacting with the electric current it induces within the liquid metal droplet, generates intense fluid flow and coupled free-surface oscillations. A pseudo-spectral technique is used to solve the turbulent fluid flow equations for the continuously, dynamically transformed axisymmetric fluid volume. The volume electromagnetic force distribution is updated as the shape and position change. We start with the ideal-fluid test case of undamped Rayleigh-frequency oscillations in the absence of gravity, and then add viscous damping and DC magnetic field damping. The oscillation frequency spectra are further analysed for droplets levitated against gravity in various combinations of AC and DC magnetic fields. In the extreme case, the levitation dynamics of an electrically poorly conducting, diamagnetic droplet (water) are simulated. Applications are aimed at purely electromagnetic material-processing techniques and at measuring material properties under uncontaminated conditions.
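The undamped reference frequencies for the ideal-fluid test case are the classical Rayleigh modes of an inviscid droplet (a standard result, quoted here for context):

\[
\omega_l^2 = l\,(l-1)(l+2)\,\frac{\sigma}{\rho R^3},
\]

where \(\sigma\) is the surface tension, \(\rho\) the density and \(R\) the droplet radius; the fundamental \(l = 2\) mode gives \(\omega_2^2 = 8\sigma/\rho R^3\).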
Abstract:
Motion transparency provides a challenging test case for our understanding of how visual motion, and other attributes, are computed and represented in the brain. However, previous studies of visual transparency have used subjective criteria which do not confirm the existence of independent representations of the superimposed motions. We have developed measures of performance in motion transparency that require observers to extract information about two motions jointly, and therefore test the information that is simultaneously represented for each motion. Observers judged whether two motions were at 90° to one another; the base direction was randomized so that neither motion taken alone was informative. The precision of performance was determined by the standard deviations (S.D.s) of probit functions fitted to the data. Observers also made judgments of orthogonal directions between a single motion stream and a line, between one of two transparent motions and a line, and between two spatially segregated motions. The data show that direction judgments with transparency can be made with accuracy comparable to segregated (non-transparent) conditions, supporting the idea that transparency involves the equivalent representation of two global motions in the same region. The precision of this joint direction judgment is, however, 2–3 times poorer than that for a single motion stream. The precision of directional judgment for a single stream is reduced only by a factor of about 1.5 by superimposing a second stream. The major effect on performance, therefore, appears to be associated with the need to compute and compare two global representations of motion, rather than with interference between the dot streams per se. Experiment 2 tested the transparency of motions separated by a range of angles from 5° to 180° by requiring subjects to set a line matching the perceived direction of each motion. The S.D.s of these settings demonstrated that directions of transparent motions were represented independently for separations over 20°. Increasing dot speed from 1 to 10 deg/s improved directional performance but had no effect on transparency perception. Transparency was also unaffected by variations of dot density between 0.1 and 19 dots/deg².
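The stated precision measure, the S.D. of a fitted probit function, corresponds to fitting a cumulative Gaussian psychometric function to the response data; a minimal sketch of such a fit follows (synthetic data, not the study's):

```python
# Minimal sketch of the probit analysis described above: fit a
# cumulative Gaussian to response proportions and read off the
# standard deviation as the precision measure. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Probability of a 'greater than 90 deg' response vs. offset x."""
    return norm.cdf(x, loc=mu, scale=sigma)

offsets = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)  # deg
p_resp  = np.array([0.02, 0.15, 0.30, 0.50, 0.72, 0.86, 0.98])

(mu, sigma), _ = curve_fit(psychometric, offsets, p_resp, p0=(0.0, 10.0))
print(f'bias = {mu:.1f} deg, precision (S.D.) = {sigma:.1f} deg')
```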
Abstract:
We present the first detailed kinematical analysis of the planetary nebula Abell 63, which is known to contain the eclipsing close-binary nucleus UU Sge. Abell 63 provides an important test case for investigating the role of close-binary central stars in the evolution of planetary nebulae. Long-slit observations were obtained using the Manchester echelle spectrometer combined with the 2.1-m San Pedro Martir Telescope. The spectra reveal that the central bright rim of Abell 63 has a tube-like structure. A deep image shows collimated lobes extending from the nebula, which are shown to be high-velocity outflows. The kinematic ages of the nebular rim and the extended lobes are calculated to be 8400 ± 500 and 12900 ± 2800 yr, respectively, which suggests that the lobes were formed at an earlier stage than the nebular rim. This is consistent with expectations that disc-generated jets form immediately after the common-envelope phase. A morphological-kinematical model of the central nebula is presented, and the best-fitting model is found to have the same inclination as the orbital plane of the central binary system; this is the first proof that a close-binary system directly affects the shaping of its nebula. A Hubble-type flow is well established by the morphological-kinematical modelling of the observed line profiles and imagery. Two possible formation models for the elongated lobes of Abell 63 are considered: (i) a low-density, pressure-driven jet excavates a cavity in the remnant asymptotic giant branch (AGB) envelope; (ii) high-density bullets form the lobes in a single ballistic ejection event.
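The quoted kinematic ages follow from the usual assumption of ballistic (Hubble-type) expansion, age equals size over expansion speed; for an angular radius \(\theta\) at distance \(D\) this reads (a standard relation, with the numerical factor from unit conversion, not taken from the paper):

\[
t_{\mathrm{kin}} = \frac{R}{v_{\mathrm{exp}}}
\approx 4740\,\frac{\theta\,[\mathrm{arcsec}]\; D\,[\mathrm{kpc}]}{v_{\mathrm{exp}}\,[\mathrm{km\,s^{-1}}]}\ \mathrm{yr}.
\]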