Abstract:
Computational modelling of dynamic fluid–structure interaction (DFSI) is a considerable challenge. Our approach to this class of problems involves the use of a single software framework for all the phenomena involved, employing finite volume methods on unstructured meshes in three dimensions. This method enables time- and space-accurate calculations in a consistent manner. One key application of DFSI simulation is the analysis of the onset of flutter in aircraft wings, where the work of Yates et al. [Measured and Calculated Subsonic and Transonic Flutter Characteristics of a 45° Sweptback Wing Planform in Air and Freon-12 in the Langley Transonic Dynamics Tunnel. NASA Technical Note D-1616, 1963] on the AGARD 445.6 wing planform still provides the most comprehensive benchmark data available. This paper presents the results of a significant effort to model the onset of flutter for the AGARD 445.6 wing planform geometry. A series of key issues needs to be addressed for this computational approach:
• The advantage of using a single mesh, which eliminates numerical problems when applying boundary conditions at the fluid–structure interface, is counteracted by the challenge of generating a suitably high-quality mesh in both the fluid and structural domains.
• The computational effort for this DFSI procedure, in terms of run time and memory requirements, is very significant. Practical simulations require even finer meshes and shorter time steps, demanding parallel implementation on large, high-performance parallel systems.
• The consistency and completeness of the AGARD data in the public domain is inadequate for use in the validation of DFSI codes when predicting the onset of flutter.
Abstract:
The electronics industry is developing rapidly, together with the increasingly complex problem of microelectronic equipment cooling. It has now become necessary for thermal design engineers to consider the problem of equipment cooling at some level. The use of Computational Fluid Dynamics (CFD) for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimisation of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations of the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. This paper will discuss results from an investigation into the accuracy of currently used turbulence models. A newly formulated transitional hybrid turbulence model will also be introduced, with comparisons against experimental data.
Abstract:
This paper will discuss Computational Fluid Dynamics (CFD) results from an investigation into the accuracy of several turbulence models in predicting air cooling for electronic packages and systems. New transitional turbulence models will also be proposed, with emphasis on hybrid techniques that use the k-ε model at an appropriate distance away from the wall and suitable models, with wall functions, in near-wall regions. A major proportion of the heat emitted from electronic packages can be extracted by air cooling. This flow of air throughout an electronic system, and the heat extracted, is highly dependent on the nature of the turbulence present in the flow. The use of CFD for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations of the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. The PHYSICA finite volume code was used for this investigation. With the exception of the k-ε and k-ω models, which are available as standard within PHYSICA, all other turbulence models mentioned were implemented in the source code by the authors. The LVEL, LVEL CAP, Wolfshtein, k-ε, k-ω, SST and k-ε/k-l models are described and compared with experimental data.
Abstract:
Computational Fluid Dynamics (CFD) is gradually becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, the mathematical modelling of erratic turbulent motion remains the key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt changes in the turbulent energy and other parameters in near-wall regions, a particularly fine mesh is necessary, which inevitably increases the computer storage and run-time requirements. Turbulence modelling can be considered one of the three key elements in CFD; precise mathematical theories have evolved for the other two, grid generation and algorithm development. The principal objective of turbulence modelling is to provide computational procedures of sufficient accuracy to reproduce the main structures of three-dimensional fluid flows. The flow within an electronic system can be characterized as being in a transitional state, owing to the low velocities and relatively small dimensions encountered. This paper presents simulated CFD results from an investigation into the predictive capability of turbulence models when considering both fluid flow and heat transfer phenomena. A new two-layer hybrid k-ε/k-l turbulence model for electronic application areas will also be presented, which has the advantages of being cheap in terms of the computational mesh required and economical with regard to run-time.
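As an illustration of what such a two-layer construction looks like, the sketch below gives the standard two-layer eddy-viscosity formulation with a Wolfshtein-style inner layer; the switching criterion, constants and length-scale function are generic textbook assumptions, not necessarily the paper's exact formulation:

```latex
% Two-layer eddy viscosity: algebraic k-l model in the inner (near-wall)
% layer, standard k-epsilon in the outer layer. Switching is based on the
% wall-distance turbulent Reynolds number Re_y; the threshold Re_y* ~ 200
% and the damping constant A_mu are assumed, generic values.
\nu_t =
\begin{cases}
  C_\mu \sqrt{k}\,\ell_\mu , &
    \mathrm{Re}_y = \dfrac{y\sqrt{k}}{\nu} < \mathrm{Re}_y^{*}
    \quad \text{(inner layer: } \varepsilon = k^{3/2}/\ell_\varepsilon
    \text{ prescribed algebraically)} \\[2ex]
  C_\mu \dfrac{k^{2}}{\varepsilon} , &
    \text{otherwise (outer layer: full } k\text{--}\varepsilon \text{ model)}
\end{cases}
\qquad
\ell_\mu = c_\ell\, y \left(1 - e^{-\mathrm{Re}_y / A_\mu}\right)
```

The attraction is that the inner layer needs no transport equation for ε, so the model tolerates a coarser near-wall mesh, which is the source of the savings in mesh size and run-time claimed above.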
Abstract:
This paper presents an investigation into the dynamic self-adjustment of task deployment and other aspects of self-management, through the embedding of multiple policies. Non-dedicated, loosely-coupled computing environments, such as clusters and grids, are increasingly popular platforms for parallel processing. These abundant systems are highly dynamic environments in which many sources of variability affect the run-time efficiency of tasks. The dynamism is exacerbated by the incorporation of mobile devices and wireless communication. This paper proposes an adaptive strategy for the flexible run-time deployment of tasks, so as to continuously maintain efficiency despite the environmental variability. The strategy centres on policy-based scheduling which is informed by contextual and environmental inputs, such as variance in the round-trip communication time between a client and its workers and the effective processing performance of each worker. A self-management framework has been implemented for evaluation purposes. The framework integrates several policy-controlled, adaptive services with the application code, enabling the run-time behaviour to be adapted to contextual and environmental conditions. Using this framework, an exemplar self-managing parallel application is implemented and used to investigate the extent of the benefits of the strategy.
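A minimal sketch of the kind of policy-driven deployment decision described above; the smoothing factor, latency cutoff and selection rule are invented for illustration and are not taken from the paper:

```python
class Worker:
    """Book-keeping for one remote worker, updated from run-time observations."""
    def __init__(self, name):
        self.name = name
        self.rtt = None        # smoothed round-trip communication time (seconds)
        self.throughput = 0.0  # effective processing rate (tasks per second)

    def update_rtt(self, sample, alpha=0.2):
        # Exponentially weighted moving average damps transient network spikes.
        self.rtt = sample if self.rtt is None else alpha * sample + (1 - alpha) * self.rtt

def choose_worker(workers, max_rtt=0.5):
    """Policy: exclude workers whose round-trip time exceeds a threshold,
    then prefer the highest effective processing performance."""
    eligible = [w for w in workers if w.rtt is not None and w.rtt < max_rtt]
    if not eligible:
        return None  # policy fallback: defer the task or execute it locally
    return max(eligible, key=lambda w: w.throughput)
```

Because the inputs are re-sampled continuously, the same policy yields different deployments as conditions change, which is the adaptivity the abstract describes.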
Abstract:
This paper presents an empirical investigation of policy-based self-management techniques for parallel applications executing in loosely-coupled environments. The dynamic and heterogeneous nature of these environments is discussed and the special considerations for parallel applications are identified. An adaptive strategy for the run-time deployment of tasks of parallel applications is presented. The strategy is based on embedding numerous policies which are informed by contextual and environmental inputs. The policies govern various aspects of behaviour, enhancing flexibility so that the goals of efficiency and performance are achieved despite high levels of environmental variability. A prototype self-managing parallel application is used as a vehicle to explore the feasibility and benefits of the strategy. In particular, several aspects of stability are investigated. The implementation and behaviour of three policies are discussed and sample results examined.
Abstract:
This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language, AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies, including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies, each performing various aspects of self-management within a single application. The various technologies are implemented as object components. Self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programming interface, terminology and semantics; furthermore, only a single point of embedding is required.
Abstract:
This paper describes work towards the deployment of flexible self-management into real-time embedded systems. A challenging project which focuses specifically on the development of a dynamic, adaptive automotive middleware is described, and the specific self-management requirements of this project are discussed. These requirements have been identified through the refinement of a wide-ranging set of use cases requiring context-sensitive behaviours. A sample of these use cases is presented to illustrate the extent of the demands for self-management. The strategy that has been adopted to achieve self-management, based on the use of policies, is presented. The embedded and real-time nature of the target system brings the constraints that dynamic adaptation capabilities must not require changes to the run-time code (except during hot update of complete binary modules), that adaptation decisions must have low latency, and, because the target platforms are resource-constrained, that the self-management mechanism must have low resource requirements (especially in terms of processing and memory). Policy-based computing is thus an ideal candidate for achieving the self-management, because the policy itself is loaded at run-time and can be replaced or changed in the future in the same way that a data file is loaded. Policies represent a relatively low-complexity and low-risk means of achieving self-management, with low run-time costs. Policies can be stored internally in ROM (such as default policies) as well as externally to the system. The architecture of a designed-for-purpose, powerful yet lightweight policy library is described. A suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.
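The point that a policy is loaded at run-time "in the same way that a data file is loaded" can be illustrated with a minimal Python sketch; the JSON rule format, field names and actions below are invented for illustration and are not the project's policy language:

```python
import json

# A policy is plain data: an ordered list of rules mapping conditions on the
# current context to an action name. Replacing the policy file changes the
# behaviour without modification, recompilation or redeployment of code.
DEFAULT_POLICY = [  # a built-in fallback, as would be held in ROM
    {"when": {"cpu_load": 0.9}, "action": "shed_low_priority_tasks"},
    {"when": {}, "action": "run_normally"},  # empty condition = catch-all
]

def load_policy(path):
    try:
        with open(path) as f:
            return json.load(f)
    except OSError:
        return DEFAULT_POLICY  # fall back to the internal default policy

def evaluate(policy, context):
    """Return the action of the first rule whose conditions all hold.
    A condition {"cpu_load": 0.9} holds when context["cpu_load"] >= 0.9."""
    for rule in policy:
        if all(context.get(k, 0) >= v for k, v in rule["when"].items()):
            return rule["action"]
    return None

# e.g. evaluate(load_policy("policy.json"), {"cpu_load": 0.95})
# -> "shed_low_priority_tasks"
```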
Abstract:
Providing a method of transparent communication and interoperation between distributed software is a requirement for many organisations, and several standard and non-standard infrastructures exist for this purpose. Component models do more than just provide a plumbing mechanism for distributed applications; they provide a more controlled interoperation between components. Very few component models, however, support advanced dynamic reconfigurability. This paper describes a component model which provides controlled and constrained transparent communication and interoperation between components in the form of a hierarchical component model. At the same time, the model supports advanced run-time reconfigurability of components. The process and benefits of designing a system using the presented model are discussed. A way in which reflective techniques and component frameworks can work together to produce dynamically adaptable systems is explained.
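As a sketch of the kind of run-time reconfigurability a hierarchical component model can offer, the Python below lets a parent mediate all interaction with its children and swap an implementation at run-time; the class and method names are illustrative, not the paper's model:

```python
class Component:
    """Uniform interface through which all inter-component calls pass."""
    def handle(self, request):
        raise NotImplementedError

class Composite(Component):
    """A hierarchical container: children are reachable only by name, so the
    parent controls and constrains every interaction crossing its boundary."""
    def __init__(self):
        self._children = {}

    def bind(self, name, component):
        self._children[name] = component

    def rebind(self, name, component):
        # Run-time reconfiguration: replace an implementation in place while
        # clients keep addressing the same name through the same interface.
        assert name in self._children, "can only replace an existing binding"
        self._children[name] = component

    def handle(self, request):
        target, payload = request
        return self._children[target].handle(payload)
```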
Abstract:
This paper describes a methodology for embedding dynamic behaviour into software components. The implications and system architecture requirements to support this adaptivity are discussed. This work is part of a European Commission funded and industry supported project to produce a reconfigurable middleware for use in automotive systems. Such systems must be trustable against illegal internal behaviour and against activity of external origin (from additional devices, for example). Policy-based computing is used here as an example of embedded logic. A key contribution of this work is the way in which the static and dynamic aspects of the system are interfaced, such that the behaviour can be changed very flexibly (even during run-time), without modification, recompilation or redeployment of the embedded application code. An implementation of these concepts is presented, focussing on achieving trust in the use of dynamic behaviour.
Abstract:
One of the most challenging steps in the development of coupled hydrodynamic–biogeochemical models is the combination of multiple, often incompatible computer codes that describe individual physical, chemical, biological and geological processes. This "coupling" is time-consuming, error-prone, and demanding in terms of scientific and programming expertise. The open-source, Fortran-based Framework for Aquatic Biogeochemical Models (FABM) addresses these problems by providing a consistent set of programming interfaces through which hydrodynamic and biogeochemical models communicate. Models are coded once to connect to FABM, after which arbitrary combinations of hydrodynamic and biogeochemical models can be made. Thus, a biogeochemical model code works unmodified within models of a chemostat, a vertically structured water column, and a three-dimensional basin. Moreover, complex biogeochemistry can be distributed over many compact, self-contained modules, coupled at run-time. By enabling distributed development and user-controlled coupling of biogeochemical models, FABM enables optimal use of the expertise of scientists, programmers and end-users.
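The coupling pattern can be shown schematically. FABM itself is Fortran, so the Python sketch below only mirrors the shape of the idea, a process model coded once against a framework-owned registry; all names are invented and are not FABM's actual API:

```python
class Registry:
    """Stands in for the framework: records the declared state variables,
    whose storage and transport belong to the hydrodynamic host."""
    def __init__(self):
        self.states = []

    def register_state(self, name, units):
        self.states.append((name, units))
        return name  # handle used by the process model to address its state

class BiogeochemicalModel:
    """Coded once against the framework interface, never against a particular
    host, so it runs unmodified in a chemostat, a 1-D column or a 3-D basin."""
    def __init__(self, registry):
        self.phy = registry.register_state("phytoplankton", "mmol m-3")
        self.nut = registry.register_state("nutrient", "mmol m-3")

    def source_terms(self, state):
        # Local process rates only; no assumptions about the spatial grid.
        uptake = 0.5 * state[self.nut] / (1.0 + state[self.nut]) * state[self.phy]
        return {self.phy: uptake, self.nut: -uptake}
```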
Abstract:
The census and similar sources of data have been published for two centuries, so the information that they contain should provide an unparalleled insight into the changing population of Britain over this time period. To date, however, the seemingly trivial problem of changes in boundaries has seriously hampered the use of these sources, as it makes it impossible to create long-run time series of spatially detailed data. The paper reviews methodologies that attempt to resolve this problem by using geographical information systems and areal interpolation to allow the reallocation of data from one set of administrative units onto another. This makes it possible to examine change over time for a standard geography, and thus to unlock the spatial detail and the temporal depth that are held in the census and in related sources.
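The baseline technique such reviews cover, simple area-weighted areal interpolation, can be sketched as follows; in practice the overlap areas would come from a GIS polygon overlay rather than being supplied precomputed, as assumed here:

```python
def areal_interpolate(source_values, source_areas, overlap_areas):
    """Reallocate counts from one set of administrative units onto another.

    source_values: {source_id: count}
    source_areas:  {source_id: total area of the source unit}
    overlap_areas: {(source_id, target_id): area of their intersection}

    Assumes the variable is spread evenly within each source unit, the
    key simplification that more refined methods try to improve on.
    """
    target_values = {}
    for (src, tgt), overlap in overlap_areas.items():
        share = overlap / source_areas[src]  # fraction of the unit transferred
        target_values[tgt] = target_values.get(tgt, 0.0) + source_values[src] * share
    return target_values
```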
Abstract:
Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run-time, and techniques such as speculative execution used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collect information about the values of variables not already determined, and estimate a probability measure for the sampled values. These measures may be used to guide optimisations in further analyses of the program. To address the problem of variables with measures as values, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
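A minimal sketch of the proposed run-time sampling, with invented names: observed values of a variable the static analysis could not determine are accumulated into an empirical probability measure, which later analyses consult before committing to a speculative optimisation:

```python
from collections import Counter

class ValueProfile:
    """Empirical probability measure over a variable's observed run-time values."""
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, value):
        self.counts[value] += 1
        self.total += 1

    def probability(self, value):
        return self.counts[value] / self.total if self.total else 0.0

def should_speculate(profile, value, threshold=0.95):
    # Specialise the code for `value` only when the measure says the
    # speculation will almost always succeed (threshold is an assumption).
    return profile.probability(value) >= threshold
```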
Abstract:
In Run-Time Reconfiguration (RTR) systems, the amount of reconfiguration is considerable when compared to the circuit changes implemented, because reconfiguration is not considered as part of the design flow. This paper presents a method for reconfigurable circuit design that models the underlying FPGA reconfigurable circuitry and takes it into consideration in the system design. The method is demonstrated for an image processing example on the Xilinx Virtex FPGA.
Abstract:
A method is proposed to accelerate the evaluation of the Green's function of an infinite doubly periodic array of thin wire antennas. The method is based on the expansion of the Green's function into series corresponding to the propagating and evanescent waves, and on the use of Poisson and Kummer transformations enhanced with the analytic summation of the slowly convergent asymptotic terms. Unlike existing techniques, the procedure reported here provides uniform convergence regardless of the geometrical parameters of the problem or the plane wave excitation wavelength. In addition, it is numerically stable and does not require numerical integration or internal tuning parameters, since all necessary series are calculated directly in terms of analytical functions. This means that for nonlinear problem scenarios the algorithm can be deployed without run-time intervention or recursive adjustment within a harmonic balance engine. Numerical examples are provided to illustrate the efficiency and accuracy of the developed approach as compared with the Ewald method, for which this class of problems requires run-time splitting-parameter adaptation.
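The Kummer-transformation step underlying the acceleration is the standard series-acceleration identity sketched below, where a_n are the terms of the slowly convergent series and b_n their asymptotic form with a known closed-form sum; this is a generic statement of the technique, not the paper's specific series:

```latex
% Kummer's transformation: subtract and re-add the asymptotic part.
% Requires b_n \sim a_n as n \to \infty and \sum_n b_n = B in closed form.
\sum_{n} a_n
  \;=\; \underbrace{\sum_{n} \bigl(a_n - b_n\bigr)}_{\text{rapidly convergent}}
  \;+\; \underbrace{\sum_{n} b_n}_{=\,B \text{ (analytic closed form)}}
```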