Abstract:
In this talk, we propose an all-regime Lagrange-Projection-like numerical scheme for the gas dynamics equations. By all regime, we mean that the numerical scheme is able to compute accurate approximate solutions with an under-resolved discretization with respect to the Mach number M, i.e. such that the ratio between the Mach number M and the mesh size or the time step is small compared to 1. The key idea is to decouple the acoustic and transport phenomena and then alter the numerical flux in the acoustic approximation to obtain a uniform truncation error in terms of M. This modified scheme is conservative and endowed with good stability properties with respect to the positivity of the density and the internal energy. A discrete entropy inequality, under a condition on the modification, is obtained thanks to a reinterpretation of the modified scheme in the Harten, Lax and van Leer formalism. A natural extension to multi-dimensional problems discretized on unstructured meshes is proposed. Then a simple and efficient semi-implicit scheme is also proposed. The resulting scheme is stable under a CFL condition driven by the (slow) material waves rather than the (fast) acoustic waves and thus satisfies the all-regime property. Numerical evidence is presented showing the ability of the scheme to deal with test cases where the flow regime varies from low to high Mach numbers.
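A minimal sketch of the kind of low-Mach pressure-flux correction described above, for a simple acoustic (Suliciu-type) interface solver; the impedance choice and the form of the correction factor theta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def acoustic_interface_states(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4, low_mach_fix=True):
    """Interface velocity and pressure of a simple acoustic (Suliciu-type) solver, with an
    optional low-Mach pressure correction in the spirit of the abstract. The choice
    theta = min(1, local Mach number) is an illustrative assumption."""
    c_l = np.sqrt(gamma * p_l / rho_l)            # left sound speed
    c_r = np.sqrt(gamma * p_r / rho_r)            # right sound speed
    a = 1.01 * max(rho_l * c_l, rho_r * c_r)      # acoustic impedance (subcharacteristic condition)

    u_star = 0.5 * (u_l + u_r) - 0.5 * (p_r - p_l) / a
    theta = min(1.0, max(abs(u_l) / c_l, abs(u_r) / c_r)) if low_mach_fix else 1.0
    p_star = 0.5 * (p_l + p_r) - 0.5 * theta * a * (u_r - u_l)   # theta damps the spurious pressure jump
    return u_star, p_star

# Near-incompressible interface (local Mach number ~ 1e-3)
print(acoustic_interface_states(1.0, 1e-3, 1.0, 1.0, -1e-3, 1.0))
```

Without the correction, the pressure jump at the interface scales like a*(u_r - u_l), which pollutes the solution when the Mach number is small; scaling it by theta restores a truncation error that is uniform in M, which is the mechanism the abstract describes.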
Abstract:
In this work, we introduce a new class of numerical schemes for rarefied gas dynamics problems described by collisional kinetic equations. The idea consists in reformulating the problem using a micro-macro decomposition and subsequently solving the microscopic part by means of asymptotic-preserving Monte Carlo methods. We consider two types of decompositions, the first leading to the Euler system of gas dynamics and the second to the Navier-Stokes equations for the macroscopic part. In addition, the particle method that solves the microscopic part is designed in such a way that the global scheme becomes computationally less expensive as the solution approaches the equilibrium state, as opposed to standard methods for kinetic equations, whose computational cost increases with the number of interactions. At the same time, the statistical error due to the particle part of the solution decreases as the system approaches the equilibrium state. As a result, the method degenerates to solving only the macroscopic hydrodynamic equations (Euler or Navier-Stokes) in the limit of an infinite number of collisions. In the last part, we show the behavior of this new approach in comparison to standard Monte Carlo techniques for solving the kinetic equation by testing it on different problems that typically arise in rarefied gas dynamics simulations.
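As a rough illustration of the micro-macro decomposition these schemes build on, the sketch below splits a discrete-velocity distribution into its local Maxwellian and a remainder g whose conservative moments vanish; the velocity grid and the test distribution are illustrative choices, not the paper's setting.

```python
import numpy as np

# Micro-macro decomposition f = M[U] + g on a 1D velocity grid. The Maxwellian M[U] carries
# the macroscopic state (rho, u, T); the remainder g is what a particle/Monte Carlo method
# would have to represent, and it shrinks as the solution approaches equilibrium.
v = np.linspace(-8.0, 8.0, 401)
dv = v[1] - v[0]

def moments(f):
    rho = np.sum(f) * dv
    u = np.sum(f * v) * dv / rho
    T = np.sum(f * (v - u) ** 2) * dv / rho
    return rho, u, T

def maxwellian(rho, u, T):
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

# A non-equilibrium distribution (two drifting beams)
f = 0.6 * maxwellian(1.0, 1.5, 0.8) + 0.4 * maxwellian(1.0, -2.0, 0.5)

rho, u, T = moments(f)
g = f - maxwellian(rho, u, T)          # microscopic part

# The conservative moments of g vanish (up to quadrature error), so few particles are
# needed near equilibrium, which is the cost reduction the abstract points to.
print(moments(f))
print([np.sum(g * w) * dv for w in (np.ones_like(v), v, (v - u) ** 2)])
```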
Abstract:
This paper deals with the development and the analysis of asymptotically stable and consistent schemes in the joint quasi-neutral and fluid limits for the collisional Vlasov-Poisson system. In these limits, classical explicit schemes suffer from time step restrictions due to the small plasma period and Knudsen number. To overcome this problem, we propose a new scheme that is stable for time steps independent of the small-scale dynamics and whose computational cost is comparable to that of standard explicit schemes. In addition, this scheme reduces automatically to consistent discretizations of the underlying asymptotic systems. In this first work on the subject, we propose a first-order-in-time scheme and perform a relative linear stability analysis for such problems. The proposed framework permits the extension of this approach to high-order schemes in the near future. Finally, we show through numerical experiments the capability of the method to deal with small scales.
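The time-step restriction mentioned above can be illustrated on a toy fast oscillator standing in for the plasma period: an explicit Euler step blows up unless the fast scale is resolved, whereas an implicit treatment remains stable for time steps independent of it. This is only an analogy under that assumption, not the scheme proposed in the paper.

```python
import numpy as np

# Fast oscillation of frequency omega (analogue of the plasma period) integrated with an
# explicit and with an implicit Euler step, with dt much larger than 1/omega.
omega = 1.0e4      # fast frequency (illustrative value)
dt = 1.0e-2        # time step much larger than 1/omega
A = np.array([[0.0, omega], [-omega, 0.0]])
I = np.eye(2)

G_explicit = I + dt * A                      # forward Euler amplification matrix
G_implicit = np.linalg.inv(I - dt * A)       # backward Euler amplification matrix

spectral_radius = lambda M: max(abs(np.linalg.eigvals(M)))
print("explicit:", spectral_radius(G_explicit))   # > 1  -> unstable unless dt ~ 1/omega
print("implicit:", spectral_radius(G_implicit))   # < 1  -> stable for dt independent of omega
```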
Abstract:
It is suggested here that the ultimate accuracy of DFT methods arises from the type of hybridization scheme followed. This idea can be cast into a mathematical formulation using an integrand connecting the noninteracting and the interacting particle systems. We consider two previously developed models for it, dubbed HYB0 and QIDH, and assess a large number of exchange-correlation functionals against the AE6, G2/148, and S22 reference data sets. An interesting consequence of these hybridization schemes is that the error bars, including the standard deviation, are found to decrease markedly with respect to the density-based (nonhybrid) case. This improvement is substantially larger than the variations due to the underlying density functional used. We thus finally hypothesize about the universal character of the HYB0 and QIDH models.
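For orientation, the "integrand connecting the noninteracting and the interacting particle system" referred to above is the adiabatic-connection integrand; its general form is recalled below as a sketch only (the specific HYB0 and QIDH model forms of the integrand are not reproduced here).

```latex
% General adiabatic-connection formula; HYB0 and QIDH correspond to particular models
% of the integrand W_lambda, whose exact forms are given in the work itself.
E_{xc}[\rho] \;=\; \int_{0}^{1} W_{\lambda}[\rho]\,\mathrm{d}\lambda ,
\qquad
W_{\lambda}[\rho] \;=\; \langle \Psi_{\lambda}\,|\,\hat{V}_{ee}\,|\,\Psi_{\lambda}\rangle \;-\; J[\rho].
```

Here λ = 0 corresponds to the noninteracting Kohn-Sham system, where the integrand reduces to the exact-exchange energy, and λ = 1 to the fully interacting one, so the model chosen for the integrand fixes the admixture of exact exchange a given hybridization scheme employs.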
Abstract:
Heat management in mines is a growing issue as mines expand in size and depth and as the infrastructure required to maintain them grows. It is a concern both for the health and safety of workers, as set by the regulations of governing bodies, and for the heat-sensitive equipment that may be found throughout the mine workings. To reduce the exposure of those working in hot environments, engineering and management systems can monitor and control the environmental conditions within the mine. The successful implementation of these methods can limit the downtime caused by heat-stress environments, which can increase overall production. This thesis introduces an approach to monitoring and data-based heat management. A case study is presented with an in-depth approach to data collection. Data were collected for periods of up to, and exceeding, one year. Continuous monitoring was conducted by equipment developed both commercially and at the mine site. The monitoring instrumentation was used to assess the environmental conditions found within the study area. Analysis of the data allowed for an engineering assessment of viable options to control and manage the environmental heat stress. An option is developed and presented that has the greatest impact on the heat-stress conditions within the case-study area and is economically viable for the mine site.
Abstract:
Things change. Words change, meaning changes, and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a “model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies” in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and in so doing enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
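A minimal sketch of the kind of change-tracking record such an extension might produce, written with rdflib; the "ontogeny" namespace and its property names are hypothetical illustrations of the idea, not terms defined by the SKOS Core draft.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, SKOS

# Hypothetical "ontogeny" vocabulary for recording how a concept changed between editions.
ONT = Namespace("http://example.org/ontogeny#")
EX = Namespace("http://example.org/scheme/")

g = Graph()
g.bind("skos", SKOS)
g.bind("dcterms", DCTERMS)
g.bind("ont", ONT)

g.add((EX["concept/aeroplanes"], RDF.type, SKOS.Concept))

change = EX["change/001"]
g.add((change, RDF.type, ONT.LabelChange))
g.add((change, ONT.concept, EX["concept/aeroplanes"]))
g.add((change, ONT.previousLabel, Literal("Aeroplanes", lang="en")))
g.add((change, ONT.currentLabel, Literal("Airplanes", lang="en")))
g.add((change, ONT.fromEdition, EX["edition/2"]))
g.add((change, ONT.toEdition, EX["edition/3"]))
g.add((change, DCTERMS.issued, Literal("2005-11-01")))

print(g.serialize(format="turtle"))
```

A search system holding such records could resolve a query phrased against an older edition to the current preferred label, which is the persistent-retrieval goal stated above.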
Abstract:
This thesis examines cultural policy for film in Scotland from 1997 to 2010. It explores the extent to which the industry is shaped by film policy strategies and through the agency of public funding bodies. It reflects on how Scottish Screen, Scotland’s former screen agency, articulated its role as a national institution concerned with both commercial and cultural remits and with the conflicting interests of different industry groups. The study examines how the agency developed funding schemes to fulfil policy directives during a tumultuous period in Scottish cultural policy history, following the establishment of the Scottish Parliament with the Scotland Act 1998 and preceding the Independence Referendum Act 2013. In order to investigate how policy has shaped the development of a national film industry, a further two case studies are explored: Tartan Shorts, Scotland’s former flagship short film scheme, and the Audience Development Fund, Scotland’s first project-based film exhibition scheme. The first case study explores the planning, implementation and evaluation of the scheme as part of the agency’s talent development strategy. The outcomes of this study show the potential impact of funding methods aimed at developing and retaining Scottish filmmaking talent. Thereafter, the Scottish exhibition sector is discussed; a previously unexplored field within film policy discussions and academic debate. It outlines Scottish Screen’s legacy to current film exhibition funding practices and the practical mechanisms the agency utilised to foster Scottish audiences. By mapping the historical and political terrain, the research analyses the specificity of Scotland within the UK context and explores areas in which short-term, context-driven policies become problematic. The work concludes by presenting the advantages and issues caused by film funding practices, advocating what is needed for the film industry in Scotland today, with suggestions for long-term and cohesive policy development.
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also considered. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed.
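For reference, the classical 1D staggered leapfrog (Yee/FDTD) update that such comparisons start from is sketched below; it is the standard staggered scheme, not the cell-centred FV-TD method developed in this thesis, and the grid size, source and CFL choices are illustrative.

```python
import numpy as np

# 1D staggered leapfrog (Yee/FDTD) update for Maxwell's equations in vacuum, normalised units.
c = 1.0
nx, nt = 200, 400
dx = 1.0 / nx
dt = 0.5 * dx / c            # CFL-limited time step of the explicit leapfrog scheme

Ez = np.zeros(nx)            # E sampled at integer grid points and integer time levels
Hy = np.zeros(nx - 1)        # H staggered by half a cell and half a time step

for n in range(nt):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])               # magnetic field update (half-step shifted)
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])         # electric field update (PEC ends: Ez = 0)
    Ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

print("max |Ez| after propagation:", np.max(np.abs(Ez)))
```

The staggering in space and time is what gives the leapfrog scheme its second-order accuracy and low dissipation, which is the baseline against which the cell-centred alternatives are judged.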
Abstract:
Bid opening in e-auctions is efficient when a homomorphic secret sharing function is employed to seal the bids and homomorphic secret reconstruction is employed to open them. However, this high efficiency rests on an assumption: that the bids are valid (e.g., within a specified range). An undetected invalid bid can compromise the correctness and fairness of the auction. Unfortunately, validity verification of the bids is ignored in auction schemes employing homomorphic secret sharing (called homomorphic auctions in this paper). In this paper, an attack against homomorphic auctions in the absence of a bid validity check is presented, and a necessary bid validity check mechanism is proposed. Then a batch cryptographic technique is introduced and applied to improve the efficiency of the bid validity check.
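The sketch below illustrates the homomorphic tallying idea and why an unchecked out-of-range bid silently distorts the opened result; additive sharing modulo a prime and the 0/1 "willing to pay this price" convention are illustrative assumptions, not the exact scheme attacked in the paper.

```python
import secrets

# Each bidder splits a bid value into additive shares modulo q, one share per auctioneer.
# Each auctioneer adds the shares it holds; recombining the sums opens only the bid total.
q = 2**61 - 1          # a prime modulus (illustrative)
n_auctioneers = 3

def share(value):
    """Split a value into additive shares mod q."""
    shares = [secrets.randbelow(q) for _ in range(n_auctioneers - 1)]
    shares.append((value - sum(shares)) % q)
    return shares

def open_sum(all_shares):
    """Homomorphic opening: per-auctioneer sums recombine to the sum of all sealed bids."""
    per_auctioneer = [sum(col) % q for col in zip(*all_shares)]
    return sum(per_auctioneer) % q

valid_bids = [1, 0, 1, 1]                            # 1 = willing to pay the current price
print(open_sum([share(b) for b in valid_bids]))      # 3, as expected

# Without a validity (range) check, a malicious bidder can submit 5 instead of 0 or 1 and
# distort the tally undetectably -- the kind of attack the abstract guards against.
print(open_sum([share(b) for b in [1, 0, 5, 1]]))    # 7, indistinguishable from seven honest '1' bids
```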
Abstract:
For a largely arid country with generally low relief, Australia has a remarkably large number and variety of waterfalls. Found mainly near the coast, close to where most of the population lives and near the major tourist resort areas, these landscape features have long been popular scenic attractions. As sights to see and places to enjoy a variety of recreational activities, waterfalls continue to play an important role in Australia’s tourism, even in seaside resort areas where the main attractions are sunshine, sandy beaches and surf. The aesthetic appeal of waterfalls and their value as recreational resources are recognized by the inclusion of many in national parks. Even here, the demands of visitors and pressures from developers raise serious problems. This paper examines the way in which waterfalls have been developed and promoted as tourist attractions, demonstrating their importance to Australian tourism. It considers threats to the sustainable use of waterfall resources posed by power schemes and, particularly, by the tourist industry itself. Queensland’s Gold Coast is selected as a case study, and comparisons are made with other areas in which waterfalls have played important roles as tourist attractions, especially the Yorkshire coast of northeast England. The discussion draws largely on an examination of tourist literature from the nineteenth to the twenty-first century, including holiday brochures and guide books, as well as other published sources, together with field observation in various parts of the world.
Abstract:
Instead of the costly encryption algorithms traditionally employed in auction schemes, the efficient Goldwasser-Micali encryption is used to design a new sealed-bid auction. Multiplicative homomorphism, instead of the traditional additive homomorphism, is exploited to achieve security and high efficiency in the auction. The new scheme is, to the best of our knowledge, currently the most efficient non-interactive sealed-bid auction with bid privacy.
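A toy version of Goldwasser-Micali encryption and the ciphertext-multiplication homomorphism it offers (multiplying ciphertexts XORs the encrypted bits) is sketched below; the tiny hard-coded primes are for illustration only, and the auction protocol built on top of it is not reproduced.

```python
import math
import secrets

# Goldwasser-Micali: encrypt a bit as a quadratic residue (bit 0) or non-residue (bit 1)
# with Jacobi symbol +1. Multiplying two ciphertexts yields an encryption of the XOR.
p, q = 10007, 10039              # toy primes, both congruent to 3 mod 4, so x = N - 1 works
N = p * q
x = N - 1                        # public quadratic non-residue with Jacobi symbol +1

def encrypt(bit):
    while True:
        y = secrets.randbelow(N - 2) + 2
        if math.gcd(y, N) == 1:
            break
    return (pow(y, 2, N) * pow(x, bit, N)) % N

def decrypt(c):
    # bit is 0 iff c is a quadratic residue modulo p (Euler's criterion, needs the secret factor p)
    return 0 if pow(c % p, (p - 1) // 2, p) == 1 else 1

b1, b2 = 1, 1
c1, c2 = encrypt(b1), encrypt(b2)
print(decrypt(c1), decrypt(c2))            # 1 1
print(decrypt((c1 * c2) % N), b1 ^ b2)     # multiplying ciphertexts XORs the plaintexts: 0 0
```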
Abstract:
The ability of agents and services to automatically locate and interact with unknown partners is a goal for both the semantic web and web services. This "serendipitous interoperability" is hindered by the lack of an explicit means of describing what services (or agents) are able to do, that is, their capabilities. At present, informal descriptions of what services can do are found in "documentation" elements, or they are somehow encoded in operation names and signatures. We show, by reference to existing service examples, how ambiguous and imprecise capability descriptions hamper the attainment of automated interoperability goals in the open, global web environment. In this paper we propose a structured, machine-readable description of capabilities, which may help to increase the recall and precision of service discovery mechanisms. Our capability description draws on previous work in capability and process modeling and allows the incorporation of external classification schemes. The capability description is presented as a conceptual meta-model. The model supports conceptual queries and can be used as an extension to the DAML-S Service Profile.
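A purely illustrative (hypothetical) rendering of such a structured capability description is sketched below: an explicit action and object paired with terms drawn from external classification schemes; the field names and URIs are assumptions, not the meta-model itself.

```python
from dataclasses import dataclass, field

@dataclass
class ClassifiedTerm:
    term: str
    scheme_uri: str            # external classification scheme the term is drawn from

@dataclass
class Capability:
    action: ClassifiedTerm             # what the service does
    object: ClassifiedTerm             # what it acts upon
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

# Example: a flight-booking capability described explicitly rather than via an operation name.
book_flight = Capability(
    action=ClassifiedTerm("Reserve", "http://example.org/action-scheme"),
    object=ClassifiedTerm("PassengerFlight", "http://example.org/travel-scheme"),
    inputs=["ItineraryRequest"],
    outputs=["BookingConfirmation"],
)
print(book_flight)
```

Because the action and object are classified against shared external schemes rather than buried in documentation strings, a discovery mechanism can match them with better recall and precision, which is the claim made above.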
Abstract:
The next phase envisioned for the World Wide Web is automated ad hoc interaction between intelligent agents, web services, databases and semantic web enabled applications. Although at present this appears to be a distant objective, there are practical steps that can be taken to advance the vision. We propose an extension to classical conceptual models to allow the definition of application components in terms of public standards and explicit semantics, thus building into web-based applications the foundation for shared understanding and interoperability. The use of external definitions and the need to store outsourced type information internally bring to light the issue of object identity in a global environment, where object instances may be identified by multiple externally controlled identification schemes. We illustrate how traditional conceptual models may be augmented to recognise and deal with multiple identities.
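A small sketch of the multiple-identity issue raised above: one object carries identifiers issued under several externally controlled identification schemes, so identity has to be resolved per scheme rather than via a single internal key; the scheme names and matching rule are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Identified:
    identifiers: dict = field(default_factory=dict)   # identification scheme -> identifier

    def same_as(self, other):
        """Treat two instances as identical if they agree under any shared identification scheme."""
        shared = self.identifiers.keys() & other.identifiers.keys()
        return any(self.identifiers[s] == other.identifiers[s] for s in shared)

book_a = Identified({"ISBN": "978-0-13-468599-1", "publisher-sku": "A-1001"})
book_b = Identified({"ISBN": "978-0-13-468599-1", "retailer-id": "R-777"})
print(book_a.same_as(book_b))    # True: matched via the shared ISBN scheme
```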
Abstract:
We propose two public-key schemes to achieve “deniable authentication” for the Internet Key Exchange (IKE). Our protocols can be implemented using different concrete mechanisms, and we discuss several options; in particular, we suggest solutions based on elliptic curve pairings. The protocol designs use the modular construction method of Canetti and Krawczyk, which provides the basis for a proof of security. Our schemes can, in some situations, be more efficient than existing IKE protocols, as well as having stronger deniability properties.
Abstract:
This paper presents an efficient, low-complexity clipping noise compensation scheme for peak-to-average ratio (PAR) reduced orthogonal frequency division multiple access (OFDMA) systems. Conventional clipping noise compensation schemes proposed for OFDM systems are decision-directed schemes that use demodulated data symbols. Thus, these schemes fail to deliver the expected performance in OFDMA systems, where multiple users share a single OFDM symbol and a specific user may only know his or her own modulation scheme. The proposed clipping noise estimation and compensation scheme does not require knowledge of the demodulated symbols of the other users, making it very promising for OFDMA systems. It uses the equalized output and the reserved tones to reconstruct the signal by compensating for the clipping noise. Simulation results show that the proposed scheme can significantly improve system performance.
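A minimal numerical sketch of the setting: an OFDM symbol with reserved (empty) tones is clipped to lower its PAR, and the resulting clipping noise appears on the reserved tones, which is what a receiver-side estimator can exploit; the symbol size, clipping ratio and tone layout are illustrative choices, not the parameters of the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 256
reserved = np.arange(0, n_sub, 16)                   # reserved tones carry no data

# QPSK data on the remaining subcarriers, then transform to the time domain.
X = (rng.choice([-1.0, 1.0], n_sub) + 1j * rng.choice([-1.0, 1.0], n_sub)) / np.sqrt(2)
X[reserved] = 0.0
x = np.fft.ifft(X) * np.sqrt(n_sub)

papr_db = lambda s: 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))
clip_level = 1.4 * np.sqrt(np.mean(np.abs(x) ** 2))  # clip at 1.4 x RMS amplitude
mag = np.abs(x)
x_clipped = np.where(mag > clip_level, clip_level * x / np.maximum(mag, 1e-12), x)

D = np.fft.fft(x_clipped - x) / np.sqrt(n_sub)       # clipping noise in the frequency domain
print(f"PAR before/after clipping: {papr_db(x):.1f} / {papr_db(x_clipped):.1f} dB")
print(f"clipping-noise power on the reserved tones: {np.mean(np.abs(D[reserved]) ** 2):.4f}")
```

Because the reserved tones carry no data, whatever energy lands on them after clipping comes from the clipping noise alone, which is why they provide observations for its estimation without requiring the other users' demodulated symbols.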