5 results for singleton design pattern, symmetric key encryption
in Nottingham eTheses
Abstract:
In this article we consider the application of a generalization of the symmetric version of the interior penalty discontinuous Galerkin finite element method to the numerical approximation of the compressible Navier-Stokes equations. In particular, we consider the a posteriori error analysis and adaptive mesh design for the underlying discretization method. Indeed, by employing a duality argument, (weighted) Type I a posteriori bounds are derived for the estimation of the error measured in terms of general target functionals of the solution; these error estimates involve the product of the finite element residuals with local weighting terms involving the solution of a certain dual problem that must be numerically approximated. This general approach leads to the design of economical finite element meshes specifically tailored to the computation of the target functional of interest, as well as providing efficient error estimation. Numerical experiments demonstrating the performance of the proposed approach will be presented.
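For orientation, a weighted Type I bound of the kind described in this abstract typically takes the following schematic form; the notation below (element and face residuals R_K, r_K, dual solution z and its approximation z_h) is a standard convention assumed here, not taken verbatim from the thesis.

```latex
% Schematic weighted (Type I) a posteriori bound: u_h is the DG
% approximation, z the solution of the associated dual problem,
% z_h its numerical approximation, and R_K, r_K the element and
% face residuals on element K of the mesh \mathcal{T}_h.
\[
  |J(u) - J(u_h)|
  \;\lesssim\;
  \sum_{K \in \mathcal{T}_h}
    \Big(
      \|R_K\|_{L_2(K)} \, \|z - z_h\|_{L_2(K)}
      + \|r_K\|_{L_2(\partial K)} \, \|z - z_h\|_{L_2(\partial K)}
    \Big).
\]
```

The dual weights concentrate where the solution influences the target functional, which is what makes the resulting adaptive meshes "economical" for that functional rather than for the global error.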
Abstract:
Transmitting sensitive data over non-secret channels has always required encryption technologies to ensure that the data arrives without exposure to eavesdroppers. The Internet has made it possible to transmit vast volumes of data more rapidly and cheaply, and to a wider audience, than ever before. At the same time, strong encryption makes it possible to send data securely, to digitally sign it, to prove it was sent or received, and to guarantee its integrity. Together, the Internet and encryption make bulk transmission of data a commercially viable proposition. However, there are implementation challenges to solve before commercial bulk transmission becomes mainstream. Powerful encryption algorithms have a performance cost, and may affect quality of service. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Performance degradation and the potential for commercial loss discourage the bulk transmission of data over the Internet in any commercial application. This paper outlines technical solutions to these problems. We develop new technologies and combine existing ones in new and powerful ways to minimise commercial loss without compromising performance or inflating overheads.
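The abstract pairs confidentiality with integrity guarantees. As a point of reference only (not the authors' implementation), an authenticated symmetric scheme provides both in one primitive; a minimal sketch assuming Python's cryptography package:

```python
# Minimal sketch (not from the paper): symmetric encryption with
# built-in integrity checking, via the Python "cryptography" package.
# Fernet combines AES encryption with an HMAC, so any tampering with
# the ciphertext is detected on decryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared symmetric key
f = Fernet(key)

token = f.encrypt(b"sensitive bulk payload")   # encrypt + authenticate
plaintext = f.decrypt(token)                   # raises InvalidToken if tampered
assert plaintext == b"sensitive bulk payload"
```

Digital signatures and proof of sending/receipt would need asymmetric primitives on top of this; the sketch covers only the confidentiality/integrity pair.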
Abstract:
Secure transmission of bulk data is of interest to many content providers. A commercially viable distribution of content requires technology to prevent unauthorised access. Encryption tools are powerful, but have a performance cost. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Two technical solutions make it possible to perform bulk transmissions while retaining security without too high a performance overhead:

a) Hierarchical encryption (see the first sketch after this abstract). The stronger the encryption, the harder it is to break, but also the more computationally expensive it is. A hierarchical approach to key exchange means that simple and relatively weak encryption and keys are used to encrypt small chunks of data, for example 10 seconds of video. Each chunk has its own key. New keys for this bottom-level encryption are exchanged using a slightly stronger encryption; for example, a whole-video key could govern the exchange of the 10-second chunk keys. At a higher level again, there could be daily or weekly keys securing the exchange of whole-video keys, and at a yet higher level, a subscriber key could govern the exchange of weekly keys. At higher levels the encryption becomes stronger but is used less frequently, so that the overall computational cost is minimal. The main observation is that the value of each encrypted item determines the strength of the key used to secure it.

b) Non-symbolic fragmentation with signal diversity (see the second sketch after this abstract). Communications are usually assumed to be sent over a single communications medium, with the data encrypted and/or partitioned in whole-symbol packets. Network and path diversity break up a file or data stream into fragments which are then sent over many different channels, either in the same network or in different networks; for example, a message could be transmitted partly over the phone network and partly via satellite. While TCP/IP does a similar thing in sending different packets over different paths, this is done for load-balancing purposes and is invisible to the end application. Network and path diversity deliberately introduce the same principle as a secure communications mechanism: an eavesdropper would need to intercept not just one transmission path but all paths used. Non-symbolic fragmentation of data is also introduced to further confuse any intercepted stream of data. This involves breaking up data into bit strings which are subsequently disordered prior to transmission. Even if all transmissions were intercepted, the cryptanalyst would still need to determine the fragment boundaries and correctly order them.

These two solutions depart from the usual idea of data encryption. Hierarchical encryption is an extension of the combined encryption of systems such as PGP, but with the distinction that the strength of encryption at each level is determined by the "value" of the data being transmitted. Non-symbolic fragmentation suppresses or destroys bit patterns in the transmitted data in what is essentially a bit-level transposition cipher, but with unpredictable, irregularly-sized fragments. Both technologies have applications beyond the commercial domain and can be used in conjunction with other forms of encryption, being functionally orthogonal.
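A minimal sketch of the hierarchical key idea in a), assuming a two-level hierarchy: a per-chunk key wrapped under a whole-video key. The names (video_key, chunk_key) and the use of Python's cryptography package are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical two-level key hierarchy (illustration only): each chunk
# of content gets its own cheap, short-lived key; those chunk keys are
# themselves encrypted ("wrapped") under a longer-lived video key, which
# in the full scheme would in turn be wrapped under weekly/subscriber keys.
from cryptography.fernet import Fernet

video_key = Fernet.generate_key()        # higher level: used rarely
video_cipher = Fernet(video_key)

chunks = [b"10 seconds of video #1", b"10 seconds of video #2"]
transmission = []
for chunk in chunks:
    chunk_key = Fernet.generate_key()    # bottom level: one key per chunk
    ciphertext = Fernet(chunk_key).encrypt(chunk)
    wrapped_key = video_cipher.encrypt(chunk_key)  # key exchange under video key
    transmission.append((wrapped_key, ciphertext))

# A receiver holding video_key unwraps each chunk key, then each chunk.
for wrapped_key, ciphertext in transmission:
    chunk_key = video_cipher.decrypt(wrapped_key)
    print(Fernet(chunk_key).decrypt(ciphertext))
```

Note that Fernet keys are all of equal strength; in the scheme described above, higher levels would use progressively stronger (and costlier) ciphers, which is precisely the trade-off the hierarchy exploits.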
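Non-symbolic fragmentation, as in b), can likewise be sketched: cut a bit string at irregular, non-byte-aligned positions and disorder the fragments before transmission. The fragment-size range and the seeded shuffle below are illustrative assumptions; a real scheme would derive both, and the fragment ordering, from a shared key rather than transmitting indices.

```python
# Illustrative non-symbolic fragmentation: cut a message at irregular
# *bit* boundaries (fragments need not align with byte symbols) and
# disorder the fragments with a keyed shuffle before transmission.
import random

def fragment_bits(data: bytes, seed: int) -> list[tuple[int, str]]:
    bits = "".join(f"{byte:08b}" for byte in data)
    rng = random.Random(seed)            # shared secret stands in for a key
    frags, pos = [], 0
    while pos < len(bits):
        cut = min(len(bits), pos + rng.randint(3, 11))  # irregular sizes
        frags.append((len(frags), bits[pos:cut]))  # index: stand-in for
        pos = cut                                  # key-derived ordering
    rng.shuffle(frags)                   # disorder prior to transmission
    return frags

def reassemble(frags: list[tuple[int, str]]) -> bytes:
    bits = "".join(b for _, b in sorted(frags))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

frags = fragment_bits(b"secret stream", seed=42)
assert reassemble(frags) == b"secret stream"
```

In the full scheme the shuffled fragments would additionally be spread across multiple physical channels, so an eavesdropper on any single path sees only a disordered subset of irregular bit strings.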
Abstract:
In this article we propose a new symmetric version of the interior penalty discontinuous Galerkin finite element method for the numerical approximation of the compressible Navier-Stokes equations. Here, particular emphasis is devoted to the construction of an optimal numerical method for the evaluation of certain target functionals of practical interest, such as the lift and drag coefficients of a body immersed in a viscous fluid. With this in mind, the key ingredients in the construction of the method include: (i) an adjoint consistent imposition of the boundary conditions; (ii) an adjoint consistent reformulation of the underlying target functional of practical interest; (iii) the design of appropriate interior-penalty stabilization terms. Numerical experiments presented within this article clearly indicate the optimality of the proposed method when the error is measured both in the L_2-norm and in terms of certain target functionals. Computational comparisons with other discontinuous Galerkin schemes proposed in the literature, including the second scheme of Bassi & Rebay, cf. [11], the standard SIPG method outlined in [25], and an NIPG variant of the new scheme, will be undertaken.
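As background for ingredient (iii), the symmetric interior penalty construction is easiest to see on the Poisson problem, the standard prototype (the compressible Navier-Stokes discretization in the article is substantially more involved; the form below is not taken from it). The symmetry of the two consistency face terms is what yields adjoint consistency, in contrast to the NIPG variant, where the second face term changes sign.

```latex
% SIPG bilinear form for -\Delta u = f on a mesh \mathcal{T}_h with
% faces \mathcal{F}_h, jumps [[.]] and averages {{.}}; the penalty
% term with parameter \sigma > 0 and face size h_F stabilizes the method.
\[
  B_h(u,v) = \sum_{K \in \mathcal{T}_h} \int_K \nabla u \cdot \nabla v \,dx
  - \sum_{F \in \mathcal{F}_h} \int_F
      \big( \{\!\{\nabla u\}\!\} \cdot [\![v]\!]
          + [\![u]\!] \cdot \{\!\{\nabla v\}\!\} \big) \,ds
  + \sum_{F \in \mathcal{F}_h} \int_F \frac{\sigma}{h_F}\, [\![u]\!] \cdot [\![v]\!] \,ds .
\]
```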
Abstract:
Background: This paper describes the results of a feasibility study for a randomised controlled trial (RCT).
Methods: Twenty-nine members of the UK Dermatology Clinical Trials Network (UK DCTN) expressed an interest in recruiting for this study. Of these, 17 obtained full ethics and Research & Development (R&D) approval, and 15 successfully recruited patients into the study. A total of 70 participants with a diagnosis of cellulitis of the leg were enrolled over a 5-month period. These participants were largely recruited from medical admissions wards, although some were identified from dermatology, orthopaedic, geriatric and general surgery wards. Data were collected on patient demographics, clinical features and willingness to take part in a future RCT.
Results: Despite cellulitis being a relatively common condition, patients were difficult to locate through our network of UK DCTN clinicians, largely because patients were rarely seen by dermatologists and admissions were not co-ordinated centrally. In addition, the impact of the proposed exclusion criteria was high; only 26 (37%) of those enrolled in the study fulfilled all of the inclusion criteria for the subsequent RCT and were willing to be randomised to treatment. Of the 70 participants identified during the study as having cellulitis of the leg (as confirmed by a dermatologist), only 59 (84%) had all three of the defining features of: i) erythema, ii) oedema, and iii) warmth with acute pain/tenderness upon examination. Twenty-two (32%) patients had experienced a previous episode of cellulitis within the last 3 years. The median time to recurrence (estimated as the time since the most recent previous attack) was 205 days (95% CI 102 to 308). Service users were generally supportive of the trial, although several expressed concerns about taking antibiotics for lengthy periods, and felt that multiple morbidity/old age would limit entry into a 3-year study.
Conclusion: This pilot study has been crucial in highlighting some key issues for the conduct of a future RCT. As a result of these findings, changes have been made to i) the planned recruitment strategy, ii) the proposed inclusion criteria and iii) the definition of cellulitis for use in the future trial.