20 results for Monotone Iterations
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
We study a protocol for two-qubit-state guidance that does not rely on feedback mechanisms. In our scheme, entanglement can be concentrated by arranging the interactions of the qubits with a continuous variable ancilla. By properly post-selecting the outcomes of repeated measurements performed on the state of the ancilla, the qubit state is driven to have a desired amount of purity and entanglement. We stress the primary role played by the first iterations of the protocol. Inefficiencies in the detection operations can be fully taken into account. We also discuss the robustness of the guidance protocol to the effects of an experimentally motivated model for mixedness of the ancillary states.
Abstract:
An extensive experimental program has been carried out on a 135 mm tip diameter radial turbine using a variety of stator designs, in order to facilitate direct performance comparisons of varying stator vane solidity and the effect of varying the vaneless space. A baseline vaned stator was designed using commercial blade design software, having 15 vanes and a vane trailing edge to rotor leading edge radius ratio (Rte/rle) of 1.13. Two additional series of stator vanes were designed and manufactured; one series having varying vane numbers of 12, 18, 24, and 30, and a further series with Rte/rle ratios of 1.05, 1.175, 1.20, and 1.25. As part of the design process a series of CFD simulations were carried out in order to guide design iterations towards achieving a matched flow capacity for each stator. In this way the variations in the measured stage efficiency could be attributed to the stator passages only, thus allowing direct comparisons to be made. Interstage measurements were taken to capture the static pressure distribution at the rotor inlet and these measurements were then used to validate subsequent numerical models. The overall losses for different stators have been quantified and the variations in the measured and computed efficiency were used to recommend optimum values of vane solidity and Rte/rle.
Abstract:
In this paper, we present an investigation into using fuzzy methodologies to guide the construction of high quality feasible examination timetabling solutions. The provision of automated solutions to the examination timetabling problem is achieved through a combination of construction and improvement. The enhancement of solutions through the use of techniques such as metaheuristics is, in some cases, dependent on the quality of the solution obtained during the construction process. With a few notable exceptions, recent research has concentrated on the improvement of solutions as opposed to focusing on investigating the ‘best’ approaches to the construction phase. Addressing this issue, our approach is based on combining multiple criteria in deciding how the construction phase should proceed. Fuzzy methods were used to combine three single construction heuristics into three different pairwise combinations of heuristics in order to guide the order in which exams were selected to be inserted into the timetable solution. In order to investigate the approach, we compared the performance of the various heuristic approaches with respect to a number of important criteria (overall cost penalty, number of skipped exams, number of iterations of a rescheduling procedure required and computational time) on twelve well-known benchmark problems. We demonstrate that the fuzzy combination of heuristics allows high quality solutions to be constructed. On one of the twelve problems we obtained lower penalty than any previously published constructive method and for all twelve we obtained lower penalty than when any of the single heuristics were used alone. Furthermore, we demonstrate that the fuzzy approach used less backtracking when constructing solutions than any of the single heuristics. We conclude that this novel fuzzy approach is a highly effective method for heuristically constructing solutions and, as such, has particular relevance to real-world situations in which the construction of feasible solutions is often a difficult task in its own right.
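A minimal sketch of the idea of fuzzily combining two construction heuristics: the membership functions, the equal-weight aggregation, and the two chosen heuristics (largest degree and saturation degree) are illustrative assumptions, not the paper's exact rules.

```python
def fuzzy_order(conflicts, schedule):
    """Rank unscheduled exams by a fuzzy combination of two ordering heuristics.

    conflicts : dict mapping exam -> set of conflicting exams
    schedule  : dict mapping already-placed exam -> assigned slot
    """
    max_deg = max(1, max(len(c) for c in conflicts.values()))
    n_slots = max(1, len(set(schedule.values())))
    scores = {}
    for e in (x for x in conflicts if x not in schedule):
        mu_deg = len(conflicts[e]) / max_deg                 # largest-degree membership
        sat = len({schedule[n] for n in conflicts[e] if n in schedule})
        mu_sat = sat / n_slots                               # saturation-degree membership
        scores[e] = 0.5 * mu_deg + 0.5 * mu_sat              # simple fuzzy aggregation
    # Hardest exams (highest combined membership) are scheduled first.
    return sorted(scores, key=scores.get, reverse=True)
```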
Abstract:
We present a scheme for the extraction of singlet states of two remote particles of arbitrary quantum spin number. The goal is achieved through post-selection of the state of interaction mediators sent in succession. A small number of iterations is sufficient to make the scheme effective. We propose two suitable experimental setups where the protocol can be implemented.
Abstract:
In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintain the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be poorly suited for 3-D wavelet transformed (DWT) multispectral images. The fact that each parent has eight children in the 3-D structure considerably increases the list of insignificant sets (LIS) and the list of insignificant pixels (LIP), since the partitioning of any set produces eight subsets which will be processed similarly during the sorting pass. Thus, a significant portion of the overall bit-budget is wastefully spent sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, when suitably adjusted to maintain the rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm is proposed that exploits, in its initial iterations, the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images have shown that the proposed scheme provides significant improvements over related works.
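A quick arithmetic sketch of why the eight-child trees of 3-D SPIHT inflate the sorting lists: each set partition spawns 8 subsets instead of the 4 of 2-D SPIHT, so insignificant sets multiply twice as fast per tree level.

```python
# Subsets produced after repeated partitioning at each tree level.
for level in range(1, 5):
    print(f"level {level}: 2-D sets = {4**level:5d}, 3-D sets = {8**level:6d}")
# By level 4 the 3-D structure carries 4096 sets against 256 in 2-D,
# a 16x larger LIS to scan during every sorting pass.
```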
Abstract:
When simulating the High Pressure Die Casting ‘HPDC’ process, the heat transfer coefficient ‘HTC’ between the casting and the die is critical to accurately predicting the quality of the casting. To determine the HTC at the metal–die interface, a production die for an automotive engine bearing beam, Die 1, was instrumented with type K thermocouples. A Magmasoft® simulation model was generated with virtual thermocouple points placed in the same location as the production die. The temperature traces from the simulation model were compared to the instrumentation results. Using the default simulation HTC for the metal–die interface, a poor correlation was seen, with the temperature response being much less for the simulation model. Because of this, the HTC at the metal–die interface was modified in order to get a better fit. After many simulation iterations, a good fit was established using a peak HTC of 42,000 W/m²·K. This modified HTC was further validated by a second instrumented production die, confirming that the modified HTC gives good correlation to the instrumentation trials. The updated HTC properties for the simulation model will improve the predictive capabilities of the casting simulation software and better predict casting defects.
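A minimal sketch of the manual calibration loop described above. The run_simulation(peak_htc) wrapper is a hypothetical stand-in for a casting-simulation run (the study used Magmasoft®); the least-squares fit criterion is also an illustrative assumption.

```python
import numpy as np

def calibrate_peak_htc(run_simulation, measured, candidates):
    """Return the candidate peak HTC (W/m^2.K) whose simulated thermocouple
    trace best fits the measured data, by least squares over sampled times."""
    best_htc, best_err = None, np.inf
    for htc in candidates:                       # one "simulation iteration" each
        simulated = np.asarray(run_simulation(htc))
        err = np.sum((simulated - np.asarray(measured)) ** 2)
        if err < best_err:
            best_htc, best_err = htc, err
    return best_htc

# e.g. calibrate_peak_htc(run_simulation, measured_trace,
#                         candidates=[10_000, 20_000, 42_000, 60_000])
```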
Abstract:
Modern Multiple-Input Multiple-Output (MIMO) communication systems place huge demands on embedded processing resources in terms of throughput, latency and resource utilization. State-of-the-art MIMO detector algorithms, such as Fixed-Complexity Sphere Decoding (FSD), rely on efficient channel preprocessing involving numerous calculations of the pseudo-inverse of the channel matrix by QR Decomposition (QRD) and ordering. These highly complicated operations can quickly become the critical prerequisite for real-time MIMO detection, a problem exacerbated as the number of antennas in a MIMO detector increases. This paper describes a sorted QR decomposition (SQRD) algorithm extended for FSD, which significantly reduces the complexity and latency of this preprocessing step and increases the throughput of MIMO detection. It merges the calculations of the QRD and ordering operations to avoid multiple iterations of QRD. Specifically, it shows that SQRD reduces the computational complexity by 60-70% when compared to conventional MIMO preprocessing algorithms. In 4x4 to 7x7 MIMO cases, the approach incurs only a 0.16-0.2 dB reduction in Bit Error Rate (BER) performance.
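A minimal sketch of a generic sorted QR decomposition, in the style of modified Gram-Schmidt with column pivoting: at each step the remaining column of smallest norm is orthogonalised next, so the detection ordering falls out of a single QRD instead of repeated decompositions. This is the standard SQRD idea, not the paper's FSD-specific extension.

```python
import numpy as np

def sqrd(H):
    """Sorted QR decomposition: returns Q, R and permutation p
    such that H[:, p] = Q @ R."""
    H = H.astype(complex)
    m, n = H.shape
    Q, R = H.copy(), np.zeros((n, n), dtype=complex)
    p = np.arange(n)
    for i in range(n):
        # Sorting step: pick the remaining column with the smallest norm.
        k = i + np.argmin(np.sum(np.abs(Q[:, i:]) ** 2, axis=0))
        Q[:, [i, k]], R[:, [i, k]], p[[i, k]] = Q[:, [k, i]], R[:, [k, i]], p[[k, i]]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        # Orthogonalise the remaining columns against the new Q column.
        R[i, i + 1:] = Q[:, i].conj() @ Q[:, i + 1:]
        Q[:, i + 1:] -= np.outer(Q[:, i], R[i, i + 1:])
    return Q, R, p
```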
Abstract:
In many situations, the number of data points is fixed, and the asymptotic convergence results of popular model selection tools may not be useful. A new algorithm for model selection, RIVAL (removing irrelevant variables amidst Lasso iterations), is presented and shown to be particularly effective for a large but fixed number of data points. The algorithm is motivated by an application in nuclear material detection where all unknown parameters are to be non-negative. Thus, positive Lasso and its variants are analyzed. RIVAL is then proposed and shown to have desirable properties, namely that the number of data points needed for convergence is smaller than for existing methods.
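A hedged sketch of the idea the abstract's acronym describes: run positive Lasso, drop variables whose coefficients are (near) zero, and repeat on the reduced design matrix. The threshold and stopping rule below are illustrative assumptions, not RIVAL's published details.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_with_removal(X, y, alpha=0.1, tol=1e-8, max_rounds=10):
    """Iteratively remove irrelevant variables between positive-Lasso fits."""
    keep = np.arange(X.shape[1])                 # indices of surviving variables
    model = None
    for _ in range(max_rounds):
        model = Lasso(alpha=alpha, positive=True).fit(X[:, keep], y)
        active = np.abs(model.coef_) > tol       # variables still contributing
        if active.all() or not active.any():     # nothing (safe) left to remove
            break
        keep = keep[active]                      # remove irrelevant variables
    return keep, model
```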
Abstract:
A silicon implementation of the Approximate Rotations algorithm capable of carrying the computational load of algorithms such as QRD and SVD, within the real-time realisation of applications such as Adaptive Beamforming, is described. A modification to the original Approximate Rotations algorithm to simplify the method of optimal angle selection is proposed. Analysis shows that fewer iterations of the Approximate Rotations algorithm are required compared with the conventional CORDIC algorithm to achieve similar degrees of accuracy. The silicon design studies undertaken provide direct practical evidence of superior performance with the Approximate Rotations algorithm, requiring approximately 40% of the total computation time of the conventional CORDIC algorithm, for a similar silicon area cost.
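For context, a minimal sketch of the conventional CORDIC rotation iterations the paper benchmarks against: each iteration applies a shift-and-add micro-rotation by arctan(2^-i), gaining roughly one bit of accuracy, which is why reducing the iteration count directly cuts computation time.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians using shift-and-add micro-rotations."""
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0              # steer toward the residual angle
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)
    # Compensate the accumulated gain K = prod(sqrt(1 + 2^-2i)).
    k = math.prod(math.sqrt(1 + 4.0**-i) for i in range(iterations))
    return x / k, y / k

# e.g. cordic_rotate(1.0, 0.0, math.pi / 4) ~ (0.7071, 0.7071)
```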
Abstract:
Among the key challenges present in the modelling and optimisation of composite structures against impact is the computational expense involved in setting up accurate simulations of the impact event and then performing the iterations required to optimise the designs. It is of more interest to find good designs given the limitations of the resources and time available rather than the best possible design. In this paper, low cost but sufficiently accurate finite element (FE) models were generated in LS Dyna for several experimentally characterised materials by semi-automating the modelling process and using existing material models. These models were then used by an optimisation algorithm to generate new hybrid offspring, leading to minimum weight and/or cost designs from a selection of isotropic metals, polymers and orthotropic fibre-reinforced laminates that countered a specified impact threat. Experimental validation of the optimal designs thus identified was then successfully carried out using a single stage gas gun. With sufficient computational hardware, the techniques developed in this pilot study can further utilise fine meshes, equations of state and sophisticated material models, so that optimal hybrid systems can be identified from a wide range of materials, designs and threats.
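A minimal sketch of the kind of evolutionary loop implied by "generate new hybrid offspring": candidate laminates are lists of material layers, the fitness call wraps an FE impact simulation, and crossover mixes parents. fe_impact_mass() and the material list are hypothetical placeholders, not the paper's optimiser.

```python
import random

MATERIALS = ['steel', 'aluminium', 'polymer', 'cfrp', 'gfrp']

def evolve(fe_impact_mass, layers=4, pop_size=20, generations=30):
    """Minimise areal mass subject to the FE model defeating the threat;
    fe_impact_mass(design) returns mass if the projectile is stopped, else inf."""
    pop = [[random.choice(MATERIALS) for _ in range(layers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fe_impact_mass)             # lightest feasible designs first
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, layers)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # occasional mutation
                child[random.randrange(layers)] = random.choice(MATERIALS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fe_impact_mass)
```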
Abstract:
The Richardson-Lucy algorithm is one of the most important algorithms in the image deconvolution area. However, one of its drawbacks is slow convergence. A very significant acceleration is obtained by the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the Image Processing MATLAB toolbox. The BA method was developed heuristically with no proof of convergence. In this paper, we introduce the Heavy-Ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proven convergence rate of O(1/k²), where k is the number of iterations. We demonstrate the superior convergence performance of the scaled H-B method on both synthetic and real 3D images.
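A minimal sketch of Richardson-Lucy deconvolution with a heavy-ball-style extrapolation between iterations, the acceleration idea the paper formalises. The fixed momentum weight beta is an illustrative assumption; the paper derives an adaptive, provably convergent scaling.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_accelerated(image, psf, iterations=50, beta=0.7, eps=1e-12):
    """Richardson-Lucy with momentum: extrapolate, then apply the
    standard multiplicative Poisson update from the extrapolated point."""
    psf_flip = psf[::-1, ::-1]                   # adjoint blur = flipped PSF
    x = np.full(image.shape, image.mean(), dtype=float)
    x_prev = x.copy()
    for _ in range(iterations):
        v = np.maximum(x + beta * (x - x_prev), 0)   # heavy-ball extrapolation,
        x_prev = x                                   # kept in the Poisson domain
        blurred = fftconvolve(v, psf, mode='same') + eps
        x = v * fftconvolve(image / blurred, psf_flip, mode='same')
    return x
```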
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors Near the transistor Threshold Voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67%, while compromising neither correctness nor performance.
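A hedged sketch of one plausible form of the adaptive switching scheme: high-significance iterations run above threshold on one reliable core, while error-tolerant iterations run near threshold on many cores. The significance test and both execution back-ends are hypothetical placeholders; the actual policy and hardware mechanism are platform-specific.

```python
def significance(residual, threshold=1e-3):
    """Assumed policy: treat an iteration as significant while the
    algorithm's residual is still large."""
    return residual > threshold

def run_adaptive(step, x, max_iter=1000, tol=1e-8):
    """step(x, mode, cores) is a hypothetical back-end performing one
    iteration and returning the updated state and residual."""
    residual = float('inf')
    for _ in range(max_iter):
        if significance(residual):
            x, residual = step(x, mode='above-threshold', cores=1)
        else:
            x, residual = step(x, mode='near-threshold', cores='many')
        if residual < tol:
            break
    return x
```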
Abstract:
The cycle of the academic year impacts on efforts to refine and improve major group design-build-test (DBT) projects since the time to run and evaluate projects is generally a full calendar year. By definition these major projects have a high degree of complexity since they act as the vehicle for the application of a range of technical knowledge and skills. There is also often an extensive list of desired learning outcomes which extends to include professional skills and attributes such as communication and team working. It is contended that student project definition and operation, like any other designed product, requires a number of iterations to achieve optimisation. The problem however is that if this cycle takes four or more years then by the time a project’s operational structure is fine-tuned it is quite possible that the project theme is no longer relevant. The majority of the students will also inevitably experience a sub-optimal project experience over the five-year development period. It would be much better if the ratio were flipped, so that in one year an optimised project definition could be achieved with sufficient longevity to run in the same efficient manner for four further years. An increased number of parallel investigators would also enable more varied and adventurous project concepts to be examined than a single institution could undertake alone in the same time frame.
This work-in-progress paper describes a parallel processing methodology for the accelerated definition of new student DBT project concepts. This methodology has been devised and implemented by a number of CDIO partner institutions in the UK & Ireland region. An agreed project theme was operated in parallel in one academic year with the objective of replacing a multi-year iterative cycle. Additionally the close collaboration and peer learning derived from the interaction between the coordinating academics facilitated the development of faculty teaching skills in line with CDIO standard 10.
Abstract:
Unsteady simulations were performed to investigate time dependent behaviors of the leakage flow structures and heat transfer on the rotor blade tip and casing in a single stage gas turbine engine. This paper mainly illustrates the unsteady nature of the leakage flow and heat transfer, particularly that caused by the stator–rotor interactions. In order to obtain time-accurate results, the effects of varying the number of time steps, sub-iterations, and the number of vane passing periods were first examined. The effects of tip clearance height and rotor speed were also examined. The results showed periodic patterns of the tip leakage flow and heat transfer rate distribution for each vane passing. The relative position of the vane and vane trailing edge shock with respect to time alters the flow conditions in the rotor domain, and results in significant variations in the tip leakage flow structures and heat transfer rate distributions. It is observed that the trailing edge shock phenomenon results in a critical heat transfer region on the blade tip and casing. Consequently, the turbine blade tip and casing are subjected to large fluctuations of Nusselt number (about Nu = 2000 to 6000 and about Nu = 1000 to 10000, respectively) at a high frequency (coinciding with the rotor speed).
Abstract:
Since the late nineteenth-century works of criminologists Lombroso and Lacassagne, tattoos in Europe have been commonly associated with deviant bodies. Like many other studies of tattoos of non-indigenous origin, the locus of our research is the convict body. Given the corporeal emphasis of prison records, we argue that tattoos form a crucial part of the power dynamic. Tattoos in the carceral context embody an inherent paradox of their being a component in the reidentification of 'habitual criminals'. We argue that their presence can be regarded as an expression of convict agency: by the act of imprinting unique identifiers on their bodies, convicts boldly defied the official gaze, while equally their description in official records exacted power over the deviant body. Cursory findings show an alignment with other national studies; corporeal inscriptions in Ireland were more prevalent in men's prisons than women's and associated, however loosely, with certain occupations. For instance, maritime and military motifs find representation. Recidivists were more likely to have tattoos than first-time offenders; inscriptions were described as monotone, rudimentary in design and incorporated a limited range of impressions. Further to our argument that tattoos form an expression of convict defiance of prison authority, we have found an unusual idiosyncrasy in the convict record, that is, that the agency of photography, while undermined in general terms, was manipulated by prison officers.