959 results for efficient algorithms
Abstract:
We prove that the simple group L3(5), which has order 372000, is efficient by providing an efficient presentation for it. This leaves one simple group with order less than one million, S4(4), which has order 979200, whose efficiency or otherwise remains to be determined.
Abstract:
An efficient Lanczos subspace method has been devised for calculating state-to-state reaction probabilities. The method recasts the time-independent wave packet Lippmann-Schwinger equation [Kouri et al., Chem. Phys. Lett. 203, 166 (1993)] inside a tridiagonal (Lanczos) representation, in which the action of the causal Green's operator is effected easily with a QR algorithm. The method is designed to yield all state-to-state reaction probabilities from a given reactant-channel wave packet using a single Lanczos subspace; the spectral properties of the tridiagonal Hamiltonian allow calculations to be undertaken at arbitrary energies within the spectral range of the initial wave packet. The method is applied to the H+O2 system (J=0), and the results indicate the approach is accurate and stable. (C) 2002 American Institute of Physics.
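The tridiagonal (Lanczos) representation at the heart of this method can be sketched generically. The following is a minimal NumPy sketch of the plain Lanczos recurrence on a random symmetric matrix, not the authors' scattering code; the matrix, starting vector, and subspace size are all illustrative.

```python
import numpy as np

def lanczos_tridiag(A, v0, m):
    """Plain Lanczos recurrence: project the symmetric matrix A onto an
    m-dimensional Krylov subspace and return the tridiagonal matrix T
    (full reorthogonalization omitted for brevity)."""
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    for j in range(m):
        w = A @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Illustrative random symmetric "Hamiltonian": the extreme eigenvalues of
# the small tridiagonal T converge quickly to those of the large matrix A.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
T = lanczos_tridiag(A, rng.standard_normal(200), 60)
```

The spectral information of the full matrix that is captured by the much smaller tridiagonal matrix is what allows calculations across the whole spectral range of the starting vector from a single subspace.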
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors need to be kept, or the Lanczos process needs to be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
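The symmetric special case u = v already shows why no basis vectors are needed: the quadratic form v^T f(A) v is approximated by ||v||^2 e1^T f(T) e1, which depends only on the tridiagonal matrix T, so each basis vector can be discarded as soon as it has been used. A minimal NumPy sketch of this special case follows (the paper's augmented algorithm, which extends this to general bilinear forms u^T f(A) v, is not reproduced here); the test matrix and function f = exp are illustrative.

```python
import numpy as np

def lanczos_T(A, v0, m):
    # Plain Lanczos recurrence; basis vectors are discarded immediately,
    # only the tridiagonal entries alpha, beta are kept.
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    for j in range(m):
        w = A @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def quad_form_expm(A, v, m=40):
    # v^T exp(A) v  ~=  ||v||^2 * (e1^T exp(T) e1), no basis vectors needed.
    T = lanczos_T(A, v, m)
    lam, S = np.linalg.eigh(T)
    return (v @ v) * (S @ (np.exp(lam) * S[0]))[0]

rng = np.random.default_rng(1)
n = 150
B = rng.standard_normal((n, n))
A = (B + B.T) / np.sqrt(n)        # modest spectrum, so exp(A) is well behaved
v = rng.standard_normal(n)
approx = quad_form_expm(A, v)
lam_A, U = np.linalg.eigh(A)
exact = v @ (U @ (np.exp(lam_A) * (U.T @ v)))   # dense reference value
```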
Abstract:
A more efficient classifying cyclone (CC) for fine particle classification has been developed in recent years at the JKMRC. The novel CC, known as the JKCC, has modified profiles of the cyclone body, vortex finder, and spigot when compared to conventional hydrocyclones. The novel design increases the centrifugal force inside the cyclone and mitigates the short-circuiting flow that exists in all current cyclones. It also decreases the probability of particle contamination near the cyclone spigot. Consequently the cyclone efficiency is improved while the unit maintains a simple structure. An international patent has been granted for this novel cyclone design. In the first development stage, a feasibility study, a 100 mm JKCC was tested and compared with two 100 mm commercial units. Very encouraging results were achieved, indicating good potential for the novel design. In the second development stage, a scale-up stage, the JKCC was scaled up to 200 mm in diameter, and its geometry was optimized through numerous tests. The performance of the JKCC was compared with a 150 mm commercial unit and exhibited sharper separation, finer separation size, and lower flow ratios. The JKCC is now being scaled up into a full-size (480 mm) hydrocyclone in the third development stage, an industrial study. The 480 mm diameter unit will be tested in an Australian coal preparation plant, and directly compared with a commercial CC operating under the same conditions. Classifying cyclone performance for fine coal could be further improved if the unit is installed in an inclined position. The study using the 200 mm JKCC has revealed that the sharpness of separation improved and the flow ratio to underflow decreased by 43% as the cyclone inclination was varied from the vertical position (0 degrees) to the horizontal position (90 degrees). The separation size was not affected, although the feed rate was slightly decreased.
To ensure self-emptying upon shutdown, it is recommended that the JKCC be installed at an inclination of 75-80 degrees. At this angle the cyclone performance is very similar to that at the horizontal position. Similar findings have been derived from the testing of a conventional hydrocyclone. This may be of benefit to operations that require improved performance from their classifying cyclones in terms of sharpness of separation and flow ratio, while tolerating a slightly reduced feed rate.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
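For reference, the Newmark update that the generalized alpha family builds on can be written compactly for a single-degree-of-freedom linear system. This is a textbook sketch, not the element-partition algorithm of the paper; beta = 1/4, gamma = 1/2 (average acceleration) gives the implicit, unconditionally stable, non-dissipative member of the family, and all the numbers below are illustrative.

```python
import numpy as np

def newmark_sdof(m, c, k, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark time integration for m*u'' + c*u' + k*u = 0
    (displacement form; Chopra-style effective-stiffness formulation)."""
    u, v = u0, v0
    a = -(c * v + k * u) / m                       # consistent initial acceleration
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    us = [u0]
    for _ in range(nsteps):
        rhs = m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a) \
            + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                   + dt * (gamma / (2 * beta) - 1) * a)
        u_new = rhs / keff
        v_new = gamma / (beta * dt) * (u_new - u) + (1 - gamma / beta) * v \
            + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)

# Undamped oscillator with natural period 1 s: the average-acceleration rule
# conserves the discrete energy, so the displacement amplitude never grows.
us = newmark_sdof(m=1.0, c=0.0, k=4 * np.pi**2, u0=1.0, v0=0.0,
                  dt=0.01, nsteps=1000)
```

Choosing other (beta, gamma) pairs, or the additional alpha parameters of the generalized alpha method, trades this exact energy conservation for controllable high frequency dissipation.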
Abstract:
Background: In the presence of dNTPs, intact HIV-1 virions are capable of reverse transcribing at least part of their genome, a process known as natural endogenous reverse transcription (NERT). PCR analysis of virion DNA produced by NERT revealed that the first strand transfer reaction (1stST) was inefficient in intact virions, with minus strand (-) strong stop DNA (ssDNA) copy numbers up to 200 times higher than post-1stST products measured using primers in U3 and U5. This was in marked contrast to the efficiency of 1stST observed in single-round cell infection assays, in which (-) ssDNA and U3-U5 copy numbers were indistinguishable. Objectives: To investigate the reasons for the discrepancy in first strand transfer efficiency between intact cell-free virus and the infection process. Study design: Alterations of both NERT reactions and the conditions of cell infection were used to test whether uncoating and/or entry play a role in the discrepancy in first strand transfer efficiency. Results and Conclusions: The difference in 1stST efficiency could not be attributed simply to viral uncoating, since addition of very low concentrations of detergent to NERT reactions removed the viral envelope without disrupting the reverse transcription complex, and these conditions resulted in no improvement in 1stST efficiency. Virus pseudotyped with surface glycoproteins from either vesicular stomatitis virus or amphotropic murine leukaemia virus also showed low levels of 1stST in low detergent NERT assays and equivalent levels of (-) ssDNA and 1stST in single-round infections of cells, demonstrating that the gp120-mediated infection process did not select for virions capable of carrying out 1stST. These data indicate that a post-entry event or factor may be involved in efficient HIV-1 reverse transcription in vivo. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
In this paper we propose a second linearly scalable method for solving large master equations arising in the context of gas-phase reactive systems. The new method is based on the well-known shift-invert Lanczos iteration, in which the inverse of the master equation matrix is applied using the GMRES iteration, preconditioned with the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods while maintaining the speed of a partial spectral decomposition. The method is tested using a master equation modeling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
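The shift-invert idea can be illustrated with a toy diagonal "master equation" matrix. In the sketch below a dense inverse stands in for the preconditioned GMRES inner solves of the paper, and the spectrum and shift are made up for illustration: Lanczos applied to (A - sigma*I)^{-1} resolves the eigenvalue of A nearest the shift first, which is exactly the slow, chemically relevant end of the spectrum.

```python
import numpy as np

def lanczos_T(M, v0, m):
    # Plain Lanczos recurrence on a symmetric matrix M; only the
    # tridiagonal representation is kept.
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    for j in range(m):
        w = M @ v
        alpha[j] = v @ w
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(2)
n = 300
decay = -np.arange(1, n + 1) / 10.0          # toy relaxation spectrum: -0.1 ... -30
A = np.diag(decay)
sigma = -0.05                                # shift near the slow end of the spectrum
Minv = np.linalg.inv(A - sigma * np.eye(n))  # stand-in for the inner GMRES solves
T = lanczos_T(Minv, rng.standard_normal(n), 30)
theta = np.linalg.eigvalsh(T)
lam_est = sigma + 1.0 / theta[np.argmax(np.abs(theta))]  # eigenvalue of A nearest sigma
```

Because the shift maps the eigenvalue nearest sigma to the dominant eigenvalue of the inverted operator, a small Krylov subspace suffices, provided each inner solve is cheap, which is what the preconditioned GMRES iteration supplies in the paper.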
Abstract:
In this paper we propose a novel, fast and linearly scalable method for solving master equations arising in the context of gas-phase reactive systems, based on an existing stiff ordinary differential equation integrator. The required solution of a linear system involving the Jacobian matrix is achieved using the GMRES iteration, preconditioned with the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods and maintain the low-temperature robustness of numerical integration. The method is tested using a master equation modelling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
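The linear solves that dominate a stiff integrator have the form (I - h*J) p_new = p for backward-Euler-like steps. The sketch below uses a dense solve on a made-up three-state decay system purely to show where the preconditioned GMRES iteration of the paper would be plugged in; A-stability lets the step size far exceed the fastest time scale.

```python
import numpy as np

def backward_euler(J, p0, h, nsteps):
    """Backward Euler for dp/dt = J p.  Each step solves (I - h*J) p_new = p;
    for a large sparse master-equation Jacobian this solve is where a
    preconditioned GMRES iteration would replace the dense factorization."""
    I = np.eye(len(p0))
    p = p0.copy()
    for _ in range(nsteps):
        p = np.linalg.solve(I - h * J, p)
    return p

# Stiff toy system: decay rates spanning six orders of magnitude.  The step
# h = 0.1 is 10^5 times larger than the fastest time scale, yet the scheme
# stays stable and the fast components simply relax to zero.
rates = np.array([1.0, 1e3, 1e6])
J = -np.diag(rates)
p0 = np.ones(3)
p = backward_euler(J, p0, h=0.1, nsteps=50)   # integrate to t = 5
```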
Abstract:
This paper delineates the development of a prototype hybrid knowledge-based system for the optimum design of liquid retaining structures, coupling a blackboard architecture, the expert system shell VISUAL RULE STUDIO, and a genetic algorithm (GA). Through custom-built interactive graphical user interfaces in a user-friendly environment, the user is guided throughout the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking, and member sizing optimization. For structural optimization, the GA is applied to the minimum cost design of structural systems with discrete reinforced concrete sections. The design of a typical liquid retaining structure is illustrated as an example. The results demonstrate extraordinary convergence speed, with near-optimal solutions acquired after exploring merely a small portion of the search space. This system can act as a consultant to assist novice designers in the design of liquid retaining structures.
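A GA for discrete member sizing can be sketched in a few lines. The catalogue of section capacities, costs, and the penalty weight below are invented for illustration and are not real reinforced concrete design data; only the structure (penalized fitness, truncation selection, one-point crossover, mutation) is the generic GA loop.

```python
import random

# Hypothetical catalogue: pick one section per member from (capacity, cost)
# pairs so every member carries DEMAND at minimum total cost.
SECTIONS = [(10, 1.0), (15, 1.6), (20, 2.3), (30, 3.8), (40, 5.5)]
N_MEMBERS, DEMAND = 8, 20

def fitness(chrom):
    cost = sum(SECTIONS[g][1] for g in chrom)
    # Penalize any member whose section capacity falls short of the demand.
    penalty = sum(max(0, DEMAND - SECTIONS[g][0]) for g in chrom)
    return cost + 10.0 * penalty

def ga(pop_size=40, generations=100, pmut=0.1, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_MEMBERS)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_MEMBERS):           # per-gene mutation
                if rng.random() < pmut:
                    child[i] = rng.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

Only a few thousand of the 5^8 possible designs are ever evaluated, which mirrors the abstract's observation that near-optimal solutions emerge after exploring a small portion of the search space.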
Abstract:
This paper presents experimental results of the communication performance evaluation of a prototype ZigBee-based patient monitoring system commissioned in an in-patient floor of a Portuguese hospital (HPG, Hospital Privado de Guimarães). It also revisits relevant problems that affect the performance of nonbeacon-enabled ZigBee networks. Initially, the presence of hidden nodes and the impact of sensor node mobility are discussed. It was observed, for instance, that the message delivery ratio in a star network consisting of six wireless electrocardiogram sensor devices may decrease from 100% when no hidden nodes are present to 83.96% when half of the sensor devices are unable to detect the transmissions made by the other half. An additional aspect that affects communication reliability is a deadlock condition that can occur if routers are unable to process incoming packets during the backoff part of the CSMA-CA mechanism. A simple approach to increase the message delivery ratio in this case is proposed and its effectiveness is verified. The discussion and results presented in this paper aim to contribute to the design of efficient networks, and are also applicable to scenarios and environments other than hospitals.
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
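The core encoder operation, finding the longest dictionary match for the current lookahead, reduces to a binary search over the suffix array: the suffix sharing the longest common prefix with the pattern must be adjacent to the pattern's insertion point in the sorted suffix order. A minimal Python sketch follows; the naive O(n^2 log n) construction is for clarity only and is not the authors' encoder.

```python
def suffix_array(s):
    # Naive construction, sorting explicit suffixes; real encoders use
    # O(n log n) or O(n) algorithms.
    return sorted(range(len(s)), key=lambda i: s[i:])

def longest_match(s, sa, pattern):
    """Binary-search the suffix array of s for the suffix with the longest
    common prefix with `pattern`; returns (position, match length)."""
    lo, hi = 0, len(sa)
    while lo < hi:                       # insertion point of pattern
        mid = (lo + hi) // 2
        if s[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    best = (0, 0)
    for idx in (lo - 1, lo):             # only the neighbours can be optimal
        if 0 <= idx < len(sa):
            suf = s[sa[idx]:]
            k = 0
            while k < min(len(suf), len(pattern)) and suf[k] == pattern[k]:
                k += 1
            if k > best[1]:
                best = (sa[idx], k)
    return best

text = "abracadabra"
sa = suffix_array(text)
pos, length = longest_match(text, sa, "abrac")
```

Note the memory point made in the abstract: the suffix array is a fixed-size integer array over the dictionary, independent of the text statistics that drive the size of a suffix tree.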
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others. In this paper, a low complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and to help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB over a WZ-only coding mode solution.
Abstract:
The use of iris recognition for human authentication has been spreading in the past years. Daugman has proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and for pupil enhancement and isolation. Tests carried out by our C# application on grayscale CASIA and UBIRIS images show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when dealing with non-infrared images and non-uniform illumination.
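Template matching for pupil localization can be sketched with brute-force normalized cross-correlation. The synthetic dark-disk image below is a stand-in for a real eye image; this is a generic illustration of the technique, not the C# implementation described above.

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns the (row, col)
    of the best match of `template` inside `image`."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            best, best_pos = max((best, best_pos), (score, (r, c)))
    return best_pos

def disk(size, radius):
    # Binary disk template centred in a size x size patch.
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2
    return ((yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2).astype(float)

# Synthetic "iris" image: a dark pupil disk on a bright background.
img = np.ones((60, 80))
img[20:35, 30:45] -= disk(15, 6)
pos = match_template(img, 1.0 - disk(15, 6))   # dark-disk template
```

Because the correlation is normalized, the match score is invariant to uniform brightness and contrast changes of the window, which is part of what makes template matching robust on noisy images.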
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference, decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
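The basic building blocks of motion compensated frame interpolation, block matching between two decoded reference frames followed by warping each block half-way along its vector, can be sketched as follows. This is a deliberately naive illustration on a synthetic moving square: exhaustive SAD search, no motion field regularization, and forward warping that leaves holes and overlaps. It is not the side information framework proposed in the paper.

```python
import numpy as np

def block_motion(prev, nxt, bs=8, search=4):
    """Exhaustive block matching from `prev` to `nxt` using the sum of
    absolute differences; returns one (dy, dx) vector per block."""
    H, W = prev.shape
    vecs = {}
    for by in range(0, H, bs):
        for bx in range(0, W, bs):
            block = prev[by:by + bs, bx:bx + bs]
            best, bestv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + bs <= H and 0 <= x and x + bs <= W:
                        sad = np.abs(nxt[y:y + bs, x:x + bs] - block).sum()
                        if sad < best:
                            best, bestv = sad, (dy, dx)
            vecs[(by, bx)] = bestv
    return vecs

def interpolate_mid(prev, nxt, vecs, bs=8):
    """Forward-warp each block half-way along its vector and average the
    two references; real MCFI fills the frame from the interpolated side."""
    mid = np.zeros_like(prev)
    for (by, bx), (dy, dx) in vecs.items():
        y, x = by + dy // 2, bx + dx // 2
        mid[y:y + bs, x:x + bs] = 0.5 * (prev[by:by + bs, bx:bx + bs]
                                         + nxt[by + dy:by + dy + bs,
                                               bx + dx:bx + dx + bs])
    return mid

# Synthetic test: a bright square moving 4 pixels to the right between
# the two reference frames; the interpolated square should sit half-way.
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
nxt = np.zeros((32, 32)); nxt[8:16, 12:20] = 1.0
vecs = block_motion(prev, nxt)
mid = interpolate_mid(prev, nxt, vecs)
```

The unregularized vectors of the empty background blocks are essentially arbitrary here, which is precisely the failure mode that the motion field regularization proposed in the paper addresses.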