152 results for Prefiltering algorithm
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
BACKGROUND: Male fertility potential cannot be measured by conventional parameters for assisted reproduction by intracytoplasmic sperm injection (ICSI). This study determines the relationship between testicular and ejaculated sperm mitochondrial (mt) DNA deletions, nuclear (n) DNA fragmentation, and fertilisation and pregnancy rates in ICSI. METHODS: Ejaculated sperm were obtained from 77 men and testicular sperm from 28 men with obstructive azoospermia undergoing ICSI. Testicular sperm were retrieved using a Trucut needle. MtDNA was analysed using a long polymerase chain reaction. The alkaline Comet assay determined nDNA fragmentation. RESULTS: Of subjects who achieved a pregnancy (50%) using testicular sperm, only 26% had partners' sperm with wild type (WT) mtDNA. Of pregnant subjects (38%) using ejaculated sperm, only 8% had partner sperm with WT mtDNA. In each case, the successful group had fewer mtDNA deletions and less nDNA fragmentation. There were inverse relationships between pregnancy and mtDNA deletion number, deletion size and nDNA fragmentation for both testicular and ejaculated sperm. No relationships were observed with fertilisation rates. An algorithm for the prediction of pregnancy is presented based on the quality of sperm nDNA and mtDNA. CONCLUSION: In both testicular and ejaculated sperm, mtDNA deletions and nDNA fragmentation are closely associated with pregnancy in ICSI.
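The prediction algorithm itself is not detailed in the abstract, so the following is a purely illustrative, hypothetical sketch of how a threshold-based predictor combining nDNA and mtDNA quality measures might be structured; the feature names and cut-off values are invented placeholders, not figures from the study.

```python
# Purely illustrative sketch of a threshold-based pregnancy predictor of the
# kind described in the abstract. The thresholds below are hypothetical
# placeholders, not values reported by the study.

def predict_pregnancy(mtdna_deletion_count: int,
                      mtdna_deletion_size_kb: float,
                      ndna_fragmentation_pct: float) -> bool:
    """Return True if sperm DNA quality suggests a favourable ICSI prognosis.

    The abstract reports inverse relationships between pregnancy and the
    number/size of mtDNA deletions and nDNA fragmentation; this sketch simply
    encodes that direction of effect with made-up cut-offs.
    """
    score = 0
    if mtdna_deletion_count <= 2:          # hypothetical cut-off
        score += 1
    if mtdna_deletion_size_kb <= 4.0:      # hypothetical cut-off
        score += 1
    if ndna_fragmentation_pct <= 25.0:     # hypothetical cut-off
        score += 1
    return score >= 2                      # majority of quality criteria met
```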
Abstract:
A hardware performance analysis of the SHACAL-2 encryption algorithm is presented in this paper. SHACAL-2 was one of four symmetric key algorithms chosen in the New European Schemes for Signatures, Integrity and Encryption (NESSIE) initiative in 2003. The paper describes a fully pipelined encryption SHACAL-2 architecture implemented on a Xilinx Field Programmable Gate Array (FPGA) device that achieves a throughput of over 25 Gbps. This is the fastest private key encryption algorithm architecture currently available. The SHACAL-2 decryption algorithm is also defined in the paper as it was not provided in the NESSIE submission.
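For orientation, the sketch below shows one SHACAL-2 encryption round in Python. SHACAL-2 applies the SHA-256 compression step: the 256-bit plaintext block is held as eight 32-bit words A..H, and the 64 round words W_t are derived from the key via the SHA-256-style schedule. The standard SHA-256 round constants K_t are omitted for brevity; this is a software reference only and says nothing about the paper's pipelined FPGA architecture.

```python
# Sketch of a single SHACAL-2 round in software, for reference only.
# K_t are the standard SHA-256 round constants (omitted here).

MASK = 0xFFFFFFFF

def rotr(x, n):
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK

def shacal2_round(state, w_t, k_t):
    """Apply one encryption round to the 8-word state (A..H)."""
    a, b, c, d, e, f, g, h = state
    ch = (e & f) ^ ((~e) & g & MASK)
    maj = (a & b) ^ (a & c) ^ (b & c)
    sigma1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
    sigma0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
    t1 = (h + sigma1 + ch + k_t + w_t) & MASK
    t2 = (sigma0 + maj) & MASK
    # New state: A'=T1+T2, B'=A, C'=B, D'=C, E'=D+T1, F'=E, G'=F, H'=G
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```

SHACAL-2 omits SHA-256's final feed-forward addition, so the whole transformation is invertible given the key, which is why a decryption algorithm can be defined even though none appeared in the NESSIE submission. A fully pipelined hardware design would unroll all 64 such rounds with registers between the stages.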
Abstract:
This paper investigates two-stage stepwise identification for a class of nonlinear dynamic systems that can be described by linear-in-the-parameters models, where the model has to be built from a very large pool of basis functions or model terms. The main objective is to improve the compactness of the model obtained by forward stepwise methods, while retaining the computational efficiency. The proposed algorithm first generates an initial model using a forward stepwise procedure. The significance of each selected term is then reviewed at the second stage and all insignificant ones are replaced, resulting in an optimised compact model with significantly improved performance. The main contribution of this paper is that these two stages are performed within a well-defined regression context, leading to significantly reduced computational complexity. The efficiency of the algorithm is confirmed by the computational complexity analysis, and its effectiveness is demonstrated by the simulation results.
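A minimal sketch of the two-stage idea follows, assuming a linear-in-the-parameters model y ≈ P[:, S] θ where the columns of P are the candidate model terms. It refits a least-squares model from scratch at every step for clarity, so it does not reflect the reduced-complexity regression context that is the paper's contribution.

```python
# Brute-force illustration of forward selection followed by a review-and-replace
# stage. P: (n_samples, n_candidates) matrix of candidate terms; y: targets.

import numpy as np

def sse(P, y, S):
    """Sum of squared errors of the least-squares fit using columns S."""
    theta, *_ = np.linalg.lstsq(P[:, S], y, rcond=None)
    r = y - P[:, S] @ theta
    return float(r @ r)

def two_stage_select(P, y, n_terms):
    candidates = list(range(P.shape[1]))
    # Stage 1: forward stepwise selection of n_terms terms.
    S = []
    for _ in range(n_terms):
        best = min((c for c in candidates if c not in S),
                   key=lambda c: sse(P, y, S + [c]))
        S.append(best)
    # Stage 2: review each selected term and replace it if some unselected
    # term gives a lower error for the same model size.
    improved = True
    while improved:
        improved = False
        for i, s in enumerate(S):
            rest = S[:i] + S[i + 1:]
            best = min((c for c in candidates if c not in rest),
                       key=lambda c: sse(P, y, rest + [c]))
            if best != s and sse(P, y, rest + [best]) < sse(P, y, S):
                S[i] = best
                improved = True
    return S
```

Stage 2 keeps the model size fixed and swaps in any candidate term that lowers the error, which is the review-and-replace step the abstract describes.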
Abstract:
This paper proposes a novel hybrid forward algorithm (HFA) for the construction of radial basis function (RBF) neural networks with tunable nodes. The main objective is to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. In this study, this is achieved through simultaneous network structure determination and parameter optimization on the continuous parameter space. This is a hard mixed-integer problem, and the proposed HFA tackles it using an integrated analytic framework, leading to significantly improved network performance and reduced memory usage for the network construction. The computational complexity analysis confirms the efficiency of the proposed algorithm, and the simulation results demonstrate its effectiveness.
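The sketch below gives a rough flavour of a forward RBF construction loop in which each new node's centre and width are tuned over the continuous parameter space before the output weights are refitted. It uses generic numpy/scipy optimisation and is not the paper's HFA or its integrated analytic framework.

```python
# Illustrative forward construction of a Gaussian RBF network: add one node at
# a time, tune its centre and (log-)width continuously against the current
# residual, then refit all output weights by least squares.

import numpy as np
from scipy.optimize import minimize

def gaussian(X, centre, width):
    d2 = np.sum((X - centre) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_forward_rbf(X, y, max_nodes=10):
    Phi_cols, params = [], []          # selected basis columns and (centre, width)
    residual = y.copy()
    for _ in range(max_nodes):
        # Tune one new node (centre + log-width) to explain the current residual.
        def neg_fit(p):
            c, w = p[:-1], np.exp(p[-1])
            phi = gaussian(X, c, w)
            denom = phi @ phi + 1e-12
            return -((phi @ residual) ** 2) / denom   # error reduction of this node
        x0 = np.append(X[np.argmax(np.abs(residual))], 0.0)  # seed at worst-fit sample
        res = minimize(neg_fit, x0, method="Nelder-Mead")
        c, w = res.x[:-1], np.exp(res.x[-1])
        Phi_cols.append(gaussian(X, c, w))
        params.append((c, w))
        # Refit all output weights and update the residual.
        Phi = np.column_stack(Phi_cols)
        weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ weights
    return params, weights
```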
Abstract:
We present a fast and efficient hybrid algorithm for selecting exoplanetary candidates from wide-field transit surveys. Our method is based on the widely used SysRem and Box Least-Squares (BLS) algorithms. Patterns of systematic error that are common to all stars on the frame are mapped and eliminated using the SysRem algorithm. The remaining systematic errors, caused by spatially localized flat-fielding errors and other effects, are quantified using a boxcar-smoothing method. We show that the dimensions of the search-parameter space can be reduced greatly by carrying out an initial BLS search on a coarse grid of reduced dimensions, followed by Newton-Raphson refinement of the transit parameters in the vicinity of the most significant solutions. We illustrate the method's operation by applying it to data from one field of the SuperWASP survey, comprising 2300 observations of 7840 stars brighter than V = 13.0. We identify 11 likely transit candidates. We reject stars that exhibit significant ellipsoidal variations indicative of a stellar-mass companion. We use colours and proper motions from the Two Micron All Sky Survey and USNO-B1.0 surveys to estimate the stellar parameters and the companion radius. We find that two stars showing unambiguous transit signals pass all these tests, and so qualify for detailed high-resolution spectroscopic follow-up.
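As an illustration of the coarse-grid stage, the sketch below is a simplified Box Least-Squares search in numpy: the light curve is folded at each trial period, binned in phase, and rectangular transit "boxes" are scored with a statistic proportional to the standard BLS signal residue. The SysRem detrending and the Newton-Raphson refinement around the strongest solutions are not shown.

```python
# Simplified coarse-grid BLS: fold, bin in phase, and score box-shaped dips.

import numpy as np

def coarse_bls(time, flux, periods, n_bins=100, max_width_bins=10):
    """Return (best_period, best_score) over a coarse grid of trial periods."""
    time = np.asarray(time, dtype=float)
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()                       # work with zero-mean flux
    n_points = len(flux)
    best = (None, -np.inf)
    for period in periods:
        phase = (time % period) / period
        bin_idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        s = np.bincount(bin_idx, weights=flux, minlength=n_bins)  # summed flux per bin
        n = np.bincount(bin_idx, minlength=n_bins).astype(float)  # points per bin
        # Duplicate the bins so a box spanning phase 1.0 -> 0.0 is still found.
        s2, n2 = np.concatenate([s, s]), np.concatenate([n, n])
        for start in range(n_bins):
            for width in range(1, max_width_bins + 1):
                s_in = s2[start:start + width].sum()
                n_in = n2[start:start + width].sum()
                if 0 < n_in < n_points:
                    # Proportional to the standard BLS "signal residue" statistic.
                    score = s_in ** 2 / (n_in * (1.0 - n_in / n_points))
                    if score > best[1]:
                        best = (period, score)
    return best
```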
Abstract:
Course Scheduling consists of assigning lecture events to a limited set of specific timeslots and rooms. The objective is to satisfy as many soft constraints as possible, while maintaining a feasible solution timetable. The most successful techniques to date require a compute-intensive examination of the solution neighbourhood to direct searches to an optimum solution. Although they may require fewer neighbourhood moves than more exhaustive techniques to gain comparable results, they can take considerably longer to achieve success. This paper introduces an extended version of the Great Deluge Algorithm for the Course Timetabling problem which, while avoiding the problem of getting trapped in local optima, uses simple neighbourhood search heuristics to obtain solutions in a relatively short amount of time. The paper presents results based on a standard set of benchmark datasets, beating over half of the currently published best results, in some cases by up to 60%.
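A generic sketch of a great-deluge acceptance loop with a reheat-on-stagnation extension of the kind alluded to above is given below. The timetabling-specific pieces (the penalty function counting soft-constraint violations and the neighbourhood moves) are left as user-supplied callables, and the exact reheating rule here is an assumption rather than the paper's.

```python
# Generic great-deluge search: accept a move if it is an improvement or if its
# penalty is below the slowly falling "water level"; relax the level when the
# search stagnates.

import copy

def extended_great_deluge(initial, penalty, random_neighbour,
                          iterations=100_000, decay=None, stagnation_limit=5_000):
    current = copy.deepcopy(initial)
    best = copy.deepcopy(current)
    cur_pen = best_pen = penalty(current)
    level = cur_pen                                   # water level starts at initial penalty
    decay = decay if decay is not None else cur_pen / iterations  # linear decay rate
    since_improvement = 0
    for _ in range(iterations):
        cand = random_neighbour(copy.deepcopy(current))
        cand_pen = penalty(cand)
        if cand_pen <= cur_pen or cand_pen <= level:  # greedy OR below the water level
            current, cur_pen = cand, cand_pen
            if cur_pen < best_pen:
                best, best_pen = copy.deepcopy(current), cur_pen
                since_improvement = 0
        level -= decay                                # lower the water level
        since_improvement += 1
        if since_improvement >= stagnation_limit:     # "reheat": relax the level
            level = cur_pen * 1.1
            since_improvement = 0
    return best, best_pen
```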
Abstract:
The utilization of the computational Grid processor network has become a common method for researchers and scientists without access to local processor clusters to avail of the benefits of parallel processing for compute-intensive applications. This demand requires effective and efficient dynamic allocation of the available resources. Although static scheduling and allocation techniques have proved effective, the dynamic nature of the Grid requires innovative techniques for reacting to change and maintaining stability for users. The dynamic scheduling process requires powerful optimization techniques, which may themselves lack the reaction time needed to produce an effective schedule. There is often a trade-off between solution quality and the speed with which a solution is obtained. This paper presents an extension of a technique used in optimization and scheduling which can provide the means of achieving this balance and improves on similar approaches currently published.
Abstract:
A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process with a fixed reduction rate set 'a priori', the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test the effectiveness, the technique is applied to a number of benchmark optimisation problems in comparison with three other techniques, namely the genetic algorithms with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search-space-update technique is statistically superior to its counterparts.
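A small illustrative sketch of the boundary-update step described above, in numpy: after each generation the bounds of every variable are recentred using the population's distribution statistics (here the mean plus or minus k standard deviations) and clipped to the original bounds. The choice of statistic and the constant k are assumptions of this sketch, not values taken from the paper.

```python
# Recompute per-variable bounds from the current population's statistics,
# never widening beyond the original search space.

import numpy as np

def update_bounds(population, orig_lower, orig_upper, k=3.0):
    """population: (pop_size, n_vars) array of current individuals."""
    mean = population.mean(axis=0)
    std = population.std(axis=0)
    new_lower = np.maximum(orig_lower, mean - k * std)
    new_upper = np.minimum(orig_upper, mean + k * std)
    return new_lower, new_upper
```

Inside a GA loop this would be called once per generation, with subsequent sampling, crossover and mutation constrained to the updated bounds.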