4 results for Branch and bound algorithms
at Universidad de Alicante
Abstract:
In this paper, parallel Relaxed and Extrapolated algorithms based on the Power method for accelerating the PageRank computation are presented. Different parallel implementations of the Power method and the proposed variants are analyzed using different data distribution strategies. The reported experiments show the behavior and effectiveness of the designed algorithms on realistic test data using either OpenMP, MPI, or a hybrid OpenMP/MPI approach to exploit the benefits of shared memory inside the nodes of current SMP supercomputers.
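As a rough, serial illustration of the Power method that such work builds on (not the paper's parallel OpenMP/MPI implementation, and not necessarily its exact relaxed or extrapolated variants), the following Python sketch iterates the damped PageRank update and adds an optional relaxation weight; the parameter name `omega` and the toy graph are chosen here purely for illustration:

```python
import numpy as np

def pagerank_power(A, alpha=0.85, omega=1.0, tol=1e-8, max_iter=200):
    """Power method for PageRank with an optional relaxation weight.

    A     : column-stochastic link matrix (dense here for simplicity)
    alpha : damping factor
    omega : relaxation weight (omega = 1 recovers the plain Power method)
    """
    n = A.shape[0]
    v = np.full(n, 1.0 / n)                        # teleportation vector
    x = v.copy()
    for _ in range(max_iter):
        y = alpha * (A @ x) + (1.0 - alpha) * v    # standard Power step
        y = omega * y + (1.0 - omega) * x          # relaxed update (illustrative)
        y /= y.sum()                               # keep it a probability vector
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x

# toy 4-page web graph (columns sum to 1)
A = np.array([[0,   0,   0.5, 0  ],
              [1/3, 0,   0,   0.5],
              [1/3, 0.5, 0,   0.5],
              [1/3, 0.5, 0.5, 0  ]])
print(pagerank_power(A, omega=0.95))
```

In a distributed setting, the matrix-vector product `A @ x` is the step that would be split across processes or threads according to the chosen data distribution strategy.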
Abstract:
This research study deals with the quantification and characterization of the EPS obtained from two 25 L bench-scale membrane bioreactors (MBRs) with submerged microfiltration (MF-MBR) and ultrafiltration (UF-MBR) membranes. Both reactors were fed with synthetic water and operated for 168 days without sludge extraction, so their mixed liquor suspended solids (MLSS) concentration increased over the course of the experiment. Soluble EPS (EPSs) were obtained by centrifugation of the mixed liquor, and bound EPS (EPSb) by extraction with a cation exchange resin (CER). EPS characterization was carried out by three-dimensional excitation–emission matrix fluorescence spectroscopy (3D-EEM) and high-performance size exclusion chromatography (HPSEC) in order to obtain structural and functional information. In the 3D-EEM analysis, the fluorescence spectra of EPSb and EPSs showed two peaks in both MBRs at all the MLSS concentrations studied. The peaks obtained for EPSb were associated with soluble microbial by-product-like substances (predominantly protein-derived compounds) and with aromatic proteins. For EPSs, the peaks were associated with humic and fulvic acids. In both MBRs, the fluorescence intensity (FI) of the peaks increased as the MLSS and protein concentrations increased. The FI of the EPSs peaks was much lower than that of EPSb. It was verified that the evolution of the FI clearly depends on the protein concentration for EPSb and on the humic acid concentration for EPSs. Chromatographic analysis showed that the intensity of the EPSb peak increased as the MLSS concentration did. Additionally, the calculated mean molecular weight (MW) was always higher at higher MLSS concentrations in the reactors. MW was higher for the MF-MBR than for the UF-MBR at the same MLSS concentrations, demonstrating that filtration with a UF membrane led to the retention of lower-MW particles.
Abstract:
Different kinds of algorithms can be used to compute elementary functions. Among them, shift-and-add algorithms are worth highlighting because they are specifically designed to be very simple and to save computer resources. In fact, the only operations usually involved in these methods are additions and shifts, which can be performed easily and efficiently by a digital processor. Shift-and-add algorithms achieve fairly good precision with low-cost iterations. The best-known algorithm of this type is CORDIC, which can approximate a wide variety of functions with only a slight change in its iterations. In this paper, we analyze the requirements of several engineering and industrial problems in terms of the type of operands and the functions to approximate. We then propose the application of shift-and-add algorithms based on CORDIC to these problems, and we compare the different methods in terms of the precision of the results and the number of iterations required.
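As a minimal sketch of the shift-and-add idea, the following Python snippet implements standard CORDIC in rotation mode to approximate sine and cosine. It is written in floating point for readability; in a fixed-point implementation the multiplications by 2^-i become bit shifts and the angle table and gain are precomputed constants. The function name and iteration count are illustrative, not taken from the paper:

```python
import math

def cordic_sincos(theta, n_iter=32):
    """Approximate (sin, cos) of theta using CORDIC rotation mode.

    Converges for |theta| <= sum(atan(2^-i)) ~ 1.74 rad; each iteration
    uses only additions, subtractions and a halving (a shift in fixed point).
    """
    # rotation angle table atan(2^-i) and the accumulated scaling gain K
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0                             # drive z towards 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i     # shift-and-add step
        z -= d * angles[i]
    return y * K, x * K                                          # (sin, cos)

s, c = cordic_sincos(math.pi / 6)
print(s, c)   # approximately 0.5 and 0.8660
```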
Abstract:
In the current Information Age, data production and processing demands are ever increasing, which has motivated the appearance of large-scale distributed information. This phenomenon also affects Pattern Recognition, where classic and common algorithms, such as the k-Nearest Neighbour, cannot be applied directly. To improve the efficiency of this classifier, Prototype Selection (PS) strategies can be used. Nevertheless, current PS algorithms were not designed to deal with distributed data, and their performance under these conditions is therefore unknown. This work carries out an experimental study on a simulated framework in which PS strategies can be compared under classical conditions as well as under those expected in distributed scenarios. Our results show that performance generally degrades as conditions approach more realistic scenarios. However, our experiments also show that some methods achieve performance fairly similar to that of the non-distributed scenario. Thus, although there is a clear need to develop specific PS methodologies and algorithms for these situations, the methods that showed higher robustness against such conditions may be good starting points.
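As a hedged sketch of how a classic PS method might be exercised in a simulated distributed setting, the following Python example partitions the training data, runs Hart's Condensed Nearest Neighbour on each partition, and pools the selected prototypes. The choice of CNN, the random partitioning, and all names here are illustrative assumptions; the paper's actual PS strategies, partitioning scheme, and evaluation protocol may differ:

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's Condensed Nearest Neighbour: keep only the points needed so
    that 1-NN on the condensed set still classifies the training set."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            # 1-NN prediction using the currently kept prototypes
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:
                keep.append(i)
                changed = True
    return X[keep], y[keep]

def distributed_ps(X, y, n_partitions=4, seed=0):
    """Simulate a distributed setting: shuffle, split into partitions,
    run prototype selection locally, and pool the selected prototypes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    protos_X, protos_y = [], []
    for part in np.array_split(idx, n_partitions):
        Xp, yp = condensed_nn(X[part], y[part])
        protos_X.append(Xp)
        protos_y.append(yp)
    return np.vstack(protos_X), np.concatenate(protos_y)

# toy usage: two Gaussian classes, prototypes selected over 4 partitions
X = np.vstack([np.random.default_rng(1).normal(0, 1, (100, 2)),
               np.random.default_rng(2).normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
PX, Py = distributed_ps(X, y)
print(len(PX), "prototypes kept out of", len(X))
```

A 1-NN classifier over the pooled prototypes then stands in for the non-distributed classifier, which is the kind of comparison the simulated framework enables.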