976 results for "Fast methods"
Abstract:
This review focuses on methodological approaches used to study the composition of human faecal microbiota. Gene sequencing is the most accurate tool for revealing the phylogenetic relationships between bacteria. The main application of fluorescence in situ hybridization (FISH) in both microscopy and flow cytometry is to enumerate faecal bacteria. While flow cytometry is a very fast method, FISH microscopy still has a considerably lower detection limit.
Abstract:
This paper formally derives a blocked version, of block size two, of a path-based neural branch prediction algorithm (FPP), yielding a lower-cost hardware solution while maintaining an input-output characteristic similar to that of the original algorithm. The blocked solution, referred to here as the B2P algorithm, is obtained using graph theory and retiming methods. Verification showed that the prediction performances of the FPP and B2P algorithms differ by less than one misprediction per thousand instructions on a known framework for branch-prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm, with similar timing performance, making the proposed blocked predictor superior from a practical viewpoint.
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners that operate on discriminative image-intensity features, which are defined on small patches and are fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires only a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking, and compare the method's performance with other existing methods for line-search boundary detection.
Abstract:
We study the numerical efficiency of solving the self-consistent field theory (SCFT) for periodic block-copolymer morphologies by combining the spectral method with Anderson mixing. Using AB diblock-copolymer melts as an example, we demonstrate that this approach can be orders of magnitude faster than competing methods, permitting precise calculations with relatively little computational cost. Moreover, our results raise significant doubts that the gyroid (G) phase extends to infinite $\chi N$. With the increased precision, we are also able to resolve subtle free-energy differences, allowing us to investigate the layer stacking in the perforated-lamellar (PL) phase and the lattice arrangement of the close-packed spherical (S$_{cp}$) phase. Furthermore, our study sheds light on the existence of the newly discovered Fddd (O$^{70}$) morphology, showing that conformational asymmetry has a significant effect on its stability.
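The abstract does not spell out the Anderson-mixing iteration itself; as a rough, generic illustration of the technique (applied here to a toy scalar fixed point rather than the SCFT equations, with illustrative function and variable names), a sketch:

```python
import numpy as np

def anderson_mixing(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson mixing for the fixed-point problem x = g(x).

    Keeps the last m iterates/residuals and solves a small
    least-squares problem for the mixing coefficients.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [], []              # histories of iterates and residuals
    for _ in range(max_iter):
        fx = g(x) - x          # residual of the fixed-point map
        if np.linalg.norm(fx) < tol:
            break
        X.append(x.copy()); F.append(fx.copy())
        if len(F) > m:         # truncate the history to depth m
            X.pop(0); F.pop(0)
        if len(F) == 1:
            x = x + fx         # plain Picard step to start
        else:
            dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
            dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
            # mixing coefficients minimise ||fx - dF @ gamma||
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x = x + fx - (dX + dF) @ gamma
    return x
```

Keeping only the last m residual differences keeps the least-squares problem tiny, which is part of why the mixing step adds negligible cost per iteration.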
Abstract:
We have estimated the speed and direction of propagation of a number of Coronal Mass Ejections (CMEs) using single-spacecraft data from the STEREO Heliospheric Imager (HI) wide-field cameras. In general, these values are in good agreement with those predicted by Thernisien, Vourlidas, and Howard in Solar Phys. 256, 111 - 130 (2009) using a forward-modelling method to fit CMEs imaged by the STEREO COR2 coronagraphs. The directions of the CMEs predicted by both techniques are in good agreement despite the fact that many of the CMEs under study travel in directions that cause them to fade rapidly in the HI images. The velocities estimated from both techniques are in general agreement, although there are some interesting differences that may provide evidence for the influence of the ambient solar wind on the speed of CMEs. The majority of CMEs with a velocity estimated to be below 400 km s^-1 in the COR2 field of view have higher estimated velocities in the HI field of view, while, conversely, those with COR2 velocities estimated to be above 400 km s^-1 have lower estimated HI velocities. We interpret this as evidence for the deceleration of fast CMEs and the acceleration of slower CMEs by interaction with the ambient solar wind beyond the COR2 field of view. We also show that the uncertainties in our derived parameters are influenced by the range of elongations over which each CME can be tracked. In order to reduce the uncertainty in the predicted arrival time of a CME at 1 Astronomical Unit (AU) to within six hours, the CME needs to be tracked out to at least 30 degrees elongation. This is in good agreement with predictions of the accuracy of our technique based on Monte Carlo simulations.
Abstract:
The problem of adjusting the weights (learning) in multilayer feedforward neural networks (NNs) is known to be of high importance when utilizing NN techniques in various practical applications. The learning procedure should be performed as fast as possible and in a computationally simple fashion, two requirements that are usually not satisfied in practice by the methods developed so far. Moreover, the presence of random inaccuracies is usually not taken into account. In view of these three issues, the alternative stochastic approximation approach discussed in this paper seems very promising.
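The paper's specific scheme is not given in the abstract; as a minimal Robbins-Monro-style stochastic-approximation sketch, assuming a decaying gain a_k = a/(k+1) and a hypothetical noisy-gradient oracle (both names illustrative):

```python
import numpy as np

def robbins_monro_sgd(grad_noisy, w0, n_steps=5000, a=0.5):
    """Robbins-Monro stochastic approximation:
    w_{k+1} = w_k - a_k * (noisy gradient), with gains
    a_k = a/(k+1), so sum a_k = inf and sum a_k^2 < inf."""
    w = np.asarray(w0, dtype=float)
    for k in range(n_steps):
        w = w - (a / (k + 1)) * grad_noisy(w)
    return w
```

With this particular gain and a quadratic loss, the iteration reduces to averaging the noisy observations, which illustrates how the decaying step size suppresses the random inaccuracies the abstract mentions.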
Abstract:
The ever-increasing demand for high image quality requires fast and efficient methods for noise reduction. The best-known order-statistics filter is the median filter. A method is presented to calculate the median of a set of N W-bit integers in W/B time steps: blocks operating on B-bit slices are used to find B bits of the median at a time, using a novel quantum-like representation that allows the median to be computed faster than with the best-known method (W time steps). The general method allows a variety of designs to be synthesised systematically. A further novel architecture to calculate the median for a moving set of N integers is also discussed.
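The abstract's method recovers the median one bit-slice at a time; a software sketch of the underlying idea for the simplest case B = 1 (one bit per step, i.e. W steps rather than W/B; names are illustrative):

```python
def bitwise_median(values, width):
    """Select the rank-(N//2) element one bit at a time, MSB first.

    Each pass consumes one bit-slice: count how many remaining
    candidates have a 0 in this position, and keep whichever half
    the target rank falls into. Mirrors a W-step hardware pipeline.
    """
    rank = len(values) // 2            # lower median for even N
    candidates = list(values)
    result = 0
    for b in range(width - 1, -1, -1):
        zeros = [v for v in candidates if not (v >> b) & 1]
        if rank < len(zeros):
            candidates = zeros          # median's bit b is 0
        else:
            rank -= len(zeros)          # skip past the smaller half
            candidates = [v for v in candidates if (v >> b) & 1]
            result |= 1 << b            # median's bit b is 1
    return result
```

In hardware the per-bit counts come from popcounts over a bit-slice, which is what makes the step count depend on the word width W rather than on N.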
Abstract:
Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and the Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared, with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that the sea-ice models are not able to simulate flaw polynyas realistically when used without a fast-ice description. Our investigations indicate that, without landfast ice and with coarse horizontal resolution, the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya appears but because of a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is simulated realistically, but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvement. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scale features (here polynyas) in an Arctic shelf sea.
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in the original m-dimensional (mD) space. In order to perform the projection, only a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly where it was mostly tested, that is, for mapping text sets.
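As a hedged sketch of the least-squares idea (not the authors' exact formulation): each non-control point is asked to sit at the average of its mD-neighbours' 2D positions, control points contribute fixed-position rows, and the resulting overdetermined linear system is solved in the least-squares sense. Function and argument names are illustrative.

```python
import numpy as np

def lsp_project(neighbors, control_idx, control_pos):
    """Least-squares placement in the spirit of LSP.

    neighbors   : list of neighbour-index lists (from an mD metric)
    control_idx : indices of the pre-projected control points
    control_pos : their given 2D coordinates
    """
    n = len(neighbors)
    rows, rhs = [], []
    for i, nbrs in enumerate(neighbors):
        row = np.zeros(n)
        row[i] = 1.0                    # x_i - mean(neighbours) = 0
        for j in nbrs:
            row[j] = -1.0 / len(nbrs)
        rows.append(row); rhs.append(np.zeros(2))
    for idx, pos in zip(control_idx, control_pos):
        row = np.zeros(n); row[idx] = 1.0   # pin a control point
        rows.append(row); rhs.append(np.asarray(pos, float))
    A, b = np.vstack(rows), np.vstack(rhs)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X                            # n x 2 projected coordinates
```

In this sketch the control rows are ordinary least-squares rows, so control points may shift slightly from their given positions; weighting those rows heavily would pin them harder.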
Abstract:
This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method which obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
We propose a likelihood ratio test (LRT) with Bartlett correction in order to identify Granger causality between sets of time-series gene expression data. The performance of the proposed test is compared to a previously published bootstrap-based approach. The LRT is shown to be significantly faster and statistically powerful even under non-normal distributions. An R package named gGranger, containing an implementation of both Granger causality identification tests, is also provided.
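A minimal sketch of a lag-p Granger-causality LRT for a single pair of series, assuming Gaussian residuals; the small-sample adjustment used here (replacing n by n - k) is only in the spirit of a Bartlett correction, not the paper's exact correction, and all names are illustrative:

```python
import numpy as np
from scipy import stats

def granger_lrt(x, y, p=1):
    """LR test of 'x Granger-causes y' with p lags.

    Small-sample adjustment: the sample size n in the LR statistic
    n*log(RSS_r/RSS_u) is replaced by n - k, where k is the number
    of regressors in the unrestricted model.
    """
    x = np.asarray(x, float); y = np.asarray(y, float)
    n = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    lags_x = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    ones = np.ones((n, 1))
    Xr = np.hstack([ones, lags_y])            # restricted: y's own past
    Xu = np.hstack([ones, lags_y, lags_x])    # unrestricted: + x's past
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0])**2)
    k = Xu.shape[1]
    stat = (n - k) * np.log(rss(Xr) / rss(Xu))  # adjusted LR statistic
    pval = stats.chi2.sf(stat, df=p)            # chi-square, p restrictions
    return stat, pval
```

Under the null the statistic is asymptotically chi-square with p degrees of freedom, which is what makes the test much cheaper than a bootstrap of the same regression.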
Abstract:
A reliable and fast sensor for in vitro evaluation of the solar protection factors (SPFs) of cosmetic products, based on the photobleaching kinetics of a nanocrystalline TiO(2)/dye UV dosimeter, has been devised. The accuracy, robustness and suitability of the new device were demonstrated by the excellent match between the predicted and the in vivo results up to SPF 70 for four standard samples analyzed blind. These results strongly suggest that our device can be useful for routine SPF evaluation in laboratories devoted to the development or production of cosmetic formulations, since the conventional in vitro methods tend to exhibit unacceptably high errors above approximately SPF 30 and the conventional in vivo methods tend to be expensive and exceedingly time-consuming. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
A new compact system encompassing an in-flow gas-diffusion unit and a wall-jet amperometric FIA detector coated with a supramolecular porphyrin film was specially designed as an alternative to the time-consuming Monier-Williams method, allowing fast, reproducible and accurate analyses of free sulphite species in fruit juices. A linear response between 0.64 and 6.4 ppm of sodium sulphite, an LOD of 0.043 ppm, a relative standard deviation of +/- 1.5% (n = 10) and an analytical frequency of 85 analyses/h were obtained under optimised conditions. That analytical performance allows precise evaluation of the amount of free sulphite present in foods, providing an important comparison between the standard-addition and standard-injection methods. Although the former is the more frequently used, it was strongly influenced by matrix effects because of the unexpected reactivity of sulphite ions with the juice matrixes, leading to their partial consumption soon after addition. In contrast, the latter method was not susceptible to matrix effects, yielding accurate results and being more reliable for analytical purposes. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Two techniques, namely UV-vis and FTIR spectroscopy, have been employed to calculate the degree of substitution (DS) of cellulose carboxylic esters, including acetates (CAs), butyrates (CBs), and hexanoates (CHs). Regarding UV-vis spectroscopy, we have employed a novel approach based on measuring the dependence of lambda(max) of the intramolecular charge-transfer bands of polarity probes, adsorbed on the ester films, on the DS (solvatochromism). Additionally, we have revisited the use of FTIR for DS determination. Several methods have been used to plot Beer's law graphs, namely: absorption of KBr pellets pre-coated with CA; reflectance (DRIFTS) of CA-KBr solid-solid mixtures with or without 1,4-dicyanobenzene as an internal reference; and reflectance of KBr powder pre-coated with CA. The methods indicated are simple, fast, and accurate, requiring much less ester than the titration method. The probe method is independent of the experimental variables examined. (c) 2010 Published by Elsevier Ltd.
Abstract:
The aim of this study was to develop a fast capillary electrophoresis method for the determination of propranolol in pharmaceutical preparations. In the method development, the pH and constituents of the background electrolyte were selected using effective mobility versus pH curves. Benzylamine was used as the internal standard. The background electrolyte was composed of 60 mmol L^-1 tris(hydroxymethyl)aminomethane and 30 mmol L^-1 2-hydroxyisobutyric acid, at pH 8.1. Separation was conducted in a fused-silica capillary (32 cm total length, 8.5 cm effective length, 50 µm I.D.) with a short-end injection configuration and direct UV detection at 214 nm. The run time was only 14 s. Three different strategies were studied in order to develop a fast CE method with low total analysis time for propranolol analysis: low flush time (Lflush), 35 runs/h; without flush (Wflush), 52 runs/h; and Invert (switched polarity), 45 runs/h. Since the three strategies are statistically equivalent, Wflush was selected due to its higher analytical frequency in comparison with the other methods. Figures of merit of the proposed method include good linearity (R^2 > 0.9999), a limit of detection of 0.5 mg L^-1, inter-day precision better than 1.03% (n = 9), and recovery in the range of 95.1-104.5%. (C) 2009 Elsevier B.V. All rights reserved.