907 results for computationally efficient algorithm


Relevance: 20.00%

Abstract:

Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to segment successfully. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology represents the ultrasound image as a graph of image patches and uses user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high agreement with the ground truth provided by expert medical delineations in all applications (94% Dice scores on average), and the proposed algorithm compares favorably with the literature.
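The abstract above formulates segmentation as a continuous minimum cut on a graph of image patches with user labels as soft priors. As a toy stand-in (a discrete min-cut via Edmonds-Karp max-flow, not the authors' continuous formulation), the idea can be sketched: user-labelled patches get terminal edges, patch similarity gives the inter-patch weights, and the cut splits foreground from background.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts residual graph (mutated in place)."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:                     # push flow, add reverse edges
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        total += b

def min_cut_side(cap, s):
    """Nodes reachable from s in the residual graph = source side of the cut."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap.get(u, {}).items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

# Four image patches in a chain; patch 0 is user-labelled foreground, patch 3
# background (large-capacity terminal edges); weights encode patch similarity.
cap = {'s': {0: 100}, 0: {1: 10}, 1: {0: 10, 2: 1},
       2: {1: 1, 3: 10}, 3: {2: 10, 't': 100}, 't': {}}
max_flow(cap, 's', 't')
print(sorted(min_cut_side(cap, 's'), key=str))  # foreground side of the cut
```

The cut separates the two strongly-linked patch pairs at the weak 1-2 edge, which is exactly the role the soft label priors play in the full framework.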

Relevance: 20.00%

Abstract:

"See the abstract at the beginning of the document in the attached file"

Relevance: 20.00%

Abstract:

The UHPLC strategy, which combines sub-2 µm porous particles and ultra-high pressure (>1000 bar), was investigated against very high resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, the experimental conditions that reach maximal efficiency were determined using the kinetic plot representation for ΔPmax = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter that should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This is because peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise must be found: a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for t_grad > 250 min. Compared to 30 °C, peak capacities increased by about 20-30% for a constant gradient length at 90 °C, and gradient time decreased 2-fold for an identical peak capacity.
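The plate counts quoted above can be put in perspective with back-of-the-envelope arithmetic, assuming a reduced plate height h of about 2 (a typical optimum for well-packed columns) and a 1.7 µm particle diameter; these are illustrative assumptions, not the paper's kinetic-plot calculations.

```python
# N = L / H with H = h * dp (reduced plate height times particle diameter).
def plate_count(column_length_m, particle_dia_m, reduced_plate_height=2.0):
    return column_length_m / (reduced_plate_height * particle_dia_m)

dp = 1.7e-6                              # sub-2 um particle diameter (assumed)
print(round(plate_count(0.45, dp)))      # 450 mm column
print(round(plate_count(0.15, dp)))      # single 150 mm column
```

Numbers of this order of magnitude (roughly 10^5 plates for a 450 mm column) are consistent with the "around 100,000" figure in the abstract.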

Relevance: 20.00%

Abstract:

Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to the resources, but it does not guarantee any quality of service. Moreover, Grid does not provide performance isolation: the job of one user can influence the performance of another user's job. A further problem is that Grid users belong to the scientific community, and their jobs often require specific, customized software environments. Providing the right environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing provides full customization and control, but there is no simple procedure for submitting user jobs as there is in Grid. Grid computing can provide customized resources and performance to the user through virtualization. A virtual machine can join the Grid as an execution node, or it can be submitted as a job with user jobs inside. Where the first method gives quality of service and performance isolation, the second additionally provides customization and administration. In this thesis, a solution is proposed that enables virtual machine reuse, providing performance isolation together with customization and administration; the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request, and two scenarios are described to achieve this goal. In the first scenario, users submit their customized virtual machine as a job; the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured on the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs, so scenario 1 is deployed using the Condor VM universe.
The second scenario uses the VMware VIX API to script powering the remote virtual machines on and off. The experimental results show that, because scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live on the pool much faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer. The only pitfall of scenario 1 is the network traffic.
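Scenario 1 submits the virtual machine itself as a Condor VM-universe job. A submit description for that could look roughly like the sketch below; the attribute names follow typical Condor VM-universe syntax but are assumptions here and should be checked against the Condor version in use.

```text
# Illustrative Condor VM-universe submit file for scenario 1 (attribute
# names assumed, paths hypothetical)
universe                     = vm
vm_type                      = vmware
vm_memory                    = 512
vmware_dir                   = /path/to/custom-vm
vmware_should_transfer_files = true
log                          = vm_job.log
queue
```

The image transfer implied by `vmware_should_transfer_files` is precisely the network-traffic pitfall of scenario 1 noted above; scenario 2 avoids it by keeping preconfigured images on the execution system.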

Relevance: 20.00%

Abstract:

The role of the Saccharomyces cerevisiae peroxisomal acyl-coenzyme A (acyl-CoA) thioesterase (Pte1p) in fatty acid β-oxidation was studied by analyzing the in vitro kinetic activity of the purified protein and by measuring the carbon flux through the β-oxidation cycle in vivo, using the synthesis of peroxisomal polyhydroxyalkanoate (PHA) from the polymerization of 3-hydroxyacyl-CoAs as a marker. The amount of PHA synthesized from the degradation of 10-cis-heptadecenoic, tridecanoic, undecanoic, or nonanoic acids was equivalent or slightly reduced in the pte1Δ strain compared with the wild type. In contrast, a strong reduction in PHA synthesized from heptanoic acid and 8-methyl-nonanoic acid was observed for the pte1Δ strain compared with the wild type. The poor catabolism of 8-methyl-nonanoic acid via β-oxidation in pte1Δ negatively impacted the degradation of 10-cis-heptadecenoic acid and reduced the ability of the cells to grow efficiently in medium containing such fatty acids. An increase in the proportion of short chain 3-hydroxyacid monomers was observed in PHA synthesized in pte1Δ cells grown on a variety of fatty acids, indicating a reduction in the metabolism of short chain acyl-CoAs in these cells. A purified histidine-tagged Pte1p showed high activity toward short and medium chain length acyl-CoAs, including butyryl-CoA, decanoyl-CoA, and 8-methyl-nonanoyl-CoA. The kinetic parameters measured for the purified Pte1p are consistent with the involvement of this enzyme in the efficient metabolism of short straight- and branched-chain fatty acyl-CoAs by the β-oxidation cycle.
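In vitro kinetic characterization of an enzyme like Pte1p is typically summarized by Michaelis-Menten parameters. The sketch below shows the standard rate law with entirely made-up Vmax and Km values, not the paper's measurements.

```python
# Michaelis-Menten initial rate: v = Vmax * [S] / (Km + [S])
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical parameters for a short-chain substrate such as butyryl-CoA
vmax, km = 10.0, 25.0            # e.g. umol/min/mg and uM (assumed values)
for s in (5.0, 25.0, 250.0):     # substrate concentrations in uM
    print(f"[S]={s:6.1f} uM  v={michaelis_menten(s, vmax, km):.2f}")
```

Fitting such curves across substrates of different chain lengths is what yields the chain-length activity profile described in the abstract.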

Relevance: 20.00%

Abstract:

Synchronization behavior of electroencephalographic (EEG) signals is important for decoding information processing in the human brain. Modern multichannel EEG allows a transition from traditional measurements of synchronization in pairs of EEG signals to whole-brain synchronization maps. The latter can be based on bivariate measures (BM) via averaging over pair-wise values or, alternatively, on multivariate measures (MM), which directly ascribe a single value to the synchronization in a group. In order to compare BM versus MM, we applied nine different estimators to simulated multivariate time series with known parameters and to real EEGs. We found widespread correlations between BM and MM, which were almost frequency-independent for all the measures except coherence. The analysis of the behavior of synchronization measures in simulated settings with variable coupling strength, connection probability, and parameter mismatch showed that some of them, including the S-estimator, S-Renyi, omega, and coherence, are more sensitive to linear interdependences, while others, like mutual information and phase locking value, are more responsive to nonlinear effects. When choosing a synchronization measure for EEG analysis, one must consider these properties together with the fact that MM are computationally less expensive than BM and therefore more efficient for large-scale data sets.
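The S-estimator mentioned above is a multivariate measure that ascribes one synchronization value to a whole channel group; it is commonly defined from the eigenvalue spectrum of the channel correlation matrix. The sketch below follows that common definition and should be checked against the exact estimator used in the paper.

```python
import numpy as np

def s_estimator(x):
    """x: (channels, samples). Returns S in [0, 1]; 1 = full synchronization."""
    m = x.shape[0]
    lam = np.linalg.eigvalsh(np.corrcoef(x))   # eigenvalues of correlation matrix
    lam = np.clip(lam / m, 1e-12, None)        # normalize; guard against log(0)
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(m)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
sync = np.tile(np.sin(2 * np.pi * 10 * t), (4, 1))   # identical channels
sync += 1e-6 * rng.standard_normal(sync.shape)       # tiny noise breaks degeneracy
noise = rng.standard_normal((4, 1000))               # independent channels
print(s_estimator(sync), s_estimator(noise))         # near 1 vs near 0
```

The single scalar per group is what makes MM cheap for whole-brain maps, compared with averaging over all channel pairs for BM.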

Relevance: 20.00%

Abstract:

Genetically engineered bioreporters are an excellent complement to traditional methods of chemical analysis. The application of fluorescence flow cytometry to the detection of bioreporter response enables rapid and efficient characterization of a bacterial bioreporter population's response on a single-cell basis. In the present study, intrapopulation response variability was used to obtain higher analytical sensitivity and precision. We have analyzed flow cytometric data for an arsenic-sensitive bacterial bioreporter using an artificial neural network-based adaptive clustering approach (a single-layer perceptron model). Results for this approach are far superior to other methods that we have applied to this fluorescent bioreporter (e.g., the arsenic detection limit is 0.01 µM, substantially lower than with other detection methods/algorithms). The approach is computationally highly efficient and can be implemented on a real-time basis, and thus has potential for the future development of high-throughput screening applications.
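The abstract's classifier is a single-layer perceptron. A minimal perceptron-learning sketch on a linearly separable toy data set (not the authors' flow-cytometry pipeline) illustrates the update rule at its core:

```python
import numpy as np

def train_perceptron(x, y, lr=0.1, epochs=50):
    """x: (n, d) inputs, y: {0,1} labels. Returns weights (d+1,) incl. bias."""
    w = np.zeros(x.shape[1] + 1)
    xb = np.hstack([x, np.ones((x.shape[0], 1))])   # append constant bias input
    for _ in range(epochs):
        for xi, yi in zip(xb, y):
            pred = 1.0 if xi @ w > 0 else 0.0
            w += lr * (yi - pred) * xi               # perceptron update rule
    return w

# Toy "low response" vs "high response" populations along one fluorescence axis
x = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])
w = train_perceptron(x, y)
pred = (np.hstack([x, np.ones((4, 1))]) @ w > 0).astype(int)
print(pred)
```

For separable data the perceptron converges in a finite number of updates, and its single pass per sample is what keeps the real-time implementation mentioned above computationally cheap.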

Relevance: 20.00%

Abstract:

Schistosomes are endoparasites causing a serious human disease called schistosomiasis. Quantifying parasite genetic diversity is essential to understanding schistosomiasis epidemiology and disease transmission patterns. In this paper, we propose a novel, rapid, low-cost, and efficient DNA extraction method for egg, larval, and adult stages of Schistosoma mansoni. One euro makes it possible to perform 60,000 DNA extraction reactions at top speed (only 15 min of incubation and 5 handling steps).

Relevance: 20.00%

Abstract:

The geometry and connectivity of fractures exert a strong influence on the flow and transport properties of fracture networks. We present a novel approach to stochastically generate three-dimensional discrete networks of connected fractures that are conditioned to hydrological and geophysical data. A hierarchical rejection sampling algorithm is used to draw realizations from the posterior probability density function at different conditioning levels. The method is applied to a well-studied granitic formation using data acquired within two boreholes located 6 m apart. The prior models include 27 fractures with their geometry (position and orientation) bounded by information derived from single-hole ground-penetrating radar (GPR) data acquired during saline tracer tests and optical televiewer logs. Eleven cross-hole hydraulic connections between fractures in neighboring boreholes and the order in which the tracer arrives at different fractures are used for conditioning. Furthermore, the networks are conditioned to the observed relative hydraulic importance of the different hydraulic connections by numerically simulating the flow response. Among the conditioning data considered, constraints on the relative flow contributions were the most effective in determining the variability among the network realizations. Nevertheless, we find that the posterior model space is strongly determined by the imposed prior bounds. Strong prior bounds were derived from GPR measurements and helped to make the approach computationally feasible. We analyze a set of 230 posterior realizations that reproduce all data given their uncertainties assuming the same uniform transmissivity in all fractures. The posterior models provide valuable statistics on length scales and density of connected fractures, as well as their connectivity. 
In an additional analysis, effective transmissivity estimates of the posterior realizations indicate a strong influence of the discrete fracture network (DFN) structure, in that it induces large variations of equivalent transmissivities between realizations. The transmissivity estimates agree well with previous estimates at the site based on pumping, flowmeter, and temperature data.
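The hierarchical rejection sampler described above can be sketched generically: draw a candidate from the prior and test it against conditioning levels ordered from cheap to expensive, rejecting as early as possible. The "fracture" representation and its checks below are toy stand-ins, not the paper's GPR-derived constraints.

```python
import random

def hierarchical_rejection_sample(prior, levels, n_accept, seed=0):
    """prior: rng -> candidate; levels: cheap-to-expensive predicates."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        cand = prior(rng)
        if all(check(cand) for check in levels):  # all() short-circuits: cheap
            accepted.append(cand)                 # checks reject most candidates
    return accepted

# Toy "fracture" = (dip in degrees, length in m); level 1 mimics a geometric
# bound from GPR, level 2 a coarse hydraulic-connection test (both invented).
prior = lambda rng: (rng.uniform(0, 90), rng.uniform(1, 20))
levels = [lambda f: 30 <= f[0] <= 60,   # orientation within prior bounds
          lambda f: f[1] >= 5]          # long enough to connect the boreholes
nets = hierarchical_rejection_sample(prior, levels, 230)
print(len(nets))
```

Ordering the levels by cost is what makes the approach computationally feasible when the most expensive check (the flow simulation in the paper) is run only on candidates that survive the cheaper geometric ones.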

Relevance: 20.00%

Abstract:

In 2008, we celebrated the centenary of the discovery of Toxoplasma gondii. Although this ubiquitous protozoan can cause devastating damage in foetuses and newborns, its treatment is the one field in which, despite a huge body of research, we have made little progress: no treatment has yet been validated. Pregnant women who seroconvert are generally given spiramycin in order to reduce the risk of vertical transmission. However, to date, we have no evidence of the efficacy of this treatment, because no randomized controlled trials have yet been conducted. When foetal contamination is demonstrated, pyrimethamine, in association with sulfadoxine or sulfadiazine, is normally prescribed, but the effectiveness of this treatment remains to be shown. With regard to postnatal treatment, opinions vary considerably in terms of drugs, regimens, and length of therapy. Similarly, we do not have clear evidence to support routine antibiotic treatment of acute ocular toxoplasmosis. We must be aware that pregnant women and newborns are currently being given, empirically, potentially toxic drugs that have no proven benefit. We must make progress in this field through well-designed collaborative studies and by drawing the attention of policy makers to this disastrous and unsustainable situation.

Relevance: 20.00%

Abstract:

The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned for the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at every iteration (thus conservative fluxes can be obtained).
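The iterative-improvement idea above, an approximate operator applied repeatedly to the residual, can be illustrated with a preconditioned Richardson iteration. The sketch uses a Jacobi preconditioner on a small diagonally dominant system as a stand-in for the MsFV operator acting on a pressure system; it shows the iteration structure, not the MsFV method itself.

```python
import numpy as np

def preconditioned_richardson(a, b, m_inv, tol=1e-10, max_it=500):
    """x_{k+1} = x_k + M^{-1} (b - A x_k); returns (x, iteration count)."""
    x = np.zeros_like(b)
    for k in range(max_it):
        r = b - a @ x                 # fine-scale residual
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + m_inv @ r             # approximate (preconditioned) correction
    return x, max_it

# Small SPD, diagonally dominant matrix standing in for a pressure system
a = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
m_inv = np.diag(1.0 / np.diag(a))     # Jacobi preconditioner (toy M^{-1})
x, its = preconditioned_richardson(a, b, m_inv)
print(np.allclose(a @ x, b))
```

Wrapping such an approximate operator in a Krylov method like GMRES, as the paper does, accelerates convergence beyond this stationary iteration while preserving the properties of the operator at every iterate.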

Relevance: 20.00%

Abstract:

This paper presents a hybrid behavior-based scheme using reinforcement learning for high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi on-line neural Q-learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous variant of the Q-learning algorithm based on a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.
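SONQL approximates the Q-function with a neural network, but the update it performs is the standard Q-learning rule. A tabular toy (a 1-D corridor where the "vehicle" learns to move right toward a goal state) sketches that rule; this is not the paper's continuous, neural formulation.

```python
import random

def q_learning(n_states=5, actions=(-1, 1), alpha=0.5, gamma=0.9,
               episodes=200, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.choice(actions) if rng.random() < eps else \
                max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else -0.01     # goal reward, step cost
            target = r + gamma * max(q[(s2, b)] for b in actions) \
                     * (s2 != n_states - 1)              # no future value at goal
            q[(s, a)] += alpha * (target - q[(s, a)])    # Q-learning update
            s = s2
    return q

q = q_learning()
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)   # learned action in each non-goal state
```

Replacing the table with a network trained on the same target is the step SONQL takes to handle the continuous state spaces an AUV operates in.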

Relevance: 20.00%

Abstract:

This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low level, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reducing the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach is proposed that exploits the parallel organisation of every processor in the architecture.
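The matching score the paper parallelises is normalised correlation between image patches. A zero-mean normalised cross-correlation sketch shows why it tolerates the non-uniform illumination mentioned above: an affine change in brightness leaves the score unchanged.

```python
import numpy as np

def ncc(p, q):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    p = p - p.mean()                     # remove local brightness offset
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum())
    return float((p * q).sum() / denom)  # in [-1, 1]; 1 = perfect match

patch = np.arange(9.0).reshape(3, 3)
brighter = 2.0 * patch + 5.0             # same pattern, different gain/offset
print(ncc(patch, brighter))              # affine change does not affect the score
```

Because the same score is computed independently for every candidate patch pair, the workload maps naturally onto one processor per correlation window, which is the regular processing scheme the architecture exploits.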