932 results for efficient algorithm
Abstract:
We compared the influence of bug density on the capacity of Triatoma infestans and Panstrongylus megistus to obtain blood meals from non-anaesthetized mice. Regression analysis of the increase in body weight (mg) versus density (no. of bugs/mouse) showed no correlation in experiments with anaesthetized mice (AM). In experiments with non-anaesthetized mice (NAM), the weight increase was inversely proportional to density. The regression slope for blood meal size on density was less steep for T. infestans than for P. megistus (-1.9 and -3.0, respectively). The average weight increase of P. megistus nymphs in experiments with AM was higher than that of T. infestans nymphs; in experiments with NAM, however, these results were inverted. Mortality of P. megistus was significantly higher than that of T. infestans with NAM, whereas very low mortality was observed in experiments with AM. Considering the mortality and the slope of the regression line for NAM, T. infestans is more efficient than P. megistus in obtaining blood meals at similar densities, possibly because it causes less irritation to the mice. The better exploitation of the blood source by T. infestans compared with P. megistus at similar densities favours the maintenance of a better nutritional status at higher densities. This could explain epidemiological findings in which T. infestans not only succeeds in establishing larger colonies but also dislodges P. megistus from human dwellings when it is introduced into areas where the latter species prevails.
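The species comparison above rests on fitting linear regressions of weight gain against bug density and comparing the slopes. A minimal sketch of that calculation with NumPy; the densities and weight gains below are hypothetical illustrations, not the study's measurements:

```python
import numpy as np

# Hypothetical data: bug density (bugs/mouse) and mean weight increase (mg).
density = np.array([5, 10, 15, 20, 25])
gain_infestans = np.array([55, 45, 36, 26, 17])   # illustrative values only
gain_megistus = np.array([60, 45, 30, 15, 1])     # illustrative values only

# Least-squares linear fit: the slope quantifies how meal size drops
# as density rises; a shallower slope means less competition cost.
slope_inf, _ = np.polyfit(density, gain_infestans, 1)
slope_meg, _ = np.polyfit(density, gain_megistus, 1)

print(f"T. infestans slope: {slope_inf:.1f} mg per bug")   # ~ -1.9
print(f"P. megistus slope:  {slope_meg:.1f} mg per bug")   # ~ -3.0
```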
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
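A minimal sketch of the FMG/OPT ingredient described above: the coarse-grid correction is treated as a search direction on the fine grid and scaled by a line search (exact here, since the toy objective is quadratic). The 1D model problem and the injection/interpolation grid transfers are illustrative assumptions, not the paper's variational optical flow models:

```python
import numpy as np

def laplacian(n):
    """Tridiagonal SPD matrix for the toy objective f(x) = 0.5*x'Ax - b'x."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolong(vc, n_fine):
    """Linear interpolation from the coarse grid back to the fine grid."""
    return np.interp(np.linspace(0, 1, n_fine), np.linspace(0, 1, len(vc)), vc)

n = 65
A, b, x = laplacian(n), np.ones(n), np.zeros(n)

for cycle in range(10):
    r = b - A @ x                                  # residual = -grad f(x)
    rc = r[::2]                                    # restriction by injection
    ec = np.linalg.solve(laplacian(rc.size), rc)   # coarse-grid correction
    d = prolong(ec, n)                             # correction as fine-grid search direction
    alpha = (d @ r) / (d @ A @ d)                  # line search absorbs the grid scaling
    x = x + alpha * d
    for _ in range(3):                             # a few fine-grid descent steps
        r = b - A @ x
        x = x + ((r @ r) / (r @ A @ r)) * r

print("final residual norm:", np.linalg.norm(b - A @ x))
```

Note how the line search removes the need to scale the prolongated correction by hand, which is the point of treating it as a search direction.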
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of good alternative solutions can be obtained quickly, in a natural way, and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
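As a sketch of the idea, the following applies biased randomization to a nearest-neighbour heuristic on a toy TSP instance: candidates are sorted by the greedy criterion, then sampled with a geometric (skewed) distribution so that better candidates are favoured but not always chosen. The instance and the skew parameter beta are illustrative assumptions:

```python
import math
import random

def biased_pick(sorted_candidates, beta=0.3):
    """Sample index i with P(i) proportional to beta*(1-beta)**i, truncated."""
    i = int(math.log(1.0 - random.random()) / math.log(1.0 - beta))
    return sorted_candidates[min(i, len(sorted_candidates) - 1)]

def biased_nearest_neighbour(points, beta=0.3):
    """Nearest-neighbour TSP construction with biased-randomized choices."""
    unvisited, tour = list(range(1, len(points))), [0]
    while unvisited:
        last = points[tour[-1]]
        unvisited.sort(key=lambda j: math.dist(last, points[j]))
        nxt = biased_pick(unvisited, beta)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]

def tour_length(t):
    return sum(math.dist(pts[t[i]], pts[t[(i + 1) % len(t)]]) for i in range(len(t)))

# Many fast biased runs yield a set of good alternative tours; keep the best.
best = min((biased_nearest_neighbour(pts) for _ in range(200)), key=tour_length)
print(f"best tour length over 200 biased runs: {tour_length(best):.3f}")
```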
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high agreement with the ground truth provided by medical expert delineations in all applications (an average Dice coefficient of 94%), and the proposed algorithm compares favorably with the literature.
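As an illustration of the seed-based minimum-cut formulation, the following is a discrete analogue: a graph cut on pixels via networkx, rather than the paper's continuous minimum cut on a graph of patches. The image, seeds, and edge-weight model are illustrative assumptions:

```python
import networkx as nx
import numpy as np

# Toy image: a bright square "object" on a dark background, plus noise.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
img += 0.1 * np.random.default_rng(0).normal(size=img.shape)

G = nx.grid_2d_graph(*img.shape)
for (u, v) in G.edges():
    # High capacity between similar pixels: cuts prefer intensity edges.
    G[u][v]["capacity"] = np.exp(-((img[u] - img[v]) ** 2) / (2 * 0.1 ** 2))

# User seeds act as soft priors: tie them strongly to source/sink terminals.
G.add_edge("obj", (4, 4), capacity=1e6)   # object seed
G.add_edge("bkg", (0, 0), capacity=1e6)   # background seed

cut_value, (obj_side, _) = nx.minimum_cut(G, "obj", "bkg")
mask = np.zeros(img.shape, dtype=bool)
for node in obj_side:
    if node != "obj":
        mask[node] = True
print(mask.astype(int))                   # recovered segmentation mask
```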
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt"
Abstract:
The UHPLC strategy, which combines sub-2 μm porous particles and ultra-high pressure (>1000 bar), was investigated considering very high resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, the experimental conditions needed to reach maximal efficiency were determined using the kinetic plot representation for ΔPmax = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter that should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This was attributed to the fact that peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise should be found: a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for tgrad > 250 min. Compared to 30 °C, peak capacities were increased by about 20-30% for a constant gradient length at 90 °C, and gradient time decreased 2-fold for an identical peak capacity.
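A back-of-the-envelope check of the plate counts quoted above, using the standard column-efficiency relation N = L/H between column length L and plate height H; the plate height value is an assumption (a few particle diameters for a well-packed sub-2 μm column):

```python
# Assumed plate height in m (~2.6 x a 1.7 um particle diameter).
H = 4.5e-6
for L_mm in (150, 450, 1350):    # single column, 450 mm limit, ~300k-plate case
    N = (L_mm * 1e-3) / H        # N = L / H
    print(f"L = {L_mm} mm  ->  N ~ {N:,.0f} plates")
# 450 mm -> ~100,000 plates, in line with the abstract's figure; reaching
# ~300,000 plates would call for a column on the order of 1.35 m.
```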
Abstract:
Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to the resources but does not guarantee any quality of service. Moreover, Grid does not provide performance isolation; the job of one user can influence the performance of another user's job. A further problem is that Grid users belong to the scientific community and their jobs often require specific and customized software environments. Providing the perfect environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing provides full customization and control, but there is no simple procedure for submitting user jobs as in Grid. Grid computing can provide customized resources and performance to the user using virtualization. A virtual machine can join the Grid as an execution node, or it can be submitted as a job with user jobs inside. Where the first method gives quality of service and performance isolation, the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, which provides performance isolation together with customization and administration; the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request. Two scenarios are described to achieve this goal. In the first scenario, users submit their customized virtual machine as a job; the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured on the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs: scenario 1 is deployed using the Condor VM universe, while the second scenario uses the VMware VIX API to script powering the remote virtual machines on and off. The experimental results show that, since scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live in the pool much faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer. The only pitfall of scenario 1 is the network traffic.
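A hedged sketch of scenario 1, submitting a customized virtual machine as a Condor VM-universe job from Python. The submit-file keywords follow Condor's VM-universe documentation for VMware, but the exact keywords, paths, and values are assumptions to verify against the installed Condor and VMware Server versions:

```python
import subprocess
from pathlib import Path

# Submit-file keywords per Condor's VM-universe docs for VMware; treat the
# exact names, paths, and values as assumptions for the local installation.
submit = """\
universe                     = vm
vm_type                      = vmware
vm_memory                    = 512
vmware_dir                   = /home/user/vms/custom_vm
vmware_should_transfer_files = True
log                          = custom_vm.log
queue
"""

Path("custom_vm.sub").write_text(submit)
# condor_submit hands the VM image to the pool; it boots on an execute node
# and, as in scenario 1, joins the Grid pool once powered on.
subprocess.run(["condor_submit", "custom_vm.sub"], check=True)
```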
Abstract:
The role of the Saccharomyces cerevisiae peroxisomal acyl-coenzyme A (acyl-CoA) thioesterase (Pte1p) in fatty acid beta-oxidation was studied by analyzing the in vitro kinetic activity of the purified protein as well as by measuring the carbon flux through the beta-oxidation cycle in vivo, using the synthesis of peroxisomal polyhydroxyalkanoate (PHA) from the polymerization of 3-hydroxyacyl-CoAs as a marker. The amount of PHA synthesized from the degradation of 10-cis-heptadecenoic, tridecanoic, undecanoic, or nonanoic acids was equivalent or slightly reduced in the pte1Delta strain compared with the wild type. In contrast, a strong reduction in PHA synthesized from heptanoic acid and 8-methyl-nonanoic acid was observed for the pte1Delta strain compared with the wild type. The poor catabolism of 8-methyl-nonanoic acid via beta-oxidation in pte1Delta negatively impacted the degradation of 10-cis-heptadecenoic acid and reduced the ability of the cells to grow efficiently in medium containing such fatty acids. An increase in the proportion of short chain 3-hydroxyacid monomers was observed in PHA synthesized in pte1Delta cells grown on a variety of fatty acids, indicating a reduction in the metabolism of short chain acyl-CoAs in these cells. A purified histidine-tagged Pte1p showed high activity toward short and medium chain length acyl-CoAs, including butyryl-CoA, decanoyl-CoA and 8-methyl-nonanoyl-CoA. The kinetic parameters measured for the purified Pte1p are consistent with the involvement of this enzyme in the efficient metabolism of short straight- and branched-chain fatty acyl-CoAs by the beta-oxidation cycle.
Abstract:
Schistosomes are endoparasites causing a serious human disease called schistosomiasis. The quantification of parasite genetic diversity is essential for understanding schistosomiasis epidemiology and disease transmission patterns. In this paper, we propose a novel assay: a rapid, low-cost and efficient DNA extraction method for the egg, larval and adult stages of Schistosoma mansoni. One euro is enough to perform 60,000 DNA extraction reactions, at top speed (only 15 min of incubation and 5 handling steps).
Abstract:
In 2008, we celebrated the centenary of the discovery of Toxoplasma gondii. Although this ubiquitous protozoan can cause devastating damage in foetuses and newborns, its treatment is the one field in which little progress has been made: despite a huge body of research, no treatment has yet been validated. Pregnant women who seroconvert are generally given spiramycin in order to reduce the risk of vertical transmission. However, to date we have no evidence of the efficacy of this treatment, because no randomized controlled trials have yet been conducted. When foetal contamination is demonstrated, pyrimethamine, in association with sulfadoxine or sulfadiazine, is normally prescribed, but the effectiveness of this treatment remains to be shown. With regard to postnatal treatment, opinions vary considerably in terms of drugs, regimens and length of therapy. Similarly, we do not have clear evidence to support routine antibiotic treatment of acute ocular toxoplasmosis. We must be aware that pregnant women and newborns are currently being given, empirically, potentially toxic drugs that have no proven benefit. We must make progress in this field through well-designed collaborative studies and by drawing the attention of policy makers to this disastrous and unsustainable situation.
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous (elliptic or parabolic) problems; it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow an arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at any iteration (so that conservative fluxes can be obtained).
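The "preconditioned GMRES" viewpoint can be sketched with SciPy: GMRES on a fine-scale elliptic system, preconditioned by a cheap two-level approximate solver. The preconditioner below is a generic coarse-correction-plus-smoothing stand-in, not the MsFV operator itself; the problem size and operators are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                   # fine cells per direction
# 2D Poisson operator as a stand-in for a fine-scale pressure equation.
T = sp.diags([-1, 2, -1], [-1, 0, 1], (n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
b = np.ones(n * n)

# Aggregation-based coarse space: constant basis on 8x8 blocks of cells.
c = n // 8
R = sp.lil_matrix((c * c, n * n))
for i in range(n):
    for j in range(n):
        R[(i // 8) * c + j // 8, i * n + j] = 1.0
R = R.tocsr()
coarse_solve = spla.factorized((R @ A @ R.T).tocsc())
D_inv = 1.0 / A.diagonal()

def two_level(r):
    """Coarse-grid correction followed by one Jacobi smoothing step."""
    x = R.T @ coarse_solve(R @ r)
    x += D_inv * (r - A @ x)
    return x

M = spla.LinearOperator(A.shape, matvec=two_level)
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```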