873 results for Energy Efficient Algorithms


Relevance: 30.00%

Abstract:

This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be obtained quickly, in a natural way, and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
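
For illustration, a minimal sketch of the biased-randomization idea (hypothetical Python, not the authors' implementation): a classical greedy constructive heuristic whose next element is drawn from the cost-sorted candidate list with a skewed, geometric-like distribution instead of always taking the best candidate.

    import math
    import random

    def biased_index(n, beta=0.3):
        # Geometric-like bias toward position 0; larger beta = greedier.
        k = int(math.log(1.0 - random.random()) / math.log(1.0 - beta))
        return k % n

    def biased_randomized_greedy(items, cost, beta=0.3):
        # Classical greedy, except the next element is taken from a
        # biased random position in the cost-sorted candidate list.
        candidates = sorted(items, key=cost)
        solution = []
        while candidates:
            solution.append(candidates.pop(biased_index(len(candidates), beta)))
        return solution

    # Toy usage: order jobs by processing time, with biased random ties.
    jobs = {"a": 7, "b": 3, "c": 9, "d": 1}
    print(biased_randomized_greedy(jobs, cost=jobs.get))

Running the construction repeatedly yields many distinct near-greedy solutions, which is exactly the cheap diversification the abstract describes.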

Relevance: 30.00%

Abstract:

The UHPLC strategy, which combines sub-2 μm porous particles and ultra-high pressure (>1000 bar), was investigated against very high resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, experimental conditions to reach the maximal efficiency were determined using the kinetic plot representation for ΔP(max) = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter that should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This was attributed to the fact that peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise should be found: a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for t(grad) > 250 min. Compared to 30 °C, peak capacities at 90 °C were increased by about 20-30% for a constant gradient length, and the gradient time decreased 2-fold for an identical peak capacity.

Relevance: 30.00%

Abstract:

The aim of the present study was to compare, under the same nursing conditions, the energy-nitrogen balance and the protein turnover in small for gestational age (SGA) and appropriate for gestational age (AGA) low birthweight infants. We compared 8 SGA infants (mean ± SD: gestational age 35 ± 2 weeks, birthweight 1520 ± 330 g) to 11 AGA premature infants (32 ± 2 weeks, birthweight 1560 ± 240 g). When their rate of weight gain was above 15 g/kg/d (17.6 ± 3.0 and 18.2 ± 2.6 g/kg/d, mean postnatal age 18 ± 10 and 20 ± 9 d, respectively), they were studied with respect to their metabolizable energy intake, energy expenditure, energy and protein gain, and protein turnover. Energy balance was assessed as the difference between metabolizable energy and energy expenditure as measured by indirect calorimetry. Protein gain was calculated from the amount of retained nitrogen. Protein turnover was estimated by a stable isotope enrichment technique using repeated nasogastric administration of ¹⁵N-glycine for 72 h. Although there was no difference in metabolizable energy intakes (110 ± 12 versus 108 ± 11 kcal/kg/d), SGA infants had a higher rate of resting energy expenditure (64 ± 8 versus 57 ± 8 kcal/kg/d, P < 0.05). Protein gain and the composition of weight gain were very similar in both groups (2.0 ± 0.4 versus 2.1 ± 0.4 g protein/kg/d; 3.5 ± 1.1 versus 3.3 ± 1.4 g fat/kg/d in SGA and AGA infants, respectively). However, the rate of protein synthesis was significantly lower in SGA infants (7.7 ± 1.6 g/kg/d) than in AGA infants (9.7 ± 2.8 g/kg/d; P < 0.05). It is concluded that SGA infants have a more efficient protein gain/protein synthesis ratio, since for the same weight and protein gains they show a 20 per cent slower protein turnover. They might therefore tolerate slightly higher protein intakes. Postconceptional age seems to be an important factor in the regulation of protein turnover.

Relevance: 30.00%

Abstract:

Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate the CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K² / (1 − B_K²), with B_K determined from the coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of ΔE_K over all discarded Ks. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error that can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm⁻¹) is achieved in a model space M of 1.4 × 10⁹ CSFs (1.1 × 10¹² determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10¹² CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need for a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
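
A hedged sketch of the selection step described above (Python; the variable names and threshold rule are illustrative assumptions): score each disconnected configuration with Brown's formula and accumulate the discarded contributions into ΔE_dis.

    def brown_energy(E, H_KK, B_K):
        # Brown's estimate of the energy contribution of configuration K:
        # dE_K = (E - H_KK) * B_K**2 / (1 - B_K**2)
        return (E - H_KK) * B_K**2 / (1.0 - B_K**2)

    def select_disconnected(configs, E, threshold):
        # configs: list of (H_KK, B_K) pairs, with B_K pre-estimated from
        # products of singly/doubly excited CI coefficients.
        # Keep configurations whose |dE_K| exceeds the threshold; sum the
        # discarded contributions into the truncation error dE_dis.
        selected, dE_dis = [], 0.0
        for H_KK, B_K in configs:
            dE_K = brown_energy(E, H_KK, B_K)
            if abs(dE_K) >= threshold:
                selected.append((H_KK, B_K))
            else:
                dE_dis += dE_K
        return selected, dE_dis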

Relevance: 30.00%

Abstract:

An implicitly parallel method for integral-block driven restricted active space self-consistent field (RASSCF) algorithms is presented. The approach is based on a model space representation of the RAS active orbitals, with an efficient expansion of the model subspaces. The applicability of the method is demonstrated with a RASSCF investigation of the first two excited states of indole.

Relevance: 30.00%

Abstract:

The recommended dietary allowances of many expert committees (UK DHSS 1979, FAO/WHO/UNU 1985, USA NRC 1989) have set out the extra energy requirements necessary to support lactation on the basis of an efficiency of 80 per cent for human milk production. The metabolic efficiency of milk synthesis can be derived from measurements of resting energy expenditure in lactating women and in a matched control group of non-pregnant, non-lactating women. The results of the present study in Gambian women, as well as a review of human studies on energy expenditure during lactation performed in different countries, suggest an efficiency of human milk synthesis greater than the value currently used by expert committees. We propose that an average figure of 95 per cent would be more appropriate for calculating the energy cost of human lactation.
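
To see what the proposed revision implies, a back-of-the-envelope comparison (Python; the milk energy output figure is an illustrative assumption, not a value from the study):

    # Illustrative only: assume ~500 kcal/day exported in milk.
    milk_energy = 500.0  # kcal/day (hypothetical figure)

    cost_80 = milk_energy / 0.80  # committee assumption: 625 kcal/day
    cost_95 = milk_energy / 0.95  # proposed efficiency: ~526 kcal/day
    print(cost_80 - cost_95)      # ~99 kcal/day lower estimated cost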

Relevance: 30.00%

Abstract:

From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process that generates different alternative initial solutions of similar quality by applying biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic by simply incorporating our biased randomization process, together with a high-quality pseudo-random number generator.
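
A compact sketch of the ILS loop described above (hypothetical Python; the perturbation and acceptance rules are generic placeholders, not the exact ILS-ESP operators):

    import random
    import time

    def perturb(perm):
        # Placeholder perturbation: two random swaps on the permutation.
        p = list(perm)
        for _ in range(2):
            i, j = random.sample(range(len(p)), 2)
            p[i], p[j] = p[j], p[i]
        return p

    def iterated_local_search(initial, local_search, cost, seconds=5.0):
        # Generic ILS skeleton: improve, perturb, re-improve, accept/reject.
        best = current = local_search(initial)
        deadline = time.time() + seconds
        while time.time() < deadline:
            candidate = local_search(perturb(current))
            if cost(candidate) <= cost(current):   # simple acceptance rule
                current = candidate
            if cost(candidate) < cost(best):
                best = candidate
        return best

The caller supplies the problem-specific local_search and cost functions; with a biased-randomized constructive heuristic producing the initial permutation, several such loops can run independently in parallel.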

Relevance: 30.00%

Abstract:

The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
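
Schematically, the model is a multiple linear regression from two implicit-solvent descriptors to log Po/w. A minimal fit in Python/numpy (the descriptor values and data below are placeholders, not the published iLOGP coefficients):

    import numpy as np

    # Hypothetical training data: two GB/SA solvation terms per molecule
    # and the corresponding measured log P values.
    X = np.array([[ -9.1, 2.3],
                  [ -4.2, 3.1],
                  [-12.5, 1.8],
                  [ -6.7, 2.9]])
    y = np.array([1.2, 2.8, -0.4, 1.9])

    A = np.column_stack([X, np.ones(len(X))])     # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
    print(coef, A @ coef)                         # weights and fitted log P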

Relevance: 30.00%

Abstract:

Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo α-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we linearly reduce nominal unification problems to a sequence of freshness constraints and equalities between atoms, modulo a permutation, using ideas from Paterson and Wegman for first-order unification. Second, we prove that solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas from Brown and Tarjan for unbalanced merging, these reduced problems could be solved more efficiently.
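
For intuition, the Paterson-Wegman idea borrowed from first-order unification can be sketched with a union-find over term nodes (Python; this illustrates the first-order case only, without binders or permutations):

    def unify(s, t, parent=None):
        # Union-find style first-order unification sketch.
        # Terms: ('var', name) for variables, (fname, arg1, ...) otherwise.
        # Occurs-check omitted for brevity.
        if parent is None:
            parent = {}

        def find(u):
            while parent.get(u, u) != u:
                u = parent[u]
            return u

        s, t = find(s), find(t)
        if s == t:
            return parent
        if s[0] == 'var':
            parent[s] = t
            return parent
        if t[0] == 'var':
            parent[t] = s
            return parent
        if s[0] != t[0] or len(s) != len(t):
            return None            # clash: different symbols or arities
        parent[s] = t              # merge classes before recursing
        for a, b in zip(s[1:], t[1:]):
            if unify(a, b, parent) is None:
                return None
        return parent

    # f(x, a) unifies with f(a, y): both variables map to the atom a.
    print(unify(('f', ('var', 'x'), ('a',)), ('f', ('a',), ('var', 'y'))))

Merging equivalence classes before recursing is what keeps the shared-structure traversal near-linear; the paper's contribution is extending this style of reduction to the nominal setting.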

Relevance: 30.00%

Abstract:

Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods, such as the well-known Chan-Vese method, fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust w.r.t. the speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
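
A minimal sketch of the patch distance used above (Python/numpy; the flattening and zero-variance convention are illustrative choices):

    import numpy as np

    def pearson_distance(p, q):
        # Pearson distance between two image patches: 1 - correlation.
        # Invariance to affine intensity changes helps against speckle.
        p = np.ravel(p).astype(float)
        q = np.ravel(q).astype(float)
        p -= p.mean()
        q -= q.mean()
        denom = np.linalg.norm(p) * np.linalg.norm(q)
        if denom == 0.0:
            return 1.0             # constant patch: define maximal distance
        return 1.0 - float(p @ q) / denom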

Relevance: 30.00%

Abstract:

Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93 μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The strongest correlation was between TBS and connectivity density, with TBS explaining roughly 67.2% of the variance. Based on multivariate linear regression modeling, we established a model for interpreting the relationship between TBS and 3D bone microarchitecture parameters. This model indicates that TBS adds value and power of differentiation between samples with similar BMDs but different bone microarchitectures. It has been shown that it is possible to estimate bone microarchitecture status from DXA imaging using TBS.
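
As a quick check on the statistic quoted above: the share of variance explained is the squared correlation, so 67.2% explained variance corresponds to a correlation of roughly 0.82.

    r_squared = 0.672   # variance in connectivity density explained by TBS
    r = r_squared ** 0.5
    print(round(r, 2))  # ~0.82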

Relevance: 30.00%

Abstract:

In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying, and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. We thus show the advantages of the proposed FastGAF method: it compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from those of more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
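
The splitting idea can be sketched on a one-dimensional toy energy: a quadratic data term in u, a smoothness term in v, and the constraint u = v enforced with an augmented Lagrangian (Python/numpy; an illustrative analogue, not the GAF functional itself):

    import numpy as np

    def toy_splitting(d, lam=1.0, rho=5.0, iters=100):
        # Minimize ||u - d||^2 + lam*||D v||^2 subject to u = v,
        # alternating u and v updates with an augmented Lagrangian y.
        n = len(d)
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward differences
        K = 2.0 * lam * D.T @ D + rho * np.eye(n)  # v-subproblem matrix
        u, v, y = d.copy(), d.copy(), np.zeros(n)
        for _ in range(iters):
            u = (2.0 * d - y + rho * v) / (2.0 + rho)  # data term, closed form
            v = np.linalg.solve(K, y + rho * u)        # smoothness, linear solve
            y += rho * (u - v)                         # dual ascent on u = v
        return u

    noisy = np.sin(np.linspace(0, 3, 50)) + 0.3 * np.random.randn(50)
    print(np.round(toy_splitting(noisy)[:5], 3))

Each subproblem stays simple (a pointwise update and a linear solve), which is the numerical advantage the splitting buys over the original ill-behaved flow.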

Relevance: 30.00%

Abstract:

OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required due to the low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital, and a hypothetical screen for very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and verification-biased sampling, each with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce the sample size by up to 60% compared with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
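
For orientation, the sample-size driver under simple random sampling can be sketched with the standard normal-approximation formula (Python; illustrative, not the authors' exact design):

    from math import ceil

    def verified_sample_size(se=0.5, precision=0.1, prevalence=0.05, z=1.96):
        # Records needed so the estimated sensitivity `se` has confidence
        # half-width `precision` at ~95%: flagged-positive cases required,
        # scaled up by how rarely the event occurs.
        positives_needed = z**2 * se * (1 - se) / precision**2
        return ceil(positives_needed / prevalence)

    print(verified_sample_size())                  # ~1921 records
    print(verified_sample_size(prevalence=0.005))  # rare PSI: ~19208 records

The inverse dependence on prevalence is why rare PSIs blow up the sample size, and why a design that verifies mostly test-positive records can cut it substantially.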

Relevance: 30.00%

Abstract:

Regulatory gene networks contain generic modules, such as those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes, which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the underlying Markov process simulated by the Gillespie algorithm, providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
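
A minimal Gillespie stochastic-simulation sketch for a self-regulated gene (Python; the negative-feedback propensity model is an illustrative choice, not the paper's exact network):

    import random

    def gillespie_self_regulated(k_on=20.0, k_off=1.0, K=10.0, t_end=1000.0):
        # SSA for a self-repressing gene: protein count n is produced at
        # rate k_on/(1 + n/K) (negative feedback) and degraded at k_off*n.
        t, n, samples = 0.0, 0, []
        while t < t_end:
            a_prod = k_on / (1.0 + n / K)   # production propensity
            a_deg = k_off * n               # degradation propensity
            a_tot = a_prod + a_deg
            t += random.expovariate(a_tot)  # time to next reaction
            if random.random() < a_prod / a_tot:
                n += 1
            else:
                n -= 1
            samples.append(n)
        return samples

    samples = gillespie_self_regulated()
    print(sum(samples) / len(samples))      # rough steady-state mean copy number

Long-run histograms of such trajectories approximate the steady-state distribution that the paper characterizes with dedicated numerical algorithms instead of brute-force simulation.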

Relevance: 30.00%

Abstract:

The objective of this work was to verify whether the reflected energy of soils can characterize and discriminate them. A spectroradiometer (spectral reflectance between 400 and 2,500 nm) was used in the laboratory. The soils evaluated are located in the Bauru region, SP, Brazil, and are classified as Typic Argiudoll (TR), Typic Eutrorthox (LR), Typic Argiudoll (PE), Typic Haplortox (LE), Typic Paleudalf (PV), and Typic Quartzipsamment (AQ). They were characterized by their spectral reflectance as well as by conventional descriptive methods (Brazilian and international), according to the types of spectral curves. A method for the descriptive spectral evaluation of soils was established. It was possible to characterize and discriminate the soils by their spectral reflectance, with the exception of LR and TR. The spectral differences were best identified by the general shape of the spectral curves, the intensity of the absorption bands, and angle tendencies. These characteristics were mainly influenced by organic matter, iron, granulometry, and mineralogical constituents. Iron and clay contents decreased across the soils LR/TR, PE, LE, PV, and AQ, in that sequence, leading to higher reflectance intensity and shape variations. Soils of the same group with different clay textures could be discriminated. The conventional descriptive evaluation of spectral curves was less efficient at discriminating soils. Simulated orbital data discriminated soils mainly by bands 5 and 7.