25 results for Energy Efficient Algorithms
at Université de Lausanne, Switzerland
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance, therefore, depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relation defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of the local optima basins of attraction and define weighted transitions between basins by accounting for all the possible ways of crossing the frontier between basins via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured by the performance of trajectory-based local search heuristics.
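As a rough illustration of the escape-edge construction described in this abstract, the sketch below builds an approximate local optima network for a tiny NK landscape: it hill-climbs under a one-bit-flip operator, applies random kick moves at each sampled optimum, and records where the subsequent climbs land as weighted directed edges. The NK parameters, kick strength, and sample counts are arbitrary assumptions for illustration, not the sampling protocol used in the thesis.

```python
# Sketch: approximate local optima network for a small NK landscape (assumed parameters).
import itertools
import random
import networkx as nx

N, K = 10, 2
random.seed(0)
# Random fitness contribution tables: one per locus, indexed by the locus
# plus its K epistatic neighbours.
tables = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(s):
    return sum(tables[i][tuple(s[(i + j) % N] for j in range(K + 1))]
               for i in range(N)) / N

def hill_climb(s):
    """Best-improvement local search under the one-bit-flip move operator."""
    s = list(s)
    while True:
        neighbours = [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(N)]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(s):
            return tuple(s)
        s = best

def kick(s, strength=2):
    """Random perturbation flipping `strength` bits (an assumed kick move)."""
    s = list(s)
    for i in random.sample(range(N), strength):
        s[i] = 1 - s[i]
    return s

# Estimate escape edges: from each sampled optimum, apply kicks and re-climb.
G = nx.DiGraph()
optima = {hill_climb([random.randint(0, 1) for _ in range(N)]) for _ in range(200)}
for o in optima:
    for _ in range(100):
        d = hill_climb(kick(o))
        w = G.get_edge_data(o, d, {"weight": 0})["weight"]
        G.add_edge(o, d, weight=w + 1)

print(G.number_of_nodes(), "local optima,", G.number_of_edges(), "escape edges")
```

Standard complex-network metrics (degree distributions, clustering, strength of self-loops, and so on) can then be computed on G with networkx to characterize the landscape.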
Abstract:
Tractography is a class of algorithms that aim to map in vivo the major neuronal pathways in the white matter from diffusion magnetic resonance imaging (MRI) data. These techniques offer a powerful tool to noninvasively investigate, at the macroscopic scale, the architecture of the neuronal connections of the brain. Unfortunately, however, the reconstructions recovered with existing tractography algorithms are not truly quantitative, even though diffusion MRI is a quantitative modality by nature. In fact, several techniques have been proposed in recent years to estimate, at the voxel level, intrinsic microstructural features of the tissue, such as axonal density and diameter, by using multicompartment models. In this paper, we present a novel framework to reestablish the link between tractography and tissue microstructure. Starting from an input set of candidate fiber tracts, which are estimated from the data using standard fiber-tracking techniques, we model the diffusion MRI signal in each voxel of the image as a linear combination of the restricted and hindered contributions generated in every location of the brain by these candidate tracts. We then seek the global weight of each of them, i.e., the effective contribution or volume, such that they best fit the measured signal globally. We demonstrate that these weights can be easily recovered by solving a global convex optimization problem with efficient algorithms. The effectiveness of our approach has been evaluated both on a realistic phantom with known ground truth and on in vivo brain data. Results clearly demonstrate the benefits of the proposed formulation, opening new perspectives for a more quantitative and biologically plausible assessment of the structural connectivity of the brain.
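The global fitting step can be pictured as a non-negative least-squares problem. The sketch below, using synthetic stand-in data rather than real diffusion MRI signals and an assumed dictionary matrix A of per-tract contributions, shows how such non-negative weights can be recovered with an off-the-shelf convex solver; it is only an illustration of the problem class, not the authors' implementation.

```python
# Sketch: recover non-negative tract weights by non-negative least squares (synthetic data).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_measurements, n_tracts = 500, 40               # assumed sizes for illustration
A = rng.random((n_measurements, n_tracts))       # tract contribution dictionary (stand-in)
true_w = np.zeros(n_tracts)
true_w[rng.choice(n_tracts, 8, replace=False)] = rng.random(8)   # few active tracts
y = A @ true_w + 0.01 * rng.standard_normal(n_measurements)      # "measured" signal

w, residual = nnls(A, y)   # convex problem: min ||A w - y||_2  s.t.  w >= 0
print("recovered", np.count_nonzero(w > 1e-3), "active tracts, residual", round(residual, 4))
```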
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs to an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was omitted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology, and debris flow hazard initiation maps, as input to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of the debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, for the validation and transferability of the parameters and results to other study areas, more testing is needed.
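To illustrate the kind of multiple-flow-direction spreading used for the runout assessment, the sketch below redistributes an initiation cell's susceptibility to its downhill neighbours in proportion to slope on a toy digital elevation model. The tiny grid, the simple slope weighting, and the single propagation step are assumptions for illustration and do not reproduce the Flow-R algorithms.

```python
# Sketch: one multiple-flow-direction spreading step on a toy DEM (illustrative only).
import numpy as np

dem = np.array([[10.0, 9.5, 9.0],
                [ 9.0, 8.0, 7.5],
                [ 8.5, 7.0, 6.0]])
susceptibility = np.zeros_like(dem)
susceptibility[0, 0] = 1.0          # assumed initiation cell

cell_size = 10.0                    # 10 m resolution, as in the study
out = np.zeros_like(dem)
rows, cols = dem.shape
for r in range(rows):
    for c in range(cols):
        if susceptibility[r, c] == 0:
            continue
        # slopes towards the 8 neighbours, keeping only downhill directions
        weights = {}
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0 or not (0 <= r + dr < rows and 0 <= c + dc < cols):
                    continue
                dist = cell_size * (2 ** 0.5 if dr and dc else 1)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > 0:
                    weights[(r + dr, c + dc)] = slope
        total = sum(weights.values())
        for (nr, nc), s in weights.items():
            out[nr, nc] += susceptibility[r, c] * s / total

print(np.round(out, 2))
```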
Abstract:
Some years ago, a parish in Geneva decided to reduce heating costs by insulating its church to make it more energy efficient. Three years after the last renovations, it was observed that the internal surfaces of the naves had already become dusty, whereas this customarily takes 10-12 years. Dust was even deposited on various surfaces during religious services. Our investigation showed that nearly all the dust found inside the church may in fact be soot from incense and candle combustion. Incense appears to be a significant source of polycyclic aromatic hydrocarbons. With a mechanical ventilation system and petrol lamps resembling candles, the problem can be resolved.
Abstract:
Summary: With regard to exercise metabolism, lactate was long considered a dead-end waste product responsible for muscle fatigue and a limiting factor for motor performance. However, a large body of evidence clearly indicates that, rather than being a dead-end waste product, lactate is an energy-efficient metabolite able to link the glycolytic pathway with aerobic metabolism, and that it has endocrine-like actions. Lactate metabolism is also known to be quickly upregulated by regular endurance training and is thought to be related to exercise performance. However, to what extent its modulation can increase exercise performance in already endurance-trained subjects is unknown. The general hypothesis of this work was therefore that increasing either the lactate metabolic clearance rate or lactate availability could, in turn, increase endurance performance. The first study (Study 1) aimed at increasing the lactate clearance rate by means of assumed interaction effects of endurance training and hypoxia on lactate metabolism and endurance performance. Although this study did not demonstrate any interaction of training and hypoxia on either lactate metabolism or endurance performance, a significant deleterious effect of endurance training in hypoxia was shown on glucose homeostasis. The methods used to determine lactate kinetics during exercise exhibited some limitations, and the second study (Study 2) addressed some of the issues raised. The third study (Study 3) investigated the metabolic and performance effects of increasing plasma lactate production and availability during prolonged exercise in the fed state. A nutritional intervention was used for this purpose: part of the glucose feedings ingested during the control condition was substituted by fructose. The results of this study showed a significant increase in lactate turnover rate, quantified the metabolic fate of fructose, and demonstrated a significant decrease in lipid oxidation and glycogen breakdown. In contrast, endurance performance appeared to be unmodified by this dietary intervention, which is at odds with recent reports. Altogether, the results of this thesis suggest that in endurance athletes the relationship between endurance performance and lactate turnover rate remains unclear. Nonetheless, the results of the present work raise questions and open perspectives on the rationale of using hypoxia as a therapeutic aid for the treatment of insulin resistance. Moreover, the results of the second study open perspectives on the role of lactate as an intermediate metabolite and its modulatory effects on substrate metabolism during exercise. Additionally, it is suggested that the simple nutritional intervention used in the third study can be of interest in the investigation of the aforementioned roles of lactate.
Résumé: When lactate is mentioned in relation to exercise, it is often considered a metabolic waste product responsible for metabolic acidosis and muscle fatigue, or a limiting factor for performance. Yet the literature clearly shows that lactate is instead a metabolite used efficiently by many tissues through oxidative pathways, and can thus be considered a link between glycolytic and oxidative metabolism; it is also credited with endocrine properties. Endurance training is known to rapidly increase lactate metabolism, and endurance performance is thought to be related to it. However, the relationship between lactate turnover rate and endurance performance is unclear, as is the way in which modulating its metabolism may influence performance. The aim of this thesis was therefore to investigate how, and to what extent, increasing lactate metabolism, through increased clearance and turnover, could in turn improve the endurance performance of trained subjects. The objective of the first study was to increase lactate clearance through training under hypoxic conditions in endurance cyclists. Based on the existing scientific literature, we hypothesized that endurance training and hypoxia would exert a synergistic effect on lactate metabolism and on performance, which would make it possible to show relationships between performance and lactate metabolism. The results of this study showed no synergistic effect on either performance or lactate metabolism; however, a deleterious effect on glucose metabolism was demonstrated. Some limitations of the method used to measure lactate metabolism were raised and partially resolved in the second study of this work, whose aim was to assess the sensitivity of the pharmacodynamic model used to compute lactate turnover. The third study investigated the effect of an increased blood lactate concentration on substrate metabolism and on performance, through a nutritional intervention substituting part of the glucose ingested during exercise with fructose. The results show that the dynamic components of lactate metabolism are significantly increased in the presence of fructose, and that fat oxidation and glycogen breakdown are significantly decreased; however, no effect on performance was demonstrated. The results of these studies show that the relationship between lactate metabolism and performance remains unclear. The deleterious findings of the first study suggest avenues for further work, given that hypoxic training is considered a therapeutic tool in the treatment of pathologies linked to insulin resistance. Furthermore, the results of the third study open perspectives on the role of lactate as a metabolic intermediate during exercise and on its direct effects on metabolism. They also suggest that the simple nutritional manipulation used here is a promising tool for studying the roles and metabolic effects that lactate may have during exercise.
Abstract:
Biofuels are considered a promising substitute for fossil fuels in view of a possible reduction of greenhouse gas emissions. However, limiting the assessment of their impacts to the potential benefits for reducing climate change is shortsighted. Global sustainability assessments are necessary to determine the sustainability of supply chains. We propose a new global, criteria-based framework enabling a comprehensive international comparison of bioethanol supply chains. The interest of this framework is that the selection of the sustainability indicators is qualified against three criteria: relevance, reliability, and adaptability to the local context. Sustainability has been addressed along environmental, social, and economic dimensions. This new framework has been applied to a specific question: from a Swiss perspective, is bioethanol produced locally in Switzerland more sustainable than bioethanol imported from Brazil? Thanks to this framework, which integrates the local context in its indicator definitions, Brazilian production of bioethanol is shown to be energy efficient and economically interesting for Brazil. From a strictly economic point of view, bioethanol production within Switzerland for Swiss consumption is not justified, and it is questionable from an environmental point of view. The social dimension is delicate to assess due to the lack of reliable data and is strongly linked to the agricultural policy in both countries. There is a need to establish minimum sustainability criteria for imported bioethanol to avoid unwanted negative or leakage effects.
Abstract:
We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
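As a minimal illustration of the graph-theoretical formulation, the sketch below builds a small directed graph of hypothetical dispersal routes with made-up costs and extracts a minimum spanning arborescence with networkx (Edmonds' algorithm); the node names and edge weights are purely illustrative.

```python
# Sketch: dispersal-route reconstruction as a minimum cost arborescence (toy graph).
import networkx as nx

G = nx.DiGraph()
# directed edges "possible dispersal from A to B" with an assumed cost
# (e.g. a geographic or genetic distance)
G.add_weighted_edges_from([
    ("source", "A", 1.0), ("source", "B", 4.0),
    ("A", "B", 1.5), ("A", "C", 3.0),
    ("B", "C", 1.0), ("C", "D", 2.0), ("B", "D", 5.0),
])

# Minimum spanning arborescence: the cheapest set of routes reaching every
# site exactly once from a single origin.
arb = nx.minimum_spanning_arborescence(G)
print(sorted(arb.edges(data="weight")))
```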
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies, and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, standard explicit steepest gradient descent techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
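To give a flavour of the accelerated convex optimization the abstract refers to, the sketch below applies a FISTA-type scheme (the O(1/n²) rate class) to the dual of a 1-D TV denoising problem on a synthetic signal. It only illustrates the algorithmic family; it is not the authors' fetal reconstruction method, and the signal, regularization weight, and iteration count are assumptions.

```python
# Sketch: accelerated (FISTA-type) solver for 1-D TV denoising via its dual problem.
import numpy as np

def tv_denoise_1d(f, lam, n_iter=200):
    """min_x 0.5*||x - f||^2 + lam*TV(x), solved by FISTA on the box-constrained dual."""
    n = f.size
    D = np.diff                    # forward differences, R^n -> R^(n-1)
    def Dt(p):                     # adjoint of forward differences
        out = np.zeros(n)
        out[:-1] -= p
        out[1:] += p
        return out
    p = np.zeros(n - 1)
    q = p.copy()
    t = 1.0
    L = 4.0                        # Lipschitz constant of the dual gradient
    for _ in range(n_iter):
        grad = D(Dt(q) - f)
        p_new = np.clip(q - grad / L, -lam, lam)    # projection onto the box |p| <= lam
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        q = p_new + (t - 1) / t_new * (p_new - p)   # Nesterov momentum step
        p, t = p_new, t_new
    return f - Dt(p)               # primal solution recovered from the dual variable

rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0, 0.3], 50)
noisy = signal + 0.1 * rng.standard_normal(signal.size)
print("mean abs error:", np.round(np.abs(tv_denoise_1d(noisy, lam=0.5) - signal).mean(), 3))
```

The momentum step is what distinguishes the O(1/n²) accelerated scheme from plain projected gradient descent, which only achieves O(1/n).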
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is here learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns. This is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs-137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
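A simple way to picture the multi-scale idea is a kernel that sums a short-range and a long-range RBF component. The sketch below plugs such a two-scale kernel into a standard SVR on synthetic spatial data; the kernel widths and the fixed mixing weight are assumptions for illustration, whereas the paper learns the optimum mixture from the data.

```python
# Sketch: SVR with an assumed two-scale (short-range + long-range) RBF kernel.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def two_scale_kernel(X, Y, gamma_short=10.0, gamma_long=0.1, mix=0.5):
    # mixture of a short-scale and a large-scale RBF Gram matrix
    return mix * rbf_kernel(X, Y, gamma=gamma_short) + (1 - mix) * rbf_kernel(X, Y, gamma=gamma_long)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 2))             # synthetic 2-D "spatial" coordinates
y = np.sin(X[:, 0]) + 0.2 * np.sin(8 * X[:, 1])   # long-range trend + short-range anomaly
model = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.01).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```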
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
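As an illustration of one posterior-probability heuristic from this family, the sketch below runs breaking-ties (margin) sampling with a generic classifier on synthetic data: at each round, the pool samples whose top two class probabilities are closest are queried for labels. The classifier, dataset, and batch size are stand-ins, not the remote sensing setups tested in the paper.

```python
# Sketch: breaking-ties / margin-based active learning loop on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
labeled = list(range(30))                      # small initial training set
pool = [i for i in range(len(y)) if i not in labeled]

for iteration in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]           # small margin = most uncertain sample
    query = [pool[i] for i in np.argsort(margin)[:10]]   # "ask the user" for 10 labels
    labeled += query
    pool = [i for i in pool if i not in query]
    print(f"round {iteration}: {len(labeled)} labels, pool accuracy "
          f"{clf.score(X[pool], y[pool]):.3f}")
```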
Abstract:
Astrocytes are now considered key players in brain information processing because of their newly discovered roles in synapse formation and plasticity, energy metabolism, and blood flow regulation. However, our understanding of astrocyte function is still fragmented compared with that of other brain cell types. A better appreciation of the biology of astrocytes requires the development of tools to generate animal models in which astrocyte-specific proteins and pathways can be manipulated. In addition, it is becoming increasingly evident that astrocytes are also important players in many neurological disorders. Targeted modulation of protein expression in astrocytes would be critical for the development of new therapeutic strategies. Gene transfer is valuable for targeting a subpopulation of cells and exploring their function in experimental models. In particular, viral-mediated gene transfer provides a rapid, highly flexible, and cost-effective in vivo paradigm to study the impact of genes of interest during central nervous system development or in adult animals. We will review the different strategies that led to the recent development of efficient viral vectors that can be successfully used to selectively transduce astrocytes in the mammalian brain.
Abstract:
The UHPLC strategy, which combines sub-2 µm porous particles and ultra-high pressure (>1000 bar), was investigated with respect to very high resolution criteria in both isocratic and gradient modes, with mobile phase temperatures between 30 and 90 °C. In isocratic mode, the experimental conditions needed to reach the maximal efficiency were determined using the kinetic plot representation for ΔPmax = 1000 bar. It was first confirmed that the molecular weight (MW) of the compounds is a critical parameter which should be considered in the construction of such curves. With a MW around 1000 g mol⁻¹, efficiencies as high as 300,000 plates could theoretically be attained using UHPLC at 30 °C. By limiting the column length to 450 mm, the maximal plate count was around 100,000. In gradient mode, the longest column does not provide the maximal peak capacity for a given analysis time in UHPLC. This was attributed to the fact that peak capacity is related not only to the plate number but also to the column dead time. Therefore, a compromise should be found, and a 150 mm column should preferentially be selected for gradient lengths up to 60 min at 30 °C, while columns coupled in series (3 × 150 mm) were attractive only for tgrad > 250 min. Compared with 30 °C, peak capacities were increased by about 20-30% for a constant gradient length at 90 °C, and the gradient time was decreased 2-fold for an identical peak capacity.
Abstract:
The aim of the present study was to compare, under the same nursing conditions, the energy-nitrogen balance and the protein turnover in small for gestational age (SGA) and appropriate for gestational age (AGA) low birthweight infants. We compared 8 SGA infants (mean ± s.d.: gestational age 35 ± 2 weeks, birthweight 1520 ± 330 g) to 11 AGA premature infants (32 ± 2 weeks, birthweight 1560 ± 240 g). When their rate of weight gain was above 15 g/kg/d (17.6 ± 3.0 and 18.2 ± 2.6 g/kg/d, mean postnatal age 18 ± 10 and 20 ± 9 d, respectively), they were studied with respect to their metabolizable energy intake, their energy expenditure, their energy and protein gain, and their protein turnover. Energy balance was assessed by the difference between metabolizable energy and energy expenditure as measured by indirect calorimetry. Protein gain was calculated from the amount of retained nitrogen. Protein turnover was estimated by a stable isotope enrichment technique using repeated nasogastric administration of 15N-glycine for 72 h. Although there was no difference in their metabolizable energy intakes (110 ± 12 versus 108 ± 11 kcal/kg/d), SGA infants had a higher rate of resting energy expenditure (64 ± 8 versus 57 ± 8 kcal/kg/d, P < 0.05). Protein gain and the composition of weight gain were very similar in both groups (2.0 ± 0.4 versus 2.1 ± 0.4 g protein/kg/d; 3.5 ± 1.1 versus 3.3 ± 1.4 g fat/kg/d in SGA and AGA infants, respectively). However, the rate of protein synthesis was significantly lower in SGA infants (7.7 ± 1.6 g/kg/d) compared with AGA infants (9.7 ± 2.8 g/kg/d; P < 0.05). It is concluded that SGA infants have a more efficient protein gain/protein synthesis ratio since, for the same weight and protein gains, SGA infants show a 20 per cent slower protein turnover. They might therefore tolerate slightly higher protein intakes. Postconceptional age seems to be an important factor in the regulation of protein turnover.
Abstract:
The recommended dietary allowances of many expert committees (UK DHSS 1979, FAO/WHO/UNU 1985, USA NRC 1989) have set out the extra energy requirements necessary to support lactation on the basis of an efficiency of 80 per cent for human milk production. The metabolic efficiency of milk synthesis can be derived from the measurements of resting energy expenditure in lactating women and in a matched control group of non-pregnant non-lactating women. The results of the present study in Gambian women, as well as a review of human studies on energy expenditure during lactation performed in different countries, suggest an efficiency of human milk synthesis greater than the value currently used by expert committees. We propose that an average figure of 95 per cent would be more appropriate to calculate the energy cost of human lactation.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel drug-like test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundation for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
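The statistical layer of such an approach amounts to a two-descriptor multiple linear regression validated by five-fold cross-validation. The sketch below reproduces that workflow on randomly generated placeholder descriptors (not actual GB/SA solvation terms), purely to illustrate the model form and the validation scheme.

```python
# Sketch: two-descriptor linear model with five-fold cross-validation (placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
descriptors = rng.normal(size=(n, 2))     # stand-ins for the two GB/SA parameters
logp = 1.3 * descriptors[:, 0] - 0.7 * descriptors[:, 1] + 0.4 * rng.normal(size=n)

model = LinearRegression()
r2_cv = cross_val_score(model, descriptors, logp, cv=5, scoring="r2")
print("5-fold cross-validated R^2:", np.round(r2_cv.mean(), 3))
```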