974 results for "Simulated annealing algorithm"


Relevance: 100.00%

Abstract:

Nowadays the main honey-producing countries require accurate labeling of honey before commercialization, including floral classification. Traditionally, this classification is made by melissopalynological analysis, an accurate but time-consuming task requiring laborious sample pre-treatment and highly skilled technicians. In this work the potential use of a potentiometric electronic tongue for pollinic assessment is evaluated, using monofloral and polyfloral honeys. The results showed that, after splitting honeys according to color (white, amber and dark), the novel methodology enabled quantifying the relative percentage of the main pollens (Castanea sp., Echium sp., Erica sp., Eucalyptus sp., Lavandula sp., Prunus sp., Rubus sp. and Trifolium sp.). Multiple linear regression models were established for each type of pollen, based on the best sensor sub-sets selected using the simulated annealing algorithm. To minimize the overfitting risk, a repeated K-fold cross-validation procedure was implemented, ensuring that at least 10-20% of the honeys were used for internal validation. With this approach, a minimum average determination coefficient of 0.91 ± 0.15 was obtained. Also, the proposed technique enabled the correct classification of 92% and 100% of monofloral and polyfloral honeys, respectively. The quite satisfactory performance of the novel procedure for quantifying the relative pollen frequency points to its applicability for honey labeling and geographical origin identification. Nevertheless, this approach is not a full alternative to traditional melissopalynological analysis; it may be seen as a practical complementary tool for preliminary honey floral classification, leaving only problematic cases for pollinic evaluation.
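The sensor sub-set search described above can be sketched as a generic simulated annealing loop over subsets. The sketch below is illustrative only: `score` is a pluggable callback (in the study it would be a cross-validated fit of the multiple linear regression model), and the toy score used in the demo is a stand-in, not the paper's objective.

```python
import math
import random

def anneal_subset(n_sensors, score, k=5, steps=3000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over k-sensor subsets, maximizing score(subset)."""
    rng = random.Random(seed)
    cur = tuple(sorted(rng.sample(range(n_sensors), k)))
    cur_score = score(cur)
    best, best_score = cur, cur_score
    t = t0
    for _ in range(steps):
        # Neighbour: swap one selected sensor for an unselected one
        add = rng.choice([i for i in range(n_sensors) if i not in cur])
        drop = rng.choice(cur)
        cand = tuple(sorted(set(cur) - {drop} | {add}))
        cand_score = score(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if cand_score >= cur_score or rng.random() < math.exp((cand_score - cur_score) / t):
            cur, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = cur, cur_score
        t *= cooling  # geometric cooling schedule
    return best, best_score

# Toy stand-in score: prefer subsets whose index sum is close to a target.
score = lambda s: -abs(sum(s) - 30)
subset, fitness = anneal_subset(n_sensors=20, score=score, k=5)
```

In practice the `score` callback would refit the regression on each candidate subset, which is exactly why a fast heuristic search is preferred over exhaustive enumeration of sensor combinations.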

Relevance: 100.00%

Abstract:

Olive oils may be commercialized as intense, medium or light, according to the intensity perception of fruitiness, bitterness and pungency attributes, assessed by a sensory panel. In this work, the capability of an electronic tongue to correctly classify olive oils according to the sensory intensity perception levels was evaluated. Cross-sensitivity and non-specific lipid polymeric membranes were used as sensors. The sensor device was first tested using quinine monohydrochloride standard solutions. Mean sensitivities of 14±2 to 25±6 mV/decade, depending on the type of plasticizer used in the lipid membranes, were obtained, showing the device's capability for evaluating bitterness. Then, linear discriminant models based on sub-sets of sensors, selected by a meta-heuristic simulated annealing algorithm, were established, enabling the correct classification of 91% of olive oils according to their intensity sensory grade (leave-one-out cross-validation procedure). This capability was further evaluated using a repeated K-fold cross-validation procedure, showing that the electronic tongue allowed an average correct classification of 80% of the olive oils used for internal validation. So, the electronic tongue can be seen as a taste sensor, capable of differentiating olive oils with different sensory intensities, and could be used as a preliminary, complementary and practical tool for panelists during olive oil sensory analysis.
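The repeated K-fold validation scheme used above can be sketched as a minimal index generator; this is a generic sketch, not the authors' exact protocol, and the sample count is hypothetical.

```python
import random

def repeated_kfold_indices(n, k=4, repeats=10, seed=0):
    """Yield (train, test) index lists for repeated K-fold cross-validation:
    each repeat re-shuffles the samples and holds out each fold once."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)                       # fresh random partition per repeat
        folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
        for j in range(k):
            test = folds[j]
            train = [i for f in folds if f is not folds[j] for i in f]
            yield train, test

# With k=4, 25% of the samples are held out for internal validation per fold.
splits = list(repeated_kfold_indices(n=40))
counts = [0] * 40
for _, test in splits:
    for i in test:
        counts[i] += 1
```

Averaging the classification rate over all 40 splits gives a less optimistic estimate than a single leave-one-out pass, which is why both figures are reported in the abstract.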

Relevance: 100.00%

Abstract:

Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied, aiming to discriminate eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with the simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% leave-one-out cross-validation correct classification, improving on the discrimination capability of the individual use of sensor profiles or sensory attributes (70% and 57% leave-one-out correct classifications, respectively). So, human sensory evaluation and electronic tongue analysis may be used as complementary tools, allowing successful monovarietal olive oil discrimination.
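Low-level data fusion, as used above, simply standardizes each measurement block and concatenates the feature vectors sample-by-sample before classification. The sketch below uses hypothetical toy numbers; the z-scoring step is one common choice to keep one block's scale from dominating the fused vector.

```python
import statistics

def fuse(blocks):
    """Low-level data fusion: z-score every feature within its own block,
    then concatenate the blocks sample-by-sample into one feature matrix."""
    fused = [[] for _ in blocks[0]]  # one (initially empty) row per sample
    for block in blocks:
        for j in range(len(block[0])):
            col = [row[j] for row in block]
            mu = statistics.fmean(col)
            sd = statistics.pstdev(col) or 1.0  # guard against constant features
            for i, row in enumerate(block):
                fused[i].append((row[j] - mu) / sd)
    return fused

# Hypothetical toy data: 3 samples, 2 e-tongue signals + 2 sensory attributes
etongue = [[210.0, -35.0], [190.0, -30.0], [205.0, -40.0]]
sensory = [[6.5, 2.0], [4.0, 3.5], [5.5, 1.0]]
fused = fuse([etongue, sensory])
```

The fused matrix (here 3 samples x 4 features) is then fed to the discriminant model exactly as a single-source feature matrix would be.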

Relevance: 100.00%

Abstract:

Natural mineral waters (still), effervescent natural mineral waters (sparkling) and aromatized waters with fruit flavors (still or sparkling) are an emerging market. In this work, the capability of a potentiometric electronic tongue, comprising lipid polymeric membranes, to quantitatively estimate routine physicochemical quality parameters (pH and conductivity), as well as to qualitatively classify water samples according to the type of water, was evaluated. The study showed that a linear discriminant model, based on 21 sensors selected by the simulated annealing algorithm, could correctly classify 100% of the water samples (leave-one-out cross-validation). This potential was further demonstrated by applying a repeated K-fold cross-validation (guaranteeing that at least 15% of independent samples were used only for internal validation), for which 96% correct classifications were attained. The satisfactory recognition performance of the E-tongue could be attributed to the pH, conductivity, sugar and organic acid contents of the studied waters, which resulted in significant differences in sweetness perception indexes and total acid flavor. Moreover, the E-tongue combined with multivariate linear regression models, based on sub-sets of sensors selected by the simulated annealing algorithm, could accurately estimate the waters' pH (25 sensors: R² equal to 0.99 and 0.97 for leave-one-out and repeated K-fold cross-validation, respectively) and conductivity (23 sensors: R² equal to 0.997 and 0.99 for leave-one-out and repeated K-fold cross-validation, respectively). So, the overall satisfactory results achieved point to a potential future application of electronic tongue devices for bottled water analysis and classification.

Relevance: 100.00%

Abstract:

The system described herein represents the first example of a recommender system in digital ecosystems where agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but on a wider service-by-service composition by subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services like open-id, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and enables the development of new agent technologies. The methods and tools of cyber engineering are necessary to build Open Negotiation Environments that are stable, a basic condition for predictable business and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation style recommender. The ecosystem monitor provides hints to the negotiation style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. The greater stability provides the small companies with higher predictability, and therefore better business results. The negotiation style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
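Simulated annealing at a constant temperature, as used by the recommender above, reduces to repeated Metropolis steps at a fixed T: the chain keeps exploring instead of freezing. The sketch below uses a toy integer energy landscape, not the ONE recommender's actual objective.

```python
import math
import random

def metropolis_step(state, energy, propose, t, rng):
    """One Metropolis move at fixed temperature t: accept the proposed
    neighbour with probability min(1, exp(-(E_new - E_old) / t))."""
    cand = propose(state, rng)
    delta = energy(cand) - energy(state)
    if delta <= 0 or rng.random() < math.exp(-delta / t):
        return cand
    return state

# Toy landscape: integers with energy |s|; the chain drifts toward 0
# but never stops exploring, since the temperature is held constant.
rng = random.Random(1)
state = 25
for _ in range(2000):
    state = metropolis_step(state, abs, lambda s, r: s + r.choice((-1, 1)), 0.5, rng)
```

Holding T constant is a deliberate design choice in a recommender setting: the environment keeps changing, so the search should stay responsive rather than converge to a frozen configuration.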

Relevance: 100.00%

Abstract:

The results presented in this thesis clarify certain aspects of the function of the Na+/glucose cotransporter (SGLT1), a transmembrane protein that uses the favorable electrochemical gradient of Na+ ions to accumulate glucose inside the epithelial cells of the small intestine and kidney. We first used two-microelectrode electrophysiology on Xenopus oocytes to identify the ions carrying the leak current of SGLT1, a current measured in the absence of glucose that is uncoupled from the strict 2 Na+/1 glucose stoichiometry characterizing cotransport. Our results showed that cations such as Li+, K+ and Cs+, which interact only weakly with the binding sites of SGLT1 and do not permit the conformations induced by Na+ binding, could nevertheless generate a leak current of amplitude comparable to that measured in the presence of Na+. This suggests that the leak current crosses SGLT1 through a permeation pathway distinct from the one defined by the conformational changes underlying Na+/glucose cotransport, possibly similar to the pathway used by passive water permeability. We then sought to estimate the turnover rate of SGLT1 cotransport cycles using the ion-trap technique, in which the wide tip of an ion-selective electrode (~100 μm) is pressed against the plasma membrane of an oocyte, enclosing a small volume of extracellular solution called the trap. The changes in ionic concentration occurring in the trap as a consequence of SGLT1 activity allowed us to deduce that Na+/glucose cotransport proceeds at a rate of about 13 s-1 when the membrane potential is held at -155 mV. We then turned to the development of a kinetic model of SGLT1. Using the simulated annealing algorithm, we built a 7-state kinetic scheme that accurately reproduces the cotransporter currents as a function of extracellular Na+ and glucose. Our model predicts that, in the presence of a saturating glucose concentration, the reorientation of SGLT1 in the membrane following the intracellular release of its substrates is the rate-limiting step of cotransport.
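Fitting kinetic parameters by simulated annealing follows the same recipe at any model size; the one-parameter exponential decay below is a minimal stand-in for a multi-state kinetic scheme, with all numeric values illustrative rather than taken from the thesis.

```python
import math
import random

def anneal_fit(times, observed, model, lo, hi, steps=4000, t0=0.2, seed=0):
    """Fit one model parameter by simulated annealing, minimizing the sum
    of squared residuals between model(t, k) and the observed values."""
    rng = random.Random(seed)
    sse = lambda k: sum((model(t, k) - y) ** 2 for t, y in zip(times, observed))
    k = best_k = (lo + hi) / 2
    e = best_e = sse(k)
    temp = t0
    for step in range(steps):
        # Gaussian move, clipped to the allowed parameter range
        cand = min(hi, max(lo, k + rng.gauss(0, 0.05 * (hi - lo))))
        ec = sse(cand)
        if ec <= e or rng.random() < math.exp(-(ec - e) / temp):
            k, e = cand, ec
            if e < best_e:
                best_k, best_e = k, e
        temp = t0 * (1.0 - (step + 1) / (steps + 1))  # linear cooling, stays > 0
    return best_k, best_e

# Synthetic noiseless "data": exponential decay with true rate 1.3
times = [0.5 * i for i in range(10)]
data = [math.exp(-1.3 * t) for t in times]
k_fit, residual = anneal_fit(times, data, lambda t, k: math.exp(-k * t), 0.1, 5.0)
```

The appeal of annealing here is that the same loop works when the objective is a full kinetic simulation with many coupled rate constants, where gradient-based fitting is awkward.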

Relevance: 100.00%

Abstract:

This thesis examines the structure-function relationship in Na+/glucose cotransporters (SGLTs). SGLTs are membrane proteins that use the transmembrane electrochemical gradient of Na+ to accumulate their substrates in the cell. An introduction first presents a brief summary of current knowledge in the field, followed by an overview of the experimental techniques used in this work. The work can be divided into three projects. The first project addressed the structural basis of water permeation through SGLTs. Using molecular modeling techniques together with voltage-clamp volumetry, we identified the structural basis of this permeation. We thus identified in silico a passive water permeation pathway crossing the cotransporter, and then corroborated these results with measurements on the human Na/glucose cotransporter (hSGLT1) expressed in oocytes. A second project elucidated structural features of hSGLT1 through the use of dipicrylamine (DPA), a fluorescence acceptor whose distribution in the lipid membrane depends on the membrane potential. The use of DPA, combined with voltage-clamp fluorometry and FRET (fluorescence resonance energy transfer), demonstrated the extracellular position of part of loop 12-13 and showed that hSGLT1 forms dimers whose subunits are linked by a disulfide bridge. A final project characterized the steady-state and pre-steady-state currents of a member of the SGLT family, the human Na+/myo-inositol cotransporter hSMIT2, in order to propose a kinetic model describing its operation. We showed that phlorizin poorly inhibits the pre-steady-state currents following a depolarization, and that there are leak currents that vary with time, membrane potential and substrates. A simulated annealing algorithm was developed to allow objective determination of the connectivity and of the various parameters associated with the kinetic model.

Relevance: 100.00%

Abstract:

The results of applying a fragment-based protein tertiary structure prediction method to the prediction of 14 CASP5 target domains are described. The method is based on the assembly of supersecondary structural fragments taken from highly resolved protein structures using a simulated annealing algorithm. A number of good predictions for proteins with novel folds were produced, although not always as the first model. For two fold recognition targets, FRAGFOLD produced the most accurate model in both cases, despite the fact that the predictions were not based on a template structure. Although clear progress has been made in improving FRAGFOLD since CASP4, the ranking of final models still seems to be the main problem that needs to be addressed before the next CASP experiment.

Relevance: 100.00%

Abstract:

The p-median problem is often used to locate p service centers by minimizing their distances to a geographically distributed demand (n). The optimal locations are sensitive to the geographical context, such as the road network and the demand points, especially when these are asymmetrically distributed in the plane. Most studies focus on evaluating the performance of the p-median model when p and n vary. To our knowledge, how the solutions behave when the road network is altered is not a well-studied problem, especially in a real-world context. The aim of this study is to analyze how the optimal location solutions vary, using the p-median model, when the density of the road network is altered. The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 service centers we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal location we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000. To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is a limited improvement in the optimal solutions when the number of nodes in the road network increases and p is low. When p is high, the improvements are larger. The results also show that the choice of the best network depends on p: the larger p is, the denser the network needs to be.
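A swap-based simulated annealing for the p-median problem can be sketched as follows. This is a minimal illustration, not the study's implementation: distances are 1-D stand-ins for network distances, the instance is a toy, and the acceptance-rate-driven temperature rule is one simple form of adaptive tuning.

```python
import math
import random

def pmedian_sa(demand, candidates, p, steps=3000, seed=0):
    """Swap-based simulated annealing for the p-median problem, with a
    crude adaptive temperature driven by the recent acceptance rate."""
    rng = random.Random(seed)
    dist = lambda a, b: abs(a - b)  # 1-D stand-in for a road-network distance
    cost = lambda med: sum(min(dist(d, m) for m in med) for d in demand)
    cur = rng.sample(candidates, p)
    cur_c = cost(cur)
    best, best_c = list(cur), cur_c
    temp = max(1.0, cur_c / p)
    accepted, window = 0, 100
    for step in range(1, steps + 1):
        # Neighbour: swap one open facility for a closed candidate
        cand = list(cur)
        cand[rng.randrange(p)] = rng.choice([c for c in candidates if c not in cur])
        cand_c = cost(cand)
        if cand_c <= cur_c or rng.random() < math.exp((cur_c - cand_c) / temp):
            cur, cur_c = cand, cand_c
            accepted += 1
            if cur_c < best_c:
                best, best_c = list(cur), cur_c
        # Adaptive tuning: cool quickly while most moves are still accepted
        if step % window == 0:
            temp *= 0.7 if accepted / window > 0.3 else 0.95
            accepted = 0
    return sorted(best), best_c

# Toy instance: two demand clusters; the optimum opens one center in each
demand = [8, 9, 10, 11, 12, 48, 49, 50, 51, 52]
candidates = list(range(0, 61, 2))
centers, total = pmedian_sa(demand, candidates, p=2)
```

In the real study the candidate set is a sample of road-network nodes and distances come from the network, but the swap neighbourhood and the temperature control carry over unchanged.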

Relevance: 100.00%

Abstract:

Good data quality with high complexity is often seen as important. Intuition says that the higher the accuracy and complexity of the data, the better the analytic solutions become, provided the increasing computing time can be handled. However, for most practical computational problems, high-complexity data means that computation times become too long, or that the heuristics used to solve the problem have difficulty reaching good solutions. This is stressed even further when the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of the heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which finds the optimal locations in a network of p supply points that serve n demand points. To evaluate this, we vary both the accuracy (the number of nodes) of the network and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal location we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is a limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvements from increasing the accuracy of the road network are much larger. The results also show that the choice of the best network accuracy depends on the complexity of the combinatorial problem (varying p).

Relevance: 100.00%

Abstract:

Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that such problems cannot be solved in polynomial time. Initially, these solutions were based on heuristics. Currently, metaheuristics are used more for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of what is called an 'Operon' heuristic, for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an 'intelligent' search in the space of solutions. The Traveling Salesman Problem (TSP) is the intended application for a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, calculated over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a simulated annealing algorithm. Three performance analyses of these algorithms are proposed. The first is accomplished through logistic regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is accomplished through survival analysis, based on the probability distribution of the time observed for execution until an optimal solution is achieved. The third is accomplished by means of a non-parametric analysis of variance, considering the Percent Error of the Solution (PES), i.e. the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison to the three algorithms adopted. For these sixty-one instances, it was concluded on the grounds of statistical tests that there is evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, with the largest average, 3.52%, reached for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since average PES values greater than 10% are not rarely reported in the literature for instances of this size.
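The PES metric above is simply the relative excess of the solution found over the best-known solution, expressed as a percentage; the tour lengths in the example are hypothetical.

```python
def pes(found, best_known):
    """Percent Error of the Solution: percentage by which the solution
    found exceeds the best solution available in the literature."""
    return 100.0 * (found - best_known) / best_known

# Hypothetical tour lengths: a tour of 4310 against a best-known 4200
error = pes(4310, 4200)
```

A PES of 0% means the best-known solution was matched, which is why average PES values below 1% indicate near-optimal performance.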

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)