917 results for Hyper-heuristics


Relevance:

10.00%

Publisher:

Abstract:

We annually monitored the abundance and size structure of herbivorous sea urchin populations (Paracentrotus lividus and Arbacia lixula) inside and outside a marine reserve in the Northwestern Mediterranean on two distinct habitats (boulders and vertical walls) over a period of 20 years, with the aim of analyzing changes at different temporal scales in relation to biotic and abiotic drivers. P. lividus exhibited significant variability in density over time on boulder bottoms but not on vertical walls, and temporal trends were not significantly different between the protection levels. Differences in densities were caused primarily by variance in recruitment, which was less pronounced inside the MPA and was correlated with adult density, indicating density-dependent recruitment under high predation pressure, as well as some positive feedback mechanisms that may facilitate higher urchin abundances despite higher predator abundance. Populations within the reserve were less variable in abundance and did not exhibit the hyper-abundances observed outside the reserve, suggesting that predation effects may be more subtle than simply lowering the numbers of urchins in reserves. A. lixula densities were an order of magnitude lower than P. lividus densities and varied within sites and over time on boulder bottoms but did not differ between protection levels. In December 2008, an exceptionally violent storm reduced sea urchin densities drastically (by 50% to 80%) on boulder substrates, resulting in the lowest values observed over the entire study period, which remained at that level for at least two years (up to the present). Our results also showed great variability in the biological and physical processes acting at different temporal scales. This study highlights the need for appropriate temporal scales for studies to fully understand ecosystem functioning, the concepts of which are fundamental to successful conservation and management.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Patients suffering from cutaneous leishmaniasis (CL) caused by New World Leishmania (Viannia) species are at high risk of developing mucosal (ML) or disseminated cutaneous leishmaniasis (DCL). After the formation of a primary skin lesion at the site of the bite by a Leishmania-infected sand fly, the infection can disseminate to form secondary lesions. This metastatic phenotype causes significant morbidity and is often associated with a hyper-inflammatory immune response leading to the destruction of nasopharyngeal tissues in ML, and the appearance of nodules or numerous ulcerated skin lesions in DCL. Recently, we connected this aggressive phenotype to the presence of Leishmania RNA virus (LRV) in strains of L. guyanensis, showing that LRV is responsible for elevated parasitaemia, destructive hyper-inflammation and an overall exacerbation of the disease. Further studies of this relationship and the distribution of LRVs in other Leishmania strains and species would benefit from improved methods of viral detection and quantitation, especially ones not dependent on prior knowledge of the viral sequence, as LRVs show significant evolutionary divergence. METHODOLOGY/PRINCIPAL FINDINGS: This study reports various techniques, among which the use of an anti-dsRNA monoclonal antibody (J2) stands out for its specific and quantitative recognition of dsRNA in a sequence-independent fashion. Applications of J2 include immunofluorescence, ELISA and dot blot: techniques complementing an arsenal of other detection tools, such as nucleic acid purification and quantitative real-time PCR. We evaluate each method as well as demonstrate successful LRV detection by the J2 antibody in several parasite strains, a freshly isolated patient sample and lesion biopsies of infected mice. CONCLUSIONS/SIGNIFICANCE: We propose that refinements of these methods could be transferred to the field for use as a diagnostic tool in detecting the presence of LRV, and potentially assessing the LRV-related risk of complications in cutaneous leishmaniasis.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic problem and then apply simulation to the latter. In order to achieve this goal, we rely on Monte Carlo Simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity because it is not constrained by any previous assumptions and relies on well-tested heuristics.
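
The abstract describes the approach only at a high level; as an illustrative sketch (not the authors' implementation), the snippet below shows the kind of Monte Carlo evaluation such a simulation-based method would rely on: estimating the expected makespan of a candidate job permutation when processing times are random. The lognormal noise model and all parameter values are assumptions made purely for illustration.

```python
import random

def makespan(sequence, proc_times):
    """Makespan of a permutation flow shop: completion time of the last job
    on the last machine. proc_times[job][machine] are processing times."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines  # completion time of the previous job on each machine
    for job in sequence:
        for m in range(n_machines):
            start = completion[m] if m == 0 else max(completion[m], completion[m - 1])
            completion[m] = start + proc_times[job][m]
    return completion[-1]

def expected_makespan(sequence, mean_times, n_samples=1000, cv=0.2):
    """Monte Carlo estimate of the expected makespan under random processing times.
    A lognormal distribution around the deterministic means is assumed here only
    for illustration (mu is chosen so the expected value equals the mean time)."""
    mu = -cv * cv / 2.0
    total = 0.0
    for _ in range(n_samples):
        sampled = [[t * random.lognormvariate(mu, cv) for t in job] for job in mean_times]
        total += makespan(sequence, sampled)
    return total / n_samples

# Toy instance: 3 jobs on 2 machines, deterministic mean processing times.
means = [[4.0, 3.0], [2.0, 5.0], [3.0, 2.0]]
print(expected_makespan([0, 1, 2], means))
```

A deterministic heuristic would typically propose permutations, and an estimator like this would rank them under uncertainty.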


Relevance:

10.00%

Publisher:

Abstract:

This report describes the first phase in a project to develop an electronic reference library (ERL) to help Iowa transportation officials efficiently access information in critical and heavily used documents. These documents include Standard Specifications for Bridge and Highway Construction (hereinafter called Standard Specifications), design manuals, standard drawings, the Construction Manual, and Material Instruction Memoranda (hereinafter called Material IMs). Additional items that could be included to enhance the ERL include phone books, letting dates, Internet links, computer programs distributed by the Iowa Department of Transportation (DOT), and local specifications, such as the Urban Standard Specifications of Public Improvements. All cross-references should be hyperlinked, and a search engine should be provided. Revisions noted in the General Supplemental Specifications (hereinafter called the Supplemental Specifications) should be incorporated into the text of the Standard Specifications. The Standard Specifications should refer to related sections of other documents, and there should be reciprocal hyperlinks in those other documents. These features would speed research on critical issues and save staff time. A master plan and a pilot version were both developed in this first phase of the ERL.

Relevance:

10.00%

Publisher:

Abstract:

The importance of the lateral hypothalamus in the pursuit of reward has long been recognized. However, the hypothalamic neuronal network involved in the regulation of reward still remains partially unknown. Hypocretins (aka orexins) are neuropeptides synthesized by a few thousand neurons restricted to the lateral hypothalamus and the perifornical area. Compelling evidence indicates that hypocretin neurons receive inputs from sensory and limbic systems and drive hyper-arousal, possibly through modulation of stress responses. Major advances have been made in the elucidation of the hypocretin involvement in the regulation of arousal, stress, motivation, and reward seeking, without clearly defining the role of hypocretins in addiction-related behaviors. We have recently gathered substantial evidence that points to a previously unidentified role for hypocretin-1 in driving relapse for cocaine seeking through activation of brain stress pathways. Meanwhile, several authors published concordant observations suggesting instead a direct activation of the mesolimbic dopamine system. In particular, hypocretin-1 has been shown to be critically involved in cocaine sensitization through the recruitment of NMDA receptors in the ventral tegmental area. Overall, one can conclude from recent findings that activation of hypocretin/orexin neurons plays a critical role in the development of the addiction process, either by contributing to brain sensitization (which is thought to lead to the unmanageable desire for drug intake) or by modulating the brain reward system that, in coordination with brain stress systems, leads to a vulnerable state that may facilitate relapse for drug-seeking behavior.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To investigate the differences between the Fundus Camera (Topcon TRC-50X) and the Confocal Scanning Laser Ophthalmoscope (Heidelberg retina angiogram (HRA)) in fundus autofluorescence (FAF) imaging (resolution and FAF characteristics). Methods: Eighty-nine eyes of 46 patients with various retinal diseases underwent FAF imaging with HRA (488 nm excitation / 500 nm barrier filter) before fluorescein angiography (FFA) and with the Topcon Fundus Camera (580 nm excitation / 695 nm barrier filter) before and after FFA. The quality of the FAF images was estimated, compared for resolution, and analysed for the influence of fixation stability and cataract. Hypo- and hyper-FAF behaviour was analysed for the healthy disc, the healthy fovea, and a variety of pathological features. Results: HRA images were found to be of superior quality in 18 eyes, while Topcon images were judged superior in 21 eyes. No difference was found in 50 eyes. Both poor fixation (p=0.009) and more advanced cataract (p=0.013) were found to strongly increase the likelihood of better image quality with Topcon. Images acquired by Topcon before and after FFA were identical (100%). The healthy disc was usually dark on HRA (71%) but showed mild autofluorescence on Topcon (88%). The healthy fovea showed hypo-FAF on HRA in 100% of cases, whereas on Topcon it showed iso-FAF in 52%, mild hypo-FAF in 43%, and hypo-FAF as on HRA in 5%. No difference in FAF was found for geographic atrophy, pigment changes, and drusen, although Topcon images were often more detailed. Hyper-FAF due to exudation showed better on HRA. Pigment epithelium detachment showed identical FAF behaviour at the border, but reduced FAF with Topcon in the center. Cystic edema was visible in a petaloid pattern only on HRA. Hard exudates caused hypo-FAF only on HRA and were hardly visible on Topcon. The blockage phenomenon caused by blood, however, was identical. Conclusions: The filter set of the Topcon and its single image acquisition appear to be an advantage for patients with cataract or poor fixation. Preceding FFA does not alter the Topcon FAF image. Regarding FAF behaviour, there are differences between the two systems which need to be taken into account when interpreting the images.

Relevance:

10.00%

Publisher:

Abstract:

For the development and evaluation of cardiac magnetic resonance (MR) imaging sequences and methodologies, the availability of a periodically moving phantom to model respiratory and cardiac motion would be of substantial benefit. Given the specific physical boundary conditions in an MR environment, the choice of materials and power source of such phantoms is heavily restricted. Sophisticated commercial solutions are available; however, they are often relatively costly and user-specific modifications may not easily be implemented. We therefore sought to construct a low-cost MR-compatible motion phantom that could be easily reproduced and had design flexibility. A commercially available K'NEX construction set (Hyper Space Training Tower, K'NEX Industries, Inc., Hatfield, PA) was used to construct a periodically moving phantom head. The phantom head performs a translation with a superimposed rotation, driven by a motor over a 2-m rigid rod. To synchronize the MR data acquisition with phantom motion (without introducing radiofrequency-related image artifacts), a fiberoptic control unit generates periodic trigger pulses synchronized to the phantom motion. Total material costs of the phantom are less than US$ 200, and a total of 80 man-hours were required to design and construct the original phantom. With schematics of the present solution, the phantom reproduction may be achieved in approximately 15 man-hours. The presented MR-compatible periodically moving phantom can easily be reproduced, and user-specific modifications may be implemented. Such an approach allows a detailed investigation of motion-related phenomena in MR images.

Relevance:

10.00%

Publisher:

Abstract:

Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtain an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance, therefore, depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a relationship of neighborhood defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. First, we perform an exhaustive sampling of the basins of attraction of local optima, and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontier via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex networks science. We argue that the network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performances of trajectory-based local search heuristics.
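
As a rough illustration of the third, trajectory-based construction mentioned above, the sketch below builds a local optima network from the run of a simple iterated local search, counting transitions between consecutively visited optima. The toy fitness function, move operator and kick strength are placeholders chosen here for illustration; this is a schematic of the idea, not the authors' code.

```python
import random
from collections import defaultdict

def hill_climb(x, fitness, neighbors):
    """Greedy descent to a local optimum (minimization)."""
    while True:
        best = min(neighbors(x), key=fitness)
        if fitness(best) >= fitness(x):
            return x
        x = best

def local_optima_network(fitness, neighbors, random_solution, kick, n_steps=500):
    """Approximate a local optima network from an iterated local search trajectory:
    nodes are visited local optima, edge weights count observed transitions."""
    edges = defaultdict(int)
    current = hill_climb(random_solution(), fitness, neighbors)
    for _ in range(n_steps):
        nxt = hill_climb(kick(current), fitness, neighbors)
        edges[(current, nxt)] += 1
        current = nxt
    return edges

# Toy landscape: bit strings of length 8 with an arbitrary rugged term (an assumption).
def fitness(s):
    return sum(int(b) for b in s) + 3 * (s.count("11") % 2)

def neighbors(s):
    return [s[:i] + ("0" if s[i] == "1" else "1") + s[i + 1:] for i in range(len(s))]

def random_solution(n=8):
    return "".join(random.choice("01") for _ in range(n))

def kick(s, k=2):
    idx = set(random.sample(range(len(s)), k))
    return "".join(("0" if c == "1" else "1") if i in idx else c for i, c in enumerate(s))

net = local_optima_network(fitness, neighbors, random_solution, kick)
nodes = {u for u, _ in net} | {v for _, v in net}
print(len(nodes), "local optima,", len(net), "distinct edges")
```

The resulting weighted directed graph can then be analysed with standard complex-network metrics (degree distribution, clustering, path lengths).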

Relevance:

10.00%

Publisher:

Abstract:

The World Wide Web, the world's largest resource for information, has evolved from organizing information using controlled, top-down taxonomies to a bottom-up approach that emphasizes assigning meaning to data via mechanisms such as the Social Web (Web 2.0). Tagging adds meta-data (weak semantics) to the content available on the web. This research investigates the potential for repurposing this layer of meta-data. We propose a multi-phase approach that exploits user-defined tags to identify and extract domain-level concepts. We operationalize this approach and assess its feasibility by applying it to a publicly available tag repository. The paper describes insights gained from implementing and applying the heuristics contained in the approach, as well as challenges and implications of repurposing tags for the extraction of domain-level concepts.
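
The abstract does not spell out the heuristics themselves; purely as a hypothetical illustration of the general idea of repurposing tags, the sketch below promotes tags that are used often and co-occur with many distinct other tags as candidate domain-level concepts. The thresholds and the notion of "concept" used here are assumptions, not the authors' method.

```python
from collections import Counter, defaultdict

def candidate_concepts(tagged_items, min_freq=5, min_cooccurring=3):
    """Hypothetical heuristic: propose a tag as a domain-level concept if it is
    frequently used and co-occurs with many distinct other tags (broad coverage)."""
    freq = Counter()
    cooccur = defaultdict(set)
    for tags in tagged_items:                  # each item carries a set of user tags
        for t in tags:
            freq[t] += 1
            cooccur[t].update(set(tags) - {t})
    return sorted(t for t in freq
                  if freq[t] >= min_freq and len(cooccur[t]) >= min_cooccurring)

# Made-up tagged items for illustration.
items = [{"python", "programming", "tutorial"},
         {"python", "scripting"},
         {"programming", "java"},
         {"python", "programming", "web"},
         {"python", "data"},
         {"python", "programming"}]
print(candidate_concepts(items, min_freq=3, min_cooccurring=3))
```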

Relevance:

10.00%

Publisher:

Abstract:

Glucose homeostasis requires the tight regulation of glucose utilization by liver, muscle and white or brown fat, and of glucose production and release into the blood by the liver. The major goal of maintaining glycemia at ∼5 mM is to ensure a sufficient flux of glucose to the brain, which depends mostly on this nutrient as a source of metabolic energy. This homeostatic process is controlled by hormones, mainly glucagon and insulin, and by autonomic nervous activities that control the metabolic state of liver, muscle and fat tissue but also the secretory activity of the endocrine pancreas. Activation or inhibition of the sympathetic or parasympathetic branches of the autonomic nervous system is controlled by glucose-excited or glucose-inhibited neurons located at different anatomical sites, mainly in the brainstem and the hypothalamus. Activation of these neurons by hyper- or hypoglycemia represents a critical aspect of the control of glucose homeostasis, and loss of glucose sensing by these cells as well as by pancreatic β-cells is a hallmark of type 2 diabetes. In this article, aspects of the brain-endocrine pancreas axis are reviewed, highlighting the importance of central glucose sensing in the control of counterregulation to hypoglycemia but also mentioning the role of neural control in β-cell mass and function. Overall, the conclusion of these studies is that impaired glucose homeostasis, such as that associated with type 2 diabetes, but also defective counterregulation to hypoglycemia, may be caused by initial defects in glucose sensing.

Relevance:

10.00%

Publisher:

Abstract:

This work evaluates neighborhood-search heuristic algorithms (AED, ANED, SA, TS, GA and GRASP) for scheduling orders on a real-life single machine, with the objective of minimizing the sum of tardiness. The case studied differs from conventional problems in that the setup times of the operations are separated from the processing times and are sequence-dependent. The computational results reveal that Tabu Search performs better than the other algorithms applied.
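
To make the objective concrete, here is a minimal sketch of how total tardiness is computed for a given job sequence on a single machine when sequence-dependent setup times are kept separate from processing times. The job data below are made up for illustration and do not come from the study.

```python
def total_tardiness(sequence, proc, due, setup):
    """Sum of tardiness over a job sequence on one machine.
    setup[i][j] is the setup time required when job j follows job i;
    setups are separate from processing times and depend on the sequence."""
    t = 0.0
    tardiness = 0.0
    prev = None
    for j in sequence:
        if prev is not None:
            t += setup[prev][j]        # sequence-dependent setup before job j
        t += proc[j]                   # processing of job j
        tardiness += max(0.0, t - due[j])
        prev = j
    return tardiness

# Illustrative data: 3 jobs.
proc = [4.0, 3.0, 6.0]
due = [5.0, 9.0, 12.0]
setup = [[0, 2, 1],
         [1, 0, 3],
         [2, 2, 0]]
print(total_tardiness([0, 1, 2], proc, due, setup))  # -> 6.0
```

A local search heuristic such as Tabu Search would explore permutations of the jobs while evaluating them with an objective of this form.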

Relevance:

10.00%

Publisher:

Abstract:

Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes; over the last few years, however, these problems have gone from an entertainment to an interesting research area, interesting in two respects, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly structured nature, their study can contribute more than studies of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for solving and modeling Sudoku problems, namely, the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables – variables with the same value in all the solutions of an instance – and the hardness of GSP.
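
For concreteness, a minimal backtracking sketch of the CSP view of a Generalized Sudoku with block regions of m rows and n columns follows; it illustrates only the constraint model (row, column and region all-different), not the instance generators or the SAT encoding studied in the work. The small puzzle at the end is an invented example.

```python
def solve_gsp(grid, m, n):
    """Backtracking solver for a Generalized Sudoku with m x n block regions.
    grid is an (m*n) x (m*n) list of lists; 0 marks a hole."""
    size = m * n

    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(size)):   # row constraint
            return False
        if any(grid[i][c] == v for i in range(size)):   # column constraint
            return False
        br, bc = (r // m) * m, (c // n) * n              # region (block) constraint
        return all(grid[br + i][bc + j] != v
                   for i in range(m) for j in range(n))

    for r in range(size):
        for c in range(size):
            if grid[r][c] == 0:
                for v in range(1, size + 1):
                    if ok(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False                             # no value fits this hole
    return True                                          # no holes left: solved

# Invented 4x4 instance with 2x2 regions (m = n = 2); 0s are holes.
puzzle = [[1, 0, 0, 0],
          [0, 4, 0, 0],
          [0, 0, 4, 0],
          [0, 0, 0, 1]]
print(solve_gsp(puzzle, 2, 2), puzzle)
```

Controlling where the 0s (holes) are placed is exactly the balancing knob that the instance generators described above vary.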

Relevance:

10.00%

Publisher:

Abstract:

We present a new branch and bound algorithm for weighted Max-SAT, called Lazy, which incorporates original data structures and inference rules, as well as a lower bound of better quality. We provide experimental evidence that our solver is very competitive and outperforms some of the best performing Max-SAT and weighted Max-SAT solvers on a wide range of instances.
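
As a schematic of the generic branch-and-bound scheme for weighted Max-SAT (not the Lazy solver's data structures, inference rules or improved lower bound), the sketch below assigns variables one at a time, uses the weight of already-falsified clauses as a trivial lower bound, and prunes when that bound reaches the best cost found so far. The example clauses are invented.

```python
def weighted_maxsat_bnb(clauses, n_vars):
    """Branch and bound for weighted Max-SAT.
    clauses: list of (weight, clause), clause being a list of non-zero ints
    (positive literal = variable true, negative = false). Returns the minimum
    total weight of falsified clauses."""
    best = sum(w for w, _ in clauses)          # cost of falsifying every clause

    def falsified_weight(assign):
        """Weight of clauses already falsified by the partial assignment."""
        total = 0
        for w, clause in clauses:
            undecided = satisfied = False
            for lit in clause:
                v = assign.get(abs(lit))
                if v is None:
                    undecided = True
                elif (lit > 0) == v:
                    satisfied = True
                    break
            if not satisfied and not undecided:
                total += w
        return total

    def branch(var, assign):
        nonlocal best
        lb = falsified_weight(assign)          # trivial lower bound
        if lb >= best:
            return                             # prune: cannot improve
        if var > n_vars:
            best = lb                          # complete assignment improves best
            return
        for value in (True, False):
            assign[var] = value
            branch(var + 1, assign)
            del assign[var]

    branch(1, {})
    return best

# Invented example: variables 1..2, clauses as (weight, literals).
clauses = [(3, [1, 2]), (2, [-1]), (1, [-2]), (4, [1, -2])]
print(weighted_maxsat_bnb(clauses, 2))  # -> 2 (falsify only the clause of weight 2)
```

Real solvers such as the one described above improve on this baseline mainly through stronger lower bounds and inference at each node.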