977 results for Variable Neighborhood Search
Abstract:
Despite decades of research, the exact pathogenic mechanisms underlying acute mountain sickness (AMS) are still poorly understood. This fact frustrates the search for novel pharmacological prophylaxis for AMS. The prevailing view is that AMS results from an insufficient physiological response to hypoxia and that prophylaxis should aim at stimulating the response. Starting from the opposite hypothesis, that AMS may be caused by an initial excessive response to hypoxia, we suggest that directly or indirectly blunting specific parts of the response might provide promising research alternatives. This reasoning is based on the observations that (i) humans, once acclimatized, can climb Mt Everest experiencing arterial partial oxygen pressures (PaO2) as low as 25 mmHg without AMS symptoms; (ii) paradoxically, AMS usually develops at much higher PaO2 levels; and (iii) several biomarkers, suggesting initial activation of specific pathways at such PaO2 levels, are correlated with AMS. Apart from looking for substances that stimulate certain hypoxia-triggered effects, such as the ventilatory response to hypoxia, we suggest also investigating pharmacological means aimed at blunting certain other specific hypoxia-activated pathways, or stimulating their agonists, in the quest for better pharmacological prophylaxis for AMS.
Abstract:
Summary: Anther culturability and associated gene markers in the progeny of crosses between cultivated oat and wild oat
Abstract:
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant-bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals, within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which provide useful information on the performance of each technique. The selection of candidate events is then established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, we show that our approach, based on phase estimation, achieves a better signal-to-noise ratio than pure spectral analysis, the most common approach.
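As a concrete illustration of threshold-crossing selection under the Neyman-Pearson criterion, the sketch below thresholds a periodogram of simulated data (white Gaussian noise plus a monochromatic signal) at a chosen per-bin false-alarm probability. The sampling rate, signal amplitude and frequency, and noise level are assumed toy values, and the snippet is not the filtering pipeline developed in the paper.

    import numpy as np

    # Toy sketch (not the paper's pipeline): flag periodogram bins that cross a
    # Neyman-Pearson threshold set by a per-bin false-alarm probability.
    rng = np.random.default_rng(0)
    fs, duration = 1000.0, 10.0                 # sampling rate [Hz], record length [s]
    n = int(fs * duration)
    t = np.arange(n) / fs
    sigma = 1.0                                 # assumed noise standard deviation
    f0, amp = 123.0, 0.2                        # assumed signal frequency [Hz] and amplitude
    x = amp * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, sigma, n)

    # One-sided periodogram; under the noise-only hypothesis each bin is
    # approximately exponentially distributed with mean sigma**2.
    X = np.fft.rfft(x)
    periodogram = np.abs(X) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    alpha = 1e-6                                # per-bin false-alarm probability
    threshold = -sigma ** 2 * np.log(alpha)     # P(bin > threshold | noise only) = alpha

    print("candidate frequencies [Hz]:", freqs[periodogram > threshold])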
Abstract:
This master's thesis aims to study and present, based on the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, where the individuals are candidate solutions. The next population of candidate solutions is formed by exploiting the good properties of the current population through mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were surveyed in the literature, classified, and presented, and the necessary basics of evolutionary algorithms were also covered. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
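As a minimal illustration of the evolutionary loop described above (fitness evaluation, selection, crossover, mutation), the Python sketch below evolves bit strings toward an all-ones target. The fitness function, population size, mutation rate, and tournament selection are illustrative assumptions, not material from the thesis.

    import random

    # Minimal generational evolutionary algorithm on a toy problem:
    # maximise the number of ones in a fixed-length bit string.
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 40, 100, 0.02

    def fitness(genome):
        return sum(genome)                       # count of ones

    def crossover(a, b):
        point = random.randint(1, GENOME_LEN - 1)
        return a[:point] + b[point:]             # single-point crossover

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    def tournament(population):
        a, b = random.sample(population, 2)      # keep the fitter of two random individuals
        return a if fitness(a) >= fitness(b) else b

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print("best fitness:", fitness(best), "/", GENOME_LEN)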
Abstract:
Quest for Orthologs (QfO) is a community effort with the goal of improving and benchmarking orthology predictions. As quality assessment assumes prior knowledge of species phylogenies, we investigated the congruency between existing species trees by comparing the relationships of 147 QfO reference organisms from six Tree of Life (ToL)/species tree projects: the National Center for Biotechnology Information (NCBI) taxonomy, Opentree of Life, the sequenced species/species ToL, the 16S ribosomal RNA (rRNA) database, and trees published by Ciccarelli et al. (Ciccarelli FD, et al. 2006. Toward automatic reconstruction of a highly resolved tree of life. Science 311:1283-1287) and by Huerta-Cepas et al. (Huerta-Cepas J, Marcet-Houben M, Gabaldon T. 2014. A nested phylogenetic reconstruction approach provides scalable resolution in the eukaryotic Tree Of Life. PeerJ PrePrints 2:223). Our study reveals that each species tree suggests a different phylogeny: 87 of the 146 (60%) possible splits of a dichotomous and rooted tree are congruent, while all other splits are incongruent in at least one of the species trees. Topological differences are observed not only at deep speciation events but also within younger clades, such as Hominidae, Rodentia, Laurasiatheria, or rosids. The evolutionary relationships of 27 archaea and bacteria are highly inconsistent. By assessing 458,108 gene trees from 65 genomes, we show that consistent species topologies are more often supported by gene phylogenies than contradicting ones. The largest concordant species tree includes at most 77 of the QfO reference organisms. Results are summarized in the form of a consensus ToL (http://swisstree.vital-it.ch/species_tree) that can serve different benchmarking purposes.
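To make the notion of congruent splits concrete, the sketch below represents each rooted tree as the set of leaf sets induced by its internal nodes and counts how many of those splits two trees share. The four-taxon trees are made-up examples, not the QfO reference trees.

    # Split-based congruence between two rooted trees given as nested tuples of
    # leaf names; the trees below are invented examples, not QfO data.
    def splits(tree):
        """Return the set of leaf sets (frozensets) induced by internal nodes."""
        found = set()
        def leaves(node):
            if isinstance(node, tuple):
                leaf_set = frozenset().union(*(leaves(child) for child in node))
                found.add(leaf_set)
                return leaf_set
            return frozenset([node])
        leaves(tree)
        return found

    tree_a = ((("human", "chimp"), "mouse"), "chicken")
    tree_b = ((("human", "mouse"), "chimp"), "chicken")

    shared = splits(tree_a) & splits(tree_b)
    print(f"{len(shared)} of {len(splits(tree_a))} splits are congruent")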
Abstract:
The publication of Nubosidad variable in 1992 marked the resumption of Carmen Martín Gaite's novelistic activity after fourteen years devoted to the essay and to history. In this work, the novelist's literary "credo" has not changed substantially with respect to works such as Ritmo lento, Retahilas, or El cuarto de atrás, but it has been enriched by a technique that masterfully delimits and fixes the whole world of meanings the novel projects, which we can conceptualize as the author's flirtation with introspection, remembrance, and the caprices, when facing time past, of a vulnerable and slippery memory. Narrative perspective is revealed, in this sense, as the basic support of the material scaffolding of the work.
Abstract:
We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, this is the first time the trust region method has been studied for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of nonlinearity and nonconvexity in the model.
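The contrast between the two step-control strategies can be made concrete on a small nonconvex test problem. The sketch below runs SciPy's line-search-based BFGS solver against its trust-region-based trust-constr solver on the Rosenbrock function; this toy setup is an assumption for illustration and is not one of the variational optical flow energies studied in the paper.

    import numpy as np
    from scipy.optimize import minimize

    # Line search (BFGS) versus trust region (trust-constr) on the Rosenbrock
    # function, a standard nonconvex test case used here purely for illustration.
    def rosenbrock(z):
        x, y = z
        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

    def rosenbrock_grad(z):
        x, y = z
        return np.array([-2 * (1 - x) - 400 * x * (y - x ** 2),
                         200 * (y - x ** 2)])

    x0 = np.array([-1.2, 1.0])
    results = {
        "line search (BFGS)": minimize(rosenbrock, x0, jac=rosenbrock_grad, method="BFGS"),
        "trust region (trust-constr)": minimize(rosenbrock, x0, jac=rosenbrock_grad, method="trust-constr"),
    }
    for name, res in results.items():
        print(f"{name}: f(x*) = {res.fun:.3e} at x* = {res.x}")

The trust-region solver restricts each step to a region in which its local model is trusted, which is what tends to help on nonconvex objectives, whereas BFGS relies on a Wolfe line search along the quasi-Newton direction.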
Abstract:
In 2008 the regional government of Catalonia (Spain) reduced the maximum speed limit on several stretches of congested urban motorway in the Barcelona metropolitan area to 80 km/h, while in 2009 it introduced a variable speed system on other stretches of its metropolitan motorways. We use the differences-in-differences method, which enables a policy impact to be measured under specific conditions, to assess the impact of these policies on emissions of NOx and PM10. Empirical estimates indicate that reducing the speed limit to 80 km/h caused a 1.7 to 3.2% increase in NOx and a 5.3 to 5.9% increase in PM10. By contrast, the variable speed policy reduced NOx and PM10 pollution by 7.7 to 17.1% and 14.5 to 17.3%, respectively. As such, a variable speed policy appears to be a more effective environmental policy than reducing the speed limit to a maximum of 80 km/h.
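In the differences-in-differences set-up, the policy effect is the coefficient on the interaction between a treated-stretch indicator and a post-policy indicator, which removes both the fixed difference between treated and control stretches and the common time trend. The sketch below fits this regression with statsmodels on synthetic data; the 3% "true effect", sample size, and noise level are arbitrary placeholders, not the Barcelona emissions data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Differences-in-differences on synthetic data: the treated:post coefficient
    # recovers the assumed policy effect on log emissions.
    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),   # stretch subject to the speed policy
        "post": rng.integers(0, 2, n),      # observation taken after the policy change
    })
    true_effect = 0.03                      # assumed 3% increase, for illustration only
    df["log_nox"] = (0.5 + 0.10 * df["treated"] + 0.05 * df["post"]
                     + true_effect * df["treated"] * df["post"]
                     + rng.normal(0.0, 0.02, n))

    model = smf.ols("log_nox ~ treated + post + treated:post", data=df).fit()
    print(f"estimated policy effect: {model.params['treated:post']:.3f} (true value {true_effect})")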
Abstract:
As the world's energy demand increases, a durable way to restrain it is to improve the energy efficiency of processes. It has been estimated that pumping applications have a significant potential for energy savings through equipment or control system changes. For many pumping applications, the use of a variable speed drive (VSD) as a process control element is the most energy efficient solution. The main target of this study is to examine the energy efficiency of the drive system that moves the pump. On a larger scale, the purpose of this study is to examine how different manufacturers' variable speed drives function as control devices for a pumping process. The idea is to compare the drives from a normal pump user's point of view. What matters to the pump user is the efficiency gained in the process and the ease of use of the VSD, so some attention is also given to evaluating the user-friendliness of the VSDs. The VSDs are also compared to each other on the basis of their life cycle energy costs in different kinds of pumping cases. The comparison is made between the ACS800 from ABB, the VLT AQUA Drive from Danfoss, the NX-drive from Vacon, and the Micromaster 430 from Siemens. The efficiencies are measured in the power electronics laboratory at Lappeenranta University of Technology with a system that consists of a variable speed drive, an induction motor coupled to a DC machine, two power analyzers, and a torque transducer. The efficiencies are measured as a function of load at different frequencies. According to the measurement results, the differences between the measured system efficiencies in the actual operating range of pumping are on average a few percentage points. When examining efficiencies across the whole range of loads and frequencies, the differences grow larger. At low frequencies and loads, the differences between the most efficient and the least efficient systems are at most about ten percentage points. At most of the tested points, ABB's drive seems to have slightly higher efficiency than the other drives.
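As an illustration of how a system efficiency figure follows from the quantities mentioned above (the torque transducer reading, the shaft speed, and the electrical input power from the power analyzer), here is a small sketch; the numbers in the example are placeholder readings, not measurements from this study.

    import math

    # System efficiency = mechanical shaft power / electrical input power.
    def system_efficiency(torque_nm, speed_rpm, electrical_input_w):
        shaft_power_w = torque_nm * speed_rpm * 2 * math.pi / 60   # P = T * omega
        return shaft_power_w / electrical_input_w

    # Placeholder example: 30 N·m at 1450 rpm drawn from 5.2 kW of electrical input.
    print(f"efficiency = {system_efficiency(30.0, 1450.0, 5200.0):.1%}")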
Abstract:
Previous studies have identified the rivalry among technological platforms as one of the main driving forces of broadband service penetration. This paper draws on data from the Spanish market between 2005 and 2011 to estimate the main determinants of broadband prices. Controlling for broadband tariff features and network variables, we examine the impact of the different modes of competition on prices. We find that inter-platform competition has no significant effect on prices, while intra-platform competition is a key driver of the prices charged in the broadband market. Our analysis suggests that the impact of different types of competition on prices is critically affected by the level of broadband market development achieved in the country considered.
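A hedged sketch of the kind of price regression implied above, fitted on synthetic data only (the variable names, coefficients, and data are illustrative assumptions, not the Spanish data set): the tariff price is regressed on an inter-platform competition dummy, an intra-platform competitor count, and a tariff feature control.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic broadband price regression; by construction only intra-platform
    # competition and the speed of the tariff affect the price.
    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "inter_platform": rng.integers(0, 2, n),       # rival platform (e.g. cable) present
        "intra_platform": rng.integers(0, 6, n),       # entrants using the incumbent's network
        "speed_mbps": rng.choice([3, 6, 10, 20], n),   # tariff feature control
    })
    df["price"] = (40.0 - 1.5 * df["intra_platform"] + 0.4 * df["speed_mbps"]
                   + rng.normal(0.0, 2.0, n))

    model = smf.ols("price ~ inter_platform + intra_platform + speed_mbps", data=df).fit()
    print(model.params.round(2))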
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
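To ground the idea of querying a web database through its search interface programmatically, here is a small hedged sketch using the requests and BeautifulSoup libraries. It is not the I-Crawler or the thesis's form query language; the URL, the "title" form field, and the result-page markup are hypothetical.

    import requests
    from bs4 import BeautifulSoup

    SEARCH_URL = "https://example.org/books/search"    # hypothetical search interface

    def query_form(term):
        # Submit the form the way a browser would (here an HTTP GET with the query
        # term in the hypothetical "title" field) and parse the dynamic result page.
        response = requests.get(SEARCH_URL, params={"title": term}, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Assume each result record is rendered as <div class="result">...</div>.
        return [row.get_text(strip=True) for row in soup.select("div.result")]

    if __name__ == "__main__":
        for record in query_form("variable neighborhood search"):
            print(record)

A real deep web crawler additionally has to discover the form in the first place, infer field labels and types, and cope with JavaScript-generated forms, which is what the I-Crawler architecture described above addresses.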
Abstract:
We present preliminary results of a campaign undertaken with different radio interferometers to observe a sample of the most variable unidentified EGRET sources. We aim to detect which of the possible counterparts of the gamma-ray sources (any of the radio emitters in the field) varies in time on timescales similar to those of the gamma-ray variation. If the gamma rays are produced in a jet-like source, as we have modelled theoretically, synchrotron emission is also expected at radio wavelengths. Such radio emission should appear variable in time and correlated with the gamma-ray variability.
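A minimal sketch of the variability comparison described above, on synthetic light curves only (the epochs, fluxes, and source names are invented, not campaign data): each candidate counterpart's radio flux is correlated with the gamma-ray flux measured at the same epochs, and a strongly correlated, variable radio emitter is the natural candidate.

    import numpy as np

    # Correlate each candidate radio light curve with the gamma-ray light curve.
    rng = np.random.default_rng(3)
    epochs = 24
    phase = np.linspace(0.0, 4.0 * np.pi, epochs)
    gamma_flux = 1.0 + 0.3 * np.sin(phase) + rng.normal(0.0, 0.05, epochs)

    candidates = {
        "radio source A": 2.0 + 0.6 * np.sin(phase) + rng.normal(0.0, 0.10, epochs),
        "radio source B": 1.5 + rng.normal(0.0, 0.10, epochs),   # non-variable field source
    }
    for name, radio_flux in candidates.items():
        r = np.corrcoef(gamma_flux, radio_flux)[0, 1]
        print(f"{name}: correlation with gamma-ray light curve = {r:+.2f}")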
Abstract:
We report the survival and outcome of progressive multifocal leukoencephalopathy (PML) in a 56-year-old patient with common variable immunodeficiency, consisting of severe hypogammaglobulinemia and CD4+ T lymphocytopenia, during continuous treatment with mirtazapine (30 mg/day) and mefloquine (250 mg/week) over 23 months. Regular clinical examinations, including the Rankin scale and Barthel index, nine-hole peg and box and block tests, Berg balance and 10-m walking tests, and the Montreal Cognitive Assessment (MoCA), were performed. Laboratory diagnostics included complete blood count and JC virus (JCV) concentration in cerebrospinal fluid (CSF). The noncoding control region (NCCR) of JCV, important for neurotropism and neurovirulence, was sequenced. Repeated MRI followed the course of the brain lesions. JCV was detected in increasing concentrations (peak 2568 copies/ml CSF), and its NCCR was genetically rearranged. Under treatment, the rearrangement changed toward the archetype sequence, and later JCV DNA became undetectable. Total brain lesion volume decreased (8.54 to 3.97 cm³) while atrophy increased. Barthel (60 to 100 to 80 points) and Rankin (4 to 2 to 3) scores, gait stability, and box and block (7, 35, 25 pieces) and nine-hole peg (300, 50, 300 s) test performances first improved but subsequently worsened. Cognition and walking speed remained stable. Despite initial rapid deterioration, the patient survived under continuous treatment with mirtazapine and mefloquine even though he belongs to a PML subgroup that is usually fatal within a few months. This course was paralleled by JCV clones with presumably lower replication capability before JCV became undetectable. Neurological deficits were due to PML lesions and progressive brain atrophy.