886 results for Query Refinement


Relevance: 10.00%

Abstract:

Background: The MPER region of the HIV-1 envelope glycoprotein gp41 is targeted by broadly neutralizing antibodies. However, the localization of this epitope in a hydrophobic environment seems to hamper the elicitation of these antibodies in HIV-infected individuals. We quantified and characterized anti-MPER antibodies by ELISA and by flow cytometry using a collection of mini gp41-derived proteins expressed on the surface of 293T cells. Longitudinal plasma samples from 35 HIV-1-infected individuals were assayed for MPER recognition and for MPER-dependent neutralizing capacity using HIV-2 viruses engrafted with HIV-1 MPER sequences.

Results: Miniproteins devoid of the cysteine loop of gp41 exposed the MPER on the 293T cell membrane. Anti-MPER antibodies were identified in most individuals and were stable when analyzed in longitudinal samples. The magnitude of the responses was strongly correlated with the global response to the HIV-1 envelope glycoprotein, suggesting no specific limitation for anti-MPER antibodies. Peptide mapping showed poor recognition of the C-terminal MPER moiety and a widespread presence of antibodies against the 2F5 epitope. However, antibody titers failed to correlate with 2F5-blocking activity and, more importantly, with the specific neutralization of HIV-2 chimeric viruses bearing the HIV-1 MPER sequence, suggesting strong functional heterogeneity in anti-MPER humoral responses.

Conclusions: Anti-MPER antibodies can be detected in the vast majority of HIV-1-infected individuals and are generated in the context of the global anti-Env response. However, their neutralizing capacity is heterogeneous, suggesting that eliciting neutralizing anti-MPER antibodies by immunization may require refinement of immunogens to avoid non-neutralizing responses.

Relevance: 10.00%

Abstract:

Given climatic changes around the world and growing outdoor sports participation, existing guidelines and recommendations for exercising in naturally challenging environments such as heat, cold or altitude exhibit potential shortcomings. Continuous efforts from the sport sciences and exercise physiology communities aim at minimizing the risks of environment-related illnesses during outdoor sports practice. Despite this, simple weather indices do not permit an accurate estimation of the likelihood of thermal illness. This provides a critical foundation for modifying available human comfort models and integrating bio-meteorological data in order to improve the current guidelines. Although it requires further refinement, there is no doubt that standardizing the recently developed Universal Thermal Climate Index (UTCI) approach, and applying it in the field of sport sciences and exercise physiology, may help improve the appropriateness of current guidelines for outdoor recreational and competitive sports participation. This review first summarizes the main environment-related risk factors that are likely to increase with recent climate changes when exercising outdoors and offers recommendations to combat them appropriately. Second, we briefly address the recent development of thermal stress models for assessing thermal comfort and physiological responses when practicing outdoor activities in challenging environments.
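As a simple illustration of the kind of composite thermal index the review discusses (not the UTCI itself, whose computation is far more involved), the standard outdoor wet-bulb globe temperature (WBGT) combines three temperature readings with fixed weights:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_air):
    """Outdoor WBGT (deg C): the standard weighted combination of
    natural wet-bulb, black-globe and dry-bulb air temperatures."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_air

# Example reading on a hot, humid, sunny afternoon.
print(round(wbgt_outdoor(26.0, 45.0, 33.0), 1))  # -> 30.5
```

The heavy weight on the wet-bulb term is what makes WBGT sensitive to humidity, the dominant constraint on evaporative cooling during exercise.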

Relevance: 10.00%

Abstract:

The thesis was made for Hyötypaperi Oy. The company's business includes recycling materials for re-use and manufacturing solid biofuels and solid recovered fuels. Hyötypaperi Oy delivers forest chips to its partner incineration plants day and night throughout the year. The value of forest chips is based on their dry-matter content, so it is important to dry forest chips well before storing them in piles and delivering them to incineration plants. The thesis examined how the degree of refinement of forest chips can be increased by different drying methods. Four drying methods were examined: field, plate, platform and channel drying. Channel drying used a mechanical blower, while the other methods relied on weather conditions. Test dryings were made with all methods during the summer of 2007. The thesis also examined the economic profitability of field and channel drying. The last part of the study measured the humidity of forest chips with sawn-timber moisture-measuring equipment during November 2007. Field drying on an asphalt surface is the only drying method that Hyötypaperi Oy uses in its own production. There are no properly examined earlier results for any drying method of forest chips, because drying forest chips is a new field. Field and platform drying achieved lower chip humidity than plate drying. The aim of using the moisture-measuring equipment was to obtain the humidity of forest chips quickly; at present, humidity is found out only after 24 hours, once a moisture sample has been dried in an oven. The sawn-timber moisture-measuring equipment belonged to Lappeenranta University of Technology. The humidity values it measured from forest-chip samples were 2–9 percentage points lower than the true values determined by oven drying.
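The economics described above hinge on moisture: the net (as-received) heating value of wood chips falls with moisture content both by dilution and through the latent heat needed to evaporate the water. A minimal sketch of this standard relation follows; the dry-basis lower heating value of 19 MJ/kg is an assumed typical figure for wood, not a value from the thesis:

```python
def net_heating_value(q_dry_mj_per_kg, moisture_fraction):
    """Net calorific value of fuel as received (MJ/kg): the dry-basis
    value diluted by the water content, minus the latent heat
    (~2.443 MJ per kg of water) needed to evaporate that water."""
    m = moisture_fraction  # wet-basis moisture fraction, 0..1
    return q_dry_mj_per_kg * (1.0 - m) - 2.443 * m

# Drying chips from 50% to 30% moisture raises the delivered energy:
print(round(net_heating_value(19.0, 0.50), 2))  # -> 8.28
print(round(net_heating_value(19.0, 0.30), 2))  # -> 12.57
```

This is why well-dried chips command a higher price per tonne delivered to the incineration plant.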

Relevance: 10.00%

Abstract:

Despite the variety of available Web service registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Services Description Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Notably, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
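A registry exposing RDF descriptions over the SPARQL protocol can be queried with a plain HTTP GET. The sketch below only builds such a request; the `/sparql` endpoint path, the `wsdl:` prefix and the `wsdl:Service` class are illustrative assumptions based on the W3C WSDL-RDF mapping, not BioSWR's documented schema:

```python
from urllib.parse import urlencode

# Hypothetical query listing registered services; prefix and property
# names are assumptions, not BioSWR's actual vocabulary.
SPARQL_QUERY = """
PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
SELECT ?service WHERE {
    ?service a wsdl:Service .
}
LIMIT 10
"""

def build_sparql_request(endpoint_url, query):
    """Return the GET URL a SPARQL-protocol client would request."""
    return endpoint_url + "?" + urlencode({"query": query})

url = build_sparql_request("http://inb.bsc.es/BioSWR/sparql", SPARQL_QUERY)
print(url.startswith("http://inb.bsc.es/BioSWR/sparql?query="))
```

The same query string could equally be submitted through any SPARQL client library; the protocol only requires the percent-encoded `query` parameter.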

Relevance: 10.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English, so their findings may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and often infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
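A data model for search interfaces of the kind described above can be sketched, in heavily reduced form, as a set of labelled fields plus a method that binds user-facing query values to the form's submit parameters. All class and field names here are illustrative, not the thesis's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class FormField:
    label: str        # human-readable label extracted near the input
    name: str         # HTML input name actually submitted to the server
    default: str = ""

@dataclass
class SearchInterface:
    action_url: str
    method: str = "get"
    fields: list = field(default_factory=list)

    def bind(self, **values):
        """Map user-facing labels to the name/value pairs a crawler
        would submit through the form."""
        by_label = {f.label.lower(): f for f in self.fields}
        return {by_label[k.lower()].name: v for k, v in values.items()}

form = SearchInterface(
    action_url="http://example.org/search",
    fields=[FormField("Title", "q_title"), FormField("Author", "q_author")],
)
print(form.bind(Title="deep web", Author="Smith"))
# -> {'q_title': 'deep web', 'q_author': 'Smith'}
```

The label-to-name indirection is the crux: a form query language can then be written against the extracted labels while the crawler handles the site-specific parameter names.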

Relevance: 10.00%

Abstract:

While the overall incidence of myocardial infarction (MI) has been decreasing since 2000 [1], an increasing number of younger patients are presenting with MI [2]. Few studies have focused on MI in very young patients, aged 35 years or less, as they account for only a minority of all patients with MI [3]. Depending on the age category, MI differs in presentation, treatment and outcome, as illustrated in table 1. Echocardiography is considered mandatory by scientific guidelines in the management and diagnosis of MI [4,5,6]. However, newer imaging techniques such as cardiac magnetic resonance (CMR) and computed tomography (CT) are increasingly performed and enable further refinement of the diagnosis of MI; in particular, they allow precise localization and quantification of the infarct. In this case, the MI was located in the septum, which is an unusual presentation. The incidence of pulmonary embolism (PE) has also increased in young patients over the past years [7]. Since the symptoms and signs of PE may be non-specific, establishing the diagnosis remains a challenge [8]; PE is therefore one of the most frequently missed diagnoses in clinical medicine. Because of the widespread use of CT and its improved visualization of the pulmonary arteries, PE may be discovered incidentally [9]. In the absence of a congenital disorder, multiple and/or simultaneous disease presentations are uncommon in the young. We report the rare case of a 35-year-old male with isolated septal MI and simultaneous PE. The diagnosis of this rare clinical entity was only possible by means of newer imaging techniques.

Relevance: 10.00%

Abstract:

BACKGROUND: Psychogenic non-epileptic seizures (PNES) are involuntary paroxysmal events that are unaccompanied by epileptiform EEG discharges. We hypothesised that PNES are a disorder of distributed brain networks resulting from their functional disconnection. The disconnection may underlie a dissociation mechanism that weakens the influence of unconsciously presented traumatising information but exerts maladaptive effects, leading to episodic failures of behavioural control manifested as psychogenic 'seizures'. METHODS: To test this hypothesis, we compared functional connectivity (FC) derived from resting-state high-density EEGs of 18 patients with PNES and 18 age- and gender-matched controls. To this end, the EEGs were transformed into source space using the local autoregressive average inverse solution. FC was estimated with a multivariate measure of lagged synchronisation in the θ, α and β frequency bands for 66 brain sites clustered into 18 regions. A multiple-comparison permutation test was applied to deduce significant between-group differences in inter-regional and intraregional FC. RESULTS: A significant effect of PNES, namely a decrease in lagged FC between the basal ganglia and the limbic, prefrontal, temporal, parietal and occipital regions, was found in the α band. CONCLUSION: We believe that this finding reveals a possible neurobiological substrate of PNES, which explains both the attenuated effect of potentially disturbing mental representations and the occurrence of PNES episodes. By improving understanding of the aetiology of this condition, our results suggest a potential refinement of diagnostic criteria and management principles.

Relevance: 10.00%

Abstract:

The use of ICT has become widespread in the tourism sector, becoming a fundamental tool and an ally in winning over tourists for the various destinations promoted through mobile applications or websites. Tourism bodies and companies increasingly turn to information technologies, in particular the Internet, as a means of promoting their tourism products and services. These new technologies have changed how people live with respect to checking prices and quickly obtaining information on different tourism services. Valledupar should take advantage of the worldwide trend towards recovering authentic values, the environment and indigenous communities through different forms of tourism: ecotourism, ethnotourism, agrotourism, cultural, religious, shopping, adventure, health, sports and capital-city tourism. Knowledge of the municipal territory and of its native values should be broadened. Using free and open-source software, solutions can be created to strengthen the promotion of the tourism sector of the city of Valledupar.

Relevance: 10.00%

Abstract:

An unsupervised approach to image segmentation that fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation, which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
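A heavily simplified sketch of the seed-and-competition idea on a toy grayscale grid: each seed grows by claiming 4-connected neighbours, and a contested pixel goes to the region whose running mean intensity it matches best (the decision criterion). This illustrates only the region-competition aspect; the actual method additionally exploits boundary information and a multiresolution representation:

```python
import heapq

def grow_regions(image, seeds):
    """Competing seeded region growing: unassigned pixels are claimed,
    cheapest first, by the neighbouring region whose running mean
    intensity they match best."""
    h, w = len(image), len(image[0])
    label = [[None] * w for _ in range(h)]
    sums = {k: image[r][c] for k, (r, c) in enumerate(seeds)}
    counts = {k: 1 for k in range(len(seeds))}
    heap = []

    def offer(r, c, k):
        # Propose assigning pixel (r, c) to region k, at a cost equal
        # to its intensity distance from the region's current mean.
        if 0 <= r < h and 0 <= c < w and label[r][c] is None:
            cost = abs(image[r][c] - sums[k] / counts[k])
            heapq.heappush(heap, (cost, r, c, k))

    for k, (r, c) in enumerate(seeds):
        label[r][c] = k
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            offer(r + dr, c + dc, k)

    while heap:
        _, r, c, k = heapq.heappop(heap)
        if label[r][c] is not None:
            continue                      # already won by another region
        label[r][c] = k
        sums[k] += image[r][c]
        counts[k] += 1
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            offer(r + dr, c + dc, k)
    return label

img = [[10, 10, 200],
       [11, 10, 201],
       [12, 12, 199]]
print(grow_regions(img, [(0, 0), (0, 2)]))
# -> [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
```

The priority queue makes the two regions expand in lockstep, so the dark seed captures the dark pixels before the bright region can reach them.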

Relevance: 10.00%

Abstract:

An important part of the sales process in the paper industry is checking the availability of the ordered product and the delivery schedule. In practice this means checking transports, production and already-manufactured material. This thesis implements the availability check for existing free material. Checking material is not a new idea, but the capacity reservation has been reimplemented to ease future maintenance work and to improve system performance. In addition, the new reservation logic can also be used in other programs of the production planning system. A new cost-based prioritization scheme has been built into the capacity reservation, along with consideration of how it could easily be refined in the future. Special attention has been paid to transparency of operation, i.e. the checking logic reports the reasons for rejecting each material. The load that material reservation causes on the system was also analysed, and different techniques for improving performance were considered.
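Cost-based prioritization with transparent rejection reasons, as described above, can be sketched as follows. The order and lot fields (`grade`, `tonnes`, `cost`) are illustrative assumptions, not the actual production planning system's data model:

```python
def reserve_material(order, lots):
    """Pick the cheapest acceptable free lot for an order and report,
    for transparency, why every other lot was not chosen."""
    reasons, candidates = {}, []
    for lot in lots:
        if lot["grade"] != order["grade"]:
            reasons[lot["id"]] = "wrong grade"
        elif lot["tonnes"] < order["tonnes"]:
            reasons[lot["id"]] = "insufficient quantity"
        else:
            candidates.append(lot)
    chosen = min(candidates, key=lambda l: l["cost"], default=None)
    for lot in candidates:
        if chosen is not None and lot["id"] != chosen["id"]:
            reasons[lot["id"]] = "higher cost than chosen lot"
    return chosen, reasons

order = {"grade": "A", "tonnes": 20}
lots = [
    {"id": 1, "grade": "B", "tonnes": 50, "cost": 10},
    {"id": 2, "grade": "A", "tonnes": 10, "cost": 5},
    {"id": 3, "grade": "A", "tonnes": 30, "cost": 8},
    {"id": 4, "grade": "A", "tonnes": 40, "cost": 12},
]
chosen, reasons = reserve_material(order, lots)
print(chosen["id"], reasons)
# -> 3 {1: 'wrong grade', 2: 'insufficient quantity', 4: 'higher cost than chosen lot'}
```

Recording a reason per rejected lot is what gives the reservation logic the transparency the thesis emphasizes: a salesperson can see at a glance why a given lot was passed over.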

Relevance: 10.00%

Abstract:

Laser scanning is becoming an increasingly popular method for measuring 3D objects in industrial design. Laser scanners produce a cloud of 3D points; for CAD software to be able to use such data, however, this point cloud needs to be turned into a vector format. A popular way to do this is to triangulate the assumed surface of the point cloud using alpha shapes. Alpha shapes start from the convex hull of the point cloud and gradually refine it towards the true surface of the object. It is often nontrivial to decide when to stop this refinement. One criterion is to stop when the homology of the object stops changing; this is known as the persistent homology of the object. The goal of this thesis is to develop a way to compute the homology of a given point cloud when processed with alpha shapes, and to infer from it when the persistent homology has been reached. In practice, computing such a characteristic of the target might be applied to the analysis of power line tower spans.
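The idea of watching homology stabilize across a filtration can be illustrated in its simplest, 0-dimensional form: track the number of connected components as pairwise edges are added in order of increasing length. This is only the 1-skeleton analogue of the alpha-shape filtration the thesis uses (full alpha-shape persistence also tracks loops and cavities), but the stabilization pattern is the same:

```python
from itertools import combinations
from math import dist

def betti0_filtration(points):
    """0-dimensional persistence: number of connected components after
    each merging edge, as edges are added in order of increasing length."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    components = len(points)
    history = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # this edge merges two components
            parent[ri] = rj
            components -= 1
            history.append((dist(points[i], points[j]), components))
    return history

# Two well-separated pairs: components drop 4 -> 2 at scale 1, then stay
# at 2 until the much larger inter-cluster scale 10 merges everything.
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
print(betti0_filtration(pts))
# -> [(1.0, 3), (1.0, 2), (10.0, 1)]
```

The long gap between scale 1 and scale 10 during which the component count is constant is exactly the kind of persistent feature used to decide when the refinement has reached the true shape.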

Relevance: 10.00%

Abstract:

Design and implementation of a web application for consulting urban information in the municipalities of Mallorca, including the location of points of interest. A pilot implementation was carried out in the municipality of Santa Eugènia, Mallorca.

Relevance: 10.00%

Abstract:

There is no evidence of urban civilization in Brazilian prehistory; most inhabitants lived in tribal groupings, probably with regional economic integration among several independent tribes. There is little evidence of seasonal migrations between the coast and the inland of southern Brazil. Some specialized horticulturists competed among themselves, while other groups lived more isolated, and probably peacefully, in the upper interfluvial regions. Chemical analysis of artifacts is a means of documenting traffic in particular materials, intraregional production and distribution, the development of craft specialization, and typological refinement, among other issues. In this study we tested some of these possibilities in two different cultural contexts using the parametric k0 neutron activation analysis technique, which allowed the determination of the elements Al, As, Au, Ce, Cl, Co, Cr, Cs, Cu, Fe, Ga, K, La, Na, Rb, Sc, Ta, Ti, V and Zn.

Relevance: 10.00%

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks; however, preference learning involves predicting an ordering of data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from vast amounts of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. The contribution of this thesis is thus twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning from large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training kernel-based ranking algorithms can be infeasible when the training set is large. We address this problem by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently on large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
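The pairwise reduction the abstract describes (learning an ordering rather than a label) can be shown with a deliberately simple linear model: a preference perceptron trained on ordered pairs. This is a sketch of the general idea only, not the thesis's kernel-based regularized least-squares (RankRLS-style) methods:

```python
def train_preference_perceptron(pairs, dim, epochs=20):
    """Learn w so that w . a > w . b for every preferred pair (a, b),
    using perceptron-style updates on violated pairs."""
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:
            margin = sum(wk * (ak - bk) for wk, ak, bk in zip(w, a, b))
            if margin <= 0:                 # pair currently ordered wrongly
                for k in range(dim):
                    w[k] += a[k] - b[k]     # push a's score above b's
    return w

def score(w, x):
    return sum(wk * xk for wk, xk in zip(w, x))

# Toy ranking data: the first feature drives the preference.
pairs = [((2.0, 0.0), (1.0, 1.0)), ((3.0, 1.0), (1.0, 0.0))]
w = train_preference_perceptron(pairs, dim=2)
print(score(w, (2.0, 0.0)) > score(w, (1.0, 1.0)))  # -> True
```

Each training example is a *pair* of points with a known order, and the loss depends only on score differences; this is precisely the reduction that lets classification-style machinery be reused for ranking.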

Relevance: 10.00%

Abstract:

Mössbauer analysis, along with structural Rietveld refinement based on powder X-ray data, of the magnetic fraction (saturation magnetization sigma = 19 J T-1 kg-1) separated from a tuffite material from Alto Paranaíba, state of Minas Gerais, Brazil, reveals that a (Ti, Mg)-rich maghemite (deduced sigma = 17 J T-1 kg-1) and, observed for the first time in this lithodomain, magnesioferrite (characteristic sigma = 21 J T-1 kg-1) account for the magnetization of the rock material. Consistent models for the ionic distribution in these iron-rich spinel structures are proposed.