Abstract:
Recognition of environmental sounds is believed to proceed through discrimination steps from broad to narrower categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories, or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not for control (C) species, which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre- vs. post-training changes in AEPs differed significantly between T and C species i) at 206-232 ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than for C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human and animal vocalizations. Moreover, the training-induced plasticity is reflected in the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. However, training to identify birdsongs also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from the analysis of lower-level features, such as apperception. We therefore suggest that access to objects within an auditory semantic category differs according to the subject's level of expertise. More specifically, correct intra-categorical auditory discrimination for untrained items follows the temporal hierarchy and transpires at a late stage of semantic processing. In contrast, correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with the discrimination between human and animal vocalizations, and involves a parallel semantic pathway requiring expertise.
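To make the reported contrast concrete, the sketch below computes the training-by-session interaction, i.e. the pre- vs. post-training AEP change for trained minus control species, averaged over one of the reported time windows. The data, channel indices and single t-test are hypothetical illustrations, not the authors' cluster-based analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic AEPs: (participants, species type, session, channels, time points),
# species type: 0 = trained (T), 1 = control (C); session: 0 = pre, 1 = post.
n_subj, n_chan, n_times = 20, 64, 600        # 1 sample per ms, 0-599 ms (illustrative)
aep = rng.normal(size=(n_subj, 2, 2, n_chan, n_times))

# Training-by-session interaction: (post - pre) for T minus (post - pre) for C.
diff_T = aep[:, 0, 1] - aep[:, 0, 0]         # (participants, channels, time)
diff_C = aep[:, 1, 1] - aep[:, 1, 0]
interaction = diff_T - diff_C

# Average over one reported time window and a hypothetical channel cluster.
window = slice(206, 233)                     # 206-232 ms at 1 kHz
cluster = [10, 11, 12, 18]                   # placeholder left anterior temporal channels
effect = interaction[:, cluster, window].mean(axis=(1, 2))

# One-sample test of the interaction against zero; the study itself used
# cluster-based statistics over channels and time, not a single t-test.
t, p = stats.ttest_1samp(effect, 0.0)
print(f"interaction: t({n_subj - 1}) = {t:.2f}, p = {p:.3f}")
```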
Abstract:
Background: Antiretroviral therapy has changed the natural history of human immunodeficiency virus (HIV) infection in developed countries, where it has become a chronic disease. This clinical scenario requires a new approach to simplify follow-up appointments and facilitate access to healthcare professionals. Methodology: We developed a new internet-based home care model covering the entire management of chronic HIV-infected patients, called Virtual Hospital. We report the results of a prospective randomised study performed over two years, comparing the standard care received by HIV-infected patients with Virtual Hospital care. HIV-infected patients with access to a computer and broadband were randomised to be monitored either through Virtual Hospital (Arm I) or through standard care at the day hospital (Arm II). After one year of follow-up, patients switched their care to the other arm. Virtual Hospital offered four main services: Virtual Consultations, Telepharmacy, Virtual Library and Virtual Community. A technical and clinical evaluation of Virtual Hospital was carried out. Findings: Of the 83 randomised patients, 42 were monitored during the first year through Virtual Hospital (Arm I) and 41 through standard care (Arm II). Baseline characteristics of the patients were similar in the two arms. The level of technical satisfaction with the virtual system was high: 85% of patients considered that Virtual Hospital improved their access to clinical data, and they felt comfortable with the videoconference system. Neither the clinical parameters [CD4+ T-lymphocyte count, proportion of patients with an undetectable viral load (p = 0.21) and compliance levels (90%, p = 0.58)] nor the quality-of-life or psychological questionnaires changed significantly between the two types of care. Conclusions: Virtual Hospital is a feasible and safe tool for the multidisciplinary home care of chronic HIV patients. Telemedicine should be considered an appropriate support service for the management of chronic HIV infection.
Abstract:
The movement for open access to science seeks to achieve unrestricted and free access to academic publications on the Internet. To this end, two mechanisms have been established: the gold road, in which scientific journals are openly accessible, and the green road, in which publications are self-archived in repositories. The publication of the Finch Report in 2012, advocating exclusively the adoption of the gold road, generated a debate as to whether either of the two options should be prioritized. The recommendations of the Finch Report stirred controversy among academics specializing in open access, who felt that the role played by repositories had not been adequately considered and that the gold road places the burden of publishing costs largely on authors. The Finch Report's conclusions are compatible with the characteristics of science communication in the UK, and they could likely also be applied to the (few) countries with a powerful publishing industry and substantial research funding. In Spain, both the current national legislation and the existing rules at universities largely advocate the green road. This is directly related to the structure of scientific communication in Spain, where many journals have little commercial significance, the system of charging fees to authors has not been adopted, and there is a good repository infrastructure. Regarding open access policies, the performance of the scientific communication system in each country should be carefully analyzed to determine the most suitable open access strategy. [Int Microbiol 2013; 16(3):199-203]
Abstract:
Background: Information about the composition of regulatory regions is of great value for designing experiments to functionally characterize gene expression. The multiplicity of available applications for predicting transcription factor binding sites in a particular locus contrasts with the substantial computational expertise demanded to use them, which may constitute a barrier for the experimental community. Results: CBS (Conserved regulatory Binding Sites, http://compfly.bio.ub.es/CBS) is a public platform of evolutionarily conserved binding sites and enhancers predicted in multiple Drosophila genomes, furnished with published chromatin signatures associated with transcriptionally active regions and other experimental sources of information. Rapid access to this body of knowledge through a user-friendly web interface enables non-expert users to identify the binding sequences available for any particular gene, transcription factor, or genome region. Conclusions: The CBS platform is a powerful resource that provides tools for mining individual sequences and groups of co-expressed genes together with epigenomic information to conduct regulatory screenings in Drosophila.
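As a rough illustration of what binding-site prediction involves at its core, the following sketch scans a sequence with a position weight matrix. The matrix, threshold and sequence are made up for the example; the CBS platform itself additionally layers cross-species conservation and chromatin signatures on top of such predictions.

```python
# Generic position weight matrix (PWM) scan, the basic operation behind
# binding-site prediction; illustrative only, not the CBS pipeline.
import math

BACKGROUND = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

# Hypothetical probability matrix for a 4-bp motif (one dict per position).
PWM = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]

def scan(sequence, pwm, threshold=3.0):
    """Slide the PWM along the sequence and report log-odds hits."""
    hits = []
    w = len(pwm)
    for i in range(len(sequence) - w + 1):
        window = sequence[i:i + w]
        score = sum(math.log2(col[base] / BACKGROUND[base])
                    for col, base in zip(pwm, window))
        if score >= threshold:
            hits.append((i, window, round(score, 2)))
    return hits

print(scan("TTAGCTAAGCTTAGCT", PWM))   # reports the three AGCT occurrences
```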
Abstract:
Background: Cardiovascular disease (CVD), mainly heart attack and stroke, is the leading cause of premature mortality in low- and middle-income countries (LMICs). Identifying and managing individuals at high risk of CVD is an important strategy for preventing and controlling CVD, in addition to multisectoral population-based interventions to reduce CVD risk factors in the entire population. Methods: We describe key public health considerations in identifying and managing individuals at high risk of CVD in LMICs. Results: A main objective of any strategy to identify individuals at high CVD risk is to maximize the number of CVD events averted while minimizing the number of individuals needing treatment. Scores estimating the total risk of CVD (e.g. ten-year risk of fatal and non-fatal CVD) are available for LMICs and are based on the main CVD risk factors (history of CVD, age, sex, tobacco use, blood pressure, blood cholesterol and diabetes status). Opportunistic screening of CVD risk factors enables identification of persons with high CVD risk, but this strategy can be widely applied in low-resource settings only if cost-effective interventions are used (e.g. the WHO Package of Essential NCD Interventions for primary health care in low-resource settings) and if treatment (generally for years) can be sustained, including continued availability of affordable medications and funding mechanisms that allow people to purchase medications without being impoverished (e.g. universal access to health care). This also emphasises the need to re-orient health systems in LMICs towards chronic disease management.
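As an illustration of how a total-risk score combines these factors, the sketch below uses a logistic model with made-up coefficients. It is not the WHO/ISH risk charts or any validated tool, only a picture of the screening logic: estimate total risk from the main factors, then refer for management above a threshold.

```python
import math

def cvd_risk_10yr(age, sex, smoker, sbp, total_chol, diabetic):
    """Illustrative 10-year total CVD risk from the main risk factors named
    in the abstract (age, sex, tobacco use, blood pressure, cholesterol,
    diabetes). The weights are made up for this sketch; real tools such as
    the WHO/ISH charts use validated values."""
    weights = {                    # hypothetical log-odds weights
        "intercept": -10.0,
        "age": 0.07,               # per year
        "male": 0.45,
        "smoker": 0.55,
        "sbp": 0.018,              # per mmHg systolic blood pressure
        "chol": 0.20,              # per mmol/L total cholesterol
        "diabetes": 0.60,
    }
    x = (weights["intercept"]
         + weights["age"] * age
         + weights["male"] * (1 if sex == "M" else 0)
         + weights["smoker"] * int(smoker)
         + weights["sbp"] * sbp
         + weights["chol"] * total_chol
         + weights["diabetes"] * int(diabetic))
    return 1.0 / (1.0 + math.exp(-x))    # logistic transform -> risk in [0, 1]

# Screening decision: treat only above a risk threshold, so that scarce
# resources go to the individuals most likely to benefit.
risk = cvd_risk_10yr(age=58, sex="M", smoker=True, sbp=150,
                     total_chol=6.2, diabetic=False)
print(f"estimated 10-year CVD risk: {risk:.1%}",
      "-> refer for management" if risk >= 0.20 else "-> routine advice")
```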
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many of its important characteristics. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that their findings may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: the deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user has to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
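As a toy illustration of locating search interfaces, the following sketch flags HTML forms that look like database search interfaces using a simple heuristic. It is not the I-Crawler itself (which also handles JavaScript-rich and non-HTML forms); the use of beautifulsoup4 and the placeholder URL are assumptions of the example.

```python
# Minimal searchable-form detection in the spirit of, but not reproducing,
# the I-Crawler described above. Requires beautifulsoup4.
import urllib.request
from bs4 import BeautifulSoup

def find_searchable_forms(url):
    """Return the forms on a page that look like database search interfaces:
    at least one free-text field and no password field (a crude heuristic)."""
    html = urllib.request.urlopen(url, timeout=10).read()
    soup = BeautifulSoup(html, "html.parser")
    candidates = []
    for form in soup.find_all("form"):
        inputs = form.find_all("input")
        types = [i.get("type", "text").lower() for i in inputs]
        has_text = any(t in ("text", "search") for t in types)
        has_password = "password" in types      # login forms are not search interfaces
        if has_text and not has_password:
            candidates.append({
                "action": form.get("action", ""),
                "method": form.get("method", "get").lower(),
                "fields": [i.get("name") for i in inputs if i.get("name")],
            })
    return candidates

if __name__ == "__main__":
    for f in find_searchable_forms("https://example.org"):   # placeholder URL
        print(f["method"].upper(), f["action"], f["fields"])
```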
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for a generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
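The toy sketch below conveys the intuition of dependent positions by penalizing mutations that break a co-adapted nucleotide pairing. It is a Metropolis-style caricature with made-up parameters, not the phylogenetic Coev model implemented on the platform.

```python
# Toy simulation of a pair of dependent nucleotide positions: mutations that
# break a "co-adapted" pairing are accepted with lower probability. This only
# illustrates the idea of coevolving positions; it is not the Coev model.
import random

NUCS = "ACGT"
COADAPTED = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}  # favoured pairs

def simulate_pair(steps=10_000, penalty=0.05, seed=1):
    rng = random.Random(seed)
    pair = ["A", "T"]                       # start in a co-adapted state
    history = []
    for _ in range(steps):
        pos = rng.randrange(2)              # propose a mutation at one position
        new = rng.choice([n for n in NUCS if n != pair[pos]])
        proposal = list(pair)
        proposal[pos] = new
        # Accept freely if the proposal stays co-adapted, otherwise rarely.
        if tuple(proposal) in COADAPTED or rng.random() < penalty:
            pair = proposal
        history.append(tuple(pair))
    return history

history = simulate_pair()
frac = sum(p in COADAPTED for p in history) / len(history)
print(f"fraction of time spent in co-adapted pairs: {frac:.2f}")
```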
Abstract:
BACKGROUND: Transmission of mucosal pathogens relies on their ability to bind to the surfaces of epithelial cells, to cross this thin barrier, and to gain access to target cells and tissues, leading to systemic infection. This implies that pathogen-specific immunity at mucosal sites is critical for the control of infectious agents using these routes to enter the body. Although mucosal delivery would ensure the best onset of protective immunity, most of the candidate vaccines are administered through the parenteral route. OBJECTIVE: The present study evaluates the feasibility of delivering the chemically bound p24gag (referred to as p24 in the text) HIV antigen through secretory IgA (SIgA) in nasal mucosae in mice. RESULTS: We show that SIgA interacts specifically with mucosal microfold cells present in the nasal-associated lymphoid tissue. p24-SIgA complexes are quickly taken up in the nasal cavity and selectively engulfed by mucosal dendritic cell-specific intercellular adhesion molecule 3-grabbing nonintegrin-positive dendritic cells. Nasal immunization with p24-SIgA elicits both a strong humoral and cellular immune response against p24 at the systemic and mucosal levels. This ensures effective protection against intranasal challenge with recombinant vaccinia virus encoding p24. CONCLUSION: This study represents the first example that underscores the remarkable potential of SIgA to serve as a carrier for a protein antigen in a mucosal vaccine approach targeting the nasal environment.
Abstract:
MetaNetX is a repository of genome-scale metabolic networks (GSMNs) and biochemical pathways from a number of major resources imported into a common namespace of chemical compounds, reactions, cellular compartments (namely MNXref) and proteins. The MetaNetX.org website (http://www.metanetx.org/) provides access to these integrated data as well as a variety of tools that allow users to import their own GSMNs, map them to the MNXref reconciliation, and manipulate, compare, analyze, simulate (using flux balance analysis) and export the resulting GSMNs. MNXref and MetaNetX are regularly updated and freely available.
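For instance, a GSMN exported from MetaNetX can be simulated with flux balance analysis. The sketch below uses cobrapy, one common FBA library (not necessarily what MetaNetX uses internally), and a placeholder SBML file name.

```python
# Minimal flux balance analysis sketch with cobrapy; the file name below is a
# placeholder for a GSMN exported in SBML format.
import cobra

model = cobra.io.read_sbml_model("my_gsmn_from_metanetx.xml")   # hypothetical path

# Optimize the model's default objective (typically biomass production).
solution = model.optimize()
print("objective value:", solution.objective_value)

# Inspect the largest flux magnitudes in the optimal solution.
top = solution.fluxes.abs().sort_values(ascending=False).head(10)
print(top)
```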
The personal research portal: web 2.0 driven individual commitment with open access for development
Abstract:
The advent of the Internet had a great impact on distance education, and e-learning has rapidly become a killer application. Educational institutions worldwide are taking advantage of the available technology in order to facilitate education for a growing audience. Every day, more and more people use e-learning systems, environments and content for both training and learning. E-learning promotes education among people who, for various reasons, could not otherwise access it: people who cannot travel, people with very little free time, people with disabilities, and so on. As e-learning systems grow and more people access them, it is necessary to consider the diverse needs and characteristics of different users when designing virtual environments. This allows building systems that people can use easily, efficiently and effectively, where the learning process leads to a good user experience and becomes a good learning experience.
Abstract:
Nonnative brook trout Salvelinus fontinalis are abundant in Pine Creek and its main tributary, Bogard Spring Creek, California. These creeks historically provided the most spawning and rearing habitat for endemic Eagle Lake rainbow trout Oncorhynchus mykiss aquilarum. Three-pass electrofishing removal was conducted in 2007–2009 over the entire 2.8-km length of Bogard Spring Creek to determine whether brook trout removal was a feasible restoration tool and to document the life history characteristics of brook trout in a California meadow stream. After the first 2 years of removal, brook trout density and biomass were severely reduced from 15,803 to 1,192 fish/ha and from 277 to 31 kg/ha, respectively. Average removal efficiency was 92–97%, and most of the remaining fish were removed in the third year. The lack of a decrease in age-0 brook trout abundance between 2007 and 2008, after the removal of more than 4,000 adults in 2007, suggests compensatory reproduction by mature fish that survived and higher survival of age-0 fish. However, recruitment was greatly reduced after 2 years of removal and is likely to be even more depressed after the third year of removal, assuming that immigration of fish from outside the creek continues to be minimal. Brook trout condition, growth, and fecundity indicated a stunted population at the start of the study, but all three features increased significantly every year, demonstrating compensatory effects. Although highly labor intensive, the use of electrofishing to eradicate brook trout may be feasible in Bogard Spring Creek and similar small streams if removal and monitoring are continued annually and if other control measures (e.g., construction of barriers) are implemented. Our evidence shows that if brook trout control measures continue and if only Eagle Lake rainbow trout are allowed access to the creek, then a self-sustaining population of Eagle Lake rainbow trout can become reestablished.
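For readers unfamiliar with how multi-pass removal data yield population estimates, the sketch below implements a generic Zippin-style depletion estimator with illustrative catch numbers; it is not necessarily the exact estimator used in the study.

```python
# Generic removal (depletion) estimator for multi-pass electrofishing catches,
# assuming a closed population and a constant capture probability per pass.
from math import lgamma, log

def log_binom(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def removal_estimate(catches, n_max=None):
    """Maximum-likelihood estimate of population size N and per-pass capture
    probability p from successive removal catches (grid search over N)."""
    total = sum(catches)
    n_max = n_max or 10 * total
    best = (float("-inf"), total, 1.0)
    for N in range(total, n_max + 1):
        remaining = [N - sum(catches[:i]) for i in range(len(catches))]
        exposures = sum(remaining)            # fish available across all passes
        p = total / exposures                 # profile MLE of p for this N
        if not 0.0 < p < 1.0:
            continue
        ll = sum(log_binom(r, c) + c * log(p) + (r - c) * log(1.0 - p)
                 for r, c in zip(remaining, catches))
        if ll > best[0]:
            best = (ll, N, p)
    return best[1], best[2]

# Hypothetical three-pass catches showing the expected depletion pattern.
N_hat, p_hat = removal_estimate([300, 60, 14])
print(f"estimated population: {N_hat}, capture probability per pass: {p_hat:.2f}")
```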
Abstract:
The provision of Internet access to large numbers of users has traditionally been under the control of operators, who have built closed access networks for connecting customers. As the access network (i.e. the last mile to the customer) is generally the most expensive part of the network because of the vast amount of cable required, many operators have been reluctant to build access networks in rural areas. There are also problems in urban areas, as incumbent operators may use various tactics to make it difficult for competitors to enter the market. Open access networking, where the goal is to connect multiple operators and other types of service providers to a shared network, changes the way in which networks are used. This change in network structure dismantles vertical integration in service provision and enables true competition, as no service provider can prevent others from competing in the open access network. This thesis describes the development from traditional closed access networks towards open access networking and analyses different types of open access solutions. The thesis introduces a new open access network approach (the Lappeenranta Model) in greater detail and compares it to other types of open access networks. The thesis shows that end users and service providers see local open access and services as beneficial. In addition, the thesis discusses open access networking in a multidisciplinary fashion, focusing on the real-world challenges of open access networks.
Abstract:
Internet-based social networking services have achieved great popularity. They make it possible to create digital communities and a sense of community among users. Advanced network connections and content technologies have made it possible to implement versatile interaction tools. Online communality is a rising trend that supports everyday reality. The local development of networks, through broadband as well as various regional networks, has raised the idea of creating local communality alongside global social networking services. The expansion of fast regional networks has brought advanced connectivity both to suburbs and beyond urban centres. In the open regional network model, the network and the service layer are separated from each other. External service providers can offer their services directly to the users of the regional network, closer to the network level than in the traditional single-operator model. This enables the development of new, more innovative services. This master's thesis investigated ways to promote communality in local regional networks and to exploit their local performance and resources in implementing services. The work first examined how the concepts of community and sense of community have been formed. Based on this review, the technical methods available for creating communities and a sense of community were studied. The outcome was a technological roadmap to online communality and a classification scheme of social service platforms, which together are intended to illustrate the service opportunities that support communality. In the final phase of the work, a community-oriented regional network service and a community surveillance system (a value-added service concept) were implemented, both of which seek to exploit the performance and resources offered by the local regional network.