Abstract:
This dissertation analyzed the existing literature on travestility and transsexuality whose research questions focused on health and/or health services. For this purpose, a descriptive systematized literature review was performed in virtual databases: the thesis bank of the Coordination for the Improvement of Higher Education Personnel (CAPES), the Brazilian Digital Library of Theses and Dissertations (BDTD), SciELO and PubMed, covering the years 1997 to 2014 in Brazil. We used the search terms "transsexual," "transvestite" and "transgender," each combined with the term "health," in Portuguese and English. Complementing this search, we used documentary analysis to assess pamphlets, institutional documents and material from non-governmental organizations (NGOs), which were incorporated into the discussion. A total of 295 papers were identified, among theses, dissertations and scientific articles; of these, 223 were excluded and 72 were selected for analysis. This yielded 5 theses and 21 dissertations on the topic of travestility and 7 theses and 9 dissertations dealing with transsexuality. Among the selected papers, 16 deal with transsexuality and health, 5 address travestility, health and work, and 9 refer to the terms "transgender" and "health." Even though this is an emerging field of research, there is an apparent shift in the discourse, previously anchored in questions related to confronting, infection with or illness from HIV/AIDS, towards discussions of health care for transsexual people in the transsexualizing process (specialized care level). Still, few papers address specific trans care in primary care associated with comprehensive health care and the empowerment of individuals, respecting their potency of life, issues that are important for public health policy today.
Abstract:
Multimedia objects, especially images and figures, are essential for the visualization and interpretation of research findings. The distribution and reuse of these scientific objects is significantly improved under open access conditions, for instance in Wikipedia articles, in the research literature, and in education and knowledge dissemination, where the licensing of images often represents a serious barrier. Whereas scientific publications are retrievable through library portals or other online search services thanks to standardized indexing, there is as yet no targeted retrieval of, or access to, the accompanying images and figures. Consequently, there is great demand for standardized indexing methods for these multimedia open access objects in order to improve their accessibility. With our proposal, we hope to serve the broad audience that looks up a scientific or technical term in a web search portal first. Until now, this audience has had little chance of finding an openly accessible and reusable image closely matching their search term on the first try - frustratingly so, even if such an image is in fact included in some open access article.
Abstract:
Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to locate the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processing rather better than either the encoding-based or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed.
Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that ease of lexical access is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order-encoding account of long-term influence. Instead, it sits best with models based on language processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.
Abstract:
To compare neonatal deaths and complications in infants born at 34-36 weeks and six days (late preterm: LPT) with those born at term (37-41 weeks and six days); to compare deaths of early term (37-38 weeks) versus late term (39-41 weeks and six days) infants; and to search for any temporal trend in the LPT rate. A retrospective cohort study of live births was conducted at Campinas State University, Brazil, from January 2004 to December 2010. Multiple pregnancies, malformations and congenital diseases were excluded, and confounders were controlled for. The level of significance was set at p<0.05. After exclusions, there were 17,988 births (1,653 late preterm and 16,345 term infants). Higher mortality in LPT versus term infants was observed, with an adjusted odds ratio (OR) of 5.29 (p<0.0001). Most complications were significantly associated with LPT birth. There was a significant increase in the LPT rate throughout the study period, but no significant trend in the rate of medically indicated deliveries. Higher mortality was also observed in early term versus late term infants (adjusted OR: 2.43, p=0.038). LPT and early term infants have a significantly higher risk of death.
Abstract:
Short-term electricity load forecasting is very important for the operation of power systems. In this work, a classical exponential smoothing model, Holt-Winters with double seasonality, was applied to the Portuguese demand time series and tested for forecast accuracy. Several metaheuristic algorithms were used for the optimal selection of the smoothing parameters of the Holt-Winters forecast function; testing on the time series showed little difference among methods, so simple local search algorithms are recommended, as they are easier to implement.
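The combination described above, a double seasonal exponential smoothing model whose smoothing parameters are tuned by a simple local search, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the additive double-seasonal formulation, the initialisation, and the hill-climbing routine are all assumptions.

```python
import numpy as np

def double_seasonal_hw(y, m1, m2, alpha, delta, omega):
    """Additive double-seasonal exponential smoothing (no trend), a sketch.
    m1/m2: lengths of the short and long seasonal cycles (e.g. 24 and 168 for
    hourly load with daily and weekly seasonality)."""
    level = y[:m2].mean()
    s1 = y[:m1] - level          # initial short-cycle seasonal indices
    s2 = y[:m2] - level          # initial long-cycle seasonal indices
    fitted = np.empty(len(y))
    for t in range(len(y)):
        i1, i2 = t % m1, t % m2
        fitted[t] = level + s1[i1] + s2[i2]   # one-step-ahead forecast
        err = y[t] - fitted[t]
        level += alpha * err                  # update level
        s1[i1] += delta * err                 # update short seasonal index
        s2[i2] += omega * err                 # update long seasonal index
    return fitted

def local_search(y, m1, m2, start=(0.1, 0.1, 0.1), step=0.05, iters=200):
    """Hill-climbing over the smoothing parameters, minimising in-sample SSE."""
    best = np.array(start, dtype=float)
    sse = lambda p: ((y - double_seasonal_hw(y, m1, m2, *p)) ** 2).sum()
    best_sse = sse(best)
    for _ in range(iters):
        improved = False
        for i in range(3):                    # perturb one parameter at a time
            for d in (step, -step):
                cand = best.copy()
                cand[i] = np.clip(cand[i] + d, 0.01, 0.99)
                c = sse(cand)
                if c < best_sse:
                    best, best_sse, improved = cand, c, True
        if not improved:
            step /= 2                         # refine the neighbourhood
            if step < 1e-3:
                break
    return best, best_sse
```

The same objective could instead be handed to any of the metaheuristics the abstract mentions; the point of the comparison is that this simple local search performs comparably while being far easier to implement.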
Abstract:
This paper studies the effects of monetary policy on mutual fund risk taking using a sample of Portuguese fixed-income mutual funds in the 2000-2012 period. Firstly, I estimate time-varying measures of risk exposure (betas) for the individual funds, for the benchmark portfolio, and for a representative equally-weighted portfolio, through 24-month rolling regressions of a two-factor model with two systematic risk factors: interest rate risk (TERM) and default risk (DEF). In the second phase, using the estimated betas, I examine what portion of the risk exposure is in excess of the benchmark (active risk) and how it relates to monetary policy proxies (one-month rate, Taylor residual, real rate, and the first principal component of a cross-section of government yields and rates). Using this methodology, I provide empirical evidence that Portuguese fixed-income mutual funds respond to accommodative monetary policy by significantly increasing their exposure, in excess of their benchmarks, to default risk, and slightly to interest rate risk as well. I also find that the increase in funds' risk exposure in pursuit of higher returns (search for yield) is more pronounced following the 2007-2009 global financial crisis, indicating that the current historically low interest rates may incentivize excessive risk taking. My results suggest that monetary policy affects the risk appetite of non-bank financial intermediaries.
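The first phase described above, estimating time-varying betas through 24-month rolling regressions on the TERM and DEF factors, can be sketched as follows. The function names and the active-risk definition are illustrative assumptions, not the author's code.

```python
import numpy as np

def rolling_betas(returns, term, defl, window=24):
    """Rolling OLS of the two-factor model r_t = a + b1*TERM_t + b2*DEF_t + e_t.
    Returns an (n, 2) array of factor loadings; rows before the first full
    window are NaN."""
    n = len(returns)
    betas = np.full((n, 2), np.nan)
    for end in range(window, n + 1):
        sl = slice(end - window, end)
        X = np.column_stack([np.ones(window), term[sl], defl[sl]])
        coef, *_ = np.linalg.lstsq(X, returns[sl], rcond=None)
        betas[end - 1] = coef[1:]      # drop the intercept, keep loadings
    return betas

def active_risk(fund_betas, bench_betas):
    """Exposure in excess of the benchmark (the paper's 'active risk')."""
    return fund_betas - bench_betas
```

The resulting active-risk series would then be regressed on the monetary policy proxies (one-month rate, Taylor residual, etc.) in the paper's second phase.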
Abstract:
Background: Long-term outcomes of drug-eluting stents (DES) versus bare-metal stents (BMS) in patients with ST-segment elevation myocardial infarction (STEMI) remain uncertain. Objective: To investigate the long-term outcomes of DES versus BMS in patients with STEMI. Methods: We searched MEDLINE, EMBASE, the Cochrane Library, and ISI Web of Science (until February 2013) for randomized trials comparing more than 12 months of efficacy or safety of DES with BMS in patients with STEMI. Pooled estimates are presented as risk ratios (RR) with 95% confidence intervals (CI) using a random-effects model. Results: Ten trials with 7,592 participants with STEMI were included. Overall, there was no significant difference in the incidence of all-cause death or definite/probable stent thrombosis between DES and BMS at long-term follow-up. Patients receiving DES implantation appeared to have a lower 1-year incidence of recurrent myocardial infarction than those receiving BMS (RR = 0.75, 95% CI 0.56 to 1.00, p = 0.05). Moreover, the risk of target vessel revascularization (TVR) after receiving DES was consistently lower during long-term observation (all p < 0.01). In subgroup analysis, the use of everolimus-eluting stents (EES) was associated with a reduced risk of stent thrombosis in STEMI patients (RR = 0.37, p = 0.02). Conclusions: DES did not increase the risk of stent thrombosis in patients with STEMI compared with BMS. Moreover, the use of DES lowered the long-term risk of repeat revascularization and might decrease the occurrence of reinfarction.
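Pooling risk ratios under a random-effects model, as in the methods above, is commonly done with the DerSimonian-Laird estimator. The sketch below shows that standard calculation on arm-level counts; it is a generic illustration, not the authors' exact analysis pipeline.

```python
import math

def pool_rr_random_effects(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of log risk ratios.
    events_*/n_*: per-study event counts and sample sizes for the
    treatment and control arms. Returns (pooled RR, 95% CI low, high)."""
    logs, variances = [], []
    for a, n1, c, n2 in zip(events_t, n_t, events_c, n_c):
        rr = (a / n1) / (c / n2)
        logs.append(math.log(rr))
        # large-sample variance of log RR
        variances.append(1/a - 1/n1 + 1/c - 1/n2)
    w = [1/v for v in variances]                      # fixed-effect weights
    fixed = sum(wi*li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi*(li - fixed)**2 for wi, li in zip(w, logs))   # Cochran's Q
    k = len(logs)
    c_ = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c_)               # between-study variance
    w_re = [1/(v + tau2) for v in variances]          # random-effects weights
    mu = sum(wi*li for wi, li in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(mu), math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)
```

An RR whose 95% CI excludes 1 corresponds to a statistically significant effect, as with the TVR results reported above.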
Abstract:
A growing literature integrates theories of debt management into models of optimal fiscal policy. One promising theory argues that the composition of government debt should be chosen so that fluctuations in the market value of debt offset changes in expected future deficits. This complete markets approach to debt management is valid even when the government only issues non-contingent bonds. A number of authors conclude from this approach that governments should issue long-term debt and invest in short-term assets. We argue that the conclusions of this approach are too fragile to serve as a basis for policy recommendations, because bonds at different maturities have highly correlated returns, making the determination of the optimal portfolio ill-conditioned. To make this point concrete, we examine the implications of this approach to debt management in various models, both analytically and using numerical methods calibrated to the US economy. We find that the complete markets approach recommends asset positions which are huge multiples of GDP. Introducing persistent shocks or capital accumulation only worsens this problem, while increasing the volatility of interest rates through habits partly reduces the size of the positions. Across these simulations we find no presumption that governments should issue long-term debt: policy recommendations can be easily reversed through small perturbations in the specification of shocks or small variations in the maturity of the bonds issued. We further extend the literature by removing the assumption that each period the government costlessly repurchases all outstanding debt. This exacerbates the size of the required positions, worsens their volatility and in some cases produces instability in debt holdings. We conclude that it is very difficult to insulate fiscal policy from shocks by using the complete markets approach to debt management.
Given the limited variability of the yield curve, using maturities is a poor way to substitute for state-contingent debt. As a result, the positions recommended by this approach conflict with a number of features that we believe are important in making bond markets incomplete, e.g. transaction costs and liquidity effects. Until these features are fully incorporated, we remain in search of a theory of debt management capable of providing robust policy insights.
Abstract:
Lithium is an efficacious agent for the treatment of bipolar disorder, but it is unclear to what extent its long-term use may result in neuroprotective or toxic consequences. Medline was searched with the combination of the word 'lithium' plus key words referring to every possible effect on the central nervous system. The papers were classified into those supporting a neuroprotective effect, those in favour of a neurotoxic effect and those that were neutral, and further into human research, animal and in-vitro research, case reports, and review/opinion articles. Finally, the Natural Standard evidence-based validated grading rationale was used to grade the data. The Medline search returned 970 papers up to February 2006; inspection of the abstracts supplied 214 papers for further review. Eighty-nine papers supported the neuroprotective effect (6 human research, 58 animal/in-vitro, 0 case reports, 25 review/opinion articles). A total of 116 papers supported the neurotoxic effect (17 human research, 23 animal/in-vitro, 60 case reports, 16 review/opinion articles). Nine papers supported neither hypothesis (5 human research, 3 animal/in-vitro, 0 case reports, 1 review/opinion article). Overall, the grading suggests that the evidence concerning the effects of lithium therapy is at level C, that is, 'unclear or conflicting scientific evidence', since there is conflicting evidence from uncontrolled non-randomized studies accompanied by conflicting evidence from animal and basic science studies. Although more papers are in favour of the toxic effect, the great difference in the types of papers supporting either hypothesis, along with publication bias and methodological issues, makes conclusions difficult.
Lithium remains the 'gold standard' for the prophylaxis of bipolar illness; however, our review suggests that there is a rare possibility of a neurotoxic effect in real-life clinical practice, even in closely monitored patients with 'therapeutic' lithium plasma levels. It is desirable to keep lithium blood levels as low as feasible with prophylaxis.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
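The extraction of field names and labels from an HTML search form, one building block of the search-interface data model discussed above, can be illustrated with a minimal sketch using Python's standard html.parser. The class name and the sample form are hypothetical; the thesis's actual data model (client-side scripts, result-page representation, query language) is considerably richer.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects a form's input fields and their <label for=...> texts."""
    def __init__(self):
        super().__init__()
        self.fields = {}       # input name -> input type (or tag name)
        self.labels = {}       # labelled element id -> label text
        self._label_for = None # id targeted by the currently open <label>
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("input", "select", "textarea") and a.get("name"):
            self.fields[a["name"]] = a.get("type", tag)
        elif tag == "label":
            self._label_for = a.get("for")
    def handle_data(self, data):
        if self._label_for and data.strip():
            self.labels[self._label_for] = data.strip()
    def handle_endtag(self, tag):
        if tag == "label":
            self._label_for = None

# Hypothetical search interface for demonstration purposes.
html_page = """
<form action="/search" method="get">
  <label for="q">Title keywords</label>
  <input id="q" name="query" type="text">
  <label for="yr">Year</label>
  <select id="yr" name="year"><option>2010</option></select>
  <input type="submit" value="Go">
</form>
"""
p = FormFieldExtractor()
p.feed(html_page)
```

Pairing each named field with its label text is what lets a crawler or query system decide, automatically, which input should receive which query term.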
Abstract:
Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises are based on the supposition that educators and learners will discover existing resources, improve them, and share the results, creating a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.