928 results for Job Search Methods
Abstract:
BACKGROUND: Health professionals and policymakers aspire to make healthcare decisions based on the entire relevant research evidence. This, however, can rarely be achieved because a considerable amount of research findings are not published, especially in the case of 'negative' results, a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches to modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking. METHODS/DESIGN: The objectives of this systematic review are as follows: • To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. • To appraise the strengths and weaknesses of these methods, the resources they require, and the conditions under which each method could be used, based on the findings of included studies. We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form has been developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary. DISCUSSION: Results are expected to be publicly available in mid-2013. This systematic review, together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings), will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
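As an illustration of the formal statistical tests referred to above, the following is a minimal sketch of Egger's regression test for funnel-plot asymmetry. It is not part of the protocol; the effect sizes, standard errors, and use of statsmodels are assumptions made for the example.

    # Illustrative sketch: Egger's regression test for funnel-plot asymmetry.
    # The per-study effects (log odds ratios) and standard errors are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    effects = np.array([0.41, 0.35, 0.60, 0.12, 0.55, 0.28, 0.47, 0.70])
    ses     = np.array([0.10, 0.15, 0.25, 0.08, 0.30, 0.12, 0.20, 0.35])

    # Regress the standardized effect on precision; a non-zero intercept
    # suggests small-study effects consistent with publication bias.
    precision = 1.0 / ses
    standardized = effects / ses
    X = sm.add_constant(precision)          # column 0 is the intercept term
    fit = sm.OLS(standardized, X).fit()
    print(f"Egger intercept: {fit.params[0]:.3f}, p-value: {fit.pvalues[0]:.3f}")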
Abstract:
In the quality assurance of ship hull production, NDT inspections supervised by the authorities are used to verify conformity with requirements. The most commonly used NDT inspection methods in shipbuilding are visual, radiographic, ultrasonic, magnetic particle and dye penetrant testing. The aim of this Master's thesis was to develop the control and supervision of non-destructive testing of welded ship hull structures at the Aker Finnyards shipyard in Turku. The work began by surveying the current state of NDT inspection control and supervision and identifying areas for improvement. Work measurement and observational study were used as research methods. The most significant areas for development were found to be a general lack of awareness of NDT inspections in the ship hull production process, the absence of a clear control system for NDT inspections, and insufficient supervision. Based on the identified development needs, an operating model was created for improving the control and supervision of NDT inspections. The operating model describes the actors and information flows of the NDT inspection process and can be updated as needed.
Abstract:
Purpose The purpose of our multidisciplinary study was to define a pragmatic and secure alternative to the creation of a national centralised medical record which could gather together the different parts of the medical record of a patient scattered across the different hospitals where the patient was hospitalised, without any risk of breaching confidentiality. Methods We first analyse the reasons for the failure and the dangers of centralisation (i.e. the difficulty of defining a European patient identifier, of reaching a common standard for the contents of the medical record, and of ensuring data protection) and then propose an alternative that uses the existing available data, on the basis that setting up a safe though imperfect system could be better than continuing the quest for a mythical perfect information system that we have still not found after a search that has lasted two decades. Results We describe the functioning of Medical Record Search Engines (MRSEs), which use pseudonymisation of the patient's identity. The MRSE will be able to retrieve and provide, upon an MD's request, all the available information concerning a patient who has been hospitalised in different hospitals, without ever having access to the patient's identity. The drawback of this system is that the medical practitioner then has to read all of the information, produce his or her own synthesis and possibly discard superfluous data. Conclusions Faced with the difficulties and the risks of setting up a centralised medical record system, a system that gathers all of the available information concerning a patient could be of great interest. This low-cost pragmatic alternative, which could be developed quickly, should be taken into consideration by health authorities.
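A minimal sketch of the pseudonymisation idea described above, assuming a keyed hash over a few identity traits with the secret key held outside the search engine; the field choices and key management are illustrative assumptions, not the authors' design.

    # Illustrative sketch only: derive a stable pseudonym from identity traits with a
    # keyed hash, so records from different hospitals can be linked by the MRSE
    # without the identity itself ever being stored or transmitted.
    import hmac, hashlib

    SECRET_KEY = b"held-by-a-trusted-third-party"   # assumption: key kept outside the MRSE

    def pseudonym(last_name: str, first_name: str, birth_date: str) -> str:
        """Return a hex pseudonym; the same inputs always map to the same value."""
        identity = f"{last_name.upper()}|{first_name.upper()}|{birth_date}".encode("utf-8")
        return hmac.new(SECRET_KEY, identity, hashlib.sha256).hexdigest()

    # Two hospitals computing the pseudonym independently obtain the same token,
    # so their records can be matched on it without revealing who the patient is.
    print(pseudonym("Dupont", "Marie", "1970-05-12"))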
Abstract:
Background: Although randomized clinical trials (RCTs) are considered the gold standard of evidence, their reporting is often suboptimal. Trial registries have the potential to contribute important methodologic information for critical appraisal of study results. Methods and Findings: The objective of the study was to evaluate the reporting of key methodologic study characteristics in trial registries. We identified a random sample (n = 265) of actively recruiting RCTs using the World Health Organization International Clinical Trials Registry Platform (ICTRP) search portal in 2008. We assessed the reporting of relevant domains from the Cochrane Collaboration’s ‘Risk of bias’ tool and other key methodological aspects. Our primary outcomes were the proportions of registry records with adequate reporting of random sequence generation, allocation concealment, blinding, and trial outcomes. Two reviewers independently assessed each record. Weighted overall proportions in the ICTRP search portal for adequate reporting of sequence generation, allocation concealment, blinding (including and excluding open-label RCTs) and primary outcomes were 5.7% (95% CI 3.0–8.4%), 1.4% (0–2.8%), 41% (35–47%), 8.4% (4.1–13%), and 66% (60–72%), respectively. The proportion of adequately reported RCTs was higher for registries that used specific methodological fields for describing methods of randomization and allocation concealment compared to registries that did not. Concerning other key methodological aspects, weighted overall proportions of RCTs with adequately reported items were as follows: eligibility criteria (81%), secondary outcomes (46%), harm (5%), follow-up duration (62%), description of the interventions (53%) and sample size calculation (1%). Conclusions: Trial registries currently contain limited methodologic information about registered RCTs. In order to permit adequate critical appraisal of trial results reported in journals and registries, trial registries should consider requesting details on key RCT methods to complement journal publications. Full protocols remain the most comprehensive source of methodologic information and should be made publicly available.
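For context on the figures quoted above, here is a minimal sketch of how a single proportion with a 95% confidence interval might be computed; it uses a simple unweighted Wald interval, whereas the study itself reports weighted overall proportions, and the counts below are hypothetical.

    # Illustrative sketch: Wald 95% confidence interval for a reported proportion.
    import math

    def proportion_ci(successes: int, n: int, z: float = 1.96):
        p = successes / n
        half_width = z * math.sqrt(p * (1 - p) / n)      # normal approximation
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    p, lo, hi = proportion_ci(15, 265)   # e.g., 15 of 265 records adequately reported
    print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")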
Abstract:
This master's thesis aims to study and present, based on the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods which imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, where the individuals are candidate solutions. The next population of candidate solutions is formed by exploiting the good properties of the current population through different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, then classified and presented. The necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling related to parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
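A minimal sketch of the evolutionary loop described above (fitness evaluation, selection, crossover, mutation); the toy objective, the averaging crossover and the parameter values are illustrative assumptions, not taken from the thesis.

    # Illustrative sketch of a basic evolutionary algorithm on a toy 1-D objective.
    import random

    def fitness(x):                       # toy objective: maximize -(x - 3)^2
        return -(x - 3.0) ** 2

    def evolve(pop_size=20, generations=50, mutation_rate=0.2):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]          # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = (a + b) / 2.0                      # crossover: average of two parents
                if random.random() < mutation_rate:
                    child += random.gauss(0, 0.5)          # mutation: small random shift
                children.append(child)
            population = parents + children
        return max(population, key=fitness)

    print(evolve())   # should approach 3.0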
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines only are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from a database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web – indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and often infeasible in the case of complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
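As a small illustration of the form-analysis step discussed above, the following sketch extracts input fields from a search form using the Python standard-library HTML parser. It is not the I-Crawler (which also handles JavaScript-rich and non-HTML forms), and the sample HTML is invented.

    # Illustrative sketch: collect the fields of HTML search forms, a first step
    # before a crawler could fill out and submit a web search interface.
    from html.parser import HTMLParser

    class FormFieldExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_form = False
            self.fields = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self.in_form = True
            elif self.in_form and tag in ("input", "select", "textarea"):
                self.fields.append({"tag": tag,
                                    "name": attrs.get("name"),
                                    "type": attrs.get("type", "text")})

        def handle_endtag(self, tag):
            if tag == "form":
                self.in_form = False

    html = ('<form action="/search"><input name="q" type="text">'
            '<select name="lang"></select><input type="submit"></form>')
    parser = FormFieldExtractor()
    parser.feed(html)
    print(parser.fields)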
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
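A minimal sketch of the pairwise, regularized least-squares view of preference learning described above, using a linear scoring function and synthetic data; this illustrates the general idea rather than the kernel-based algorithms proposed in the thesis, whose names, kernels and computational shortcuts are not reproduced here.

    # Illustrative sketch: learn a linear scoring function from pairwise preferences
    # by regularized least squares on feature-difference vectors (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))                 # 100 items, 5 features
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    scores = X @ true_w

    # One difference vector x_i - x_j per pair where item i is preferred over item j.
    pairs = [(i, j) for i in range(100) for j in range(100) if scores[i] > scores[j]]
    D = np.array([X[i] - X[j] for i, j in pairs])
    y = np.ones(len(pairs))                       # target margin of 1 for each preference

    lam = 1.0                                     # regularization parameter
    w = np.linalg.solve(D.T @ D + lam * np.eye(5), D.T @ y)

    # Pairwise accuracy: how often the learned scores agree with the true ordering.
    learned = X @ w
    agree = np.mean([learned[i] > learned[j] for i, j in pairs])
    print(f"pairwise accuracy: {agree:.3f}")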
Abstract:
Metaheuristic methods have become increasingly popular approaches for solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization. The topic has received increasing interest, especially in the evolutionary computation community, and several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima sets additional requirements on the design of algorithms that are to be effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementations of the global and local search phases affect their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is not generally thoroughly understood, the research relies heavily on experimental studies to find out the properties of different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As a part of this thesis, such a framework for generating tunable test functions for evaluating different multimodal optimization methods experimentally is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may negate the advantages gained from the global phase.
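A minimal sketch of the basic Differential Evolution step (DE/rand/1/bin) on which the studied variants build; the sphere objective and parameter values are illustrative assumptions, and no niching or local-search hybridization is shown.

    # Illustrative sketch of basic Differential Evolution (DE/rand/1/bin) for minimization.
    import random

    def de_minimize(f, dim=2, pop_size=30, F=0.8, CR=0.9, generations=200, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
                jrand = random.randrange(dim)          # ensure at least one mutant coordinate
                trial = [a[d] + F * (b[d] - c[d]) if (random.random() < CR or d == jrand)
                         else pop[i][d] for d in range(dim)]
                if f(trial) < f(pop[i]):               # greedy one-to-one selection
                    pop[i] = trial
        return min(pop, key=f)

    sphere = lambda x: sum(v * v for v in x)           # toy unimodal objective
    print(de_minimize(sphere))                         # should approach the origin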
Abstract:
Systems biology is a new, emerging and rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as: network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques as well as model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
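As a small illustration of the kind of mathematical modelling referred to above, the following sketch integrates a hypothetical mass-action reaction network as an ODE system with SciPy; it is not one of the thesis case-study models (heat shock response or intermediate filament assembly), and the reactions and rate constants are invented.

    # Illustrative sketch: reversible binding A + B <-> C under mass-action kinetics,
    # simulated as a small ODE model of the kind used in systems biology.
    from scipy.integrate import solve_ivp

    k_on, k_off = 0.5, 0.1        # hypothetical rate constants

    def rhs(t, y):
        a, b, c = y
        forward = k_on * a * b    # A + B -> C
        backward = k_off * c      # C -> A + B
        return [-forward + backward, -forward + backward, forward - backward]

    sol = solve_ivp(rhs, t_span=(0, 50), y0=[1.0, 0.8, 0.0])
    print("final concentrations (A, B, C):", sol.y[:, -1])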
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
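A naive sketch of the leave-pair-out cross-validation idea mentioned above for estimating AUC: each positive/negative pair is held out in turn and the classifier is retrained on the rest. The synthetic data, the logistic regression model and the brute-force retraining are illustrative assumptions; the thesis develops matrix-algebra shortcuts that avoid retraining from scratch.

    # Illustrative sketch: leave-pair-out cross-validation estimate of the AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))
    y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)

    pos = np.where(y == 1)[0]
    neg = np.where(y == 0)[0]
    wins, ties, total = 0, 0, 0
    for i in pos:
        for j in neg:
            mask = np.ones(len(y), dtype=bool)
            mask[[i, j]] = False                       # hold out one positive and one negative
            clf = LogisticRegression().fit(X[mask], y[mask])
            s_i, s_j = clf.decision_function(X[[i, j]])
            wins += s_i > s_j                          # does the positive score higher?
            ties += s_i == s_j
            total += 1

    print("leave-pair-out AUC estimate:", (wins + 0.5 * ties) / total)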
Abstract:
This study is dedicated to search engine marketing (SEM). It aims to develop a business model for SEM firms and to provide explicit research into the trustworthy practices of virtual marketing companies. Optimization is a general term that represents a variety of techniques and methods of web page promotion. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights issues around the utilization of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines in one place both technical and economic scientific findings in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data regarding search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design. Additionally, a fifth respondent was a representative of a search engine portal, who provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by utilizing web page optimization techniques, keyword consultancy or content optimization to increase web site visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms belong to the category of small knowledge-intensive businesses. On the basis of the data analysis, the business model was constructed. The SEM model includes generalized constructs, although they represent a wider range of operational aspects. The building blocks of the model include the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure and financial segments. Approaches were also provided for a company's differentiation and the evaluation of its competitive advantages. It is assumed that search marketers should make further attempts to differentiate their own business from the large number of similar service-providing companies. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.
Abstract:
Rapid changes in working life and the competence requirements of different professions have increased interest in workplace learning. It is considered an effective way to learn and update professional skills by performing daily tasks in an authentic environment. In particular, ensuring a supply of skilled future workers is a crucial issue for firms facing tight competition and a shortage of competent employees due to the retirement of current professionals. In order to develop and make the most of workplace learning, it is important to focus on workplace learning environments and the individual characteristics of those participating in workplace learning. The literature has suggested various factors that influence adults' and professionals' workplace learning of profession-related skills, but lacks empirical studies on contextual and individual-related factors that positively affect students' workplace learning. Workers with vocational education form a large group in modern firms. Therefore, the elements of vocational students' successful workplace learning during their studies, before starting their career paths, need to be examined. To fill this gap in the literature, this dissertation examines contributors to vocational students' workplace learning in Finland, where students' workplace learning is included in the vocational education and training system. The study is divided into two parts: the introduction, comprising an overview of the relevant literature and the conclusion of the entire study, and five separate articles. Three of the articles utilize quantitative methods and two use qualitative methods to examine factors that contribute to vocational students' workplace learning. The results show that, from the students' perspective, attitudinal, motivational, and organization-related factors enhance the student's development of professionalism during the on-the-job learning period. Specifically, organization-related factors such as an innovative climate, guidance, and interactions with seniors have a strong positive impact on the students' perceived development of professional skills because, for example, the seniors' guidance and provision of new viewpoints on the tasks help the vocational students to gain autonomy in work performance. A multilevel analysis shows that, of those factors enhancing workplace learning from the student perspective, innovative climate, knowledge transfer accuracy, and the students' performance orientation were significantly related to the workplace instructors' assessment of the students' professional performance. Furthermore, support from senior colleagues and the students' self-efficacy were both significantly associated with the formal grades measuring how well the students managed to learn the necessary professional skills. In addition, the results suggest that the students' on-the-job learning can be divided into three main phases, of which two require effort from both the student and the on-the-job learning organization. The first phase includes the student's application of basic professional skills, demonstration of potential in performing daily tasks, and orientation provided by the organization at the beginning of the on-the-job learning period. In the second phase, the student actively develops profession-related skills by performing daily tasks, thus learning a fluent working style while observing the seniors' performance. The organization offers relevant tasks and follows the student's development.
The third phase indicates a student who has reached the professional level described as a full occupation. The results suggest that constructing a successful on-the-job learning period for vocational students requires feedback from seniors, opportunities to learn to manage entire work processes, self-efficacy on the part of the students, proactive behavior, and initiative in learning. The study contributes to research on workplace learning in three ways. First, it identifies the key individual- and organization-based factors that influence the vocational students' successful on-the-job learning from their perspective and examines the mutual relationships between these factors. Second, the study provides knowledge of how the factors related to the students' view of successful workplace learning are associated with the workplace instructors' perspective and the formal grades. Third, the present study identifies the elements needed to construct a successful on-the-job learning period for the students.
Abstract:
Individual circadian clocks entrain differently to environmental cycles (zeitgebers, e.g., light and darkness), earlier or later within the day, leading to different chronotypes. In human populations, the distribution of chronotypes forms a bell-shaped curve, with the extreme early and late types (larks and owls, respectively) at its ends. Human chronotype, which can be assessed by the timing of an individual's sleep-wake cycle, is partly influenced by genetic factors, as is known from animal experimentation. Here, we review population genetic studies which have used a questionnaire probing individual daily timing preference for associations with polymorphisms in clock genes. We discuss their inherent limitations and suggest an alternative approach combining a short questionnaire (Munich ChronoType Questionnaire, MCTQ), which assesses chronotype in a quantitative manner, with a genome-wide analysis (GWA). The advantages of these methods in comparison to assessing time-of-day preferences and single nucleotide polymorphism genotyping are discussed. In the future, global studies of chronotype using the MCTQ and GWA may also contribute to understanding the influence of seasons, latitude (e.g., different photoperiods), and climate on allele frequencies and chronotype distribution in different populations.