8 results for J64 - Unemployment: Models, Duration, Incidence, and Job Search

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Summary: Applying cultivation zones and crop growth models in climate change research: the Mackenzie River basin, Canada

Relevance:

100.00%

Publisher:

Abstract:

Summary: Anther culture ability and associated gene markers in progeny of crosses between cultivated oat and wild oat

Relevance:

100.00%

Publisher:

Abstract:

WDM (Wavelength-Division Multiplexing) optical networks are currently the most popular way to transfer large amounts of data. Each connection is assigned a route and a wavelength on every link along that route. Finding a suitable route and wavelength is known as the RWA (Routing and Wavelength Assignment) problem. This work describes cost-model-based approaches to solving the RWA problem. There are many possible optimization objectives, and the cost models discussed here are built around them. Cost models yield efficient solutions and algorithms. The multicommodity flow model is treated in this work as the basis of the RWA cost model. Heuristic methods for solving the RWA problem are also covered. The final part of the work discusses implementations of a few of the models and various ways of improving the cost models.
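
As an illustration of the heuristic approaches mentioned above, below is a minimal sketch (not taken from the thesis) of a first-fit RWA heuristic: each demand is routed along a shortest path and assigned the lowest-indexed wavelength that is free on every link of that path, respecting the wavelength-continuity constraint. The graph, demand list, function names and wavelength count are illustrative assumptions.

from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path (in hops) on an undirected graph given as an adjacency dict."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def first_fit_rwa(adj, demands, num_wavelengths):
    """Assign each demand (src, dst) a shortest path and the lowest-index wavelength
    that is free on every link of that path (wavelength-continuity constraint)."""
    used = {}          # undirected link -> set of occupied wavelength indices
    assignment = {}
    for demand in demands:
        path = shortest_path(adj, *demand)
        links = [tuple(sorted(edge)) for edge in zip(path, path[1:])]
        for w in range(num_wavelengths):
            if all(w not in used.get(link, set()) for link in links):
                for link in links:
                    used.setdefault(link, set()).add(w)
                assignment[demand] = (path, w)
                break
        else:
            assignment[demand] = None   # blocked: no continuous wavelength available
    return assignment

if __name__ == "__main__":
    # Small illustrative 5-node ring network.
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    demands = [(0, 2), (1, 3), (0, 3)]
    print(first_fit_rwa(adj, demands, num_wavelengths=2))

A greedy first-fit rule like this is only a baseline; the cost models discussed in the work would instead score candidate route/wavelength pairs and choose the cheapest feasible one.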

Relevance:

100.00%

Publisher:

Abstract:

In today's knowledge-intensive economy, human capital is a source of competitive advantage for organizations. Continuous learning and sharing knowledge within the organization are important for enhancing and utilizing this human capital in order to maximize productivity. A new generation with different views and expectations of work is entering working life, bringing its own characteristics to learning and sharing. Work should offer satisfaction so that new-generation employees commit to organizations. At the same time, organizations have to be able to focus on productivity in order to survive in a competitive market. The objective of this thesis is to construct a theory-based framework of productivity, continuous learning and job satisfaction, and further to examine this framework and its applications in a global organization operating in the process industry. Suggestions for future actions are presented for this case organization. The research is a qualitative case study, and the empirical material was gathered through personal interviews, comprising 15 employee interviews and one supervisor interview. The results showed that more face-to-face interaction between employees is needed for learning, because much of the knowledge of the process is tacit and therefore difficult to share in other ways. Offering these sharing possibilities can also have a positive impact on job satisfaction, because it increases the sense of community among employees, which was found to be lacking. New employees need more feedback to improve their learning and confidence. According to the literature, continuous learning and job satisfaction have a relatively strong relationship with productivity. The employees' job description in the case organization has moved towards knowledge work due to continuous automation and expansion of the production process. This emphasizes the importance of continuous learning and means that productivity can also be viewed from a quality perspective. The normal productivity output in the case organization is stable, and by focusing on the quality of work through improved continuous learning and job satisfaction, upsets in production can be handled and prevented more effectively. Continuous learning also increases the free human capital input and its utilization, which can breed output-increasing innovations that raise productivity in the long term. Job satisfaction can likewise increase productivity output in the end, because satisfied employees work more efficiently rather than doing only the minimum tasks required; they were also found to participate more in learning activities.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, confirmed by the ability of the model to reproduce price spikes when their effects persist over a sufficiently long period of time.
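
As a rough illustration of the construction described in the abstract, the following is a minimal Euler-Maruyama sketch (under assumed parameters, not the thesis's exact specification) of an interacting particle system whose drift uses a Heaviside interaction kernel; for a large number of particles, the empirical distribution of such a system is known to approximate the solution of a viscous Burgers-type equation.

import numpy as np

def heaviside_particle_system(n_particles=1000, t_end=1.0, n_steps=200,
                              sigma=0.5, seed=0):
    """Euler-Maruyama simulation of dX_i = (1/N) sum_j H(X_i - X_j) dt + sigma dW_i.
    For large N, the empirical distribution of the particles approximates the
    solution of a viscous Burgers-type equation (propagation of chaos)."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = rng.normal(0.0, 1.0, size=n_particles)   # initial "prices" (assumed)
    for _ in range(n_steps):
        # Drift of particle i: fraction of particles strictly below it (Heaviside kernel).
        drift = (x[:, None] > x[None, :]).mean(axis=1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

if __name__ == "__main__":
    prices = heaviside_particle_system()
    # Empirical CDF at a few points; for large N this tracks the Burgers-type solution.
    grid = np.linspace(-2, 3, 6)
    ecdf = [(prices <= g).mean() for g in grid]
    print(list(zip(grid.round(2), np.round(ecdf, 3))))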

Relevance:

100.00%

Publisher:

Abstract:

In older populations, fractures are common and their consequences may be serious both for the individual and for society. However, information is scarce about the incidence, predictors and consequences of fractures in population-based, unselected cohorts that include both men and women and have a long follow-up. The objective of this study was to analyse the incidence and predictors of fractures, as well as functional decline and excess mortality due to fractures, among 482 men and 695 women aged 65 or older in the municipality of Lieto, Finland, from 1991 until 2002. In the analyses, Poisson, Cox proportional hazards and cumulative logistic regression models were used to control for several confounding variables. During the 12-year follow-up, with a total of 10 040 person-years (PY), 307 persons (26%) sustained a total of 425 fractures, of which 77% were sustained by women. The total incidence of fractures was 53.4 per 1000 PY (95% confidence interval [95% CI]: 47.9 - 59.5) in women and 24.9 per 1000 PY (95% CI: 20.4 - 30.4) in men. The incidence rates of fractures at any site and of hip fractures increased with age. No significant changes in the age-adjusted incidence rates of fractures were found in either gender during the 12-year follow-up. The predictors of fractures varied by gender. In multivariate analyses, reduced handgrip strength and a body mass index (BMI) lower than 30 in women, and a large number of depressive symptoms in men, were independent predictors of fractures. A compression fracture in one or more thoracic or upper lumbar vertebrae on chest radiography at baseline was associated with subsequent fractures in both genders. Lower body fractures independently predicted both short-term (0-2 years) and long-term (up to 8 years) decline in mobility and in activities of daily living (ADL) performance during the 8-year follow-up. Upper body fractures predicted decline in ADL performance during long-term follow-up. In the 12-year follow-up, hip fractures in men (Hazard Ratio [HR] 8.1, 95% CI: 4.4-14.9) and in women (HR 3.0, 95% CI: 1.9-4.9), and fractures of the proximal humerus in men (HR 5.4, 95% CI: 1.6-17.7), were independently associated with excess mortality. In addition, lack of leisure-time physical exercise independently predicted both functional decline and excess mortality. Fractures are common among older people and have serious individual consequences. Further studies are needed on the effectiveness of fall and fracture prevention and on improving care and rehabilitation after fractures.
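
For readers unfamiliar with the reported quantities, the sketch below shows how an incidence rate per 1000 person-years and its exact (Garwood) Poisson confidence interval can be computed; the event count and person-years in the example are illustrative placeholders, not figures from the study.

from scipy.stats import chi2

def incidence_rate_per_1000_py(events, person_years, alpha=0.05):
    """Incidence rate per 1000 person-years with an exact (Garwood) Poisson CI."""
    rate = 1000.0 * events / person_years
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 1000.0 * lower / person_years, 1000.0 * upper / person_years

if __name__ == "__main__":
    # Placeholder numbers, not taken from the study.
    print(incidence_rate_per_1000_py(events=100, person_years=2000.0))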

Relevance:

100.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results as a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Hence, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
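
As a generic illustration of automated form querying (this is not the thesis's form query language or the I-Crawler itself), the sketch below submits a query term to a hypothetical GET-based search interface and scrapes result links from the returned page; the endpoint, parameter name and result markup are assumptions.

import requests
from bs4 import BeautifulSoup

# Hypothetical search interface; the endpoint, parameter names and result markup
# are illustrative assumptions, not taken from the thesis.
SEARCH_URL = "https://example.org/search"

def query_web_database(term, max_results=10):
    """Submit a query term to a GET-based web search form and scrape result links."""
    response = requests.get(SEARCH_URL, params={"q": term}, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for link in soup.find_all("a", href=True):
        results.append({"title": link.get_text(strip=True), "url": link["href"]})
        if len(results) >= max_results:
            break
    return results

if __name__ == "__main__":
    for hit in query_web_database("unemployment duration"):
        print(hit["title"], "->", hit["url"])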

Relevance:

100.00%

Publisher:

Abstract:

The ultimate problem considered in this thesis is modeling a high-dimensional joint distribution over a set of discrete variables. For this purpose, we consider classes of context-specific graphical models, and the main emphasis is on learning the structure of such models from data. Traditional graphical models compactly represent a joint distribution through a factorization justified by statements of conditional independence which are encoded by a graph structure. Context-specific independence is a natural generalization of conditional independence that only holds in a certain context, specified by the conditioning variables. We introduce context-specific generalizations of both Bayesian networks and Markov networks by including statements of context-specific independence which can be encoded as a part of the model structures. For the purpose of learning context-specific model structures from data, we derive score functions, based on results from Bayesian statistics, by which the plausibility of a structure is assessed. To identify high-scoring structures, we construct stochastic and deterministic search algorithms designed to exploit the structural decomposition of our score functions. Numerical experiments on synthetic and real-world data show that the increased flexibility of context-specific structures can more accurately emulate the dependence structure among the variables and thereby improve the predictive accuracy of the models.
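
To make the notion of context-specific independence concrete, here is a minimal sketch (not code from the thesis) of a conditional distribution P(Y | X1, X2) stored as a decision tree: in the context X1 = 0 the distribution of Y does not depend on X2, a statement that a full conditional probability table cannot express as compactly.

# Minimal illustration of context-specific independence (CSI); probabilities are assumed.
# P(Y = 1 | X1, X2) stored as a decision tree: when X1 == 0 the branch never looks at X2,
# encoding the CSI statement "Y is independent of X2 given X1 = 0".

def p_y1_given(x1, x2):
    """Tree-structured conditional distribution over a binary Y."""
    if x1 == 0:
        return 0.2                       # same value for x2 = 0 and x2 = 1: CSI holds
    return 0.7 if x2 == 0 else 0.1       # in context x1 = 1, Y does depend on X2

if __name__ == "__main__":
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(f"P(Y=1 | X1={x1}, X2={x2}) = {p_y1_given(x1, x2)}")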