826 results for graph matching algorithms
Abstract:
According to Ljungqvist and Sargent (1998), high European unemployment since the 1980s can be explained by a rise in economic turbulence, leading to greater numbers of unemployed workers with obsolete skills. These workers refuse new jobs due to high unemployment benefits. In this paper we reassess the turbulence-unemployment relationship using a matching model with endogenous job destruction. In our model, higher turbulence reduces the incentives of employed workers to leave their jobs. If turbulence has only a tiny effect on the skills of workers experiencing endogenous separation, then the results of Ljungqvist and Sargent (1998, 2004) are reversed, and higher turbulence leads to a reduction in unemployment. Thus, changes in turbulence cannot provide an explanation for European unemployment that reconciles the incentives of both unemployed and employed workers.
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
Abstract:
This paper analyzes the problem of matching heterogeneous agents in a Bayesian learning model. One agent gives a noisy signal to another agent, who is responsible for learning. If production has a strong informational component, a phase of cross-matching occurs, so that agents of low knowledge catch up with those of higher knowledge. It is shown that: (i) a greater informational component in production makes cross-matching more likely; (ii) as the new technology is mastered, production becomes relatively more physical and less informational; (iii) a greater dispersion of the ability to learn and transfer information makes self-matching more likely; and (iv) self-matching leads to more self-matching, whereas cross-matching can make less productive agents overtake more productive ones.
Abstract:
This paper explains the divergent behavior of European and US unemployment rates using a job market matching model of the labor market with an interaction between shocks and institutions. It shows that a reduction in TFP growth rates, an increase in real interest rates, and an increase in tax rates lead to a permanent increase in unemployment rates when the replacement rates or initial tax rates are high, while no increase in unemployment occurs when institutions are "employment friendly". The paper also shows that an increase in turbulence, modelled as an increased probability of skill loss, is not a robust explanation for the European unemployment puzzle in the context of a matching model with both endogenous job creation and job destruction.
Abstract:
This paper theoretically and empirically documents a puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, either sticky wages or match-specific productivity shocks can improve the model's performance by making the firm's flow of surplus more procyclical, which makes hiring more procyclical too.
Abstract:
PRECON S.A. is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish National Rail Transportation Company, to manufacture pre-stressed concrete sleepers for sidings of the new railway lines of the high-speed train AVE. The scheduling problem associated with the manufacturing process of the sleepers is very complex since it involves several constraints and objectives. The constraints are related to production capacity, the quantity of available moulds, satisfying demand, and other operational constraints. The two main objectives are maximizing the usage of the manufacturing resources and minimizing mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool for solving the large-scale instance of this complex real scheduling problem.
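A minimal sketch of the deterministic crowding replacement rule referred to in the abstract above; the encoding, fitness, crossover, mutation, and distance functions below are illustrative placeholders, not the paper's implementation:

```python
import random

def deterministic_crowding(population, fitness, crossover, mutate, distance,
                           generations=100):
    """Generic deterministic crowding loop: each offspring competes only with
    the most similar of its two parents, which preserves niche diversity.
    Assumes an even population size."""
    for _ in range(generations):
        random.shuffle(population)
        next_pop = []
        for p1, p2 in zip(population[::2], population[1::2]):
            c1, c2 = crossover(p1, p2)
            c1, c2 = mutate(c1), mutate(c2)
            # Pair each child with its closest parent, then keep the fitter
            # individual of each parent/child pair.
            if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
                pairs = [(p1, c1), (p2, c2)]
            else:
                pairs = [(p1, c2), (p2, c1)]
            for parent, child in pairs:
                next_pop.append(child if fitness(child) >= fitness(parent) else parent)
        population = next_pop
    return max(population, key=fitness)

# Toy usage: maximize the number of ones in a 20-bit string.
random.seed(1)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]
best = deterministic_crowding(
    pop,
    fitness=sum,
    crossover=lambda a, b: (a[:10] + b[10:], b[:10] + a[10:]),
    mutate=lambda g: [1 - x if random.random() < 0.02 else x for x in g],
    distance=lambda a, b: sum(x != y for x, y in zip(a, b)),
)
print(sum(best))
```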
Abstract:
We study the complexity of rationalizing choice behavior. We do so by analyzing two polar cases, and a number of intermediate ones. In our most structured case, where choice behavior is defined on universal choice domains and satisfies the "weak axiom of revealed preference," finding the complete preorder rationalizing choice behavior is a simple matter. In the polar case, where no restriction whatsoever is imposed, either on choice behavior or on the choice domain, finding the complete preorders that rationalize behavior turns out to be intractable. We show that the task of finding the rationalizing complete preorders is equivalent to a graph problem. This allows existing algorithms from the graph theory literature to be used in the search for rationalizations of choice.
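As an illustrative sketch (not the paper's exact reduction), one common way to connect rationalization to a graph problem is to build a strict revealed-preference digraph from the choice data and order its elements topologically; this only produces a ranking in the easy, acyclic case, which is precisely the kind of structure the unrestricted polar case lacks:

```python
from collections import defaultdict

def revealed_preference_graph(choices):
    """choices: list of (menu, chosen) pairs, where menu is an iterable of
    alternatives and chosen is the set picked from it. Adds an edge x -> y
    whenever x is chosen from a menu that also offers y but y is not chosen
    (x is strictly revealed preferred to y)."""
    graph = defaultdict(set)
    for menu, chosen in choices:
        for x in chosen:
            for y in menu:
                if y not in chosen:
                    graph[x].add(y)
    return graph

def topological_ranking(graph):
    """Return alternatives from best to worst if the revealed-preference
    graph is acyclic, or None if a cycle makes such a ranking impossible."""
    nodes = set(graph) | {y for ys in graph.values() for y in ys}
    state = {}   # node -> "visiting" | "done"
    order = []

    def visit(node):
        if state.get(node) == "done":
            return True
        if state.get(node) == "visiting":
            return False  # cycle detected
        state[node] = "visiting"
        for nxt in graph.get(node, ()):
            if not visit(nxt):
                return False
        state[node] = "done"
        order.append(node)
        return True

    for node in nodes:
        if not visit(node):
            return None
    return list(reversed(order))

# Example: a is chosen over b, and b over c, so the ranking is a, b, c.
choices = [({"a", "b"}, {"a"}), ({"b", "c"}, {"b"})]
print(topological_ranking(revealed_preference_graph(choices)))
```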
Endogenous matching in university-industry collaboration: Theory and empirical evidence from the UK
Abstract:
We develop a two-sided matching model to analyze collaboration between heterogeneous academics and firms. We predict positive assortative matching in terms of both scientific ability and affinity for type of research, but negative assortative matching in terms of ability on one side and affinity on the other. In addition, the most able and most applied academics and the most able and most basic firms should collaborate rather than stay independent. Our predictions receive strong support from the analysis of the teams of academics and firms that propose research projects to the UK's Engineering and Physical Sciences Research Council.
Abstract:
The matching function, a key building block in models of labor market frictions, implies that the job finding rate depends only on labor market tightness. We estimate such a matching function and find that the relation, although remarkably stable over 1967-2007, broke down spectacularly after 2007. We argue that labor market heterogeneities are not fully captured by the standard matching function, but that a generalized matching function that explicitly takes into account worker heterogeneity and market segmentation is fully consistent with the behavior of the job finding rate. The standard matching function can break down when, as in the Great Recession, the average characteristics of the unemployed change too much, or when dispersion in labor market conditions (the extent to which some labor markets fare worse than others) increases too much.
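For concreteness, a common Cobb-Douglas parameterization of the matching function (an illustrative assumption, not the authors' estimated specification) shows why the aggregate job finding rate depends only on tightness, while a segment-by-segment version also depends on how unemployment and vacancies are spread across markets:

```python
import numpy as np

def job_finding_rate(theta, efficiency=0.6, elasticity=0.5):
    """Standard aggregate matching function m(u, v) = A * u**a * v**(1-a):
    the job finding rate m/u depends only on tightness theta = v/u."""
    return efficiency * theta ** (1.0 - elasticity)

def generalized_job_finding_rate(u_segments, v_segments, efficiency=0.6, elasticity=0.5):
    """Matching happens within each segment and is then aggregated, so the
    economy-wide job finding rate depends on the distribution of unemployment
    and vacancies across segments, not only on aggregate tightness."""
    u = np.asarray(u_segments, dtype=float)
    v = np.asarray(v_segments, dtype=float)
    hires = efficiency * u ** elasticity * v ** (1.0 - elasticity)
    return hires.sum() / u.sum()

# Same aggregate tightness (total v / total u = 1), different dispersion:
print(job_finding_rate(1.0))                                  # about 0.6
print(generalized_job_finding_rate([1.0, 1.0], [1.0, 1.0]))   # same as aggregate
print(generalized_job_finding_rate([1.4, 0.6], [0.6, 1.4]))   # lower: mismatch
```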
Abstract:
Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods such as the well-known Chan-Vese method fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust with respect to the speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
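A minimal numpy sketch of the patch feature and Pearson distance described above; the patch size and the "1 minus correlation" form are illustrative assumptions:

```python
import numpy as np

def extract_patch(image, row, col, size=7):
    """Return a flattened size x size patch centered at (row, col);
    assumes the center lies far enough from the image border."""
    half = size // 2
    return image[row - half:row + half + 1, col - half:col + half + 1].ravel()

def pearson_distance(patch_a, patch_b, eps=1e-12):
    """Pearson distance = 1 - Pearson correlation between two patches.
    The correlation normalizes out mean and scale, which makes it less
    sensitive to the multiplicative speckle noise typical of ultrasound."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    corr = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - corr

# Toy usage on a random "image": distance of a patch to itself is ~0.
img = np.random.rand(64, 64)
p = extract_patch(img, 32, 32)
q = extract_patch(img, 40, 40)
print(pearson_distance(p, p), pearson_distance(p, q))
```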
Abstract:
BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities exhibited no significance. Results were similar among 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
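A hedged sketch of the kind of logistic regression analysis described above; the variable names and synthetic data are placeholders, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: one row per chronically infected patient, a binary
# outcome "false_incident" (the algorithm wrongly classified the infection
# as recent), and candidate explanatory factors named in the abstract.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "cd4_percent": rng.normal(25, 8, n),
    "non_b_clade": rng.integers(0, 2, n),
    "suppressed_rna": rng.integers(0, 2, n),
})
# Synthetic outcome in which only age raises the odds of a false-incident call.
logit = -4.0 + 0.05 * (df["age"] - 40)
df["false_incident"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age", "cd4_percent", "non_b_clade", "suppressed_rna"]])
model = sm.Logit(df["false_incident"], X).fit(disp=0)
print(np.exp(model.params))  # odds ratios per unit change in each factor
```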
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
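As a small illustration of the signal-to-noise computation described above (mean attenuation in Hounsfield units divided by its standard deviation); the region-of-interest samples and noise levels below are placeholders, only the two mean attenuation values come from the abstract:

```python
import numpy as np

def roi_snr(hounsfield_values):
    """Signal-to-noise ratio of an aortic region of interest:
    mean attenuation (HU) divided by its standard deviation."""
    values = np.asarray(hounsfield_values, dtype=float)
    return values.mean() / values.std(ddof=1)

# Placeholder ROI samples at two noise indexes (standard deviations are illustrative).
low_noise_roi = np.random.normal(311, 20, 500)   # e.g. 100 kVp, NI 16
high_noise_roi = np.random.normal(290, 60, 500)  # e.g. 100 kVp, NI 70
print(round(roi_snr(low_noise_roi), 1), round(roi_snr(high_noise_roi), 1))
```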