70 results for Foreground Segmentation


Relevance:

10.00%

Publisher:

Abstract:

It has been shown that the accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, with a dense breast drastically reducing detection sensitivity. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. Here, we describe the development of an automatic breast tissue classification methodology, which can be summarized in a number of distinct steps: 1) segmentation of the breast area into fatty versus dense mammographic tissue; 2) extraction of morphological and texture features from the segmented breast areas; and 3) use of a Bayesian combination of a number of classifiers. The evaluation, based on a large number of cases from two different mammographic data sets, shows a strong correlation ( and 0.67 for the two data sets) between automatic and expert-based Breast Imaging Reporting and Data System (BI-RADS) mammographic density assessment.
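The pipeline above ends with a Bayesian combination of several classifiers. A minimal sketch of such a product-rule fusion, not the authors' implementation; the class-probability vectors below are invented for illustration:

```python
import numpy as np

def bayesian_combination(prob_list):
    """Fuse per-classifier class-probability vectors with a naive
    Bayesian (product) rule, renormalising so the result sums to one."""
    combined = np.prod(np.vstack(prob_list), axis=0)
    return combined / combined.sum()

# Three hypothetical classifiers scoring four density classes
clf_probs = [
    np.array([0.6, 0.2, 0.1, 0.1]),
    np.array([0.5, 0.3, 0.1, 0.1]),
    np.array([0.4, 0.4, 0.1, 0.1]),
]
fused = bayesian_combination(clf_probs)
print(fused.argmax())  # index of the class with highest combined posterior
```

The product rule assumes the classifiers are conditionally independent given the class; the cited paper may use a different combination scheme.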

Relevance:

10.00%

Publisher:

Abstract:

In the last few years, some of the visionary concepts behind the Virtual Physiological Human began to be demonstrated in various clinical domains, showing great promise for improving healthcare management. In the current work, we provide an overview of image- and biomechanics-based techniques that, when put together, provide a patient-specific pipeline for the management of intracranial aneurysms. The derivation and subsequent integration of morphological, morphodynamic, haemodynamic and structural analyses allow us to extract patient-specific models and information from which diagnostic and prognostic descriptors can be obtained. Linking such new indices with relevant clinical events should bring new insights into the processes behind aneurysm genesis, growth and rupture. The development of techniques for modelling endovascular devices such as stents and coils allows the evaluation of alternative treatment scenarios before the intervention takes place and could also contribute to the understanding and improved design of more effective devices. A key element in facilitating the clinical take-up of all these developments is their comprehensive validation. Although a number of previously published results have shown the accuracy and robustness of individual components, further efforts should be directed to demonstrating the diagnostic and prognostic efficacy of these advanced tools through large-scale clinical trials.

Relevance:

10.00%

Publisher:

Abstract:

In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs, which shall be available in the future from a new kind of Earth observation satellite. This setting makes both views of the scene similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. Two crucial technological challenges remain: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise-affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact this theory allows us to reduce the number of parameters to be adjusted, and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition we propose an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
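The control of false detections that computational Gestalt theory provides is usually expressed through the Number of False Alarms (NFA): a grouping is accepted only when its expected number of accidental occurrences under a background model is below 1. A hedged sketch of that detection rule, with all numbers invented:

```python
from math import comb

def nfa(n, k, p, n_tests):
    """Number of False Alarms for observing at least k of n samples
    agreeing with a candidate structure when each agrees by chance
    with probability p: n_tests times the binomial tail. An a-contrario
    detection is declared 'meaningful' when NFA < 1."""
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return n_tests * tail

# 90 of 100 points consistent with an affine patch at chance level 0.1:
# astronomically unlikely by accident, so the region is detected
print(nfa(100, 90, 0.1, 10**6) < 1)
```

The threshold NFA < 1 is what removes free detection parameters: it bounds the expected number of false detections over the whole family of tests.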

Relevance:

10.00%

Publisher:

Abstract:

This study analyses the determinants of the rate of temporary employment in various OECD countries using both macro-level data drawn from the OECD and EUROSTAT databases and micro-level data drawn from the 8th wave of the European Household Panel. A comparative analysis is set out to test different explanations originally formulated for the Spanish case. The evidence suggests that the overall distribution of temporary employment in advanced economies cannot be explained by the characteristics of national productive structures. This evidence seems at odds with previous interpretations based on segmentation theories. As an alternative explanation, two types of supply-side factors are tested: crowding-out effects and educational gaps in the workforce. The former seems non-significant, whilst the effects of the latter disappear after controlling for the levels of institutional protection in standard employment during the 1980s. Multivariate analysis shows that only this latter institutional variable, together with the degree of coordinated centralisation of the collective bargaining system, seems to have a significant impact on the distribution of temporary employment in the countries examined. On the basis of this observation, an explanation of the very high levels of temporary employment observed in Spain is proposed. This explanation is consistent with both country-specific and comparative evidence.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we portray the features of the Catalan textile labour market in a period of technological change. Supply of and demand for labour, as well as a gendered view of living standards, are presented. A first set of results is that labour supply adjusted to changes in labour demand through the spread of new demographic attitudes. In this respect we infer that economic agents (the labouring population) were able to modify the economic condition of their children. A second set of results refers to living standards and income distribution inequality. Here we see that unemployment and protectionism were the main sources breeding income inequality. A third set of results deals with the extreme segmentation of the labour market according to gender. Since women's real wages did not follow an economic rationale, we conclude that women were outside the labour market.

Relevance:

10.00%

Publisher:

Abstract:

The general objective of the study was to empirically test a reciprocal model of job satisfaction and life satisfaction while controlling for some sociodemographic variables. 827 employees working in 34 car dealerships in Northern Quebec (56% response rate) were surveyed. The multiple-item questionnaires were analysed using correlation analysis, chi-square tests and ANOVAs. Results show distinct patterns in the relationship between job and life satisfaction: 49.2% of individuals exhibit a spillover-type relationship, 43.5% compensation, and 7.3% segmentation. Results, nonetheless, are far richer and the model becomes much more refined when sociodemographic indicators are taken into account. Globally, sociodemographic variables demonstrate some effects on each satisfaction individually but also on the interrelation (nature of the relations) between life and work satisfaction.
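One common way to operationalise the three relationship types is from the sign and size of an individual's job-life satisfaction correlation; a toy sketch, where the threshold and the coding rule are assumptions for illustration, not the authors' procedure:

```python
def relation_type(r, eps=0.1):
    """Classify a job-life satisfaction correlation into the three
    relationship types: positive -> spillover (domains move together),
    negative -> compensation (one offsets the other),
    near zero -> segmentation (domains kept separate).
    eps is an assumed cut-off for 'near zero'."""
    if r > eps:
        return "spillover"
    if r < -eps:
        return "compensation"
    return "segmentation"
```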

Relevance:

10.00%

Publisher:

Abstract:

We study the earnings structure and the equilibrium assignment of workers when workers exert intra-firm spillovers on each other. We allow for arbitrary spillovers provided output depends on some aggregate index of workers' skill. Despite the possibility of increasing returns to skills, equilibrium typically exists. We show that equilibrium will typically be segregated: the skill space can be partitioned into a set of segments such that any firm hires from only one segment. Next, we apply the model to analyze the effect of information technology on segmentation and the distribution of income. There are two types of human capital: productivity and creativity, i.e. the ability to produce ideas that may be duplicated over a network. Under plausible assumptions, inequality rises and then falls when network size increases, and the poorest workers cannot lose. We also analyze the impact of an improvement in worker quality and of increased international mobility of ideas.

Relevance:

10.00%

Publisher:

Abstract:

Many workers believe that personal contacts are crucial for obtaining jobs in high-wage sectors. On the other hand, firms in high-wage sectors report using employee referrals because they help provide screening and monitoring of new employees. This paper develops a matching model that can explain the link between inter-industry wage differentials and use of employee referrals. Referrals lower monitoring costs because high-effort referees can exert peer pressure on co-workers, allowing firms to pay lower efficiency wages. On the other hand, informal search provides fewer job and applicant contacts than formal methods (e.g., newspaper ads). In equilibrium, the matching process generates segmentation in the labor market because of heterogeneity in the size of referral networks. Referrals match good high-paying jobs to well-connected workers, while formal methods match less attractive jobs to less-connected workers. Industry-level data show a positive correlation between industry wage premia and use of employee referrals. Moreover, evidence using the NLSY shows similar positive and significant OLS and fixed-effects estimates of the returns to employee referrals, but insignificant effects once sector of employment is controlled for. This evidence suggests referred workers earn higher wages not because of higher unobserved ability or better matches but rather because they are hired in high-wage sectors.

Relevance:

10.00%

Publisher:

Abstract:

The trade-off between property rights/price regulation and innovation depends on country characteristics and drug industry specificities. Access to drugs and innovation can be reconciled in seven ways that, among others, include: public health strengthening in the countries with the largest access problems (those among the poor with the weakest institutions); public and private aid to make R&D on neglected diseases attractive; price discrimination with market segmentation; and requiring patent owners to choose either protection in the rich countries or protection in the poor countries (but not both). Regarding price regulation, after a review of theoretical arguments and empirical evidence, seven strategies to reconcile health and industrial considerations are outlined, including: mitigating the medical profession's dependence on the pharmaceutical industry; considering a drug as an input of a production process; splitting drug authorization from public funding decisions; establishing an efficiency minimum for all health production inputs; and stopping the European R&D haemorrhage.

Relevance:

10.00%

Publisher:

Abstract:

In today's society, companies depend to a large extent on their IT resources. Their capacity to survive and innovate in the current market, where competition grows stronger every day, rests on an IT infrastructure that not only allows them to deploy computers and servers quickly and efficiently but also protects them against system outages, server problems, crashes and physical hardware disasters. To avoid such problems, which can halt a company's operations, work began in the field of virtualization with the aim of solving them while exploiting existing hardware resources more optimally and efficiently, thereby also reducing the cost of the IT infrastructure. The main objective of this work is to follow first-hand the conversion of a real company from an infrastructure of the type one physical server, one role, to a virtual infrastructure of the type one physical server, several virtual servers, several roles. We analyse the current state of the company, its servers and their roles, acquire the necessary hardware and convert all of its servers to a new virtual infrastructure. Special attention is paid to explaining why one option is chosen over another, and several options are given whenever possible. Complementary observations are highlighted in green boxes, and issues that require special attention at the moment they are carried out in red boxes. Finally, once the conversion is done, we review the many advantages this technology brings in terms of reliability, stability, fault tolerance, rapid deployment of new machines, system recovery and utilisation of physical resources.

Relevance:

10.00%

Publisher:

Abstract:

Hand Gesture Recognition (HGR) is currently an important research field owing to the variety of situations in which it is necessary to communicate through signs, such as communication between people who use sign language and those who do not. This project presents a real-time hand gesture recognition method using the Microsoft Xbox Kinect sensor, implemented in a Linux (Ubuntu) environment with the Python programming language and the OpenCV computer vision library, processing the data on a conventional laptop. Thanks to the Kinect sensor's ability to capture depth data of a scene, the positions and trajectories of objects can be determined in three dimensions, enabling a complete real-time analysis of an image or a sequence of images. The proposed recognition procedure is based on segmenting the image so as to work only with the hand, detecting its contours, and then obtaining the convex hull and the convexity defects, which finally serve to determine the number of fingers and interpret the gesture; the end result is the transcription of its meaning in a window that serves as an interface with the interlocutor. The application can recognise the numbers 0 to 5, since only one hand is analysed, some popular gestures, and some letters of the fingerspelling alphabet of Catalan Sign Language. The project is thus a gateway to the field of gesture recognition and the basis of a future sign language recognition system capable of transcribing both dynamic signs and the fingerspelling alphabet.
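The final step of the pipeline, going from convexity defects to a finger count, can be sketched in plain Python. In OpenCV, `cv2.convexityDefects` reports defect depths in 1/256-pixel units; each sufficiently deep valley between extended fingers adds one finger. The threshold and the "+1" rule are common simplifying assumptions, not necessarily the project's exact logic:

```python
def count_fingers(defect_depths, min_depth=10000):
    """Estimate the number of extended fingers from convexity-defect
    depths (as returned in the last column of cv2.convexityDefects,
    in 1/256-pixel units). Each valley deeper than min_depth separates
    two fingers, so k deep valleys imply k + 1 fingers; no deep valley
    is read as a closed fist (0)."""
    deep = sum(1 for d in defect_depths if d > min_depth)
    return deep + 1 if deep else 0
```

In the full system these depths would come from the depth-segmented hand contour; here they are passed in directly so the counting rule can be tested in isolation.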

Relevance:

10.00%

Publisher:

Abstract:

The matching function, a key building block in models of labor market frictions, implies that the job finding rate depends only on labor market tightness. We estimate such a matching function and find that the relation, although remarkably stable over 1967-2007, broke down spectacularly after 2007. We argue that labor market heterogeneities are not fully captured by the standard matching function, but that a generalized matching function that explicitly takes into account worker heterogeneity and market segmentation is fully consistent with the behavior of the job finding rate. The standard matching function can break down when, as in the Great Recession, the average characteristics of the unemployed change too much, or when dispersion in labor market conditions (the extent to which some labor markets fare worse than others) increases too much.
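A standard specification is the Cobb-Douglas matching function m(u, v) = mu * u**sigma * v**(1-sigma), under which the job finding rate depends only on tightness theta = v/u: f(theta) = mu * theta**(1-sigma). A sketch of the usual log-linear estimation on synthetic data (mu, sigma and the series are invented, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tightness series and a Cobb-Douglas job-finding rate
# f = mu * theta**(1 - sigma), with small multiplicative noise
mu, sigma = 0.6, 0.5
theta = rng.uniform(0.2, 1.5, 500)
f = mu * theta ** (1 - sigma) * np.exp(rng.normal(0, 0.02, 500))

# OLS on log f = log(mu) + (1 - sigma) * log(theta) recovers both parameters
slope, intercept = np.polyfit(np.log(theta), np.log(f), 1)
print(round(slope, 1), round(np.exp(intercept), 1))
```

The paper's point is that this stable log-linear relation failed after 2007; with worker heterogeneity, the aggregate f is no longer a function of aggregate theta alone.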

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope, and it can also be applied to ground-based images.
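The MLE reconstruction that seeds the segmentation is, for Poisson image data, commonly computed with the Richardson-Lucy iteration. A minimal 1-D sketch of that iteration, assuming a known PSF; the paper works on 2-D HST frames, so this is illustrative only:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=30):
    """Minimal 1-D Richardson-Lucy deconvolution: the multiplicative
    fixed-point iteration for the Poisson Maximum Likelihood Estimator.
    Starts from a flat positive estimate and repeatedly corrects it by
    the back-projected ratio of data to the current blurred estimate."""
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1]  # adjoint of convolution uses the flipped PSF
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)  # guard against divide-by-zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est
```

Run long enough, this estimator amplifies noise, which is exactly why the paper regularises it with an entropy prior and region-wise hyperparameters.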

Relevance:

10.00%

Publisher:

Abstract:

This paper gives a practical illustration of three tools that allow the actuary to define rating classes and estimate risk premiums in the ratemaking process for non-life insurance. The first is segmentation analysis (CHAID and XAID), first used in 1997 by UNESPA on its common motor portfolio. The second is a stepwise selection process with the distance-based regression model. The third works with the well-known generalized linear model, which represents the most modern technique in the actuarial literature. With the latter, by combining different link functions and error distributions, the classical additive and multiplicative models can be obtained.
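The closing point, that the choice of link function yields the classical additive or multiplicative tariff model, can be illustrated with invented rating factors: under a log link, factor effects multiply the base premium; under an identity link, loadings add to it. All numbers below are hypothetical:

```python
# Hypothetical base risk premium and rating factors for two tariff variables
base = 300.0
age_factor = {"young": 1.8, "adult": 1.0, "senior": 1.2}
zone_factor = {"urban": 1.4, "rural": 0.9}

def premium_multiplicative(age, zone):
    """Log-link GLM: rating factors combine multiplicatively."""
    return base * age_factor[age] * zone_factor[zone]

def premium_additive(age, zone):
    """Identity-link GLM: effects enter as additive loadings in
    currency units (here scaled from the same illustrative factors)."""
    return base + 300.0 * (age_factor[age] - 1) + 300.0 * (zone_factor[zone] - 1)

print(round(premium_multiplicative("young", "urban"), 2))  # 300 * 1.8 * 1.4
```

The two models coincide for the reference cell ("adult", with zone factor 1.0) and diverge as risks stack up, which is why the choice of link is a substantive modelling decision.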
