865 results for kernel estimator


Relevance:

10.00%

Publisher:

Abstract:

In this work, XG extracted from Tamarindus indica (XGT) and Copaifera langsdorffii (XGC) seeds was deposited onto Si wafers as thin films. The characteristics of the XGT and XGC adsorbed layers were compared with a commercial XG sample (TKP, tamarind kernel powder) by ellipsometry and atomic force microscopy (AFM). Moreover, the adsorption of an oxidized derivative of XGT (To60) onto amino-terminated Si wafers and the immobilization of bovine serum albumin (BSA) onto polysaccharide-covered wafers, as a function of pH, were also investigated. The XG samples presented Glc:Xyl:Gal molar ratios of 2.4:2.1:1 (XGC), 2.8:2.3:1 (XGT) and 1.9:1.9:1 (TKP). The structure of XGT and XGC was determined by O-methyl alditol acetate derivatization and showed similar features, but XGC presented more alpha-D-Xyl branches, owing to more beta-D-Gal ends. XGT deposited onto Si adsorbed as fibers and small, uniformly distributed entities, as evidenced by AFM, while TKP and XGC formed larger aggregates. The thickness of To60 on the amino-terminated surface was similar to that determined for XGT on Si wafers. A maximum in the adsorbed amount of BSA occurred close to its isoelectric point (pH 5.5). These findings indicate that XGT and To60 are potential materials for the development of biomaterials and biotechnological devices. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Over the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safer. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation, and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic. Such a system can alert drivers to signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this work is on the recognition of road signs under analysis. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features that are invariant under the above-mentioned transformations. These invariant features are then fed to a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
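As a rough illustration of the classification step described above (not the project's own code), the following minimal Python/scikit-learn sketch trains a kernel SVM on placeholder feature vectors; the feature matrix, labels, and parameter values are assumptions standing in for the invariant features extracted from the sign images.

```python
# Minimal sketch only: a kernel SVM classifying road-sign feature vectors.
# X and y are placeholders for the invariant features and sign-class labels
# mentioned in the abstract; parameter values are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 7))        # placeholder invariant feature vectors
y = rng.integers(0, 4, size=300)     # placeholder labels for four sign classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The RBF kernel implicitly maps the features into a higher-dimensional space,
# which is what lets the SVM separate classes that are not linearly separable.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```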

Relevance:

10.00%

Publisher:

Abstract:

This paper examines whether European Monetary Union (EMU) countries share fairly the effects of their membership in the Eurozone (EZ) or whether there are winners and losers in this ''Euro-game''. Using panel data for 27 European Union (EU) Member States over the period 2001-2012 in the context of a gravity model, we estimate the Euro's effect on bilateral trade and test whether this effect differs across the Member States of the EZ. Two estimation methods are applied: the pooled OLS estimator and the fixed effects estimator. The empirical results show that the individual country effects differ and are statistically significant, indicating that the EMU's effect on trade differs across the Member States of the EZ. The overall effect of the Euro is statistically insignificant, regardless of the estimation method, suggesting that the common European currency may have no effect on bilateral trade.
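To make the two estimation strategies concrete, here is a minimal sketch on simulated gravity-style panel data; the column names (log_trade, log_gdp_o, log_gdp_d, log_dist, both_euro, pair_id) and coefficient values are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch of pooled OLS versus fixed effects in a gravity setting.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pairs, n_years = 60, 12
df = pd.DataFrame({
    "pair_id": np.repeat(np.arange(n_pairs), n_years),
    "year": np.tile(np.arange(2001, 2001 + n_years), n_pairs),
})
df["log_gdp_o"] = rng.normal(10, 1, len(df))
df["log_gdp_d"] = rng.normal(10, 1, len(df))
df["log_dist"] = np.repeat(rng.normal(7, 0.5, n_pairs), n_years)  # fixed per pair
df["both_euro"] = (rng.random(len(df)) < 0.3).astype(int)
df["log_trade"] = (0.8 * df["log_gdp_o"] + 0.7 * df["log_gdp_d"]
                   - 1.1 * df["log_dist"] + 0.05 * df["both_euro"]
                   + rng.normal(0, 1, len(df)))

# Pooled OLS: a single common intercept for all country pairs.
pooled = smf.ols("log_trade ~ log_gdp_o + log_gdp_d + log_dist + both_euro",
                 data=df).fit()

# Fixed effects: a dummy per country pair absorbs time-invariant heterogeneity;
# log_dist is excluded because it is constant within a pair.
fe = smf.ols("log_trade ~ log_gdp_o + log_gdp_d + both_euro + C(pair_id)",
             data=df).fit()

print("pooled OLS euro effect:", round(pooled.params["both_euro"], 3))
print("fixed effects euro effect:", round(fe.params["both_euro"], 3))
```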

Relevance:

10.00%

Publisher:

Abstract:

We consider method-of-moments fixed effects (FE) estimation of technical inefficiency. When N, the number of cross-sectional observations, is large, it is possible to obtain consistent central moments of the population distribution of the inefficiencies. It is well known that the traditional FE estimator may be seriously upward biased when N is large and T, the number of time observations, is small. Based on the second central moment and a single-parameter distributional assumption on the inefficiencies, we obtain unbiased technical inefficiencies in large-N settings. The proposed methodology bridges traditional FE and maximum likelihood estimation: bias is reduced without the random effects assumption.
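The abstract does not name the single-parameter family; purely for illustration, the moment-matching step can be sketched under a half-normal assumption on the inefficiencies, with the scale recovered from the second central moment of the estimated fixed effects.

```latex
% Illustrative moment matching, assuming (for exposition only) half-normal
% inefficiencies; \hat{\alpha}_i are the estimated fixed effects and
% \hat{m}_2 is their second central moment.
\[
  u_i \sim \left|N(0,\sigma_u^2)\right| \;\Rightarrow\;
  \operatorname{E}[u] = \sigma_u\sqrt{2/\pi}, \qquad
  \operatorname{Var}(u) = \sigma_u^2\left(1-\tfrac{2}{\pi}\right),
\]
\[
  \hat{m}_2 = \frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{\alpha}_i-\bar{\hat{\alpha}}\bigr)^2
  \quad\Longrightarrow\quad
  \hat{\sigma}_u = \sqrt{\frac{\hat{m}_2}{1-2/\pi}},
\]
so the mean inefficiency can be estimated as \(\hat{\sigma}_u\sqrt{2/\pi}\) without
ranking the noisy \(\hat{\alpha}_i\) against their maximum, which is the source of
the upward bias when $N$ is large and $T$ is small.
```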

Relevance:

10.00%

Publisher:

Abstract:

This is a note about proxy variables and instruments for the identification of structural parameters in regression models. In our experience, econometric textbooks treat these two issues separately, although in practice the two concepts are very often combined. Usually, proxy variables are inserted into instrumental variable regressions with the motivation that they are exogenous, implicitly meaning that they are exogenous in a reduced-form model and not in a structural model. Indeed, if these variables were exogenous they should be redundant in the structural model, e.g. IQ as a proxy for ability. Valid proxies reduce unexplained variation and increase the efficiency of the estimator of the structural parameter of interest. This is especially important when the instrument is weak. With a simple example we demonstrate what is required of a proxy and an instrument when they are combined. It turns out that when a researcher has a valid instrument, the requirements on the proxy variable are weaker than if no such instrument exists.
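A small simulation makes the point concrete. In the sketch below (illustrative variable names; plain two-stage least squares coded by hand with textbook standard errors), the instrument z identifies the coefficient on the endogenous regressor x with or without the proxy q, but including the exogenous proxy shrinks the residual variance and hence the standard error.

```python
# Toy illustration: a valid instrument z for the endogenous regressor x, plus an
# exogenous proxy q (think IQ for unobserved ability). The proxy is not needed
# for identification but tightens the 2SLS standard error on x.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
ability = rng.normal(size=n)                        # unobserved
z = rng.normal(size=n)                              # instrument, unrelated to ability
x = 0.5 * z + 0.8 * ability + rng.normal(size=n)    # endogenous regressor
q = ability + rng.normal(scale=0.5, size=n)         # noisy proxy for ability
y = 1.0 * x + 2.0 * ability + rng.normal(size=n)

def tsls(y, endog, instruments, exog=None):
    """Plain two-stage least squares; returns coefficient on endog and its SE."""
    n = len(y)
    ones = np.ones((n, 1))
    exog = ones if exog is None else np.column_stack([ones, exog])
    Z = np.column_stack([exog, instruments])
    X = np.column_stack([exog, endog])
    # First stage: project X onto the instrument set, then OLS of y on X-hat.
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]
    resid = y - X @ beta
    cov = resid.var(ddof=X.shape[1]) * np.linalg.inv(Xhat.T @ Xhat)
    return beta[-1], np.sqrt(cov[-1, -1])

print("2SLS without proxy: coef=%.3f se=%.3f" % tsls(y, x, z))
print("2SLS with proxy:    coef=%.3f se=%.3f" % tsls(y, x, z, exog=q))
```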

Relevance:

10.00%

Publisher:

Abstract:

Solutions to combinatorial optimization problems, such as facility location problems, frequently rely on heuristics to minimize the objective function. The optimum is sought iteratively, and a criterion is needed to decide when the procedure (almost) attains it. Pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small, almost dormant, branch of the literature suggests using statistical principles to estimate the minimum and bounds on it as a tool for deciding when to stop and for evaluating the quality of the solution. In this paper we examine how well statistical bounds obtained from four different estimators work, using simulated annealing on p-median test problems taken from Beasley's OR-Library. We find the Weibull estimator and the second-order Jackknife estimator preferable, and a required sample size of about 10, much smaller than the current recommendation. However, reliable statistical bounds are found to depend critically on a sample of high-quality heuristic solutions, and we give a simple statistic useful for checking that quality. We end the paper with an illustration of using statistical bounds in a problem of locating some 70 distribution centers of the Swedish Post in one Swedish region.
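The paper's estimators are not reproduced here; as an illustration of extrapolating a point estimate of the unknown minimum from a sample of heuristic solution values, the sketch below assumes a second-order jackknife of the common form 3x(1) - 3x(2) + x(3) on the ordered sample (an assumption for exposition, not the paper's exact formula).

```python
# Sketch of a statistical point estimate of the unknown optimum from a sample of
# heuristic objective values; the second-order jackknife extrapolation used here
# is one common form and is offered purely as an illustration.
import numpy as np

def jackknife2_minimum(values):
    """Second-order jackknife extrapolation of the minimum of the objective."""
    x = np.sort(np.asarray(values, dtype=float))
    if len(x) < 3:
        raise ValueError("need at least three heuristic solution values")
    return 3 * x[0] - 3 * x[1] + x[2]

# e.g. objective values from ~10 independent simulated-annealing runs (made up)
runs = [1287.4, 1290.1, 1291.8, 1293.0, 1295.6,
        1296.2, 1299.9, 1301.3, 1304.7, 1310.0]
print("best observed value:", min(runs))
print("jackknife estimate of the optimum:", jackknife2_minimum(runs))
```

A statistical bound would then place a confidence limit around such a point estimate (for instance from a Weibull fit to the sample), which is where the quality of the heuristic solutions becomes critical.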

Relevance:

10.00%

Publisher:

Abstract:

Being able to perform an effective examination of volatile memory is becoming increasingly important in IT-forensic investigations, both for Linux- and Windows-based PC installations and for mobile devices running Android and other mobile operating systems. Android uses a modified Linux kernel, whose modifications adapt the kernel to the particular requirements of a mobile operating system. These modifications include inter-process message passing as well as changes to how main memory is managed and monitored. Since the two kernels are so closely related, the same basic principles can be used to dump and examine memory. The dump is performed via a kernel module, in this report the software LiME, which can handle both kernels. Analysis of the memory requires that the tools used understand the memory layout in question; depending on the method a tool uses, information about various symbols may also be needed. The tool used in this thesis project is called Volatility and is, on paper, capable of extracting all the information needed to perform a correct examination. The work aimed to further develop existing methods for analysing volatile memory on Linux-based machines (PC) and embedded systems (Android). Problems arose in the examination of volatile memory on Android, and the stated goals could not be fully achieved. It turned out that memory analysis targeting the PC platform is both simpler and smoother than it is on Android.

Relevance:

10.00%

Publisher:

Abstract:

This study analyses the effects of firm relocation on firm profits, using longitudinal data on Swedish limited liability firms and employing a difference-in-difference propensity score method in the empirical analysis. Using propensity score matching, the pre-relocation differences between relocating and non-relocating firms are balanced. In addition, a difference-in-difference estimator is employed in order to control for all time-invariant unobserved heterogeneity among firms. For matching, nearest-neighbour matching using the one, two and three nearest neighbours is employed. The balancing results indicate that matching achieves a good balance and that similar relocating and non-relocating firms are being compared. The estimated average treatment effects on the treated indicate that relocation has a significant effect on the profits of the relocating firms. In other words, firms that relocate increase their profits significantly, in comparison with what their profits would have been had they not relocated. This effect is estimated to vary between 3 and 11 percentage points, depending on the length of the analysed period after relocation.
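For concreteness, the sketch below walks through the three ingredients named above on simulated data: a propensity score for relocation, one-nearest-neighbour matching on that score, and a difference-in-differences contrast of profits. Variable names and effect sizes are assumptions for the example, not the study's data or code.

```python
# Illustrative difference-in-difference propensity score matching on toy data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 2000
firms = pd.DataFrame({
    "size": rng.normal(3, 1, n),
    "age": rng.integers(1, 40, n),
    "profit_pre": rng.normal(10, 1, n),
})
# Selection into relocation depends on observables; the true effect is 0.07.
p_true = 1 / (1 + np.exp(-(-2 + 0.3 * firms["size"] - 0.02 * firms["age"])))
firms["relocated"] = (rng.random(n) < p_true).astype(int)
firms["profit_post"] = (firms["profit_pre"] + 0.10
                        + 0.07 * firms["relocated"] + rng.normal(0, 0.5, n))

# 1) propensity score for relocating, from pre-treatment covariates
X = firms[["size", "age", "profit_pre"]]
ps_model = LogisticRegression(max_iter=1000).fit(X, firms["relocated"])
firms["pscore"] = ps_model.predict_proba(X)[:, 1]

treated = firms[firms["relocated"] == 1]
control = firms[firms["relocated"] == 0]

# 2) one nearest neighbour on the propensity score (matching with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx[:, 0]]

# 3) difference-in-differences: profit change for treated minus matched controls
att = ((treated["profit_post"].values - treated["profit_pre"].values).mean()
       - (matched["profit_post"].values - matched["profit_pre"].values).mean())
print("ATT (diff-in-diff on matched sample): %.3f" % att)
```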

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of a summary and five self-contained papers addressing the dynamics of firms in the Swedish wholesale trade sector. Paper [1] focuses on determinants of new firm formation in the Swedish wholesale trade sector, using two definitions of firms' relevant markets: markets defined as administrative areas and markets based on the cost-minimizing behavior of retailers. The paper shows that newly entering firms tend to avoid regions that already have a high concentration of other firms in the same branch of wholesaling, while right-of-the-center local government and quality of infrastructure have positive effects on the entry of new firms. The signs of the estimated coefficients remain the same regardless of which definition of the relevant market is used, while the sizes of the coefficients are generally larger once relevant markets delineated on the cost-minimizing assumption of retailers are used. Paper [2] analyses determinants of firm relocation, distinguishing between the role of factors in in-migration municipalities and out-migration municipalities. The results indicate that firm-specific factors, such as profits, age and size of the firm, are negatively related to the firm's decision to relocate. Furthermore, firms seem to avoid municipalities that already have a high concentration of firms operating in the same branch of wholesaling and to be more reluctant to leave municipalities governed by right-of-the-center parties. Lastly, firms seem to avoid moving to municipalities characterized by high population density. Paper [3] addresses determinants of firm growth, adopting OLS and a quantile regression technique. The results indicate that very little of firm growth can be explained by the firm-, industry- and region-specific factors controlled for in the estimated models. Instead, firm growth seems to be driven by internal characteristics of firms, factors that are difficult to capture in conventional statistics. This result supports Penrose's (1959) suggestion that internal resources such as firm culture, brand loyalty, entrepreneurial skills, and so on, are important determinants of firm growth rates. Paper [4] formulates a forecasting model for firm entry into local markets and tests this model using data from the Swedish wholesale industry. The empirical analysis is based on directly estimating the profit function of wholesale firms and identifying low- and high-return local markets. The results indicate that 19 of 30 estimated models show more net entry in high-return municipalities, but the estimated parameter is statistically significant at conventional levels in only one of the estimated models, and then with an unexpected negative sign. Paper [5] studies the effects of firm relocation on the profits of relocating firms, employing difference-in-difference propensity score matching. Using propensity score matching, the pre-relocation differences between relocating and non-relocating firms are balanced, while the difference-in-difference estimator controls for all time-invariant unobserved heterogeneity among firms. The results suggest that firms that relocate increase their profits significantly, in comparison with what their profits would have been had they not relocated. This effect is estimated to vary between 3 and 11 percentage points, depending on the length of the analysed period.
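As an illustration of the quantile-regression step in Paper [3] (variable names and data are invented for the example, not the thesis data), the sketch below contrasts OLS with conditional-quantile estimates of a growth equation.

```python
# Illustrative OLS versus quantile regression for a firm-growth equation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1500
df = pd.DataFrame({"size": rng.normal(4, 1, n),
                   "age": rng.integers(1, 50, n).astype(float)})
# Heteroskedastic growth: covariates matter more in the upper tail.
df["growth"] = (0.02 * df["size"] - 0.001 * df["age"]
                + rng.normal(0, 0.1, n) * (1 + 0.3 * df["size"]))

ols_fit = smf.ols("growth ~ size + age", data=df).fit()
for q in (0.10, 0.50, 0.90):
    q_fit = smf.quantreg("growth ~ size + age", data=df).fit(q=q)
    print(f"q = {q:.2f}  size coefficient: {q_fit.params['size']:.4f}")
print(f"OLS       size coefficient: {ols_fit.params['size']:.4f}")
```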

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a system to recognise and classify road and traffic signs for the purpose of developing an inventory of them that could assist highway engineers in their tasks of updating and maintaining them. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification.

Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm, and a fuzzy colour segmentation algorithm. All algorithms are tested using hundreds of images, and the shadow-highlight invariant algorithm is eventually chosen as the best performer because it is immune to shadows and highlights. It is also robust, having been tested under different lighting conditions, weather conditions, and times of day. Approximately a 97% successful segmentation rate was achieved using this algorithm.

Recognition of traffic signs is carried out using a fuzzy shape recogniser. Based on four shape measures (rectangularity, triangularity, ellipticity, and octagonality), fuzzy rules were developed to determine the shape of the sign. Among these shape measures, octagonality has been introduced in this research. The final decision of the recogniser is based on the combination of both the colour and the shape of the sign. The recogniser was tested under a variety of conditions, giving an overall performance of approximately 88%.

Classification was undertaken using a Support Vector Machine (SVM) classifier. The classification is carried out in two stages: classification of the rim's shape followed by classification of the interior of the sign. The classifier was trained and tested using binary images in addition to five different types of moments: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested using different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images show the best testing results, followed by Legendre moments. The linear kernel gives the best testing results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
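A minimal sketch of the kernel and SVM-type comparison described above, using scikit-learn on placeholder moment features; the data, parameter values, and feature dimensions are assumptions for the example, not the thesis images or tuned settings.

```python
# Illustrative comparison of linear vs. RBF kernels and C-SVM vs. nu-SVM variants.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC, NuSVC

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 16))        # placeholder moment feature vectors
y = rng.integers(0, 5, size=400)      # placeholder sign-class labels

candidates = {
    "C-SVM, linear kernel": SVC(kernel="linear", C=1),
    "C-SVM, RBF kernel":    SVC(kernel="rbf", C=10, gamma="scale"),
    "nu-SVM, RBF kernel":   NuSVC(kernel="rbf", nu=0.3, gamma="scale"),
}
for name, svm in candidates.items():
    model = make_pipeline(StandardScaler(), svm)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```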