796 results for Empirical Algorithm Analysis
Abstract:
A closed-form solution formula for the kinematic control of manipulators with redundancy is derived using the Lagrangian multiplier method. A differential relationship equivalent to the Resolved Motion Method has also been derived. The proposed method is proved to provide the exact equilibrium state for the Resolved Motion Method. This exactness fixes the repeatability problem of the Resolved Motion Method and establishes a fixed transformation from the workspace to the joint space. Owing to this exactness, the method is also demonstrated to give more accurate trajectories than the Resolved Motion Method. In addition, a new performance measure for redundancy control has been developed. When used with kinematic control methods, this measure helps achieve dexterous movements, including singularity avoidance. Compared with other measures such as the manipulability measure and the condition number, it tends to give superior performance in terms of preserving the repeatability property and providing smoother joint velocity trajectories. Using the fixed-transformation property, Taylor's Bounded Deviation Paths Algorithm has been extended to redundant manipulators.
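The paper's closed-form Lagrangian solution is not reproduced here, but the Resolved Motion baseline it is compared against is commonly implemented as pseudoinverse rate control with a null-space term driven by a secondary measure such as manipulability. A minimal sketch of that baseline (function and gain names are illustrative, not from the paper):

import numpy as np

def resolved_motion_step(J, x_dot, grad_h, k=0.1):
    # J: (m x n) task Jacobian with n > m (redundant manipulator)
    # x_dot: desired task-space velocity; grad_h: gradient of a secondary objective
    J_pinv = np.linalg.pinv(J)                   # Moore-Penrose pseudoinverse
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # projector onto the null space of J
    return J_pinv @ x_dot + k * null_proj @ grad_h

def manipulability(J):
    # Yoshikawa's manipulability measure, one of the measures the paper compares against
    return np.sqrt(np.linalg.det(J @ J.T))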
Abstract:
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
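As a rough illustration of the threshold-selection step, the sketch below picks the score threshold that minimizes the probability of error under an assumed pair of Gaussian score distributions for correct and incorrect hypotheses; the paper's actual score model and priors are not specified in the abstract.

import numpy as np
from scipy.stats import norm

def min_error_threshold(mu_ok, sd_ok, mu_bad, sd_bad, prior_ok=0.5):
    lo = min(mu_ok, mu_bad) - 5 * max(sd_ok, sd_bad)
    hi = max(mu_ok, mu_bad) + 5 * max(sd_ok, sd_bad)
    ts = np.linspace(lo, hi, 10001)
    # accept a hypothesis when its score exceeds t:
    # P(error) = P(miss) * prior_ok + P(false alarm) * (1 - prior_ok)
    p_err = (norm.cdf(ts, mu_ok, sd_ok) * prior_ok
             + (1 - norm.cdf(ts, mu_bad, sd_bad)) * (1 - prior_ok))
    return ts[np.argmin(p_err)]

threshold = min_error_threshold(mu_ok=0.8, sd_ok=0.1, mu_bad=0.5, sd_bad=0.15)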
Abstract:
Impressive claims have been made for the performance of the SNoW algorithm on face detection tasks by Yang et al. [7]. In particular, by looking at both their results and those of Heisele et al. [3], one could infer that the SNoW system performed substantially better than an SVM-based system, even when the SVM used a polynomial kernel and the SNoW system used a particularly simplistic 'primitive' linear representation. We evaluated the two approaches in a controlled experiment, looking directly at performance on a simple, fixed-size test set and isolating out 'infrastructure' issues related to detecting faces at various scales in large images. We found that SNoW performed about as well as linear SVMs, and substantially worse than polynomial SVMs.
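A minimal sketch of this kind of controlled comparison, using scikit-learn SVMs on fixed-size patches; the data below are random placeholders, not the face/non-face sets used in the study.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# X: flattened fixed-size patches (n_samples x n_pixels); y: 1 = face, 0 = non-face (placeholder data)
X = np.random.rand(1000, 19 * 19)
y = np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("linear SVM", SVC(kernel="linear")),
                  ("polynomial SVM", SVC(kernel="poly", degree=2))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))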
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instant in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists of applying an initial transform to break the positive and unit-sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
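For reference, the three transforms can be written in a few lines; the sketch below applies them row-wise to a (T x D) array of positive parts summing to one, and uses one standard orthonormal basis for the isometric log-ratio (the basis choice is not fixed by the abstract).

import numpy as np

def alr(X):
    # additive log-ratio: first D-1 parts relative to the last part
    return np.log(X[:, :-1] / X[:, -1:])

def clr(X):
    # centred log-ratio: parts relative to their geometric mean
    g = np.exp(np.mean(np.log(X), axis=1, keepdims=True))
    return np.log(X / g)

def ilr(X):
    # isometric log-ratio: clr coordinates projected onto an orthonormal basis
    D = X.shape[1]
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = np.sqrt(1.0 / (j * (j + 1)))
        V[j, j - 1] = -np.sqrt(j / (j + 1.0))
    return clr(X) @ V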
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A prior analysis of the seed pixels permits the homogeneity criterion to be adjusted to the region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
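The paper's homogeneity criterion (clustering plus convex-hull construction) is not reproduced here; as a point of reference, a bare-bones seeded region-growing loop with a simple intensity criterion looks like this:

from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    # Grow a region from seed (row, col) over a 2-D grayscale image,
    # accepting neighbours whose intensity is close to the running region mean.
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask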
Abstract:
Michael Porter, recognized as an authority on strategy, has motivated us to analyze his proposals and ideas in his field in depth, with the aim of examining and validating them in the current business environment, which is characterized as turbulent and dynamic. The method chosen was to contrast this environment with the one in which Michael Porter's strategic proposals were developed. To do so, we focus on his proposal for business strategy, specifically the Three Generic Strategies, taking the ZARA case study and its corporate trajectory as an empirical investigation whose results are discussed against parameters arising from the critiques developed by other authors. Sources of information include books, essays, Internet publications, news items, and interviews with customers and employees.
Abstract:
What are ways of searching in graphs? In this class, we will discuss the basics of link analysis, including Google's PageRank algorithm as an example. Readings: L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank Citation Ranking: Bringing Order to the Web," Stanford Technical Report, 1998.
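A minimal power-iteration implementation of PageRank, matching the formulation in the assigned report (damping factor d = 0.85 is the conventional choice):

import numpy as np

def pagerank(links, d=0.85, tol=1e-9, max_iter=100):
    # links: dict {node: list of nodes it links to}; returns a dict of scores
    nodes = list(links)
    n = len(nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.full(n, (1.0 - d) / n)
        for u, outs in links.items():
            if outs:                              # distribute rank over out-links
                share = d * r[idx[u]] / len(outs)
                for v in outs:
                    new[idx[v]] += share
            else:                                 # dangling node: spread rank uniformly
                new += d * r[idx[u]] / n
        done = np.abs(new - r).sum() < tol
        r = new
        if done:
            break
    return dict(zip(nodes, r))

print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))  # a tiny three-page web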
Abstract:
Dynamic capabilities constitute an important contribution to business strategy. This paper is developed from that premise, recognizing that the generation of competences has become the theoretical basis for achieving sustainability in the face of change events that may affect the stability and decision-making of organizations. Given the lack of empirical application of the concept, this paper demonstrates and identifies the tools that empirical application can offer organizations and the instruments it provides for value creation. The ASOS.COM case study illustrates the need to detect and exploit opportunities and threats, as well as to reconfigure, renew, and generate second-order competences in order to confront change. In this way, through capabilities built within firms with a focus on learning and innovation, an understanding of the business is achieved and better future scenarios are secured.
Abstract:
This article focuses on innate concepts: their definition, according to the linguistic work of Noam Chomsky, and the outline of a method for their study. As an introduction to the subject, some academic conceptions of concept acquisition are pointed out, and it is argued that an empirical method for the study of innate concepts is lacking. Next, the article presents the definition of such concepts that Chomsky has defended over time. Finally, in a theoretical way, it presents the conditions for an empirical procedure for the study of innate concepts, called semantic analysis of corpus.
Abstract:
This dissertation studies the transmission mechanisms that link the behavior of agents and firms to the asymmetries present in business cycles. To this end, three DSGE models were built. In the first chapter, the assumption of a symmetric quadratic investment-adjustment function is removed, and the canonical RBC model is reformulated under the assumption that disinvesting a unit of physical capital is more costly than investing it. The second chapter presents the dissertation's main contribution: the construction of a general utility function that nests loss aversion, risk aversion, and habit formation by means of a smooth transition function. The rationale is that individuals are loss averse in recessions and risk averse in booms. In the third chapter, business-cycle asymmetries are analyzed together with asymmetric price and wage adjustment in a New Keynesian setting, in order to find a theoretical explanation for the well-documented asymmetry of the Phillips Curve.
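The dissertation's exact functional form is not given in the abstract; a stylized way to nest the two regimes with a smooth transition function (purely illustrative, not the author's specification) is

\[
U_t = \bigl[1 - G(y_t)\bigr]\, u^{LA}(c_t - h_t) + G(y_t)\, u^{RA}(c_t - h_t),
\qquad
G(y_t) = \frac{1}{1 + e^{-\gamma (y_t - \bar{y})}},
\]

where u^{RA} is a standard risk-averse (e.g. CRRA) utility over habit-adjusted consumption c_t - h_t, u^{LA} is a kinked loss-averse utility, and the logistic weight G shifts between the two branches with the cyclical state y_t.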
Abstract:
The growing empirical literature on the analysis of civil war has recently included the study of conflict duration at the cross-country level. This paper presents, for the first time, a within-country analysis of the determinants of violence duration. I focus on the experience of the Colombian armed conflict. While the conflict has been active for about five decades, local violence ebbs and flows and areas experiencing continuous conflict coexist with places that have been able to resile and where violence is mostly absent. I examine a wide range of factors potentially associated with violence duration at the municipal level, including scale variables, geographical conditions, economic and social variables, institutions and state presence, inequality, government intervention, and victimization variables. I characterize a few variables robustly correlated with the persistence of localized conflict, both across specifications and using different econometric models of duration analysis.
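As a sketch of the kind of duration regression described (the column names and numbers below are placeholders, not the paper's data or specification), using the lifelines library:

import pandas as pd
from lifelines import CoxPHFitter

# one row per municipal conflict spell: years of continuous violence, whether the spell ended,
# and a couple of hypothetical covariates
df = pd.DataFrame({
    "duration":       [4, 12, 7, 30, 2, 15],
    "ended":          [1, 0, 1, 0, 1, 1],
    "state_presence": [0.8, 0.2, 0.5, 0.1, 0.9, 0.4],
    "land_gini":      [0.45, 0.60, 0.52, 0.70, 0.40, 0.55],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="ended")
cph.print_summary()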
Abstract:
This thesis theoretically studies the relationship between the informal sector (both in the labor and the housing market) and the city structure.
Abstract:
In this paper we use the most representative models in the literature on the term structure of interest rates. In particular, we explore affine one-factor models and polynomial-type approximations such as Nelson and Siegel. Our empirical application considers monthly data for the USA and Colombia for estimation and forecasting. We find that affine models do not provide adequate performance either in-sample or out-of-sample. On the contrary, parsimonious models such as Nelson and Siegel give adequate results in-sample; out-of-sample, however, they are not able to systematically improve upon a random walk benchmark forecast.
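For reference, the Nelson and Siegel curve has the closed form y(tau) = b0 + b1*(1 - e^{-lam*tau})/(lam*tau) + b2*[(1 - e^{-lam*tau})/(lam*tau) - e^{-lam*tau}], and with the decay parameter lam fixed the betas can be estimated by least squares; the maturities and yields below are made up for illustration.

import numpy as np

def nelson_siegel(tau, b0, b1, b2, lam):
    x = (1 - np.exp(-lam * tau)) / (lam * tau)
    return b0 + b1 * x + b2 * (x - np.exp(-lam * tau))

def fit_betas(tau, yields, lam=0.0609):   # lam fixed, Diebold-Li style, maturities in months
    x = (1 - np.exp(-lam * tau)) / (lam * tau)
    X = np.column_stack([np.ones_like(tau), x, x - np.exp(-lam * tau)])
    betas, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return betas

tau = np.array([3.0, 6, 12, 24, 60, 120])        # maturities in months (illustrative)
y = np.array([4.9, 5.0, 5.1, 5.3, 5.6, 5.8])     # yields in percent (illustrative)
print(fit_betas(tau, y))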
Abstract:
We run a standard income convergence analysis for the last decade and confirm an already established finding in the growth economics literature. EU countries are converging. Regions in Europe are also converging. But, within countries, regional disparities are on the rise. At the same time, there is probably no reason for EU Cohesion Policy to be concerned with what happens inside countries. Ultimately, our data show that national governments redistribute well across regions, whether they are fiscally centralised or decentralised. It is difficult to establish whether Structural and Cohesion Funds play any role in recent growth convergence patterns in Europe. Generally, macroeconomic simulations produce better results than empirical tests. It is thus possible that Structural Funds do not fully realise their potential, either because they are not efficiently allocated, are badly managed, are used for the wrong investments, or a combination of all three. The approach to assessing the effectiveness of EU funds should be consistent with the rationale behind the post-1988 EU Cohesion Policy. Standard income convergence analysis is certainly not sufficient and should be accompanied by an assessment of changes in the efficiency of the capital stock in the recipient countries or regions, as well as by a more qualitative assessment. EU funds for competitiveness and employment should be allocated either by looking at each region's capital efficiency, to maximise growth-generating effects, or on a purely competitive basis.
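The income convergence analysis referred to is usually run as a beta-convergence regression of average growth on log initial income per capita, with a negative slope indicating convergence; a minimal sketch with illustrative numbers:

import numpy as np
import statsmodels.api as sm

log_initial_income = np.array([9.2, 9.8, 10.1, 10.4, 10.7])   # log GDP per capita at the start of the period
avg_growth = np.array([0.035, 0.028, 0.022, 0.018, 0.015])    # average annual growth over the period

X = sm.add_constant(log_initial_income)
model = sm.OLS(avg_growth, X).fit()
print(model.params)   # a negative slope ("beta") is the convergence signature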
Abstract:
An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5 degree by 0.5 degree latitude-longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion-compensated interpolation based on the estimation of motion vectors with a hierarchical block matching scheme. Examples are shown of the various stages in the process, together with examples of the usefulness of this type of data in GCM validation.
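A brute-force sketch of kernel-weighted gridding onto a 0.5-degree grid with zenith-angle downweighting, to illustrate the general idea only (the paper's scheme is hierarchical and uses spherical kernel estimators):

import numpy as np

def great_circle_deg(lat1, lon1, lat2, lon2):
    # angular separation in degrees between points on the sphere
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    cosd = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def grid_brightness_temps(lats, lons, temps, zenith, kernel_deg=0.5):
    # weighted average of pixel temperatures on a 0.5 x 0.5 degree grid
    glats = np.arange(-89.75, 90.0, 0.5)
    glons = np.arange(-179.75, 180.0, 0.5)
    grid = np.full((glats.size, glons.size), np.nan)
    zen_w = np.cos(np.radians(zenith))            # simple zenith-angle downweighting
    for i, gl in enumerate(glats):
        for j, gn in enumerate(glons):
            d = great_circle_deg(lats, lons, gl, gn)
            w = zen_w * np.exp(-0.5 * (d / kernel_deg) ** 2)
            w[d > 3 * kernel_deg] = 0.0           # truncate kernel support
            if w.sum() > 0:
                grid[i, j] = np.sum(w * temps) / w.sum()
    return grid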