921 results for Spectrally bounded
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely-accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternate linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing, which resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
Abstract:
All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD), for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and on the object-oriented approach to software engineering. The primary contributions of this dissertation are: 1. The BOD architecture: a modular architecture with each module providing specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans. 2. The BOD development process: an iterative process that alternately scales the agent's capabilities and then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.
The secondary contributions of this dissertation include two implementations of POSH action selection, a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms, several examples of applying BOD idioms to established architectures, an analysis and comparison of the attributes and design trends of a large number of agent architectures, a comparison of biological (particularly mammalian) intelligence to artificial agent architectures, a novel model of primate transitive inference, and many other examples of BOD agents and BOD development.
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with the Bayesian paradigms obtained from optimal design (Mackay, 1992).
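For the monotone case, the adaptive strategy can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it greedily queries the midpoint of the interval with the largest width-times-rise product, which bounds the worst-case uncertainty for an increasing function.

```python
def active_sample_monotone(f, n, a=0.0, b=1.0):
    """Adaptive sampling of a monotonically increasing f on [a, b].

    At each step, query the midpoint of the interval whose uncertainty
    area (width * rise) is largest -- a sketch of sequential optimal
    recovery, not necessarily the paper's exact procedure.
    """
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n - 2):
        # interval with the largest worst-case uncertainty area
        i = max(range(len(xs) - 1),
                key=lambda j: (xs[j + 1] - xs[j]) * (ys[j + 1] - ys[j]))
        m = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, m)
        ys.insert(i + 1, f(m))
    return xs, ys

# Samples concentrate where the monotone function varies fastest:
xs, ys = active_sample_monotone(lambda x: x ** 8, 16)
```

On a function like x^8, most of the budget lands near 1, where the function rises steeply, in contrast with passive (uniform) sampling.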
Abstract:
This paper presents a computation of the $V_\gamma$ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression $\epsilon$-insensitive loss function, and for general $L_p$ loss functions. Finiteness of the $V_\gamma$ dimension is shown, which also proves uniform convergence in probability for regression machines in RKHS subspaces that use the $L_\epsilon$ or general $L_p$ loss functions. This paper presents a novel proof of this result also for the case that a bias is added to the functions in the RKHS.
Abstract:
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the maximum likelihood estimator (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for the above methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(1/\sqrt{n})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
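The greedy procedure can be sketched as follows. This is an illustrative toy with a hypothetical Gaussian candidate class; the mixing-weight schedule alpha_k = 2/(k+1) follows the usual greedy-approximation analysis and is not necessarily the paper's exact choice.

```python
import math
import random

def greedy_mixture(data, candidates, steps):
    """Greedily build a mixture: at each step, blend in the single candidate
    density that most increases the log-likelihood of `data`, with mixing
    weight alpha_k = 2/(k+1).  Returns the mixture evaluated at `data`."""
    vals = [[c(x) for x in data] for c in candidates]
    # start from the best single candidate density
    cur = list(max(vals, key=lambda v: sum(math.log(p) for p in v)))
    for k in range(2, steps + 1):
        a = 2.0 / (k + 1)
        def loglik(v):
            return sum(math.log((1 - a) * c + a * p) for c, p in zip(cur, v))
        best = max(vals, key=loglik)
        cur = [(1 - a) * c + a * p for c, p in zip(cur, best)]
    return cur

# Hypothetical setup: standard-normal data, unit-variance Gaussian candidates
def gauss(mu):
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)]
candidates = [gauss(m / 2) for m in range(-6, 7)]  # means -3.0 .. 3.0
density_at_data = greedy_mixture(data, candidates, steps=5)
```

Each greedy step scans all candidates once, so the cost is linear in the candidate class per added component.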
Abstract:
Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
Abstract:
This thesis presents three important results in visual object recognition based on shape. (1) A new algorithm (RAST: Recognition by Adaptive Subdivisions of Transformation space) is presented that has lower average-case complexity than any known recognition algorithm. (2) It is shown, both theoretically and empirically, that representing 3D objects as collections of 2D views (the "View-Based Approximation") is feasible and affects the reliability of 3D recognition systems no more than other commonly made approximations. (3) The problem of recognition in cluttered scenes is considered from a Bayesian perspective; the commonly used "bounded-error" error measure is demonstrated to correspond to an independence assumption. It is shown that by better modeling the statistical properties of real scenes, objects can be recognized more reliably.
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly uniformly convergent reduced-basis approximations — Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit affine parameter dependence to de-couple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, building on this technique, we develop a robust inverse computational method for very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
Abstract:
Hungary lies entirely within the Carpatho-Pannonian Region (CPR), a dominant tectonic unit of eastern Central Europe. The CPR consists of the Pannonian Basin system, and the arc of the Carpathian Mountains surrounding the lowlands in the north, east, and southeast. In the west, the CPR is bounded by the Eastern Alps, whereas in the south, by the Dinaridic belt. (...)
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert-space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain-size distributions, and the comparison, among different rocks, of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
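For the normal-reference case, the weighted-regression route to the coordinates can be sketched as follows. This is a numpy-based illustration using the probabilists' Hermite family; the basis normalization may differ from the one used in the talk.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def hermite_coords(log_density, order=3, grid=np.linspace(-6, 6, 2001)):
    """Coordinates of a density w.r.t. a Hermite-polynomial basis, with the
    standard normal as reference, via weighted least squares on a grid
    (weights proportional to the reference density)."""
    ref_log = -0.5 * grid ** 2 - 0.5 * np.log(2 * np.pi)   # log N(0, 1)
    y = log_density(grid) - ref_log                        # log-ratio to reference
    # design matrix: He_0 .. He_order evaluated on the grid
    X = np.stack([He.hermeval(grid, np.eye(order + 1)[k])
                  for k in range(order + 1)], axis=1)
    w = np.sqrt(np.exp(ref_log))                           # sqrt of the weights
    coefs, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return coefs

# A N(mu, 1) density has log-ratio mu*x - mu^2/2 to the reference, so the
# He_1 coordinate recovers the classical mean, as noted in the abstract.
mu = 0.7
coords = hermite_coords(lambda x: -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi))
```

Because the weights downweight the tails, the regression avoids the accuracy loss that a direct discretized scalar product suffers where the reference density is tiny.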
Abstract:
The call-center industry is one of the fastest-growing sectors in the developed world: technological advances have made its use increasingly widespread, with comprehensive services accessible 24 hours a day. Telephone operators, or tele-operators, in this industry face working days in which they are exposed to constant use of the voice, permanent use of communication headsets, and confinement in workstations that are delimited but not isolated, thereby increasing the prevalence of symptoms such as otorhinolaryngological ones. The objective of this study is to identify the prevalence of otorhinolaryngological symptoms — voice alterations, hearing impairment, and upper-airway symptoms — during the working day among the workers of a call center of a prestigious insurance company in Bogotá, Colombia, as well as to identify the association of demographic, organizational, and biological factors with these symptoms, and to analyze the company's work environment and the relation of the symptoms to measurements of noise, temperature, and humidity. The study population consisted of 81 tele-operators, of whom 61 (75.3%) were women. Upper respiratory diseases showed a prevalence of 36%; 69 tele-operators (85%) reported at least one voice symptom, while only 12 (15%) reported none. As for hearing loss, only 5 (6.2%) reported diminished hearing acuity.
Abstract:
The purpose of this study is to analyze and compare dry ports in three countries — Colombia, Mexico, and Spain — and thereby determine the keys, or tools, to their success. The research was documentary and theoretical, drawing on academic research and documents from supranational entities, legislative documents of various kinds for the three countries studied, private studies, and statistics produced by the dry ports themselves or their concession companies. The resulting document is structured as History, References, Legislation, and Description. The first stage highlights the focus on port development and on promoting rail as a means of transport and a commercial tool; the second confirms the difficulty such projects face in Colombia owing to a centralist public sector lacking vision and a precarious participation of the private sector; the third ratifies this, the main difference being the logistical focus of the legislation, starting with the constitution itself, and its presence or absence in each country's legal framework; and the study closes with the outcome of these prior elements condensed into tangible results in Mexico and Spain, and only illusions and failures in Colombia. It is concluded that a well-structured legal framework, strong public- and private-sector investment, and sustained political will are the keys to the success of these projects and of everything tied to them.
Abstract:
One of the most singular problems of today's cities is the individuality of citizens. This tendency also manifests itself in citizens' recent preference for new types of housing, delimited by physical barriers and inhabited by relatively homogeneous groups. These territories, known as "conjuntos cerrados" (gated communities), have become a relevant topic for understanding the city as a phenomenon of contemporary reality. This situation is present in Bogotá; however, this article is not limited to studying such communities. In a broader sense, it seeks to delve into the socio-spatial problems of the urban phenomenon through an analysis of the "improper use of enclosure". This practice is contrasted with the development of Bogotá's public space over the last fifteen years. The general approach analyzes this real and symbolic tension. Finally, the use of complaints as a quantitative unit of measurement should be highlighted, as it enables comparative social and spatial analyses. This method can contribute useful mechanisms for understanding this and other urban realities.
Abstract:
The evolution of the drug-trafficking network known as the 'Cartel del Norte del Valle' is studied using network analysis methods. We found that the average path length between any pair of its members was bounded by 4 — an attribute of small-world networks. In this tightly connected network, informational shocks induce fear and unleash searches for threatening nodes along the available paths. Lethal violence ensues in clusters of increasing size that fragment the network without, however, compromising the survival of the largest component, which proved resilient to massive violence. In spite of its success from the point of view of head counting, the US socialization program for drug traffickers did not effectively change the cyclical dynamics of the drug-dealing business: war survivors took over what was left of the old network, initiating a new cycle of business and violence.
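The small-world diagnostic in the first finding amounts to computing the mean shortest-path length, for example by breadth-first search from every node. The graph below is a hypothetical toy; the actual cartel network data is not reproduced here.

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all ordered pairs of connected nodes,
    via breadth-first search from each node (unweighted, undirected graph)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Hypothetical toy network: a 6-node ring plus one shortcut between 2 and 5
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
ring[2].append(5)
ring[5].append(2)
L = avg_path_length(ring)  # well below the bound of 4 reported for the cartel
```

Adding even a single shortcut to the ring pulls the average path length down, which is the mechanism behind small-world averages in much larger networks.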
Abstract:
The purpose of this theoretical research is, based on the study of business-environment variables such as business models, behavior, culture, and organizational complexity, to structure an article that invites reflection on how leaders' behavior affects management models. Based on the information collected, functional patterns are observed in the behavior of organizations, derived from the behavioral models of those who lead them. The implications could be judged favorable or unfavorable insofar as they are a relevant condition for the endurance of these organizations over time.