Abstract:
This project exemplifies the point where theory and practice come together. It shows how the socio-educational work carried out in a facility providing residential care to young people (of legal age) under an open-regime judicial measure can fit perfectly with what resilience theory argues. The resilient "casita" (little house) model is used, which provides five possible areas of intervention. The study sets out the reasons why the Pis de Joves d'Emancipació (Youth Emancipation Flat) can be defined as a Resilient Institution.
Abstract:
The Atomic Shell approximation within the theory of Quantum Molecular Similarity is described. Starting from theoretical data alone, a relationship between molecular structure and biological activity has been found for several sets of molecules. The theoretical aspects of Quantum Molecular Similarity are described, together with some examples of its application.
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
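A statistical learning algorithm of the kind described above is commonly implemented as a recursive belief update in the direction of the latest forecast error. The sketch below is a minimal, hypothetical one-coefficient illustration of such a constant-gain learning rule, not the paper's actual model; the data generating process y = 2x + noise and all parameter values are made up for the example.

```python
import random

def learning_path(gain=0.05, true_b=2.0, steps=2000, seed=1):
    """Constant-gain learning of the coefficient in y = true_b * x + noise.

    Each period the agent observes (x, y), computes the forecast error
    given its current belief b, and revises b a small step in the
    direction that reduces that error, as in adaptive-learning models.
    """
    rng = random.Random(seed)
    b = 0.0  # initial belief about the unknown coefficient
    for _ in range(steps):
        x = rng.gauss(0.0, 1.0)
        y = true_b * x + rng.gauss(0.0, 0.1)
        forecast_error = y - b * x
        b += gain * forecast_error * x  # belief-revision step
    return b
```

With a constant gain the belief never settles exactly on the truth but fluctuates around it, which is one channel through which learning adds volatility and persistence relative to rational expectations.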
Abstract:
A graphical processing unit (GPU) is a hardware device normally used to manipulate computer memory for the display of images. GPU computing is the practice of using a GPU device for scientific or general purpose computations that are not necessarily related to the display of images. Many problems in econometrics have a structure that allows for successful use of GPU computing. We explore two examples. The first is simple: repeated evaluation of a likelihood function at different parameter values. The second is a more complicated estimator that involves simulation and nonparametric fitting. We find speedups from 1.5 up to 55.4 times, compared to computations done on a single CPU core. These speedups can be obtained with very little expense, energy consumption, and time dedicated to system maintenance, compared to equivalent performance solutions using CPUs. Code for the examples is provided.
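The first example's structure, repeated evaluation of a likelihood function at different parameter values, is embarrassingly parallel: each parameter point can be mapped to an independent GPU thread. A minimal CPU sketch of that structure (a Gaussian log-likelihood over a small parameter grid, with hypothetical data; pure Python stands in for the GPU kernel):

```python
import math

def gaussian_loglike(mu, sigma, data):
    """Log-likelihood of i.i.d. N(mu, sigma^2) data at one parameter point."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -0.5 * n * math.log(2 * math.pi * sigma ** 2) - ss / (2 * sigma ** 2)

def evaluate_grid(data, mus, sigmas):
    """Repeated evaluation over a parameter grid; on a GPU each (mu, sigma)
    pair would be handled by an independent thread."""
    return {(mu, s): gaussian_loglike(mu, s, data) for mu in mus for s in sigmas}

data = [0.9, 1.1, 1.0, 0.8, 1.2]
grid = evaluate_grid(data, mus=[0.0, 1.0], sigmas=[0.5, 1.0])
best = max(grid, key=grid.get)  # maximum-likelihood point on the grid
```

Because the evaluations share no state, the speedup from moving this loop to a GPU scales with the number of grid points, which is the structure the paper exploits.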
Abstract:
In March 2004 the Observatorio de Política Exterior Europea published, in digital form, a special monograph on Spain in Europe (1996-2004). Its aim was to analyse Spain's agenda and strategy in international relations during the José María Aznar period. As the title of that publication indicated, one of the starting assumptions of the analysis was the Europeanisation of Spain's international activity. Was that so? Did Aznar's Spain view the world, and approach it, through Brussels? That publication was well received, judging by the visits it attracted and, above all, by the institutions that asked to link it from their web pages; notably, EUObserver published its introductory article, in English, as a commentary, "Aznar: thinking locally, acting in Europe" (rated by EUObserver as reading of the highest relevance). The fact that the 2004 elections were held three days after the tragic events of 11-M markedly increased interest in Spain and in its European and international projection. The present publication is a second exercise of that kind, in this case analysing the period of the Zapatero government (2004-2008). Once again, the starting assumption (the Europeanisation of the agenda and of the method) is in the analysts' minds. And once again the articles collected in this publication "triangulate" the analysis: Spain and Europe are two vertices (more or less distant, in substance and in form) that the authors handle in their case analyses (the third vertex).
Abstract:
The motivation for this work arose from the need to decode Golay codes, a type of perfect linear code, in the mathematical package Sage. Sage is a freely distributed software package, under active development and growing in popularity, intended to bring together the functionality of proprietary mathematical analysis packages, symbolic calculators, and computer algebra systems such as Mathematica, Matlab, Maple, and Magma. This document describes the implementation carried out, highlighting its most relevant aspects. To that end, it gives an introduction to linear codes and their mathematical aspects, covering both definitions and properties, and to the Sage package.
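Syndrome decoding of a perfect linear code, the mechanism implemented for the Golay codes, can be illustrated on the [7,4] Hamming code, the smallest perfect binary code. The sketch below is plain Python rather than Sage, and is only an illustration of the principle, not the reported implementation.

```python
# Parity-check matrix H of the [7,4] Hamming code: column i (1-based)
# is the binary representation of i, so the syndrome of a single-bit
# error directly names the flipped position.
H = [[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)]

def syndrome(word):
    """Syndrome s = H * word^T over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def decode(word):
    """Correct the (at most one) bit error that a perfect
    single-error-correcting code guarantees to locate."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]  # 0 means no error detected
    if pos:
        word = list(word)
        word[pos - 1] ^= 1  # flip the erroneous bit back
    return word
```

For a perfect code every received word lies within the correction radius of exactly one codeword, so this table-free decoding is exhaustive; the binary Golay code plays the same role with a 3-error correction radius.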
Abstract:
The radiation distribution function used by Domínguez and Jou [Phys. Rev. E 51, 158 (1995)] has been recently modified by Domínguez-Cascante and Faraudo [Phys. Rev. E 54, 6933 (1996)]. However, in these studies neither distribution was written in terms of directly measurable quantities. Here a solution to this problem is presented, and we also propose an experiment that may make it possible to determine the distribution function of nonequilibrium radiation experimentally. The results derived do not depend on a specific distribution function for the matter content of the system.
Abstract:
In this technical report, we approach one of the practical aspects of representing users' interests from their tagging activity, namely the categorization of tags into high-level categories of interest. The reason is that representing user profiles on the basis of the myriad of tags available on the Web is unfeasible from various practical perspectives: mainly the unavailability of data to reliably and accurately measure interests across such a fine-grained categorization and, should the data be available, its overwhelming computational intractability. Motivated by this, our study presents the results of a categorization process whereby a collection of tags posted at BibSonomy (http://www.bibsonomy.org) is classified into 5 categories of interest. The methodology used to conduct this categorization is in line with other works in the field.
Abstract:
In the scenario of social bookmarking, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. In this technical report, we approach one of the practical aspects of representing users' interests from their tagging activity, namely the categorization of tags into high-level categories of interest. The reason is that representing user profiles on the basis of the myriad of tags available on the Web is unfeasible from various practical perspectives: mainly the unavailability of data to reliably and accurately measure interests across such a fine-grained categorization and, should the data be available, its overwhelming computational intractability. Motivated by this, our study presents the results of a categorization process whereby a collection of tags posted at Delicious (http://delicious.com) is classified into 200 subcategories of interest.
Abstract:
Bimodal dispersal probability distributions with characteristic distances differing by several orders of magnitude have been derived and favorably compared to observations by Nathan [Nature (London) 418, 409 (2002)]. For such bimodal kernels, we show that two-dimensional molecular dynamics computer simulations are unable to yield accurate front speeds. Analytically, the usual continuous-space random walks (CSRWs) are applied to two dimensions. We also introduce discrete-space random walks and use them to check the CSRW results (because of the inefficiency of the numerical simulations). The physical results reported are shown to predict front speeds high enough to possibly explain Reid's paradox of rapid tree migration. We also show that, for a time-ordered evolution equation, fronts are always slower in two dimensions than in one dimension and that this difference is important both for unimodal and for bimodal kernels.
Abstract:
Standard practice in Bayesian VARs is to formulate priors on the autoregressive parameters, but economists and policy makers actually have priors about the behavior of observable variables. We show how this kind of prior can be used in a VAR under strict probability theory principles. We state the inverse problem to be solved and we propose a numerical algorithm that works well in practical situations with a very large number of parameters. We prove various convergence theorems for the algorithm. As an application, we first show that the results in Christiano et al. (1999) are very sensitive to the introduction of various priors that are widely used. These priors turn out to be associated with undesirable priors on observables. But an empirical prior on observables helps clarify the relevance of these estimates: we find much higher persistence of output responses to monetary policy shocks than the one reported in Christiano et al. (1999) and a significantly larger total effect.
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved here by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
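The SMC ("particle filtering") approximation mentioned above follows the standard bootstrap recursion: propagate particles through the state dynamics, weight them by the observation likelihood, and resample. A generic scalar sketch of that recursion (a hypothetical Gaussian random-walk state with noisy observations, not the multiuser model of the paper):

```python
import math
import random

def bootstrap_filter(observations, n_particles=500, state_sd=0.5,
                     obs_sd=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + v_t, y_t = x_t + w_t,
    with v_t ~ N(0, state_sd^2) and w_t ~ N(0, obs_sd^2)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the state transition.
        particles = [p + rng.gauss(0.0, state_sd) for p in particles]
        # 2. Weight by the observation likelihood N(y; p, obs_sd^2).
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3. Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The same three steps apply when the state is a random finite set on a hybrid space; only the transition and likelihood models change, which is why SMC is a natural fit once the recursions lose their closed form.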
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed and shown to be able to attain full transmission diversity. We study the iterative threshold performance of those codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1 α2)^-1, where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.