47 results for Parametric search

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 20.00%

Abstract:

We provide guidelines for deriving new projective hash families of cryptographic interest. Our main building blocks are so-called group action systems; we explore which properties of these mathematical primitives may lead to the construction of cryptographically useful projective hash families. We point out several directions towards new constructions, deviating from the known proposals arising from Cramer and Shoup's seminal work.

Relevance: 20.00%

Abstract:

This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may lie outside the interval [0,1]. We solve the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
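The [0,1] violation is easy to reproduce: a bias-reducing (fourth-order) kernel takes negative values, so a Nadaraya-Watson estimate of a participation probability can exceed 1. A minimal sketch on toy data (not the estimator or data of Martins 2001):

```python
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def fourth_order_kernel(u):
    # Bias-reducing (fourth-order) Gaussian kernel; negative for |u| > sqrt(3)
    return 0.5 * (3.0 - u * u) * gauss(u)

def nw_probability(x0, xs, ys, h, kernel):
    # Nadaraya-Watson estimate of P(y = 1 | x = x0)
    w = [kernel((x0 - x) / h) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Toy participation data: y jumps from 0 to 1 as x grows
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
ys = [0, 0, 0, 1, 1, 1, 1, 1]

# Negative kernel weights can push the estimate above 1 ...
p_raw = nw_probability(0.9, xs, ys, 0.25, fourth_order_kernel)
# ... so it must be repaired, e.g. by truncation or, as Klein and Spady
# suggest, by local smoothing with an ordinary positive kernel
p_fixed = min(1.0, max(0.0, p_raw))
```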

Relevance: 20.00%

Abstract:

We accomplish two goals. First, we provide a non-cooperative foundation for the use of the Nash bargaining solution in search markets. This finding should help to close the rift between the search and the matching-and-bargaining literature. Second, we establish that the diversity of quality offered (at an increasing price-quality ratio) in a decentralized market is an equilibrium phenomenon - even in the limit as search frictions disappear.

Relevance: 20.00%

Abstract:

We study pair-wise decentralized trade in dynamic markets with homogeneous, non-atomic buyers and sellers who each wish to exchange one unit. Pairs of traders are randomly matched and bargain over a price under rules that offer the freedom to quit the match at any time. Market equilibria (prices and trades over time) are characterized. We explore the asymptotic behavior of prices and trades as frictions (search costs and impatience) vanish, and the conditions for (non-)convergence to Walrasian prices. As a by-product of independent interest, we present a self-contained theory of non-cooperative bargaining with two-sided, time-varying outside options.

Relevance: 20.00%

Abstract:

Study based on a stay at Imperial College London between July and November 2006. This work investigates the most suitable specimen geometry for characterizing the intralaminar fracture toughness of woven composite laminates. The objective is to ensure crack propagation without the specimen failing first by any other damage mechanism, so as to allow the experimental characterization of the intralaminar fracture toughness of woven composite laminates. To this end, a parametric analysis of different specimen types was carried out using the finite element (FE) method combined with the virtual crack closure technique (VCCT). The specimen geometries analyzed correspond to the compact tension (CT) test specimen and several variations of it, such as the extended compact tension (ECT), widened compact tension (WCT), tapered compact tension (TCT) and doubly-tapered compact tension (2TCT) specimens. From these analyses, several conclusions were drawn in order to obtain the most suitable specimen geometry for characterizing the intralaminar fracture toughness of woven composite laminates. In addition, a series of experimental tests was carried out to validate the results of the parametric analyses. Good agreement was found between the numerical and experimental results, despite the presence of unforeseen effects during the experimental tests.
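As a sketch of the crack-growth criterion behind such FE/VCCT analyses, the standard one-step mode-I VCCT formula can be written as follows (all numerical values are hypothetical, not taken from the study):

```python
def vcct_mode_i(f_y, delta_v, da, thickness):
    """One-step VCCT estimate of the mode-I energy release rate:
    G_I = F_y * dv / (2 * da * t), with F_y the nodal force at the crack
    tip, dv the opening displacement one element behind the tip, da the
    element length and t the laminate thickness."""
    return (f_y * delta_v) / (2.0 * da * thickness)

# Hypothetical values for a CT-type specimen (SI units, illustrative only)
G_I = vcct_mode_i(f_y=120.0, delta_v=2.0e-4, da=1.0e-3, thickness=2.0e-3)

# The geometry is adequate if the crack propagates, i.e. G_I reaches the
# (assumed) intralaminar fracture toughness G_Ic, before any other damage
# mechanism triggers
G_Ic = 5.0e3  # J/m^2, assumed
propagates = G_I >= G_Ic
```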

Relevance: 20.00%

Abstract:

A growing literature integrates theories of debt management into models of optimal fiscal policy. One promising theory argues that the composition of government debt should be chosen so that fluctuations in the market value of debt offset changes in expected future deficits. This complete market approach to debt management is valid even when the government only issues non-contingent bonds. A number of authors conclude from this approach that governments should issue long-term debt and invest in short-term assets. We argue that the conclusions of this approach are too fragile to serve as a basis for policy recommendations. This is because bonds at different maturities have highly correlated returns, which makes the determination of the optimal portfolio ill-conditioned. To make this point concrete, we examine the implications of this approach to debt management in various models, both analytically and using numerical methods calibrated to the US economy. We find that the complete market approach recommends asset positions that are huge multiples of GDP. Introducing persistent shocks or capital accumulation only worsens this problem. Increasing the volatility of interest rates through habits partly reduces the size of the positions. Across these simulations we find no presumption that governments should issue long-term debt: policy recommendations can easily be reversed through small perturbations in the specification of shocks or small variations in the maturity of the bonds issued. We further extend the literature by removing the assumption that the government costlessly repurchases all outstanding debt every period. This exacerbates the size of the required positions, worsens their volatility and in some cases produces instability in debt holdings. We conclude that it is very difficult to insulate fiscal policy from shocks by using the complete markets approach to debt management.
Given the limited variability of the yield curve, using maturities is a poor substitute for state-contingent debt. As a result, the positions recommended by this approach conflict with a number of features that we believe are important in making bond markets incomplete, e.g. transaction costs, liquidity effects, etc. Until these features are fully incorporated, we remain in search of a theory of debt management capable of providing robust policy insights.
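The ill-conditioning argument can be illustrated with a stylized two-maturity example (all figures hypothetical): when the returns of two bonds are nearly collinear, the holdings that replicate even a modest deficit shock become huge offsetting positions.

```python
# Two bonds with nearly collinear state-contingent returns, as with
# neighbouring maturities (all figures hypothetical). We solve for the
# holdings b = (b1, b2), in fractions of GDP, that offset the deficit
# shock d in each state: R @ b = d.

def solve2(a11, a12, a21, a22, d1, d2):
    # Cramer's rule for a 2x2 system
    det = a11 * a22 - a12 * a21
    return (d1 * a22 - d2 * a12) / det, (a11 * d2 - a21 * d1) / det

R = [[1.020, 1.0410],   # gross returns of short and long bond, state 1
     [1.019, 1.0395]]   # ... state 2 (highly correlated columns)
d = [0.02, 0.03]        # deficits to offset, as fractions of GDP

b1, b2 = solve2(R[0][0], R[0][1], R[1][0], R[1][1], d[0], d[1])
# det(R) is tiny, so the hedge takes huge offsetting long/short positions
leverage = abs(b1) + abs(b2)
```

A small perturbation of any entry of R flips the signs and sizes of the positions, which is the fragility the abstract describes.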

Relevance: 20.00%

Abstract:

CODEX SEARCH is an information retrieval engine specialized in immigration law, based on linguistic tools and knowledge. A retrieval engine or Information Retrieval System (IRS) is software capable of locating information in large document collections (a non-trivial environment) in electronic format. A preliminary study showed that immigration law is a discursive domain in which it is difficult to express an information need in terms of the formal query handled by current retrieval systems. Therefore, building an efficient IRS for this domain requires more than a traditional IR model, i.e. comparing the query terms with those of the answer, essentially because the terms do not express implications and there need not be a 1-to-1 relationship between them. The proposed linguistic solution incorporates the specialist's knowledge by integrating a library of cases into the system. Cases are examples of procedures applied by experts to solve problems that have occurred in practice and that ended in success or failure. The results obtained in this first phase are very encouraging, but further research is needed to improve the performance of the prototype, which can be accessed at http://161.116.36.139/~codex/.
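The case-library idea can be sketched as retrieval by whole-case similarity rather than term-by-term query matching. This is a minimal hypothetical illustration (Jaccard similarity, invented case descriptions), not the CODEX SEARCH implementation:

```python
def jaccard(a, b):
    # Set overlap between a query and a stored case description
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical case library: past immigration-law problems and outcomes
cases = [
    ({"residence", "permit", "renewal"}, "success"),
    ({"visa", "student", "extension"}, "success"),
    ({"work", "permit", "expired"}, "failure"),
]

def best_case(query_terms):
    # Retrieve the most similar expert case instead of requiring a
    # term-by-term match between query and documents
    return max(cases, key=lambda case: jaccard(query_terms, case[0]))

terms, outcome = best_case({"permit", "renewal", "deadline"})
```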

Relevance: 20.00%

Abstract:

This paper examines the antecedents and innovation consequences of the methods firms adopt in organizing their search strategies. From a theoretical perspective, organizational search is described using a typology that shows how firms implement exploration and exploitation search activities that span their organizational boundaries. This typology includes three models of implementation: ambidextrous, specialized, and diversified implementation. From an empirical perspective, the paper examines the performance consequences when applying these models, and compares their capacity to produce complementarities. Additionally, since firms' choices in matters of organizational search are viewed as endogenous variables, the paper examines the drivers affecting them and identifies the importance of firms' absorptive capacity and diversified technological opportunities in determining these choices. The empirical design of the paper draws on new data for manufacturing firms in Spain, surveyed between 2003 and 2006.

Relevance: 20.00%

Abstract:

We present a real data set of claim amounts where costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods. We also propose Bayesian ways to select the bandwidth and transformation parameters in the univariate case. We indicate how to compare the results of alternative methods, looking both at the shape of the density over its whole domain and at the density estimates in the right tail.
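Transformation kernel density estimation can be sketched with the simplest member of the family, a log transformation: estimate the density on the transformed scale and map it back with the Jacobian. (The paper considers a broader class of transformations and Bayesian parameter selection; the claim amounts below are hypothetical.)

```python
import math

def gaussian_kde(z, data, h):
    # Standard Gaussian kernel density estimate at point z
    n = len(data)
    return sum(math.exp(-0.5 * ((z - zi) / h) ** 2) for zi in data) / (
        n * h * math.sqrt(2.0 * math.pi))

def transformed_density(x, claims, h):
    """Transformation KDE with a log transform: estimate the density of
    z = log(x), then map back to the original scale with the Jacobian 1/x."""
    logs = [math.log(c) for c in claims]
    return gaussian_kde(math.log(x), logs, h) / x

# Hypothetical positive claim amounts with a heavy right tail
claims = [120, 250, 300, 450, 800, 1200, 5000, 15000]
f_mid = transformed_density(500.0, claims, h=0.8)
f_tail = transformed_density(100000.0, claims, h=0.8)
```

Working on the log scale keeps the estimate supported on positive amounts and behaves far better in the right tail than a plain kernel estimate on the raw data.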

Relevance: 20.00%

Abstract:

This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The non-parametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation among a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
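The parametric step, fitting a lognormal to claim severities, reduces to a closed-form maximum-likelihood computation. The paper's implementation is in R; this is a sketch of the same idea in Python with hypothetical claim amounts:

```python
import math

def fit_lognormal(xs):
    # Maximum likelihood for a lognormal: mu and sigma are the mean and
    # (population) standard deviation of the log data
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((z - mu) ** 2 for z in logs) / n)
    return mu, sigma

def lognormal_pdf(x, mu, sigma):
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2.0 * math.pi))

# Hypothetical damage-claim amounts
claims = [150, 320, 480, 950, 2100, 3800, 9000]
mu, sigma = fit_lognormal(claims)
density_at_1000 = lognormal_pdf(1000.0, mu, sigma)
```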

Relevance: 20.00%

Abstract:

We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton (TN) methods, which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
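The FMG/OPT idea of scaling the coarse-grid correction via a line search can be sketched with a backtracking (Armijo) rule on a toy objective; the actual optical-flow energies and multilevel transfer operators are not reproduced here:

```python
def armijo_scale(f, grad, x, d, alpha0=1.0, beta=0.5, c=1e-4):
    """Backtracking (Armijo) line search: find a step alpha so that adding
    alpha * d to x gives a sufficient decrease of f."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    alpha = alpha0
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# Toy quadratic standing in for the discretized optical-flow energy
f = lambda v: (v[0] - 3.0) ** 2 + 4.0 * (v[1] + 1.0) ** 2
grad = lambda v: [2.0 * (v[0] - 3.0), 8.0 * (v[1] + 1.0)]

x = [0.0, 0.0]
d = [-g for g in grad(x)]            # stand-in for a coarse-grid correction
alpha = armijo_scale(f, grad, x, d)  # scale the correction before applying it
x_new = [xi + alpha * di for xi, di in zip(x, d)]
```

The point of the line search is that a coarse-grid correction applied with a fixed unit step can overshoot; scaling it guarantees descent on the fine grid.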

Relevance: 20.00%

Abstract:

Land cover classification is a key research field in remote sensing and land change science, as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land cover classification remains a difficult task, and it is especially challenging in heterogeneous tropical landscapes, where such maps are nonetheless of great importance. The present study aims to establish an efficient classification approach to accurately map all broad land cover classes in a large, heterogeneous tropical area of Bolivia, as a basis for further studies (e.g., land cover-land use change). Specifically, we compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbour and four different support vector machines - SVM), and hybrid classifiers, using both hard and soft (fuzzy) accuracy assessments. In addition, we test whether the inclusion of a textural index (homogeneity) in the classifications improves their performance. We classified Landsat imagery for two dates corresponding to dry and wet seasons and found that non-parametric, and particularly SVM, classifiers outperformed both parametric and hybrid classifiers. We also found that the use of the homogeneity index along with reflectance bands significantly increased the overall accuracy of all the classifications, but particularly of the SVM algorithms. We observed that improvements in producer’s and user’s accuracies through the inclusion of the homogeneity index differed depending on the land cover class. Early-growth/degraded forests, pastures, grasslands and savanna were the classes most improved, especially with the SVM radial basis function and SVM sigmoid classifiers, though with both classifiers all land cover classes were mapped with producer’s and user’s accuracies of around 90%.
Our approach seems very well suited to accurately map land cover in tropical regions, thus having the potential to contribute to conservation initiatives, climate change mitigation schemes such as REDD+, and rural development policies.
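The homogeneity index used alongside the reflectance bands is a standard grey-level co-occurrence matrix (GLCM) texture measure. A minimal sketch for one pixel offset (window contents hypothetical, not the study's data):

```python
def glcm_homogeneity(img, dx=1, dy=0):
    """Homogeneity of the grey-level co-occurrence matrix for one offset:
    the sum of P(i, j) / (1 + |i - j|) over co-occurring grey-level pairs."""
    counts, total = {}, 0
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pair = (img[r][c], img[r2][c2])
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    return sum(n / total / (1 + abs(i - j)) for (i, j), n in counts.items())

uniform = [[5, 5], [5, 5]]   # perfectly homogeneous window
noisy = [[0, 7], [7, 0]]     # strong local grey-level contrast

h_uniform = glcm_homogeneity(uniform)   # 1.0 for a constant window
h_noisy = glcm_homogeneity(noisy)       # much lower: pairs differ by 7 levels
```

High values flag smooth cover (e.g. water, closed-canopy forest) and low values textured cover, which is the extra information the classifiers exploit.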

Relevance: 20.00%

Abstract:

One of the unresolved questions of modern physics is the nature of Dark Matter. Strong experimental evidence suggests that this elusive component makes up a significant share of the energy budget of the Universe, without, however, providing conclusive information about its nature. The most plausible scenario is that of weakly interacting massive particles (WIMPs), a large class of non-baryonic Dark Matter candidates with masses typically between a few tens of GeV and a few TeV, and cross sections of the order of the weak interaction. The search for Dark Matter particles with very-high-energy gamma-ray Cherenkov telescopes is based on the model that WIMPs can self-annihilate, producing detectable species such as photons. These photons are very energetic and, since they are not deflected by the Universe's magnetic fields, they can be traced straight back to the source of their creation. The downside of the approach is the large amount of background radiation from conventional astrophysical objects, which usually hides clear signals of Dark Matter particle interactions. That is why a good choice of observational targets is the crucial factor in the search for Dark Matter. With MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov Telescopes), a two-telescope ground-based system located on La Palma, Canary Islands, we choose objects such as dwarf spheroidal satellite galaxies of the Milky Way and galaxy clusters for our search. The idea is to increase the chances of WIMP detection by pointing at objects that are relatively close, contain a large amount of Dark Matter, and suffer as little contamination from stars as possible. At the moment, several observation projects are ongoing and their analyses are being performed.

Relevance: 20.00%

Abstract:

Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, not only the continuous scientific advances in underwater robotics but also the increasing number and complexity of subsea missions call for the automation of submarine processes. This paper proposes a high-level control system for solving the action-selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search (RLDPS) methods for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility in simulated experiments using the model of our underwater robot URIS in a target-following task.
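Direct policy search optimizes the policy parameters themselves, without ever estimating a value function. As an illustrative stand-in for the paper's RLDPS methods (which are not reproduced here), the sketch below runs a random search over the gain of a proportional target-following policy on a hypothetical 1-D vehicle model:

```python
import random

def episode_return(gain, steps=50):
    # Negative cumulative squared error of the policy a = -gain * error
    # on a toy 1-D target-following model
    error, total = 5.0, 0.0
    for _ in range(steps):
        action = -gain * error
        error = error + 0.5 * action   # simplified vehicle response
        total -= error * error
    return total

def direct_policy_search(trials=200, seed=0):
    # Search directly in policy-parameter space, keeping the best gain;
    # no value function is ever estimated
    rng = random.Random(seed)
    best_gain, best_ret = 0.0, episode_return(0.0)
    for _ in range(trials):
        gain = rng.uniform(0.0, 4.0)
        ret = episode_return(gain)
        if ret > best_ret:
            best_gain, best_ret = gain, ret
    return best_gain, best_ret

gain, ret = direct_policy_search()
```

Gradient-based policy search methods replace the random sampling with an estimated gradient of the episode return, but the parameter-space view of learning is the same.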