31 results for Dynamic identification

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 40.00%

Abstract:

The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved here by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers that are based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
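
The SMC approximation invoked here follows the generic particle-filtering recursion. Below is a minimal sketch of a sequential importance resampling filter for a toy scalar state-space model; the model, function name, and parameters are illustrative assumptions and are far simpler than the paper's hybrid random-finite-set receiver.

```python
import numpy as np

def particle_filter(observations, n_particles=1000,
                    sigma_state=0.5, sigma_obs=1.0):
    """Minimal SIR particle filter for a scalar random-walk state
    observed in Gaussian noise: x_t = x_{t-1} + v_t, y_t = x_t + w_t.
    Illustrates the generic SMC recursion only (hypothetical model)."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)  # draws from the prior
    estimates = []
    for y in observations:
        # Propagate particles through the state transition (prediction)
        particles = particles + rng.normal(0.0, sigma_state, n_particles)
        # Weight by the observation likelihood (update)
        weights = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        weights /= weights.sum()
        # Posterior-mean estimate at this step
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)
```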

Relevance: 40.00%

Abstract:

We describe the use of dynamic combinatorial chemistry (DCC) to identify ligands for the stem-loop structure located at the exon 10-5'-intron junction of Tau pre-mRNA, which is involved in the onset of several tauopathies, including frontotemporal dementia with Parkinsonism linked to chromosome 17 (FTDP-17). A series of ligands that combine the small aminoglycoside neamine with heteroaromatic moieties (azaquinolone and two acridines) have been identified by using DCC. These compounds effectively bind the stem-loop RNA target (EC(50), the concentration required for 50% RNA response: 2-58 μM), as determined by fluorescence titration experiments. Importantly, most of them are able to stabilize both the wild-type and the +3 and +14 mutated sequences associated with the development of FTDP-17 without producing a significant change in the overall structure of the RNA (as analyzed by circular dichroism (CD) spectroscopy), which is a key factor for recognition by the splicing regulatory machinery. A good correlation has been found between the affinity of the ligands for the target and their ability to stabilize the RNA secondary structure.

Relevance: 30.00%

Abstract:

This study focuses on identification and exploitation processes among Finnish design entrepreneurs (i.e., self-employed industrial designers). More specifically, this study strives to find out what design entrepreneurs do when they create new ventures, how venture ideas are identified, and how entrepreneurial processes are organized to identify and exploit such venture ideas in the given industrial context. Indeed, what do educated and creative individuals do when they decide to create new ventures, where do the venture ideas originally come from, and how are venture ideas identified and developed into viable business concepts that are introduced to the market? From an academic perspective, there is a need to increase our understanding of the interaction between the identification and exploitation of emerging ventures, in this and other empirical contexts. Rather than assuming that venture ideas are constant over time, this study examines how emerging ideas are adjusted to enable exploitation in dynamic market settings. It builds on insights from previous entrepreneurship process research. The interpretations from the theoretical discussion build on the assumption that the subprocesses of identification and exploitation interact and, moreover, are closely entwined with each other (e.g., McKelvie & Wiklund, 2004; Davidsson, 2005). This explanation challenges the common assumption that entrepreneurs first identify venture ideas and then exploit them (e.g., Shane, 2003). The assumption is that exploitation influences identification, just as identification influences exploitation. Based on interviews with design entrepreneurs and external actors (e.g., potential customers, suppliers, and collaborators), it appears that the identification and exploitation of venture ideas are carried out in close interaction among a number of actors, rather than by entrepreneurs alone. Given their available resources, design entrepreneurs prefer to focus on identification-related activities and to find external actors who take care of exploitation-related activities. The involvement of external actors may have a direct impact on decision-making and on various activities along the processes of identification and exploitation, which is something that previous research does not particularly emphasize. For instance, Bhave (1994) suggests both operative and strategic feedback from the market, but does not explain how external parties are actually involved in decision-making and in carrying out various activities along the entrepreneurial process.

Relevance: 30.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of non-overlapping, frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene, two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
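
The linear sweep described above can be sketched as follows. This is a simplified illustration assuming additive scores, and it ignores frame compatibility, strand, and the external Gene Model; the function and variable names are hypothetical.

```python
def assemble_best_gene_score(exons):
    """exons: list of (acceptor, donor, score) tuples.
    Returns the score of the best chain of non-overlapping exons.
    Sorting costs O(n log n); the merged sweep itself is linear."""
    order_by_start = sorted(range(len(exons)), key=lambda k: exons[k][0])
    order_by_end = sorted(range(len(exons)), key=lambda k: exons[k][1])
    best_at = [0.0] * len(exons)  # best gene score ending at exon k
    best_upstream = 0.0           # best score among exons already closed
    i = 0
    for k in order_by_start:
        start = exons[k][0]
        # Fold in every exon whose donor lies strictly upstream of this
        # acceptor; each exon is folded in exactly once over the whole scan.
        while i < len(order_by_end) and exons[order_by_end[i]][1] < start:
            best_upstream = max(best_upstream, best_at[order_by_end[i]])
            i += 1
        # Best gene ending at exon k = exon score + best compatible prefix
        best_at[k] = exons[k][2] + best_upstream
    return max(best_at, default=0.0)
```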

Relevance: 30.00%

Abstract:

We present a new technique for audio signal comparison based on tonal subsequence alignment and its application to detect cover versions (i.e., different performances of the same underlying musical piece). Cover song identification is a task whose popularity in the Music Information Retrieval (MIR) community has increased in the past years, as it provides a direct and objective way to evaluate music similarity algorithms. This article first presents a series of experiments carried out with two state-of-the-art methods for cover song identification. We have studied several components of these (such as chroma resolution and similarity, transposition, beat tracking, and Dynamic Time Warping constraints) in order to discover which characteristics would be desirable for a competitive cover song identifier. After analyzing many cross-validated results, the importance of these characteristics is discussed, and the best-performing ones are finally applied to the newly proposed method. Multiple evaluations of the latter confirm a large increase in identification accuracy when comparing it with alternative state-of-the-art approaches.
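
As a point of reference for the alignment machinery discussed above, here is a textbook Dynamic Time Warping cost computation over frame-level feature sequences (e.g., 12-dimensional chroma vectors). It is a generic sketch, not the proposed tonal subsequence alignment; the names and the frame distance are illustrative assumptions.

```python
import numpy as np

def dtw_cost(x, y):
    """Plain DTW between two feature sequences (rows = frames).
    Euclidean frame distance; no band constraints or subsequence
    handling, both of which a real cover-song identifier would add."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])  # frame distance
            d[i, j] = cost + min(d[i - 1, j],       # insertion
                                 d[i, j - 1],       # deletion
                                 d[i - 1, j - 1])   # match
    return d[n, m]
```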

Relevance: 30.00%

Abstract:

Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors exhibit several problems, e.g., nonlinearities and slow time response, which can be partially solved by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
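
As a rough illustration of the nonlinear inverse dynamic system idea, the sketch below fits a second-order polynomial (Wiener-series-like) inverse model on lagged sensor readings by least squares. The model form, function name, and parameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fit_inverse_model(sensor, concentration, lags=5):
    """Fit an inverse model mapping a window of lagged sensor readings
    back to the analyte concentration, using linear and quadratic terms.
    sensor, concentration: 1-D numpy arrays of equal length."""
    rows = []
    for t in range(lags, len(sensor)):
        window = sensor[t - lags:t + 1]
        # linear terms, quadratic terms, and a bias term
        rows.append(np.concatenate([window, window ** 2, [1.0]]))
    X = np.array(rows)
    y = concentration[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    return coef
```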

Relevance: 20.00%

Abstract:

We consider a dynamic model where traders in each period are matched randomly into pairs who then bargain about the division of a fixed surplus. When agreement is reached, the traders leave the market. Traders who do not come to an agreement return in the next period, in which they will be matched again, as long as their deadline has not yet expired. New traders enter exogenously in each period. We assume that traders within a pair know each other's deadlines. We define and characterize the stationary equilibrium configurations. Traders with longer deadlines fare better than traders with short deadlines. It is shown that the heterogeneity of deadlines may cause delay. It is then shown that a centralized mechanism that controls the matching protocol, but does not interfere with the bargaining, eliminates all delay. Even though this efficient centralized mechanism is not as good for traders with long deadlines, it is shown that in a model where all traders can choose which mechanism to

Relevance: 20.00%

Abstract:

The purpose of this paper is twofold. First, we construct a DSGE model which spells out explicitly the instrumentation of monetary policy. The interest rate is determined every period depending on the supply of and demand for reserves, which in turn are affected by fundamental shocks: unforeseeable changes in cash withdrawals, autonomous factors, technology, and government spending. Unexpected changes in the monetary conditions of the economy are interpreted as monetary shocks. We show that these monetary shocks have the usual effects on economic activity without the need to impose additional frictions such as limited participation in asset markets or sticky prices. Second, we show that this view of monetary policy may have important consequences for empirical research. In the model, the contemporaneous correlations between interest rates, prices, and output are due to the simultaneous effect of all fundamental shocks. We provide an example where these contemporaneous correlations may be misinterpreted as a Taylor rule. In addition, we use the signs of the impact responses of all shocks on output, prices, and interest rates derived from the model to identify the sources of shocks in the data.
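
For reference, the Taylor rule mentioned above is conventionally written in a form like the following (a textbook specification, not necessarily the one used in the paper):

```latex
i_t = \alpha + \beta \pi_t + \gamma y_t + \varepsilon_t
```

where i_t is the nominal interest rate, \pi_t the inflation rate, and y_t the output gap. A regression of i_t on \pi_t and y_t can pick up such a relation even when the correlations are driven by the simultaneous response of all variables to fundamental shocks, which is the misinterpretation the paper illustrates.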

Relevance: 20.00%

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
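
A minimal sketch of the kernel-smoothing step, assuming a scalar conditioning variable and a Gaussian kernel (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def conditional_moment(sim_x, sim_y, x0, bandwidth=0.5):
    """Nadaraya-Watson estimate of E[y | x = x0] from a long simulation
    (sim_x, sim_y). The GMM moment conditions would then compare these
    fitted conditional moments with their counterparts in the data."""
    u = (sim_x - x0) / bandwidth
    w = np.exp(-0.5 * u ** 2)          # Gaussian kernel weights
    return np.sum(w * sim_y) / np.sum(w)
```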

Relevance: 20.00%

Abstract:

In the literature on risk, one generally assumes that uncertainty is uniformly distributed over the entire working horizon, when the absolute risk-aversion index is negative and constant. From this perspective, the risk is totally exogenous, and thus independent of endogenous risks. The classic procedure is "myopic" with regard to potential changes in the future behavior of the agent due to the inherent random fluctuations of the system. The agent's attitude to risk is rigid. Although often criticized, the most widely used hypothesis for the analysis of economic behavior is risk neutrality. This borderline case must be envisaged with prudence in a dynamic stochastic context. The traditional measures of risk aversion are generally too weak for making comparisons between risky situations, given the dynamic complexity of the environment. This can be highlighted in concrete problems in finance and insurance, contexts for which the Arrow-Pratt measures (in the small) give ambiguous results.
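
For context, the Arrow-Pratt measures referred to above are the standard absolute and relative risk-aversion coefficients of a utility function u (a textbook definition, stated here only for reference):

```latex
A(w) = -\frac{u''(w)}{u'(w)}, \qquad R(w) = -w\,\frac{u''(w)}{u'(w)}
```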

Relevance: 20.00%

Abstract:

The objective of this paper is to re-evaluate the attitude to effort of a risk-averse decision-maker in an evolving environment. In the classic analysis, the space of efforts is generally discretized. More realistically, this new approach employs a continuum of effort levels. The presence of multiple possible effort and performance levels provides a better basis for explaining real economic phenomena. The traditional approach (see Laffont, J. J. & Tirole, J., 1993; Salanie, B., 1997; Laffont, J. J. & Martimort, D., 2002, among others) does not take into account the potential effect of the system dynamics on the agent's attitude to effort over time. In the context of a Principal-agent relationship, not only can the incentives of the Principal induce the private agent to allocate a good effort; so can the evolution of the dynamic system. The incentives can be ineffective when the environment does not incite the agent to invest a good effort. This explains why some effici

Relevance: 20.00%

Abstract:

The demand for computational power has been driving improvements in the High Performance Computing (HPC) area, generally represented by the use of distributed systems such as clusters of computers running parallel applications. In this area, fault tolerance plays an important role in providing high availability by isolating the application from the effects of faults. Performance and availability form an inseparable pair for some kinds of applications; therefore, fault-tolerant solutions must take both constraints into consideration when they are designed. In this dissertation, we present a few side effects that some fault-tolerant solutions may exhibit when recovering a failed process. These effects may cause degradation of the system, affecting mainly the overall performance and availability. We introduce RADIC-II, a fault-tolerant architecture for message passing based on the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) architecture. RADIC-II keeps, as much as possible, the RADIC features of transparency, decentralization, flexibility, and scalability, while incorporating a flexible dynamic redundancy feature that makes it possible to mitigate or avoid some recovery side effects.

Relevance: 20.00%

Abstract:

This paper shows that tourism specialisation can help to explain the observed high growth rates of small countries. For this purpose, two models of growth and trade are constructed to represent the trade relations between two countries. One of the countries is large and rich, has its own source of sustained growth, and produces a tradable capital good. The other is a small, poor economy, which does not have its own engine of growth and produces tradable tourism services. The poor country exports tourism services to and imports capital goods from the rich economy. In one model tourism is a luxury good, while in the other the expenditure elasticity of tourism imports is unitary. Two main results are obtained. In the long run, the tourism country overcomes decreasing returns and grows permanently because its terms of trade continuously improve. Since the tourism sector is relatively less productive than the capital-good sector, tourism services become relatively scarcer and hence more expensive than the capital good. Moreover, along the transition the growth rate of the tourism economy remains well above that of the rich country for a long time. The growth-rate differential between countries is particularly high when tourism is a luxury good. In this case, there is a faster increase in tourism demand. As a result, investment in the small economy is boosted and its terms of trade improve greatly.

Relevance: 20.00%

Abstract:

The objective of this study is the empirical identification of the monetary policy rules pursued in individual countries of the EU before and after the launch of the European Monetary Union. In particular, we have estimated an augmented version of the Taylor rule (TR) for 25 countries of the EU in two periods (1992-1998, 1999-2006). While single-equation estimation methods have been used to identify the policy rules of individual central banks, a dynamic panel setting has been employed for the rule of the European Central Bank. We have found that most central banks really followed some interest rate rule, but its form was usually different from the original TR (in which the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries have been the presence of interest rate smoothing as well as a response to the foreign interest rate. Any response to domestic macroeconomic variables has been missing in the rules of countries with inflexible exchange rate regimes, whose rules consisted in mimicking foreign interest rates. While we have found a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices has been generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) has been confirmed only in large economies and in economies troubled with unsustainable inflation rates. Finally, the deviation of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with actually turbulent periods).
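
As a rough illustration of what estimating such an augmented rule involves, the sketch below runs OLS on a hypothetical smoothed Taylor rule with a foreign-rate term. The specification, names, and interface are assumptions, not the study's actual estimation setup (which uses single-equation methods per country and a dynamic panel for the ECB).

```python
import numpy as np

def estimate_smoothed_rule(i, inflation, gap, foreign_i):
    """OLS estimate of a hypothetical augmented rule
        i_t = c + rho*i_{t-1} + b*pi_t + g*gap_t + f*i_foreign_t + e_t.
    All inputs are 1-D numpy arrays of equal length; returns
    [c, rho, b, g, f]. The Taylor principle corresponds to b > 1
    once the smoothing term rho is accounted for."""
    X = np.column_stack([
        np.ones(len(i) - 1),  # constant
        i[:-1],               # lagged domestic rate (smoothing)
        inflation[1:],        # domestic inflation
        gap[1:],              # output gap
        foreign_i[1:],        # foreign interest rate
    ])
    coef, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
    return coef
```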

Relevance: 20.00%

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.