23 results for the SIMPLE algorithm

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

100.00%

Publisher:

Abstract:

Which projects should be financed through separate non-recourse loans (or limited-liability companies) and which should be bundled into a single loan? In the presence of bankruptcy costs, this conglomeration decision trades off the benefit of co-insurance with the cost of risk contamination. This paper characterizes this tradeoff for projects with binary returns, depending on the mean, variability, and skewness of returns, the bankruptcy recovery rate, the correlation across projects, the number of projects, and their heterogeneous characteristics. In some cases, separate financing dominates joint financing, even though it increases the interest rate or the probability of bankruptcy.

Relevance:

90.00%

Publisher:

Abstract:

In general terms, key sectors analysis aims at identifying the role, or impact, that the existence of a productive sector has in the economy. Quite a few measures, indicators and methodologies of varied complexity have been proposed in the literature, from multiplier sums to extraction methods, but not without debate about their properties and their information content. All of them, to our knowledge, focus exclusively on the interdependence effects that result from the input-output structure of the economy. By so doing, the simple input-output approach misses critical links beyond the interindustry ones. A productive sector’s role is that of producing, but also that of generating and distributing income among primary factors as a result of production. Thus, when measuring a sector’s role, the income generating process cannot and should not be omitted if we want to better elucidate the sector’s economic role. A simple way to make the missing income link explicit is to use the SAM (Social Accounting Matrix).

Relevance:

90.00%

Publisher:

Abstract:

In this paper we explore the effect of bounded rationality on the convergence of individual behavior toward equilibrium. In the context of a Cournot game with a unique and symmetric Nash equilibrium, firms are modeled as adaptive economic agents through a genetic algorithm. Computational experiments show that (1) there is remarkable heterogeneity across identical but boundedly rational agents; (2) such individual heterogeneity is not simply a consequence of the random elements contained in the genetic algorithm; (3) the more rational agents are in terms of memory abilities and pre-play evaluation of strategies, the less heterogeneous they are in their actions. At the limit case of full rationality, the outcome converges to the standard result of uniform individual behavior.
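The setup can be illustrated with a toy experiment (a minimal sketch, not the paper's implementation: the linear demand, cost, GA operators, and all parameter values here are assumptions made for illustration):

```python
import random

def cournot_ga(n_firms=4, a=100.0, b=1.0, c=10.0,
               pop_size=20, generations=200, seed=1):
    """Boundedly rational Cournot firms, each evolving its own
    quantity with a tiny genetic algorithm (toy illustration)."""
    rng = random.Random(seed)
    qmax = a / b
    # each firm keeps a population of candidate quantities
    pops = [[rng.uniform(0.0, qmax) for _ in range(pop_size)]
            for _ in range(n_firms)]
    played = [rng.uniform(0.0, qmax) for _ in range(n_firms)]
    for _ in range(generations):
        for i in range(n_firms):
            rivals = sum(played) - played[i]

            def profit(q):
                # linear inverse demand P = a - b*Q, constant cost c
                price = max(a - b * (q + rivals), 0.0)
                return (price - c) * q

            # selection: keep the fitter half, refill with mutated copies
            pops[i].sort(key=profit, reverse=True)
            half = pops[i][:pop_size // 2]
            mutants = [min(qmax, max(0.0, q + rng.gauss(0.0, 1.0)))
                       for q in half]
            pops[i] = half + mutants
            played[i] = pops[i][0]  # play the current best strategy
    return played
```

Under full rationality the symmetric Nash quantity would be (a - c)/(b(n + 1)); the residual dispersion of `played` across otherwise identical firms is the heterogeneity the abstract refers to.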

Relevance:

90.00%

Publisher:

Abstract:

"This is a project divided into two independent but complementary parts, carried out by different authors. This document originally contained other material and/or software that can only be consulted at the Biblioteca de Ciència i Tecnologia."

Relevance:

90.00%

Publisher:

Abstract:

This project aims to analyze the performance of low-cost RISC processors and to design a simple RISC processor for general-purpose applications involving data acquisition and simple data processing. The result is the SR3C, a 32-bit processor with a RISC architecture. The processor has been described and simulated in the hardware description language VHDL and synthesized on an FPGA. It is ready for use in real SoCs thanks to its compliance with the Wishbone bus standard. It can also serve as an educational platform thanks to the assembler and simulator developed alongside it.

Relevance:

90.00%

Publisher:

Abstract:

We consider negotiations selecting one-dimensional policies. Individuals have single-peaked preferences, and they are impatient. Decisions arise from a bargaining game with random proposers and (super) majority approval, ranging from simple majority up to unanimity. The existence and uniqueness of stationary subgame perfect equilibrium is established, and its explicit characterization provided. We supply an explicit formula to determine the unique alternative that prevails, as impatience vanishes, for each majority. As an application, we examine the efficiency of majority rules. For symmetric distributions of peaks, unanimity is the unanimously preferred majority rule. For asymmetric populations, the rules maximizing social surplus are characterized.

Relevance:

90.00%

Publisher:

Abstract:

The goal is to implement a copy-detection system to protect the copyright of digital audio files in WAV format. The system must contain two basic algorithms: one to embed a watermark (

Relevance:

90.00%

Publisher:

Abstract:

Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as that can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations as it does not take into account some aspects relevant to networking, such as the heterogeneity in link capacity or the difference between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, and yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application. In the example, the simple shortest-path routing algorithm is improved in such a way that it outperforms other more advanced algorithms in terms of blocking ratio.
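The idea of weighting centrality by traffic can be sketched as follows (a toy illustration, not the paper's algorithm: the traffic-matrix representation and the even split over shortest paths are assumptions):

```python
from collections import defaultdict, deque

def bfs_dist(adj, s):
    # hop distances from s by breadth-first search
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def enum_shortest_paths(adj, dist, t):
    # all shortest paths to t, reconstructed backwards (small graphs only)
    if dist[t] == 0:
        return [[t]]
    paths = []
    for u in adj[t]:
        if dist.get(u) == dist[t] - 1:
            for p in enum_shortest_paths(adj, dist, u):
                paths.append(p + [t])
    return paths

def weighted_edge_load(adj, traffic):
    """Edge load: each (src, dst) demand is split evenly over all its
    shortest paths, unlike plain betweenness, which implicitly gives
    every node pair the same weight."""
    load = defaultdict(float)
    for (s, t), demand in traffic.items():
        dist = bfs_dist(adj, s)
        paths = enum_shortest_paths(adj, dist, t)
        share = demand / len(paths)
        for p in paths:
            for x, y in zip(p, p[1:]):
                load[frozenset((x, y))] += share
    return load
```

On a 4-node ring with a single A-to-C demand of 2 units, the two shortest paths each carry 1 unit, so every ring edge gets load 1; scaling the demand matrix changes the ranking of links, which is exactly the effect plain betweenness misses.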

Relevance:

90.00%

Publisher:

Abstract:

All of the imputation techniques usually applied for replacing values below the detection limit in compositional data sets have adverse effects on the variability. In this work we propose a modification of the EM algorithm that is applied using the additive log-ratio transformation. This new strategy is applied to a compositional data set and the results are compared with the usual imputation techniques.
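The additive log-ratio (alr) transformation and its inverse can be sketched as follows (a generic illustration of the transform itself, not of the authors' modified EM algorithm):

```python
import math

def alr(x):
    # additive log-ratio: log of each part relative to the last one,
    # mapping the simplex to unconstrained real space
    return [math.log(xi / x[-1]) for xi in x[:-1]]

def alr_inv(y):
    # back-transform and re-close the composition to unit sum
    parts = [math.exp(v) for v in y] + [1.0]
    total = sum(parts)
    return [p / total for p in parts]
```

Imputation (e.g. an EM step) is carried out in the unconstrained alr space and the result mapped back to the simplex, which avoids the variability distortion caused by replacing censored values directly in the raw composition.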

Relevance:

90.00%

Publisher:

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model specifies simply which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
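The running-maximum trick can be sketched as follows (a deliberately simplified model: exons are bare (acceptor, donor, score) triples and compatibility is mere non-overlap, ignoring reading frames and the Gene Model):

```python
def best_gene_score(exons):
    """Highest-scoring chain of non-overlapping (start, end, score)
    exons. Exons are scanned by start position while being consumed
    by end position, so the best compatible prefix is maintained as
    a single running maximum instead of a max over all predecessors."""
    n = len(exons)
    by_start = sorted(range(n), key=lambda i: exons[i][0])
    by_end = sorted(range(n), key=lambda i: exons[i][1])
    best = [0.0] * n   # best chain score ending in each exon
    running = 0.0      # best chain score among fully preceding exons
    j = 0
    for i in by_start:
        start = exons[i][0]
        # consume every exon that ends strictly before this one starts
        while j < n and exons[by_end[j]][1] < start:
            running = max(running, best[by_end[j]])
            j += 1
        best[i] = exons[i][2] + running
    return max(best) if best else 0.0
```

The two sorts cost O(n log n), but the double scan itself is linear, which is the spirit of the speed-up described: each exon is appended to a stored-and-updated best prefix rather than compared against every compatible predecessor.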

Relevance:

90.00%

Publisher:

Abstract:

A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm, which processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms have very similar bit-error-rate performance, while ASLR is more time efficient in the systolic array, especially for systems with a large number of antennas.
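As a flavor of what lattice reduction does, here is the two-dimensional special case (Lagrange-Gauss reduction), a much simpler relative of the LLL and ASLR algorithms discussed in the abstract, not the systolic design itself:

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2-D integer lattice basis:
    repeatedly subtract the rounded projection of the longer vector
    onto the shorter one, much like a vector version of the gcd."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u          # keep u as the shorter vector
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v          # basis is reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])
```

In a MIMO receiver the same idea, applied in higher dimension by LLL-type algorithms, turns an ill-conditioned channel basis into a near-orthogonal one before linear detection, which is what improves the bit-error rate.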

Relevance:

90.00%

Publisher:

Abstract:

Children occupy centre-stage in any new welfare equilibrium. Failure to support families may produce either of two undesirable scenarios. We shall see a society without children if motherhood remains incompatible with work. A new family policy needs to recognize that children are a collective asset and that the cost of having children is rising. The double challenge is to eliminate the constraints on having children in the first place, and to ensure that the children we have are guaranteed optimal opportunities. The simple reason why a new social contract is called for is that fertility and child quality combine both private utility and societal gains. And like no other epoch in the past, the societal gains are mounting all the while that families' ability to produce these social gains is weakening. In the following I analyze the twin challenges of fertility and child development. I then examine which kind of policy mix will ensure both the socially desired level of fertility and investment in our children. The task is to identify a Paretian optimum that will maximize efficiency gains and social equity simultaneously.

Relevance:

90.00%

Publisher:

Abstract:

This paper compares two well-known scan matching algorithms: the MbICP and the pIC. As a result of the study, we propose the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS), and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contribution consists in: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group into a unique scan, with a convenient uncertainty model, all the data grabbed along the path described by the robot. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.

Relevance:

90.00%

Publisher:

Abstract:

Using the extended Thomas-Fermi version of density-functional theory (DFT), calculations are presented for the barrier for the fusion reaction Na₂₀⁺ + Na₂₀⁺ → Na₄₀²⁺. The deviation from the simple Coulomb barrier is shown to be proportional to the electron density at the bond midpoint of the supermolecule (Na₂₀⁺)₂. An extension of conventional quantum-chemical studies of homonuclear diatomic molecular ions is then effected to apply to the supermolecular ions of the alkali metals. This then allows the Na results to be utilized to make semiquantitative predictions of the position and height of the maximum of the fusion barrier for other alkali clusters. These predictions are confirmed by means of similar DFT calculations for the K clusters.

Relevance:

90.00%

Publisher:

Abstract:

Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require a memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent from the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
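The memory argument can be seen in a minimal sketch of maxT-style adjustment (a generic illustration with a toy test statistic, not MBMDR-3.0.3's implementation): only the per-permutation maxima are stored, so memory grows with the number of permutations, not the number of tests.

```python
import random

def abs_mean_diff(col, labels):
    # toy test statistic: absolute difference of group means
    g1 = [x for x, l in zip(col, labels) if l]
    g0 = [x for x, l in zip(col, labels) if not l]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

def permutation_maxima(data, labels, n_perm, seed=0):
    """For each permutation of the binary trait, keep only the
    maximum statistic over all variables (one float per permutation)."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)  # shuffling preserves the group sizes
        maxima.append(max(abs_mean_diff(col, perm) for col in data))
    return maxima

def maxT_adjusted_pvalues(observed, maxima):
    # adjusted p_i = (1 + #{b : max_b >= t_i}) / (B + 1)
    B = len(maxima)
    return [(1 + sum(m >= t for m in maxima)) / (B + 1)
            for t in observed]
```

Comparing every observed statistic against the same vector of permutation maxima is what controls the family-wise error rate across all tests at once.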