94 results for efficient algorithms
Abstract:
We present building blocks for algorithms for the efficient reduction of square factors, i.e., direct repetitions in strings. The basic problem is this: given a string, compute all strings that can be obtained by reducing factors of the form zz to z. Two types of algorithms are treated: an offline algorithm is one that can compute a data structure on the given string in advance, before the actual search for squares begins; in contrast, an online algorithm receives all input only at the time when a request is made. For online algorithms we treat the following problem: let u and w be two strings such that w is obtained from u by reducing a square factor zz to z. If we are further given the suffix table of u, how can we derive the suffix table of w without computing it from scratch? As the suffix table plays a key role in online algorithms for the detection of squares in a string, this derivation can make the iterated reduction of squares more efficient. On the other hand, we also show how a suffix array, used for the offline detection of squares, can be adapted to the new string resulting from the deletion of a square. Because the deletion is a very local change, this adaptation is more efficient than the computation of the new suffix array from scratch.
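As an orientation for the core operation this abstract describes (not the paper's own suffix-table algorithm), a naive square-reduction step can be sketched as follows; the function name is hypothetical and the quadratic scan is exactly the kind of recomputation the suffix-table techniques above are designed to avoid.

```python
def reduce_one_square(s: str):
    """Return s with the first square factor zz reduced to z, or None.

    Naive O(n^3) scan for illustration only; the abstract's suffix-table
    machinery exists to avoid rescanning across iterated reductions.
    """
    n = len(s)
    for i in range(n):                          # start of candidate square
        for half in range(1, (n - i) // 2 + 1):  # length of z
            if s[i:i + half] == s[i + half:i + 2 * half]:
                return s[:i + half] + s[i + 2 * half:]
    return None

# Iterated reduction: "abcbcd" -> "abcd" (square "bcbc" reduced to "bc").
s = "abcbcd"
while (t := reduce_one_square(s)) is not None:
    s = t
print(s)  # abcd
```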
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
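For reference, a minimal textbook sequential LLL reduction is sketched below; this is not the paper's ASLR variant or its systolic mapping, and delta = 0.75 is the conventional Lovász parameter rather than a value from the paper. ASLR differs by swapping all eligible basis-vector pairs within one iteration, which is what suits a systolic array.

```python
import numpy as np

def lll_reduce(B: np.ndarray, delta: float = 0.75) -> np.ndarray:
    """Plain textbook LLL reduction of the rows of basis matrix B."""
    B = B.astype(float).copy()
    n = B.shape[0]

    def gram_schmidt(B):
        # Orthogonalized rows Q and Gram-Schmidt coefficients mu.
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Q[j] / (Q[j] @ Q[j])
                Q[i] -= mu[i, j] * Q[j]
        return Q, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            Q, mu = gram_schmidt(B)
            if abs(mu[k, j]) > 0.5:
                B[k] -= round(mu[k, j]) * B[j]
        Q, mu = gram_schmidt(B)
        # Lovász condition: advance if vector k is long enough, else swap.
        if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            k = max(k - 1, 1)
    return B

# Classic small example; the reduced basis has much shorter vectors.
print(lll_reduce(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))
```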
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
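The abstract does not state the paper's specific rule; as a purely hypothetical illustration of the general shape of such deterministic interval rules, the payoff below rewards the agent when the realization falls inside the chosen interval and penalizes interval width, so location and dispersion of the belief both matter.

```python
def interval_score(lo: float, hi: float, x: float,
                   hit_reward: float = 1.0, width_cost: float = 0.1) -> float:
    """Hypothetical deterministic interval scoring rule (not the paper's).

    Pays hit_reward if the realization x lands in [lo, hi], minus a
    penalty proportional to the interval's width, trading off coverage
    against precision.
    """
    hit = hit_reward if lo <= x <= hi else 0.0
    return hit - width_cost * (hi - lo)

# A wide interval almost surely contains x but pays the width penalty;
# a narrow interval pays more when the agent's belief is accurate.
print(interval_score(0.0, 10.0, 4.2))  # 0.0  (1.0 - 0.1 * 10)
print(interval_score(3.0, 5.0, 4.2))   # 0.8  (1.0 - 0.1 * 2)
```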
Abstract:
I show that intellectual property rights yield static efficiency gains, irrespective of their dynamic role in fostering innovation. I develop a property-rights model of firm organization with two dimensions of non-contractible investment. In equilibrium, the first best is attained if and only if ownership of tangible and intangible assets is equally protected. If IP rights are weaker, firm structure is distorted and efficiency declines: the entrepreneur must either integrate her suppliers, which prompts a decline in their investment, or else risk their defection, which entails a waste of her human capital. My model predicts greater prevalence of vertical integration where IP rights are weaker, and a switch from integration to outsourcing over the product cycle. Both empirical predictions are consistent with evidence on multinational companies. As a normative implication, I find that IP rights should be strong but narrowly defined, to protect a business without holding up its potential spin-offs.
Abstract:
This paper introduces the approach of using Total Unduplicated Reach and Frequency (TURF) analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for a solution compared with the algorithms that have been used to date. The results obtained with our exact algorithm are presented, and the method proves extremely efficient both in obtaining optimal solutions and in computing time, even for very large instances of the problem at hand. Furthermore, the proposed technique enables the model to be improved so as to overcome the main drawbacks that TURF analysis presents in practice.
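The abstract does not spell out the formulation, but TURF reach maximization is commonly cast as a binary linear program; the sketch below shows that standard formulation (toy data, solved with the open-source PuLP library, which is an assumption here, not the paper's tool).

```python
import pulp

# Toy data: reach[i][j] = 1 if respondent i would buy product j.
reach = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
]
n_resp, n_prod = len(reach), len(reach[0])
k = 2  # size of the product line

prob = pulp.LpProblem("TURF", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(n_prod)]  # product chosen
y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(n_resp)]  # respondent reached

prob += pulp.lpSum(y)                      # maximize unduplicated reach
prob += pulp.lpSum(x) == k                 # pick exactly k products
for i in range(n_resp):                    # reached only via a chosen product
    prob += y[i] <= pulp.lpSum(reach[i][j] * x[j] for j in range(n_prod))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in range(n_prod) if x[j].value() == 1])
```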
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
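The abstract specifies only that the access charge is linear in the two networks' marginal costs and retail prices; as a generic illustration (the coefficients are hypothetical placeholders, not the rule derived in the paper), such a rule takes the form:

```latex
% Access price that network i pays network j, linear in marginal costs
% (c_i, c_j) and retail prices (p_i, p_j); \alpha_0, ..., \alpha_4 are
% regulator-chosen coefficients, not values from the paper.
a_{ij} = \alpha_0 + \alpha_1 c_i + \alpha_2 c_j + \alpha_3 p_i + \alpha_4 p_j
```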
Abstract:
This article investigates the main sources of heterogeneity in regional efficiency. We estimate a translog stochastic frontier production function for the Spanish regions over the period 1964-1996 in order to measure and explain changes in technical efficiency. Our results confirm that regional inefficiency is significantly and positively correlated with the ratio of public capital to private capital. The proportion of service industries in private capital, the proportion of public capital devoted to transport infrastructure, industrial specialization, and spatial spillovers from transport infrastructure in neighbouring regions all contributed significantly to improving regional efficiency.
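For reference, the translog stochastic frontier named here has a standard textbook form (variable names are illustrative; the paper's regressors and inefficiency specification are not reproduced):

```latex
% Translog stochastic production frontier: v_{it} is symmetric noise,
% u_{it} >= 0 is technical inefficiency, x_{kit} are inputs.
\ln y_{it} = \beta_0 + \sum_k \beta_k \ln x_{kit}
           + \tfrac{1}{2} \sum_k \sum_l \beta_{kl} \ln x_{kit} \ln x_{lit}
           + v_{it} - u_{it}
```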
Abstract:
PRECON S.A. is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish national rail transportation company, to manufacture pre-stressed concrete sleepers for sidings of the new railways of the high-speed train AVE. The scheduling problem associated with the manufacturing process of the sleepers is very complex, since it involves several constraints and objectives. The constraints are related to production capacity, the quantity of available moulds, satisfying demand, and other operational constraints. The two main objectives are maximizing the usage of the manufacturing resources and minimizing mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool for solving the large-scale instance of this complex real scheduling problem.
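Deterministic crowding itself is a standard niching scheme for genetic algorithms; a minimal generic sketch follows (toy bitstring encoding and fitness, not the paper's sleeper-scheduling model): each offspring competes only with the more similar of its two parents, which preserves population diversity.

```python
import random

def deterministic_crowding(pop, fitness, crossover, mutate, distance,
                           generations=100):
    """Generic deterministic crowding loop; operators are placeholders."""
    for _ in range(generations):
        random.shuffle(pop)
        for i in range(0, len(pop) - 1, 2):
            p1, p2 = pop[i], pop[i + 1]
            c1, c2 = crossover(p1, p2)
            c1, c2 = mutate(c1), mutate(c2)
            # Pair each child with its closer parent before the tournament.
            if (distance(p1, c1) + distance(p2, c2)
                    <= distance(p1, c2) + distance(p2, c1)):
                pairs = [(p1, c1), (p2, c2)]
            else:
                pairs = [(p1, c2), (p2, c1)]
            for slot, (parent, child) in zip((i, i + 1), pairs):
                pop[slot] = child if fitness(child) >= fitness(parent) else parent
    return pop

# Toy usage: maximize the number of ones in a 20-bit string.
n = 20
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(30)]
fit = sum
xover = lambda a, b: (a[:n // 2] + b[n // 2:], b[:n // 2] + a[n // 2:])
mut = lambda c: [bit ^ (random.random() < 0.05) for bit in c]
dist = lambda a, b: sum(u != v for u, v in zip(a, b))
best = max(deterministic_crowding(pop, fit, xover, mut, dist), key=fit)
print(fit(best))
```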
Abstract:
This paper extends existing results on the type of insurance contracts needed for insurance market efficiency to a dynamic setting. It introduces continuously open markets that allow for more efficient asset allocation. It also eliminates the role of preferences and endowments in the classification of risks, which is done primarily in terms of the actuarial properties of the underlying risk process. The paper further extends insurability to include correlated and catastrophic events. Under these very general conditions, the paper defines a condition that determines whether a small number of standard insurance contracts (together with aggregate assets) suffices to complete markets, or whether one needs to introduce assets such as mutual insurance.
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and others on k-anonymity concepts. Both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on the two techniques, analyzing both the complexity of generating the anonymized graphs and the quality of the resulting graphs.
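Graph k-anonymity is most often instantiated as k-degree anonymity, i.e. every degree value occurring in the graph is shared by at least k vertices; as a hedged illustration of the property being targeted (not of the paper's algorithms):

```python
from collections import Counter

def k_degree_anonymity(adj) -> int:
    """Return the largest k such that every occurring degree is shared
    by at least k vertices (k-degree anonymity, one common
    instantiation of graph k-anonymity). adj maps vertex -> neighbor set."""
    degree_counts = Counter(len(neigh) for neigh in adj.values())
    return min(degree_counts.values())

# A 4-cycle is 4-degree-anonymous: all four vertices have degree 2.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(k_degree_anonymity(cycle))  # 4
```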
Abstract:
[spa] Implementing a program of public subsidies for business R&D projects requires establishing a project selection system. This selection faces significant problems, such as measuring the potential performance of R&D projects and optimizing the selection process among projects with multiple, and sometimes incomparable, outcome measures. Public agencies mostly use the peer review method which, although it has advantages, is not free from criticism. In contrast, private companies, seeking to optimize their R&D investment, use more quantitative methods such as Data Envelopment Analysis (DEA). This work compares the performance of the evaluators of a public agency (peer review) with an alternative project selection methodology, DEA.
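DEA scores each decision-making unit by a linear program; the sketch below is the textbook input-oriented CCR multiplier model (toy data, solved with PuLP as an assumed tool), not the paper's exact specification.

```python
import pulp

def dea_ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o (multiplier form).
    inputs[j] and outputs[j] are the input/output vectors of unit j."""
    n = len(inputs)
    m, s = len(inputs[0]), len(outputs[0])
    prob = pulp.LpProblem("DEA_CCR", pulp.LpMaximize)
    v = [pulp.LpVariable(f"v{i}", lowBound=0) for i in range(m)]  # input weights
    u = [pulp.LpVariable(f"u{r}", lowBound=0) for r in range(s)]  # output weights
    prob += pulp.lpSum(u[r] * outputs[o][r] for r in range(s))    # efficiency of o
    prob += pulp.lpSum(v[i] * inputs[o][i] for i in range(m)) == 1
    for j in range(n):  # no unit may score above 1 under these weights
        prob += (pulp.lpSum(u[r] * outputs[j][r] for r in range(s))
                 <= pulp.lpSum(v[i] * inputs[j][i] for i in range(m)))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

# Toy data: 3 projects, 1 input (cost), 2 outputs (papers, patents).
X = [[10], [8], [12]]
Y = [[5, 2], [4, 3], [6, 1]]
print([round(dea_ccr_efficiency(X, Y, o), 3) for o in range(3)])
```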