91 results for Simple Eight-line Arrangement


Relevance:

20.00%

Publisher:

Abstract:

Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels and keep costs low. However, CRTE systems require increasing performance to satisfy the needs of existing and new features. Such performance can only be achieved by means of more aggressive hardware architectures, which are much harder to analyze from a WCET perspective. The main features considered include cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is that hardware behavior is deterministic but unknown and, therefore, the worst-case behavior must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs together with WCET analysis tools able to provide tight and safe WCET estimations. To do so, those pieces of hardware whose behavior is not easily analyzable due to a lack of accurate information during WCET analysis will be enhanced to produce probabilistically analyzable behavior. Thus, even if the worst-case behavior cannot be removed, its probability can be bounded and, hence, a safe and tight WCET can be provided for a particular safety level in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyze cache memories. During the second year those techniques were evaluated, and new purely-software techniques were developed.

Relevance:

20.00%

Publisher:

Abstract:

Background: The understanding of whole genome sequences in higher eukaryotes depends to a large degree on the reliable definition of transcription units, including exon/intron structures, translated open reading frames (ORFs) and flanking untranslated regions. The best currently available chicken transcript catalog is the Ensembl build, based on the mapping of a relatively small number of full-length cDNAs and ESTs to the genome, as well as in silico gene predictions derived from the genome sequence. Results: We use Long Serial Analysis of Gene Expression (LongSAGE) in bursal lymphocytes and the DT40 cell line to verify the quality and completeness of the annotated transcripts. 53.6% of the more than 38,000 unique SAGE tags (unitags) match full-length bursal cDNAs, the Ensembl transcript build or the genome sequence. The majority of all matching unitags show single matches to the genome, but no matches to the genome-derived Ensembl transcript build. Nevertheless, most of these tags map close to the 3' boundaries of annotated Ensembl transcripts. Conclusions: These results suggest that rather few genes are missing in the current Ensembl chicken transcript build, but that the 3' ends of many transcripts may not have been accurately predicted. The tags with no match in the transcript sequences can now be used to improve gene predictions, pinpoint the genomic location of entirely missed transcripts and optimize the accuracy of gene-finder software.

Relevance:

20.00%

Publisher:

Abstract:

In this paper a method for extracting semantic information from online music discussion forums is proposed. The semantic relations are inferred from the co-occurrence of musical concepts in forum posts, using network analysis. The method starts by defining a dictionary of common music terms in an art music tradition. Then, it creates a complex network representation of the online forum by matching such a dictionary against the forum posts. Once the complex network is built, we can study different network measures, including node relevance, node co-occurrence and term relations via semantically connecting words. Moreover, we can detect communities of concepts inside the forum posts. The rationale is that some music terms are more related to each other than to other terms. All in all, this methodology allows us to obtain meaningful and relevant information from forum discussions.
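As an illustrative sketch (not the paper's implementation), the weighted co-occurrence network can be built by counting pairs of dictionary terms that appear in the same post; the dictionary terms and forum posts below are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-dictionary of music terms and forum posts.
DICTIONARY = {"raga", "tala", "alap", "gat", "tabla"}

posts = [
    "the alap section of this raga recording is stunning",
    "which tala does the tabla player use in the gat",
    "a slow alap followed by a gat in a 16-beat tala",
]

# Count how often each pair of dictionary terms co-occurs in a post;
# the counts become the edge weights of the concept network.
edges = Counter()
for post in posts:
    terms = sorted(DICTIONARY.intersection(post.split()))
    for a, b in combinations(terms, 2):
        edges[(a, b)] += 1

print(edges[("gat", "tala")])  # co-occurrence weight of this pair
```

Community detection and relevance measures would then operate on this weighted graph.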

Relevance:

20.00%

Publisher:

Abstract:

This document presents an analysis and comparison of several different sets of online social networks. To that end, it first explains the theoretical basis of graph theory needed for their interpretation and understanding, as well as the mathematical foundations of the specific type of network studied and the different metrics (statistics) extracted from them. It then gives a detailed explanation of the working environment, both for the software application developed and for the subsequent visualization, together with an explanation of the algorithms used in the functions implemented for those purposes. The document closes with a detailed look at each online social network, highlighting its characteristics, and ends with a general comparison among all of them, always accompanied by their respective 2D visualizations represented as graphs.
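Two of the standard metrics mentioned (node degree and local clustering coefficient) can be illustrated on an invented toy network, unrelated to the networks actually studied:

```python
# Adjacency sets of a small undirected graph (hypothetical toy network).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}

def clustering(g, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(g[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in g[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

degrees = {v: len(nbrs) for v, nbrs in graph.items()}
print(degrees["a"], clustering(graph, "a"))
```

Node "a" has degree 3, and only one of the three possible edges among its neighbours exists, giving a clustering coefficient of 1/3.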

Relevance:

20.00%

Publisher:

Abstract:

The project "Integration of the APIS online questionnaire engine into a service-based eLearning platform" describes the process of analyzing and implementing the code needed to integrate the whole APIS engine into the SLeD eLearning platform, so that SLeD can play individual questions (items) and complete exams (tests) within a UOL, or unit of learning. This gives APIS user-session control and a capability it previously lacked: the possibility of going online through SLeD. All of this is implemented through a WebServices connection with the CCSI module of CopperCore and SLeD, which makes it easier for programmers to extend the range of tools offered by the platform and further promotes the idea of open-source code and the globalizing utility of e-Learning.

Relevance:

20.00%

Publisher:

Abstract:

From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of the initial solution to generate different alternative starting points of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic by simply incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
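The ILS skeleton described (local search, perturbation, acceptance) can be sketched for the PFSP as follows; this is a generic illustration with a plain swap perturbation and improvement-only acceptance, not the ILS-ESP operators themselves:

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine.
    proc[j][m] = processing time of job j on machine m."""
    m = len(proc[0])
    comp = [0.0] * m
    for j in perm:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def local_search(perm, proc):
    """First-improvement search over the job-insertion neighbourhood."""
    best = makespan(perm, proc)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(len(perm)):
                if i == j:
                    continue
                cand = perm[:i] + perm[i + 1:]
                cand.insert(j, perm[i])
                c = makespan(cand, proc)
                if c < best:
                    perm, best, improved = cand, c, True
    return perm, best

def ils(proc, iters=50, seed=0):
    """Generic iterated local search: perturb, re-optimize, accept if better."""
    rng = random.Random(seed)
    perm, best = local_search(list(range(len(proc))), proc)
    for _ in range(iters):
        cand = perm[:]
        i, j = rng.sample(range(len(cand)), 2)   # swap perturbation
        cand[i], cand[j] = cand[j], cand[i]
        cand, c = local_search(cand, proc)
        if c < best:                             # improvement-only acceptance
            perm, best = cand, c
    return perm, best
```

The ILS-ESP contribution lies precisely in replacing the seeded start, perturbation, and acceptance steps above with parameter-free rules, so this sketch shows the framework rather than the paper's operators.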

Relevance:

20.00%

Publisher:

Abstract:

In this paper, I consider a general and informationally efficient approach to determining the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium, as long as there exists at least a mild degree of substitutability among networks' services.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces the approach of using TURF analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for the solution to the problem compared to the algorithms that have been used to date. Furthermore, the proposed technique enables the model to be improved in order to overcome the main drawbacks presented by TURF analysis in practice.
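The TURF objective being optimized, choosing the k-item product line that reaches the most respondents, can be illustrated with an exhaustive stdlib sketch (the paper instead formulates it as a binary linear program, which scales far better); the survey data below are invented:

```python
from itertools import combinations

# Hypothetical survey: respondents[i] is the set of product variants
# that respondent i would buy.
respondents = [
    {"A", "B"},
    {"B"},
    {"C"},
    {"A", "C"},
    {"D"},
]

def best_line(respondents, k):
    """Exhaustive TURF: pick the k variants reaching the most respondents."""
    variants = set().union(*respondents)
    best = max(combinations(sorted(variants), k),
               key=lambda combo: sum(1 for r in respondents
                                     if r & set(combo)))
    reach = sum(1 for r in respondents if r & set(best))
    return set(best), reach

line, reach = best_line(respondents, 2)
print(line, reach)
```

With these data, the line {B, C} reaches 4 of the 5 respondents; the binary-programming formulation computes the same optimum without enumerating every combination.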

Relevance:

20.00%

Publisher:

Abstract:

Which projects should be financed through separate non-recourse loans (or limited-liability companies) and which should be bundled into a single loan? In the presence of bankruptcy costs, this conglomeration decision trades off the benefit of co-insurance with the cost of risk contamination. This paper characterizes this tradeoff for projects with binary returns, depending on the mean, variability, and skewness of returns, the bankruptcy recovery rate, the correlation across projects, the number of projects, and their heterogeneous characteristics. In some cases, separate financing dominates joint financing, even though it increases the interest rate or the probability of bankruptcy.
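The co-insurance side of the tradeoff can be illustrated numerically for independent projects with binary returns; the return, probability, and debt figures below are invented and the model is a simplification of the paper's setting:

```python
from itertools import product

def default_prob(returns, probs, debt):
    """Probability that total project returns fall short of total debt.
    Projects are independent with binary returns: returns[i] is a
    (failure_return, success_return) pair, probs[i] the success probability."""
    prob = 0.0
    for outcome in product((0, 1), repeat=len(returns)):
        pr = 1.0
        total = 0.0
        for i, success in enumerate(outcome):
            pr *= probs[i] if success else (1 - probs[i])
            total += returns[i][success]
        if total < debt:
            prob += pr
    return prob

# Two identical projects: success return 2.0, failure return 0.5,
# success probability 0.9, unit debt of 1.0 per project.
p_sep = default_prob([(0.5, 2.0)], [0.9], 1.0)            # stand-alone loan
p_joint = default_prob([(0.5, 2.0)] * 2, [0.9] * 2, 2.0)  # bundled loan
print(p_sep, p_joint)  # co-insurance: joint default is rarer here
```

Here one project's success covers the other's shortfall, so bundling cuts the default probability from 10% to 1%; with a lower success return the sign flips and a single failure drags both projects into bankruptcy, which is the risk-contamination case.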

Relevance:

20.00%

Publisher:

Abstract:

Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision of where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
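The flavor of such a permutation test can be sketched as follows: permuting each variable independently across objects destroys any clustering while preserving the marginal distributions, and a clusteredness statistic (here mean nearest-neighbour distance, a simple stand-in for the paper's tree-based statistic) is compared against its permutation distribution:

```python
import random
from math import dist

def nn_stat(points):
    """Mean nearest-neighbour distance: small when points are clustered."""
    return sum(min(dist(p, q) for q in points if q is not p)
               for p in points) / len(points)

def permutation_test(points, n_perm=200, seed=0):
    """Permute each coordinate independently across objects, which
    destroys clustering while keeping the marginal distributions."""
    rng = random.Random(seed)
    observed = nn_stat(points)
    cols = list(zip(*points))
    hits = 0
    for _ in range(n_perm):
        shuffled = [list(c) for c in cols]
        for c in shuffled:
            rng.shuffle(c)
        if nn_stat(list(zip(*shuffled))) <= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # permutation p-value

# Two tight, well-separated groups: clustering should be detected.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
print(permutation_test(pts))
```

A small p-value indicates that the observed arrangement is more clustered than permuted data; the paper's test applies the same logic level by level in the dendrogram.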

Relevance:

20.00%

Publisher:

Abstract:

Working Paper no longer available. Please contact the author.

Relevance:

20.00%

Publisher:

Abstract:

We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation in which every player knows only his own realized payoff history (but not his payoff function).
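A simplified regret-matching sketch in this spirit (the paper's actual procedure also includes inertia, a positive probability of staying with the current action, which this sketch omits) might look like:

```python
import random

def regret_matching(payoff, n_rounds=2000, seed=1):
    """Two players repeatedly play a bimatrix game; each period a player
    picks an action with probability proportional to its positive regret,
    i.e. the cumulative payoff gain from having always used that action.
    payoff[p][i][j] = player p's payoff when actions (i, j) are played."""
    rng = random.Random(seed)
    n = [len(payoff[0]), len(payoff[0][0])]          # actions per player
    regret = [[0.0] * n[0], [0.0] * n[1]]
    acts = [0, 0]
    counts = {}
    for _ in range(n_rounds):
        for p in (0, 1):
            pos = [max(r, 0.0) for r in regret[p]]
            total = sum(pos)
            if total > 0:                            # else keep current action
                x = rng.random() * total
                for a, w in enumerate(pos):
                    x -= w
                    if x <= 0:
                        acts[p] = a
                        break
        a0, a1 = acts
        counts[(a0, a1)] = counts.get((a0, a1), 0) + 1
        # update regrets: payoff gain had the player used action a instead
        for a in range(n[0]):
            regret[0][a] += payoff[0][a][a1] - payoff[0][a0][a1]
        for a in range(n[1]):
            regret[1][a] += payoff[1][a0][a] - payoff[1][a0][a1]
    return {k: v / n_rounds for k, v in counts.items()}
```

On a game with strictly dominant strategies, such as a prisoner's dilemma, the empirical distribution quickly concentrates on the dominant-strategy profile, consistent with convergence to the set of correlated equilibria.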

Relevance:

20.00%

Publisher:

Abstract:

The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
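The take-the-best rule itself is easy to state in code: consult cues in decreasing order of validity and decide on the first one that discriminates between the two options. The cues and options below are invented:

```python
def take_the_best(option_a, option_b, cues):
    """Lexicographic choice: the first discriminating cue decides."""
    for cue in cues:                 # cues sorted by decreasing validity
        va, vb = cue(option_a), cue(option_b)
        if va != vb:
            return option_a if va > vb else option_b
    return option_a                  # no cue discriminates: guess/default

# Toy binary-choice task with three made-up binary cues per option.
options = {
    "Alphaville": (1, 0, 1),
    "Betatown":   (1, 1, 1),
}
cues = [lambda c: options[c][0],
        lambda c: options[c][1],
        lambda c: options[c][2]]
print(take_the_best("Alphaville", "Betatown", cues))
```

The first cue ties, so the second cue decides in favour of "Betatown"; an equal-weights (EW) rule would instead sum all cue values for each option and compare the totals.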

Relevance:

20.00%

Publisher:

Abstract:

In this paper we attempt to describe the general reasons behind the world population explosion in the 20th century. The size of the population at the end of the century in question, deemed excessive by some, was a consequence of a dramatic improvement in life expectancies, attributable, in turn, to scientific innovation, the circulation of information and economic growth. Nevertheless, fertility is a variable that plays a crucial role in differences in demographic growth. We identify infant mortality, female education levels and racial identity as important exogenous variables affecting fertility. It is estimated that in poor countries one additional year of primary schooling for women leads to 0.614 fewer children per couple on average (worldwide). While it may be possible to identify a global tendency towards convergence in demographic trends, particular attention should be paid to the case of Africa, not only due to its different demographic patterns, but also because much of the continent's population has yet to experience the improvement in quality of life generally enjoyed across the rest of the planet.

Relevance:

20.00%

Publisher:

Abstract:

Departures from pure self interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: People sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: While people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and reinterpretation of previous experiments we argue that behavior that has been presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to allocate the quasi-maximin allocation only to those players also believed to be pursuing the quasi-maximin allocation, and may sacrifice to punish unfair players.