61 results for BENCHMARK
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions can be studied rigorously and in great detail using a considerable variety of quantum dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections for a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that, to a great extent, the two strategies are not competing but complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches provide much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies and the computational effort required to account for the Coriolis couplings, are also analyzed in this paper.
Abstract:
The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions (which are not always the best fit for such ad hoc applications), and describe some user experiences with the framework, which was employed in a number of dedicated workshops on astronomical data analysis techniques.
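For illustration, the hypercube generation described above can be expressed as a pair of map and reduce functions. The sketch below is a single-process stand-in, not the Gaia framework's actual API; the catalogue field names and bin edges are hypothetical.

```python
from collections import defaultdict

import numpy as np

# Hypothetical bin edges for two catalogue fields (magnitude and parallax).
MAG_EDGES = np.linspace(5.0, 21.0, 17)
PLX_EDGES = np.linspace(0.0, 50.0, 26)

def map_record(record):
    """Map step: emit a (bin-coordinates, 1) pair for one source record."""
    mag_bin = int(np.digitize(record["mag"], MAG_EDGES))
    plx_bin = int(np.digitize(record["parallax"], PLX_EDGES))
    yield (mag_bin, plx_bin), 1

def reduce_counts(pairs):
    """Reduce step: sum the partial counts that share the same bin coordinates."""
    hypercube = defaultdict(int)
    for key, count in pairs:
        hypercube[key] += count
    return dict(hypercube)

# Local stand-in for the distributed map/shuffle/reduce pipeline.
records = [{"mag": 12.3, "parallax": 4.1}, {"mag": 17.8, "parallax": 0.9}]
pairs = [pair for rec in records for pair in map_record(rec)]
print(reduce_counts(pairs))
```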
Abstract:
Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential—a phenomenon referred to as “phase-of-firing coding” (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions—only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well-established spike timing-dependent plasticity (STDP). More precisely, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents (~10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they depend mainly on the currently applied input currents and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.
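For reference, a minimal sketch of the additive pair-based STDP rule that such simulations rely on; the amplitudes and time constants below are illustrative values, not those used in the study.

```python
import numpy as np

# Illustrative STDP parameters (not taken from the paper).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_update(w, t_pre, t_post, w_max=1.0):
    """Additive pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:
        dw = A_PLUS * np.exp(-dt / TAU_PLUS)
    else:
        dw = -A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w + dw, 0.0, w_max))

# A presynaptic spike arriving 5 ms before the postsynaptic spike strengthens the synapse.
print(stdp_update(0.5, t_pre=100.0, t_post=105.0))
```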
Abstract:
This paper describes an audio watermarking scheme based on lossy compression. The main idea is taken from an image watermarking approach in which the JPEG compression algorithm is used to determine where and how the mark should be placed. Similarly, in the audio scheme suggested in this paper, an MPEG 1 Layer 3 algorithm is chosen for compression to determine the position of the mark bits and, thus, the psychoacoustic masking of the MPEG 1 Layer 3 compression is implicitly used. This methodology provides a high degree of robustness against compression attacks. The suggested scheme is also shown to succeed against most of the StirMark benchmark attacks for audio.
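A minimal sketch of the general idea, selecting embedding positions where the signal is least altered by a lossy encode/decode round trip; `lossy_compress_decompress` is a hypothetical stand-in for the MPEG 1 Layer 3 codec (here a crude quantizer), not the paper's actual implementation.

```python
import numpy as np

def lossy_compress_decompress(signal):
    """Hypothetical stand-in for an MPEG 1 Layer 3 encode/decode round trip:
    a coarse quantization that mimics the information loss of a lossy codec."""
    return np.round(signal * 64.0) / 64.0

def select_mark_positions(signal, n_bits):
    """Pick the samples least affected by compression as candidate mark positions,
    so that the embedded bits are likely to survive a compression attack."""
    distortion = np.abs(signal - lossy_compress_decompress(signal))
    return np.argsort(distortion)[:n_bits]

rng = np.random.default_rng(0)
audio = rng.standard_normal(1024)
print(select_mark_positions(audio, n_bits=16))
```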
Abstract:
This paper deals with the design of nonregenerative relaying transceivers in cooperative systems where channel state information (CSI) is available at the relay station. The conventional nonregenerative approach is the amplify and forward (A&F) approach, where the signal received at the relay is simply amplified and retransmitted. In this paper, we propose an alternative linear transceiver design for nonregenerative relaying (including pure relaying and the cooperative transmission cases), making proper use of CSI at the relay station. Specifically, we design the optimum linear filtering performed on the data to be forwarded at the relay. As optimization criterion, we have considered the maximization of mutual information (which provides an information rate for which reliable communication is possible) for a given available transmission power at the relay station. Three different levels of CSI can be considered at the relay station: only first-hop channel information (between the source and relay); first-hop and second-hop channel (between relay and destination) information; or a third situation where the relay may have complete cooperative channel information including all the links: the first- and second-hop channels and also the direct channel between source and destination. Although the latter is a less realistic situation, since it requires the destination to inform the relay station about the direct channel, it is useful as an upper benchmark. In this paper, we consider the last two cases relating to CSI. We compare the performance so obtained with the performance of the conventional A&F approach, and also with the performance of regenerative relays and direct noncooperative transmission for two particular cases: narrowband multiple-input multiple-output transceivers and wideband single-input single-output orthogonal frequency division multiplex transmissions.
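For reference, one common schematic form of the mutual-information criterion for a linear nonregenerative relay is given below; the notation is generic (F is the relay filter, H1 and H2 the first- and second-hop channels, Qx the source covariance), and the paper's exact signal model and constraints may differ.

```latex
\max_{\mathbf{F}} \;
\log_2 \det\!\Big(\mathbf{I}
  + \mathbf{H}_2\mathbf{F}\mathbf{H}_1\,\mathbf{Q}_x\,
    \mathbf{H}_1^{H}\mathbf{F}^{H}\mathbf{H}_2^{H}\,\mathbf{R}_n^{-1}\Big)
\quad \text{s.t.} \quad
\operatorname{tr}\!\Big(\mathbf{F}\big(\mathbf{H}_1\mathbf{Q}_x\mathbf{H}_1^{H}
  + \sigma_1^{2}\mathbf{I}\big)\mathbf{F}^{H}\Big) \le P_R,
\qquad
\mathbf{R}_n = \sigma_1^{2}\,\mathbf{H}_2\mathbf{F}\mathbf{F}^{H}\mathbf{H}_2^{H}
  + \sigma_2^{2}\,\mathbf{I},
```

where R_n is the covariance of the relay noise forwarded through the second hop plus the destination noise, and P_R is the available transmission power at the relay.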
Abstract:
Phylogenetic trees representing the evolutionary relationships of homologous genes are the entry point for many evolutionary analyses. For instance, the use of a phylogenetic tree can aid in the inference of orthology and paralogy relationships, and in the detection of relevant evolutionary events such as gene family expansions and contractions, horizontal gene transfer, recombination or incomplete lineage sorting. Similarly, given the plurality of evolutionary histories among genes encoded in a given genome, there is a need for the combined analysis of genome-wide collections of phylogenetic trees (phylomes). Here, we introduce a new release of PhylomeDB (http://phylomedb.org), a public repository of phylomes. Currently, PhylomeDB hosts 120 public phylomes, comprising >1.5 million maximum likelihood trees and multiple sequence alignments. In the current release, phylogenetic trees are annotated with taxonomic, protein-domain arrangement, functional and evolutionary information. PhylomeDB is also a major source for phylogeny-based predictions of orthology and paralogy, covering >10 million proteins across 1059 sequenced species. Here we describe newly implemented PhylomeDB features, and discuss a benchmark of the orthology predictions provided by the database, the impact of proteome updates and the use of the phylome approach in the analysis of newly sequenced genomes and transcriptomes.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes, but over the last few years these problems have gone from an entertainment to an interesting research area, interesting in fact in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly regular inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
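As an illustration of the CSP view described above, the sketch below checks the all-different constraints of a Generalized Sudoku grid with m-row by n-column blocks; it is a minimal consistency checker, not the paper's generators or SAT/CSP encodings.

```python
from itertools import product

def gsp_consistent(grid, m, n):
    """Check the GSP constraints on an (m*n) x (m*n) grid with m-row by n-column
    blocks: every row, column and block holds distinct non-zero values
    (0 denotes an unfilled cell, i.e. a 'hole')."""
    order = m * n

    def all_different(cells):
        values = [v for v in cells if v != 0]
        return len(values) == len(set(values))

    rows = (list(grid[r]) for r in range(order))
    cols = ([grid[r][c] for r in range(order)] for c in range(order))
    blocks = (
        [grid[br * m + i][bc * n + j] for i in range(m) for j in range(n)]
        for br, bc in product(range(n), range(m))
    )
    return all(all_different(group) for group in (*rows, *cols, *blocks))

# A 4x4 instance (m = n = 2) with two holes; the filled cells are consistent.
example = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 0, 4, 3],
    [4, 3, 0, 1],
]
print(gsp_consistent(example, m=2, n=2))
```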
Abstract:
Purpose: There is a lack of studies on tourism demand forecasting that use non-linear models. The aim of this paper is to introduce consumer expectations into time-series models in order to analyse their usefulness for forecasting tourism demand.
Design/methodology/approach: The paper focuses on forecasting tourism demand in Catalonia for the four main visitor markets (France, the UK, Germany and Italy), combining qualitative information with quantitative models: autoregressive (AR), autoregressive integrated moving average (ARIMA), self-exciting threshold autoregression (SETAR) and Markov switching regime (MKTAR) models. The forecasting performance of the different models is evaluated for different time horizons (one, two, three, six and 12 months).
Findings: Although some differences are found between the results obtained for the different countries, when comparing the forecasting accuracy of the different techniques, ARIMA and Markov switching regime models outperform the rest of the models. In all cases, forecasts of arrivals show lower root mean square errors (RMSE) than forecasts of overnight stays. It is found that models with consumer expectations do not outperform benchmark models. These results hold for all time horizons analysed.
Research limitations/implications: This study encourages the use of qualitative information and more advanced econometric techniques in order to improve tourism demand forecasting.
Originality/value: This is the first study on tourism demand focusing specifically on Catalonia. To date, there have been no studies on tourism demand forecasting that use non-linear models such as self-exciting threshold autoregressions (SETAR) and Markov switching regime (MKTAR) models. This paper fills this gap and analyses forecasting performance at a regional level.
Keywords: Tourism, Forecasting, Consumers, Spain, Demand management
Paper type: Research paper
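A minimal sketch of the accuracy comparison used above: the root mean square error of competing forecasts over a hold-out horizon. The series and forecast values below are placeholders, not the Catalan arrivals data.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean square error over the evaluation horizon."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

# Placeholder monthly arrivals and two competing forecasts.
arrivals = [102.0, 110.0, 98.0, 120.0]
forecast_arima = [100.0, 108.0, 101.0, 118.0]
forecast_benchmark = [95.0, 115.0, 105.0, 110.0]

print(rmse(arrivals, forecast_arima))      # the lower RMSE identifies the preferred model
print(rmse(arrivals, forecast_benchmark))
```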
Abstract:
We present a model for transport in multiply scattering media based on a three-dimensional generalization of the persistent random walk. The model assumes that photons move along directions that are parallel to the axes. Although this hypothesis is not realistic, it allows us to solve exactly the problem of multiple scattering propagation in a thin slab. Among other quantities, the transmission probability and the mean transmission time can be calculated exactly. Besides being completely solvable, the model could be used as a benchmark for approximation schemes to multiple light scattering.
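A minimal sketch of an axis-aligned persistent random walk of the kind described above: photons move only along the coordinate axes and keep their direction with a fixed persistence probability at each step. The parameter values are illustrative, and the exact scattering rule of the paper's model may differ.

```python
import numpy as np

def persistent_walk_transmission(slab_thickness, step, persistence, n_photons, rng):
    """Estimate the transmission probability through a slab: photons move along
    +/- x, y or z, keeping their direction with probability `persistence` and
    otherwise picking a new axis direction at random."""
    directions = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    transmitted = 0
    for _ in range(n_photons):
        pos = np.zeros(3)
        d = directions[0]                       # photons enter along +x
        while 0.0 <= pos[0] <= slab_thickness:
            pos = pos + step * d
            if rng.random() > persistence:      # scattering event
                d = directions[rng.integers(6)]
        transmitted += pos[0] > slab_thickness  # exited through the far face
    return transmitted / n_photons

rng = np.random.default_rng(1)
print(persistent_walk_transmission(5.0, 1.0, 0.6, 2000, rng))
```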
Abstract:
Cartel detection is one of the most basic and most complicated tasks of competition authorities. In recent years, however, variance filters have provided a fairly simple tool for rejecting the existence of price-fixing, with the added advantage that the methodology requires only a low volume of data. In this paper we analyze two aspects of variance filters: (1) the relationship established between market structure and price rigidity, and (2) the use of different benchmarks for implementing the filters. This paper addresses these two issues by applying a variance filter to a gasoline retail market characterized by a set of unique features. Our results confirm the positive relationship between monopolies and price rigidity, and the variance filter's ability to detect non-competitive behavior when an appropriate benchmark is used. Our findings should serve to promote the implementation of this methodology among competition authorities, albeit in the awareness that a more exhaustive complementary analysis is required.
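For illustration, a minimal sketch of a variance filter of the kind discussed above: the coefficient of variation of a station's prices is compared with that of a competitive benchmark, and an abnormally low ratio flags suspiciously rigid pricing. The threshold and price series are illustrative, not those of the study.

```python
import numpy as np

def coefficient_of_variation(prices):
    """Standard deviation relative to the mean: the statistic behind variance filters."""
    prices = np.asarray(prices, float)
    return float(prices.std(ddof=1) / prices.mean())

def variance_filter(station_prices, benchmark_prices, ratio_threshold=0.5):
    """Flag a station whose price variability is well below that of the competitive benchmark."""
    ratio = coefficient_of_variation(station_prices) / coefficient_of_variation(benchmark_prices)
    return ratio < ratio_threshold, ratio

# Illustrative weekly retail gasoline prices.
station = [1.40, 1.40, 1.41, 1.40, 1.40, 1.41]
benchmark = [1.38, 1.42, 1.35, 1.44, 1.39, 1.43]
print(variance_filter(station, benchmark))
```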
Abstract:
An Augmented Reality game in which the user must complete small challenges by interacting with the virtual elements of the scene. These elements are displayed using markers. The project is a game in which the user has to take care of some plants. To do so, the player completes three types of challenges. These challenges are small games, that is, there are three kinds of "mini-games" within the application. Since every player has different preferences, this division into games makes the application appealing to a larger number of users. For its development, information on Augmented Reality and its historical evolution was gathered. Similar games on the market (PC, apps and video game consoles) were taken as references and as a basis of inspiration for creating the game's story. Finally, the technical requirements for the technological development, at the programming and design level, were collected. With all this information, and using Blender and Unity + Vuforia as development tools, the implementation of the game was completed.
Abstract:
We present a general algorithm for the simulation of x-ray spectra emitted from targets of arbitrary composition bombarded with kilovolt electron beams. Electron and photon transport is simulated by means of the general-purpose Monte Carlo code PENELOPE, using the standard, detailed simulation scheme. Bremsstrahlung emission is described by using a recently proposed algorithm, in which the energy of emitted photons is sampled from numerical cross-section tables, while the angular distribution of the photons is represented by an analytical expression with parameters determined by fitting benchmark shape functions obtained from partial-wave calculations. Ionization of K and L shells by electron impact is accounted for by means of ionization cross sections calculated from the distorted-wave Born approximation. The relaxation of the excited atoms following the ionization of an inner shell, which proceeds through emission of characteristic x rays and Auger electrons, is simulated until all vacancies have migrated to M and outer shells. For comparison, measurements of x-ray emission spectra generated by 20 keV electrons impinging normally on multiple bulk targets of pure elements, which span the periodic system, have been performed using an electron microprobe. Simulation results are shown to be in close agreement with these measurements.
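As a minimal illustration of drawing emission energies from numerical tables, the sketch below applies inverse-transform sampling to a tabulated distribution; the table is made up, and PENELOPE's actual sampling grids and interpolation are considerably more elaborate.

```python
import numpy as np

def sample_from_table(energies, dcs, n_samples, rng):
    """Inverse-transform sampling: build the cumulative distribution from a
    tabulated differential cross section and invert it by interpolation."""
    energies = np.asarray(energies, float)
    dcs = np.asarray(dcs, float)
    cdf = np.cumsum(dcs)
    cdf /= cdf[-1]
    return np.interp(rng.random(n_samples), cdf, energies)

# Made-up table of photon energies (keV) with a crude bremsstrahlung-like 1/E shape.
energy_grid = np.linspace(1.0, 20.0, 40)
relative_dcs = 1.0 / energy_grid
rng = np.random.default_rng(2)
print(sample_from_table(energy_grid, relative_dcs, 5, rng))
```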