68 results for attori, concorrenza, COOP, Akka, benchmark
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that have a largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
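As an illustration of the measure this abstract centres on, the following is a minimal sketch of how a fitness-distance correlation coefficient is typically computed over a sample of local optima. The toy cost and distance values, and the use of the distance to the best-known solution as the reference, are assumptions for illustration, not data from the paper.

```python
import numpy as np

def fitness_distance_correlation(fitness, distance):
    """Standard FDC coefficient: covariance of (fitness, distance to the
    best-known solution) divided by the product of their standard deviations."""
    f = np.asarray(fitness, dtype=float)
    d = np.asarray(distance, dtype=float)
    cov = np.mean((f - f.mean()) * (d - d.mean()))
    return cov / (f.std() * d.std())

# Hypothetical sample: solution costs of local optima of a set covering
# instance and their distances to the best-known solution.
costs = [512, 520, 518, 530, 541, 509]
dists = [4, 7, 6, 12, 15, 2]
print(fitness_distance_correlation(costs, dists))  # close to +1: high FDC
```

For a minimization problem, a coefficient near +1 indicates that cost decreases as solutions get closer to the best-known one, which is the "high fitness-distance correlation" regime the abstract refers to.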
Abstract:
We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of seminaive Bayesian classifiers. Altogether, 16 model selection and weighing schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers bias-variance analysis for each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice. This leads to accurate and fast classification algorithms which have an immediate and significant impact on real-world applications. Another important feature of our study is using a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
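To make "linearly combining SPODEs" concrete, here is a minimal sketch of the prediction step: each superparent-one-dependence estimator outputs class-probability estimates and the ensemble takes a weighted average of them. The toy probability table and the uniform weights are assumptions; the 16 schemes studied in the paper differ precisely in how such weights (or the subset of SPODEs) are chosen.

```python
import numpy as np

# Hypothetical per-SPODE class-probability estimates for one test instance:
# one row per SPODE (i.e., per superparent attribute), one column per class.
spode_probs = np.array([
    [0.70, 0.30],
    [0.55, 0.45],
    [0.80, 0.20],
])

# Scheme-dependent nonnegative weights (uniform here as a placeholder).
weights = np.ones(len(spode_probs)) / len(spode_probs)

ensemble = weights @ spode_probs          # linear combination of the SPODEs
prediction = int(np.argmax(ensemble))     # predicted class index
print(ensemble, prediction)
```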
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?". It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question "Are exchange rates predictable?" is "It depends": on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
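As a concrete reading of the "toughest benchmark" remark, the sketch below shows the usual out-of-sample exercise: one-step-ahead forecasts from a simple fundamentals-based linear model are compared, by RMSE, against the no-change forecast of a driftless random walk. The synthetic series and the single "fundamental" regressor are placeholders, not part of the article's empirical analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = np.cumsum(rng.normal(0, 0.01, 200)) + 1.3     # synthetic exchange rate
fundamental = rate + rng.normal(0, 0.02, 200)         # noisy toy 'fundamental'

actual = rate[100:200]            # evaluation window: forecast rate[t+1]
rw_forecast = rate[99:199]        # random walk without drift: no-change forecast

# Fit rate[t+1] = b0 + b1 * fundamental[t] on the first 100 observations.
X_train = np.column_stack([np.ones(99), fundamental[:99]])
beta, *_ = np.linalg.lstsq(X_train, rate[1:100], rcond=None)
model_forecast = np.column_stack([np.ones(100), fundamental[99:199]]) @ beta

rmse = lambda f: np.sqrt(np.mean((actual - f) ** 2))
print("RW RMSE:", rmse(rw_forecast), "model RMSE:", rmse(model_forecast))
```

A candidate predictor "resolves the puzzle" in this framing only if its RMSE (or a formal test such as Diebold-Mariano) beats the no-change forecast out of sample.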
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks which feature conjunctive goals. We show that the blind-search procedure can be used for both serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
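The following is a minimal sketch of the kind of width-bounded blind search the abstract describes, for width 1: a breadth-first search that prunes every generated state that does not make at least one atom true for the first time. The atom-set state representation and the `successors` callback are placeholder assumptions; the procedure in the paper generalizes this pruning to tuples of up to k atoms.

```python
from collections import deque

def blind_width1_search(initial_atoms, goal_atom, successors):
    """Breadth-first search that prunes states of novelty > 1, i.e. states
    in which every atom has already been seen in some earlier state.
    `successors(state)` must yield (action, frozenset_of_atoms) pairs."""
    seen_atoms = set(initial_atoms)
    queue = deque([(frozenset(initial_atoms), [])])
    while queue:
        state, plan = queue.popleft()
        if goal_atom in state:
            return plan                      # sequence of actions reaching the goal atom
        for action, nxt in successors(state):
            new_atoms = nxt - seen_atoms
            if not new_atoms:                # nothing new: novelty > 1, prune
                continue
            seen_atoms |= new_atoms
            queue.append((nxt, plan + [action]))
    return None                              # goal atom unreachable under the pruning
```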
Abstract:
The study presented here aims to understand the leisure reality of people aged 50 to 70 in the municipalities of Malla (Catalonia, Spain) and San Juan la Laguna (Sololá, Guatemala) from the humanist perspective, in terms of both conception and practice, and to see what influence and weight the characteristics of the society in which that leisure unfolds have on it. To this end, we first carried out a process of approaching the concept of leisure, together with a specific review of humanist leisure. From there, the study was conducted with a sample of ten people from the municipality of Malla and ten members of San Juan la Laguna, all between 50 and 70 years old and with different economic conditions and lifestyles. To carry out the research and the analysis of humanist leisure in the contexts of Malla and San Juan la Laguna, a qualitative methodology was used, with the interview as the data-collection instrument. The interview was designed taking the Grounded Theory methodology (Glaser and Strauss, 1967) as a frame of reference. The project also includes an ethnographic component. The results obtained show that humanist leisure has a significant presence in the contexts analysed, although in the case of San Juan la Laguna it remains an element under construction.
Abstract:
Background: Declining physical activity is associated with a rising burden of global disease. There is little evidence about effective ways to increase adherence to physical activity. Therefore, interventions are needed that produce sustained increases in adherence to physical activity and are cost-effective. The purpose is to assess the effectiveness of a primary care physical activity intervention in increasing adherence to physical activity in the general population seen in primary care. Method and design: Randomised controlled trial with systematic random sampling. A total of 424 subjects of both sexes will participate; all will be over the age of 18 with a low level of physical activity (according to the International Physical Activity Questionnaire, IPAQ), self-employed and from 9 Primary Healthcare Centres (PHC). They will volunteer to participate in a physical activity programme during 3 months (24 sessions; 2 sessions a week, 60 minutes per session). Participants from each PHC will be randomly allocated to an intervention (IG) and control group (CG). The following parameters will be assessed pre and post intervention in both groups: (1) health-related quality of life (SF-12), (2) physical activity stage of change (Prochaska's stages of change), (3) level of physical activity (IPAQ-short version), (4) change in perception of health (vignettes from the Cooperative World Organization of National Colleges, Academies, and Academic Associations of Family Physicians, COOP/WONCA), (5) level of social support for the physical activity practice (Social Support for Physical Activity Scale, SSPAS), and (6) control based on blood analysis (HDL, LDL and glycated haemoglobin). Participants' frequency of visits to the PHC will be registered over the six months before and after the programme. There will be a follow-up in a face-to-face interview three, six and twelve months after the programme, with the reduced version of IPAQ, SF-12, SSPAS, and Prochaska's stages. Discussion: The pilot study showed the effectiveness of an enhanced low-cost, evidence-based intervention in increasing physical activity and improving social support. If successful in demonstrating long-term improvements, this randomised controlled trial will be the first sustainable physical activity intervention based in primary care in our country to demonstrate long-term adherence to physical activity. Trial Registration: ClinicalTrials.gov ID: NCT00714831.
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail on a straightforward basis using a considerable variety of Quantum Dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that both strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single energy calculation. Further details, such as the calculation of reactivity at very low collision energies or the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
Abstract:
The European Space Agency's Gaia mission will create the largest and most precise three dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of the deployment of the framework on a public cloud provider, benchmark against other popular solutions available (that are not always the best for such ad-hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
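As a framework-agnostic illustration of the hypercube-generation pattern the abstract refers to, the sketch below maps each catalogue record to a (bin-tuple, count) pair and reduces by summing counts per bin. The column names, the bin width, and the use of plain Python instead of the Hadoop/MapReduce interface are assumptions for illustration only.

```python
from collections import Counter
from itertools import chain

def map_record(record, bin_width=1.0):
    """Map phase: emit the hypercube cell (a 2D histogram bin over two
    hypothetical columns) that this catalogue record falls into, with count 1."""
    key = (int(record["ra"] // bin_width), int(record["dec"] // bin_width))
    yield key, 1

def reduce_counts(pairs):
    """Reduce phase: sum the counts emitted for each cell key."""
    cells = Counter()
    for key, count in pairs:
        cells[key] += count
    return cells

records = [{"ra": 10.2, "dec": -5.7}, {"ra": 10.8, "dec": -5.1},
           {"ra": 42.0, "dec": 13.3}]
hypercube = reduce_counts(chain.from_iterable(map_record(r) for r in records))
print(hypercube)
```

In an actual MapReduce deployment the two functions run distributed over the catalogue partitions; the storage-layout and compression choices discussed in the paper affect how cheaply the map phase can read its input columns.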
Abstract:
Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential—a phenomenon referred to as “phase-of-firing coding” (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions—only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents (~10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they mainly depend on the current input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.
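For readers unfamiliar with the plasticity rule the abstract builds on, here is a minimal sketch of an additive pair-based STDP update: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with exponentially decaying windows. The time constants and learning rates are generic textbook-style values, not the parameters used in the paper.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Additive pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike (dt >= 0), depress otherwise; clip to [0, w_max]."""
    dt = t_post - t_pre                        # spike time difference in ms
    if dt >= 0:
        w += a_plus * math.exp(-dt / tau_plus)     # long-term potentiation
    else:
        w -= a_minus * math.exp(dt / tau_minus)    # long-term depression
    return min(max(w, 0.0), w_max)

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))   # pre before post -> LTP
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))   # post before pre -> LTD
```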
Abstract:
This paper describes an audio watermarking scheme based on lossy compression. The main idea is taken from an image watermarking approach where the JPEG compression algorithm is used to determine where and how the mark should be placed. Similarly, in the audio scheme suggested in this paper, an MPEG 1 Layer 3 algorithm is chosen for compression to determine the position of the mark bits and, thus, the psychoacoustic masking of the MPEG 1 Layer 3 compression is implicitly used. This methodology provides a high degree of robustness against compression attacks. The suggested scheme is also shown to succeed against most of the StirMark benchmark attacks for audio.
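A minimal sketch of the general strategy, under the assumption that it can be summarized as "embed where the codec changes little": compress and decompress the signal, keep the positions that survive the round trip almost unchanged, and place the mark bits there. The tolerance, the embedding strength, and the abstract `codec_roundtrip` callable standing in for an MPEG 1 Layer 3 encoder are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def select_mark_positions(signal, codec_roundtrip, tolerance=1e-3):
    """Return indices of samples that survive a compress/decompress cycle
    almost unchanged; those positions are candidates for carrying mark bits."""
    decoded = codec_roundtrip(signal)              # lossy encode + decode
    return np.flatnonzero(np.abs(signal - decoded) < tolerance)

def embed_bits(signal, positions, bits, strength=1e-2):
    """Embed one bit per selected position as a small +/- perturbation."""
    marked = signal.copy()
    for pos, bit in zip(positions, bits):
        marked[pos] += strength if bit else -strength
    return marked

# Toy usage with an identity 'codec' standing in for MPEG 1 Layer 3.
audio = np.random.default_rng(1).normal(size=1000)
positions = select_mark_positions(audio, codec_roundtrip=lambda x: x)
watermarked = embed_bits(audio, positions[:64], bits=[1, 0] * 32)
```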
Abstract:
This paper deals with the design of nonregenerative relaying transceivers in cooperative systems where channel state information (CSI) is available at the relay station. The conventional nonregenerative approach is the amplify and forward (A&F) approach, where the signal received at the relay is simply amplified and retransmitted. In this paper, we propose an alternative linear transceiver design for nonregenerative relaying (including pure relaying and the cooperative transmission cases), making proper use of CSI at the relay station. Specifically, we design the optimum linear filtering performed on the data to be forwarded at the relay. As the optimization criterion, we have considered the maximization of mutual information (which provides an information rate for which reliable communication is possible) for a given available transmission power at the relay station. Three different levels of CSI can be considered at the relay station: only first hop channel information (between the source and relay); first hop channel and second hop channel (between relay and destination) information; or a third situation where the relay may have complete cooperative channel information including all the links: first and second hop channels and also the direct channel between source and destination. Although the latter is a less realistic situation, since it requires the destination to inform the relay station about the direct channel, it is useful as an upper benchmark. In this paper, we consider the last two cases relating to CSI. We compare the performance so obtained with the performance of the conventional A&F approach, and also with the performance of regenerative relays and direct noncooperative transmission for two particular cases: narrowband multiple-input multiple-output transceivers and wideband single-input single-output orthogonal frequency division multiplex transmissions.
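To make the optimization criterion concrete, the sketch below evaluates the mutual information of a two-hop nonregenerative link y = H2 F (H1 x + n_r) + n_d, i.e. the pure relaying case without the direct source-destination link; this is the quantity the relay filter F is chosen to maximize subject to a power constraint. The channel sizes, noise levels, and the unscaled identity filter used in the example are assumptions for illustration, not the paper's design.

```python
import numpy as np

def af_relay_mutual_info(h1, f, h2, p_source=1.0, n_relay=0.1, n_dest=0.1):
    """Mutual information (bits/channel use) of y = H2 F (H1 x + n_r) + n_d
    with Gaussian input of covariance p_source*I and white noise at the relay
    and destination: log2 det(I + R_signal R_noise^{-1})."""
    nr = h2.shape[0]
    signal = p_source * h2 @ f @ h1 @ h1.conj().T @ f.conj().T @ h2.conj().T
    noise = n_relay * h2 @ f @ f.conj().T @ h2.conj().T + n_dest * np.eye(nr)
    det = np.linalg.det(np.eye(nr) + signal @ np.linalg.inv(noise))
    return float(np.real(np.log2(det)))

rng = np.random.default_rng(2)
h1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # first hop
h2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # second hop
print(af_relay_mutual_info(h1, np.eye(2), h2))  # plain A&F-like filter F = I
```

A CSI-aware design searches over F (under the relay power constraint) to maximize this quantity, instead of simply scaling the received signal as plain A&F does.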
Abstract:
Phylogenetic trees representing the evolutionary relationships of homologous genes are the entry point for many evolutionary analyses. For instance, the use of a phylogenetic tree can aid in the inference of orthology and paralogy relationships, and in the detection of relevant evolutionary events such as gene family expansions and contractions, horizontal gene transfer, recombination or incomplete lineage sorting. Similarly, given the plurality of evolutionary histories among genes encoded in a given genome, there is a need for the combined analysis of genome-wide collections of phylogenetic trees (phylomes). Here, we introduce a new release of PhylomeDB (http://phylomedb.org), a public repository of phylomes. Currently, PhylomeDB hosts 120 public phylomes, comprising >1.5 million maximum likelihood trees and multiple sequence alignments. In the current release, phylogenetic trees are annotated with taxonomic, protein-domain arrangement, functional and evolutionary information. PhylomeDB is also a major source for phylogeny-based predictions of orthology and paralogy, covering >10 million proteins across 1059 sequenced species. Here we describe newly implemented PhylomeDB features, and discuss a benchmark of the orthology predictions provided by the database, the impact of proteome updates and the use of the phylome approach in the analysis of newly sequenced genomes and transcriptomes.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from an entertainment to an interesting research area, and a doubly interesting one at that. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly structured nature, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the degree of balance they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem that GSP generalizes. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
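To make the CSP view of the Generalized Sudoku Problem concrete, the sketch below builds the all-different scopes (rows, columns, and rectangular m-row by n-column regions) for a GSP of side N = m*n and checks a partially filled grid against them, with 0 marking a hole. It is a modeling illustration only; the paper's actual CSP and SAT encodings and its balanced instance generators are not reproduced here.

```python
from itertools import product

def gsp_scopes(m, n):
    """All-different scopes of a Generalized Sudoku of side N = m*n:
    rows, columns, and rectangular m-row x n-column regions."""
    N = m * n
    rows = [[(r, c) for c in range(N)] for r in range(N)]
    cols = [[(r, c) for r in range(N)] for c in range(N)]
    blocks = [[(br * m + r, bc * n + c) for r in range(m) for c in range(n)]
              for br, bc in product(range(n), range(m))]
    return rows + cols + blocks

def is_valid(grid, m, n):
    """Check that no scope contains a repeated value among filled cells (0 = hole)."""
    for scope in gsp_scopes(m, n):
        values = [grid[r][c] for r, c in scope if grid[r][c] != 0]
        if len(values) != len(set(values)):
            return False
    return True

# 6x6 GSP instance with 2x3 blocks; 0 marks a hole to be filled by a solver.
grid = [[1, 2, 3, 4, 5, 6],
        [4, 5, 6, 1, 2, 3],
        [2, 3, 1, 0, 0, 0],
        [5, 6, 4, 0, 0, 0],
        [3, 1, 2, 0, 0, 0],
        [6, 4, 5, 0, 0, 0]]
print(is_valid(grid, m=2, n=3))   # True: no scope violated so far
```

A CSP model posts one all-different constraint per scope over one variable per cell; a SAT encoding expands each such constraint into clauses over Boolean variables of the form "cell (r, c) takes value v".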