917 results for MODEL SEARCH
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
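To make the traversal step concrete, here is a minimal sketch assuming the networkx library; the candidate pool, the node attribute key 'attr', and the filtering criterion are hypothetical illustrations of the idea, not the paper's actual formalism.

```python
import networkx as nx

def attribute_preserving(seed, candidates, attrs):
    """Keep candidate designs whose attribute-induced subgraph is
    isomorphic to the seed's, so the desired attributes survive the
    traversal step ('attr' and the criterion are hypothetical)."""
    ref = seed.subgraph(n for n, d in seed.nodes(data=True)
                        if d.get("attr") in attrs)
    return [g for g in candidates
            if nx.is_isomorphic(ref, g.subgraph(
                n for n, d in g.nodes(data=True) if d.get("attr") in attrs))]
```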
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with two probability distance measures, (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed that iteratively estimates the kernel width using golden section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
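A minimal sketch of the coordinate descent loop, assuming numpy. Two hedges: the paper's objective is one of the probability distance measures above, while this stand-in minimizes the negative log marginal likelihood of a squared-exponential GPR, and the inner loop uses a finite-difference gradient for the noise variance rather than the paper's fast analytic one.

```python
import numpy as np

def nll(X, y, width, noise_var):
    """Negative log marginal likelihood of a squared-exponential GPR.
    X: (N, d) inputs, y: (N,) targets."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * width ** 2)) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def golden(f, a, b, tol=1e-4):
    """Golden-section search for the minimizer of f on [a, b]."""
    r = (np.sqrt(5) - 1) / 2
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return (a + b) / 2

def fit(X, y, outer=10, lr=1e-3, eps=1e-6):
    width, noise_var = 1.0, 0.1
    for _ in range(outer):
        # coordinate 1: kernel width by golden-section search
        width = golden(lambda w: nll(X, y, w, noise_var), 0.05, 5.0)
        # coordinate 2: noise variance by gradient descent
        # (finite-difference gradient stands in for an analytic one)
        for _ in range(25):
            g = (nll(X, y, width, noise_var + eps)
                 - nll(X, y, width, noise_var - eps)) / (2 * eps)
            noise_var = max(noise_var - lr * g, 1e-6)
    return width, noise_var
```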
Abstract:
Attention is a critical mechanism for visual scene analysis. By means of attention, it is possible to break down the analysis of a complex scene into the analysis of its parts through a selection process. Empirical studies demonstrate that attentional selection is conducted on visual objects as a whole. We present a neurocomputational model of object-based selection in the framework of oscillatory correlation. By segmenting an input scene and integrating the segments with their conspicuity obtained from a saliency map, the model selects salient objects rather than salient locations. The proposed system is composed of three modules: a saliency map providing saliency values of image locations, image segmentation for breaking the input scene into a set of objects, and object selection, which allows one object of the scene to be selected at a time. The system has been applied to real gray-level and color images, and the simulation results show its effectiveness.
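A toy version of the selection step, assuming numpy; the segmentation labels and saliency map are taken as given inputs, and the oscillatory-correlation machinery is replaced by a simple argmax over per-segment mean saliency, a deliberate simplification.

```python
import numpy as np

def select_object(labels, saliency):
    """Return the mask of the most conspicuous segment.

    labels  : (H, W) int array from a segmentation module (0 = background)
    saliency: (H, W) float array from a saliency-map module
    """
    ids = [k for k in np.unique(labels) if k != 0]
    winner = max(ids, key=lambda k: saliency[labels == k].mean())
    return labels == winner  # boolean mask of the selected object
```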
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic block generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically using a back-propagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are derived; the ones for the Laguerre basis follow straightforwardly as a particular case. The main innovation is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis, and such directions can in turn be used in an optimization procedure to locate the minimum of a cost function that accounts for the error in estimating the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled; no information about the Volterra kernels is required. Examples illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
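A sketch of the pole-optimization setup for the Laguerre particular case, assuming numpy and scipy. It substitutes a linear static map for the full Volterra polynomial and lets the optimizer form a numerical Jacobian instead of the paper's exact back-propagated gradients.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def laguerre_bank(u, pole, n_filters):
    """Outputs of a discrete Laguerre filter bank with one real pole."""
    gain = np.sqrt(1.0 - pole ** 2)
    x = lfilter([gain], [1.0, -pole], u)            # low-pass front end
    cols = [x]
    for _ in range(n_filters - 1):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)  # all-pass cascade
        cols.append(x)
    return np.stack(cols, axis=1)                   # shape (N, n_filters)

def residuals(p, u, y, n_filters=4):
    """Output error for a candidate pole; the static map is refit by
    ordinary least squares at every evaluation (a linear map here, in
    place of the full Volterra polynomial)."""
    Phi = laguerre_bank(u, p[0], n_filters)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ theta - y

# Levenberg-Marquardt over the pole, as in the paper, but with a
# numerical Jacobian instead of exact back-propagated gradients:
# fit = least_squares(residuals, x0=[0.5], args=(u, y), method='lm')
```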
Abstract:
We propose an alternative formulation of the Standard Model which reduces the number of free parameters. In our framework, fermionic fields are assigned to fundamental representations of the Lorentz and the internal symmetry groups, whereas bosonic field variables transform as direct products of fundamental representations of all symmetry groups. This allows us to reduce the number of fundamental symmetries. We formulate the Standard Model by considering the SU(3) and SU(2) symmetry groups as the underlying symmetries of the fundamental interactions. This allows us to suggest a model, for the description of the interactions of the intermediate bosons among themselves and interactions of fermions, that makes use of just two parameters. One parameter characterizes the symmetric phase, whereas the other parameter (the asymmetry parameter) gives the breakdown strength of the symmetries. All coupling strengths of the Standard Model are then derived in terms of these two parameters. In particular, we show that all fermionic electric charges result from symmetry breakdown.
Abstract:
Brazil's State of São Paulo Research Foundation
Abstract:
We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm that generates the first maximal clique in O(log p) communication rounds with O(nm/p) local computation; each subsequent maximal clique then requires O(log p) communication rounds with O(m/p) local computation. The generation algorithm is based on enumerating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation per maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
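A sequential sketch of the subproblem the generation algorithm rests on, enumerating all maximal (source-to-sink) paths of a DAG in plain Python; the BSP/CGM version distributes this work across the p processors.

```python
from collections import defaultdict

def maximal_paths(edges):
    """Enumerate all maximal paths of a DAG, i.e. source-to-sink paths.
    edges is an iterable of (u, v) pairs."""
    succ, nodes, has_pred = defaultdict(list), set(), set()
    for u, v in edges:
        succ[u].append(v)
        nodes.update((u, v))
        has_pred.add(v)
    stack = [[s] for s in nodes - has_pred]  # maximal paths start at sources
    while stack:
        path = stack.pop()
        if not succ[path[-1]]:               # sink reached: path is maximal
            yield path
        for v in succ[path[-1]]:
            stack.append(path + [v])

# e.g. list(maximal_paths([(1, 2), (1, 3), (2, 4), (3, 4)]))
#      -> [[1, 3, 4], [1, 2, 4]]
```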
Abstract:
Schistosomiasis affects more than 200 million people worldwide; another 600 million are at risk of infection. The schistosomulum stage is believed to be the target of protective immunity in the attenuated cercaria vaccine model. In an attempt to identify genes up-regulated in the schistosomulum stage relative to cercaria, we explored the Schistosoma mansoni transcriptome by comparing the relative frequency of reads in EST libraries from both stages. The 400 genes potentially up-regulated in schistosomula were analyzed for their Gene Ontology categorization, and we focused on those encoding predicted proteins with no similarity to proteins of other organisms, on the assumption that they could be parasite-specific proteins important for survival in the host. Up-regulation in schistosomulum relative to cercaria was validated by real-time reverse transcription polymerase chain reaction (RT-PCR) for five of nine selected genes (56%). We tested their protective potential in mice through immunization with DNA vaccines followed by a parasite challenge. Worm burden reductions of 16-17% were observed for one of them, indicating its protective potential. Our results demonstrate the value, and the caveats, of using stage-associated EST frequency as an indication of differential expression, coupled with DNA vaccine screening, to identify novel proteins to be further investigated as vaccine candidates.
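A hypothetical re-creation of the frequency screen, assuming scipy; the gene counts, library totals, and 0.05 cutoff are illustrative, and Fisher's exact test stands in for whatever statistic the paper actually applied to the stage-associated read frequencies.

```python
from scipy.stats import fisher_exact

def stage_enriched(counts_somula, counts_cercaria,
                   total_somula, total_cercaria, alpha=0.05):
    """Flag genes whose EST read frequency is higher in schistosomula
    than in cercariae, comparing per-gene counts against library totals."""
    hits = {}
    for gene, a in counts_somula.items():
        b = counts_cercaria.get(gene, 0)
        table = [[a, total_somula - a], [b, total_cercaria - b]]
        _, p = fisher_exact(table, alternative="greater")
        if p < alpha:
            hits[gene] = p
    return hits
```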
Abstract:
Some unexpected promiscuous inhibitors were observed in a virtual screening (VS) protocol applied to select cruzain inhibitors from the ZINC database. Physical-chemical and pharmacophore model filters were used to reduce the database size. The selected compounds were docked into the cruzain active site, and six hit compounds were tested as inhibitors. Although the compounds were designed to be nucleophilically attacked by the catalytic cysteine of cruzain, three of them showed typical promiscuous behavior, revealing that false positives are a prevalent concern in VS programs.
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
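To show the shape such a mapping can take, here is an illustrative Python fragment; the pairings below are hypothetical examples and not the mapping the paper specifies, though the OPM vocabulary used (Artifact, Agent, wasControlledBy, wasDerivedFrom) is standard.

```python
# Illustrative only: each Dublin Core attribution is expanded into an
# OPM-style causal assertion (node kind plus edge). The pairings are
# hypothetical examples, not the paper's specified mapping.
DC_TO_OPM = {
    "dc:creator":     ("Agent",    "wasControlledBy"),
    "dc:contributor": ("Agent",    "wasControlledBy"),
    "dc:source":      ("Artifact", "wasDerivedFrom"),
}

def expand(dc_term, resource, value):
    """One DC assertion about `resource` becomes one causal edge."""
    kind, edge = DC_TO_OPM[dc_term]
    return {"edge": edge, "effect": resource, "cause": (kind, value)}

# expand("dc:source", "report-v2", "report-v1")
# -> {'edge': 'wasDerivedFrom', 'effect': 'report-v2',
#     'cause': ('Artifact', 'report-v1')}
```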
Abstract:
Lawrance (1991) has shown, through the estimation of consumption Euler equations, that subjective rates of impatience (time preference) in the U.S. are three to five percentage points higher for households with lower average labor incomes than for those with higher labor income. From a theoretical perspective, the sign of this correlation in a job-search model seems at first to be undetermined, since more impatient workers tend to accept wage offers that less impatient workers would not, thereby remaining unemployed for less time. The main result of this paper is showing that, regardless of the existence of effects of opposite sign, and independently of the particular specifications of the givens of the model, less impatient workers always end up, in the long run, with a higher average income. The result is based on the (unique) invariant Markov distribution of wages associated with the dynamic optimization problem solved by the consumers. An example is provided to illustrate the method.
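The long-run comparison rests on the invariant distribution of the wage Markov chain. A minimal numpy sketch of that computation follows; the transition matrix P, which in the paper is induced by each worker's reservation-wage policy, is taken here as an input.

```python
import numpy as np

def invariant_distribution(P):
    """Stationary distribution of a row-stochastic transition matrix,
    read off the eigenvector of P.T for its largest eigenvalue (= 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def long_run_mean_wage(P, wages):
    """Average wage under the invariant distribution; computing this for
    the chains induced by two discount factors compares the workers."""
    return invariant_distribution(P) @ wages
```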
Abstract:
The aim of this work is to explore competition dynamics through agent-based simulation. Building on a growing number of studies in strategy and organization theory that use simulation methods, we developed a computational model to simulate competitive situations among firms and to observe the relative efficiency of the theorized performance-improvement search methods. The study also explores possible explanations for the persistence of superior or inferior firm performance, associated with conditions of competitive advantage or disadvantage.
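A minimal agent-based sketch of the kind of performance-improvement search such a model simulates, in Python with numpy; the random landscape, firm count, and one-bit local search rule are hypothetical stand-ins for the theorized search methods.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS, N_FIRMS, ROUNDS = 10, 20, 100
_cache = {}                        # memoized random performance landscape

def performance(policy):
    key = policy.tobytes()
    if key not in _cache:
        _cache[key] = rng.random()
    return _cache[key]

def local_search(policy):
    """Flip one random policy bit; keep it only if performance improves."""
    trial = policy.copy()
    trial[rng.integers(N_BITS)] ^= 1
    return trial if performance(trial) > performance(policy) else policy

firms = [rng.integers(0, 2, N_BITS) for _ in range(N_FIRMS)]
for _ in range(ROUNDS):            # rounds of competitive adaptation
    firms = [local_search(f) for f in firms]
best = max(performance(f) for f in firms)
```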
Abstract:
A considerable dispersion can be observed in the prices that different commercial banks in Brazil charge for the same homogeneous bundle of services, a dispersion that persists over time. In an attempt to replicate this empirical observation, we developed a simple model that draws on the search-cost literature and also relies on consumer loyalty. Price data from the Brazilian banking sector are then applied to the model and some empirical exercises are carried out. These exercises allow us to: (i) estimate the search costs incurred by consumers, holding the values of the remaining parameters fixed, and (ii) estimate the corresponding deadweight losses that arise as a consequence of those search costs. When only 80% of the population is free to search for banks charging lower fees, at a monthly interest rate of 0.5%, the estimated average search cost incurred by consumers reaches BRL 1,805.80, with a corresponding average deadweight loss on the order of BRL 233.71 per consumer.
Abstract:
In this paper I claim that, in a long-run perspective, measurements of income inequality, under any of the usual inequality measures in the literature, are upward biased. The reason is that such measurements are cross-sectional by nature and therefore do not take into account the turnover in the job market, which, in the long run, equalizes within-group (e.g., same-education group) inequalities. Using a job-search model, I show how to derive the within-group invariant-distribution Gini coefficient of income inequality, how to calculate the size of the bias, and how to organize the data in order to solve the problem. Two examples are provided to illustrate the argument.
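For concreteness, the Gini coefficient of a discrete invariant wage distribution can be computed directly from its definition, as in this numpy sketch; the distribution itself would come from the job-search model's invariant Markov distribution.

```python
import numpy as np

def gini(wages, probs):
    """Gini coefficient of a discrete income distribution:
    G = sum_ij p_i p_j |w_i - w_j| / (2 * mean wage)."""
    diffs = np.abs(wages[:, None] - wages[None, :])
    return (probs @ diffs @ probs) / (2.0 * (probs @ wages))

# e.g. gini(np.array([1.0, 2.0, 4.0]), np.array([0.5, 0.3, 0.2]))
```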