90 results for Biologically inspired
Abstract:
Background: Recent advances in high-throughput technologies have produced a vast amount of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has spurred the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, knowledge-based potentials, and combinations of both.
Results: Here, we present and demonstrate a theory for splitting knowledge-based potentials into biologically meaningful scoring terms and combining them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we give the proof for a simple, linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score () we obtained predictions similar to state-of-the-art methods. Moreover, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors.
Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores have detected near-native structures as accurately as state-of-the-art methods and have successfully identified wrongly modeled regions in many near-native conformations.
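As an illustration of the composite-score idea, the sketch below standardizes several knowledge-based potential terms across a set of candidate models and sums the resulting Z-scores; the equal weights, the term count, and the random data are illustrative assumptions, not the paper's optimized combination.

```python
import numpy as np

def composite_zscore(term_energies):
    """term_energies: (n_models, n_terms) array holding each split
    scoring term evaluated on every candidate model."""
    E = np.asarray(term_energies, dtype=float)
    z = (E - E.mean(axis=0)) / E.std(axis=0)  # Z-score each term across models
    return z.sum(axis=1)                      # simple linear (equal-weight) combination

# Toy usage: 100 candidate models scored by 4 hypothetical terms.
scores = composite_zscore(np.random.rand(100, 4))
best_model = int(np.argmin(scores))           # lowest composite score ~ most native-like
```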
Abstract:
Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel carries only the red, the green, or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image inpainting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
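The level-set machinery of the paper is not reproduced here; as a point of reference, the following is a minimal sketch of the simpler local edge-directed baseline that such methods refine: green-channel interpolation on a Bayer mosaic that averages along, rather than across, the locally estimated edge direction. The single pass over pixels makes the O(S) cost explicit; the RGGB layout and the plain gradient test are assumptions for illustration.

```python
import numpy as np

def interpolate_green(mosaic, green_mask):
    """Edge-directed interpolation of green at non-green sites of a CFA
    image (interior pixels only, for brevity). One O(1) decision per
    pixel, hence O(S) overall."""
    G = np.where(green_mask, mosaic, 0.0).astype(float)
    out = G.copy()
    H, W = mosaic.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if green_mask[y, x]:
                continue
            dh = abs(G[y, x - 1] - G[y, x + 1])  # horizontal green gradient
            dv = abs(G[y - 1, x] - G[y + 1, x])  # vertical green gradient
            if dh < dv:  # edge runs horizontally: average along it
                out[y, x] = 0.5 * (G[y, x - 1] + G[y, x + 1])
            else:
                out[y, x] = 0.5 * (G[y - 1, x] + G[y + 1, x])
    return out

# Toy usage on a synthetic 8x8 RGGB mosaic (green where row+column is odd).
yy, xx = np.mgrid[0:8, 0:8]
green = (yy + xx) % 2 == 1
full_green = interpolate_green(np.random.rand(8, 8), green)
```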
Abstract:
Tone mapping is the problem of compressing the range of a High-Dynamic-Range image so that it can be displayed on a Low-Dynamic-Range screen without losing details or introducing novel ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
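As a hedged sketch of what the global visual-adaptation stage could look like, the snippet below applies a Naka-Rushton-type compression to scene luminance; the exponent and the log-mean choice of semi-saturation are illustrative assumptions, and the actual operator also accounts for cone saturation and is followed by the variational local-contrast stage.

```python
import numpy as np

def global_adaptation(L, n=0.74):
    """Naka-Rushton-type compression of a luminance map L.
    sigma (semi-saturation) is set to the log-mean luminance,
    a common adaptation-level heuristic; output lies in [0, 1)."""
    sigma = np.exp(np.mean(np.log(L + 1e-6)))
    return L**n / (L**n + sigma**n)

ldr = global_adaptation(np.random.rand(4, 6) * 1e4)  # toy HDR luminance map
```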
Abstract:
This paper breaks new ground toward contractual and institutional innovation in models of homeownership, equity building, and mortgage enforcement. Inspired by recent developments in the affordable housing sector and in other types of public financing schemes, this paper suggests extending institutional and financial strategies such as time- and place-based division of property rights, conditional subsidies, and credit mediation to alleviate the systemic risks of mortgage foreclosure. Alongside a for-profit shared equity scheme that would be led by local governments, we also outline a private market shared equity model, one of bootstrapping home buying with purchase options.
Abstract:
Quan els bojos canten (When the Mad Sing) is the result of research motivated by the question of why mad arias are the way they are. The work traces the evolution of the mad in society and culture up to the image of the madman in opera. The methodology used consisted of historical research, a general analysis of the music and the characters, and a more exhaustive analysis of three mad arias. The work thus presents an evolution of this musical madness and its effects on both the authors and their protagonists.
Abstract:
The title of this work is inspired by Billboard magazine (U.S.A.), the list of musical hits (U.S.A., 1936), and Adriano Banchieri's Cartella Musicale (Venice, 1601), intending to pay homage to the most-listened-to musics or melodies. This phenomenon, which apparently involves only reception, was very common during the 16th century (the period on which we will focus), but also long before and up to the present day. The technical term used for its study will be "borrowing" in music, developed by several musicologists but principally by Peter Burkholder. This process has involved not only reception, but also composition, performance, exchange, etc. My aim is to argue for and document this phenomenon, as far as possible, through some concrete examples.
Abstract:
In this paper we view bargaining and cooperation as an interaction superimposed on a strategic form game. A multistage bargaining procedure for N players, the proposer commitment procedure, is presented. It is inspired by Nash's two-player variable-threat model; a key feature is the commitment to threats. We establish links to classical cooperative game theory solutions, such as the Shapley value in the transferable utility case. However, we show that even in standard pure exchange economies the traditional coalitional function may not be adequate when utilities are not transferable.
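Since the procedure is linked to the Shapley value in the transferable utility case, a compact reference computation may help; the three-player majority game used as a test is an illustrative choice, not an example from the paper.

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Exact Shapley value of a TU game; v maps a frozenset of players
    to its worth. Averages marginal contributions over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: phi[p] / n_orders for p in phi}

# Three-player majority game: a coalition is worth 1 iff it has at least 2 members.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_value(("1", "2", "3"), v))  # symmetric game: each player gets 1/3
```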
Abstract:
This paper explores the integration process that firms follow to implement Supply Chain Management (SCM) and the main barriers and benefits related to this strategy. This study has been inspired by the SCM literature, especially by the logistics integration model of Stevens [1]. Due to the exploratory nature of this paper and the need to obtain an in-depth knowledge of SCM development in the Spanish grocery sector, we used the case study methodology. A multiple case study analysis based on interviews with leading manufacturers and retailers was conducted. The results of this analysis suggest that firms seem to follow the integration process proposed by Stevens, integrating internally first and then extending this integration to other supply chain members. The cases also show that Spanish manufacturers, in general, seem to have a higher level of SCM development than Spanish retailers. Regarding the benefits that SCM can bring, most of the companies identify the general objectives of cost and stock reductions and service improvements. However, with respect to the barriers found in its implementation, retailers and manufacturers do not coincide: manufacturers seem to see more barriers in aspects related to the other party, such as distrust and a lack of a culture of sharing information, while retailers identify as the main barriers the need for know-how, the company culture, and history and habits.
Abstract:
This Article breaks new ground toward contractual and institutional innovation in models of homeownership, equity building, and mortgage enforcement. Inspired by recent developments in the affordable housing sector and other types of public financing schemes, we suggest extending institutional and financial strategies such as time- and place-based division of property rights, conditional subsidies, and credit mediation to alleviate the systemic risks of mortgage foreclosure. Two new solutions offer a broad theoretical basis for such developments in the economic and legal institution of homeownership: a for-profit shared equity scheme led by local governments alongside a private market shared equity model, one of "bootstrapping home buying with purchase options".
Abstract:
Departures from pure self-interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: people sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: while people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw their willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and a reinterpretation of previous experiments we argue that behavior presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to implement the quasi-maximin allocation, but only for those players believed to be pursuing it as well, and may sacrifice to punish unfair players.
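One common way to write quasi-maximin preferences, used here purely as an illustration (the weights lam and delta are hypothetical parameters, not estimates from the paper), is a convex combination of own payoff and a disinterested social criterion that mixes the minimum payoff with total surplus:

```python
def quasi_maximin_utility(payoffs, i, lam=0.3, delta=0.5):
    """Utility of player i over a payoff profile: weight (1 - lam) on own
    payoff, weight lam on a social criterion mixing the lowest payoff
    (maximin concern, weight delta) with total surplus (weight 1 - delta)."""
    social = delta * min(payoffs) + (1 - delta) * sum(payoffs)
    return (1 - lam) * payoffs[i] + lam * social

# Sacrificing 1 unit to raise the worst-off recipient by 3 can raise utility:
print(quasi_maximin_utility([5, 1], i=0))  # 4.55
print(quasi_maximin_utility([4, 4], i=0))  # 4.6
```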
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
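For reference, the risk-dominant equilibrium of a symmetric 2x2 coordination game can be read off the deviation losses; in Carlsson and Van Damme's global games, play under vanishing signal noise selects exactly this equilibrium. The stag-hunt payoffs in the usage line are an illustrative assumption.

```python
def risk_dominant(a, b, c, d):
    """Symmetric 2x2 coordination game with u(A,A)=a, u(A,B)=b,
    u(B,A)=c, u(B,B)=d and a > c, d > b (two strict pure equilibria).
    (A,A) risk-dominates (B,B) iff its deviation-loss product is larger,
    (a - c)**2 > (d - b)**2, i.e. iff A is the best reply to a 50-50 belief."""
    return "A" if (a - c) > (d - b) else "B"

# Stag hunt: (A,A) Pareto-dominates, yet (B,B) is risk-dominant here.
print(risk_dominant(a=4, b=0, c=3, d=2))  # -> B
```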
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables as well as preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
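A minimal sketch of the extra weight-estimation step, assuming a least-squares stress criterion and SciPy's bounded optimizer (the function and variable names are mine, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def fit_variable_weights(X, target_d):
    """Estimate nonnegative per-variable weights w so that the weighted
    Euclidean distances between the rows of X best fit target_d
    (a condensed distance vector, as returned by pdist)."""
    def stress(w):
        d = pdist(X * np.sqrt(w))        # weighted Euclidean row distances
        return np.sum((d - target_d) ** 2)
    w0 = np.ones(X.shape[1])
    res = minimize(stress, w0, bounds=[(0.0, None)] * X.shape[1])
    return res.x

# Sanity check: when the target is the plain Euclidean distance,
# unit weights are (trivially) optimal.
X = np.random.rand(10, 3)
print(fit_variable_weights(X, pdist(X)))  # ~ [1, 1, 1]
```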
Abstract:
While the theoretical industrial organization literature has long argued that excess capacity can be used to deter entry into markets, there is little empirical evidence that incumbent firms effectively behave in this way. Bagwell and Ramey (1996) propose a game with a specific sequence of moves and partially recoverable capacity costs in which forward induction provides a theoretical rationalization for firm behavior in the field. We conduct an experiment with a game inspired by their work. In our data the incumbent tends to keep the market, in contrast to what the forward induction argument of Bagwell and Ramey would suggest. The results indicate that players perceive that the first mover has an advantage without having to pre-commit capacity. In our game, evolution and learning do not drive out this perception. We back these claims with data analysis, a theoretical framework for dynamics, and simulation results.
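The abstract does not spell out its framework for dynamics; as a generic illustration of the kind of evolutionary check involved, here is a toy discrete-time replicator dynamic for a symmetric two-strategy game (the payoff matrix, starting shares, and step size are all assumptions of this sketch):

```python
import numpy as np

def replicator(A, x0, steps=2000, dt=0.01):
    """Discrete-time replicator dynamic: the share of each strategy grows
    in proportion to its fitness advantage over the population average."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = A @ x                        # fitness of each strategy
        x = x + dt * x * (f - x @ f)     # x @ f is the average fitness
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

A = np.array([[4.0, 0.0], [3.0, 2.0]])   # illustrative coordination payoffs
print(replicator(A, [0.5, 0.5]))         # converges to one of the equilibria
```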
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
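Once the weights are estimated, the classical decomposition step can be sketched as follows; the column centering and the choice of principal (row) versus standard (column) coordinates are conventional biplot choices assumed here, not necessarily the paper's exact ones.

```python
import numpy as np

def weighted_biplot(X, w, k=2):
    """SVD-based biplot of a cases-by-variables matrix X under variable
    weights w: rows get principal coordinates, columns standard ones,
    so that rows @ cols.T reconstructs the weighted, centered matrix."""
    Xc = X - X.mean(axis=0)              # center each variable
    Y = Xc * np.sqrt(w)                  # apply square-root weights
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    rows = U[:, :k] * s[:k]              # individuals
    cols = Vt[:k].T                      # variables
    return rows, cols

rows, cols = weighted_biplot(np.random.rand(10, 4), w=np.ones(4))
```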
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare?
Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge, and the environment.
We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion and, second, the Sustainable Growth Optimization criterion, which maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property.
The computed paths yield levels of utility higher than the level at the reference year 2000 for all generations. They require doubling the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and investment in physical capital.
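The calibrated multigenerational model and the turnpike-inspired algorithm are beyond an abstract-level sketch, but a toy intergenerational maximin problem (one-good technology and log utility, both assumptions of this illustration) shows the structure of the criterion:

```python
import numpy as np
from scipy.optimize import minimize

def maximin_savings(k0=1.0, T=5, alpha=0.3, dep=0.1):
    """Choose each generation's saving rate s[t] to maximize the minimum
    utility along a T-generation capital path (Intergenerational Maximin)."""
    def neg_min_utility(s):
        k, utils = k0, []
        for t in range(T):
            y = k ** alpha                        # output from current capital
            utils.append(np.log((1 - s[t]) * y))  # consume the unsaved output
            k = (1 - dep) * k + s[t] * y          # capital passed on
        return -min(utils)
    res = minimize(neg_min_utility, x0=np.full(T, 0.2),
                   bounds=[(0.0, 0.9)] * T)
    return res.x

print(np.round(maximin_savings(), 3))  # maximin tends to equalize utilities
```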