13 results for Complexity of Distribution
in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco
Abstract:
This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). This package can be used to solve single- and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs. It is also conceived as an open package that allows users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process, and visualize the structures learned by the probabilistic models, which can unveil previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
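Conceptually, every EDA in the package iterates a select-learn-sample loop over a probabilistic model. The sketch below illustrates that generic loop with a univariate marginal model (UMDA) on a toy OneMax problem; it is plain Python written for illustration and does not reflect Mateda-2.0's actual MATLAB API.

```python
# Illustrative sketch of the generic EDA loop (UMDA on OneMax); this is not
# Mateda-2.0's MATLAB API, only the select/learn/sample scheme it builds on.
import numpy as np

def umda(fitness, n_vars=50, pop_size=100, n_sel=50, n_gens=100, seed=0):
    rng = np.random.default_rng(seed)
    probs = np.full(n_vars, 0.5)                    # univariate marginal model
    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_vars)) < probs).astype(int)  # sample
        fit = np.apply_along_axis(fitness, 1, pop)
        sel = pop[np.argsort(fit)[-n_sel:]]         # truncation selection
        probs = sel.mean(axis=0).clip(0.05, 0.95)   # learn smoothed marginals
    return probs

model = umda(lambda x: x.sum())                     # OneMax: count of ones
print(model.round(2))                               # marginals drift towards 1
```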
Abstract:
Recently, probability models on rankings have been proposed in the field of estimation of distribution algorithms in order to solve permutation-based combinatorial optimisation problems. In particular, distance-based ranking models, such as the Mallows and Generalized Mallows models under the Kendall's-τ distance, have demonstrated their validity for solving this type of problem. Nevertheless, there are still many trends that deserve further study. In this paper, we extend the use of distance-based ranking models in the framework of EDAs by introducing new distance metrics, namely Cayley and Ulam. In order to analyse the performance of the Mallows and Generalized Mallows EDAs under the Kendall, Cayley and Ulam distances, we run them on a benchmark of 120 instances from four well-known permutation problems. The conducted experiments showed that no single metric performs best on all the problems. However, the statistical tests pointed out that the Mallows-Ulam EDA is the most stable algorithm among the studied proposals.
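For reference, a distance-based ranking model assigns a permutation sigma a probability proportional to exp(-theta * d(sigma, sigma_0)), where sigma_0 is a central permutation and d is the chosen metric. A minimal sketch of this unnormalized weight under the Kendall and Cayley distances follows; the helpers are straightforward reference implementations, not the authors' code (Ulam, computable as n minus the length of a longest increasing subsequence, is omitted for brevity).

```python
# Reference implementations (illustrative, not the authors' code) of the
# unnormalized Mallows weight P(sigma) ~ exp(-theta * d(sigma, sigma_0)).
import itertools
import math

def kendall(sigma, pi):
    # Kendall's-tau distance: number of discordant pairs between permutations.
    pos = {v: i for i, v in enumerate(pi)}
    s = [pos[v] for v in sigma]
    return sum(1 for i, j in itertools.combinations(range(len(s)), 2)
               if s[i] > s[j])

def cayley(sigma, pi):
    # Cayley distance: minimum transpositions, i.e. n minus number of cycles.
    pos = {v: i for i, v in enumerate(pi)}
    perm = [pos[v] for v in sigma]
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return len(perm) - cycles

def mallows_weight(sigma, center, theta, dist=kendall):
    return math.exp(-theta * dist(sigma, center))

print(mallows_weight((2, 0, 1, 3), (0, 1, 2, 3), theta=0.5))          # Kendall
print(mallows_weight((2, 0, 1, 3), (0, 1, 2, 3), theta=0.5, dist=cayley))
```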
Abstract:
Aiming at an integrated and mechanistic view of the early biological effects of selected metals in the marine sentinel organism Mytilus galloprovincialis, we exposed mussels for 48 hours to 50, 100 and 200 nM solutions of equimolar Cd, Cu and Hg salts and measured cytological and molecular biomarkers in parallel. Focusing on the mussel gills, the first target of toxic water contaminants and an actively proliferating tissue, we detected significant dose-related increases of cells with micronuclei and other nuclear abnormalities in the treated mussels, with differences in the bioconcentration of the three metals determined in the mussel flesh by atomic absorption spectrometry. Gene expression profiles, determined in the same individual gills in parallel, revealed some transcriptional changes at the 50 nM dose, and substantial increases of differentially expressed genes at the 100 and 200 nM doses, with roughly similar amounts of up- and down-regulated genes. The functional annotation of gill transcripts with consistent expression trends, significantly altered in at least one dose point, disclosed the complexity of the induced cell response. The most evident transcriptional changes concerned protein synthesis and turnover, ion homeostasis, cell cycle regulation and apoptosis, and intracellular trafficking (transcript sequences denoting heat shock proteins, metal-binding thioneins, sequestosome 1 and proteasome subunits, and GADD45 exemplify up-regulated genes, while transcript sequences denoting actin, tubulins and the apoptosis inhibitor 1 exemplify down-regulated genes). Overall, nanomolar doses of co-occurring free metal ions induced significant structural and functional changes in the mussel gills: the intensity of the response to the stimulus measured in the laboratory supports the additional validation of molecular markers of metal exposure to be used in Mussel Watch programs.
Abstract:
Background: Gastrointestinal stromal tumours (GISTs) are the most common primary mesenchymal neoplasms in the gastrointestinal tract, although they represent only a small fraction of total gastrointestinal malignancies in adults (<2%). GISTs can be located at any level of the gastrointestinal tract; the stomach is the most common location (60-70%), in contrast to the rectum, which is the rarest (4%). When a GIST invades the adjacent prostate tissue, it can simulate prostate cancer. In this study, we report on a case comprising the unexpected collision between a rectal GIST tumour and a prostatic adenocarcinoma. Findings: We describe the complexity of the clinical, endoscopic and radiological diagnosis, of the differential diagnosis based on tumour biopsy, and of the role of neoadjuvant therapy using imatinib prior to surgical treatment. Conclusions: Although isolated cases of coexisting GISTs and prostatic adenocarcinomas have previously been described, this is the first reported case in the medical literature of a collision tumour involving rectal GIST and prostatic adenocarcinoma components.
Abstract:
Purpose: Retinal ganglion cells (RGCs) are exposed to injury in a variety of optic nerve diseases, including glaucoma. However, not all cells respond in the same way to damage, and the capacity of individual RGCs to survive or regenerate is variable. In order to elucidate factors that may be important for RGC survival and regeneration, we have focussed on the extracellular matrix (ECM) and RGC integrin expression. Our specific questions were: (1) Do adult RGCs express particular sets of integrins in vitro and in vivo? (2) Can the nature of the ECM influence the expression of different integrins? (3) Can the nature of the ECM affect the survival of the cells and the length or branching complexity of their neurites? Methods: Primary RGC cultures from adult rat retina were placed on glass coverslips treated with different substrates: poly-L-lysine (PL), or PL plus laminin (L), collagen I (CI), collagen IV (CIV) or fibronectin (F). After 10 days in culture, we performed double immunostaining with an antibody against beta III-tubulin to identify the RGCs, and antibodies against the integrin subunits alpha V, alpha 1, alpha 3, alpha 5, beta 1 or beta 3. The number of adhering and surviving cells, the number and length of the neurites, and the expression of the integrin subunits on the different substrates were analysed. Results: PL and L were associated with the greatest survival of RGCs, while CI provided the least favourable conditions. The type of substrate affected the number and length of neurites; L stimulated the longest growth. We found at least three different types of RGCs in terms of their capacity to regenerate and extend neurites. The different combinations of integrins expressed by the cells growing on different substrata suggest that RGCs expressed predominantly alpha 1 beta 1 or alpha 3 beta 1 on L, alpha 1 beta 1 on CI and CIV, and alpha 5 beta 3 on F. The activity of the integrins was demonstrated by the phosphorylation of focal adhesion kinase (FAK). Conclusions: Adult rat RGCs can survive and grow in the presence of the different ECM substrates tested. Further studies should be done to elucidate the molecular characteristics of the RGC subtypes in order to understand the possibly different sensitivity of RGCs to damage in diseases like glaucoma, in which not all RGCs die at the same time.
Abstract:
Cardiovascular diseases are nowadays the leading cause of mortality worldwide, causing around 30% of global deaths each year. The risk of suffering from cardiovascular illness is strongly related to factors such as hypertension, high cholesterol levels, diabetes and obesity. The combination of these different risk factors is known as metabolic syndrome, and it is considered a pandemic due to its high prevalence worldwide. The pathology of these disorders calls for a combined cardiovascular therapy with drugs that have different targets and mechanisms of action, to regulate each factor separately. The simultaneous analysis of these drugs is of great interest, but it is a complex task, since the determination of multiple substances with different physicochemical properties and physiological behavior is always a challenge for the analytical chemist. The complexity of the biological matrices and the differences in the expected concentrations of some analytes require the development of extremely sensitive and selective determination methods. The aim of this work is to fill the gap existing in this field of drug analysis by developing analytical methods capable of simultaneously quantifying the different drugs prescribed in combined cardiovascular therapy. Liquid chromatography tandem mass spectrometry (LC-MS/MS) has been the technique of choice throughout the main part of this work, due to the high sensitivity and selectivity requirements.
Abstract:
In this paper we empirically investigate which structural characteristics can help to predict the complexity of NK-landscape instances for estimation of distribution algorithms. To this end, we evolve instances that maximize the complexity of the estimation of distribution algorithm, measured in terms of its success rate. Similarly, instances that minimize the algorithm's complexity are evolved. We then identify network measures, computed from the structures of the NK-landscape instances, that show a statistically significant difference between the sets of easy and hard instances. The features identified are consistently significant for different values of N and K.
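As background, an NK-landscape scores a binary string of length N by averaging N random subfunctions, each depending on one variable and K neighbours. The sketch below builds such an instance with an adjacent neighbourhood and random contribution tables; these are illustrative defaults, not the evolved easy or hard instances studied in the paper.

```python
# Illustrative NK-landscape with adjacent neighbourhoods and random tables;
# not the evolved instances from the paper.
import numpy as np

def make_nk(n, k, seed=0):
    rng = np.random.default_rng(seed)
    # variable i interacts with itself and its k right-hand neighbours
    neighbors = [[(i + j) % n for j in range(k + 1)] for i in range(n)]
    tables = rng.random((n, 2 ** (k + 1)))     # random subfunction values
    def fitness(x):
        total = 0.0
        for i in range(n):
            idx = 0
            for v in neighbors[i]:
                idx = (idx << 1) | int(x[v])   # encode the (k+1)-bit context
            total += tables[i, idx]
        return total / n                       # average of the n subfunctions
    return fitness

f = make_nk(n=20, k=2)
x = np.random.default_rng(1).integers(0, 2, 20)
print(f(x))
```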
Abstract:
Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than does the standard tri-chromatic color representation. However, because of the high dimensionality and complexity of hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data, with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures; in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
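As a point of reference for the PCA baseline named above, a standard band-decorrelation step projects each pixel's spectrum onto the leading eigenvectors of the band covariance matrix. The sketch below assumes a small random cube and an arbitrary component count, purely for illustration.

```python
# Illustrative PCA decorrelation of hyper-spectral bands; cube shape and
# component count are arbitrary assumptions for the example.
import numpy as np

def pca_decorrelate(cube, n_components=10):
    rows, cols, bands = cube.shape             # cube: (rows, cols, bands)
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                        # center each band
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    top = vecs[:, ::-1][:, :n_components]      # leading eigenvectors
    return (X @ top).reshape(rows, cols, n_components)

cube = np.random.default_rng(0).random((32, 32, 64))   # dummy 64-band image
print(pca_decorrelate(cube).shape)                     # (32, 32, 10)
```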
Abstract:
The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm, from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
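Since the fractal trees are presented as an extension of Chow and Liu's algorithm, the k = 2 base case is worth sketching: the maximum likelihood tree structure is the maximum spanning tree under pairwise mutual information. The following is a plain reference implementation of that base case (binary data, Kruskal's algorithm), not the authors' code.

```python
# Reference sketch of Chow and Liu's algorithm (the k = 2 case): a maximum
# spanning tree over pairwise mutual information, built with Kruskal's
# algorithm on binary data. Illustrative, not the authors' implementation.
from itertools import combinations
import numpy as np

def mutual_info(x, y):
    joint = np.histogram2d(x, y, bins=2)[0] / len(x)   # empirical joint
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum()

def chow_liu_edges(data):
    n = data.shape[1]
    edges = sorted(((mutual_info(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))                    # union-find for Kruskal
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                           # keep edge if it joins two trees
            parent[ri] = rj
            tree.append((i, j))
    return tree

data = np.random.default_rng(0).integers(0, 2, (500, 5))
print(chow_liu_edges(data))                    # 4 maximum-MI tree edges
```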
Abstract:
33 p.
Abstract:
Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically-rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. Results: We use a well-curated Linked Dataset - OpenLifeData - and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through a comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval but simultaneously provides protection against resource-intensive queries. Conclusions: We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData services can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
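For comparison, the "traditional SPARQL" access path that the services are benchmarked against amounts to a direct endpoint query. A minimal Python sketch using SPARQLWrapper follows; the endpoint URL and the query are placeholders assumed for illustration, not documented OpenLifeData addresses.

```python
# Illustrative direct-SPARQL access; the endpoint URL and query are assumed
# placeholders, not documented OpenLifeData addresses.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/openlifedata/sparql")  # hypothetical
endpoint.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```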
Abstract:
The past years have seen an increasing debate on cooperation and its uniquely human character. Philosophers and psychologists have proposed that cooperative activities are characterized by shared goals to which participants are committed through the ability to understand each other's intentions. Despite its popularity, some serious issues arise with this approach to cooperation. First, one may challenge the assumption that high-level mental processes are necessary for acting cooperatively. If they are, then how do agents that do not possess such an ability (preverbal children, or children with autism, who are often claimed to be mind-blind) engage in cooperative exchanges, as the evidence suggests they do? Secondly, defining cooperation as the result of two de-contextualized minds reading each other's intentions may fail to fully acknowledge the complexity of situated, interactional dynamics and the interplay of variables such as the participants' relational and personal history and experience. In this paper we challenge such accounts of cooperation, calling for an embodied approach that sees cooperation not only as an individual attitude toward the other, but also as a property of interaction processes. Taking an enactive perspective, we argue that cooperation is an intrinsic part of any interaction, and that there can be cooperative interaction before complex communicative abilities are achieved. The issue then is not whether one is able to read the other's intentions, but what it takes to participate in joint action. From this basic account, it should be possible to build up more complex forms of cooperation as needed. Addressing the study of cooperation in these terms may enhance our understanding of human social development, and foster our knowledge of different ways of engaging with others, as in the case of autism.
Abstract:
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the global optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
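A toy sketch of the round-robin idea is given below: agents hold independent Q-tables and take strictly alternating turns to select, execute, and update, so each agent faces a stationary environment during its turn; vetoed state-action pairs stand in for the MSAV mechanism. The environment, rewards, and parameters are illustrative assumptions, not the paper's hose transportation task.

```python
# Toy sketch of round-robin Q-learning with a stand-in for MSAV; environment,
# rewards, and parameters are illustrative assumptions.
import random
from collections import defaultdict

class RoundRobinQ:
    def __init__(self, n_agents, actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = [defaultdict(float) for _ in range(n_agents)]  # local Q-tables
        self.vetoed = [set() for _ in range(n_agents)]  # MSAV: banned (s, a)
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, agent, state):
        legal = [a for a in self.actions if (state, a) not in self.vetoed[agent]]
        if random.random() < self.eps:
            return random.choice(legal)                  # explore
        return max(legal, key=lambda a: self.q[agent][(state, a)])

    def update(self, agent, s, a, r, s2, undesired=False):
        if undesired:                # veto pairs that led to a UTS, no TD update
            self.vetoed[agent].add((s, a))
            return
        best = max(self.q[agent][(s2, b)] for b in self.actions)
        self.q[agent][(s, a)] += self.alpha * (r + self.gamma * best
                                               - self.q[agent][(s, a)])

# Round-robin schedule: exactly one agent selects, executes, and updates per
# step, so the other agents' policies are frozen from its point of view.
learner = RoundRobinQ(n_agents=2, actions=[0, 1])
for step in range(100):
    agent = step % 2
    a = learner.act(agent, "s0")
    learner.update(agent, "s0", a, r=float(a), s2="s0")
```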