884 results for Minimal-complexity classifier


Relevance:

20.00%

Publisher:

Abstract:

Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that lies behind Bayesian networks. This paper explores the computational complexity of inferences in semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.

Relevance:

20.00%

Publisher:

Abstract:

Airway smooth muscle constriction induced by cholinergic agonists such as methacholine (MCh), which is typically increased in asthmatic patients, is regulated mainly by muscle muscarinic M3 receptors and negatively by vagal muscarinic M2 receptors. Here we evaluated basal (intrinsic) and allergen-induced (extrinsic) airway responses to MCh. We used two mouse lines selected to respond maximally (AIRmax) or minimally (AIRmin) to innate inflammatory stimuli. We found that under basal conditions AIRmin mice responded more vigorously to MCh than AIRmax. Treatment with a specific M2 antagonist increased the airway response of AIRmax but not of AIRmin mice. The expression of M2 receptors in the lung was significantly lower in AIRmin compared to AIRmax animals. AIRmax mice developed more intense allergic inflammation than AIRmin, and both allergic mouse lines increased airway responses to MCh. However, gallamine treatment of allergic groups did not affect the responses to MCh. Our results confirm that low or dysfunctional M2 receptor activity is associated with increased airway responsiveness to MCh, and that this trait was inherited during the selective breeding of AIRmin mice and was acquired by AIRmax mice during allergic lung inflammation.

Relevance:

20.00%

Publisher:

Abstract:

The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used. In this work we make a somewhat surprising claim. There is an invariance that the community seems to have missed: complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar, but complex, objects belong in a sparser and larger-diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus retaining the use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
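To make the idea concrete, below is a minimal sketch of a complexity-invariant distance in the spirit of this abstract, assuming the base distance is Euclidean and the complexity estimate is the length of the "stretched-out" series (the root of the sum of squared successive differences), as in the published CID formulation; the helper names and the toy data are illustrative only.

```python
import numpy as np

def complexity_estimate(ts):
    """Estimate the 'complexity' of a series as the length of the stretched-out
    curve: the square root of the sum of squared successive differences."""
    return np.sqrt(np.sum(np.diff(ts) ** 2))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance scaled by a correction
    factor that pushes apart pairs with very different complexities."""
    q, c = np.asarray(q, dtype=float), np.asarray(c, dtype=float)
    ed = np.linalg.norm(q - c)                            # base Euclidean distance
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    cf = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)    # correction factor >= 1
    return ed * cf

# A smooth series and a jagged version of it: plain Euclidean distance ignores
# the difference in complexity, while CID inflates the distance accordingly.
smooth = np.sin(np.linspace(0, 2 * np.pi, 100))
jagged = smooth + 0.3 * np.random.default_rng(0).standard_normal(100)
print(np.linalg.norm(smooth - jagged), cid(smooth, jagged))
```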

Relevance:

20.00%

Publisher:

Abstract:

Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfying solutions that account for both accuracy and system complexity have not been found. From the perspective of lightweight mobile devices, both characteristics are extremely important, because processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it offers high accuracy and low computational complexity.
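As a rough illustration of the approach described, the sketch below trains a boosted ensemble of shallow decision trees on hypothetical Wi-Fi RSSI fingerprints labelled by room; the synthetic data, feature layout, and scikit-learn AdaBoost setup are assumptions made for the example, not the authors' actual system.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical fingerprint data: each row holds the RSSI (dBm) measured from a
# fixed set of access points, each label is the room/cell where it was recorded.
rng = np.random.default_rng(42)
n_samples, n_access_points, n_rooms = 600, 8, 4
X = rng.normal(loc=-60, scale=10, size=(n_samples, n_access_points))
y = rng.integers(0, n_rooms, size=n_samples)
# Make the toy problem learnable: one access point is stronger in each room.
X[np.arange(n_samples), y % n_access_points] += 20

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosting over shallow ("weak") decision trees keeps prediction cheap:
# classifying a fingerprint costs a handful of threshold comparisons per tree.
clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
    random_state=0,
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```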

Relevance:

20.00%

Publisher:

Abstract:

Introduction. Postnatal neurogenesis in the hippocampal dentate gyrus can be modulated by numerous determinants, such as hormones, transmitters and stress. Among the factors positively interfering with neurogenesis, the complexity of the environment appears to play a particularly striking role. Adult mice reared in an enriched environment produce more neurons and exhibit better performance in hippocampus-specific learning tasks. While the effects of complex environments on hippocampal neurogenesis are well documented, there is a lack of information on the effects of living under socio-sensory deprivation conditions. Due to the immaturity of rats and mice at birth, studies dealing with the effects of environmental enrichment on hippocampal neurogenesis were carried out in adult animals, i.e. during a period with a relatively low rate of neurogenesis. The impact of environment is likely to be more dramatic during the first postnatal weeks, because at this time granule cell production is remarkably higher than at later phases of development. The aim of the present research was to clarify whether and to what extent isolated or enriched rearing conditions affect hippocampal neurogenesis during the early postnatal period, a time window characterized by a high rate of precursor proliferation, and to elucidate the mechanisms underlying these effects. The experimental model chosen for this research was the guinea pig, a precocious rodent which, at 4-5 days of age, can be independent of maternal care. Experimental design. Animals were assigned to a standard (control), an isolated, or an enriched environment a few days after birth (P5-P6). On P14-P17 animals received one daily bromodeoxyuridine (BrdU) injection, to label dividing cells, and were sacrificed either on P18, to evaluate cell proliferation, or on P45, to evaluate cell survival and differentiation. Methods. Brain sections were processed for BrdU immunohistochemistry, to quantify the newborn and surviving cells. The phenotype of the surviving cells was examined by means of confocal microscopy and immunofluorescent double-labeling for BrdU and either a marker of neurons (NeuN) or a marker of astrocytes (GFAP). Apoptotic cell death was examined with the TUNEL method. Serial sections were processed for immunohistochemistry for i) vimentin, a marker of radial glial cells, ii) BDNF (brain-derived neurotrophic factor), a neurotrophin involved in neuron proliferation/survival, iii) PSA-NCAM (the polysialylated form of the neural cell adhesion molecule), a molecule associated with neuronal migration. Total granule cell number in the dentate gyrus was evaluated by stereological methods, in Nissl-stained sections. Results. Effects of isolation. In P18 isolated animals we found reduced cell proliferation (-35%) compared to controls and a lower expression of BDNF. Though in absolute terms P45 isolated animals had fewer surviving cells than controls, they showed no differences in survival rate and phenotype percent distribution compared to controls. Evaluation of the absolute number of surviving cells of each phenotype showed that isolated animals had a smaller number of cells with a neuronal phenotype than controls. Looking at the location of the new neurons, we found that while in control animals 76% of them had migrated to the granule cell layer, in isolated animals only 55% of the new neurons had reached this layer.
Examination of radial glia cells of P18 and P45 animals by vimentin immunohistochemistry showed that in isolated animals radial glia cells were reduced in density and had fewer and shorter processes. Granule cell counts revealed that isolated animals had fewer granule cells than controls (-32% at P18 and -42% at P45). Effects of enrichment. In P18 enriched animals there was an increase in cell proliferation (+26%) compared to controls and a higher expression of BDNF. Though in both groups there was a decline in the number of BrdU-positive cells by P45, enriched animals had more surviving cells (+63%) and a higher survival rate than controls. No differences were found between control and enriched animals in phenotype percent distribution. Evaluation of the absolute number of cells of each phenotype showed that enriched animals had a larger number of cells of each phenotype than controls. Looking at the location of cells of each phenotype, we found that enriched animals had more new neurons in the granule cell layer and more astrocytes and cells with undetermined phenotype in the hilus. Enriched animals had a higher expression of PSA-NCAM in the granule cell layer and hilus. Vimentin immunohistochemistry showed that in enriched animals radial glia cells were more numerous and had more processes. Granule cell counts revealed that enriched animals had more granule cells than controls (+37% at P18 and +31% at P45). Discussion. Results show that isolation rearing reduces hippocampal cell proliferation but does not affect cell survival, while enriched rearing increases both cell proliferation and cell survival. Changes in the expression of BDNF are likely to contribute to the effects of environment on precursor cell proliferation. The reduction and increase in the final number of granule neurons in isolated and enriched animals, respectively, are attributable to the effects of environment on cell proliferation and survival and not to changes in the differentiation program. As radial glia cells play a pivotal role in guiding neurons to the granule cell layer, the reduced number of radial glia cells in isolated animals and the increased number in enriched animals suggest that the size of the radial glia population may change dynamically, in order to match changes in neuron production. The high PSA-NCAM expression in enriched animals may help to favor the survival of the new neurons by facilitating their migration to the granule cell layer. Conclusions. By using a precocious rodent we could demonstrate that isolated/enriched rearing conditions, at a time window during which intense granule cell proliferation takes place, lead to a notable decrease/increase of total granule cell number. The time-course and magnitude of postnatal granule cell production in guinea pigs are more similar to the human and non-human primate condition than to that of rats and mice. Translation of the current data to humans would imply that exposure of children to environments poor/rich in stimuli may have a notably large impact on dentate neurogenesis and, very likely, on hippocampus-dependent memory functions.

Relevance:

20.00%

Publisher:

Abstract:

The optimization of water distribution networks is a challenging problem due to the size and complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions because they are linked to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that simultaneously minimize all the objectives. This can be very useful in practical problems where multiple and discordant goals are common. Usually, one of the main drawbacks of these algorithms is the computation time: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve the results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a cellular automaton, and by using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even if orienting the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, with the help of CINECA, a version of NSGA-II has been implemented in C and parallelized: results on the global parallelization show the speed-up, while results on the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions of a large problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
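For readers unfamiliar with NSGA-II, the sketch below shows the Pareto-dominance check and the non-dominated sorting step that drive its ranking, applied to a toy two-objective pipe-sizing problem; the objective values (network cost and a hydraulic-deficit stand-in) are placeholders for what an external hydraulic simulator would actually compute, and the whole example is illustrative rather than the thesis's implementation.

```python
import numpy as np

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (all objectives minimised) and strictly better on at least one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_fronts(objectives):
    """Split a population into successive Pareto fronts (NSGA-II style ranking)."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Toy design problem: each individual is a vector of pipe diameters.
# Objective 1: network cost (grows with diameter).
# Objective 2: a stand-in for hydraulic deficit (shrinks with diameter) --
# in the real problem this value would come from a hydraulic simulator.
rng = np.random.default_rng(1)
diameters = rng.uniform(0.1, 1.0, size=(20, 5))
cost = diameters.sum(axis=1)
deficit = 1.0 / diameters.sum(axis=1) + 0.1 * rng.random(20)
objs = np.column_stack([cost, deficit])

for rank, front in enumerate(non_dominated_fronts(objs)):
    print(f"front {rank}: individuals {front}")
```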

Relevance:

20.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data. In fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset-tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of kernels in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. Convolution kernels measure the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset-tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
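As a concrete (if naive) instance of the convolution-kernel framework mentioned above, the sketch below counts the complete subtrees shared by two trees represented as nested tuples; real subtree and subset-tree kernels use dynamic programming over node pairs for efficiency, so this is only meant to illustrate the "similarity from shared substructures" idea, and the tree encoding is an assumption of the example.

```python
from collections import Counter

# A tree is a (label, (child, child, ...)) tuple; leaves have an empty child tuple.
def subtrees(tree):
    """Yield every complete subtree rooted at some node of the tree."""
    yield tree
    for child in tree[1]:
        yield from subtrees(child)

def subtree_kernel(t1, t2):
    """Convolution-style kernel: the number of pairs of identical complete
    subtrees, one drawn from each input tree."""
    c1, c2 = Counter(subtrees(t1)), Counter(subtrees(t2))
    return sum(c1[s] * c2[s] for s in c1 if s in c2)

# Two small parse-like trees sharing the subtree ('NP', (('D', ()), ('N', ()))).
t1 = ('S', (('NP', (('D', ()), ('N', ()))), ('VP', (('V', ()),))))
t2 = ('S', (('NP', (('D', ()), ('N', ()))),
            ('VP', (('V', ()), ('NP', (('D', ()), ('N', ())))))))
print(subtree_kernel(t1, t2))
```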

Relevance:

20.00%

Publisher:

Abstract:

Myc is a transcription factor that can activate the transcription of several hundred genes by direct binding to their promoters at specific DNA sequences (E-boxes). However, recent studies have also shown that it can exert its biological role by repressing transcription. Such studies collectively support a model in which c-Myc-mediated repression occurs through interactions with transcription factors bound to promoter DNA regions but not through direct recognition of typical E-box sequences. Here, we investigated whether N-Myc can also repress gene transcription, and how this is mechanistically achieved. We used human neuroblastoma cells as a model system, since N-MYC amplification/over-expression represents a key prognostic marker of this tumour. By means of transcription profile analyses we could identify at least 5 genes (TRKA, p75NTR, ABCC3, TG2, p21) that are specifically repressed by N-Myc. Through a dual-step ChIP assay and genetic dissection of gene promoters, we found that N-Myc is physically associated with gene promoters in vivo, in proximity to the transcription start site. N-Myc association with promoters requires interaction with other proteins, such as the Sp1 and Miz1 transcription factors. Furthermore, we found that N-Myc may repress gene expression by interfering directly with Sp1 and/or Miz1 activity (i.e. TRKA, p75NTR, ABCC3, p21) or by recruiting Histone Deacetylase 1 (Hdac1) (i.e. TG2). In vitro analyses show that distinct N-Myc domains can interact with Sp1, Miz1 and Hdac1, supporting the idea that Myc may participate in distinct repression complexes by interacting specifically with diverse proteins. Finally, our results show that N-Myc, through its repressed genes, affects important cellular functions, such as apoptosis, growth, differentiation and motility. Overall, our results support a model in which N-Myc, like c-Myc, can repress gene transcription by direct interaction with Sp1 and/or Miz1, and provide further lines of evidence on the importance of transcriptional repression by Myc factors in tumour biology.

Relevance:

20.00%

Publisher:

Abstract:

In the present dissertation, studies were carried out on the expression and function of the respiratory proteins neuroglobin (Ngb) and cytoglobin (Cygb) in vertebrates. Both globins were discovered only recently, and despite available data on their structure and biochemical properties, their functions have not yet been conclusively clarified. In the first part of this work, the cellular and subcellular localization of neuroglobin and cytoglobin was investigated in murine tissue sections. The expression of Ngb in neuronal and endocrine tissues is apparently related to the high metabolic activity of these organs. In the brain in particular, regional differences in Ngb expression were observed, with especially strong neuroglobin expression correlating with brain regions known to exhibit the highest basal activities. In view of this, the function of neuroglobin may lie in the basal O2 metabolism of these tissues, with Ngb acting as an O2 supplier and short-term O2 store to meet the comparatively high local oxygen demand. Additional functions in the detoxification of ROS or RNS, or the recently published possible role of Ngb in preventing mitochondria-mediated apoptosis by reducing released cytochrome c, are also conceivable. Cygb expression in the brain was restricted to relatively few neurons in various brain regions and there showed predominantly a co-localization with neuronal NO synthase. This finding suggests a function of cytoglobin in NO metabolism. Quantitative RT-PCR experiments on the mRNA expression of Ngb and Cygb in aging mammals, using the hamster species Phodopus sungorus as an example, showed no significant changes in the mRNA levels of either globin in old compared to young animals. This contradicts published data in which a decrease in neuroglobin levels with age was shown in the mouse by Western blot analyses. These may be species-specific differences. The comparative sequence analysis of the human and murine NGB/Ngb gene region carried out in this work provides, on the one hand, hints about the possible regulation of Ngb expression and, on the other, an important basis for the functional analyses of this gene. A minimal promoter region was defined, which, together with several conserved regulatory elements, will serve as the basis for experimental studies of promoter activity as a function of external influences. Bioinformatic analyses led to the identification of the so-called "neuron restrictive silencer element" (NRSE) in the Ngb promoter, which is presumably responsible for the predominantly neuronal expression of the protein. In contrast, the controversially discussed O2-dependent regulation of Ngb expression could not be confirmed by the comparative sequence analyses performed. No binding sites for the transcription factor HIF-1, which mediates the expression of numerous hypoxia-regulated genes such as Epo and VEGF, were identified that are conserved between human and mouse. Together with the in vivo data, this argues against a regulation of Ngb expression under reduced oxygen availability.
The complexity of the functions of Ngb and Cygb in vertebrate O2 metabolism makes the use of murine model systems indispensable, allowing a step-by-step elucidation of the functions of both proteins. The present work also makes an important contribution in this regard. The gene-targeting vector constructs produced, together with the established assays for genotyping embryonic stem cells, provide the basis for the successful generation of Ngb knock-out as well as Ngb- and Cygb-overexpressing transgenic animals. These will be of enormous importance for the final resolution of functionally relevant questions.

Relevance:

20.00%

Publisher:

Abstract:

The thesis applies ICC (Implicit Computational Complexity) techniques to probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time), providing a syntactical characterization of PP and a static complexity analyser able to recognize whether an imperative program computes in probabilistic polynomial time. The thesis is divided into two parts. The first part focuses on solving the problem by creating a prototype functional language (a probabilistic variation of the lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part, instead, reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time. This thesis can be regarded as one of the first steps for Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate and solve. There are many theoretical aspects strongly connected with these topics, and I expect that in the future ICC and probabilistic classes will receive wide attention.

Relevance:

20.00%

Publisher:

Abstract:

This Doctoral Thesis unfolds into a collection of three distinct papers that share an interest in institutional theory and technology transfer. Taking into account that organizations are increasingly exposed to a multiplicity of demands and pressures, we aim to analyze what renders this situation of institutional complexity more or less difficult to manage for organizations, and what makes organizations more or less successful in responding to it. The three studies offer a novel contribution both theoretically and empirically. In particular, the first paper “The dimensions of organizational fields for understanding institutional complexity: A theoretical framework” is a theoretical contribution that tries to better understand the relationship between institutional complexity and fields by providing a framework. The second article “Beyond institutional complexity: The case of different organizational successes in confronting multiple institutional logics” is an empirical study which aims to explore the strategies that allow organizations facing multiple logics to respond more successfully to them. The third work “How external support may mitigate the barriers to university-industry collaboration” is oriented towards practitioners and presents a case study about technology transfer in Italy.

Relevance:

20.00%

Publisher:

Abstract:

The Curry-Howard isomorphism is the idea that proofs in natural deduction can be put in correspondence with lambda terms in such a way that this correspondence is preserved by normalization. The concept can be extended from Intuitionistic Logic to other systems, such as Linear Logic. One of the nice consequences of this isomorphism is that we can reason about functional programs with formal tools which are typical of proof systems: such analysis can also include quantitative properties of programs, such as the number of steps a program takes to terminate. Another is the possibility of describing the execution of these programs in terms of abstract machines. In 1990 Griffin proved that the correspondence can be extended to Classical Logic and control operators. That is, Classical Logic adds the possibility of manipulating continuations. In this thesis we see how the notions described above work in this larger context.
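A minimal illustration of the correspondence, sketched here with Python type hints (an arbitrary choice for this listing, not the formal systems discussed in the thesis): implications become function types, proofs become programs, and modus ponens becomes function application.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proof of A -> A: the identity function.
def identity(x: A) -> A:
    return x

# Proof of (A -> B) -> ((B -> C) -> (A -> C)): compose the two hypotheses.
def compose(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    return lambda g: (lambda x: g(f(x)))

# Modus ponens is function application: from a proof of A -> B and a proof of A
# we obtain a proof of B.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

print(modus_ponens(str, 42))  # "42": int -> str applied to an int yields a str
```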

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTM), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among its initial functions, one that returns the identity or the successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
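A small illustrative sketch of the random base function the abstract mentions — a function returning the identity or the successor of its argument, each with probability 1/2 — and of how iterating it makes a "function" compute a distribution over outputs rather than a single value; the code is a toy illustration, not the thesis's formal algebra.

```python
import random
from collections import Counter

def rand_id_or_succ(n: int) -> int:
    """The random base function: return n (identity) or n + 1 (successor),
    each with probability 1/2."""
    return n + (random.random() < 0.5)

def iterate(f, n: int, times: int) -> int:
    """Primitive-recursion-style iteration of f, starting from n."""
    for _ in range(times):
        n = f(n)
    return n

# Iterating the random base function k times starting at 0 yields a
# Binomial(k, 1/2)-distributed output: the computed object is a distribution.
random.seed(0)
samples = Counter(iterate(rand_id_or_succ, 0, 10) for _ in range(10_000))
for value in sorted(samples):
    print(value, samples[value] / 10_000)
```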