988 results for Inference Technique
Abstract:
Principality of typings is the property that for each typable term there is a typing from which all other typings can be obtained by some set of operations. Type inference is the problem of finding a typing for a given term, if one exists. We define an intersection type system which has principal typings and types exactly the strongly normalizable λ-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System F, where the restriction to any finite rank 3 or above has neither principal typings nor decidable type inference. It is also in contrast to earlier presentations of intersection types, where the status of these properties is not known for the finite-rank restrictions at 3 and above. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (of unrestricted rank). A unification-based type inference algorithm is presented using a new form of unification, β-unification.
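As background for the unification-based inference mentioned above, here is a sketch of classical first-order (Robinson) unification over simple types, the baseline that β-unification generalizes. This is only an illustration of ordinary unification, not the paper's β-unification; the representation (strings as type variables, `('->', a, b)` as arrow types) is our own.

```python
def resolve(t, s):
    """Follow substitution bindings until t is not a bound variable."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside type t under s?"""
    t = resolve(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, arg, s) for arg in t[1:])

def unify(t1, t2, s=None):
    """Robinson unification for simple types: strings are type variables,
    ('->', a, b) is the arrow type a -> b.  Returns a most general
    substitution (as a dict) or None on failure."""
    s = dict(s or {})
    t1, t2 = resolve(t1, s), resolve(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        if occurs(t1, t2, s):
            return None           # infinite type, reject
        s[t1] = t2
        return s
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] == t2[0] == '->':
        s = unify(t1[1], t2[1], s)
        return None if s is None else unify(t1[2], t2[2], s)
    return None
```

The occurs check is what makes `unify('a', ('->', 'a', 'a'))` fail rather than build an infinite type; intersection types complicate this picture, which is what motivates the paper's new form of unification.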
Abstract:
We consider type systems that combine universal types, recursive types, and object types. We study type inference in these systems under a rank restriction, following Leivant's notion of rank. To motivate our work, we present several examples showing how our systems can be used to type programs encountered in practice. We show that type inference in the rank-k system is decidable for k ≤ 2 and undecidable for k ≥ 3. (Similar results based on different techniques are known to hold for System F, without recursive types and object types.) Our undecidability result is obtained by a reduction from a particular adaptation (which we call "regular") of the semi-unification problem and whose undecidability is, interestingly, obtained by methods totally different from those used in the case of standard (or finite) semi-unification.
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
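The tree-based inference can be sketched with a minimal two-receiver example: if a shared link with pass probability pa feeds two independent leaf links (pb, pc), then P(X1)·P(X2)/P(X1∧X2) = (pa·pb)(pa·pc)/(pa·pb·pc) = pa, so the shared link's quality is identifiable from end-to-end observations alone. The toy simulation below illustrates that identity; it is our own generic sketch of the correlation idea, not the paper's probing technique, and all names are illustrative.

```python
import random

def simulate_probes(pa, pb, pc, n, seed=0):
    """Simulate n probes through a two-receiver logical tree: a shared
    link (pass probability pa) followed by per-receiver links (pb, pc),
    with independent Bernoulli losses on each link."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        shared = rng.random() < pa
        x1 = shared and (rng.random() < pb)   # receiver 1 got the probe
        x2 = shared and (rng.random() < pc)   # receiver 2 got the probe
        outcomes.append((x1, x2))
    return outcomes

def estimate_shared_pass_rate(outcomes):
    """Correlation-based estimator: P(X1)P(X2)/P(X1 and X2) converges
    to pa, the shared-link pass probability."""
    n = len(outcomes)
    p1 = sum(x1 for x1, _ in outcomes) / n
    p2 = sum(x2 for _, x2 in outcomes) / n
    p12 = sum(x1 and x2 for x1, x2 in outcomes) / n
    return p1 * p2 / p12
```

Repeating this estimator across pairs of connections is the kind of building block from which a full loss topology can be assembled.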
Abstract:
Existing type systems for object calculi are based on invariant subtyping. Subtyping invariance is required for soundness of static typing in the presence of method overrides, but it often limits the expressive power of the type system. Flexibility of static typing can be recovered in different ways: in first-order systems, by the adoption of object types with variance annotations; in second-order systems, by resorting to Self types. Type inference is known to be P-complete for first-order systems of finite and recursive object types, and NP-complete for a restricted version of Self types. The complexity of type inference for systems with variance annotations is yet unknown. This paper presents a new object type system based on the notion of Split types, a form of object types where every method is assigned two types, namely, an update type and a select type. The subtyping relation that arises for Split types is variant and, as a result, subtyping can be performed both in width and in depth. The new type system generalizes all the existing first-order type systems for objects, including systems based on variance annotations. Interestingly, the additional expressive power does not affect the complexity of the type inference problem, as we show by presenting an O(n^3) inference algorithm.
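The width-and-depth subtyping enabled by split types can be illustrated with a toy checker: each method carries an update type and a select type, select types are checked covariantly, update types contravariantly, and the subtype may have extra methods. This is our own simplified rendering of the idea over a two-element base lattice, not the paper's actual system or algorithm.

```python
# Toy base-type lattice for illustration only: int <: real.
BASE_SUB = {("int", "real")}

def sub(a, b):
    """a <: b for base types in the toy lattice (reflexive)."""
    return a == b or (a, b) in BASE_SUB

def split_subtype(a, b):
    """Width-and-depth subtyping sketch for split object types.
    An object type is a dict: method name -> (update_type, select_type).
    a <: b iff every method of b appears in a (width), with the select
    component covariant and the update component contravariant (depth)."""
    for m, (upd_b, sel_b) in b.items():
        if m not in a:
            return False              # width: b's methods must all appear in a
        upd_a, sel_a = a[m]
        if not sub(sel_a, sel_b):     # depth: select is covariant
            return False
        if not sub(upd_b, upd_a):     # depth: update is contravariant
            return False
    return True
```

For example, an object with methods `x` and `move` is usable where only a read-widened `x` is expected, which invariant subtyping would reject.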
Abstract:
In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is insolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
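The setting can be made concrete with a small simulation: generate an IRM request stream with GPL (Zipf-like) demand, drive LRU and (perfect) LFU caches with it, and record their miss streams. The contrast between the two miss streams is the kind of signal a "litmus" test exploits. This is a generic sketch under the IRM/GPL assumptions, not the paper's actual tests; all parameter values are illustrative.

```python
import random
from collections import Counter, OrderedDict

def gpl_stream(n_objects, alpha, length, seed=0):
    """IRM request stream with generalized power-law demand:
    object of rank r is requested with probability proportional to r^-alpha."""
    rng = random.Random(seed)
    weights = [1.0 / (r ** alpha) for r in range(1, n_objects + 1)]
    return rng.choices(range(n_objects), weights=weights, k=length)

def lru_misses(stream, capacity):
    """Miss stream of an LRU cache of the given capacity."""
    cache, misses = OrderedDict(), []
    for obj in stream:
        if obj in cache:
            cache.move_to_end(obj)         # refresh recency
        else:
            misses.append(obj)
            cache[obj] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return misses

def lfu_misses(stream, capacity):
    """Miss stream of a perfect-LFU cache (frequency counted over all requests)."""
    cache, freq, misses = set(), Counter(), []
    for obj in stream:
        freq[obj] += 1
        if obj not in cache:
            misses.append(obj)
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda o: freq[o]))  # evict least frequent
            cache.add(obj)
    return misses
```

Under IRM with skewed demand, perfect LFU settles on the hottest objects and misses less than LRU; departures from that pattern in an observed miss stream are informative about the hidden replacement policy.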
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes) and this figure is expected to have grown by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. This raises the question: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner?
The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques on the same raw data without the danger of incorporating hidden bias that may exist. To illustrate and to realise this approach a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
Abstract:
Background: The use of mechanical and enzymatic techniques to isolate preantral follicles before in-vitro culture has been previously described. The aim of this study was to assess the effect of the isolation procedure of mouse preantral follicles on their subsequent development in vitro. Methods: Follicles were isolated either mechanically or enzymatically and cultured using an individual non-spherical culture system. Follicular development and steroidogenesis, oocyte in-vitro maturation and embryo development were assessed for both groups. Results: After 12 days of culture, follicles isolated mechanically had a higher survival rate but a lower antral-like cavity formation rate than follicles isolated enzymatically. Enzymatic follicle isolation was associated with a higher production of testosterone and estradiol compared with mechanical isolation. A stronger alkaline phosphatase reaction was observed after enzymatic isolation, suggesting that follicles isolated enzymatically had more theca cells than those isolated mechanically. However, both isolation techniques resulted in similar oocyte maturation and embryo development rates. Conclusions: Enzymatic follicular isolation did not impair theca cell development. Follicular steroidogenesis was enhanced after enzymatic isolation but the developmental capacity of oocytes was comparable to that obtained after mechanical isolation.
Abstract:
The Lucumí religion (also Santería and Regla de Ocha) developed in 19th-century colonial Cuba, by syncretizing elements of Catholicism with the Yoruba worship of orisha. When fully initiated, santeros (priests) actively participate in religious ceremonies by periodically being possessed or "mounted" by a patron saint or orisha, usually within the context of a drumming ritual, known as a toque de santo, bembé, or tambor. Within these rituals, there is a clearly defined goal of trance possession, though its manifestation is not the sole measure of success or failure. Rather than focusing on the fleeting, exciting moments that immediately precede the arrival of an orisha in the form of a possession trance, this thesis investigates the entire four- to six-hour musical performance that is central to the ceremony. It examines the brief pauses, the moments of reduced intensity, the slow but deliberate build-ups of energy and excitement, and even the periods when novices are invited to perform the sacred batá drums, and places these moments on an equal footing with the more dynamic periods where possession is imminent or in progress. This document approaches Lucumí ritual from the viewpoint of batá drummers, ritual specialists who, during the course of a toque de santo, exercise wide latitude in determining the shape of the event. Known as omo Añá (children of the orisha Añá, who is manifest in drums and rhythms), batá drummers comprise a fraternity that is accessible only through ritual initiation. Though they are sensitive to the desires of the many participants during a toque de santo, and indeed make their living by satisfying the expectations of their hosts, many of the drummers' activities are inwardly focused on the cultivation and preservation of this fraternity.
These activities, which occasionally interfere with spirit possession and with other expectations of the participants, include teaching and learning, developing a group identity or signature sound, and achieving a state of intimacy among the musicians known as "communitas."
Abstract:
We propose an estimation-theoretic approach to the inference of an incoherent 3D scattering density from 2D scattered speckle field measurements. The object density is derived from the covariance of the speckle field. The inference is performed by a constrained optimization technique inspired by compressive sensing theory. Experimental results demonstrate and verify the performance of our estimates.
Abstract:
A significant challenge in environmental toxicology is that many genetic and genomic tools available in laboratory models are not developed for commonly used environmental models. The Atlantic killifish (Fundulus heteroclitus) is one of the most studied teleost environmental models, yet few genetic or genomic tools have been developed for use in this species. The advancement of genetic and evolutionary toxicology will require that many of the tools developed in laboratory models be transferred into species more applicable to environmental toxicology. Antisense morpholino oligonucleotide (MO) gene knockdown technology has been widely utilized to study development in zebrafish and has been proven to be a powerful tool in toxicological investigations through direct manipulation of molecular pathways. To expand the utility of killifish as an environmental model, MO gene knockdown technology was adapted for use in Fundulus. Morpholino microinjection methods were altered to overcome the significant differences between these two species. Morpholino efficacy and functional duration were evaluated with molecular and phenotypic methods. A cytochrome P450-1A (CYP1A) MO was used to confirm effectiveness of the methodology. For CYP1A MO-injected embryos, a 70% reduction in CYP1A activity, an 86% reduction in total CYP1A protein, a significant increase in beta-naphthoflavone-induced teratogenicity, and estimates of functional duration (a 50% reduction in activity at 10 dpf and an 86% reduction in total protein at 12 dpf) conclusively demonstrated that MO technologies can be used effectively in killifish and will likely be just as informative as they have been in zebrafish.
Abstract:
Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). Negative relationships between O2 and δ15N-NO3 across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32% of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N-NO3 based on denitrification progression suggest that isotopically enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs.
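The back-calculation of source δ15N-NO3 from denitrification progression can be sketched with the standard Rayleigh approximation, δ_measured ≈ δ_source + ε·ln(f), where f is the fraction of nitrate remaining and ε is the (negative) enrichment factor. The ε value below is purely illustrative, not the coefficient fitted in the study.

```python
import math

def source_delta15n(measured_delta, frac_remaining, eps=-15.0):
    """Back-calculate source d15N-NO3 (per mil) from a measured value
    via the Rayleigh approximation d_meas ~ d_src + eps * ln(f).
    eps: enrichment factor (negative for denitrification, per mil);
    eps=-15.0 is an assumed illustrative value.
    frac_remaining: fraction f of the original nitrate pool left."""
    return measured_delta - eps * math.log(frac_remaining)
```

For example, a spring measuring +10 per mil with half its nitrate denitrified (f = 0.5) back-calculates to roughly -0.4 per mil, showing how much apparent enrichment denitrification alone can produce, which is the logic behind attributing enriched signatures to in-aquifer denitrification rather than to source.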
Abstract:
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
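The multilevel, multiplicity-corrected flavor of such Bayesian model search can be illustrated with a toy enumeration. The sketch below assumes, purely for illustration, that each model's Bayes factor against the null factorizes into per-SNP Bayes factors, and places a beta-binomial(1,1) prior on model size, whose penalty on larger model spaces acts as an intrinsic multiplicity correction. This is not MISA's actual search or its priors.

```python
from itertools import combinations
from math import comb

def inclusion_probs(bayes_factors):
    """Toy model-averaged SNP inclusion probabilities by exhaustive
    enumeration of the 2^p model space.  Assumes (illustration only)
    that a model's Bayes factor is the product of per-SNP Bayes factors.
    Prior on a model of size k: 1 / ((p+1) * C(p, k)), i.e. a
    beta-binomial(1,1) prior on model size."""
    p = len(bayes_factors)
    weights, total = {}, 0.0
    for k in range(p + 1):
        prior = 1.0 / ((p + 1) * comb(p, k))
        for model in combinations(range(p), k):
            bf = 1.0
            for j in model:
                bf *= bayes_factors[j]
            weights[model] = prior * bf
            total += prior * bf
    # posterior inclusion probability of each SNP: mass of models containing it
    return [sum(w for m, w in weights.items() if j in m) / total
            for j in range(p)]
```

Exhaustive enumeration is only feasible for tiny p; the point of a method like MISA is to search this space efficiently while still reporting calibrated SNP-, gene- and global-level probabilities.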
Abstract:
BACKGROUND: Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. RESULTS: Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. CONCLUSIONS: Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
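As a toy illustration of the sparseness-promoting priors named above, the sketch below draws one binary factor-assignment matrix from the standard Indian Buffet Process prior (customers = samples, dishes = latent factors). It shows only how the IBP lets the number of active factors be inferred rather than fixed; it is not the paper's full beta-process factor model, and all parameter values are illustrative.

```python
import math
import random

def sample_ibp(alpha, n_customers, seed=0):
    """Draw one binary matrix from the Indian Buffet Process prior:
    customer i takes each existing dish k with probability m_k / i
    (m_k = number of previous takers), then samples Poisson(alpha / i)
    brand-new dishes.  The number of columns (factors) is random."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's inversion method; adequate for small lam
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    counts, rows = [], []
    for i in range(1, n_customers + 1):
        row = [int(rng.random() < m / i) for m in counts]  # revisit old dishes
        for k, taken in enumerate(row):
            counts[k] += taken
        new_dishes = poisson(alpha / i)                    # open new dishes
        row.extend([1] * new_dishes)
        counts.extend([1] * new_dishes)
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]      # pad earlier rows
```

The expected number of dishes grows only logarithmically with the number of customers (roughly alpha times the harmonic number), which is what makes the prior a principled way to infer "a proper number of factors" from data.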
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.