977 results for lambda-carrageenan

Relevance:

10.00%

Publisher:

Abstract:

We have developed a compiler for the lexically-scoped dialect of LISP known as SCHEME. The compiler knows relatively little about specific data manipulation primitives such as arithmetic operators, but concentrates on general issues of environment and control. Rather than having specialized knowledge about a large variety of control and environment constructs, the compiler handles only a small basis set which reflects the semantics of lambda-calculus. All of the traditional imperative constructs, such as sequencing, assignment, looping, GOTO, as well as many standard LISP constructs such as AND, OR, and COND, are expressed in macros in terms of the applicative basis set. A small number of optimization techniques, coupled with the treatment of function calls as GOTO statements, serve to produce code as good as that produced by more traditional compilers. The macro approach enables speedy implementation of new constructs as desired without sacrificing efficiency in the generated code. A fair amount of analysis is devoted to determining whether environments may be stack-allocated or must be heap-allocated. Heap-allocated environments are necessary in general because SCHEME (unlike Algol 60 and Algol 68, for example) allows procedures with free lexically scoped variables to be returned as the values of other procedures; the Algol stack-allocation environment strategy does not suffice. The methods used here indicate that a heap-allocating generalization of the "display" technique leads to an efficient implementation of such "upward funargs". Moreover, compile-time optimization and analysis can eliminate many "funargs" entirely, and so far fewer environment structures need be allocated at run time than might be expected. A subset of SCHEME (rather than triples, for example) serves as the representation intermediate between the optimized SCHEME code and the final output code; code is expressed in this subset in the so-called continuation-passing style. 
As a subset of SCHEME, it enjoys the same theoretical properties; one could even apply the same optimizer used on the input code to the intermediate code. However, the subset is so chosen that all temporary quantities are made manifest as variables, and no control stack is needed to evaluate it. As a result, this apparently applicative representation admits an imperative interpretation which permits easy transcription to final imperative machine code. These qualities suggest that an applicative language like SCHEME is a better candidate for an UNCOL than the more imperative candidates proposed to date.
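The continuation-passing style described above can be illustrated with a small example. This is a sketch in Python rather than SCHEME (and Python, unlike the compiler described here, does not actually eliminate tail calls), with illustrative names only. Every intermediate quantity is made manifest as a continuation argument, and every call is a tail call, which is what lets a compiler treat calls as GOTO statements:

```python
# Factorial written in continuation-passing style (CPS).  No value is ever
# returned "up" a control stack: each step hands its result to an explicit
# continuation k, so every call is a tail call.

def fact_cps(n, k):
    if n == 0:
        return k(1)                              # base case: pass 1 to the continuation
    # The temporary "n * v" is made manifest inside a new continuation.
    return fact_cps(n - 1, lambda v: k(n * v))   # tail call

result = fact_cps(5, lambda x: x)                # identity continuation; 5! = 120
```

Because no control stack is needed to evaluate code in this form, its transcription to imperative machine code is direct, which is the role the CPS subset plays as the intermediate representation.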

Abstract:

The ML programming language restricts type polymorphism to occur only in the "let-in" construct and requires every occurrence of a formal parameter of a function (a lambda abstraction) to have the same type. Milner in 1978 refers to this restriction (which was adopted to help ML achieve automatic type inference) as a serious limitation. We show that this restriction can be relaxed enough to allow universal polymorphic abstraction without losing automatic type inference. This extension is equivalent to the rank-2 fragment of system F. We precisely characterize the additional program phrases (lambda terms) that can be typed with this extension and we describe typing anomalies both before and after the extension. We discuss how macros may be used to gain some of the power of rank-3 types without losing automatic type inference. We also discuss user-interface problems in how to inform the programmer of the possible types a program phrase may have.

Abstract:

We consider the problems of typability[1] and type checking[2] in the Girard/Reynolds second-order polymorphic typed λ-calculus, for which we use the short name "System F" and which we use in the "Curry style" where types are assigned to pure λ -terms. These problems have been considered and proven to be decidable or undecidable for various restrictions and extensions of System F and other related systems, and lower-bound complexity results for System F have been achieved, but they have remained "embarrassing open problems"[3] for System F itself. We first prove that type checking in System F is undecidable by a reduction from semi-unification. We then prove typability in System F is undecidable by a reduction from type checking. Since the reverse reduction is already known, this implies the two problems are equivalent. The second reduction uses a novel method of constructing λ-terms such that in all type derivations, specific bound variables must always be assigned a specific type. Using this technique, we can require that specific subterms must be typable using a specific, fixed type assignment in order for the entire term to be typable at all. Any desired type assignment may be simulated. We develop this method, which we call "constants for free", for both the λK and λI calculi.

Abstract:

We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k ≥ 3 of this stratification. While it was already known that typability is decidable at rank ≤ 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.

Abstract:

Two new notions of reduction for terms of the λ-calculus are introduced and the question of whether a λ-term is beta-strongly normalizing is reduced to the question of whether a λ-term is merely normalizing under one of the new notions of reduction. This leads to a new way to prove beta-strong normalization for typed λ-calculi. Instead of the usual semantic proof style based on Girard's "candidats de réductibilité", termination can be proved using a decreasing metric over a well-founded ordering in a style more common in the field of term rewriting. This new proof method is applied to the simply-typed λ-calculus and the system of intersection types.

Abstract:

This is an addendum to our technical report BUCS TR-94-014 of December 19, 1994. It clarifies some statements, adds information on some related research, includes a comparison with research by de Groote, and fixes two minor mistakes in a proof.

Abstract:

We define a unification problem ^UP with the property that, given a pure lambda-term M, we can derive an instance Gamma(M) of ^UP from M such that Gamma(M) has a solution if and only if M is beta-strongly normalizable. There is a type discipline for pure lambda-terms that characterizes beta-strong normalization; this is the system of intersection types (without a "top" type that can be assigned to every lambda-term). In this report, we use a lean version LAMBDA of the usual system of intersection types. Hence, ^UP is also an appropriate unification problem to characterize typability of lambda-terms in LAMBDA. It also follows that ^UP is an undecidable problem, which can in turn be related to semi-unification and second-order unification (both known to be undecidable).

Abstract:

We consider type systems that combine universal types, recursive types, and object types. We study type inference in these systems under a rank restriction, following Leivant's notion of rank. To motivate our work, we present several examples showing how our systems can be used to type programs encountered in practice. We show that type inference in the rank-k system is decidable for k ≤ 2 and undecidable for k ≥ 3. (Similar results based on different techniques are known to hold for System F, without recursive types and object types.) Our undecidability result is obtained by a reduction from a particular adaptation (which we call "regular") of the semi-unification problem, whose undecidability is, interestingly, obtained by methods totally different from those used in the case of standard (or finite) semi-unification.

Abstract:

The isomorphisms holding in all models of the simply typed lambda calculus with surjective pairing and terminal objects are well studied - these models are exactly the Cartesian closed categories. Isomorphism of two simple types in such a model is decidable by reduction to a normal form and comparison under a finite number of permutations (Bruce, Di Cosmo, and Longo 1992). Unfortunately, these normal forms may be exponentially larger than the original types, so this construction decides isomorphism in exponential time. We show how space-sharing/hash-consing techniques and memoization can be used to decide isomorphism in practical polynomial time (low degree, small hidden constant). Other researchers have investigated simple type isomorphism in relation to, among other potential applications, type-based retrieval of software modules from libraries and automatic generation of bridge code for multi-language systems. Our result makes such potential applications practically feasible.
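The normalize-and-compare procedure can be sketched concretely. The following Python sketch assumes the standard Cartesian-closed isomorphism axioms (commutativity and associativity of products, currying, distribution of arrows over products, and the unit laws); `functools.lru_cache` stands in for the hash-consing/memoization machinery, and the tuple representation of types is a hypothetical choice of this sketch, not the paper's:

```python
from functools import lru_cache

UNIT = ('unit',)                         # terminal object "1"

def base(name):  return ('base', name)
def prod(*ts):   return ('prod',) + ts
def arrow(d, c): return ('arrow', d, c)

@lru_cache(maxsize=None)                 # memoization: shared subtypes normalized once
def norm(t):
    tag = t[0]
    if tag in ('base', 'unit'):
        return t
    if tag == 'prod':
        parts = []
        for s in t[1:]:
            s = norm(s)
            if s == UNIT:                # 1 x A  ~  A
                continue
            if s[0] == 'prod':           # associativity: flatten nested products
                parts.extend(s[1:])
            else:
                parts.append(s)
        if not parts:
            return UNIT
        if len(parts) == 1:
            return parts[0]
        return ('prod',) + tuple(sorted(parts))   # commutativity via sorting
    # arrow case
    d, c = norm(t[1]), norm(t[2])
    if c == UNIT:                        # A -> 1  ~  1
        return UNIT
    if c[0] == 'prod':                   # A -> (B x C)  ~  (A -> B) x (A -> C)
        return norm(('prod',) + tuple(('arrow', d, ci) for ci in c[1:]))
    if d == UNIT:                        # 1 -> A  ~  A
        return c
    if d[0] == 'prod':                   # (A x B) -> C  ~  A -> (B -> C)
        cur = c
        for di in reversed(d[1:]):
            cur = ('arrow', di, cur)
        return norm(cur)
    return ('arrow', d, c)

def isomorphic(t1, t2):
    """Two simple types are isomorphic iff their normal forms coincide."""
    return norm(t1) == norm(t2)
```

Because normalized subterms are cached and shared rather than copied, repeated subtypes are processed only once, which is the idea behind the practical polynomial-time behavior claimed above.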

Abstract:

BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
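The fitted decay curve gives a simple planning formula. A minimal sketch in Python using the reported point estimate (the function name is illustrative):

```python
import math

# Survival (non-dropout) rate from the meta-analytic fit: S(t) = exp(-lambda * t),
# with lambda = 0.0088 per week and t in weeks.
LAMBDA = 0.0088   # point estimate (95% bootstrap CI: 0.0076 to 0.0100)

def expected_dropout(weeks, lam=LAMBDA):
    """Predicted cumulative dropout proportion after `weeks` weeks."""
    return 1.0 - math.exp(-lam * weeks)

one_year = expected_dropout(52)   # about 0.37, matching the reported 37% at 1 year
```

Plugging in the confidence-interval endpoints for `lam` gives a corresponding range of planning scenarios.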

Abstract:

Screening of a human placenta lambda gt11 library has led to the isolation of the cDNA for the human beta 1-adrenergic receptor (beta 1AR). Used as the probe was the human genomic clone termed G-21. This clone, which contains an intronless gene for a putative receptor, was previously isolated by virtue of its cross hybridization with the human beta 2-adrenergic receptor (beta 2AR). The 2.4-kilobase cDNA for the human beta 1AR encodes a protein of 477 amino acid residues that is 69% homologous with the avian beta AR but only 54% homologous with the human beta 2AR. This suggests that the avian gene encoding beta AR and the human gene encoding beta 1AR evolved from a common ancestral gene. RNA blot analysis indicates a message of 2.5 kilobases in rat tissues, with a pattern of tissue distribution consistent with beta 1AR binding. This pattern is quite distinct from the pattern obtained when the beta 2AR cDNA is used as a probe. Expression of receptor protein in Xenopus laevis oocytes conveys adenylate cyclase responsiveness to catecholamines with a typical beta 1AR specificity. This contrasts with the typical beta 2 subtype specificity observed when the human beta 2AR cDNA is expressed in this system. Mammalian beta 1AR and beta 2AR are thus products of distinct genes, both of which are apparently related to the putative G-21 receptor.

Abstract:

Understanding immune tolerance mechanisms is a major goal of immunology research, but mechanistic studies have generally required the use of mouse models carrying untargeted or targeted antigen receptor transgenes, which distort lymphocyte development and therefore preclude analysis of a truly normal immune system. Here we demonstrate an advance in in vivo analysis of immune tolerance that overcomes these shortcomings. We show that custom superantigens generated by single chain antibody technology permit the study of tolerance in a normal, polyclonal immune system. In the present study we generated a membrane-tethered anti-Igkappa-reactive single chain antibody chimeric gene and expressed it as a transgene in mice. B cell tolerance was directly characterized in the transgenic mice and in radiation bone marrow chimeras in which ligand-bearing mice served as recipients of nontransgenic cells. We find that the ubiquitously expressed, Igkappa-reactive ligand induces efficient B cell tolerance primarily or exclusively by receptor editing. We also demonstrate the unique advantages of our model in the genetic and cellular analysis of immune tolerance.

Abstract:

The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with population genetics.

Abstract:

The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. In particular, a linear least squares (LS) lambda formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that time constants do not change during operation and are, therefore, appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
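Why a known time-constant ratio helps can be sketched with the usual first-order sensor model, τ·dy/dt + y = T(t). The Python sketch below is a simplified illustration under that assumption, not the paper's lambda- or β-formulation: with α = τ2/τ1 known, the two records satisfy y2 − y1 = τ1·(dy1/dt − α·dy2/dt), so characterization collapses to a one-parameter least-squares fit (the simulated step input and all names are assumptions of this sketch):

```python
def simulate(tau, T, dt, n):
    """Euler simulation of a first-order sensor y' = (T(t) - y) / tau."""
    y, out = 0.0, []
    for k in range(n):
        out.append(y)
        y += dt * (T(k * dt) - y) / tau
    return out

def estimate_tau1(y1, y2, alpha, dt):
    """Closed-form LS estimate of tau1 given alpha = tau2 / tau1.

    Uses y2 - y1 = tau1 * (y1' - alpha * y2'), with derivatives from
    central differences.
    """
    num = den = 0.0
    for k in range(1, len(y1) - 1):
        d1 = (y1[k + 1] - y1[k - 1]) / (2 * dt)
        d2 = (y2[k + 1] - y2[k - 1]) / (2 * dt)
        r = y2[k] - y1[k]
        s = d1 - alpha * d2
        num += r * s
        den += s * s
    return num / den

tau1, tau2, dt, n = 0.1, 0.2, 0.001, 1000
T = lambda t: 1.0                    # step change in flow temperature
y1 = simulate(tau1, T, dt, n)
y2 = simulate(tau2, T, dt, n)
tau1_hat = estimate_tau1(y1, y2, tau2 / tau1, dt)   # close to the true 0.1
```

The paper's actual difference-equation estimators additionally handle measurement noise without bias; this sketch only shows how knowledge of the ratio reduces the problem to estimating a single parameter.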

Abstract:

The enantiomerically pure ligands LRR and LSS (N,N'-bis(2,2'-bipyridyl-5-yl)carbonyl-(1S/R,2S/R)-(+/-)-1,2-diaminocyclohexane) have been synthesised by linking two 2,2'-bipyridine units by (R,R)- and (S,S)-1,2-diaminocyclohexane respectively. The crystal structure confirmed that the ligand had a twisted orientation between the two chelating units. The reaction of LRR and LSS with Fe(II), Co(III), Cd(II) and Zn(II) afforded dinuclear complexes confirmed by ES mass spectrometry. CD spectroscopy indicated that the chiral diaminocyclohexane conferred helicity to the metal centre giving a dominant triple helicate diastereoisomer, with the LRR ligand giving a delta configuration at each metal centre (P helicate) and the LSS ligand a lambda configuration (M helicate). 1H NMR spectroscopy confirmed a dominant major diastereoisomer with cadmium. The Zn(II) and Cd(II) complexes, however, were observed to undergo rapid ligand dissociation in solution.