628 results for REDUNDANT
Abstract:
With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks appear with high frequency over short periods when they strike a system; they differ from normal traffic data and can easily be separated from normal activities. By contrast, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. We therefore focus on the ambiguity problem between normal activities and U2R/R2L attacks, with the goal of building a detection system that detects these two attacks accurately and quickly. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to improve detection speed. Features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant; such features are removed so that only the indispensable information of the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to resolve the ambiguity problem.
The latter applies a data mining technique to automatically extract computer users' normal behavior from training network traffic data. The final decision combines the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
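The two redundancy criteria described above (poor correlation with the class label; strong inter-correlation with an already-retained feature) can be sketched as follows. This is an illustrative Python sketch, not the dissertation's algorithm; the threshold values `min_label_corr` and `max_inter_corr` are assumed placeholders.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(columns, labels, min_label_corr=0.1, max_inter_corr=0.9):
    """Keep features that predict the label and drop any feature strongly
    correlated with a feature already kept (thresholds are illustrative)."""
    kept = []
    for name, col in columns.items():
        if abs(pearson(col, labels)) < min_label_corr:
            continue  # poor predictor of attack signatures
        if any(abs(pearson(col, columns[k])) > max_inter_corr for k in kept):
            continue  # redundant with a retained feature
        kept.append(name)
    return kept
```

Here `f2` below duplicates `f1` (up to scale), so it is dropped as inter-correlated, while a constant column carries no predictive signal and is dropped as a poor predictor.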
Abstract:
One of the overarching questions in the field of infant perceptual and cognitive development concerns how selective attention is organized during early development to facilitate learning. The following study examined how infants' selective attention to properties of social events (i.e., prosody of speech and facial identity) changes in real time as a function of intersensory redundancy (redundant audiovisual vs. nonredundant unimodal visual presentation) and exploratory time. Intersensory redundancy refers to the spatially coordinated and temporally synchronous occurrence of information across multiple senses. Real-time macro- and micro-structural change in infants' scanning patterns of dynamic faces was also examined. According to the Intersensory Redundancy Hypothesis, information presented redundantly and in temporal synchrony across two or more senses recruits infants' selective attention and facilitates perceptual learning of highly salient amodal properties (properties that can be perceived across several sensory modalities, such as the prosody of speech) at the expense of less salient modality-specific properties. Conversely, information presented to only one sense facilitates infants' learning of modality-specific properties (properties that are specific to a particular sensory modality, such as facial features) at the expense of amodal properties (Bahrick & Lickliter, 2000, 2002). Infants' selective attention to and discrimination of prosody of speech and facial configuration were assessed in a modified visual paired-comparison paradigm. In redundant audiovisual stimulation, it was predicted that infants would show discrimination of prosody of speech in the early phases of exploration and facial configuration in the later phases of exploration. Conversely, in nonredundant unimodal visual stimulation, it was predicted that infants would show discrimination of facial identity in the early phases of exploration and prosody of speech in the later phases of exploration.
Results provided support for the first prediction and indicated that following redundant audiovisual exposure, infants showed discrimination of prosody of speech earlier in processing time than discrimination of facial identity. Data from the nonredundant unimodal visual condition provided partial support for the second prediction and indicated that infants showed discrimination of facial identity, but not prosody of speech. The dissertation study contributes to the understanding of the nature of infants' selective attention and processing of social events across exploratory time.
Abstract:
The Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002, 2012) predicts that early in development information presented to a single sense modality will selectively recruit attention to modality-specific properties of stimulation and facilitate learning of those properties at the expense of amodal properties (unimodal facilitation). Vaillant (2010) demonstrated that bobwhite quail chicks prenatally exposed to a maternal call alone (unimodal stimulation) are able to detect a pitch change, a modality-specific property, in subsequent postnatal testing between the familiarized call and the same call with altered pitch. In contrast, chicks prenatally exposed to a maternal call paired with a temporally synchronous light (redundant audiovisual stimulation) were unable to detect a pitch change. According to the IRH (Bahrick & Lickliter, 2012), as development proceeds and the individual's perceptual abilities increase, the individual should detect modality-specific properties in both nonredundant, unimodal and redundant, bimodal conditions. However, when the perceiver is presented with a difficult task, relative to their level of expertise, unimodal facilitation should become evident. The first experiment of the present study exposed bobwhite quail chicks 24 hr after hatching to unimodal auditory, nonredundant audiovisual, or redundant audiovisual presentations of a maternal call for 10 min/hr for 24 hr. All chicks were subsequently tested 24 hr after the completion of the stimulation (72 hr following hatching) between the familiarized maternal call and the same call with altered pitch. Chicks from all experimental groups (unimodal, nonredundant audiovisual, and redundant audiovisual exposure) significantly preferred the familiarized call over the pitch-modified call.
The second experiment exposed chicks to the same exposure conditions, but created a more difficult task by narrowing the pitch range between the two maternal calls with which they were tested. Chicks in the unimodal and nonredundant audiovisual conditions demonstrated detection of the pitch change, whereas the redundant audiovisual exposure group did not show detection of the pitch change, providing evidence of unimodal facilitation. These results are consistent with predictions of the IRH and provide further support for the effects of unimodal facilitation and the role of task difficulty across early development.
Abstract:
The first part of this paper deals with an extension of Dirac's Theorem to directed graphs. It is related to a result often referred to as the Ghouila-Houri Theorem. Here we show that the requirement of strong connectivity in the hypothesis of the Ghouila-Houri Theorem is redundant. The second part of the paper shows that a condition on the number of edges for a graph to be Hamiltonian implies Ore's condition on the degrees of the vertices.
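For reference, the Ghouila-Houri Theorem is commonly stated as follows; this is a standard textbook formulation, not quoted from the paper itself.

```latex
% Ghouila--Houri (1960), as commonly stated:
\begin{theorem}[Ghouila--Houri]
Let $D$ be a strongly connected digraph on $n$ vertices. If
$d^{+}(v) + d^{-}(v) \ge n$ for every vertex $v$ of $D$,
then $D$ contains a Hamiltonian cycle.
\end{theorem}
% The paper's claim: strong connectivity already follows from
% the degree condition, so that hypothesis may be omitted.
```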
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel. This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels. We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone.
These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
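The cascade construction and min-entropy leakage under a uniform prior can be sketched in a few lines; the channel matrices below are assumed toy examples, used only to illustrate that a cascade leaks no more than its first channel.

```python
import math

def cascade(c1, c2):
    """Cascade two channels: the output of c1 feeds the input of c2.
    Channels are row-stochastic matrices (lists of rows); the cascade
    is the matrix product c1 * c2."""
    return [[sum(c1[x][z] * c2[z][y] for z in range(len(c2)))
             for y in range(len(c2[0]))]
            for x in range(len(c1))]

def min_entropy_leakage(ch):
    """Min-entropy leakage (in bits) under a uniform prior:
    log2 of the sum over outputs of the column maxima."""
    return math.log2(sum(max(col) for col in zip(*ch)))
```

With an identity first channel (leaking everything, 1 bit for two inputs) cascaded into a noisy channel, the composed leakage drops to log2(1.5) ≈ 0.585 bits, consistent with the cascade bound.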
Abstract:
Chapter 1: Patents and Entry Competition in the Pharmaceutical Industry: The Role of Marketing Exclusivity. Effective patent length for innovative drugs is severely curtailed by the extensive efficacy and safety tests required for FDA approval, raising concern over the adequacy of incentives for new drug development. The Hatch-Waxman Act extends patent length for new drugs by five years, but also promotes generic entry by simplifying approval procedures and granting 180-day marketing exclusivity to the first generic entrant before the patent expires. In this paper we present a dynamic model to examine the effect of marketing exclusivity. We find that marketing exclusivity may be redundant and that its removal may increase generic firms' profits and social welfare. Chapter 2: Why Authorized Generics? Theoretical and Empirical Investigations. Facing generic competition, brand-name companies sometimes launch generic versions themselves, called authorized generics (AGs). This practice is puzzling: if it is cannibalization, it cannot be profitable; if it is divisionalization, it should always be practiced rather than only sometimes. I explain this phenomenon in terms of switching costs, in a model in which the incumbent first develops a customer base to ready itself against later generic competition. I show that only sufficiently low switching costs or a sufficiently large market justifies the launch of AGs. I then use prescription drug data to test these results and find support. Chapter 3: The Merger Paradox and R&D. Oligopoly theory says that merger is unprofitable unless a majority of the firms in an industry merge. Here, we introduce R&D opportunities to resolve this so-called merger paradox. We have three results. First, when there is one R&D firm, that firm can profitably merge with any number of non-R&D firms. Second, with multiple R&D firms and multiple non-R&D firms, all R&D firms can profitably merge.
Third, with two R&D firms and two non-R&D firms, each R&D firm prefers to merge with a non-R&D firm. With three or more non-R&D firms, however, the R&D firms prefer to merge with each other.
Abstract:
The large amount of data generated by automation and process supervision in industry creates two problems: a heavy demand for disk storage and difficulty in streaming the data over a telecommunications link. Lossy data compression algorithms emerged in the 1990s to solve these problems and, as a consequence, industries began using them in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which makes configuration impracticable in systems that monitor thousands of variables. In this context, this paper proposes the Adaptive Swinging Door Trending algorithm, an adaptation of the Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing signal trends in real time. A comparative performance analysis of lossy data compression algorithms applied to time-series process variables and dynamometer cards is also presented; the algorithms used for comparison were piecewise linear approximations and transform-based methods.
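The classic (non-adaptive) Swinging Door Trending algorithm that the paper adapts can be sketched roughly as follows. This is a minimal illustration with a fixed `deviation` parameter; the adaptive variant proposed in the paper adjusts that parameter dynamically and is not reproduced here.

```python
def swinging_door(samples, deviation):
    """Swinging Door Trending compression (a minimal sketch).
    samples: list of (t, v) pairs with strictly increasing t;
    deviation: maximum allowed reconstruction error E.
    Returns the retained (t, v) pairs."""
    if len(samples) < 3:
        return list(samples)
    kept = [samples[0]]
    t0, v0 = samples[0]
    slope_max = float("inf")   # upper "door" slope
    slope_min = float("-inf")  # lower "door" slope
    prev = samples[1]
    for t, v in samples[1:]:
        # Narrow the doors toward the current sample +/- E.
        slope_max = min(slope_max, (v + deviation - v0) / (t - t0))
        slope_min = max(slope_min, (v - deviation - v0) / (t - t0))
        if slope_min > slope_max:
            # Doors closed: archive the previous point and restart from it.
            kept.append(prev)
            t0, v0 = prev
            slope_max = (v + deviation - v0) / (t - t0)
            slope_min = (v - deviation - v0) / (t - t0)
        prev = (t, v)
    kept.append(samples[-1])
    return kept
```

A noiseless ramp compresses to its two endpoints, while a step change forces the doors shut and archives the points around the step.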
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on the estimates of the first two moments of the data and the working correlation structure. The confidence regions and hypothesis tests are based on the asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure. Because of such misspecifications, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for the estimation of the regression parameters and the construction of a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates. In this situation it is necessary to identify a submodel that adequately represents the data. Including redundant variables may impact the model's accuracy and efficiency for inference. We propose a penalized empirical likelihood (PEL) variable selection based on GEEs; the variable selection and the estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing PEL.
Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
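The empirical likelihood building block can be illustrated in its simplest form, for the mean of a univariate sample; this is a hedged sketch of the standard EL ratio computation (Lagrange multiplier found by Newton's method), not the paper's GEE-based or penalized procedure.

```python
import math

def el_log_ratio(xs, mu, iters=50):
    """-2 log empirical likelihood ratio for testing mean == mu.
    Solves sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0 for the
    Lagrange multiplier lam by Newton's method (a minimal sketch;
    assumes mu lies inside the range of the data)."""
    d = [x - mu for x in xs]
    lam = 0.0
    for _ in range(iters):
        g = sum(di / (1 + lam * di) for di in d)
        gp = -sum(di * di / (1 + lam * di) ** 2 for di in d)
        step = g / gp
        lam -= step
        if abs(step) < 1e-12:
            break
    return 2 * sum(math.log(1 + lam * di) for di in d)
```

At the sample mean the ratio is zero (lam = 0), and it grows as the hypothesized mean moves away, which is what drives the EL-based confidence regions.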
Abstract:
This paper deals with monolithic decoupled XYZ compliant parallel mechanisms (CPMs) for multi-function applications, which can be fabricated monolithically without assembly and have the capability of kinetostatic decoupling. First, the conceptual design of monolithic decoupled XYZ CPMs is presented using identical spatial compliant multi-beam modules based on a decoupled 3-PPPR parallel kinematic mechanism. Three types of applications are described in principle: motion/positioning stages, force/acceleration sensors, and energy harvesting devices. Kinetostatic and dynamic modelling is then conducted to capture the displacement of any stage under loads acting at any stage, as well as the natural frequency, with comparisons to FEA results. Finally, the performance characteristics for motion stage applications are investigated in detail to show how changes in the geometrical parameters affect performance, which provides initial optimal estimates. Results show that a smaller beam thickness and a larger cubic stage dimension improve the performance characteristics, excluding natural frequency, under allowable conditions. To improve the natural frequency characteristic, a stiffness-enhanced monolithic decoupled configuration can be adopted, achieved by employing more beams in the spatial modules or by reducing the mass of each cubic stage. In addition, an isotropic variation with a different motion range along each axis and the same payload in each leg is proposed. A redundant design for monolithic fabrication is introduced, which overcomes the drawback of monolithic fabrication that a failed compliant beam is difficult to replace, and extends the CPM's life.
Abstract:
Natural IgM (nIgM) is constitutively present in the serum, where it aids in the early control of viral and bacterial expansions. nIgM also plays a significant role in the prevention of autoimmune disease by promoting the clearance of cellular debris. However, the cells that maintain high titers of nIgM in the circulation had not yet been identified. Several studies have linked serum nIgM with the presence of fetal-lineage B cells, and others have detected IgM secretion directly by B1a cells in various tissues. Nevertheless, a substantial contribution of undifferentiated B1 cells to nIgM titers is doubtful, as the ability to produce large quantities of antibody (Ab) is a function of the phenotype and morphology of differentiated plasma cells (PCs). No direct evidence exists to support the claim that a B1-cell population directly produces the bulk of circulating nIgM. The source of nIgM thus remained uncertain and unstudied.
In the first part of this study, I identified the primary source of nIgM. Using enzyme-linked immunosorbent spot (ELISPOT) assay, I determined that the majority of IgM Ab-secreting cells (ASCs) in naïve mice reside in the bone marrow (BM). Flow cytometric analysis of BM cells stained for intracellular IgM revealed that nIgM ASCs express IgM and the PC marker CD138 on their surface, but not the B1a cell marker CD5. By spinning these cells onto slides and staining them, following isolation by fluorescence-activated cell sorting (FACS), I found that they exhibit the typical morphological characteristics of terminally differentiated PCs. Transfer experiments demonstrated that BM nIgM PCs arise from a progenitor in the peritoneal cavity (PerC), but not isolated PerC B1a, B1b, or B2 cells. Immunoglobulin (Ig) gene sequence analysis and examination of B1-8i mice, which carry an Ig knockin that prohibits fetal B-cell development, indicated that nIgM PCs differentiate from fetal-lineage B cells. BrdU uptake experiments showed that the nIgM ASC compartment contains a substantial fraction of long-lived plasma cells (LLPCs). Finally, I demonstrated that nIgM PCs occupy a survival niche distinct from that used by IgG PCs.
In the second part of this dissertation, I characterized the unique survival niche of nIgM LLPCs, which maintain constitutive high titers of nIgM in the serum. By using genetically deficient or Ab-depleted mice, I found that neither T cells, type 2 innate lymphoid cells, nor mast cells, the three major hematopoietic producers of IL-5, were required for nIgM PC survival in the BM. However, IgM PCs associate strongly with IL-5-expressing BM stromal cells, which support their survival in vitro when stimulated. In vivo neutralization of IL-5 revealed that, like individual survival factors for IgG PCs, IL-5 is not the sole supporter of IgM PCs, but is likely one of several redundant molecules that together ensure uninterrupted signaling. Thus, the long-lived nIgM PC niche is not composed of hematopoietic sources of IL-5, but a stromal cell microenvironment that provides multiple redundant survival signals.
In the final part of my study, I identified and characterized the precursor of nIgM PCs, which I found in the first project to be resident in the PerC, but not a B1a, B1b, or B2 cell. By transferring PerC cells sorted based on expression of CD19, CD5, and CD11b, I found that only the CD19+CD5+CD11b- population contained cells capable of differentiating into nIgM PCs. Transfer of decreasing numbers of unfractionated PerC cells into Rag1 knockouts revealed an order-of-magnitude drop in the rate of serum IgM reconstitution between stochastically sampled pools of 10^6 and 3×10^5 PerC cells, suggesting that the CD19+CD5+CD11b- compartment comprises two cell types, and that interaction between the two is necessary for nIgM-PC differentiation. By transferring neonatal liver, I determined that the early hematopoietic environment is required for nIgM PC precursors to develop. Using mice carrying a mutation that disturbs cKit expression, I also found that cKit appears to be required at a critical point near birth for the proper development of nIgM PC precursors.
The collective results of these studies demonstrate that nIgM is the product of BM-resident PCs, which differentiate from a PerC B cell precursor distinct from B1a cells, and survive long-term in a unique survival niche created by stromal cells. My work creates a new paradigm by which to understand nIgM, B1 cell, and PC biology.
Abstract:
Bud formation by Saccharomyces cerevisiae is a fundamental process for yeast proliferation. Bud emergence is initiated by polarization of the cytoskeleton, leading to local secretory vesicle delivery and glucan synthase activity. The master regulator of polarity establishment is a small Rho-family GTPase, Cdc42. Cdc42 forms a clustered patch at the incipient bud site in late G1 and mediates downstream events that lead to bud emergence. Cdc42 promotes morphogenesis via its various effectors. PAKs (p21-activated kinases) are important Cdc42 effectors that mediate actin cytoskeleton polarization and septin filament assembly. The PAKs Cla4 and Ste20 share common binding domains for GTP-Cdc42 and are partially redundant in function. However, we found that Cla4 and Ste20 behaved differently during polarization, and that this difference depended on their distinct membrane interaction domains. Cla4 and Ste20 also compete for a limited number of binding sites at the polarity patch during bud emergence. These results suggest that PAKs may be differentially regulated during polarity establishment.
Morphogenesis of yeast must be coordinated with the nuclear cycle to enable successful proliferation. Many environmental stresses temporarily disrupt bud formation, and in such circumstances, the morphogenesis checkpoint halts nuclear division until bud formation can resume. Bud emergence is essential for degradation of the mitotic inhibitor, Swe1. Swe1 is localized to the septin cytoskeleton at the bud neck by the Swe1-binding protein Hsl7. Neck localization of Swe1 is required for Swe1 degradation. Although septins form a ring at the presumptive bud site prior to bud emergence, Hsl7 is not recruited to the septins until after bud emergence, suggesting that septins and/or Hsl7 respond to a “bud sensor”. Here we show that recruitment of Hsl7 to the septin ring depends on a combination of two septin-binding kinases: Hsl1 and Elm1. We elucidate which domains of these kinases are needed, and show that artificial targeting of those domains suffices to recruit Hsl7 to septin rings even in unbudded cells. Moreover, recruitment of Elm1 is responsive to bud emergence. Our findings suggest that Elm1 plays a key role in sensing bud emergence.
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes response acceleration subject to a displacement constraint.
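The linearly constrained quadratic minimization at the heart of this approach can be sketched as a KKT solve: Gauss's Principle picks the accelerations closest (in the mass metric) to the unconstrained ones while satisfying the acceleration-level constraints. The masses, forces, and constraint rows below are assumed toy values, not from the thesis.

```python
def solve(mat, rhs):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(mat)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        a[k], a[p] = a[p], a[k]
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n + 1):
                a[r][c] -= f * a[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (a[k][n] - sum(a[k][j] * x[j] for j in range(k + 1, n))) / a[k][k]
    return x

def gauss_accel(M, f, A, c):
    """Accelerations by Gauss's Principle of Least Constraint:
    minimize (a - M^{-1} f)^T M (a - M^{-1} f) subject to A a = c,
    solved via the KKT system [[M, A^T], [A, 0]] [a; lam] = [f; c]."""
    n, m = len(M), len(A)
    kkt = [[M[i][j] for j in range(n)] + [A[k][i] for k in range(m)]
           for i in range(n)]
    kkt += [[A[k][j] for j in range(n)] + [0.0] * m for k in range(m)]
    return solve(kkt, list(f) + list(c))[:n]
```

For unit masses, the result is simply the orthogonal projection of the applied accelerations onto the constraint surface.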
Abstract:
Mitotic genome instability can occur during the repair of double-strand breaks (DSBs) in DNA, which arise from endogenous and exogenous sources. Studies of the mechanisms of DNA repair in the budding yeast Saccharomyces cerevisiae have shown that homologous recombination (HR) is a vital repair mechanism for DSBs. HR can result in a crossover event, in which the broken molecule reciprocally exchanges information with a homologous repair template. The current model of double-strand break repair (DSBR) also allows for a tract of information to transfer non-reciprocally from the template molecule to the broken molecule. These "gene conversion" events can vary in size and can occur in conjunction with a crossover event or in isolation. The frequency and size of gene conversions in isolation and of gene conversions associated with crossing over have been a source of debate, owing to variation in the systems used to detect gene conversions and in the context in which they are measured.
In Chapter 2, I use an unbiased system that measures the frequency and size of gene conversion events, as well as the association of gene conversion events with crossing over between homologs in diploid yeast. We show that mitotic gene conversions occur at a rate of 1.3×10^-6 per cell division, are either large (median 54.0 kb) or small (median 6.4 kb), and are associated with crossing over 43% of the time.
DSBs can arise from endogenous cellular processes such as replication and transcription. Two important RNA/DNA hybrids are involved in replication and transcription: R-loops, which form when an RNA transcript base-pairs with the DNA template and displaces the non-template DNA strand, and ribonucleotides embedded in DNA (rNMPs), which arise when replicative polymerases erroneously insert ribonucleotide rather than deoxyribonucleotide triphosphates. RNaseH1 (encoded by RNH1) and RNaseH2 (whose catalytic subunit is encoded by RNH201) both recognize and degrade the RNA within R-loops, while RNaseH2 alone recognizes, nicks, and initiates removal of rNMPs embedded in DNA. Because of their redundant abilities to act on RNA:DNA hybrids, aberrant removal of rNMPs from DNA has been thought to lead to genome instability in an rnh201Δ background.
In Chapter 3, I characterize (1) non-selective genome-wide homologous recombination events and (2) crossing over on chromosome IV in mutants defective in RNaseH1, RNaseH2, or both. Using a mutant DNA polymerase that incorporates 4-fold fewer rNMPs than wild type, I demonstrate that the primary recombinogenic lesion in the RNaseH2-defective genome is not rNMPs, but rather R-loops. This work suggests different in vivo roles for RNaseH1 and RNaseH2 in resolving R-loops in yeast and is consistent with R-loops, not rNMPs, being the likely source of pathology in Aicardi-Goutières Syndrome patients defective in RNaseH2.
Abstract:
Dynamically typed programming languages such as JavaScript and Python postpone type checking until run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must attempt to eliminate redundant dynamic type tests. This is usually done with a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique requires no costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions give basic block versioning interprocedural optimization capabilities. The first gives it the ability to attach type information to object properties and global variables. Entry-point specialization then lets it pass type information from calling functions to called functions. Finally, call-continuation specialization transmits the types of called functions' return values back to their callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
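The core idea of lazy basic block versioning can be sketched in miniature: each basic block is specialized on demand for the typing context it is reached with, so a type test whose outcome the context already determines is simply dropped. The instruction format and block representation below are hypothetical illustrations, not the actual VM's intermediate representation.

```python
def compile_block(block_id, type_ctx, blocks, cache):
    """Lazily compile a version of a basic block specialized for the
    incoming typing context (a toy sketch). Versions are created on
    demand and cached per (block, context) pair."""
    key = (block_id, tuple(sorted(type_ctx.items())))
    if key in cache:
        return cache[key]
    code = []
    ctx = dict(type_ctx)
    for op in blocks[block_id]:
        if op[0] == "test_type":          # ("test_type", var, type)
            _, var, ty = op
            if ctx.get(var) == ty:
                continue                   # redundant test: eliminated
            code.append(op)
            ctx[var] = ty                  # the test passed on this path
        else:
            code.append(op)
    cache[key] = code
    return code
```

Reaching the same block with a richer typing context (e.g. from a specialized entry point) yields a second version with even the first type test removed, which is the mechanism the interprocedural extensions exploit.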