980 results for Dynamic code generation
Abstract:
GENTRANS, a comprehensive one-dimensional dynamic simulator for electrophoretic separations and transport, was extended for handling electrokinetic chiral separations with a neutral ligand. The code can be employed to study the 1:1 interaction of monovalent weak and strong acids and bases with a single monovalent weak or strong acid or base additive, including a neutral cyclodextrin, under real experimental conditions. It is a tool to investigate the dynamics of chiral separations and to provide insight into the buffer systems used in chiral capillary zone electrophoresis (CZE) and chiral isotachophoresis. Analyte stacking across conductivity and buffer additive gradients, changes of additive concentration, buffer component concentration, pH, and conductivity across migrating sample zones and peaks, and the formation and migration of system peaks can thereby be investigated in a hitherto inaccessible way. For model systems with charged weak bases and neutral modified β-cyclodextrins at acidic pH, for which complexation constants, ionic mobilities, and mobilities of selector-analyte complexes have been determined by CZE, simulated and experimentally determined electropherograms and isotachopherograms are shown to be in good agreement. Simulation data reveal that CZE separations of cationic enantiomers performed in phosphate buffers at low pH occur behind a fast cationic migrating system peak that has a small impact on the buffer composition under which enantiomeric separation takes place.
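As a point of reference for the 1:1 interaction model underlying such simulations, the sketch below evaluates the commonly used 1:1 complexation expression for the effective mobility of an analyte binding a neutral cyclodextrin, mu_eff = (mu_free + mu_complex*K*[CD]) / (1 + K*[CD]). It is not part of GENTRANS; all parameter values are hypothetical and only illustrate how a difference in complexation constants between enantiomers translates into a mobility difference.

```python
def effective_mobility(mu_free, mu_complex, K, c_cd):
    """Effective mobility of an analyte in 1:1 equilibrium with a neutral
    selector (e.g., a cyclodextrin) at selector concentration c_cd."""
    return (mu_free + mu_complex * K * c_cd) / (1.0 + K * c_cd)

# Hypothetical parameters: both enantiomers share the free and complexed
# mobilities but bind the cyclodextrin with different constants.
mu_free = 2.0e-8          # m^2 V^-1 s^-1, mobility of the free cation
mu_complex = 0.5e-8       # m^2 V^-1 s^-1, mobility of the analyte-CD complex
K_R, K_S = 120.0, 150.0   # M^-1, 1:1 complexation constants
c_cd = 0.010              # M, neutral cyclodextrin concentration

mu_R = effective_mobility(mu_free, mu_complex, K_R, c_cd)
mu_S = effective_mobility(mu_free, mu_complex, K_S, c_cd)
print(f"mu_R = {mu_R:.3e}  mu_S = {mu_S:.3e}  delta = {mu_R - mu_S:.3e}")
```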
Abstract:
Information-centric networking (ICN) offers new perspectives on mobile ad-hoc communication because routing is based on names rather than on endpoint identifiers. Since every content object has a unique name and is signed, authentic content can be stored and cached by any node. If connectivity to a content source breaks, it is not necessarily required to build a new path to the same source; content can also be retrieved from a closer node that holds a copy of the same content. For example, in case of collisions, retransmissions do not need to be performed over the entire path but, thanks to caching, only over the link where the collision occurred. Furthermore, multiple requests can be aggregated to improve the scalability of wireless multi-hop communication. In this work, we base our investigations on Content-Centric Networking (CCN), a popular ICN architecture. While related work on wireless CCN communication is based exclusively on broadcast communication, we show that this is not needed for efficient mobile ad-hoc communication. With Dynamic Unicast, requesters can build unicast paths to content sources after these have been identified via broadcast. We have implemented Dynamic Unicast in CCNx, which provides a reference implementation of the CCN concepts, and performed extensive evaluations in diverse mobile scenarios using NS3-DCE, the direct code execution framework for the NS3 network simulator. Our evaluations show that Dynamic Unicast can result in more efficient communication than broadcast communication, while still supporting all CCN advantages such as caching, scalability, and implicit content discovery.
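As a rough illustration of the Dynamic Unicast idea described above (broadcast Interests until a content source is identified, then unicast to it, with a fallback to broadcast when the path breaks), here is a minimal sketch in Python. It is not the CCNx implementation; the face interface, addresses, and helper methods are assumed for illustration.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"   # broadcast address (illustrative)

class DynamicUnicastRequester:
    """Broadcast Interests until a content source answers, then switch to
    unicast toward that source; fall back to broadcast if the path breaks."""

    def __init__(self, face):
        self.face = face      # network face with an assumed send_interest() helper
        self.source = None    # address of the content source, once identified

    def request(self, name):
        dst = self.source if self.source is not None else BROADCAST
        data = self.face.send_interest(name, dst)   # returns Data or None (assumed)
        if data is None and self.source is not None:
            # Unicast path broke: forget the source and rediscover it by broadcast.
            self.source = None
            data = self.face.send_interest(name, BROADCAST)
        if data is not None:
            self.source = data.sender               # remember who answered
        return data
```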
Abstract:
More than a century ago, Ramon y Cajal pioneered the description of neural circuits. Currently, new techniques are being developed to streamline the characterization of entire neural circuits. Even if this 'connectome' approach is successful, it will represent only a static description of neural circuits. Thus, a fundamental question in neuroscience is to understand how information is dynamically represented by neural populations. In this thesis, I studied two main aspects of dynamical population codes. First, I studied how exposure, or adaptation, for a fraction of a second to oriented gratings dynamically changes the population response of primary visual cortex neurons. The effects of adaptation to oriented gratings have been extensively explored in psychophysical and electrophysiological experiments. However, whether rapid adaptation might induce a change in the primary visual cortex's functional connectivity that dynamically impacts population coding accuracy is currently unknown. To address this issue, we performed multi-electrode recordings in primary visual cortex, where adaptation has previously been shown to induce changes in the selectivity and response amplitude of individual neurons. We found that adaptation improves population coding accuracy. The improvement was more prominent for iso- and orthogonal-orientation adaptation, consistent with previously reported psychophysical experiments. We propose that selective decorrelation is a metabolically inexpensive mechanism that the visual system employs to dynamically adapt neural responses to the statistics of the input stimuli and thereby improve coding efficiency. Second, I investigated how ongoing activity modulates orientation coding in single neurons, neural populations, and behavior. Cortical networks are never silent, even in the absence of external stimulation. Ongoing activity can account for up to 80% of the metabolic energy consumed by the brain. Thus, a fundamental question is to understand the functional role of ongoing activity and its impact on neural computations. I studied how orientation coding by individual neurons and cell populations in primary visual cortex depends on the spontaneous activity before stimulus presentation. We hypothesized that since the ongoing activity of nearby neurons is strongly correlated, it would influence the ability of the entire population of orientation-selective cells to process orientation, depending on the prestimulus spontaneous state. Our findings demonstrate that ongoing activity dynamically filters incoming stimuli to shape the accuracy of orientation coding by individual neurons and cell populations, and that this interaction affects behavioral performance. In summary, this thesis is a contribution to the study of how dynamic internal states such as rapid adaptation and ongoing activity modulate population coding accuracy.
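A small worked example may help make the link between decorrelation and coding accuracy concrete: for neurons with similar tuning, the linear Fisher information I = f'^T C^{-1} f' grows as the noise correlation shrinks. The sketch below is illustrative only, with invented numbers, and is not the decoding analysis used in the thesis.

```python
import numpy as np

def linear_fisher_information(f_prime, cov):
    """Linear Fisher information I = f'^T C^{-1} f' for tuning-curve
    derivatives f_prime and noise covariance matrix cov."""
    return float(f_prime @ np.linalg.solve(cov, f_prime))

# Two similarly tuned neurons (invented numbers): lowering the noise
# correlation rho increases the information the pair carries about orientation.
f_prime = np.array([1.0, 1.0])
for rho in (0.3, 0.1):
    cov = np.array([[1.0, rho], [rho, 1.0]])   # unit variances, correlation rho
    print(rho, linear_fisher_information(f_prime, cov))
```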
Abstract:
From laboratory tests under simulated downhole conditions we tentatively conclude that the higher the triaxial-compressive strength, the lower the drilling rate of basalts from DSDP Hole 504B. Because strength is roughly proportional to Young's modulus of elasticity, which is related in turn to seismic-wave velocities, one may be able to estimate drilling rates from routine shipboard measurements. However, further research is needed to verify that P-wave velocity is a generally useful predictor of relative drilling rate.
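As an illustration of the proposed estimation chain (velocity to elastic modulus to strength to drilling rate), the sketch below computes a dynamic Young's modulus from P-wave velocity and bulk density using the standard isotropic relation, assuming a Poisson's ratio. The numbers and the assumed Poisson's ratio are illustrative and are not measurements from Hole 504B.

```python
def dynamic_youngs_modulus(vp, rho, nu=0.25):
    """Dynamic Young's modulus (Pa) from P-wave velocity vp (m/s) and bulk
    density rho (kg/m^3), for an isotropic medium with Poisson's ratio nu.
    nu = 0.25 is only an assumed default, not a value from the paper."""
    return rho * vp**2 * (1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu)

# Illustrative numbers only, not measurements from Hole 504B basalts.
E = dynamic_youngs_modulus(vp=5500.0, rho=2900.0)
print(f"E_dyn ~ {E / 1e9:.0f} GPa")
```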
Abstract:
IPOD Leg 49 recovered basalts from 9 holes at 7 sites along 3 transects across the Mid-Atlantic Ridge: 63°N (Reykjanes), 45°N and 36°N (FAMOUS area). This has provided further information on the nature of mantle heterogeneity in the North Atlantic by enabling studies to be made of the variation of basalt composition with depth and with time near critical areas (Iceland and the Azores) where deep mantle plumes are thought to exist. Over 150 samples have been analysed for up to 40 major and trace elements and the results used to place constraints on the petrogenesis of the erupted basalts and hence on the geochemical nature of their source regions. It is apparent that few of the recovered basalts have the geochemical characteristics of typical "depleted" midocean ridge basalts (MORB). An unusually wide range of basalt compositions may be erupted at a single site: the range of rare earth patterns within the short section cored at Site 413, for instance, encompasses the total variation of REE patterns previously reported from the FAMOUS area. Nevertheless it is possible to account for most of the compositional variation at a single site by partial melting processes (including dynamic melting) and fractional crystallization. Partial melting mechanisms seem to be the dominant processes relating basalt compositions, particularly at 36°N and 45°N, suggesting that long-lived sub-axial magma chambers may not be a consistent feature of the slow-spreading Mid-Atlantic Ridge. Comparisons of basalts erupted at the same ridge segment for periods of the order of 35 m.y. (now lying along the same mantle flow line) do show some significant inter-site differences in Rb/Sr, Ce/Yb, 87Sr/86Sr, etc., which cannot be accounted for by fractionation mechanisms and which must reflect heterogeneities in the mantle source. However when hygromagmatophile (HYG) trace element levels and ratios are considered, it is the constancy or consistency of these HYG ratios which is the more remarkable, implying that the mantle source feeding a particular ridge segment was uniform with respect to these elements for periods of the order of 35 m.y. and probably since the opening of the Atlantic. Yet these HYG element ratios at 63°N are very different from those at 45°N and 36°N and significantly different from the values at 22°N and in "MORB". The observed variations are difficult to reconcile with current concepts of mantle plumes and binary mixing models. The mantle is certainly heterogeneous, but there is not simply an "enriched" and a "depleted" source, but rather a range of sources heterogeneous on different scales for different elements - to an extent and volume depending on previous depletion/enrichment events. HYG element ratios offer the best method of defining compositionally different mantle segments since they are little modified by the fractionation processes associated with basalt generation.
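For readers less familiar with the petrogenetic models invoked above, the sketch below evaluates the standard batch (equilibrium) partial melting and Rayleigh fractional crystallization equations for a trace element; the partition coefficients and melt fractions are invented for illustration. It also hints at why highly incompatible (HYG-like) element ratios are barely modified by these processes and therefore fingerprint the mantle source.

```python
def batch_melting(c0, D, F):
    """Liquid concentration after equilibrium (batch) partial melting of a
    source with concentration c0, bulk partition coefficient D, melt fraction F."""
    return c0 / (D + F * (1.0 - D))

def rayleigh_crystallization(c0, D, f_liq):
    """Liquid concentration after Rayleigh fractional crystallization, with
    f_liq the fraction of liquid remaining."""
    return c0 * f_liq ** (D - 1.0)

# Invented values for a highly incompatible (HYG-like) element: because D is
# tiny, the ratio of two such elements is nearly unchanged by melting or
# crystallization, which is why HYG ratios track the mantle source.
print(batch_melting(c0=1.0, D=0.01, F=0.10))                # ~9x enrichment
print(rayleigh_crystallization(c0=1.0, D=0.01, f_liq=0.5))  # ~2x enrichment
```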
Abstract:
At subduction zones, the permeability of major fault zones influences pore pressure generation, controls fluid flow pathways and rates, and affects fault slip behavior and mechanical strength by mediating effective normal stress. Therefore, there is a need for detailed and systematic permeability measurements of natural materials from fault systems, particularly measurements that allow direct comparison between the permeability of sheared and unsheared samples from the same host rock or sediment. We conducted laboratory experiments to compare the permeability of sheared and uniaxially consolidated (unsheared) marine sediments sampled during IODP Expedition 316 and ODP Leg 190 to the Nankai Trough offshore Japan. These samples were retrieved from: (1) The décollement zone and incoming trench fill offshore Shikoku Island (the Muroto transect); (2) Slope sediments sampled offshore SW Honshu (the Kumano transect) ~ 25 km landward of the trench, including material overridden by a major out-of-sequence thrust fault, termed the "megasplay"; and (3) A region of diffuse thrust faulting near the toe of the accretionary prism along the Kumano transect. Our results show that shearing reduces fault-normal permeability by up to 1 order of magnitude, and this reduction is largest for shallow (< 500 mbsf) samples. Shearing-induced permeability reduction is smaller in samples from greater depth, where pre-existing fabric from compaction and lithification may be better developed. Our results indicate that localized shearing in fault zones should result in heterogeneous permeability in the uppermost few kilometers in accretionary prisms, which favors both the trapping of fluids beneath and within major faults, and the channeling of flow parallel to fault structure. These low permeabilities promote the development of elevated pore fluid pressures during accretion and underthrusting, and will also facilitate dynamic hydrologic processes within shear zones including dilatancy hardening and thermal pressurization.
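For context on what such laboratory permeability measurements involve, the sketch below backs out permeability from a steady-state flow test via Darcy's law; the flow rate, sample geometry, and pressure difference are invented values, not data from Expedition 316 or Leg 190.

```python
def darcy_permeability(q, mu, length, area, dp):
    """Permeability k (m^2) from a steady-state flow-through test via Darcy's
    law, q = (k * area / mu) * (dp / length), solved for k.
    q [m^3/s], mu [Pa s], length [m], area [m^2], dp [Pa]."""
    return q * mu * length / (area * dp)

# Invented test values (not Expedition 316 / Leg 190 data).
k = darcy_permeability(q=1e-12, mu=1e-3, length=0.02, area=5e-4, dp=2e5)
print(f"k ~ {k:.1e} m^2")   # ~2e-19 m^2, in the range typical of clay-rich sediment
```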
Abstract:
The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
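As an illustration of how a frequency-magnitude (power-law) distribution of debris-flow volumes can be parameterized, the sketch below applies the standard maximum-likelihood estimator for a power-law tail exponent; the volumes and threshold are hypothetical, and this estimator is not necessarily the one used in the study.

```python
import math

def power_law_exponent(volumes, v_min):
    """Maximum-likelihood estimate of the exponent of a power-law tail,
    alpha = 1 + n / sum(ln(V_i / v_min)), using events with V_i >= v_min."""
    tail = [v for v in volumes if v >= v_min]
    return 1.0 + len(tail) / sum(math.log(v / v_min) for v in tail)

# Hypothetical event volumes in m^3; a larger (steeper) exponent corresponds
# to a higher proportion of small events, as reported for post-fire flows.
volumes = [120, 300, 450, 800, 1500, 2300, 5000, 9000, 20000, 60000]
print(power_law_exponent(volumes, v_min=100.0))
```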
Abstract:
Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both proving programs correct and replacing a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility, and automation which is inherent in abstract interpretation techniques to the area of mobile code safety.
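The asymmetry ACC exploits can be sketched in a few lines: the producer iterates an abstract transfer function to a fixpoint (the certificate), while the consumer only needs one application to confirm the shipped abstraction is indeed a fixpoint. The toy domain and transfer function below are stand-ins for illustration and are unrelated to the analyzers used in the paper.

```python
def produce_certificate(transfer, initial):
    """Producer side: iterate the abstract transfer function to a fixpoint
    (potentially many passes); the fixpoint is shipped as the certificate."""
    state = initial
    while True:
        nxt = transfer(state)
        if nxt == state:
            return state
        state = nxt

def check_certificate(transfer, certificate):
    """Consumer side: one application suffices to confirm the certificate is
    a fixpoint of the abstract semantics."""
    return transfer(certificate) == certificate

# Toy abstract domain (a chain bot < nonneg < top) and a transfer function
# standing in for the abstract semantics of `x = 0; while ...: x = x + 1`.
def transfer(state):
    return "nonneg" if state in ("bot", "nonneg") else "top"

cert = produce_certificate(transfer, "bot")     # producer: iterates to "nonneg"
assert check_certificate(transfer, cert)        # consumer: single-pass check
```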
Abstract:
There is interest in performing pin-by-pin calculations coupled with thermal hydraulics so as to improve the accuracy of nuclear reactor analysis. In the framework of the EU NURISP project, INRNE and UPM have generated an experimental version of a few-group diffusion cross-section library with discontinuity factors intended for VVER analysis at the pin level with the COBAYA3 code. The transport code APOLLO2 was used to perform the branching calculations. As a first proof of principle, the library was created for fresh fuel and covers almost the full parameter space of steady-state and transient conditions. The main objective is to test the calculation schemes and post-processing procedures, including multi-pin branching calculations. Two library options are being studied: one based on linear table interpolation and another using a functional fitting of the cross sections. The libraries generated with APOLLO2 have been tested with the pin-by-pin diffusion model in COBAYA3, including discontinuity factors, first comparing 2D results against the APOLLO2 reference solutions and afterwards using the libraries to compute a 3D assembly problem coupled with a simplified thermal-hydraulic model.
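As an illustration of the first library option (linear table interpolation), the sketch below bilinearly interpolates one tabulated cross section over a (fuel temperature, moderator density) grid; the state variables, grid points, and values are invented and do not come from the APOLLO2-generated library.

```python
def bilinear_xs(table, tfuel, dmod):
    """Bilinear interpolation of one cross section tabulated on a regular
    (fuel temperature, moderator density) grid. Assumes the query point lies
    inside the tabulated range; axes and values here are invented."""
    tf, dm, v = table["tfuel"], table["dmod"], table["values"]
    i = max(j for j in range(len(tf) - 1) if tf[j] <= tfuel)
    k = max(j for j in range(len(dm) - 1) if dm[j] <= dmod)
    wt = (tfuel - tf[i]) / (tf[i + 1] - tf[i])
    wd = (dmod - dm[k]) / (dm[k + 1] - dm[k])
    return ((1 - wt) * (1 - wd) * v[i][k] + wt * (1 - wd) * v[i + 1][k]
            + (1 - wt) * wd * v[i][k + 1] + wt * wd * v[i + 1][k + 1])

# Tiny illustrative table for a single few-group absorption cross section.
xs_abs = {
    "tfuel": [600.0, 900.0, 1200.0],   # fuel temperature grid (K)
    "dmod": [0.65, 0.70, 0.75],        # moderator density grid (g/cm^3)
    "values": [[0.0100, 0.0104, 0.0108],
               [0.0102, 0.0106, 0.0110],
               [0.0104, 0.0108, 0.0112]],
}
print(bilinear_xs(xs_abs, tfuel=750.0, dmod=0.72))
```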
Abstract:
Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate, and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass of an abstract interpretation-based checker. A main challenge is to reduce the size of certificates as much as possible while at the same time not increasing checking time. We introduce the notion of reduced certificate, which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker. We also provide a correct checking algorithm together with sufficient conditions for ensuring its completeness. The experimental results within the CiaoPP system show that our proposal is able to greatly reduce the size of certificates in practice.
Abstract:
Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both proving programs correct and replacing a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility, and automation which is inherent in abstract interpretation techniques to the area of mobile code safety. We have implemented and benchmarked ACC within the Ciao system preprocessor. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable.
Abstract:
Abstraction-Carrying Code (ACC) is a framework for mobile code safety in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate, and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass (i.e., one iteration) of an abstract interpretation-based checker. A main challenge in making ACC useful in practice is to reduce the size of certificates as much as possible, while at the same time not increasing checking time. Intuitively, we only include in the certificate the information which the checker is unable to reproduce without iterating. We introduce the notion of reduced certificate, which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we show how to instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker.
Abstract:
Proof-Carrying Code (PCC) is a general approach to mobile code safety in which the code supplier augments the program with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. Abstraction-Carrying Code (ACC) is an enabling technology for PCC in which an abstract model of the program plays the role of certificate. The generation of the certificate, i.e., the abstraction, is automatically carried out by an abstract interpretation-based analysis engine, which is parametric w.r.t. different abstract domains. While the analyzer on the producer side typically has to compute a semantic fixpoint in a complex, iterative process, on the receiver side it is only necessary to check that the certificate is indeed a fixpoint of the abstract semantics equations representing the program. This is done in a single pass in a much more efficient process. ACC addresses the fundamental issues in PCC and opens the door to the applicability of the large body of frameworks and domains based on abstract interpretation as enabling technology for PCC. We present an overview of ACC and describe, in a tutorial fashion, an application to the problem of resource-aware security in mobile code. Essentially, the information computed by a cost analyzer is used to generate cost certificates which attest to a safe and efficient use of mobile code. A receiving side can then reject code whose cost certificate it cannot validate or whose cost requirements in terms of computing resources (time and/or space) are too large, and accept mobile code which meets the established requirements.
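The resource-aware acceptance step described above can be sketched as follows: the consumer first validates the cost certificate and then compares the certified bound against its own resource policy. The certificate representation and the bound below are hypothetical; this is not the cost analysis or checker from the paper.

```python
def accept_mobile_code(code, cost_cert, validate, max_steps, input_size):
    """Reject code whose cost certificate cannot be validated or whose
    certified resource bound exceeds the consumer's policy."""
    if not validate(code, cost_cert):              # cannot validate -> reject
        return False
    return cost_cert["upper_bound"](input_size) <= max_steps

# Hypothetical certificate: an analyzer-derived upper bound of 5*n + 20 steps.
cert = {"upper_bound": lambda n: 5 * n + 20}
ok = accept_mobile_code(code=None, cost_cert=cert,
                        validate=lambda c, cc: True,   # stand-in checker
                        max_steps=10_000, input_size=100)
print(ok)
```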
Abstract:
Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate, and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass (i.e., one iteration) of an abstract interpretation-based checker. A main challenge in making ACC useful in practice is to reduce the size of certificates as much as possible while at the same time not increasing checking time. The intuitive idea is to only include in the certificate information that the checker is unable to reproduce without iterating. We introduce the notion of reduced certificate, which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify information which can be reconstructed by the single-pass checker. Finally, we study the effects of reduced certificates on the correctness and completeness of the checking process. We provide a correct checking algorithm together with sufficient conditions for ensuring its completeness. Our ideas are illustrated through a running example, implemented in the context of constraint logic programs, which shows that our approach improves on state-of-the-art techniques for reducing the size of certificates.
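A toy sketch of the reduced-certificate idea: ship only the abstraction entries the single-pass checker cannot recompute locally, and let the checker reconstruct the rest while it verifies. This is illustrative only and not the CiaoPP algorithm; it assumes program points are visited in an order in which each entry's dependencies are already available, and all names and the toy lattice are invented.

```python
def reduce_certificate(full_cert, recompute):
    """Producer side: drop the entries a single local application of the
    abstract semantics (`recompute`) already reproduces exactly."""
    return {p: v for p, v in full_cert.items() if recompute(p, full_cert) != v}

def check_and_reconstruct(reduced_cert, points, recompute, leq):
    """Consumer side, single pass: recompute omitted entries, and for shipped
    entries verify that one application stays below the shipped value (so the
    whole table is a post-fixpoint). Returns the reconstructed certificate."""
    cert = dict(reduced_cert)
    for p in points:
        value = recompute(p, cert)
        if p in reduced_cert:
            if not leq(value, reduced_cert[p]):
                return None            # shipped entry is not a post-fixpoint
        else:
            cert[p] = value            # reconstruct the omitted entry
    return cert

# Tiny toy: a chain bot <= nonneg <= top, two program points, and a stand-in
# for one application of the abstract semantics; only "p1" must be shipped.
RANK = {"bot": 0, "nonneg": 1, "top": 2}
leq = lambda a, b: RANK[a] <= RANK[b]
full = {"p1": "top", "p2": "nonneg"}   # full fixpoint computed by the analyzer
def recompute(p, cert):
    return {"p1": cert.get("p2", "bot"), "p2": "nonneg"}[p]

reduced = reduce_certificate(full, recompute)            # {"p1": "top"}
print(reduced, check_and_reconstruct(reduced, ["p1", "p2"], recompute, leq))
```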