990 results for Significance driven computation
Abstract:
We study a resistively shunted semiconductor superlattice subject to a high-frequency electric field. Using a balance equation approach that incorporates the influence of the electric circuit, we determine numerically a range of amplitude and frequency of the ac field for which a dc bias and current are generated spontaneously and show that this region is likely accessible to current experiments. Our simulations reveal that the Bloch frequency corresponding to the spontaneous dc bias is approximately an integer multiple of the ac field frequency.
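As a point of reference only (using the standard definition of the Bloch frequency for a superlattice of period d; the integer n is the multiple reported above), the quantization observed in the simulations can be written as

\[
\omega_B = \frac{e E_{\mathrm{dc}} d}{\hbar} \approx n\,\omega,
\qquad\text{so}\qquad
E_{\mathrm{dc}} \approx \frac{n \hbar \omega}{e d},
\]

where \(\omega\) is the ac field frequency and \(E_{\mathrm{dc}}\) is the spontaneously generated dc field.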
Abstract:
We consider the spontaneous creation of a dc voltage across a strongly coupled semiconductor superlattice subjected to THz radiation. We show that the dc voltage may be approximately proportional either to an integer or to a half-integer multiple of the frequency of the applied ac field, depending on the ratio of the characteristic scattering rates of conducting electrons. For the case of an ac field frequency less than the characteristic scattering rates, we demonstrate the generation of an unquantized dc voltage.
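Purely as an illustration (the number of superlattice periods N is an assumption introduced here, not a quantity given in the abstract), the integer and half-integer plateaus described above correspond to

\[
V_{\mathrm{dc}} = \frac{N \hbar \omega_B}{e} \approx \frac{m}{2}\,\frac{N \hbar \omega}{e},
\qquad m \in \mathbb{Z},
\]

with even m giving the integer plateaus and odd m the half-integer plateaus, depending on the ratio of the scattering rates.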
Abstract:
Cells are known to utilize biochemical noise to probabilistically switch between distinct gene expression states. We demonstrate that such noise-driven switching is dominated by tails of probability distributions and is therefore exponentially sensitive to changes in physiological parameters such as transcription and translation rates. However, provided mRNA lifetimes are short, switching can still be accurately simulated using protein-only models of gene expression. Exponential sensitivity limits the robustness of noise-driven switching, suggesting cells may use other mechanisms in order to switch reliably.
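A minimal sketch of the kind of protein-only stochastic simulation referred to above, using the Gillespie algorithm; the positive-feedback rate function, parameters, and high-state threshold are illustrative assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical protein-only birth-death model with positive feedback
# (illustrative parameters, not taken from the paper).
a0, a, K, n, d = 4.0, 40.0, 25.0, 4, 1.0

def production(x):
    # basal production plus a Hill-type positive-feedback term
    return a0 + a * x**n / (K**n + x**n)

def gillespie(x0, t_end):
    """Stochastic simulation of births/deaths of a single protein species."""
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        birth, death = production(x), d * x
        total = birth + death
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < birth / total else -1
        traj.append((t, x))
    return traj

# Crude (event-weighted) estimate of occupancy of the high-expression state.
traj = gillespie(x0=2, t_end=2000.0)
high = sum(1 for _, x in traj if x > K) / len(traj)
print(f"fraction of sampled events in the high-expression state: {high:.3f}")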
Abstract:
Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
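The sketch below is a generic, hypothetical illustration of the idea that physically unsound behaviours should be unspecifiable; it is not CLEOPATRA syntax and not the TRA formalism itself, and all class and field names are assumptions introduced here.

from dataclasses import dataclass, field

# Hypothetical, simplified illustration of a time-constrained reactive
# transition: an action fired in response to a triggering event, delivered
# only after a strictly positive, bounded delay. Soundness is enforced at
# "specification" time, so clairvoyant or perfectly timed behaviours are
# rejected before any implementation exists.

@dataclass
class Transition:
    trigger: str          # event that causes the reaction
    action: str           # event produced in response
    delay_lo: float       # earliest time after the trigger the action may occur
    delay_hi: float       # latest time by which the action must occur

    def __post_init__(self):
        if self.delay_lo <= 0:
            raise ValueError("zero or negative reaction delay would imply clairvoyance")
        if self.delay_hi < self.delay_lo:
            raise ValueError("empty time window: perfect timing cannot be required")

@dataclass
class TimeConstrainedAutomaton:
    name: str
    transitions: list = field(default_factory=list)

    def add(self, t: Transition):
        self.transitions.append(t)

# Example: a thermostat must react to 'too_hot' within (0, 0.5] seconds.
tra = TimeConstrainedAutomaton("thermostat")
tra.add(Transition(trigger="too_hot", action="cool_on", delay_lo=1e-3, delay_hi=0.5))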
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols to allow resource needs and availability information to be collected and disseminated so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
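The following is an illustrative-only sketch of such an object-oriented framework: an abstract resource-management interface that concrete models (real-time, probabilistic, fault-tolerant) specialise. The class and method names are assumptions for illustration, not the project's actual Resource Management Interface, Resource Registry, or Task Registry API.

from abc import ABC, abstractmethod

class ResourceManager(ABC):
    @abstractmethod
    def register_resource(self, resource_id: str, capacity: float) -> None:
        """Announce a resource (CPU, bandwidth, ...) to the resource registry."""

    @abstractmethod
    def register_task(self, task_id: str, demand: float, deadline: float) -> None:
        """Announce a task and its requirements to the task registry."""

    @abstractmethod
    def schedule(self) -> dict:
        """Map tasks to resources under real-time and reliability constraints."""

class GreedyRealTimeManager(ResourceManager):
    """One concrete model: earliest-deadline-first onto the least-loaded resource."""
    def __init__(self):
        self.resources, self.tasks = {}, {}

    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity

    def register_task(self, task_id, demand, deadline):
        self.tasks[task_id] = (demand, deadline)

    def schedule(self):
        plan, load = {}, {r: 0.0 for r in self.resources}
        for task_id, (demand, _) in sorted(self.tasks.items(), key=lambda kv: kv[1][1]):
            # pick the resource with the most remaining capacity
            best = max(self.resources, key=lambda r: self.resources[r] - load[r])
            load[best] += demand
            plan[task_id] = best
        return plan

mgr = GreedyRealTimeManager()
mgr.register_resource("server-a", 10.0)
mgr.register_resource("server-b", 6.0)
mgr.register_task("render", demand=3.0, deadline=1.0)
mgr.register_task("index", demand=5.0, deadline=2.0)
print(mgr.schedule())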
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
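A toy, bounded illustration of the reduction idea (not the paper's tool chain or proof technique): each component is modelled as a deterministic stream transducer, composition is function composition, and the claim that a longer chain is observationally indistinguishable from a shorter one is checked exhaustively over all input sequences up to a bounded length. The component and alphabet below are assumptions for illustration.

from itertools import product

def relay(stream):
    """A hypothetical store-and-forward component: forwards each item unchanged."""
    return list(stream)

def compose(*components):
    def chained(stream):
        for c in components:
            stream = c(stream)
        return stream
    return chained

def indistinguishable(lhs, rhs, alphabet, max_len):
    # bounded exhaustive check of observational equivalence
    for n in range(max_len + 1):
        for word in product(alphabet, repeat=n):
            if lhs(list(word)) != rhs(list(word)):
                return False
    return True

# Reduction claim: two relays in series are indistinguishable from one relay.
# If it holds, chains of any length collapse to the length-one case.
long_chain = compose(relay, relay)
print(indistinguishable(long_chain, relay, alphabet="ab", max_len=6))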
Abstract:
Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
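A toy, non-adaptive baseline for the channelization setting (all numbers synthetic, and this is not the paper's adaptive weighing-design algorithm): each day every keyword is assigned to a channel, only per-channel aggregate revenue is observed, and per-keyword VPCs are recovered by least squares over many days of (assignment, clicks, revenue).

import numpy as np

rng = np.random.default_rng(1)

K, C, DAYS = 12, 3, 60                       # keywords, channels, days
true_vpc = rng.uniform(0.1, 2.0, size=K)     # unknown quantity we want to estimate

rows, revenues = [], []
for _ in range(DAYS):
    assignment = rng.integers(0, C, size=K)  # random (non-adaptive) design
    clicks = rng.poisson(50, size=K)
    for c in range(C):
        mask = (assignment == c).astype(float)
        weighted_clicks = mask * clicks      # clicks contributed to channel c
        rows.append(weighted_clicks)
        revenues.append(weighted_clicks @ true_vpc + rng.normal(0, 1.0))

# Each channel-day gives one linear equation in the K unknown VPCs.
A, y = np.array(rows), np.array(revenues)
vpc_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("max abs VPC error:", np.abs(vpc_hat - true_vpc).max())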
Abstract:
Temporal locality of reference in Web request streams emerges from two distinct phenomena: the popularity of Web objects and the temporal correlation of requests. Capturing these two elements of temporal locality is important because it enables cache replacement policies to adjust how they capitalize on temporal locality based on the relative prevalence of these phenomena. In this paper, we show that temporal locality metrics proposed in the literature are unable to delineate between these two sources of temporal locality. In particular, we show that the commonly-used distribution of reference interarrival times is predominantly determined by the power law governing the popularity of documents in a request stream. To capture (and more importantly quantify) both sources of temporal locality in a request stream, we propose a new and robust metric that enables accurate delineation between locality due to popularity and that due to temporal correlation. Using this metric, we characterize the locality of reference in a number of representative proxy cache traces. Our findings show that there are measurable differences between the degrees (and sources) of temporal locality across these traces, and that these differences are effectively captured using our proposed metric. We illustrate the significance of our findings by summarizing the performance of a novel Web cache replacement policy, called GreedyDual*, which exploits both long-term popularity and short-term temporal correlation in an adaptive fashion. Our trace-driven simulation experiments (which are detailed in an accompanying Technical Report) show the superior performance of GreedyDual* when compared to other Web cache replacement policies.
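A simple diagnostic in the spirit of the above, though not the paper's metric: compare reference interarrival times in a request stream against a randomly permuted copy of the same stream. Permutation preserves object popularity but destroys temporal correlation, so any gap between the two is attributable to correlation alone. The synthetic bursty trace is an assumption for illustration.

import random

def interarrival_times(stream):
    last_seen, gaps = {}, []
    for i, obj in enumerate(stream):
        if obj in last_seen:
            gaps.append(i - last_seen[obj])
        last_seen[obj] = i
    return gaps

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

# Synthetic trace with short-term correlation: bursts of repeated objects.
random.seed(0)
trace = []
for _ in range(2000):
    obj = random.randint(1, 200)
    trace.extend([obj] * random.randint(1, 4))   # bursty re-references

shuffled = trace[:]
random.shuffle(shuffled)                          # popularity kept, correlation removed

print("mean interarrival (original):        ", mean(interarrival_times(trace)))
print("mean interarrival (popularity only): ", mean(interarrival_times(shuffled)))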
Abstract:
It is shown that determining whether a quantum computation has a non-zero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation. This result is achieved by showing that the complexity class NQP of Adleman, Demarrais, and Huang, a quantum analog of NP, is equal to the counting class coC=P.
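For context, the standard definitions behind the final equality (Q a polynomial-time quantum machine, M a polynomial-time nondeterministic machine) are

\[
L \in \mathrm{NQP} \iff \exists Q:\; x \in L \Leftrightarrow \Pr[\,Q \text{ accepts } x\,] > 0,
\qquad
L \in \mathrm{coC_{=}P} \iff \exists M:\; x \in L \Leftrightarrow \#\mathrm{acc}_M(x) \neq \#\mathrm{rej}_M(x),
\]

and the result above states that these two classes coincide.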
Abstract:
Acute myeloid leukaemia refers to cancer of the blood and bone marrow characterised by the rapid expansion of immature blasts of the myeloid lineage. The aberrant proliferation of these blasts interferes with normal haematopoiesis, resulting in symptoms such as anaemia, poor coagulation and infections. The molecular mechanisms underpinning acute myeloid leukaemia are multi-faceted and complex, with a range of diverse genetic and cytogenetic abnormalities giving rise to the acute myeloid leukaemia phenotype. Amongst the most common causative factors are mutations of the FLT3 gene, which codes for a growth factor receptor tyrosine kinase required by developing haematopoietic cells. Disruptions to this gene can result in constitutively active FLT3, driving the de-regulated proliferation of undifferentiated precursor blasts. FLT3-targeted drugs provide the opportunity to inhibit this oncogenic receptor, but over time can give rise to resistance within the blast population. The identification of targetable components of the FLT3 signalling pathway may allow for combination therapies to be used to impede the emergence of resistance. However, the intracellular signal transduction pathway of FLT3 is relatively obscure. The objective of this study is to further elucidate this pathway, with particular focus on the redox signalling element which is thought to be involved. Signalling via reactive oxygen species is becoming increasingly recognised as a crucial aspect of physiological and pathological processes within the cell. The first part of this study examined the effects of NADPH oxidase-derived reactive oxygen species on the tyrosine phosphorylation levels of acute myeloid leukaemia cell lines. Using two-dimensional phosphotyrosine immunoblotting, a range of proteins were identified as undergoing tyrosine phosphorylation in response to NADPH oxidase activity. Ezrin, a cytoskeletal regulatory protein and substrate of Src kinase, was selected for further study. The next part of this study established that NADPH oxidase is subject to regulation by FLT3. Both wild type and oncogenic FLT3 signalling were shown to affect the expression of a key NADPH oxidase subunit, p22phox, and FLT3 was also demonstrated to drive intracellular reactive oxygen species production. The NADPH oxidase target protein, Ezrin, undergoes phosphorylation on two tyrosine residues downstream of FLT3 signalling, an effect which was shown to be p22phox-dependent and which was attributed to the redox regulation of Src. The cytoskeletal associations of Ezrin and its established role in metastasis prompted the investigation of the effects of FLT3 and NADPH oxidase activity on the migration of acute myeloid leukaemia cell lines. It was found that inhibition of either FLT3 or NADPH oxidase negatively impacted on the motility of acute myeloid leukaemia cells. The final part of this study focused on the relationship between FLT3 signalling and phosphatase activity. It was determined, using phosphatase expression profiling and real-time PCR, that several phosphatases are subject to regulation at the levels of transcription and post-translational modification downstream of oncogenic FLT3 activity. In summary, this study demonstrates that FLT3 signal transduction utilises a NADPH oxidase-dependent redox element, which affects Src kinase, and modulates leukaemic cell migration through Ezrin. Furthermore, the expression and activity of several phosphatases is tightly linked to FLT3 signalling. 
This work reveals novel components of the FLT3 signalling cascade and indicates a range of potential therapeutic targets.
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: How can we build a design flow which incorporates academic and industry standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate in this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: Is it possible to apply a systematic approach for power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of implementation. Finally, the third question which this thesis attempts to answer is: Is there a systematic approach for multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is not only constrained by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is not only governed by the decrease in delay but also by the increase in power. The goal is to obtain multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under both a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise switching power or longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and the Uniform Cost Search algorithm. Simulated Annealing (SMA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SMA to probabilistically decide between moving from one optimised solution to another such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree or graph. We have used the Uniform Cost Search algorithm to search within the AIG network for a specific AIG node order for applying the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay with minimal overhead are achieved, compared to the best known ABC results. Our approach is also applied to a number of processor designs with combinational and sequential components, and significant savings are achieved.
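A generic simulated-annealing skeleton for the delay-constrained power minimisation described above; this is not the thesis's ABC-based flow, and the cost model is a stand-in. A candidate solution is an ordering of n nodes, while "power" and "delay" below are synthetic functions of that ordering; real use would plug in the AIG-based switching-activity and longest-path estimates instead.

import math, random

random.seed(0)

n = 40
weights = [random.random() for _ in range(n)]

def power(order):
    # stand-in for a switching-activity-weighted cost of the ordering
    return sum(w * (i + 1) for i, w in enumerate(weights[j] for j in order))

def delay(order):
    # stand-in for the longest-path delay after applying reordering rules
    return max(abs(a - b) for a, b in zip(order, order[1:]))

def cost(order, delay_budget, penalty=100.0):
    # power objective plus a penalty for violating the delay constraint
    return power(order) + penalty * max(0.0, delay(order) - delay_budget)

def anneal(delay_budget, steps=20000, t0=50.0):
    current = list(range(n))
    best = current[:]
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-6          # linear cooling schedule
        cand = current[:]
        i, j = random.sample(range(n), 2)             # neighbour move: swap two positions
        cand[i], cand[j] = cand[j], cand[i]
        d = cost(cand, delay_budget) - cost(current, delay_budget)
        if d < 0 or random.random() < math.exp(-d / t):
            current = cand
            if cost(current, delay_budget) < cost(best, delay_budget):
                best = current[:]
    return best

best = anneal(delay_budget=20)
print("power:", round(power(best), 2), "delay:", delay(best))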
Abstract:
The primary aim of this thesis is to analyse legal and governance issues in the use of Environmental NPR-PPMs, particularly those aiming to promote sustainable practices or to protect natural resources. NPR-PPMs have traditionally been thought of as being incompatible with the rules of the World Trade Organization (WTO). However, the issue remains untouched by WTO adjudicatory bodies. One can suggest that WTO adjudicatory bodies may want to leave this issue to the Members, but the analysis of the case law also seems to indicate that the question of legality of NPR-PPMs has not been brought ‘as such’ in dispute settlement. This thesis advances the argument that despite the fact that the legal status of NPR-PPMs remains unsettled, during the last decades adjudicatory bodies have been scrutinising environmental measures based on NPR-PPMs just as another expression of the regulatory autonomy of the Members. Though NPR-PPMs are regulatory choices associated with a wide range of environmental concerns, trade disputes giving rise to questions related to the legality of process-based measures have been mainly associated with the protection of marine wildlife (i.e., fishing techniques threatening or affecting animal species). This thesis argues that environmental objectives articulated as NPR-PPMs can indeed qualify as legitimate objectives both under the GATT and the TBT Agreement. However, an important challenge for their compatibility with WTO law relates to aspects associated with arbitrary or unjustifiable discrimination. In the assessment of discrimination, procedural issues play an important role. This thesis also elucidates other important dimensions to the issue from the perspective of global governance. One of the arguments advanced in this thesis is that a comprehensive analysis of environmental NPR-PPMs should consider not only their role in what is regarded as trade barriers (governmental and market-driven), but also their significance in global objectives such as the transition towards a green economy and sustainable patterns of consumption and production.
Towards a situation-awareness-driven design of operational business intelligence & analytics systems
Abstract:
With the flood of data and the demand for timeliness in the organizational context, the decision maker's choice of an appropriate decision alternative in a given situation is challenged. In particular, operational actors are facing the challenge of making business-critical decisions in a short time and at high frequency. The construct of Situation Awareness (SA) has been established in cognitive psychology as a valid basis for understanding the behavior and decision making of human beings in complex and dynamic systems. SA gives decision makers the ability to make informed, time-critical decisions and thereby improve the performance of the respective business process. This research paper leverages SA as the starting point for a design science project for Operational Business Intelligence and Analytics systems and suggests a first version of design principles.