801 results for Response time (computer systems)
Abstract:
Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null-dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic, as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they get propagated, reducing to true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach, and present an evaluation of it on a set of ten real Java programs. Our results show that the set of design features we have incorporated enables the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
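As a rough illustration of the flavour of such a demand-driven backward analysis (a minimal sketch only, not the paper's algorithm: the statement forms, the predicate domain and the pruning rule below are heavily simplified and hypothetical), the following walks a straight-line program backwards from a dereference, maintaining the set of "v is null" conjuncts that must hold at each point for the dereference to fail, and pruning the formula when it grows too large:

    # Minimal sketch of a demand-driven backward null-dereference analysis.
    # Illustration only: real analyses handle branches, calls, heap access
    # paths and aliasing, none of which are modelled here.
    def entry_precondition(stmts, deref_var, max_conjuncts=4):
        """Return the pre-condition at entry (a set of "v is null" conjuncts)
        under which the final dereference of deref_var can fail, or None if
        the dereference is proven safe on this straight-line path."""
        cond = {deref_var}                    # query: deref_var is null here
        for kind, lhs, rhs in reversed(stmts):
            if lhs not in cond:
                continue                      # statement cannot affect the formula
            cond = cond - {lhs}
            if kind == "copy":                # lhs := rhs  -> substitute rhs for lhs
                cond = cond | {rhs}
            elif kind == "new":               # lhs := new T() -> lhs is non-null,
                return None                   # so the conjunction is false: safe
            # Pruning for scalability: "reduce to true" (drop) surplus conjuncts
            # once the formula exceeds a budget; the choice here is arbitrary.
            if len(cond) > max_conjuncts:
                cond = set(sorted(cond)[:max_conjuncts])
        return cond

    # z := new T(); y := z; x := y; x.f  -- the dereference of x is safe.
    prog = [("new", "z", None), ("copy", "y", "z"), ("copy", "x", "y")]
    print(entry_precondition(prog, "x"))      # -> None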
Abstract:
Various logical formalisms with the freeze quantifier have been recently considered to model computer systems, even though this is a powerful mechanism that often leads to undecidability. In this article, we study a linear-time temporal logic with past-time operators such that the freeze operator is only used to express that some value from an infinite set is repeated in the future or in the past. Such a restriction has been inspired by recent work on spatio-temporal logics that suggests such a restricted use of the freeze operator. We show decidability of finitary and infinitary satisfiability by reduction to the verification of temporal properties in Petri nets, via a symbolic representation of models. This is quite a surprising result in view of the expressive power of the logic, since it is closed under negation, contains future-time and past-time temporal operators, and can express the nonce property and its negation. These ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier. The article also contains developments on the relationships between temporal logics with the freeze operator and counter automata, as well as reductions to first-order logics over data words.
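As a concrete illustration of this restricted use of the freeze quantifier (written here in the generic register notation, with the down-arrow storing the current data value in register r and the up-arrow testing equality against it; the article's exact syntax may differ), repetition of the current datum in the strict future, and the nonce property as its globally negated form, can be written as:

    \mathrm{rep} \;\equiv\; \downarrow_{r}\, \mathbf{X}\,\mathbf{F}\, \uparrow_{r},
    \qquad
    \mathrm{nonce} \;\equiv\; \mathbf{G}\,\neg\,\mathrm{rep}
      \;=\; \mathbf{G}\, \downarrow_{r}\, \mathbf{X}\,\mathbf{G}\, \neg\uparrow_{r}.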
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
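To make the mapping step concrete, here is a deliberately simplified sketch of a greedy device-assignment heuristic over a kernel dependence graph, in the spirit of (but far simpler than) the heuristics described above; the kernel names, timings and transfer-cost model are invented for the example:

    # Illustrative sketch only: greedily place each kernel on the device with
    # the lower execution time plus the cost of host/device copies needed for
    # data coming from already-placed producer kernels.
    def assign_devices(kernels, edges, transfer_cost=5.0):
        """kernels: {name: (cpu_time, gpu_time)} in topological order;
        edges: [(producer, consumer)] kernel dependences."""
        placement = {}
        for name, (cpu_t, gpu_t) in kernels.items():
            producers = [p for p, c in edges if c == name and p in placement]
            cost = {}
            for dev, exec_t in (("cpu", cpu_t), ("gpu", gpu_t)):
                copies = sum(1 for p in producers if placement[p] != dev)
                cost[dev] = exec_t + transfer_cost * copies
            placement[name] = min(cost, key=cost.get)
        return placement

    # A toy program: a scalar, control-flow-heavy kernel feeding two
    # data-parallel kernels.
    kernels = {"k_scalar": (1.0, 8.0), "k_matmul": (40.0, 3.0), "k_map": (20.0, 2.0)}
    edges = [("k_scalar", "k_matmul"), ("k_scalar", "k_map")]
    print(assign_devices(kernels, edges))
    # -> {'k_scalar': 'cpu', 'k_matmul': 'gpu', 'k_map': 'gpu'}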
Abstract:
In today's API-rich world, programmer productivity depends heavily on the programmer's ability to discover the required APIs. In this paper, we present a technique and tool, called MATHFINDER, to discover APIs for mathematical computations by mining unit tests of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code to compute the expression by mapping its subexpressions to API method calls. For each subexpression, MATHFINDER searches for a method such that there is a mapping between method inputs and variables of the subexpression. The subexpression, when evaluated on the test inputs of the method under this mapping, should produce results that match the method output on a large number of tests. We implemented MATHFINDER as an Eclipse plugin for discovery of third-party Java APIs and performed a user study to evaluate its effectiveness. In the study, the use of MATHFINDER resulted in a 2x improvement in programmer productivity. In 96% of the subexpressions queried for in the study, MATHFINDER retrieved the desired API methods as the top-most result. The top-most pseudo-code snippet to implement the entire expression was correct in 93% of the cases. Since the number of methods and unit tests to mine could be large in practice, we also implement MATHFINDER in a MapReduce framework and evaluate its scalability and response time.
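The core matching check can be sketched as follows (a hypothetical illustration, not MATHFINDER's implementation; the mined tests and the tolerance below are invented): a candidate method matches a subexpression if, under some mapping of method inputs to expression variables, evaluating the expression on the method's test inputs reproduces the recorded outputs on a large fraction of the tests.

    # Sketch of subexpression-to-method matching over mined unit tests.
    from itertools import permutations
    import math

    def matches(subexpr, variables, tests, threshold=0.9):
        """subexpr: a callable over `variables`;
        tests: list of (inputs_tuple, output) pairs mined from unit tests.
        Returns a consistent input mapping, or None if no mapping passes."""
        for mapping in permutations(range(len(variables))):
            hits = 0
            for inputs, output in tests:
                args = {v: inputs[i] for v, i in zip(variables, mapping)}
                try:
                    if math.isclose(subexpr(**args), output, rel_tol=1e-6):
                        hits += 1
                except (ValueError, ZeroDivisionError):
                    pass
            if hits / len(tests) >= threshold:
                return mapping
        return None

    # Toy example: mined tests for a hypothetical hypot(a, b)-style method.
    tests = [((3.0, 4.0), 5.0), ((5.0, 12.0), 13.0), ((8.0, 15.0), 17.0)]
    print(matches(lambda x, y: math.sqrt(x * x + y * y), ["x", "y"], tests))
    # -> (0, 1): the candidate method computes sqrt(x^2 + y^2)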
Abstract:
Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an ℓ0-norm minimization problem. The re-weighted ℓ1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, this formulation models the underlying system function better than the least squares error minimization approach. Evaluations with synthetic and real speech examples show that the estimated model parameters track the formant trajectories more closely than the least squares approach.
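In generic notation (which may differ in detail from the paper's formulation), the time-varying AR model and the sparsity-promoting estimation can be summarised as follows: the prediction coefficients are expanded over known bases u_k(n), the residual e(n) plays the role of the sparse excitation, and the ℓ0 objective is replaced by iteratively re-weighted ℓ1 minimization:

    a_i(n) = \sum_{k=0}^{q} b_{ik}\, u_k(n), \qquad
    e(n) = s(n) - \sum_{i=1}^{p} a_i(n)\, s(n-i),

    \min_{\{b_{ik}\}} \|e\|_0
      \;\;\longrightarrow\;\;
    \min_{\{b_{ik}\}} \sum_{n} w_n^{(m)}\, |e(n)|,
    \qquad
    w_n^{(m+1)} = \frac{1}{|e^{(m)}(n)| + \epsilon},

where s(n) is the speech signal, p the prediction order, and the weights are updated from the previous iteration's residual, as in the standard re-weighted ℓ1 scheme.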
Abstract:
The effect of doping trace amounts of noble metals (Pt) on the gas sensing properties of chromium oxide thin films is studied. The sensors are fabricated by depositing chromium oxide films on a glass substrate using a modified spray pyrolysis technique and are characterized using X-ray diffraction, scanning electron microscopy, transmission electron microscopy and X-ray photoelectron spectroscopy. The films are porous and nanocrystalline with an average crystallite size of approximately 30 nm. The typical p-type conductivity arises due to the presence of Cr vacancies, formed as a result of Cr non-stoichiometry, which is found to vary upon Pt doping. In order to analyze the effect of doping on the gas sensing properties, we have adopted a kinetic response analysis approach based on Langmuir adsorption (LA) isotherm theory. The sensor response is analyzed with equations obtained from LA theory, and the time constants as well as the energies of adsorption and desorption are evaluated. It is seen that Pt doping lowers the Schottky barrier height of the metal oxide semiconductor sensor from 222 meV to 172 meV. The resulting reduction in adsorption and desorption energies leads to an enhanced sensor response and improved response kinetics, i.e., shorter response and recovery times of the sensor.
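A commonly used form of such Langmuir-based kinetic response analysis (stated here generically; the paper's exact equations and symbols may differ) fits exponential response and recovery transients whose time constants are tied to the adsorption and desorption energetics:

    S(t) = S_{\max}\left(1 - e^{-t/\tau_{\mathrm{res}}}\right) \;\; \text{(gas on)},
    \qquad
    S(t) = S_{\max}\, e^{-t/\tau_{\mathrm{rec}}} \;\; \text{(gas off)},
    \qquad
    \tau \propto e^{E_{a}/k_{B}T},

so that, under this picture, lower adsorption and desorption energies translate into smaller time constants, i.e. faster response and recovery.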
Abstract:
The response of structural dynamical systems excited by multiple random excitations is considered. Two new procedures for evaluating global response sensitivity measures with respect to the excitation components are proposed. The first procedure is valid for the stationary response of linear systems under stationary random excitations and is based on the notion of Hellinger's metric of distance between two power spectral density functions. The second procedure is more generally valid and is based on an L2-norm-based distance measure between two probability density functions. Specific cases which admit exact solutions are presented, and solution procedures based on Monte Carlo simulations for a more general class of problems are outlined. Illustrations include studies on a parametrically excited linear system and a nonlinear random vibration problem involving a moving oscillator-beam system that considers excitations attributable to random support motions and guide-way unevenness. (C) 2015 American Society of Civil Engineers.
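In generic form (normalisations may differ from those used in the paper), the two distance measures underlying the proposed sensitivity indices are the Hellinger distance between two normalised power spectral density functions and the L2 distance between two probability density functions:

    H(s_1, s_2) = \left[ \tfrac{1}{2} \int \left( \sqrt{s_1(\omega)} - \sqrt{s_2(\omega)} \right)^{2} \mathrm{d}\omega \right]^{1/2},
    \qquad
    d_{2}(p_1, p_2) = \left[ \int \left( p_1(x) - p_2(x) \right)^{2} \mathrm{d}x \right]^{1/2}.

One would then expect the sensitivity index for an excitation component to be built from how far the response PSD or PDF shifts when that component is varied.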
Abstract:
The rapid evolution of nanotechnology calls for an understanding of the global response of nanoscale systems based on atomic interactions, and hence necessitates novel, sophisticated, and physically based approaches to bridge the gaps between various length and time scales. In this paper, we propose a group of statistical thermodynamics methods for the simulation of nanoscale systems under quasi-static loading at finite temperature, namely the molecular statistical thermodynamics (MST) method, the cluster statistical thermodynamics (CST) method, and the hybrid molecular/cluster statistical thermodynamics (HMCST) method. By treating atoms simultaneously as oscillators and as particles, and by grouping atoms into clusters, these methods encompass different spatial and temporal scales in a unified framework. One appealing feature of these methods is their "seamlessness", i.e., the consistency of the same underlying atomistic model in all regions consisting of atoms and clusters, which allows the ghost force to be avoided in the simulation. On the other hand, their high computational efficiency compared with conventional MD simulations appears very attractive, as manifested by simulations of uniaxial compression and nanoindentation. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
EXECUTIVE SUMMARY
1. DECADAL-SCALE CLIMATE EVENTS: 1.1 Introduction; 1.2 Basin-scale Patterns; 1.3 Long Time Series in the North Pacific; 1.4 Decadal Climate Variability in Ecological Regions of the North Pacific; 1.5 Mechanisms; 1.6 References
2. COHERENT REGIONAL RESPONSES: 2.1 Introduction; 2.2 Central North Pacific (CNP); 2.3 California Current System (CCS); 2.4 Gulf of Alaska (GOA); 2.5 Bering Sea and Aleutian Islands; 2.6 Western North Pacific (WNP); 2.7 Coherence in Regional Responses to the 1998 Regime Shift; 2.8 Climate Indicators for Detecting Regime Shifts; 2.9 References
3. IMPLICATIONS FOR THE MANAGEMENT OF MARINE RESOURCES: 3.1 Introduction; 3.2 Response Time of Biota to Regime Shifts; 3.3 Response Time of Management to Regime Shifts; 3.4 Provision of Stock Assessment Advice; 3.5 Decision Rules; 3.6 References
4. SUGGESTED LITERATURE: 4.1 Climate Regimes; 4.2 Impacts on Lower Trophic Levels; 4.3 Impacts on Fish and Higher Trophic Levels; 4.4 Impacts on Ecosystems and Possible Mechanisms; 4.5 Regimes and Fisheries Management
APPENDIX 1: RECENT ECOSYSTEM CHANGES IN THE CENTRAL NORTH PACIFIC: A1.1 Introduction; A1.2 Physical Oceanography; A1.3 Lower Trophic Levels; A1.4 Invertebrates; A1.5 Fishes; A1.6 References
APPENDIX 2: RECENT ECOSYSTEM CHANGES IN THE CALIFORNIA CURRENT SYSTEM: A2.1 Introduction; A2.2 Physical Oceanography; A2.3 Lower Trophic Levels; A2.4 Invertebrates; A2.5 Fishes; A2.6 References
APPENDIX 3: RECENT ECOSYSTEM CHANGES IN THE GULF OF ALASKA: A3.1 Introduction; A3.2 Physical Oceanography; A3.3 Lower Trophic Levels; A3.4 Invertebrates; A3.5 Fishes; A3.6 Higher Trophic Levels; A3.7 Coherence in Gulf of Alaska Fish; A3.8 Combined Standardized Indices of Recruitment and Survival Rate; A3.9 References
APPENDIX 4: RECENT ECOSYSTEM CHANGES IN THE BERING SEA AND ALEUTIAN ISLANDS: A4.1 Introduction; A4.2 Bering Sea Environmental Variables and Physical Oceanography; A4.3 Bering Sea Lower Trophic Levels; A4.4 Bering Sea Invertebrates; A4.5 Bering Sea Fishes; A4.6 Bering Sea Higher Trophic Levels; A4.7 Coherence in Bering Sea Fish Responses; A4.8 Combined Standardized Indices of Bering Fish Recruitment and Survival Rate; A4.9 Aleutian Islands; A4.10 References
APPENDIX 5: RECENT ECOSYSTEM CHANGES IN THE WESTERN NORTH PACIFIC: A5.1 Introduction; A5.2 Sea of Okhotsk; A5.3 Tsushima Current Region and Kuroshio/Oyashio Current Region; A5.4 Bohai Sea, Yellow Sea, and East China Sea; A5.5 References
(168 page document)
Abstract:
Almost all extreme events lasting less than several weeks that significantly impact ecosystems are weather related. This review examines the response of estuarine systems to intense short-term perturbations caused by major weather events such as hurricanes. Current knowledge concerning these effects is limited to relatively few studies where hurricanes and storms impacted estuaries with established environmental monitoring programs. Freshwater inputs associated with these storms were found to initially result in increased primary productivity. When hydrographic conditions are favorable, bacterial consumption of organic matter produced by the phytoplankton blooms and deposited during the initial runoff event can contribute to significant oxygen deficits during subsequent warmer periods. Salinity stress and habitat destruction associated with freshwater inputs, as well as anoxia, adversely affect benthic populations and fish. In contrast, mobile invertebrate species such as shrimp, which have a short life cycle and the ability to migrate during the runoff event, initially benefit from the increased primary productivity and decreased abundance of fish predators. Events studied so far indicate that estuaries rebound in one to three years following major short-term perturbations. However, repeated storm events without sufficient recovery time may cause a fundamental shift in ecosystem structure (Scavia et al. 2002). This is a scenario consistent with the predicted increase in hurricanes for the east coast of the United States. More work on the response of individual species to these stresses is needed so management of commercial resources can be adjusted to allow sufficient recovery time for affected populations.
Abstract:
This paper focuses on the important property of asymptotic hyperstability for a class of continuous-time dynamic systems. A parallel connection of a strictly stable subsystem to an asymptotically hyperstable one in the feed-forward loop is allowed, and the generation of a finite or infinite number of impulsive control actions, which can be combined with a general form of non-impulsive controls, is also admitted. The asymptotic hyperstability property is guaranteed under a set of sufficiency-type conditions for the impulsive controls.
Abstract:
ICECCS 2010
Abstract:
This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.
The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.
The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.
This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in an orders-of-magnitude improvement over the lower predicate calculus technique.
Abstract:
This article investigates the convergence properties of iterative processes involving sequences of self-mappings of metric or Banach spaces. Such sequences are built from a set of primary self-mappings that are either expansive or non-expansive; some of the non-expansive ones can be contractive, including the case of strict contractions. The sequences are built subject to switching laws which select each active self-mapping on a certain activation interval in such a way that essential properties of boundedness and convergence of distances and of the iterated sequences are guaranteed. Applications to the important problem of stability of dynamic switched systems are also given.
Abstract:
This is a report on a workshop held at Cambridge University Engineering Design Centre, 17-10 June 1992. This workshop was held to discuss the issue of 'function' and 'function-to-form' evolution in mechanical design. The authors organised this workshop as they felt that their understanding of these topics was incomplete and that discussions between researchers might help to clarify some key issues.
The topic chosen for the workshop proved to be a stimulating one. The term 'function' is part of a designer's daily vocabulary; however, there is poor agreement about its definition. In order to develop computer systems to support product evolution, a precise definition is required. Furthermore, the value of 'function' and 'function-to-form' evolution as a choice of workshop topic is evident from the lack of firm conclusions that resulted from the sessions. This lack of consensus made for lively discussion and left participants questioning many of their preconceived ideas.
Attendance at the workshop was by invitation only. A list of the participants (not all those invited could attend due to time and financial constraints) is given in Appendix 1.