857 results for run-time allocation
Abstract:
Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.
Abstract:
The programming and retasking of sensor nodes could benefit greatly from the use of a virtual machine (VM) since byte code is compact, can be loaded on demand, and interpreted on a heterogeneous set of devices. The challenge is to ensure good programming tools and a small footprint for the virtual machine to meet the memory constraints of typical WSN platforms. To this end we propose Darjeeling, a virtual machine modelled after the Java VM and capable of executing a substantial subset of the Java language, but designed specifically to run on 8- and 16-bit microcontrollers with 2-10 KB of RAM. The Darjeeling VM uses a 16- rather than a 32-bit architecture, which is more efficient on the targeted platforms. Darjeeling features a novel memory organisation with strict separation of reference from non-reference types, which eliminates the need for run-time type inspection in the underlying compacting garbage collector. Darjeeling uses a linked stack model that provides light-weight threads and supports synchronisation. The VM has been implemented on three different platforms and was evaluated with micro benchmarks and a real-world application. The latter includes a pure Java implementation of the collection tree routing protocol, conveniently programmed as a set of cooperating threads, and a reimplementation of an existing environmental monitoring application. The results show that Darjeeling is a viable solution for deploying large-scale heterogeneous sensor networks.
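As a rough illustration of the memory organisation described above, the following sketch (in Java, although Darjeeling itself is implemented in C) shows a hypothetical stack frame that keeps reference slots and primitive slots in separate arrays, so a compacting garbage collector can scan every reference without per-slot run-time type inspection. The names and layout are illustrative assumptions, not Darjeeling's actual data structures.

```java
// Illustrative sketch only: a stack frame with reference and non-reference
// slots held in two separate arrays, so a GC root scan needs no type tags.
import java.util.ArrayList;
import java.util.List;

final class SplitFrame {
    final Object[] refSlots;  // scanned by the GC; every entry is a reference
    final short[] primSlots;  // ignored by the GC; 16-bit primitives

    SplitFrame(int refCount, int primCount) {
        this.refSlots = new Object[refCount];
        this.primSlots = new short[primCount];
    }
}

public final class SplitStackDemo {
    public static void main(String[] args) {
        List<SplitFrame> stack = new ArrayList<>();
        SplitFrame f = new SplitFrame(2, 3);
        f.refSlots[0] = "sensor-reading";
        f.primSlots[0] = 42;
        stack.add(f);

        // "GC root scan": visit only the reference arrays; no per-slot
        // type inspection is ever needed.
        for (SplitFrame frame : stack) {
            for (Object ref : frame.refSlots) {
                if (ref != null) System.out.println("root: " + ref);
            }
        }
    }
}
```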
Abstract:
The Java programming language has potentially significant advantages for wireless sensor nodes, but there is currently no feature-rich, open source virtual machine available. In this paper we present Darjeeling, a system comprising offline tools and a memory-efficient run-time. The offline post-compiler tool analyzes, links and consolidates Java class files into loadable modules. The run-time implements a modified Java VM designed specifically to operate in constrained execution environments such as wireless sensor network nodes; it supports inheritance, multithreading, garbage collection, and loadable modules. We have demonstrated Java running on AVR128 and MSP430 microcontrollers at speeds of up to 70,000 JVM instructions per second.
Abstract:
The Dynamic Data eXchange (DDX) is our third-generation platform for building distributed robot controllers. DDX allows a coalition of programs to share data at run-time through an efficient shared memory mechanism managed by a store. Further, stores on multiple machines can be linked by means of a global catalog, and data is moved between the stores on an as-needed basis by multicasting. Heterogeneous computer systems are also supported. We describe the architecture of DDX and the standard clients we have developed that let us rapidly build complex control systems with minimal coding.
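A minimal sketch of the store idea, under stated assumptions: the real DDX manages an OS shared-memory segment in C and multicasts between stores, so a concurrent map merely stands in for that machinery here, and the publish/lookup API names are hypothetical.

```java
// Hypothetical sketch of a DDX-style "store": producer clients publish named,
// timestamped variables and consumer clients read the latest value.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class Store {
    private static final class Entry {
        final Object value;
        final long timestampMillis;
        Entry(Object value, long t) { this.value = value; this.timestampMillis = t; }
    }

    private final Map<String, Entry> variables = new ConcurrentHashMap<>();

    void publish(String name, Object value) {
        variables.put(name, new Entry(value, System.currentTimeMillis()));
    }

    Object lookup(String name) {
        Entry e = variables.get(name);
        return e == null ? null : e.value;
    }
}

public final class StoreDemo {
    public static void main(String[] args) {
        Store store = new Store();
        store.publish("laser.scan", new double[]{1.2, 1.3, 1.1}); // producer client
        double[] scan = (double[]) store.lookup("laser.scan");    // consumer client
        System.out.println("ranges: " + scan.length);
    }
}
```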
Abstract:
Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
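Since the abstract contrasts intrinsic and extrinsic measures of clustering quality, the sketch below computes purity, one standard extrinsic measure: each cluster is credited with its most common ground-truth label, and the credited counts are summed over all documents. This is a generic illustration, not necessarily the measure used in the thesis.

```java
// Purity of a clustering against ground-truth labels.
import java.util.HashMap;
import java.util.Map;

public final class Purity {
    // clusters[i] and labels[i] give the cluster id and ground-truth
    // label of document i.
    static double purity(int[] clusters, String[] labels) {
        Map<Integer, Map<String, Integer>> counts = new HashMap<>();
        for (int i = 0; i < clusters.length; i++) {
            counts.computeIfAbsent(clusters[i], k -> new HashMap<>())
                  .merge(labels[i], 1, Integer::sum);
        }
        int agreed = 0; // documents matching their cluster's majority label
        for (Map<String, Integer> byLabel : counts.values()) {
            agreed += byLabel.values().stream().max(Integer::compare).orElse(0);
        }
        return (double) agreed / clusters.length;
    }

    public static void main(String[] args) {
        int[] clusters  = {0, 0, 0, 1, 1, 1};
        String[] labels = {"sport", "sport", "news", "news", "news", "sport"};
        System.out.printf("purity = %.2f%n", purity(clusters, labels)); // 0.67
    }
}
```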
Abstract:
The Denial of Service Testing Framework (dosTF) being developed as part of the joint India-Australia research project for ‘Protecting Critical Infrastructure from Denial of Service Attacks’ allows for the construction, monitoring and management of emulated Distributed Denial of Service attacks using modest hardware resources. The purpose of the testbed is to study the effectiveness of different DDoS mitigation strategies and to allow for the testing of defense appliances. Experiments are saved and edited in XML as abstract descriptions of an attack/defense strategy that is only mapped to real resources at run-time. It also provides a web-application portal interface that can start, stop and monitor an attack remotely. Rather than monitoring a service under attack indirectly, by observing traffic and general system parameters, monitoring of the target application is performed directly in real time via a customised SNMP agent.
Abstract:
With daily commercial and social activity in cities, regulation of train service in mass rapid transit railways is necessary to maintain service and passenger flow. Dwell-time adjustment at stations is one commonly used approach to regulation of train service, but its control space is very limited. Coasting control is a viable means of meeting a specified run-time in an inter-station run. The current practice is to start coasting at a fixed distance from the departed station. Hence, it is only optimal with respect to a nominal operational condition of the train schedule, but not the current service demand. The advantage of coasting can only be fully secured when coasting points are determined in real-time. However, identifying the necessary starting point(s) for coasting under the constraints of current service conditions is no simple task as train movement is governed by a large number of factors. The feasibility and performance of classical and heuristic searching measures in locating coasting point(s) are studied with the aid of a single train simulator, according to specified inter-station run times.
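One simple classical search of the kind the abstract mentions is sketched below: bisection over the coasting position against a toy run-time model in which coasting later (closer to the next station) gives a shorter inter-station run time. The model and all numbers are illustrative assumptions, not the paper's train simulator.

```java
// Bisection search for a single coasting point meeting a target run time.
public final class CoastingPointSearch {
    static final double INTER_STATION_M = 2000.0;

    // Toy simulator: run time in seconds as a function of where coasting starts.
    static double runTimeSeconds(double coastStartM) {
        double powered = coastStartM / 18.0;                       // ~18 m/s while powered
        double coasting = (INTER_STATION_M - coastStartM) / 12.0;  // ~12 m/s average while coasting
        return powered + coasting;
    }

    // runTimeSeconds is monotonically decreasing in coastStartM, so the
    // target run time brackets a unique coasting position.
    static double findCoastingPoint(double targetSeconds) {
        double lo = 0.0, hi = INTER_STATION_M;
        while (hi - lo > 1.0) { // 1 m resolution
            double mid = (lo + hi) / 2.0;
            if (runTimeSeconds(mid) > targetSeconds) lo = mid; else hi = mid;
        }
        return (lo + hi) / 2.0;
    }

    public static void main(String[] args) {
        double point = findCoastingPoint(140.0);
        System.out.printf("coast from %.0f m, run time %.1f s%n",
                point, runTimeSeconds(point));
    }
}
```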
Abstract:
Railway service is now a major means of transportation in most countries around the world. With increasing population and expanding commercial and industrial activities, high-quality railway service is highly desirable. Train service usually varies with population activities throughout the day, and train coordination and service regulation are expected to meet daily passenger demand. Dwell-time control at stations and fixed coasting points in an inter-station run are the current practices to regulate train service in most metro railway systems. However, flexible and efficient train control and operation are not always possible. When time is not an important issue, particularly at off-peak hours, coast control is an economical approach to balance run-time and energy consumption in railway operation, minimising the energy consumption of train operation at the cost of certain compromises on the train schedule. The capability to identify the starting point for coasting according to the current traffic conditions provides the necessary flexibility for train operation. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting point(s) and investigates the possible improvement on fitness of genes. Single and multiple coasting point control with a simple GA are developed to attain the solutions, and their corresponding train movement is examined. Further, a hierarchical genetic algorithm (HGA) is introduced to identify the number of coasting points required according to the traffic conditions, and Minimum-Allele-Reserve-Keeper (MARK) is adopted as a genetic operator to achieve fitter solutions.
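A simplified GA sketch for a single coasting point, in the spirit of the approach described above but not the paper's exact encoding, operators, or fitness function: a chromosome is one coasting position, and fitness penalises deviation from the target run time plus a crude energy proxy (powered distance). All constants are illustrative.

```java
// Toy GA: evolve one coasting position on a 2000 m inter-station run.
import java.util.Arrays;
import java.util.Random;

public final class CoastingGA {
    static final Random RNG = new Random(1);
    static final double TRACK_M = 2000.0, TARGET_S = 140.0;

    static double runTime(double coast) { return coast / 18.0 + (TRACK_M - coast) / 12.0; }

    static double fitness(double coast) {
        double timeError = Math.abs(runTime(coast) - TARGET_S);
        double energyProxy = coast / TRACK_M;          // more powered running = more energy
        return -(timeError + 10.0 * energyProxy);      // higher is better
    }

    static double tournament(double[] pop) {
        double a = pop[RNG.nextInt(pop.length)], b = pop[RNG.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        double[] pop = new double[40];
        for (int i = 0; i < pop.length; i++) pop[i] = RNG.nextDouble() * TRACK_M;

        for (int gen = 0; gen < 100; gen++) {
            double[] next = new double[pop.length];
            for (int i = 0; i < next.length; i++) {
                double a = tournament(pop), b = tournament(pop);
                double child = (a + b) / 2.0;                    // arithmetic crossover
                child += RNG.nextGaussian() * 20.0;              // mutation, ~20 m std dev
                next[i] = Math.max(0, Math.min(TRACK_M, child)); // clamp to the track
            }
            pop = next;
        }
        double best = Arrays.stream(pop).boxed()
                .max((x, y) -> Double.compare(fitness(x), fitness(y))).get();
        System.out.printf("coast at %.0f m, run time %.1f s%n", best, runTime(best));
    }
}
```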
Abstract:
To maximise the capacity of the rail line and provide a reliable service for passengers throughout the day, regulation of train service to maintain a steady service headway is essential. In most current metro systems, a train usually starts coasting at a fixed distance from the departed station to achieve service regulation. However, this approach is only effective with respect to a nominal operational condition of the train schedule, not necessarily the current service demand. Moreover, it is not simple to identify the necessary starting point for coasting under the run-time constraints of current service conditions, since train movement is affected by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of classical measures to search for the appropriate coasting point to meet a specified inter-station run time; these measures can be integrated in the on-board Automatic Train Operation (ATO) system and have the potential for on-line implementation in making a set of coasting command decisions.
Abstract:
Balancing the provision of a high quality of service against running within a tight budget is one of the biggest challenges for most metro railway operators around the world. Conventionally, one possible approach for the operator to adjust the time schedule is to alter the stop time at stations, if other system constraints, such as traction equipment characteristics, are not taken into account. Yet it is not an effective, flexible or economical method because the run-time of a train simply cannot be extended without limit, and a balance between run-time and energy consumption has to be maintained. Modification or installation of a new signalling system not only increases the capital cost, but also affects normal train service. Therefore, in order to procure a more effective, flexible and economical means to improve the quality of service, optimisation of train performance by coasting point identification has become more attractive and popular. However, identifying the necessary starting points for coasting under the constraints of current service conditions is no simple task because train movement is affected by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting points and investigates the possible improvements in computation time and fitness of genes.
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
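A sketch of bit-level taint tracking through expression operators, in the spirit of the expression models described above (a simplified stand-in, not the paper's component library): each value carries a mask of which bits may depend on classified data, and AND-ing with a constant clears taint on the bits the constant zeroes out, which is how a masking expression can safely downgrade its classified input.

```java
// Bit-level taint propagation through two C-like expression operators.
public final class BitTaint {
    final int value;
    final int taint; // bit i set => bit i may carry classified data

    BitTaint(int value, int taint) { this.value = value; this.taint = taint; }

    // x & constant: result bits outside the constant are forced to 0,
    // so their taint is blocked (a downgrading expression).
    BitTaint andConst(int constant) {
        return new BitTaint(value & constant, taint & constant);
    }

    // x << n: taint moves with the data bits.
    BitTaint shl(int n) {
        return new BitTaint(value << n, taint << n);
    }

    public static void main(String[] args) {
        BitTaint secret = new BitTaint(0xABCD, 0xFFFF); // fully classified 16-bit value
        BitTaint low = secret.andConst(0x00FF);         // only the low byte stays tainted
        System.out.printf("taint after mask: 0x%04X%n", low.taint);        // 0x00FF
        System.out.printf("taint after mask+shift: 0x%04X%n", low.shl(8).taint); // 0xFF00
    }
}
```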
Abstract:
Train delay is one of the most important indexes to evaluate the service quality of the railway. Because of the interactions of movement among trains, a delayed train may conflict with trains scheduled on other lines at junction area. Train that loses conflict may be forced to stop or slow down because of restrictive signals, which consequently leads to the loss of run-time and probably enlarges more delays. This paper proposes a time-saving train control method to recover delays as soon as possible. In the proposed method, golden section search is adopted to identify the optimal train speed at the expected time of restrictive signal aspect upgrades, which enables the train to depart from the conflicting area as soon as possible. A heuristic method is then developed to attain the advisory train speed profile assisting drivers in train control. Simulation study indicates that the proposed method enables the train to recover delays as soon as possible in case of disturbances at railway junctions, in comparison with the traditional maximum traction strategy and the green wave strategy.
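Golden section search, the technique named above, is sketched generically below for a unimodal objective. The objective used here is a toy stand-in for the paper's criterion (time lost at the restrictive signal as a function of approach speed), chosen only to make the sketch runnable.

```java
// Golden-section minimisation of a unimodal function on [lo, hi].
import java.util.function.DoubleUnaryOperator;

public final class GoldenSection {
    static final double PHI = (Math.sqrt(5.0) - 1.0) / 2.0; // ~0.618

    static double minimize(DoubleUnaryOperator f, double lo, double hi, double tol) {
        double a = hi - PHI * (hi - lo);
        double b = lo + PHI * (hi - lo);
        while (hi - lo > tol) {
            if (f.applyAsDouble(a) < f.applyAsDouble(b)) {
                hi = b; b = a; a = hi - PHI * (hi - lo); // minimum lies in [lo, b]
            } else {
                lo = a; a = b; b = lo + PHI * (hi - lo); // minimum lies in [a, hi]
            }
        }
        return (lo + hi) / 2.0;
    }

    public static void main(String[] args) {
        // Toy objective: penalise both crawling (slow clearance of the
        // junction) and approaching too fast before the signal upgrades.
        DoubleUnaryOperator timeLoss = v -> 400.0 / v + 0.02 * (v - 15.0) * (v - 15.0);
        double vStar = minimize(timeLoss, 1.0, 40.0, 1e-3);
        System.out.printf("advisory speed ~ %.2f m/s%n", vStar);
    }
}
```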
Abstract:
With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided using a sensor-based architecture, where sensors are defined at design time and used at run-time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
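A hypothetical sketch of the sensor idea: a risk condition is defined at design time as a predicate over process state and polled at run-time, triggering mitigation when it fires. The interface and all names are illustrative, not the YAWL extension's actual API.

```java
// Design-time risk sensors evaluated against run-time process state.
import java.util.List;
import java.util.function.Predicate;

record ProcessState(int openWorkItems, long oldestItemAgeMinutes) {}

final class RiskSensor {
    final String name;
    final Predicate<ProcessState> condition; // fixed at design time

    RiskSensor(String name, Predicate<ProcessState> condition) {
        this.name = name;
        this.condition = condition;
    }
}

public final class RiskMonitor {
    public static void main(String[] args) {
        List<RiskSensor> sensors = List.of(
            new RiskSensor("overload", s -> s.openWorkItems() > 50),
            new RiskSensor("deadline", s -> s.oldestItemAgeMinutes() > 120));

        ProcessState now = new ProcessState(63, 45); // sampled at run-time
        for (RiskSensor sensor : sensors) {
            if (sensor.condition.test(now)) {
                System.out.println("risk detected: " + sensor.name
                        + " -> trigger mitigation");
            }
        }
    }
}
```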
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
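The sketch below is a simplified illustration of the kind of attribute-level metric described above: the proportion of classified (high-security) attributes that a class exposes as non-private. It is a generic example, not the thesis's exact metric definitions, and the classified-field list and class are invented for the demo.

```java
// Toy "classified attribute exposure" metric computed via reflection.
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Set;

class Account {
    private String pin;        // classified, properly encapsulated
    public  String ownerName;  // not classified
    public  String secretKey;  // classified but exposed
}

public final class ExposureMetric {
    public static void main(String[] args) {
        // In a real tool the classified set would come from annotations or
        // a design model; here it is simply listed by field name.
        Set<String> classified = Set.of("pin", "secretKey");

        int classifiedTotal = 0, classifiedExposed = 0;
        for (Field f : Account.class.getDeclaredFields()) {
            if (!classified.contains(f.getName())) continue;
            classifiedTotal++;
            if (!Modifier.isPrivate(f.getModifiers())) classifiedExposed++;
        }
        System.out.printf("classified attribute exposure = %.2f%n",
                (double) classifiedExposed / classifiedTotal); // 0.50
    }
}
```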
Abstract:
The management of risks in business processes has been a subject of active research in the past few years. Many benefits can potentially be obtained by integrating the two traditionally separate fields of risk management and business process management, including the ability to minimize risks in business processes (by design) and to mitigate risks at run time. In the past few years, an increasing amount of research aimed at delivering such an integrated system has been proposed. However, these research efforts vary in terms of their scope, goals, and functionality. Through systematic collection and evaluation of relevant literature, this paper compares and classifies current approaches in the area of risk-aware business process management in order to identify and explain relevant research gaps. The process through which relevant literature is collected, filtered, and evaluated is also detailed.