14 results for run-time allocation
at University of Queensland eSpace - Australia
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and present results obtained on the x86 and SPARC® platforms.
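The hot-path mechanism described above can be pictured with a small sketch. This is not UQDBT code; the class, the counters and the promotion threshold are assumptions made for illustration: once an edge's execution count crosses the threshold, the destination block is retranslated to optimized native code.

```python
# Illustrative sketch of hot-path detection via edge-weight instrumentation.
# This is NOT UQDBT code; block names, counters and the threshold are invented.

HOT_THRESHOLD = 50  # assumed number of traversals before a block is promoted

class EdgeProfile:
    def __init__(self):
        self.edge_counts = {}   # (src_block, dst_block) -> execution count
        self.native = set()     # blocks already retranslated to native code

    def record_edge(self, src, dst):
        """Called by the instrumented first-pass translation on every taken branch."""
        key = (src, dst)
        self.edge_counts[key] = self.edge_counts.get(key, 0) + 1
        if self.edge_counts[key] >= HOT_THRESHOLD and dst not in self.native:
            self.promote(dst)

    def promote(self, block):
        """Stand-in for the second translation pass that emits optimized native code."""
        self.native.add(block)
        print(f"block {block} promoted to native code")

# Example: a frequently taken edge triggers promotion of its destination block.
profile = EdgeProfile()
for _ in range(60):
    profile.record_edge("B1", "B2")
```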
Abstract:
Eastern curlews Numenius madagascariensis spending the nonbreeding season in eastern Australia foraged on three intertidal decapods: the soldier crab Mictyris longicarpus, the sentinel crab Macrophthalmus crassipes and the ghost-shrimp Trypaea australiensis. Because of their ecology, these crustaceans were spatially segregated (i.e. distributed in 'patches') and the curlews intermittently consumed more than one prey type. It was predicted that if the curlews behaved as intake-rate maximizers, the time spent foraging on a particular prey (patch) would reflect the relative availabilities of the prey types, and thus prey-specific intake rates would be equal. During the mid-nonbreeding period (November-December), Mictyris and Macrophthalmus were primarily consumed and prey-specific intake rates were statistically indistinguishable (8.8 versus 10.1 kJ·min⁻¹). Prior to migration (February), Mictyris and Trypaea were hunted and the respective intake rates were significantly different (8.9 versus 2.3 kJ·min⁻¹). Time allocated to Trypaea-hunting was independent of the availability of Mictyris; consumption of Trypaea therefore depressed the overall intake rate. Six hypotheses for consuming Trypaea before migration were examined. Five (possible error by the predator, prey specialization, observer overestimation of time spent hunting Trypaea, supplementary prey, and the choice of higher-quality prey due to a digestive bottleneck) were deemed unsatisfactory. The explanation deemed plausible for consumption of a low intake-rate but high-quality prey (Trypaea) was diet optimisation by the curlews in response to the pre-migratory modulation (a decrease in size/processing capacity) of their digestive system. Despite a seasonal decrease in the average intake rate, the estimated intake per low tide increased from 1233 to 1508 kJ between the mid-nonbreeding and pre-migratory periods, achieved by increasing the overall time spent on the sandflats and the proportion of time spent foraging.
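The time-budget logic in the last sentence can be written out explicitly; the symbols below are introduced here for illustration and are not the authors' notation:

```latex
% Energy obtained per low tide, with \bar{r} the average intake rate
% (kJ\,min^{-1}), T the time spent on the sandflats (min) and p the
% proportion of that time spent foraging:
E_{\text{low tide}} \approx \bar{r}\, p\, T
```

A seasonal fall in the average rate can therefore still be accompanied by a rise in intake per low tide (here from 1233 to 1508 kJ) if the time on the sandflats and the proportion of it spent foraging increase enough.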
Abstract:
The aim of the present study was to examine the relationship between heart rate during an ultra-endurance triathlon and the heart rate corresponding to several demarcation points measured during laboratory-based progressive cycle ergometry and treadmill running. Less than one month before an ultra-endurance triathlon, 21 well-trained ultra-endurance triathletes (mean ± s: age 35 ± 6 years, height 1.77 ± 0.05 m, mass 74.0 ± 6.9 kg, V̇O2peak 4.75 ± 0.42 L·min⁻¹) performed progressive exercise tests of cycle ergometry and treadmill running for the determination of peak oxygen uptake (V̇O2peak), the heart rates corresponding to the first and second ventilatory thresholds, and the heart rate deflection point. Portable telemetry units recorded heart rate at 60-s intervals throughout the ultra-endurance triathlon. Heart rates during the cycle and run phases of the ultra-endurance triathlon (148 ± 9 and 143 ± 13 beats·min⁻¹ respectively) were significantly (P < 0.05) lower than the second ventilatory thresholds (160 ± 13 and 165 ± 14 beats·min⁻¹ respectively) and the heart rate deflection points (170 ± 13 and 179 ± 9 beats·min⁻¹ respectively). However, mean heart rates during the cycle and run phases were significantly related to (r = 0.76 and 0.66; P < 0.01), and not significantly different from, the first ventilatory thresholds (146 ± 12 and 148 ± 15 beats·min⁻¹ respectively). Furthermore, the difference between heart rate during the cycle phase and heart rate at the first ventilatory threshold was related to marathon run time (r = 0.61; P < 0.01) and overall ultra-endurance triathlon time (r = 0.45; P < 0.05). The results suggest that triathletes perform the cycle and run phases of an ultra-endurance triathlon at an exercise intensity near their first ventilatory threshold.
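A minimal sketch of the two statistical comparisons reported for the first ventilatory threshold (association and paired difference); the function and argument names are invented for illustration and this is not the authors' analysis code:

```python
# Illustrative comparison of race heart rate with heart rate at the first
# ventilatory threshold (VT1); names and structure are assumptions.
from scipy import stats

def compare_race_hr_to_vt1(race_hr, vt1_hr):
    """race_hr, vt1_hr: paired per-athlete mean heart rates (beats/min)."""
    r, p_rel = stats.pearsonr(race_hr, vt1_hr)    # "significantly related to"
    t, p_diff = stats.ttest_rel(race_hr, vt1_hr)  # "not significantly different from"
    return {"r": r, "p_related": p_rel, "t": t, "p_different": p_diff}
```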
Abstract:
This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, held in specialized cache-speed static RAM (tightly coupled memory, TCM). Dreamy memory is DRAM kept in a low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that it achieves significantly better speed and energy gains from adding TCM than a standard hierarchy does. In its best 128KB L2 case, RAMpage gained 11% in speed using TCM and reduced energy by 14%; the equivalent conventional-hierarchy gains were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy cost does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (penalty under 10%) for being able to switch easily to a lower-energy mode.
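The quoted percentages follow directly from the reported run times and energies; a quick check, using only the figures in the abstract:

```python
# Recomputing the trade-off percentages from the figures quoted above.
conventional_time, dreamy_time = 2.58, 3.34        # seconds, best 128KB L2 cases
conventional_energy, dreamy_energy = 2.18, 1.50    # joules
rampage_time = 2.83                                # RAMpage, dreamy mode off

speed_penalty = (dreamy_time / conventional_time - 1) * 100         # ~29%
energy_reduction = (1 - dreamy_energy / conventional_energy) * 100  # ~31%
non_dreamy_penalty = (rampage_time / conventional_time - 1) * 100   # ~9.7% ("under 10%")
print(f"{speed_penalty:.0f}% slower, {energy_reduction:.0f}% less energy, "
      f"{non_dreamy_penalty:.1f}% penalty without dreamy mode")
```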
Abstract:
High-level language program compilation strategies can be proven correct by modelling the process as a series of refinement steps from source code to a machine-level description. We show how this can be done for programs containing recursively defined procedures in the well-established predicate transformer semantics for refinement. To do so, the formalism is extended with an abstraction of the way stack frames are created at run time for procedure parameters and variables.
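For reference, the refinement ordering underlying such correctness proofs, in standard predicate transformer notation (a generic sketch, not the paper's extended formalism):

```latex
% A statement S is refined by T iff T establishes every postcondition Q
% that S does (weakest-precondition semantics):
S \sqsubseteq T \;\iff\; \forall Q \bullet \mathrm{wp}(S, Q) \Rightarrow \mathrm{wp}(T, Q)
```

The paper's extension keeps such refinement steps valid when procedure calls are modelled with an explicit run-time stack of frames for parameters and variables.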
Abstract:
This paper presents a formal framework for modelling and analysing mobile systems. The framework comprises a collection of models of the dominant design paradigms, which are readily extended to incorporate details of particular technologies (i.e., programming languages and their run-time support) and applications. The modelling language is Object-Z, an extension of the well-known Z specification language with explicit support for object-oriented concepts. This support for object orientation makes Object-Z particularly suited to our task: the system structuring techniques it offers are well suited to modelling mobile systems, and inheritance and polymorphism allow us to exploit commonalities in mobile systems by defining more complex models in terms of simpler ones.
Abstract:
This paper presents a methodology for deriving business process descriptions from the terms of a business contract. The aim is to assist process modellers in structuring collaborative interactions between parties, including their internal processes, to ensure contract-compliant behaviour. The methodology requires a formal model of contracts to facilitate process derivation and to form a basis for contract analysis tools and run-time process execution.
Abstract:
Pervasive computing applications must be sufficiently autonomous to adapt their behaviour to changes in computing resources and user requirements. This capability is known as context-awareness. In some cases, context-aware applications must be implemented as autonomic systems capable of dynamically discovering and replacing context sources (sensors) at run-time. Unlike other types of application autonomy, this kind of dynamic reconfiguration has not yet been sufficiently investigated by the research community. Application-level context models, however, are becoming common as a way to ease the programming of context-aware applications and to support evolution by decoupling applications from context sources. We can leverage these context models to develop general (i.e., application-independent) solutions for dynamic, run-time discovery of context sources (i.e., context management). This paper presents a model and architecture for a reconfigurable context management system that supports interoperability by building on emerging standards for sensor description and classification.
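A minimal, application-independent sketch of the kind of run-time discovery and rebinding of context sources argued for above; the interfaces and names are invented for illustration and are not the paper's architecture:

```python
# Illustrative context manager that binds an application-level context model to
# whatever sensor (context source) currently provides the needed context type,
# and rebinds when a source disappears. All names are invented for this sketch.

class ContextSource:
    def __init__(self, source_id, context_type):
        self.source_id = source_id
        self.context_type = context_type   # e.g. "location", "temperature"
        self.available = True

    def read(self):
        raise NotImplementedError

class ContextManager:
    def __init__(self):
        self.registry = {}   # context_type -> list of discovered sources
        self.bindings = {}   # context_type -> source currently in use

    def register(self, source):
        self.registry.setdefault(source.context_type, []).append(source)

    def get(self, context_type):
        src = self.bindings.get(context_type)
        if src is None or not src.available:
            # dynamic (re)discovery: pick any currently available source
            candidates = [s for s in self.registry.get(context_type, []) if s.available]
            if not candidates:
                raise LookupError(f"no available source for {context_type!r}")
            src = candidates[0]
            self.bindings[context_type] = src
        return src.read()
```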
Abstract:
While developments in distributed object computing environments, such as the Common Object Request Broker Architecture (CORBA) [17] and the Telecommunication Intelligent Network Architecture (TINA) [16], have enabled interoperability between domains in large open distributed systems, managing the resources within such systems has become an increasingly complex task. This challenge has been considered for several years within the distributed systems management research community, and policy-based management has recently emerged as a promising solution. Large evolving enterprises present a significant challenge for policy-based management, partly because of the requirement to support both mutual transparency and individual autonomy between domains [2], and partly because the fluidity and complexity of interactions occurring within such environments require an ability to cope with the coexistence of multiple, potentially inconsistent policies. This paper discusses the need to provide both dynamic (run-time) and static (compile-time) conflict detection and resolution for policies in such systems, and builds on our earlier conflict detection work [7, 8] to introduce methods for conflict resolution in large open distributed systems.
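As a toy illustration of static conflict detection, the sketch below flags modality conflicts (a permission and a prohibition covering the same subject, action and target); it is far simpler than the detection methods of [7, 8], and all names are assumptions:

```python
# Minimal static check for modality conflicts: an authorisation policy and a
# prohibition policy applying to the same (subject, action, target) triple.
# Illustrative only; not the conflict-detection algorithm of the paper.
from itertools import combinations

def find_modality_conflicts(policies):
    """policies: iterable of dicts with keys 'modality' ('permit' | 'deny'),
    'subject', 'action', 'target'."""
    conflicts = []
    for p, q in combinations(policies, 2):
        same_scope = all(p[k] == q[k] for k in ("subject", "action", "target"))
        if same_scope and {p["modality"], q["modality"]} == {"permit", "deny"}:
            conflicts.append((p, q))
    return conflicts
```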
Abstract:
The Roche Cobas Amplicor system is widely used for the detection of Neisseria gonorrhoeae but is known to cross-react with some commensal Neisseria spp. Therefore, a confirmatory test is required. The most common target for confirmatory tests is the cppB gene of N. gonorrhoeae. However, the cppB gene is also present in other Neisseria spp. and is absent from some N. gonorrhoeae isolates. As a result, laboratories targeting this gene run the risk of obtaining both false-positive and false-negative results. In the study presented here, a newly developed N. gonorrhoeae LightCycler assay (NGpapLC) targeting the N. gonorrhoeae porA pseudogene was tested. The NGpapLC assay was used to test 282 clinical samples, and the results were compared to those obtained using a testing algorithm combining the Cobas Amplicor system (Roche Diagnostics, Sydney, Australia) and an in-house LightCycler assay targeting the cppB gene (cppB-LC). In addition, the specificity of the NGpapLC assay was investigated by testing a broad panel of bacteria, including isolates of several Neisseria spp. The NGpapLC assay proved to have clinical sensitivity comparable to that of the cppB-LC assay. In addition, testing of the bacterial panel showed the NGpapLC assay to be highly specific for N. gonorrhoeae DNA. The results of this study show that the NGpapLC assay is a suitable alternative to the cppB-LC assay for confirmation of N. gonorrhoeae-positive results obtained with Cobas Amplicor.
Abstract:
Field trials sown on multiple dates in 4 consecutive years in the Riverina region of south-eastern Australia provided 24 different combinations of temperature and day length, which enabled the development of crop phenology models. A crop model was developed for 7 cultivars from diverse origins to identify whether photoperiod sensitivity is involved in determining phenological development, and whether it is advantageous in avoiding low-temperature damage. Mild photoperiod sensitivity was identified in some cultivars, both from sowing to flowering and from panicle initiation to flowering. The crop models were run for 47 years of temperature data to quantify the risk of encountering low temperature during the critical young microspore stage for 5 different sowing dates. Cultivars that were mildly photoperiod-sensitive, such as Amaroo, had a reduced likelihood of encountering low temperature over a wider range of sowing dates than photoperiod-insensitive cultivars. The benefits of increased photoperiod sensitivity include greater sowing flexibility and reduced water use, as growth duration is shortened when sowing is delayed. Determining the optimal sowing date also requires other considerations, e.g. the risk of cold damage at other sensitive stages such as flowering, and the response of yield to a delay in flowering under non-limiting conditions. It was concluded that an appropriate sowing time and the use of photoperiod-sensitive cultivars can be advantageous in the Riverina region in avoiding low-temperature damage during reproductive development.
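A highly simplified sketch of a temperature- and day-length-driven phenology model of the kind described (thermal-time accumulation slowed by a photoperiod term); the parameters and functional form are illustrative assumptions, not the fitted models of the paper:

```python
# Toy thermal-time phenology model with optional photoperiod sensitivity,
# run over a daily weather series to predict days from sowing to flowering.
# Parameters and functional form are illustrative, not the paper's fitted model.

BASE_TEMP = 10.0   # assumed base temperature (deg C)

def days_to_flowering(daily_mean_temp, daily_daylength_h,
                      thermal_target=1200.0, photoperiod_sensitivity=0.0,
                      optimal_daylength_h=12.0):
    """Return the day index on which accumulated (photoperiod-adjusted)
    thermal time first reaches thermal_target, or None if it never does."""
    accumulated = 0.0
    for day, (temp, daylength) in enumerate(zip(daily_mean_temp, daily_daylength_h)):
        degree_days = max(temp - BASE_TEMP, 0.0)
        # days longer than the optimum slow development for a sensitive cultivar
        delay = 1.0 + photoperiod_sensitivity * max(daylength - optimal_daylength_h, 0.0)
        accumulated += degree_days / delay
        if accumulated >= thermal_target:
            return day
    return None
```

Running such a model over many years of temperature records for several sowing dates gives the distribution of dates at which sensitive stages fall, and hence an estimate of cold-damage risk.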
Abstract:
Real-time software systems are rarely developed once and left to run. They are subject to changing requirements as the applications they support expand, and they commonly outlive the platforms they were designed to run on. A successful real-time system is duplicated and adapted to a variety of applications: it becomes a product line. Current methods for real-time software development are commonly based on low-level programming languages and involve considerable duplication of effort when a similar system is to be developed or the hardware platform changes. To provide more dependable, flexible and maintainable real-time systems at a lower cost, what is needed is a platform-independent approach to real-time systems development. The development process is composed of two phases: a platform-independent phase, which defines the desired system behaviour and develops a platform-independent design and implementation, and a platform-dependent phase, which maps the implementation onto the target platform. The latter phase should be highly automated. For critical systems, assessing dependability is crucial, so the partitioning into platform-dependent and platform-independent phases has to support verification of system properties through both phases.
Abstract:
Real-time control programs are often used in contexts where (conceptually) they run forever. Repetitions within such programs (or their specifications) may (i) be guaranteed to terminate, (ii) be guaranteed never to terminate (loop forever), or (iii) possibly terminate. In dealing with real-time programs and their specifications, we need to be able to represent these possibilities and define suitable refinement orderings. A refinement ordering based on Dijkstra's weakest precondition copes only with the first alternative. Weakest liberal preconditions allow one to constrain behaviour provided the program terminates, which copes with the third alternative to some extent. However, neither of these handles the case in which a program does not terminate. To handle this case, a refinement ordering based on relational semantics can be used. In this paper we explore these issues and the definition of loops for real-time programs, as well as corresponding refinement laws.
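For reference, the standard relationship between the two predicate transformers mentioned above, in conventional (non-real-time) notation; a sketch, not the paper's definitions:

```latex
% wlp(S, Q): if S terminates, it establishes Q (nontermination is allowed);
% wp(S, Q):  S terminates and establishes Q.  They are related by:
\mathrm{wp}(S, Q) \;\equiv\; \mathrm{wlp}(S, Q) \wedge \mathrm{wp}(S, \mathit{true})
```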