902 results for Math Applications in Computer Science
Abstract:
Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
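To make the idea of context-scoped, run-time adaptation concrete, the following Python sketch (hypothetical names, not the infrastructure described in the paper) shows a method whose implementation is selected through a run-time model and can be adapted for one execution context without affecting others.

# Minimal sketch: a method whose behaviour is adapted at run-time, with the
# adaptation scoped to an execution context. All names are hypothetical and
# only illustrate the model-centric, context-aware idea.

class Context:
    """A run-time execution context that adaptations can be scoped to."""
    current = "default"

class Greeter:
    # The run-time "model" of the method: a per-context table of
    # implementations that remains available and modifiable at run-time.
    _variants = {"default": lambda self: "Hello"}

    def greet(self):
        # Dispatch through the run-time model, picking the variant that
        # matches the active execution context (falling back to default).
        impl = self._variants.get(Context.current, self._variants["default"])
        return impl(self)

# A fine-grained, dynamic adaptation: install a new variant that is only
# visible while the "mobile" context is active.
Greeter._variants["mobile"] = lambda self: "Hi"

g = Greeter()
print(g.greet())            # "Hello"  (default context)
Context.current = "mobile"
print(g.greet())            # "Hi"     (adaptation scoped to this context)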
Abstract:
Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach, and we present benchmarks that demonstrate that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
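As an illustration of the core idea (the actual approach operates at the virtual machine level), the following Python sketch keeps an object's past field values reachable only through the live object, so that the history becomes garbage once the object itself does; all names are hypothetical.

# Minimal sketch: past states are stored in a chain hanging off the live
# object itself, so when the object becomes unreachable, its history is
# garbage collected along with it.

class HistoryAwareSlot:
    """A field whose previous values remain reachable only through its owner."""

    def __init__(self, value=None):
        self._value = value
        self._history = []          # older values, owned by this slot

    def set(self, value):
        # Remember the outgoing value before overwriting it; the history
        # lives in ordinary application memory next to the current value.
        self._history.append(self._value)
        self._value = value

    def get(self, steps_back=0):
        # steps_back=0 is the present; larger values walk into the past.
        if steps_back == 0:
            return self._value
        return self._history[-steps_back]

balance = HistoryAwareSlot(100)
balance.set(75)
balance.set(40)
print(balance.get())              # 40  (current state)
print(balance.get(steps_back=2))  # 100 (past state, still reachable)
# When `balance` itself becomes unreachable, its recorded history is
# collected with it - no separate trace log keeps the past data alive.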
Abstract:
We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During the translation, many aspects of the implementation can be tuned independently, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks that demonstrate performance superior to that of many similar Smalltalk VMs, although they still run slower than in Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy Sprint in Bern last autumn.
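For illustration, the following sketch shows a tiny stack-based bytecode interpreter written in a restricted, RPython-friendly Python style; it is not Spy itself and the opcode set is invented, but it conveys how a single abstract interpreter definition can be handed to a toolchain such as PyPy's and translated with different backends and garbage collectors.

# Minimal sketch of a stack-based bytecode interpreter in a plain,
# RPython-friendly Python subset (simple loops, no dynamic tricks).
# The opcodes and the example program are invented for illustration.

PUSH, ADD, MUL, RETURN = 0, 1, 2, 3

def interpret(bytecode, constants):
    stack = []
    pc = 0
    while pc < len(bytecode):
        opcode = bytecode[pc]
        pc += 1
        if opcode == PUSH:
            # Operand of PUSH is an index into the constant table.
            stack.append(constants[bytecode[pc]])
            pc += 1
        elif opcode == ADD:
            b = stack.pop(); a = stack.pop()
            stack.append(a + b)
        elif opcode == MUL:
            b = stack.pop(); a = stack.pop()
            stack.append(a * b)
        elif opcode == RETURN:
            return stack.pop()
    return None

# (2 + 3) * 4 == 20
print(interpret([PUSH, 0, PUSH, 1, ADD, PUSH, 2, MUL, RETURN], [2, 3, 4]))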
Abstract:
Multicasting is an efficient mechanism for one-to-many data dissemination. Unfortunately, IP multicast is not widely available to end-users today, but Application Layer Multicast (ALM), for example over a Content Addressable Network (CAN), helps to overcome this limitation. Our OM-QoS framework offers Quality of Service (QoS) support for ALMs. We evaluated OM-QoS applied to CAN and show that we can guarantee that all multicast paths satisfy certain QoS requirements.
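As a hedged illustration of what such a guarantee means operationally (this is not the OM-QoS algorithm itself), the following Python sketch checks that every path from the source to a receiver in an application-layer multicast tree stays within a simple end-to-end delay bound; the topology and delay figures are invented.

# Illustrative check only: verify a per-path QoS requirement (here an
# end-to-end delay bound) over all root-to-leaf paths of a multicast tree.

def paths_meet_delay_bound(tree, delays, root, bound_ms):
    """Depth-first walk; returns False if any root-to-leaf path exceeds the bound."""
    stack = [(root, 0)]                 # (node, accumulated delay in ms)
    while stack:
        node, acc = stack.pop()
        if acc > bound_ms:
            return False
        for child in tree.get(node, []):
            stack.append((child, acc + delays[(node, child)]))
    return True

tree = {"S": ["A", "B"], "A": ["C"]}    # S -> A -> C and S -> B
delays = {("S", "A"): 20, ("S", "B"): 35, ("A", "C"): 25}
print(paths_meet_delay_bound(tree, delays, "S", bound_ms=50))  # True (45 ms and 35 ms)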
Abstract:
To interconnect a wireless sensor network (WSN) to the Internet, we propose to use TCP/IP as the standard protocol for all network entities. We present a cross-layer designed communication architecture, which contains a MAC protocol, IP, a new protocol called Hop-to-Hop Reliability (H2HR), and the TCP Support for Sensor Nodes (TSS) protocol. The MAC protocol implements the MAC layer of beacon-less personal area networks (PANs) as defined in IEEE 802.15.4. H2HR implements hop-to-hop reliability mechanisms; two acknowledgment mechanisms, explicit and implicit ACKs, are supported. TSS optimizes the use of TCP in WSNs by implementing local retransmission of TCP data packets, local TCP ACK regeneration, aggressive TCP ACK recovery, and congestion and flow control algorithms. We show that H2HR significantly increases the performance of UDP, TCP, and RMST in WSNs: the throughput is increased and the packet loss ratio is decreased. As a result, WSNs can be operated and managed using TCP/IP.
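The following Python sketch illustrates the hop-to-hop reliability idea in the spirit of H2HR: a node retransmits a packet until an acknowledgment is observed, where overhearing the next hop forwarding the packet counts as an implicit ACK. The loss model, timings, and function names are invented for illustration and do not reproduce the actual protocol.

# Minimal sketch of hop-to-hop retransmission with implicit ACKs on a
# simulated lossy radio link. All parameters are invented.

import random

MAX_RETRIES = 3

def send_hop(packet_id, loss_probability=0.4):
    """Try to deliver one packet to the next hop, returning the attempts used."""
    for attempt in range(1, MAX_RETRIES + 1):
        delivered = random.random() > loss_probability   # lossy radio link
        # Implicit ACK: the sender overhears the neighbour forwarding the
        # packet onwards; an explicit ACK frame would work the same way here.
        if delivered:
            print("packet %d acknowledged after %d attempt(s)" % (packet_id, attempt))
            return attempt
        print("packet %d: no ACK overheard, retransmitting" % packet_id)
    print("packet %d dropped after %d attempts" % (packet_id, MAX_RETRIES))
    return MAX_RETRIES

random.seed(1)
for pid in range(3):
    send_hop(pid)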
Abstract:
Contention-based MAC protocols follow periodic listen/sleep cycles. These protocols face the problem of virtual clustering if different, unsynchronized listen/sleep schedules occur in the network, which has been shown to happen in wireless sensor networks. To interconnect these virtual clusters, border nodes maintaining all respective listen/sleep schedules are required. However, this wastes energy if a common schedule can be determined locally. We propose LACAS, which achieves local synchronization with a mechanism similar to gravitation: clusters represent the mass, whereas the synchronization messages sent by each cluster represent the gravitational force of that cluster. Due to the mutual attraction caused by the clusters, all clusters eventually merge. The exchange of synchronization messages itself is not altered by LACAS; accordingly, LACAS introduces no overhead and merely exploits a so far unused property of existing synchronization mechanisms.
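To illustrate the gravitation analogy (a sketch only, not the LACAS implementation), the following Python snippet treats each virtual cluster as a mass located at its schedule offset and lets every cluster drift toward the mass-weighted centre, so that large clusters attract small ones until the schedules merge; all numbers are invented.

# Minimal sketch of gravitation-like schedule merging: each cluster has a
# listen/sleep schedule offset (ms) and a "mass" (its node count); on every
# synchronization round each cluster is pulled toward the weighted centre.

def merge_schedules(offsets, masses, rounds=10, pull=0.5):
    offsets = list(offsets)
    for _ in range(rounds):
        total_mass = float(sum(masses))
        # Mass-weighted centre of all schedules = the combined "gravity" pull.
        centre = sum(o * m for o, m in zip(offsets, masses)) / total_mass
        offsets = [o + pull * (centre - o) for o in offsets]
    return offsets

# Three unsynchronized clusters: a large one at offset 0 ms and two smaller
# ones at 40 ms and 90 ms. The large cluster dominates the merged schedule.
print(merge_schedules([0.0, 40.0, 90.0], masses=[50, 10, 5]))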
Abstract:
Through the concerted evaluation of thousands of commercial substances for persistence, bioaccumulation, and toxicity as a result of the United Nations Environment Programme's Stockholm Convention, it has become apparent that fewer empirical data are available on bioaccumulation than on other endpoints, and that bioaccumulation models were not designed to accommodate all chemical classes. Given the number of chemicals that may require further assessment, in vivo testing is cost-prohibitive and discouraged because of the large number of animals needed. Although in vitro systems are less developed and characterized for fish, multiple high-throughput in vitro assays have been used to explore the dietary uptake and elimination of pharmaceuticals and other xenobiotics by mammals. While similar processes determine bioaccumulation in mammalian species, a review of methods to measure chemical bioavailability in fish screening systems, such as chemical biotransformation or metabolism in tissue slices, perfused tissues, fish embryos, primary and immortalized cell lines, and subcellular fractions, suggests that quantitative and qualitative differences between fish and mammals exist. Using in vitro data in assessments of whole organisms or populations requires certain considerations and assumptions to scale data from a test tube to a fish, and across fish species. Different models may also need to incorporate the predominant site of metabolism, such as the liver, as well as significant presystemic metabolism by the gill or gastrointestinal system, to accurately convert in vitro data into representative whole-animal metabolism and subsequent bioaccumulation potential. The development of animal-alternative tests for fish bioaccumulation assessment is framed in the context of in vitro data requirements for regulatory assessments in Europe and Canada.
Abstract:
The possibilities offered by rapid prototyping allow computer-assisted 3D surgical plans to be transferred precisely into the operation. At the Universitätsklinik Balgrist, nearly 100 patients whose operations were planned in 3D and carried out with patient-specific guides have been treated successfully over the past three years. We describe the accuracy of this method and report on the experience gathered in the process. Owing to the flexibility of rapid prototyping technology, there is not always only one way in which a 3D-planned operation can be carried out. Using case examples, we therefore present different strategies and describe their advantages and disadvantages. In addition, we present the further development of the method for application to smaller anatomy such as the bones of the wrist or the fingers.