871 results for propositional linear-time temporal logic
Abstract:
Self-adaptive software provides a promising solution for adapting applications to changing contexts in dynamic and heterogeneous environments. Having emerged from Autonomic Computing, it incorporates fully autonomous decision making based on predefined structural and behavioural models. The most common approach to architectural runtime adaptation is the MAPE-K adaptation loop, which implements an external adaptation manager without manual user control. However, it has turned out that adaptation behaviour lacks acceptance if it does not correspond to a user’s expectations – particularly in Ubiquitous Computing scenarios with user interaction. Adaptations can be irritating and distracting if they are not appropriate for a given situation. In general, uncertainty during development and at run-time causes problems when users are outside the adaptation loop. In a literature study, we analyse publications on self-adaptive software research. The results show a discrepancy between the motivated application domains, the maturity of examples, and the quality of evaluations on the one hand and the provided solutions on the other hand. Only a few publications analysed the impact of their work on the user, but many employ user-oriented examples for motivation and demonstration. To incorporate the user within the adaptation loop and to deal with uncertainty, our proposed solutions enable user participation for interactive self-adaptive software while at the same time maintaining the benefits of intelligent autonomous behaviour. We define three dimensions of user participation, namely temporal, behavioural, and structural user participation. This dissertation contributes solutions for user participation in the temporal and behavioural dimensions. The temporal dimension addresses the moment of adaptation, which is classically determined by the self-adaptive system. We provide mechanisms allowing users to influence or to define the moment of adaptation.
With our solutions, users either gain full control over the moment of adaptation, or the self-adaptive software takes the user’s situation into account more appropriately. The behavioural dimension addresses the actual adaptation logic and the resulting run-time behaviour. Application behaviour is established during development and does not necessarily match run-time expectations. Our contributions are three distinct solutions which allow users to make changes to the application’s run-time behaviour: dynamic utility functions, fuzzy-based reasoning, and learning-based reasoning. The foundation of our work is a notification and feedback solution that improves the intelligibility and controllability of self-adaptive applications by implementing bi-directional communication between the self-adaptive software and the user. The different mechanisms from the temporal and behavioural participation dimensions require the notification and feedback solution to inform users about adaptation actions and to provide a mechanism for influencing adaptations. Case studies show the feasibility of the developed solutions. Moreover, an extensive user study with 62 participants was conducted to evaluate the impact of notifications before and after adaptations. Although the study revealed that there is no preference for a particular notification design, participants clearly appreciated intelligibility and controllability over autonomous adaptations.
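The MAPE-K loop with a user-participation hook, as described in this abstract, can be sketched in a few lines. The following is a minimal illustrative model only; all class and method names are assumptions for the sake of the sketch, not the dissertation's actual architecture.

```python
# Minimal sketch of a MAPE-K adaptation loop with a user-confirmation hook.
# All names (AdaptationManager, ask_user, ...) are illustrative assumptions.

class AdaptationManager:
    """External adaptation manager: Monitor-Analyse-Plan-Execute over a
    shared Knowledge base, with optional user participation at execution."""

    def __init__(self, knowledge, ask_user=None):
        self.knowledge = knowledge   # shared structural/behavioural models
        self.ask_user = ask_user     # hook placing the user inside the loop

    def monitor(self, sensors):
        self.knowledge["context"] = sensors()

    def analyse(self):
        # flag an adaptation when the observed context deviates from the model
        return self.knowledge["context"] != self.knowledge.get("assumed")

    def plan(self):
        return {"action": "reconfigure", "target": self.knowledge["context"]}

    def execute(self, plan):
        # temporal user participation: the user may confirm or veto the moment
        if self.ask_user is None or self.ask_user(plan):
            self.knowledge["assumed"] = plan["target"]
            return True
        return False

mgr = AdaptationManager({"assumed": "indoor"}, ask_user=lambda plan: True)
mgr.monitor(lambda: "outdoor")
if mgr.analyse():
    applied = mgr.execute(mgr.plan())
```

With `ask_user=None` the loop degenerates to the classical fully autonomous manager; passing a callback is one way to model the notification-and-feedback channel the abstract mentions.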
Abstract:
This thesis deals with the influences of visually perceived motion features on an observer's action control. Specifically, it examines how motion direction and motion speed, as task-irrelevant stimuli, influence the execution of motor responses to colour stimuli, producing faster or delayed reaction times. Previous studies were limited to linear motion (from right to left and vice versa) and very simple stimulus environments (motion of simple geometric symbols, dot clouds, point-light walkers, etc.) (e.g. Ehrenstein, 1994; Bosbach, 2004; Wittfoth, Buck, Fahle & Herrmann, 2006). In the present dissertation, the validity of these findings was extended to rotational and in-depth motion as well as complex motion patterns (human movement sequences in sport), worked through theoretically, and tested empirically in a series of six reaction-time experiments using the Simon paradigm. Common to all experiments was that participants, seated at a computer monitor, had to respond with a key press (left, right, proximal or distal key) to a colour change within the dynamic visual stimulus, while the speed and direction of the motion were irrelevant to the response. Concerning the influence of rotational motion of geometric symbols (Exp. 1 and 1a) and of human rotational movements (Exp. 2), the results show that participants respond significantly faster when the directional information of a rotation is compatible with the spatial features of the required key response. The degree of complexity of the visual event plays no role here. For the cognitive processing of the motion stimulus, the decisive spatial criterion is not the sense of rotation but the relative motion direction above and below the rotation axis. Concerning the influence of in-depth motion of a sphere (Exp. 3) and of a walking person (Exp. 4), our findings show that participants respond significantly faster when the stimulus moves towards the observer and a proximal rather than a distal key press is required, and vice versa. Here, too, the degree of complexity of the visual event plays no role. In both experiments, the perception of the motion direction induces an action that leads to fast execution in the compatible case and delayed execution in the incompatible case. Experiments 5 and 6 examined the influences of perceived human running movements (free running vs. treadmill running), with and without a change in position. Independently of the change in position, running speed did not modulate the direction-based Simon effect. Overall, the results fit well with effect-based accounts of action control (e.g. the Theory of Event Coding of Hommel et al., 2001). Further studies are needed to transfer these results to gross-motor responses and to displays modelled more closely on visually perceivable sport events.
Abstract:
In the theory of the Navier-Stokes equations, the proofs of some basic known results, such as the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually use only a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that the proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be unstable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second-order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first-order convergence of the implicit Euler scheme for the Navier-Stokes equations.
For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
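The convergence orders mentioned above can be illustrated on a scalar model problem. The sketch below applies the implicit Euler and Crank-Nicolson schemes to u' = -u, u(0) = 1; it is only a toy analogue under that assumption and does not reproduce the thesis's SST setting or the Navier-Stokes equations.

```python
# Implicit Euler and Crank-Nicolson on the scalar model problem u' = -u,
# u(0) = 1, illustrating first- and second-order convergence respectively.
import math

def implicit_euler(u0, T, n):
    u, k = u0, T / n
    for _ in range(n):
        u = u / (1 + k)                      # solves (u_new - u)/k = -u_new
    return u

def crank_nicolson(u0, T, n):
    u, k = u0, T / n
    for _ in range(n):
        u = u * (1 - k / 2) / (1 + k / 2)    # trapezoidal average of -u
    return u

exact = math.exp(-1.0)
err_ie = [abs(implicit_euler(1.0, 1.0, n) - exact) for n in (10, 20)]
err_cn = [abs(crank_nicolson(1.0, 1.0, n) - exact) for n in (10, 20)]
# halving the step roughly halves the Euler error (order 1)
# and quarters the Crank-Nicolson error (order 2)
```

The error ratios under step halving (about 2 for implicit Euler, about 4 for Crank-Nicolson) are the scalar analogue of the first- and second-order convergence results discussed in the abstract.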
Abstract:
Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might appear, such as services not being available or not responding in the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, proper solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered to control the service composition and to improve its quality behaviour. Regarding the quality properties, BPRules allows distinguishing between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition.
We developed new and efficient heuristic algorithms that choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. Our evaluation shows that our selection algorithms outperform a genetic algorithm from related research, and it reveals how context data can be used for a personalized prediction of response time and throughput.
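The flavour of heuristic QoS-aware service selection can be sketched as follows. This is a generic greedy heuristic under an assumed utility weighting, not the thesis's actual algorithms; all names and numbers are illustrative.

```python
# Hedged sketch of heuristic service selection: for each task of the
# composition, greedily pick the candidate with the best utility while
# keeping total response time under a global QoS constraint.
# The utility weighting (quality minus 0.1 * response time) is an assumption.

def select_services(tasks, max_total_rt):
    """tasks: {task: [(name, response_time, quality_score), ...]}"""
    chosen, total_rt = {}, 0.0
    for task, candidates in tasks.items():
        # rank candidates by a utility favouring quality and low latency
        ranked = sorted(candidates,
                        key=lambda c: c[2] - 0.1 * c[1], reverse=True)
        for name, rt, q in ranked:
            if total_rt + rt <= max_total_rt:
                chosen[task], total_rt = name, total_rt + rt
                break
        else:
            return None   # no feasible candidate for this task
    return chosen

composition = select_services(
    {"pay":  [("fastPay", 2.0, 0.7), ("cheapPay", 6.0, 0.9)],
     "ship": [("ship1", 3.0, 0.8)]},
    max_total_rt=6.0)
```

Unlike exhaustive or genetic search, such a greedy pass is linear in the number of candidates, which is the usual trade-off heuristic selection makes against global optimality.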
Abstract:
Time-resolved diffraction with femtosecond electron pulses has become a promising technique for directly providing insight into photo-induced primary dynamics at the atomic level in molecules and solids. Ultrashort pulse durations as well as extensive spatial coherence are desired; however, space-charge effects complicate the bunching of multiple electrons in a single pulse. We experimentally investigate the interplay between spatial and temporal aspects of resolution limits in ultrafast electron diffraction (UED) on our highly compact transmission electron diffractometer. To that end, the initial source size and charge density of the electron bunches are systematically manipulated, and the resulting bunch properties at the sample position are fully characterized in terms of lateral coherence, temporal width and diffracted intensity. We obtain a previously unreported measured overall temporal resolution of 130 fs (full width at half maximum), corresponding to 60 fs (root mean square), and transverse coherence lengths of up to 20 nm. Instrumental impacts on the effective signal yield in diffraction and on electron pulse brightness are discussed as well. The performance of our compact UED setup at selected electron pulse conditions is finally demonstrated in a time-resolved study of lattice heating in multilayer graphene after optical excitation.
Abstract:
The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
Abstract:
This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views reduces to 23 (instead of 26) when the scene is a Linear Line Complex (a set of lines in space intersecting a common line) and to 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that one can obtain a unique solution when the admissibility constraints of the tensor are accounted for. The line configuration described by an LLC, rather than being some obscure case, is in fact quite typical. It includes, as a particular example, the case of a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may occur as an artifact, such as in direct estimation from spatio-temporal derivatives of image brightness. Therefore, an investigation into degeneracies and their remedy is also important in practice.
Abstract:
Timing constraints for real-time systems are usually verified through the satisfiability of propositional formulae. In this paper, we propose an alternative in which the verification of timing constraints is done by counting the number of truth assignments instead of deciding boolean satisfiability. This number can also tell us how “far away” a given specification is from satisfying its safety assertion. Furthermore, specifications and safety assertions are often modified incrementally, with problematic bugs fixed one at a time. To support this style of development, we propose an incremental algorithm for counting satisfiability. The proposed incremental algorithm is optimal in that no unnecessary nodes are created during each counting, and it works for the class of path RTL. To illustrate the application, we show how incremental satisfiability counting can be applied to the well-known railroad crossing example, particularly while its specification is still being refined.
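The core idea of satisfiability counting can be sketched by brute-force enumeration. This toy version is illustrative only; the paper's contribution is an incremental algorithm that avoids recounting from scratch after each specification change.

```python
# Minimal sketch of satisfiability counting (#SAT) by enumeration: the
# number of satisfying truth assignments measures how "far" a specification
# is from violating (or from entailing) its safety assertion.
from itertools import product

def count_sat(formula, variables):
    """Count truth assignments over `variables` that make `formula` true."""
    return sum(
        1 for values in product([False, True], repeat=len(variables))
        if formula(dict(zip(variables, values)))
    )

# example: count assignments satisfying (a or b) -> c over a, b, c
n = count_sat(lambda v: (not (v["a"] or v["b"])) or v["c"], ["a", "b", "c"])
```

Here 5 of the 8 assignments satisfy the implication; if an edit to the specification lowers this count toward 0, the specification is moving away from its safety assertion, which is exactly the "distance" reading the abstract describes.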
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly, uniformly convergent reduced-basis approximations – Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation – relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures – stratagems that exploit affine parameter dependence to de-couple the generation and projection stages of the approximation process. The operation count for the online stage – in which, given a new parameter value, we calculate the output and associated error bound – depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, building on this technique, we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are three-fold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data – well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
Abstract:
Time series regression models are especially suitable in epidemiology for evaluating short-term effects of time-varying exposures on health. The problem is that the potential for confounding in time series regression is very high; thus, it is important that trend and seasonality be properly accounted for. Our paper reviews the statistical models commonly used in time-series regression methods, in particular those allowing for serial correlation, which makes them potentially useful for selected epidemiological purposes. In particular, we discuss the use of time-series regression for counts using a wide range of Generalised Linear Models as well as Generalised Additive Models. In addition, recently raised critical points in the use of statistical software for GAMs are stressed, and reanalyses of time series data on air pollution and health were performed in order to update already published results. Applications are offered through an example of the relationship between asthma emergency admissions and photochemical air pollutants.
Abstract:
We develop an extension to the tactical planning model (TPM) for a job shop by the third author. The TPM is a discrete-time model in which all transitions occur at the start of each time period. The time period must be defined appropriately in order for the model to be meaningful. Each period must be short enough so that a job is unlikely to travel through more than one station in one period. At the same time, the time period needs to be long enough to justify the assumptions of continuous workflow and Markovian job movements. We build an extension to the TPM that overcomes this restriction of period sizing by permitting production control over shorter time intervals. We achieve this by deriving a continuous-time linear control rule for a single station. We then determine the first two moments of the production level and queue length for the workstation.
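The behaviour of a linear control rule for a single station can be sketched by simulation. The rule and parameters below (produce a fixed fraction `alpha` of the current queue in each short interval, with random exponential arrivals) are illustrative assumptions, not the TPM paper's actual control rule or data.

```python
# Hedged sketch of a linear control rule for a single workstation: each
# short interval, production is a fixed fraction `alpha` of the queue.
# The first moments of production and queue length are then estimated
# by simulation. All parameters are illustrative assumptions.
import random

def simulate(alpha=0.5, mean_arrival=2.0, steps=20000, seed=42):
    rng = random.Random(seed)
    q, prods, queues = 0.0, [], []
    for _ in range(steps):
        q += rng.expovariate(1.0 / mean_arrival)  # work arriving this interval
        p = alpha * q                             # linear control rule
        q -= p
        prods.append(p)
        queues.append(q)
    return sum(prods) / steps, sum(queues) / steps

mean_prod, mean_queue = simulate()
# in steady state the mean production equals the mean arrival rate,
# and the mean post-production queue is (1 - alpha) / alpha times that rate
```

Shrinking the interval (larger effective `alpha` per unit time) is the discrete analogue of the continuous-time control rule the abstract derives, which removes the period-sizing restriction of the discrete-time TPM.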
Abstract:
Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers feeding the loch. Here, I explicitly model the variation of the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distance from the source origins.
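The deterministic core of such a model, a convex linear combination of compositions re-closed to sum to one, can be sketched directly. The river data and weights below are invented for illustration, not from the paper, and the stochastic (logistic Normal) part is omitted.

```python
# Hedged sketch of a convex linear combination of D-part compositions:
# each source composition is weighted by a mixing proportion (which, in the
# paper's setting, would vary with distance or time), then re-closed.
# Data and weights are illustrative assumptions.

def closure(x):
    """Rescale a positive vector so its parts sum to one."""
    s = sum(x)
    return [xi / s for xi in x]

def mix(compositions, weights):
    """Convex combination of compositions with the given mixing weights."""
    weights = closure(weights)
    return closure([
        sum(w * c[i] for w, c in zip(weights, compositions))
        for i in range(len(compositions[0]))
    ])

# three "river" pollution compositions; weights stand in for the
# distance-dependent mixing proportions at one point in the loch
rivers = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]]
loch_point = mix(rivers, weights=[0.5, 0.3, 0.2])
```

In the paper's extension, the `weights` vector would itself be a function of location (and possibly time), which is exactly the multi-dimensional variation of the mixing composition being modelled.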
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and for evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used, with the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made, and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to the open questions. In this paper we follow the lines of the paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extraction of compositional factors, ilr-transformation of the components and visualization of the factor scores in a spatial context: the compositional factors are plotted versus the depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
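The ilr-transformation mentioned above maps a D-part composition to D-1 unconstrained real coordinates, on which ordinary multivariate methods can then operate. The sketch below uses the standard sequential-binary-partition basis as an assumption; the paper itself follows the factor construction of Tolosana-Delgado et al. (2005).

```python
# Hedged sketch of the isometric log-ratio (ilr) transformation for a
# strictly positive D-part composition, using the common sequential-
# binary-partition basis: ilr_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1}).
import math

def ilr(x):
    """ilr coordinates of a positive composition x (any closure constant)."""
    d = len(x)
    coords = []
    for i in range(1, d):
        # geometric mean of the first i parts
        gm = math.exp(sum(math.log(v) for v in x[:i]) / i)
        coords.append(math.sqrt(i / (i + 1)) * math.log(gm / x[i]))
    return coords

z = ilr([0.2, 0.3, 0.5])   # 3-part composition -> 2 ilr coordinates
```

A useful property visible in this sketch is scale invariance: rescaling the composition (e.g. percentages versus proportions) leaves the ilr coordinates unchanged, which is why the transformation is suited to compositional data.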
Abstract:
Abstract taken from the publication
Abstract:
The Zimbardo Time Perspective Inventory – ZTPI (Zimbardo & Boyd, 1999) is a scale composed of five factors (past-positive, past-negative, present-hedonistic, present-fatalistic and future) that assesses time perspective multidimensionally, thereby overcoming one of the limitations noted in instruments created in the past. The aim of this study is to analyse the factor structure of a Portuguese version of the ZTPI in a sample of 277 Portuguese university students aged between 18 and 53 years (M = 22, SD = 5.43). Five factors were found, explaining 35.25% of the total variance. These results are very similar to those reported by Zimbardo and Boyd (1999) in the original publication of the instrument.