2 results for Object-oriented methods
in Digital Peer Publishing
Abstract:
In this paper I first discuss some non-causal change constructions which have largely gone unnoticed in the literature, such as The butler bowed the guests in (which is said to code mild causation) and The supporters booed Newcastle off at the interval (which only codes temporal coextension between its two constitutive subevents). Since the same structure (i.e. the transitive object-oriented change construction) can be used to code a wide spectrum of causal and temporal relations, the question arises of what cognitive mechanisms may be involved in such meaning shifts. I argue that variation can be motivated on the basis of the figure/ground segregation which the conceptualiser can impose upon the integrated scene coded by the change construction. The integrated scene depicts a force-dynamic scenario but also evokes a unique temporal setting (i.e. temporal overlap or coextension between the constitutive subevents). Such a “bias” towards temporal overlap can be exploited by the conceptualiser to background causation and highlight temporal overlap interpretations. It is also shown that figure/ground segregation can be invoked to account for the causal interpretation of intransitive change constructions, e.g. The kettle boiled dry. If the conceptual distance between the verbal event and the non-verbal event is (relatively) great, causality can be highlighted even in intransitive patterns.
Abstract:
We present a user-supported tracking framework that combines automatic tracking with extended user input to create error-free tracking results suitable for interactive video production. The goal of our approach is to keep the necessary user input as small as possible. In our framework, the user can select between different tracking algorithms: existing ones and new ones described in this paper. Furthermore, the user can automatically fuse the results of different tracking algorithms with our robust fusion approach. The tracked object can be marked in more than one frame, which can significantly improve the tracking result. After tracking, the user can validate the results easily, supported by a powerful interpolation technique. The tracking results are iteratively improved until the complete track has been found. After the iterative editing process, the tracking result of each object is stored in an interactive video file that can be loaded by our player for interactive videos.
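The abstract does not spell out how the robust fusion or the keyframe interpolation work, so the following is only a minimal sketch of two techniques that fit the description: per-coordinate median fusion of several trackers' bounding boxes (robust to a single failing tracker), and linear interpolation of boxes between user-validated keyframes. The function names and the (x, y, w, h) box format are assumptions for illustration, not the authors' actual implementation.

```python
from statistics import median

def fuse_boxes(boxes):
    """Fuse bounding boxes (x, y, w, h) reported by several trackers
    for the same frame by taking the per-coordinate median — a simple
    robust estimator that suppresses a minority of outlier trackers.
    NOTE: illustrative sketch, not the paper's actual fusion method."""
    return tuple(median(coord) for coord in zip(*boxes))

def interpolate_track(keyframes):
    """Linearly interpolate boxes between user-validated keyframes.
    keyframes: dict mapping frame index -> (x, y, w, h).
    Returns a dict covering every frame from the first to the last
    keyframe, with keyframe boxes preserved exactly."""
    frames = sorted(keyframes)
    track = {}
    for a, b in zip(frames, frames[1:]):
        box_a, box_b = keyframes[a], keyframes[b]
        for f in range(a, b):
            t = (f - a) / (b - a)  # fraction of the way from a to b
            track[f] = tuple(ca + t * (cb - ca)
                             for ca, cb in zip(box_a, box_b))
    track[frames[-1]] = keyframes[frames[-1]]
    return track
```

For example, if two trackers agree closely on a box while a third drifts far away, the median keeps the consensus; and marking the object in two distant frames lets the interpolation fill every frame in between for the user to validate.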