929 results for deontic modality
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The modal distinctions proposed by Hengeveld (2004), and reexamined by Hengeveld and Mackenzie (2008) within Functional Discourse Grammar (FDG), distinguish five types of modality: facultative, deontic, volitive, epistemic and evidential. Focusing on deontic modality, there is evidence that it can be subdivided into objective and subjective values, as analyzed by Olbertz and Gasparini-Bastos (2013) in auxiliary constructions of spoken Spanish. This work aims to investigate the contextual elements that favor the interpretation of these two values when they are expressed by the modal auxiliary verb "dever" (must) in spoken Portuguese data.
Abstract:
Impure systems contain Objects and Subjects; Subjects are human beings. We can distinguish a person as an observer (subjectively outside the system), who by definition is the Subject himself, from a person as part of the system, in which case he acquires the category of object. Objects (relative beings) are significances: the consequence of perceptual beliefs on the part of the Subject about material or energetic objects (absolute beings) with certain characteristics. The IS (Impure System) approach is as follows: Objects are perceptual significances (relative beings) of material or energetic objects (absolute beings). The set of these objects forms an impure set of the first order. The relations existing between these relative objects are of two classes: transactions of matter and/or energy, and inferential relations. Transactions can have alethic modality: necessity, possibility, impossibility and contingency. The ontic existence of possibility entails that inferential relations have deontic modality: obligation, permission, prohibition, faculty and analogy. We distinguish between theorems (natural laws) and norms (ethical, legislative and customary rules of conduct).
Abstract:
This article examines a variety of options for expressing speaker and writer stance in a subcorpus of MarENG, a maritime English learning tool sponsored by the EU (35,041 words). Non-verbal markers related to key areas of modal expression are presented: (1) epistemic adverbs and adverbial expressions, (2) epistemic adjectives, (3) deontic adjectives, (4) evidential adverbs, (5) evidential adjectives, (6) evidential interpersonal markers, and (7) single adverbials conveying the speaker's attitudes, feelings or value judgments. The overall aim is to present an overview of how these non-verbal markers operate in this LSP genre.
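Marker inventories like the seven categories above are typically applied to a corpus by lexicon lookup. A toy sketch in Python, using an invented mini-lexicon (not MarENG's actual word lists), illustrates the counting step:

```python
import re
from collections import Counter

# Invented mini-lexicon for illustration only; the study's categories,
# not its actual marker lists.
MARKERS = {
    "epistemic_adverb": {"probably", "perhaps", "possibly"},
    "epistemic_adjective": {"likely", "possible"},
    "deontic_adjective": {"necessary", "mandatory"},
    "evidential_adverb": {"evidently", "apparently"},
}

def count_stance_markers(text):
    """Count occurrences of each stance-marker category in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for category, words in MARKERS.items():
            if tok in words:
                counts[category] += 1
    return dict(counts)
```

A real study would use multi-word expressions and disambiguation as well, but the per-category tallies reported in such corpus work reduce to this kind of lookup.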
Abstract:
Thermal-infrared images have superior statistical properties compared with visible-spectrum images in many low-light or no-light scenarios. However, a detailed understanding of feature detector performance in the thermal modality lags behind that of the visible modality. To address this, the first comprehensive study on feature detector performance on thermal-infrared images is conducted. A dataset is presented which explores a total of ten different environments with a range of statistical properties. An investigation is conducted into the effects of several digital and physical image transformations on detector repeatability in these environments. The effect of non-uniformity noise, unique to the thermal modality, is analyzed. The accumulation of sensor non-uniformities beyond the minimum possible level was found to have only a small negative effect. A limiting of feature counts was found to improve the repeatability performance of several detectors. Most other image transformations had predictable effects on feature stability. The best-performing detector varied considerably depending on the nature of the scene and the test.
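Detector repeatability in studies like this is commonly scored as the fraction of reference keypoints re-detected within a pixel tolerance after an image transformation. A minimal sketch of that standard score (the paper's exact protocol may differ):

```python
import numpy as np

def repeatability(ref_pts, redetected_pts, tol=3.0):
    """Fraction of reference keypoints that reappear within `tol`
    pixels among the keypoints detected in the transformed image."""
    if len(ref_pts) == 0:
        return 0.0
    ref = np.asarray(ref_pts, dtype=float)
    new = np.asarray(redetected_pts, dtype=float)
    # Pairwise distances between reference and re-detected keypoints.
    d = np.linalg.norm(ref[:, None, :] - new[None, :, :], axis=2)
    matched = (d.min(axis=1) <= tol).sum()
    return matched / len(ref)
```

Comparing this score across detectors and across transformation severities (noise, blur, non-uniformity) is what allows the "best detector varies by scene" conclusion above.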
The backfilled GEI: a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research with a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from frontal depth images as well as the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capturing systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of whom were also captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects through their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, and show that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows a significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
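For context, the underlying GEI is simply the per-pixel average of aligned binary silhouettes over a gait cycle; a minimal sketch (the back-filling step that produces the BGEI from frontal depth images is the paper's contribution and is not reproduced here):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes, shaped
    (T, H, W), into one grey-level Gait Energy Image in [0, 1]."""
    stack = np.asarray(silhouettes, dtype=float)
    stack = (stack > 0).astype(float)  # force strictly binary frames
    return stack.mean(axis=0)
```

Pixels that are part of the body in every frame get value 1; pixels covered only during part of the gait cycle get intermediate values, which is what encodes the motion.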
Abstract:
Background Standard operating procedures state that police officers should not drive while interacting with their mobile data terminal (MDT), which provides in-vehicle information essential to police work. Such interactions do, however, occur in practice and represent a potential source of driver distraction. The MDT comprises visual output with manual input via touch screen and keyboard. This study investigated the potential for alternative input and output methods to mitigate driver distraction, with specific focus on eye movements. Method Nineteen experienced drivers of police vehicles (one female) from the NSW Police Force completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence-plate search using an emulated MDT. Three different interface methods were examined: Visual-Manual, Visual-Voice, and Audio-Voice ("Visual" and "Audio" = output modality; "Manual" and "Voice" = input modality). During each drive, eye movements were recorded using FaceLAB™ (Seeing Machines Ltd, Canberra, ACT). Gaze direction and glances on the MDT were assessed. Results The Visual-Voice and Visual-Manual interfaces resulted in a significantly greater number of glances towards the MDT than Audio-Voice or Baseline, and in significantly more glances to the display than Audio-Voice or Baseline. For longer-duration glances (>2 s and 1-2 s), the Visual-Manual interface resulted in significantly more fixations than Baseline or Audio-Voice. Short-duration glances (<1 s) were significantly more frequent for both Visual-Voice and Visual-Manual compared with Baseline and Audio-Voice. There were no significant differences between Baseline and Audio-Voice. Conclusion An Audio-Voice interface has the greatest potential to decrease visual distraction to police drivers. However, it is acknowledged that an audio output may have limitations for information presentation compared with visual output.
The Visual-Voice interface offers an environment where the capacity to present information is sustained, whilst distraction to the driver is reduced (compared to Visual-Manual) by enabling adaptation of fixation behaviour.
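The glance analysis above bins fixations on the MDT by duration (<1 s, 1-2 s, >2 s). A hypothetical sketch of that binning step:

```python
def bin_glances(durations):
    """Count glances (durations in seconds) in the three duration
    bins used in the study: short (<1 s), medium (1-2 s), long (>2 s)."""
    bins = {"<1s": 0, "1-2s": 0, ">2s": 0}
    for d in durations:
        if d < 1.0:
            bins["<1s"] += 1
        elif d <= 2.0:
            bins["1-2s"] += 1
        else:
            bins[">2s"] += 1
    return bins
```

Long glances are the safety-critical bin: glances away from the road exceeding about 2 s are widely treated as high-risk in the driver distraction literature, which is why the >2 s counts drive the conclusion.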
Abstract:
The purpose of this study was to compare the effectiveness of three different recovery modalities - active (ACT), passive (PAS) and contrast temperature water immersion (CTW) - on the performance of repeated treadmill running, lactate concentration and pH. Fourteen males performed two pairs of treadmill runs to exhaustion at 120% and 90% of peak running speed (PRS) over a 4-hour period. ACT, PAS or CTW was performed for 15 min after the first pair of treadmill runs. ACT consisted of running at 40% PRS, PAS consisted of standing stationary, and CTW consisted of alternating between 60 s of cold (10°C) and 120 s of hot (42°C) water immersion. Run times were converted to time to cover a set distance using the critical power model. The type of recovery modality did not have a significant effect on the change in time to cover 400 m (Mean±SD: ACT 2.7±3.6 s, PAS 2.9±4.2 s, CTW 4.2±6.9 s), 1000 m (ACT 2.2±4.0 s, PAS 4.8±8.6 s, CTW 2.1±7.2 s) or 5000 m (ACT 1.4±29.0 s, PAS 16.7±58.5 s, CTW 11.7±33.0 s). Post-exercise blood lactate concentration was lower in ACT and CTW compared with PAS. Participants reported an increased perception of recovery in CTW compared with ACT and PAS. Blood pH was not significantly influenced by recovery modality. The data suggest that both ACT and CTW reduce lactate accumulation after high-intensity running, but high-intensity treadmill running performance returns to baseline 4 hours after the initial exercise bout regardless of the recovery strategy employed.
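The conversion of exhaustive run times to time-to-cover-distance uses the two-parameter critical speed model, d = D' + CS·t, fitted from the two exhaustive runs; a sketch under that standard model (the study's exact fitting procedure is not specified in the abstract, and the distances and times below are illustrative):

```python
def critical_speed(d1, t1, d2, t2):
    """Fit the two-parameter critical speed model d = D' + CS * t
    from two exhaustive runs (distances in m, times in s).
    Returns (CS in m/s, D' in m)."""
    cs = (d1 - d2) / (t1 - t2)
    d_prime = d1 - cs * t1
    return cs, d_prime

def time_for_distance(d, cs, d_prime):
    """Predicted time (s) to cover distance d (m) under the model."""
    return (d - d_prime) / cs
```

With the model parameters fixed per participant, each run's performance can be expressed as a predicted 400 m, 1000 m or 5000 m time, which is how the paired runs become comparable.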
Abstract:
This paper proposes an approach to achieve resilient navigation for indoor mobile robots. Resilient navigation seeks to mitigate the impact of control, localisation, or map errors on the safety of the platform while enforcing the robot's ability to achieve its goal. We show that resilience to unpredictable errors can be achieved by combining the benefits of independent and complementary algorithmic approaches to navigation, or modalities, each tuned to a particular type of environment or situation. In this paper, the modalities comprise a path planning method and a reactive motion strategy. While the robot navigates, a Hidden Markov Model continually estimates the most appropriate modality based on two types of information: context (information known a priori) and monitoring (evaluating unpredictable aspects of the current situation). The robot then uses the recommended modality, switching between one and another dynamically. Experimental validation with a Segway RMP-based platform in an office environment shows that our approach enables failure mitigation while maintaining the safety of the platform. The robot is shown to reach its goal in the presence of: 1) unpredicted control errors, 2) unexpected map errors and 3) a large injected localisation fault.
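The modality estimate can be sketched as a standard HMM forward (filtering) recursion over discrete monitoring observations; the two-modality transition and observation matrices below are invented for illustration, not the paper's learned values:

```python
import numpy as np

# Hypothetical two-modality HMM: hidden states are navigation modalities.
STATES = ["path_planning", "reactive"]
T = np.array([[0.9, 0.1],   # assumed transition probabilities:
              [0.2, 0.8]])  # modalities tend to persist
# Assumed observation likelihoods for discrete monitoring symbols:
# 0 = "environment clear", 1 = "environment cluttered"
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def filter_modality(obs, prior=np.array([0.5, 0.5])):
    """Forward (filtering) recursion: after each monitoring
    observation, return the most likely current modality."""
    belief = prior.copy()
    picks = []
    for o in obs:
        belief = E[:, o] * (T.T @ belief)  # predict, then correct
        belief /= belief.sum()             # normalise to a distribution
        picks.append(STATES[int(belief.argmax())])
    return picks
```

A run of "cluttered" observations gradually shifts the belief toward the reactive modality, giving the dynamic switching behaviour described above; the paper additionally conditions on a priori context, which this sketch omits.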
Abstract:
The provision of visual support to individuals with an autism spectrum disorder (ASD) is widely recommended. We explored one mechanism underlying the use of visual supports: efficiency of language processing. Two groups of children, one with and one without an ASD, participated. The groups had comparable oral and written language skills and nonverbal cognitive abilities. In two semantic priming experiments, prime modality and prime–target relatedness were manipulated. Response time and accuracy of lexical decisions on the spoken-word targets were measured. In the first, uni-modal experiment, both groups demonstrated significant priming effects. In the second, cross-modal experiment, no effect of relatedness or group was found. This result is considered in the light of the attentional capacity required for access to the lexicon via written stimuli within the developing semantic system. These preliminary findings are also considered with respect to the use of visual support for children with ASD.
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
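The local consistency test can be illustrated on per-point surface estimates with uncertainties: two modalities are declared consistent where their estimates agree within a few combined standard deviations, and only those points are fused (here by simple inverse-variance weighting; the paper works with full GPIS representations rather than this pointwise sketch):

```python
import numpy as np

def consistent_mask(mu_a, var_a, mu_b, var_b, k=2.0):
    """Local consistency test between two modalities' surface
    estimates: a point is consistent when the estimates agree within
    k combined standard deviations."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    sigma = np.sqrt(np.asarray(var_a, float) + np.asarray(var_b, float))
    return np.abs(mu_a - mu_b) <= k * sigma

def fuse_consistent(mu_a, var_a, mu_b, var_b, k=2.0):
    """Inverse-variance fusion of consistent points; inconsistent
    points fall back to the lower-variance modality's estimate."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    var_a, var_b = np.asarray(var_a, float), np.asarray(var_b, float)
    ok = consistent_mask(mu_a, var_a, mu_b, var_b, k)
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    fallback = np.where(var_a <= var_b, mu_a, mu_b)
    return np.where(ok, fused, fallback)
```

In the dust-and-smoke scenario above, this is the behaviour wanted: where laser and radar see the same surface their estimates are combined, and where dust makes the laser return a different "target", the test prevents the two from being averaged into a non-existent surface.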