994 results for situation theory


Relevance:

70.00%

Publisher:

Abstract:

The representation of a perceptual scene by a computer is usually limited to numbers representing dimensions and colours. The theory of affordances attempted to provide a new way of representing an environment with respect to a particular agent. The view was introduced as part of an entire field of psychology labelled 'ecological', which has since branched into computer science through robotics and formal methods. This thesis describes the concept of affordances, reviews several existing formalizations, and takes a brief look at applications to robotics. The formalizations put forth over the last 20 years share no agreed-upon structure beyond the requirement that the agent and the environment be taken in relation to one another. Situation theory has likewise been evolving since Barwise & Perry introduced it in 1983; it provides a formal way to represent any arbitrary piece of information in terms of relations. This thesis takes the toy version of situation theory published in CSLI Lecture Notes No. 22 and extends its ontologies to include specialized affordance types and individual object types. This allows for the definition of semantic objects called environments, which support a situation and a set of affordances, and niches, which refer to the set of actions available to an individual. Finally, a possible way for an environment to change into a new environment via the activation of an affordance is suggested.
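
To make the extended ontology concrete, here is a minimal sketch of environments, affordances and niches as data structures. All names and the transition rule are illustrative assumptions, not the thesis's formal situation-theoretic notation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class and field names are hypothetical,
# not the formal ontology of the thesis.

@dataclass(frozen=True)
class Affordance:
    """An action possibility the environment offers a particular agent."""
    name: str    # e.g. "sit-on", "grasp"
    agent: str   # the agent the affordance is relative to
    obj: str     # the object supporting the action

@dataclass
class Environment:
    """A situation together with the set of affordances it supports."""
    situation: frozenset                      # facts (infons) that hold
    affordances: set[Affordance] = field(default_factory=set)

    def activate(self, aff: Affordance) -> "Environment":
        # Activating an affordance yields a new environment; here we simply
        # record the performed action as a new fact. The thesis's actual
        # transition rule may differ.
        return Environment(self.situation | {(aff.agent, aff.name, aff.obj)},
                           set(self.affordances))

def niche(env: Environment, agent: str) -> set[Affordance]:
    """A niche: the set of actions the environment affords one individual."""
    return {a for a in env.affordances if a.agent == agent}
```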

Relevance:

60.00%

Publisher:

Abstract:

With the ever-growing number of connected sensors (IoT), making sense of sensed data becomes ever more important. Pervasive computing is a key enabler for sustainable solutions; prominent examples are smart energy systems and decision support systems. A key feature of pervasive systems is situation awareness, which allows a system to thoroughly understand its environment. It is based on external interpretation of data and thus relies on expert knowledge. Owing to the distinct nature of situations in different domains and applications, the development of situation-aware applications remains a complex process. This thesis is concerned with a general framework for situation awareness that simplifies the development of such applications. It builds on the Situation Theory Ontology to provide a foundation for situation modelling that allows knowledge reuse. Concepts of Situation Theory are mapped to Context Space Theory, which is used for situation reasoning; situation spaces in the context space are generated automatically from the defined knowledge. For the acquisition of sensor data, the IoT standards O-MI/O-DF are integrated into the framework. These allow a peer-to-peer data exchange between data publishers and the proposed framework, and thus a platform-independent subscription to sensed data. The framework is then applied to a use case for reducing food waste. The use case validates the applicability of the framework and furthermore serves as a showcase for a pervasive system contributing to sustainability goals. Leading institutions, e.g. the United Nations, stress the need for a more resource-efficient society and acknowledge the capability of ICT systems. The use case scenario is based on a smart neighbourhood in which the system recommends the most efficient use of food items through situation awareness to reduce food waste at the consumption stage.
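
As a rough illustration of the reasoning side: Context Space Theory treats a situation as a region in a multidimensional space of context attributes, and a situation is recognised when the current context state lies inside that region. The sketch below is a toy rendering of that idea; the dimension names and thresholds are invented for the food-waste scenario, not taken from the thesis.

```python
# Toy sketch in the spirit of Context Space Theory: a situation is a region
# in a multidimensional context space, and it "occurs" when the current
# context state falls inside that region.

Region = dict[str, tuple[float, float]]   # attribute -> (low, high)

def occurs(state: dict[str, float], region: Region) -> bool:
    """True if every context attribute lies inside the situation region."""
    return all(lo <= state.get(attr, float("nan")) <= hi
               for attr, (lo, hi) in region.items())

# Hypothetical situation: a food item close to expiry and still unconsumed.
about_to_expire: Region = {
    "days_until_expiry": (0.0, 2.0),
    "fraction_remaining": (0.1, 1.0),
}

state = {"days_until_expiry": 1.5, "fraction_remaining": 0.8}
print(occurs(state, about_to_expire))   # True -> recommend using this item
```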

Relevance:

30.00%

Publisher:

Abstract:

The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems, it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories. The first relates to the case where the backscattered signal is considered to be deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be implementable in a timely manner. For this reason an optical architecture is proposed, specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer-simulated using an optical compiler.
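
For reference, the ambiguity function referred to above, correlating the backscattered signal r(t) with the transmitted signal s(t) in time and frequency, has the standard narrowband form below (conventional notation, not necessarily the thesis's):

```latex
% Standard narrowband cross-ambiguity function of a transmitted signal s(t)
% and a received (backscattered) signal r(t); \tau is delay, \nu is Doppler.
\chi(\tau,\nu) = \int_{-\infty}^{\infty} r(t)\, s^{*}(t-\tau)\, e^{-j 2\pi \nu t}\, \mathrm{d}t
```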

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the role of first aid training, taught within a school-based injury prevention program, Skills for Preventing Injury in Youth (SPIY), in increasing adolescent helping behaviours. The research involved the development and application of an extended Theory of Planned Behaviour (TPB), incorporating 'behavioural willingness in a fight situation', 'first aid knowledge' and 'perceptions of injury seriousness', to predict the relationship between participation in SPIY and helping behaviours when a friend is injured in a fight. From 35 Queensland high schools, 2500 Year 9 students (mean age = 13.5, 40% male) completed surveys measuring their attitudes, perceived behavioural control, subjective norms and behavioural intention from the TPB, together with the added measures of behavioural willingness in a fight situation, perceptions of injury seriousness and first aid knowledge. It is expected that the TPB will significantly contribute to understanding the relationship between participation in SPIY and helping behaviours when a friend is injured in a fight. Further analyses will determine whether the extension of the model significantly increases the variance explained in helping behaviours. The findings of this research will provide insight into the critical factors that may increase adolescent bystanders' actions in injury situations.
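
Testing whether added constructs increase the variance explained beyond the core TPB predictors is typically done as a hierarchical (nested-model) regression. A minimal sketch with statsmodels follows; the data file and column names are invented placeholders, not the actual SPIY survey variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names.
df = pd.read_csv("spiy_survey.csv")

# Step 1: core TPB predictors of helping behaviour.
base = smf.ols(
    "helping ~ attitude + pbc + subjective_norm + intention", data=df
).fit()

# Step 2: core TPB predictors plus the extension constructs.
extended = smf.ols(
    "helping ~ attitude + pbc + subjective_norm + intention"
    " + willingness_fight + injury_seriousness + first_aid_knowledge",
    data=df,
).fit()

# Change in variance explained, and an F-test comparing the nested models.
print(f"R-squared change: {extended.rsquared - base.rsquared:.3f}")
f_value, p_value, df_diff = extended.compare_f_test(base)
print(f"F = {f_value:.2f}, p = {p_value:.4f}, df diff = {df_diff}")
```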

Relevance:

30.00%

Publisher:

Abstract:

Interest in utilising multiple heterogeneous Unmanned Aerial Vehicles (UAVs) in close proximity is growing rapidly. This presents many challenges for the effective coordination and management of these UAVs: converting the current n-to-1 paradigm (n operators operating a single UAV) to the 1-to-n paradigm (one operator managing n UAVs). This paper introduces an information abstraction methodology used to produce the functional capability framework initially proposed by Chen et al. and its Level Of Detail (LOD) indexing scale. The framework was validated by comparing operator workload and Situation Awareness (SA) across three experiment scenarios involving multiple heterogeneous autonomous UAVs. The first scenario used a high-LOD configuration with highly abstracted UAV functional information; the second used a mixed-LOD configuration; and the final scenario used a low-LOD configuration with maximal UAV functional information. Results show a statistically significant decrease in operator workload when a UAV's functional information is displayed in its physical form (low LOD, maximal information) compared with the mixed-LOD configuration.

Relevance:

30.00%

Publisher:

Abstract:

The autonomous capabilities of collaborative unmanned aircraft systems are growing rapidly. Without appropriate transparency, the effectiveness of the future multiple Unmanned Aerial Vehicle (UAV) management paradigm will be significantly limited by the human agent's cognitive abilities, with the operator's Cognitive Workload (CW) and Situation Awareness (SA) becoming disproportionate. This poses the challenge of evaluating the impact of robot autonomous-capability feedback, which allows the human agent, in a supervisory role, greater transparency into the robot's autonomous status. This paper presents the motivation, aim, related works, experiment theory, methodology, results and discussion, and the future work succeeding this preliminary study. The results illustrate that, with greater transparency of a UAV's autonomous capability, an overall improvement in the subjects' cognitive abilities was evident: with a confidence of 95%, the test subjects' mean CW showed a statistically significant reduction, while their mean SA showed a significant increase.
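
The reported 95%-confidence comparison of mean CW across transparency conditions is the kind of result a within-subjects (paired) test produces. The sketch below shows such a test with SciPy; the per-subject scores are invented, and the study's actual instruments and analysis may differ.

```python
import numpy as np
from scipy import stats

# Invented per-subject workload scores for two transparency conditions;
# not the study's data.
cw_low_transparency  = np.array([72, 65, 80, 70, 68, 75, 77, 69])
cw_high_transparency = np.array([60, 58, 71, 62, 59, 66, 70, 61])

# Paired (within-subjects) t-test at the 95% confidence level.
t, p = stats.ttest_rel(cw_low_transparency, cw_high_transparency)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```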

Relevance:

30.00%

Publisher:

Abstract:

In the future, the number of disabled drivers requiring a special evaluation of their driving ability will increase, owing to the ageing population as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning what is actually assessed in a driver evaluation (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). In order to find answers to these questions we carried out empirical studies and simultaneously elaborated a conceptual model of driving and driver evaluation.

The findings of the empirical studies can be condensed into the following points. 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group: faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the experienced-driver group. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably evaluate a disabled person's driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists' evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; the correct mental actions which the driver carries out with the help of the control devices are of greatest importance. 6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person's driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions.

From the work with the conceptual model we drew the following methodological conclusions. First, the driver has to be considered as a holistic subject of the activity: a multilevel, hierarchically organised system of an organism, a temperament, an individuality and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a matter of matching these two hierarchically organised structures: the subject of an activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred.

On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for demanding driver evaluations. What is primary in driver evaluations is a coherent conceptual model, while the concrete methods of evaluation may vary. However, the on-road test must always be performed if possible.

Relevance:

30.00%

Publisher:

Abstract:

An attempt is made to provide a theoretical explanation of the effect of the positive column on the voltage-current characteristic of a glow or an arc discharge. Such theories have been developed before, and all are based on balancing the production and loss of charged particles and accounting for the energy supplied to the plasma by the applied electric field. Differences among the theories arise from the approximations and omissions made in selecting processes that affect the particle and energy balances. This work is primarily concerned with the deviation from the ambipolar description of the positive column caused by space charge, electron-ion volume recombination, and temperature inhomogeneities.

The presentation is divided into three parts, the first of which involves the derivation of the final macroscopic equations from kinetic theory. The final equations are obtained by taking the first three moments of the Boltzmann equation for each of the three species in the plasma. Although the method used and the equations obtained are not novel, the derivation is carried out in detail in order to appraise the validity of numerous approximations and to justify the use of data from other sources. The equations are applied to a molecular hydrogen discharge contained between parallel walls. The applied electric field is parallel to the walls, and the dependent variables—electron and ion flux to the walls, electron and ion densities, transverse electric field, and gas temperature—vary only in the direction perpendicular to the walls. The mathematical description is given by a sixth-order nonlinear two-point boundary value problem which contains the applied field as a parameter. The amount of neutral gas and its temperature at the walls are held fixed, and the relation between the applied field and the electron density at the center of the discharge is obtained in the process of solving the problem. This relation corresponds to that between current and voltage and is used to interpret the effect of space charge, recombination, and temperature inhomogeneities on the voltage-current characteristic of the discharge.
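
For orientation, a schematic version of the particle balance at the heart of this boundary value problem can be written, in standard notation rather than the thesis's exact variables, as the steady-state ambipolar diffusion equation with ionization and volume recombination between walls at x = ±L; the space-charge and temperature-inhomogeneity terms studied later act as corrections to this balance.

```latex
% Schematic steady-state particle balance (standard notation, assumed here):
% ambipolar diffusion coefficient D_a, ionization frequency \nu_i, and
% recombination coefficient \alpha; the density nearly vanishes at the walls.
D_a \frac{\mathrm{d}^{2} n}{\mathrm{d}x^{2}} + \nu_i\, n - \alpha\, n^{2} = 0,
\qquad n(\pm L) \approx 0 .
```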

The complete solution of the equations is impractical both numerically and analytically, and in Part II the gas temperature is assumed uniform so as to focus on the combined effects of space charge and recombination. The terms representing these effects are treated as perturbations to equations that would otherwise describe the ambipolar situation. However, the term representing space charge is not negligible in a thin boundary layer or sheath near the walls, and consequently the perturbation problem is singular. Separate solutions must be obtained in the sheath and in the main region of the discharge, and the relation between the electron density and the applied field is not determined until these solutions are matched.

In Part III the electron and ion densities are assumed equal, and the complicated space-charge calculation is thereby replaced by the ambipolar description. Recombination and temperature inhomogeneities are both important at high values of the electron density. However, the formulation of the problem permits a comparison of the relative effects, and temperature inhomogeneities are shown to become important at lower values of the electron density than recombination does. The equations are solved by direct numerical integration and by treating the term representing temperature inhomogeneities as a perturbation.

The conclusions reached in the study are primarily concerned with the association of the relation between electron density and axial field with the voltage-current characteristic. It is known that the effect of space charge can account for the subnormal glow discharge and that the normal glow corresponds to a close approach to an ambipolar situation. The effect of temperature inhomogeneities helps explain the decreasing characteristic of the arc, and the effect of recombination is not expected to appear except at very high electron densities.

Relevance:

30.00%

Publisher:

Abstract:

Learning to perceive is faced with a classical paradox: if understanding is required for perception, how can we learn to perceive something new, something we do not yet understand? According to the sensorimotor approach, perception involves mastery of regular sensorimotor co-variations that depend on the agent and the environment, also known as the "laws" of sensorimotor contingencies (SMCs). In this sense, perception involves enacting relevant sensorimotor skills in each situation. It is important for this proposal that such skills can be learned and refined with experience; yet, to date, the sensorimotor approach has had no explicit theory of perceptual learning. The situation is made more complex if we acknowledge the open-ended nature of human learning. In this paper we propose Piaget's theory of equilibration as a potential candidate to fulfill this role. This theory highlights the importance of intrinsic sensorimotor norms, in terms of the closure of sensorimotor schemes. It also explains how the equilibration of a sensorimotor organization faced with novelty or breakdowns proceeds by re-shaping pre-existing structures in coupling with dynamical regularities of the world. In this way, learning to perceive is guided by the equilibration of emerging forms of skillful coping with the world. We demonstrate the compatibility between Piaget's theory and the sensorimotor approach by providing a dynamical formalization of equilibration to give an explicit micro-genetic account of sensorimotor learning and, by extension, of how we learn to perceive. This allows us to draw important lessons in the form of general principles for open-ended sensorimotor learning, including the need for an intrinsic normative evaluation by the agent itself. We also explore implications of our micro-genetic account at the personal level.

Relevance:

30.00%

Publisher:

Abstract:

When we reason about change over time, causation provides an implicit preference: we prefer sequences of situations in which one situation leads causally to the next, rather than sequences in which one situation follows another at random and without causal connections. In this paper, we explore the problem of temporal reasoning, that is, reasoning about change over time, and the crucial role that causation plays in our intuitions. We examine previous approaches to temporal reasoning, and their shortcomings, in light of this analysis. We propose a new system for causal reasoning, motivated action theory, which builds upon causation as a crucial preference criterion. Motivated action theory solves the traditional problems of both forward and backward reasoning, and additionally provides a basis for a new theory of explanation.
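
As a toy illustration of the preference described above (not the formal machinery of motivated action theory), one can rank candidate histories by how many of their transitions lack a causal justification, preferring histories with fewer unmotivated steps:

```python
# Toy illustration of preferring causally connected histories; the rule
# base and states are invented, and this is only the intuition, not the
# paper's formal theory.

# Transitions we regard as causally motivated.
CAUSES = {
    ("match_struck", "match_burning"),
    ("match_burning", "paper_burning"),
}

def unmotivated_steps(history: list[str]) -> int:
    """Count transitions with no causal justification; fewer is preferred."""
    return sum((a, b) not in CAUSES
               for a, b in zip(history, history[1:]))

h1 = ["match_struck", "match_burning", "paper_burning"]   # causally chained
h2 = ["match_struck", "paper_burning", "match_burning"]   # random ordering

print(min([h1, h2], key=unmotivated_steps))   # -> h1, the motivated history
```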

Relevance:

30.00%

Publisher:

Abstract:

This paper explores the changing role of contemporary grandparents, many of whom demonstrate a willingness and ability to take on parental responsibilities for their grandchildren, where they may face challenges and opportunities in difficult times. Three main forms of grandparenting are identified in the literature: those who have primary responsibility and are raising their grandchildren as their main carers, perhaps in response to crisis situations; those who live in extended families and participate in care; and those who provide day care while the child's parents work. The latter has increased because of the increasing frequency of divorce, single parenting and the lack of available or subsidised childcare in the United Kingdom. When grandparents step into a troubled situation and attempt to offer stability and security for their grandchildren, they may have to manage the combined responsibilities of family caregivers and parental figures. Grandparenthood is a tenuous role, lacking clear agreement on behavioural norms. In the culture of advice and parenting support, while care must be taken not to undermine parenting skills or make judgements about the ability to cope with the demands of childcare, an exploration of the impact on grandparents and children must be undertaken. Owing to the complex web of interrelated factors, the process and outcomes of caregiving by grandparents are not well known in the literature. It is therefore proposed that it is timely for research to be undertaken to explore and develop a theory of grandparenthood.

Relevance:

30.00%

Publisher:

Abstract:

Genes, species and ecosystems are often considered to be assets, and the need to ensure a sufficient diversity of these assets is increasingly recognised today. Asset managers in banks and insurance companies face a similar challenge: they are asked to manage the assets of their investors by constructing efficient portfolios. They deliberately make use of a phenomenon observed in the formation of portfolios: returns are additive, while risks diversify. This phenomenon and its implications are at the heart of portfolio theory. Portfolio theory, like few other economic theories, has dramatically transformed the practical work of banks and insurance companies. Before portfolio theory was developed, about 50 years ago, asset managers were confronted with a situation similar to the one biodiversity research faces today: while the need for diversification was generally accepted, a concept that linked risk and return at the portfolio level and showed the value of diversification was missing. Portfolio theory has closed this gap. This article first explains the fundamentals of portfolio theory and transfers them to biodiversity. A large part of the article is then dedicated to some of the implications portfolio theory has for the valuation and management of biodiversity. The last section introduces three openings for further research.
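
The phenomenon the article builds on, additive returns and diversifying risks, is captured by the standard two-asset portfolio formulas; with weights w1 + w2 = 1, expected returns R1 and R2, and correlation ρ between the assets:

```latex
% Expected portfolio return is the weighted average of the asset returns;
% portfolio risk falls below the weighted average whenever \rho < 1.
R_p = w_1 R_1 + w_2 R_2, \qquad
\sigma_p^{2} = w_1^{2}\sigma_1^{2} + w_2^{2}\sigma_2^{2}
             + 2 w_1 w_2 \rho\, \sigma_1 \sigma_2 .
```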

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new statistical signal reception model for shadowed body-centric communications channels. In this model, the potential clustering of multipath components is considered alongside the presence of elective dominant signal components. As typically occurs in body-centric communications channels, the dominant or line-of-sight (LOS) components are shadowed by body matter situated in the path trajectory; this situation may be further exacerbated by physiological and biomechanical movements of the body. In the proposed model, the resultant dominant component, formed by the phasor addition of these leading contributions, is assumed to follow a lognormal distribution. A wide range of measured and simulated shadowed body-centric channels covering on-body, off-body and body-to-body communications are used to validate the model. During the course of the validation experiments it was found that, even in environments devoid of multipath or specular reflections generated by the local surroundings, a noticeable resultant dominant component can still exist in body-centric channels where the user's body shadows the direct LOS signal path between the transmitter and the receiver.
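
For reference, the lognormal density assumed for the magnitude of the resultant dominant component has the standard form below; μ and σ denote the mean and standard deviation of the underlying normal variable, in conventional notation rather than the paper's:

```latex
% Lognormal density assumed for the resultant dominant component.
f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}
       \exp\!\left(-\frac{(\ln x - \mu)^{2}}{2\sigma^{2}}\right),
\qquad x > 0 .
```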

Relevance:

30.00%

Publisher:

Abstract:

The present paper studies the probability of ruin of an insurer when excess-of-loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
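
Because the integral operator is contractive, the recursive algorithm mentioned above is an instance of successive approximation; writing T for the operator, ψ for the finite-time ruin probability and L < 1 for the contraction modulus (notation assumed here, not the paper's), the Banach fixed-point theorem gives the standard a priori error bound:

```latex
% Successive approximation to the fixed point \psi = T\psi of a contraction T:
\psi_{n+1} = T \psi_{n}, \qquad
\lVert \psi_{n} - \psi \rVert \le \frac{L^{n}}{1-L}\,
\lVert \psi_{1} - \psi_{0} \rVert .
```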