Abstract:
Virtual fencing has the potential to control grazing livestock. Understanding and refining the cues that can alter behaviour is an integral part of autonomous animal control. A series of tests was completed to explore the relationship between temperament and control. Prior to exposure to virtual fencing control, the animals were scored for temperament using flight speed, and for a sociability index using contact logging devices. The behavioural responses of 30 Belmont Red steers were observed when the animals were presented with cues prior to receiving an electrical stimulation. A control and four treatments designed to interrupt the animals' movement down an alley were tested. The treatments consisted of sound plus electrical stimulation, vibration plus electrical stimulation, a visual cue plus electrical stimulation, and electrical stimulation by itself. The treatments were randomly applied to each animal over five consecutive trials. A control treatment in which no cues were applied was used to establish a basal behavioural pattern. A trial was considered complete after each animal had been retained behind the cue barrier for at least 60 s. All cues and electrical stimulations were manually applied from a laptop located on a portable 3.5 m tower immediately outside the alley. The electrical stimulation consisted of 1.0 kV. Electrical stimulation, sound and vibration events, along with the animal's path recorded autonomously by Global Positioning System (GPS) hardware, were logged every second.
Abstract:
We have developed a new experimental method for interrogating statistical theories of music perception by implementing these theories as generative music algorithms. We call this method Generation in Context. This method differs from most experimental techniques in music perception in that it incorporates aesthetic judgments. Generation in Context is designed to measure percepts for which the musical context is suspected to play an important role. In particular, the method is suitable for the study of perceptual parameters which are temporally dynamic. We outline a use of this approach to investigate David Temperley's (2007) probabilistic melody model, and provide some provisional insights as to what is revealed about the model. We suggest that Temperley's model could be improved by dynamically modulating the probability distributions according to the changing musical context.
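The suggested improvement, dynamically modulating the probability distributions according to the changing musical context, can be sketched in a small example. The Gaussian proximity profile, the MIDI pitch range, and the blend weight below are illustrative assumptions, not components of Temperley's actual model.

```python
import math

def next_pitch_distribution(prev_pitch, context_center, sigma=2.0):
    """Sketch of a context-modulated pitch distribution.

    A proximity profile (a Gaussian around the previous pitch) is
    recentred toward a running summary of the recent musical context
    (`context_center`, e.g. the mean of recent pitches). The 0.7/0.3
    blend weight is an illustrative assumption.

    Returns a probability distribution over MIDI pitches 48..84.
    """
    pitches = range(48, 85)
    # Shift the centre of the proximity profile toward the context.
    center = 0.7 * prev_pitch + 0.3 * context_center
    weights = [math.exp(-((p - center) ** 2) / (2 * sigma ** 2)) for p in pitches]
    total = sum(weights)
    return {p: w / total for p, w in zip(pitches, weights)}
```

With a stable context the distribution peaks at the previous pitch; as the context drifts upward, the most probable next pitch drifts with it, which is the kind of temporally dynamic behaviour the method is designed to probe.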
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems.
This process divides the infrastructure management process over time into self contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules, are met.
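The paired comparison weighting idea can be illustrated with a minimal sketch. The win-counting scheme and the smoothing floor below are illustrative assumptions, not the thesis's exact procedure, which also incorporates utility functions for qualitative variables and risk.

```python
def paired_comparison_weights(objectives, prefer):
    """Derive relative weights for a set of objectives from a paired
    comparison process (a simplified sketch).

    `prefer(a, b)` returns whichever of the two objectives is
    preferred in a head-to-head comparison. Every distinct pair is
    compared once; weights are normalised win counts.
    """
    wins = {obj: 0 for obj in objectives}
    for i, a in enumerate(objectives):
        for b in objectives[i + 1:]:
            wins[prefer(a, b)] += 1
    # A small floor avoids a zero weight for an objective that loses
    # every comparison (an illustrative smoothing choice).
    floor = 0.5
    adjusted = {obj: wins[obj] + floor for obj in objectives}
    total = sum(adjusted.values())
    return {obj: adjusted[obj] / total for obj in objectives}
```

For example, with three objectives ranked safety over cost over service by the comparison function, the resulting weights are ordered the same way and sum to one.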
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
Abstract:
This work is a digital version of a dissertation first submitted in partial fulfilment of the Degree of Doctor of Philosophy at the Queensland University of Technology (QUT) in March 1994. The work was concerned with problems of self-organisation and organisation ranging from local to global levels of hierarchy. It considers organisations as living entities, and examines what a living entity – more particularly, an individual, a body corporate or a body politic – must know and do to maintain an existence – that is, to remain viable – or to be sustainable. The term 'land management' as used in 1994 was later subsumed into the more general concept of 'natural resource management' and then merged with ideas about sustainable socioeconomic and ecological development. The cybernetic approach contains many cognitive elements of human observation, language and learning that combine into production processes. The approach tends to highlight instances where systems (or organisations) fail because they have very little chance of succeeding. Thus there are logical necessities as well as technical possibilities in designing, constructing, operating and maintaining production systems that function reliably over extended periods. The chapter numbers and titles of the original thesis are as follows: 1. Land management as a problem of coping with complexity; 2. Background theory in systems theory and cybernetic principles; 3. Operationalisation of cybernetic principles in Beer's Viable System Model; 4. Issues in the design of viable cadastral surveying and mapping organisation; 5. An analysis of the tendency for fragmentation in surveying and mapping organisation; 6. Perambulating the boundaries of Sydney – a problem of social control under poor standards of literacy; 7. Cybernetic principles in the process of legislation; 8. Closer settlement policy and viability in agricultural production; 9. Rate of return in leasing Crown lands.
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes Organizational-Impact and Individual-Impact dimensions; the "quality" half includes System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employs perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that ‘says what is’: base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?"
The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. The Analytic Theory attributes associated with the ‘rigor’ of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom received the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory ‘relevance’ attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making). In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study also aims to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?"
The study employs a longitudinal design entailing three surveys over four years (the first involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the second survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and also thereby motivating continuing and further research on the validity of IS-Impact and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on the modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the Integrated Mean Canonical Statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
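The binary MRF layer can be illustrated with a single-site Gibbs sampler for a three-parameter autologistic model. The parameterisation below (an intercept plus separate row- and column-interaction coefficients) and the free lattice boundary are assumptions for illustration; the thesis embeds such updates in hybrid Metropolis/Gibbs samplers within the full hierarchical model.

```python
import numpy as np

def gibbs_autologistic(shape, alpha, beta_r, beta_c, n_sweeps, rng):
    """Single-site Gibbs sampler for a three-parameter autologistic
    model on a rectangular lattice (illustrative parameterisation:
    alpha sets overall prevalence; beta_r and beta_c set the strength
    of horizontal and vertical neighbour interaction)."""
    H, W = shape
    x = rng.integers(0, 2, size=shape)  # random binary start
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # Sums of horizontal and vertical neighbours (free boundary).
                s_r = (x[i, j - 1] if j > 0 else 0) + (x[i, j + 1] if j < W - 1 else 0)
                s_c = (x[i - 1, j] if i > 0 else 0) + (x[i + 1, j] if i < H - 1 else 0)
                # Full conditional is logistic in the neighbour sums.
                eta = alpha + beta_r * s_r + beta_c * s_c
                p1 = 1.0 / (1.0 + np.exp(-eta))
                x[i, j] = 1 if rng.random() < p1 else 0
    return x
```

With the interaction coefficients set to zero the sites become independent Bernoulli draws, which gives a simple sanity check; positive coefficients produce the spatial clumping that motivates the MRF layer.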
Abstract:
Mindfulness is a concept which has been widely used in studies on consciousness, but has recently been applied to the understanding of behaviours in other areas, including clinical psychology, meditation, physical activity, education and business. It has been suggested that mindfulness can also be applied to road safety, though this has not yet been researched. A standard definition of mindfulness is “paying attention in a particular way, on purpose in the present moment and non-judgemental to the unfolding of experience moment by moment” [1]. Scales have been developed to measure mindfulness; however, there are different views in the literature on the nature of the mindfulness construct. This paper reviews the issues raised in the literature and arrives at an operational definition of mindfulness considered relevant to road safety. It is further proposed that mindfulness is best construed as operating together with other psychosocial factors to influence road safety behaviours. The specific case of speeding behaviour is outlined, where the psychosocial variables in the Theory of Planned Behaviour (TPB) have been demonstrated to predict both intention to speed and actual speeding behaviour. A role is proposed for mindfulness in enhancing the explanatory and predictive powers of the TPB concerning speeding. The implications of mindfulness for speeding countermeasures are discussed and a program of future research is outlined.
Abstract:
The term ‘public discourses’ describes a range of texts or signifiers that inform the conditions of audience reception. Public discourses include the myriad written, visual, spatial, auditory and sensory texts experienced by an audience at a particular theatrical event. Ric Knowles first introduced this term in his recent work Reading the Material Theatre. Whereas Knowles was interested in how public discourses modify the conditions of reception, my broader research explores how these public discourses become texts in themselves. This paper discusses one public discourse, the theatre programme, as it related to a staging of Maxwell Anderson's Anne of the Thousand Days at the Brisbane Powerhouse in June 2006. The significance of the programme was explored at symposiums held after the performances. Audiences generally view programmes both before and after a performance, and the programme's significance as a written text changes between these viewings. The programme became a sign vehicle that worked to expound and explicate the meaning of the play for the audience. This public discourse became a significant written text contributing to the textual whole of the theatrical event.
Abstract:
Structural health monitoring has been accepted as a justified effort for long-span bridges, which are critical to a region's economic vitality. The Wind And Structural Health Monitoring System (WASHMS), the most heavily instrumented bridge monitoring project in the world, has been developed and installed on the cable-supported bridges in Hong Kong (Wong and Ni 2009a). This chapter aims to share some of the experience gained through the operation and study of WASHMS. It is concluded that Structural Health Monitoring should be composed of two main components: Structural Performance Monitoring (SPM) and Structural Safety Evaluation (SSE). As an example to illustrate how the WASHMS could be used for structural performance monitoring, the layout of the sensory system installed on the Tsing Ma Bridge is briefly described. To demonstrate the two broad approaches to structural safety evaluation (Structural Health Assessment and Damage Detection), three examples of the application of SHM information are presented. These three examples can be considered pioneer works for the research and development of the structural diagnosis and prognosis tools required by structural health monitoring for monitoring and evaluation applications.
Abstract:
Patients with idiopathic small fibre neuropathy (ISFN) have been shown to have significant intraepidermal nerve fibre loss and an increased prevalence of impaired glucose tolerance (IGT). It has been suggested that the dysglycemia of IGT and additional metabolic risk factors may contribute to small nerve fibre damage in these patients. Twenty-five patients with ISFN and 12 age-matched control subjects underwent a detailed evaluation of neuropathic symptoms and neurological deficits (Neuropathy Deficit Score, NDS), together with Nerve Conduction Studies (NCS), Quantitative Sensory Testing (QST) and Corneal Confocal Microscopy (CCM), to quantify small nerve fibre pathology. Eight (32%) patients had IGT. Whilst all patients with ISFN had significant neuropathic symptoms, NDS, NCS and QST (except for warm thresholds) were normal. Corneal sensitivity was reduced, and CCM demonstrated a significant reduction in corneal nerve fibre density (NFD) (P<0.0001), nerve branch density (NBD) (P<0.0001) and nerve fibre length (NFL) (P<0.0001), and an increase in nerve fibre tortuosity (NFT) (P<0.0001). However, these parameters did not differ between ISFN patients with and without IGT, nor did they correlate with BMI, lipids or blood pressure. Corneal confocal microscopy provides a sensitive, non-invasive means to detect small nerve fibre damage in patients with ISFN, and metabolic abnormalities do not relate to nerve damage.
Abstract:
This paper considers the use of servo-mechanisms as part of a tightly integrated homogeneous Wireless Multimedia Sensor Network (WMSN). We describe the design of our second-generation WMSN node platform, which has increased image resolution, in-built audio sensors, PIR sensors, and servo-mechanisms. These devices have a wide disparity in their energy consumption and in the information quality they return. As a result, we propose a framework that establishes a hierarchy of devices (sensors and actuators) within the node and uses frequent sampling of cheaper devices to trigger the activation of more energy-hungry devices. Within this framework, we consider the suitability of servos for WMSNs by examining their functional characteristics and by measuring the energy consumption of two analog and two digital servos, in order to determine their impact on overall node energy cost. We also implement a simple version of our hierarchical sampling framework to evaluate the energy consumption of servos relative to other node components. The evaluation results show that: (1) the energy consumption of servos is small relative to the audio/image signal processing energy cost in WMSN nodes; (2) digital servos do not necessarily consume as much energy as is currently believed; and (3) the energy cost per degree of panning is lower for larger panning angles.
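The hierarchical sampling idea, polling a cheap sensor frequently and waking an expensive device only when the cheap reading crosses a threshold, can be sketched as follows. The energy costs and the threshold are illustrative placeholders, not the measurements reported in the paper.

```python
class TieredSampler:
    """Sketch of a two-tier sampling hierarchy within a sensor node:
    a cheap device (e.g. a PIR sensor) is sampled on every step, and
    an energy-hungry device (e.g. image capture with servo panning)
    is activated only when the cheap reading exceeds a threshold.
    Cost figures are illustrative, in arbitrary energy units."""

    def __init__(self, cheap_cost=0.1, expensive_cost=50.0, threshold=0.8):
        self.cheap_cost = cheap_cost          # per cheap-sensor sample
        self.expensive_cost = expensive_cost  # per expensive activation
        self.threshold = threshold
        self.energy_used = 0.0
        self.activations = 0

    def step(self, cheap_reading):
        """Process one sampling period given a normalised cheap reading."""
        self.energy_used += self.cheap_cost
        if cheap_reading >= self.threshold:
            self.energy_used += self.expensive_cost
            self.activations += 1
```

The design point is that total energy is dominated by the number of expensive activations, so a well-chosen threshold on the cheap tier directly controls node lifetime.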
Abstract:
Study Design: Case study series.
Introduction: Restriction of forearm rotation may be required for effective management and rehabilitation of the upper limb after trauma.
Purpose of the Study: To compare the effectiveness of four splints in restricting forearm rotation.
Methods: Muenster, Sugartong, antipronation distal radioulnar joint (DRUJ), and standard wrist splints were fabricated for five healthy participants. Active range of motion (AROM) in forearm pronation and supination was measured with a goniometer for each splint, at the initial point of sensory feedback and during exertion of maximal force.
Results: Repeated-measures analysis of variance indicated significant differences between splints for all four AROM measures. Post hoc paired t-tests showed that the Sugartong splint was significantly more restrictive in pronation than the Muenster splint. The antipronation DRUJ splint provided significantly greater restriction in pronation than the standard wrist splint. No splint immobilized the forearm completely.
Conclusions: The Sugartong splint is recommended for maximal restriction in pronation, but individual patient characteristics require consideration in splint choice.
Abstract:
Drivers are known to be optimistic about their risk of crash involvement, believing that they are less likely to be involved in a crash than other drivers. However, little comparative research has been conducted among other road users. In addition, optimism about crash risk is conceptualised as applying only to an individual’s assessment of his or her personal risk of crash involvement. The possibility that the self-serving nature of optimism about safety might be generalised to the group-level as a cyclist or a pedestrian, i.e., becoming group-serving rather than self-serving, has been overlooked in relation to road safety. This study analysed a subset of data collected as part of a larger research project on the visibility of pedestrians, cyclists and road workers, focusing on a set of questionnaire items administered to 406 pedestrians, 838 cyclists and 622 drivers. The items related to safety in various scenarios involving drivers, pedestrians and cyclists, allowing predictions to be derived about group differences in agreement with items based on the assumption that the results would exhibit group-serving bias. Analysis of the responses indicated that specific hypotheses about group-serving interpretations of safety and responsibility were supported in 22 of the 26 comparisons. When the nine comparisons relevant to low lighting conditions were considered separately, seven were found to be supported. The findings of the research have implications for public education and for the likely acceptance of messages which are inconsistent with current assumptions and expectations of pedestrians and cyclists. They also suggest that research into group-serving interpretations of safety, even for temporary roles rather than enduring groups, could be fruitful. 
Further, there is an implication that gains in safety can be made by better educating road users about the limitations of their visibility and the ramifications of this for their own road safety, particularly in low light.
Abstract:
Fatigue has been recognised as the primary contributing factor in approximately 15% of all fatal road crashes in Australia. To develop effective countermeasures for managing fatigue, this study investigates why drivers continue to drive when sleepy, as well as driver perceptions and behaviours regarding countermeasures. Based on responses from 305 Australian drivers, it was identified that the major reasons why these participants continued to drive when sleepy were: wanting to get to their destination; being close to home; and time factors. Participants' perceptions and use of 18 fatigue countermeasures were investigated. It was found that participants perceived the safest strategies, including stopping and sleeping, swapping drivers and stopping for a quick nap, to be the most effective countermeasures. However, it appeared that their knowledge of safe countermeasures did not translate into their use of these strategies. For example, although the drivers perceived stopping for a quick nap to be an effective countermeasure, they reported more frequent use of less safe methods such as stopping to eat or drink and winding down the window. This finding suggests that, while practitioners should continue educating drivers, they may need a greater focus on motivating drivers to implement safe fatigue countermeasures.
Abstract:
A number of advanced driver assistance systems (ADAS) are currently being released on the market, providing safety functions to drivers such as collision avoidance, adaptive cruise control or enhanced night vision. These systems, however, are inherently limited by their sensory range: they cannot gather information from outside this range, also called their “perceptive horizon”. Cooperative systems are a developing research avenue that aims to provide extended safety and comfort functionalities by introducing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications between road actors. This paper presents the challenges addressed by cooperative systems and their advantages and contributions to road safety, and exposes some limitations related to market penetration, sensor accuracy and communications scalability. It explains the issues involved in implementing extended perception, a central contribution of cooperative systems. The initial steps of an evaluation of data fusion architectures for extended perception are presented.
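Extended perception rests on fusing estimates received over V2V/V2I links with a vehicle's own sensing. A minimal sketch, assuming independent scalar position estimates with known variances and simple inverse-variance weighting (an illustrative baseline, not one of the architectures under evaluation in the paper):

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of scalar estimates.

    `estimates` is a list of (value, variance) pairs, e.g. one
    vehicle's own position estimate plus estimates of the same
    quantity shared by neighbouring vehicles. Assumes independent
    Gaussian errors. Returns the fused value and its variance.
    """
    numerator = sum(value / var for value, var in estimates)
    precision = sum(1.0 / var for _, var in estimates)
    fused = numerator / precision
    fused_var = 1.0 / precision  # always <= the smallest input variance
    return fused, fused_var
```

The fused variance is smaller than any single input's, which is the quantitative payoff of sharing observations beyond each vehicle's perceptive horizon.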