85 results for non-conscious cognitive processing (NCCP) time.


Relevance: 40.00%

Publisher:

Abstract:

Separability is a concept that is very difficult to define, and yet much of our scientific method is implicitly based upon the assumption that systems can sensibly be reduced to a set of interacting components. This paper examines the notion of separability in the creation of bi-ambiguous compounds, using a test based upon the CHSH and CH inequalities. It reports results of an experiment showing that violations of the CHSH and CH inequalities can occur in human conceptual combination.
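As a rough illustration of the kind of test involved, the following sketch (with illustrative numbers only, not data from the reported experiment) computes the CHSH quantity from four joint outcome distributions; any separable (factorizable) model must satisfy S <= 2, so larger values signal non-separability.

    # Minimal sketch: the CHSH quantity from four measured correlations,
    # each estimated from a joint outcome distribution p(x, y) with
    # outcomes x, y in {+1, -1}. All probabilities below are illustrative.

    def correlation(p):
        """E = sum of x * y * p(x, y) over x, y in {+1, -1}."""
        return sum(x * y * p[(x, y)] for x in (+1, -1) for y in (+1, -1))

    # Hypothetical joint distributions for the four measurement settings
    # (a, b), (a, b'), (a', b), (a', b') of a conceptual combination.
    p_ab   = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}
    p_ab2  = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}
    p_a2b  = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}
    p_a2b2 = {(+1, +1): 0.1, (+1, -1): 0.4, (-1, +1): 0.4, (-1, -1): 0.1}

    # CHSH: |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2 for any separable
    # model; here S = 2.4, an illustrative violation.
    S = abs(correlation(p_ab) + correlation(p_ab2)
            + correlation(p_a2b) - correlation(p_a2b2))
    print(f"CHSH S = {S:.2f} (separable models satisfy S <= 2)")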

Relevance: 40.00%

Publisher:

Abstract:

This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal through a template-based similarity measure between adjacent frames. The decisions output by the two systems are then merged using an opening-by-reconstruction fusion stage. The accuracy of the proposed method was compared to that of several baseline VAD methods on a database created using real recordings of a variety of high-noise environments.
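The abstract does not give the exact feature definitions or fusion stage, so the following is only a loose Python approximation (assuming numpy and scipy, hypothetical frame sizes and thresholds, and a simple OR rule standing in for the opening-by-reconstruction fusion).

    # Rough sketch: a per-frame non-Gaussianity score (NGS) via a
    # normality test, and a histogram distance score (HDS) between
    # adjacent frames, fused by a simple OR rule.
    import numpy as np
    from scipy import stats

    def frame_signal(x, frame_len=400, hop=200):
        n = 1 + max(0, (len(x) - frame_len) // hop)
        return np.stack([x[i*hop : i*hop + frame_len] for i in range(n)])

    def ngs(frames):
        # Higher score = stronger departure from Gaussianity (speech-like).
        return np.array([-np.log10(stats.normaltest(f).pvalue + 1e-300)
                         for f in frames])

    def hds(frames, bins=50):
        hists = [np.histogram(f, bins=bins, density=True)[0] for f in frames]
        d = [np.abs(hists[i] - hists[i-1]).sum() for i in range(1, len(hists))]
        return np.concatenate([[0.0], d])

    x = np.random.randn(16000)                 # stand-in for a noisy recording
    f = frame_signal(x)
    speech = (ngs(f) > 5.0) | (hds(f) > 1.0)   # hypothetical thresholds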

Relevance: 40.00%

Publisher:

Abstract:

The study investigated the effect on learning of four different instructional formats used to teach assembly procedures. Cognitive load and spatial information processing theories were used to generate the instructional material. The first group received a physical model to study, the second an isometric drawing, the third an isometric drawing plus a model, and the fourth an orthographic drawing. Forty secondary school students were presented with the four different instructional formats and subsequently tested on an assembly task. The findings suggested that the model format, which only required encoding of an already constructed three-dimensional representation, caused less extraneous cognitive load than the isometric and orthographic formats. No significant difference was found between the model and the isometric-plus-model formats on any measure, because 80% of the students in the isometric-plus-model group chose to use the model only. The model format also did not differ significantly from the other groups in total time taken to complete the assembly, in number of correctly assembled pieces, or in time spent studying the tasks. However, the model group had significantly more correctly completed models and required fewer extra looks than the other groups.

Relevance: 40.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for assessing tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability, and the non-invasive methods of tear assessment that have been investigated also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity, and improving techniques for assessing tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to clearly show a difference between bare eye and contact lens wearing conditions. Thus, HSV appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT).
The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, with receiver operating characteristic (ROC) curves calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase provided insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing); unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria, with special attention given to a commonly used fit, the polynomial function, and to selecting the appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of high-speed videokeratoscopy for assessing tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects), and could become a useful clinical tool for assessing tear film surface quality.
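A minimal sketch of the polar-transformation metric described above, assuming numpy and scipy; the block statistic (local standard deviation) and all parameters are hypothetical choices, since the abstract does not specify them.

    # Resample a Placido ring image from Cartesian to polar coordinates
    # (rings become quasi-straight lines), then take a block statistic
    # as a tear film surface quality measure.
    import numpy as np
    from scipy import ndimage

    def polar_unwrap(img, center, n_r=128, n_theta=360, r_max=None):
        cy, cx = center
        r_max = r_max or min(cy, cx)
        r = np.linspace(0, r_max, n_r)
        t = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(r, t, indexing="ij")
        coords = np.stack([cy + rr*np.sin(tt), cx + rr*np.cos(tt)])
        return ndimage.map_coordinates(img, coords, order=1)

    def block_metric(polar_img, block=(16, 30)):
        bh, bw = block
        h, w = polar_img.shape
        stds = [polar_img[i:i+bh, j:j+bw].std()
                for i in range(0, h - bh + 1, bh)
                for j in range(0, w - bw + 1, bw)]
        return float(np.mean(stds))   # more spread = more ring disturbance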

Relevance: 40.00%

Publisher:

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and are referred to as analog signals. Prior to the advent of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering that occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only had an impact on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
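As a small taste of the Part I material, the sketch below implements discrete convolution, y[n] = sum over k of x[k] h[n-k], directly and checks it against numpy's built-in routine; the two-tap moving-average filter is a generic example, not one drawn from the book.

    import numpy as np

    def convolve(x, h):
        """Direct evaluation of y[n] = sum_k x[k] * h[n - k]."""
        y = [0.0] * (len(x) + len(h) - 1)
        for n in range(len(y)):
            for k in range(len(x)):
                if 0 <= n - k < len(h):
                    y[n] += x[k] * h[n - k]
        return y

    x = [1.0, 2.0, 3.0]
    h = [0.5, 0.5]                   # two-tap moving-average filter
    print(convolve(x, h))            # [0.5, 1.5, 2.5, 1.5]
    print(np.convolve(x, h))         # same result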

Relevance: 40.00%

Publisher:

Abstract:

Driving is a vigilance task requiring sustained attention to maintain performance and avoid crashes. Hypovigilance (i.e., a marked reduction in vigilance) while driving manifests as poor driving performance and is commonly attributed to fatigue (Dinges, 1995). However, poor driving performance has been found to be more frequent when driving in monotonous road environments, suggesting that monotony plays a role in generating hypovigilance (Thiffault & Bergeron, 2003b). Research to date has tended to conceptualise monotony as a uni-dimensional task characteristic, typically used over a prolonged period of time to facilitate the study of other factors, most notably fatigue; yet more often than not, more than one exogenous factor relating to the task or operating environment is manipulated to vary or generate monotony (Mascord & Heath, 1992). Here we aimed to explore whether monotony is a multi-dimensional construct determined by characteristics of both the task proper and the task environment. The general assumption that monotony is a task characteristic used solely to elicit hypovigilance or fatigue-related poor performance appears to have led to little rigorous investigation of the exact nature of the relationship. While the two concepts are undoubtedly linked, the independent effect of monotony on hypovigilance remains largely ignored. Nevertheless, there is evidence that monotony effects can emerge very early in vigilance tasks and are not necessarily accompanied by fatigue (see Meuter, Rakotonirainy, Johns, & Wagner, 2005). This phenomenon raises a largely untested empirical question explored in two studies: can hypovigilance emerge as a consequence of task and/or environmental monotony, independent of time on task and fatigue? In Study 1, using a short computerised vigilance task requiring responses to be withheld to infrequent targets, we explored the differential impacts of stimulus and task demand manipulations on the development of a monotonous context and the associated effects on vigilance performance (as indexed by response errors and response times), independent of fatigue and time on task. The role of individual differences (sensation seeking, extroversion and cognitive failures) in moderating monotony effects was also considered. The results indicate that monotony affects sustained attention, with hypovigilance and poorer performance in monotonous than in non-monotonous contexts. Critically, performance decrements emerged early in the task (within 4.3 minutes) and remained consistent over the course of the experiment (21.5 minutes), suggesting that monotony effects can operate independent of time on task and fatigue. A combination of low task demands and low stimulus variability forms a monotonous context characterised by hypovigilance and poor task performance. Variations in task demand and stimulus variability were also found to independently affect performance, suggesting that monotony is a multi-dimensional construct relating to both task monotony (associated with the task itself) and environmental monotony (related to characteristics of the stimulus). Consequently, it can be concluded that monotony is multi-dimensional and is characterised by low variability in stimuli and/or task demands. The proposition that individual differences emerge under conditions of varying monotony, with high sensation seekers and/or extroverts performing worse in monotonous contexts, was only partially supported.
Using a driving simulator, the findings of Study 1 were extended to a driving context to identify the behavioural and psychophysiological indices of monotony-related hypovigilance associated with variations in road design and roadside scenery (Study 2). Supporting the proposition that monotony is a multi-dimensional construct, road design variability emerged as a key moderating characteristic of environmental monotony, with low variability resulting in poor driving performance as indexed by decrements in steering wheel measures (mean lateral position). Sensation seeking also emerged as a moderating factor: participants high in sensation seeking tendencies displayed worse driving behaviour in monotonous conditions. Importantly, impaired driving performance was observed within 8 minutes of commencing the driving task characterised by environmental monotony (low variability in road design) and was not accompanied by a decline in psychophysiological arousal; nor were subjective declines in alertness reported. Since fatigue effects are associated with prolonged driving (van der Hulst, Meijman, & Rothengatter, 2001) and indexed by drowsiness, this pattern of results indicates that monotony can affect driver vigilance independent of time on task and fatigue. Perceptual load theory (Lavie, 1995, 2005) and mindlessness theory (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997) provide useful frameworks for explaining and predicting monotony effects by positing that the low load (of task and/or stimuli) associated with a monotonous task leaves spare attentional capacity, which spills over involuntarily into the processing of task-irrelevant stimuli or task-unrelated thoughts. That is, individuals, even when not fatigued, become easily distracted when performing a highly monotonous task, resulting in hypovigilance and impaired performance. The implications for road safety, including the likely effectiveness of fatigue countermeasures in mitigating monotony-related driver hypovigilance, are discussed.

Relevance: 40.00%

Publisher:

Abstract:

Microwave heating technology is a cost-effective alternative for the heating and curing of materials used in polymer processing. The work presented in this paper addresses the authors' study of the glass transition temperature and curing of materials such as casting resins R2512 and R2515 and laminating resin GPR 2516 in combination with two hardeners, ADH 2403 and ADH 2409. The magnetron microwave generator used in this research operates at a frequency of 2.45 GHz with a hollow rectangular waveguide. During this investigation it was noted that microwave-heated mould materials exhibited higher glass transition temperatures and better microstructure, and that microwave curing required a shorter curing time to reach the maximum percentage cure. From this study it can be concluded that microwave technology can be used effectively to cure new-generation alternative polymer materials for the rapid and efficient manufacture of injection moulds.

Relevance: 40.00%

Publisher:

Abstract:

Objective: Older driver research has mostly focused on identifying the small proportion of older drivers who are unsafe. Little is known about how normal cognitive changes in aging affect driving in the wider population of adults who drive regularly. We evaluated the association of cognitive function and age with driving errors. Method: A sample of 266 drivers aged 70 to 88 years was assessed on abilities that decline in normal aging (visual attention, processing speed, inhibition, reaction time, task switching) and on the UFOV®, a validated screening instrument for older drivers. Participants completed an on-road driving test. Generalized linear models were used to estimate the associations of cognitive factors with specific driving errors and with the number of errors in self-directed and instructor-navigated conditions. Results: All error types increased with chronological age. Reaction time was not associated with driving errors in multivariate analyses. A cognitive factor measuring speeded selective attention and switching was uniquely associated with the most error types. The UFOV predicted blind-spot errors and errors on dual carriageways. After adjusting for age, education and gender, the cognitive factors explained 7% of the variance in the total number of errors in the instructor-navigated condition and 4% of the variance in the self-navigated condition. Conclusion: We conclude that among older drivers, errors increase with age and are associated with speeded selective attention (particularly when it requires attending to stimuli in the periphery of the visual field), task switching, response inhibition and visual discrimination. These abilities should be the target of cognitive training.
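As a hedged illustration of the analysis style reported (synthetic data and hypothetical effect sizes, not the authors' model specification), the sketch below fits a Poisson generalized linear model relating a driving error count to age and an attention factor score using statsmodels.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 266
    age = rng.uniform(70, 88, n)
    attention = rng.normal(0, 1, n)      # speeded selective attention factor
    rate = np.exp(0.5 + 0.03*(age - 70) - 0.2*attention)
    errors = rng.poisson(rate)           # synthetic driving error counts

    X = sm.add_constant(np.column_stack([age, attention]))
    model = sm.GLM(errors, X, family=sm.families.Poisson()).fit()
    print(model.summary())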

Relevance: 40.00%

Publisher:

Abstract:

A time-resolved inverse spatially offset Raman spectrometer was constructed for depth profiling of Raman-active substances in both laboratory and field environments. The system's operating principles and performance are discussed, along with its advantages relative to traditional continuous-wave spatially offset Raman spectrometers. The developed spectrometer uses a combination of space- and time-resolved detection to obtain high-quality Raman spectra from substances hidden behind coloured opaque surface layers, such as plastics and garments, with a single measurement. The time-gated spatially offset Raman spectrometer was successfully used to detect concealed explosives and drug precursors under incandescent and fluorescent background light as well as under daylight. The average screening time was 50 s per measurement. The excitation power requirements were relatively low (20 mW), which makes the probe safe for screening hazardous substances. The unit was designed with nanosecond laser excitation and gated detection, making it of lower cost and complexity than previous picosecond-based systems, and providing a functional platform for in-line or in-field sensing of chemical substances.

Relevance: 40.00%

Publisher:

Abstract:

Contamination of packaged foods by micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims and loss of market share; the cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies capable of reliably detecting leaky seals and delivering products at six-sigma. The technology developed here is a non-destructive testing approach using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. Flexible plastic packages are widely used and are the least expensive means of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the contents are not contaminated by micro-organisms entering through air leaks; airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. There are many current NDT (non-destructive testing) methods for testing the seals of flexible packages that are best suited to random sampling and laboratory use. The three most commonly used are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their long processing times and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review.
The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed for a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results; electrically, the testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make informed decisions. Each major mechanical and electrical component is detailed through the research and design process, which methodically works through the major functions from both mechanical and electrical perspectives. It also explores alternative ideas for the major components that, although sometimes not practical in this application, show that the researcher exhausted all engineering and functionality options. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in areas of the non-food processing industry.

Relevance: 40.00%

Publisher:

Abstract:

The question of under what conditions conceptual representation is compositional remains debatable within cognitive science. This paper proposes a well-developed mathematical apparatus for the probabilistic representation of concepts, drawing upon methods developed in quantum theory, and puts forward a formal test that can determine whether a specific conceptual combination is compositional or not. This test examines a joint probability distribution modeling the combination, asking whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
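The paper's formal test is quantum-theoretic, but at its core is a classical factorizability check, sketched below with hypothetical counts: compare the empirical joint distribution over interpretations of the two concepts with the product of its marginals, where a large deviation suggests non-compositionality.

    import numpy as np

    # Hypothetical judgement counts for a two-concept combination
    # (rows: interpretations of concept A; columns: of concept B).
    counts = np.array([[40.0, 10.0],
                       [10.0, 40.0]])
    joint = counts / counts.sum()
    pa = joint.sum(axis=1, keepdims=True)   # marginal over concept A
    pb = joint.sum(axis=0, keepdims=True)   # marginal over concept B

    # Total variation distance between the joint and its factorization:
    # near 0 suggests compositional; large suggests non-compositional.
    tv = 0.5 * np.abs(joint - pa * pb).sum()
    print(f"TV distance from factorized model: {tv:.3f}")   # 0.300 here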

Relevance: 40.00%

Publisher:

Abstract:

This paper reports on the development and implementation of a self-report risk assessment tool developed in an attempt to increase the efficacy of crash prediction within Australian fleet settings. This study forms part of a broader program of research into work-related road safety and the identification of driving risk. The first phase of the study involved a series of focus groups conducted with 217 professional drivers, which revealed that the following factors were proposed to influence driving performance: fatigue, knowledge of risk, mood, impatience and frustration, speed limits, experience, other road users, passengers, health, and culture. The second phase involved piloting the newly developed 38-item Driving Risk Assessment Scale - Work Version (DRAS-WV) with 546 professional drivers. Factor analytic techniques identified a 9-factor solution comprising speeding, aggression, time pressure, distraction, casualness, awareness, maintenance, fatigue and minor damage. Speeding and aggressive driving manoeuvres were identified as the most frequent aberrant driving behaviours engaged in by the sample. However, a series of logistic regression analyses undertaken to determine the DRAS-WV scale's ability to predict self-reported crashes revealed limited predictive efficacy (e.g., correctly predicting only 10% of crashes). This paper outlines proposed reasons for this limited predictive ability of the DRAS-WV and provides suggestions regarding future research that aims to develop methods to identify "at risk" drivers.
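As a rough sketch of the predictive-efficacy analysis described (synthetic data and hypothetical effect sizes, assuming scikit-learn), the code below fits a logistic regression from nine factor scores to self-reported crashes and reports the proportion of crashes correctly identified.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(42)
    n = 546
    scores = rng.normal(0, 1, (n, 9))    # nine DRAS-WV factor scores
    p_crash = 1 / (1 + np.exp(-(-2.0 + 0.3*scores[:, 0])))  # weak effect
    crashed = rng.random(n) < p_crash

    model = LogisticRegression().fit(scores, crashed)
    pred = model.predict(scores)
    print(f"crashes correctly identified: {recall_score(crashed, pred):.0%}")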

Relevance: 40.00%

Publisher:

Abstract:

Cognitive radio is an emerging technology proposing dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes that the PU signal is 'stationary' and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as 'non-stationary', undergoing state changes. Existing studies have focused on PU that are non-stationary during the transmission period, but very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed here as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals, and new detectors are proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies; the second develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU; therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the actual performance achieved. By analysing the statistical properties of the duty cycle, performance degradation is shown to be a problem that cannot easily be neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle; this detector is most suitable for applications requiring long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector and is suitable for short sensing periods. Its decision threshold is optimised with respect to the traffic model of the PU, so the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models: following this framework ensures that the signal model accurately reflects practical behaviour while the detection model implemented remains suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is developed to maximise sensing efficiency for non-stationary PU, with new optimisation constraints derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
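A minimal numpy sketch of the underlying problem (parameters illustrative): an energy detector applied to a sensing window in which the PU is active for only a fraction d of the samples sees average energy near 1 + d*SNR rather than 1 + SNR, so a threshold derived under the stationary assumption under-detects. The thesis's duty cycle based detector, which folds the duty cycle distribution into the test statistic, is not reproduced here.

    import numpy as np

    def energy_detect(y, threshold):
        """Declare PU present if average sample energy exceeds threshold."""
        return np.mean(np.abs(y)**2) > threshold

    rng = np.random.default_rng(1)
    N, snr, d = 1000, 0.5, 0.3            # samples, linear SNR, PU duty cycle
    noise = rng.normal(0, 1, N)
    signal = np.sqrt(snr) * rng.normal(0, 1, N)
    active = np.arange(N) < int(d * N)    # PU transmits for first d*N samples
    y = noise + signal * active

    # Average energy is ~1 + d*snr = 1.15, below a threshold tuned for a
    # stationary PU (~1.25), so the PU is likely missed despite being present.
    print(energy_detect(y, threshold=1 + 0.5*snr))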

Relevance: 40.00%

Publisher:

Abstract:

The mining environment presents a challenging prospect for stereo vision. Our objective is to produce a stereo vision sensor suited to close-range scenes consisting mostly of rocks. This sensor should produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this application. This paper compares a number of stereo matching algorithms in terms of robustness and suitability for fast implementation. These include traditional area-based algorithms, and algorithms based on non-parametric transforms, notably the rank and census transforms. Our experimental results show that the rank and census transforms are robust with respect to radiometric distortion and introduce less computational complexity than conventional area-based matching techniques.
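As a sketch of the census approach discussed (a generic formulation in Python, not the paper's implementation), the code below computes a 3x3 census transform and a Hamming-distance matching cost; because the transform encodes only the ordering of neighbouring intensities, monotonic radiometric distortion leaves it unchanged.

    import numpy as np

    def census_3x3(img):
        """Encode each pixel as 8 bits: comparisons with its 3x3 neighbours."""
        out = np.zeros(img.shape, dtype=np.uint8)
        bit = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                out |= (shifted < img).astype(np.uint8) << bit
                bit += 1
        return out   # border pixels wrap here; real code would crop them

    def hamming_cost(c1, c2):
        """Per-pixel matching cost: number of differing census bits."""
        x = np.bitwise_xor(c1, c2)
        return np.unpackbits(x[..., None], axis=-1).sum(axis=-1)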