941 results for Biodosimetry errors
Abstract:
Previous research has shown the association between stress and crash involvement. The impact of stress on road safety may also be mediated by behaviours including cognitive lapses, errors, and intentional traffic violations. This study aimed to provide a further understanding of the impact that stress from different sources may have upon driving behaviour and road safety. It is asserted that both stress extraneous to the driving environment and stress directly elicited by driving must be considered part of a dynamic system that may have a negative impact on driving behaviours. Two hundred and forty-seven public sector employees from Queensland, Australia, completed self-report measures examining demographics, subjective work-related stress, daily hassles, and aspects of general mental health. Additionally, the Driver Behaviour Questionnaire (DBQ) and the Driver Stress Inventory (DSI) were administered. All participants drove for work purposes regularly; however, the study did not specifically focus on full-time professional drivers. Confirmatory factor analysis of the predictor variables revealed three factors: DSI negative affect; DSI risk taking; and extraneous influences (daily hassles, work-related stress, and general mental health). Moderate intercorrelations were found among these factors, confirming the ‘spillover’ effect. That is, driver stress is reciprocally related to stress in other domains, including work and domestic life. Structural equation modelling (SEM) showed that the DSI negative affect factor influenced both lapses and errors, whereas the DSI risk-taking factor was the strongest influence on violations. The SEMs also confirmed that daily hassles extraneous to the driving environment may influence DBQ lapses and violations independently. Accordingly, interventions may be developed to increase driver awareness of the dangers of excessive emotional responses to both driving events and daily hassles (e.g. driving fast to ‘blow off steam’ after an argument). They may also train more effective strategies for self-regulation of emotion and coping when encountering stressful situations on the road.
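For readers who want to see the general shape of such an analysis, the sketch below runs an exploratory factor analysis on synthetic questionnaire data with correlated latent factors; it is an illustration under assumed loadings and correlations, not the confirmatory factor analysis or SEM reported in the study.

```python
# Illustrative sketch only: exploratory factor analysis on synthetic data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 247  # sample size reported in the abstract

# Correlated latent factors stand in for DSI negative affect, DSI risk taking
# and extraneous influences; the correlation mimics the 'spillover' effect.
cov = np.array([[1.0, 0.4, 0.3],
                [0.4, 1.0, 0.3],
                [0.3, 0.3, 1.0]])
latent = rng.multivariate_normal(np.zeros(3), cov, size=n)

# Each factor loads on three hypothetical questionnaire indicators.
loadings = np.zeros((3, 9))
for f in range(3):
    loadings[f, 3 * f:3 * f + 3] = rng.uniform(0.6, 0.9, size=3)
observed = latent @ loadings + rng.normal(scale=0.5, size=(n, 9))

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(observed)
print(np.corrcoef(scores, rowvar=False).round(2))  # moderate factor intercorrelations
```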
Abstract:
Estimating and predicting degradation processes of engineering assets is crucial for reducing the cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can only partially reveal asset health states in most situations. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using these indicators that can only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is subject to various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model has a better fit than linear and Gaussian state space models when used to process the monotonically increasing degradation data from the accelerated life test of the gearbox. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
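As a rough illustration of the modelling idea, the sketch below simulates a monotonic Gamma-increment degradation process and estimates a remaining-useful-life distribution by Monte Carlo forward simulation; the shape, scale and failure threshold are illustrative assumptions, and this is not the thesis's estimation procedure.

```python
# Hedged sketch: Gamma-increment degradation with Monte Carlo RUL estimation.
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 0.8, 0.05       # assumed Gamma increment parameters per time step
threshold = 1.0                # assumed failure threshold on the hidden state

def simulate_path(x0, steps):
    """Monotonically increasing hidden degradation (Gamma increments)."""
    return x0 + np.cumsum(rng.gamma(shape, scale, size=steps))

def rul_distribution(x_now, n_sims=5000, horizon=200):
    """Monte Carlo estimate of the RUL distribution from the current state."""
    ruls = []
    for _ in range(n_sims):
        path = simulate_path(x_now, horizon)
        hits = np.nonzero(path >= threshold)[0]
        ruls.append(hits[0] + 1 if hits.size else horizon)
    return np.asarray(ruls)

ruls = rul_distribution(x_now=0.4)
print(f"mean RUL ~ {ruls.mean():.1f} steps, "
      f"90% interval ~ ({np.percentile(ruls, 5):.0f}, {np.percentile(ruls, 95):.0f})")
```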
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms, which use examples of fault prone and not fault prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
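A minimal sketch of this kind of pipeline is shown below, assuming synthetic metric data in place of the NASA MDP and Eclipse datasets; it pairs a simple filter feature selector with Naive Bayes and an SVM, in the spirit of (but not identical to) the methods compared in the thesis.

```python
# Hedged sketch: fault-proneness classification from software metrics with
# feature selection, Naive Bayes and an SVM. The data here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for module-level software metrics labelled fault prone / not fault prone
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM (RBF)", SVC(kernel="rbf", class_weight="balanced"))]:
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=10),   # simple filter selection
                          clf)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```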
Abstract:
When an organisation becomes aware that one of its products may pose a safety risk to customers, it must take appropriate action as soon as possible or it can be held liable. The ability to automatically trace potentially dangerous goods through the supply chain would thus help organisations fulfill their legal obligations in a timely and effective manner. Furthermore, product recall legislation requires manufacturers to separately notify various government agencies, the health department and the public about recall incidents. This duplication of effort and paperwork can introduce errors and data inconsistencies. In this paper, we examine traceability and notification requirements in the product recall domain from two perspectives: the activities carried out during the manufacturing and recall processes and the data collected during the enactment of these processes. We then propose a workflow-based coordination framework to support these data and process requirements.
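To make the data requirements concrete, the sketch below defines minimal trace and notification records and a lookup over them; all field names are hypothetical, and this is not the coordination framework proposed in the paper.

```python
# Illustrative sketch only: minimal traceability and notification records.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TraceEvent:
    """One movement of a batch through the supply chain."""
    batch_id: str
    location: str
    activity: str            # e.g. "manufactured", "shipped", "received"
    timestamp: datetime

@dataclass
class RecallNotification:
    """A single recall record reused for every recipient, avoiding re-entry."""
    batch_id: str
    reason: str
    recipients: List[str] = field(default_factory=list)   # agencies, health dept, public

def trace_batch(events: List[TraceEvent], batch_id: str) -> List[TraceEvent]:
    """Return the chronologically ordered trail for one batch."""
    return sorted((e for e in events if e.batch_id == batch_id),
                  key=lambda e: e.timestamp)

events = [
    TraceEvent("B-42", "Plant A", "manufactured", datetime(2006, 1, 10)),
    TraceEvent("B-42", "Distributor", "received", datetime(2006, 1, 14)),
]
print([e.activity for e in trace_batch(events, "B-42")])
```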
Abstract:
A model to predict the buildup of mainly traffic-generated volatile organic compounds or VOCs (toluene, ethylbenzene, ortho-xylene, meta-xylene, and para-xylene) on urban road surfaces is presented. The model required three traffic parameters, namely average daily traffic (ADT), volume to capacity ratio (V/C), and surface texture depth (STD), and two chemical parameters, namely total suspended solids (TSS) and total organic carbon (TOC), as predictor variables. Principal component analysis and two-phase factor analysis were performed to characterize the model calibration parameters. Traffic congestion was found to be the underlying cause of traffic-related VOC buildup on urban roads. The model calibration was optimized using orthogonal experimental design. Partial least squares regression was used for model prediction. It was found that a better optimized orthogonal design could be achieved by including the latent factors of the data matrix in the design. The model performed fairly accurately for three different land uses as well as five different particle size fractions. The relative prediction errors were 10–40% for the different size fractions and 28–40% for the different land uses, while the coefficients of variation of the predicted intersite VOC concentrations were in the range of 25–45% for the different size fractions. Considering the sizes of the data matrices, these coefficients of variation were within the acceptable interlaboratory range for analytes at ppb concentration levels.
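The modelling step can be illustrated with a small partial least squares sketch on random stand-in data; the predictor columns only mirror the names listed above, and no calibration or orthogonal design optimisation is reproduced.

```python
# Hedged sketch: PLS regression of a VOC load on traffic and chemical predictors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 60
# Columns stand in for ADT, V/C ratio, STD, TSS and TOC (synthetic values)
X = rng.normal(size=(n, 5))
true_coef = np.array([0.5, 1.2, -0.3, 0.8, 0.6])
y = X @ true_coef + rng.normal(scale=0.4, size=n)      # stand-in VOC load

pls = PLSRegression(n_components=2)
y_pred = cross_val_predict(pls, X, y, cv=5).ravel()

relative_error = np.abs(y_pred - y) / (np.abs(y) + 1e-9)
print(f"median relative prediction error ~ {np.median(relative_error):.2f}")
```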
Abstract:
Despite the benefits of technology, safety planners still face difficulties in explaining the errors related to the use of different technologies and in evaluating how those errors affect the performance of safety decision making. This paper presents a preliminary error impact analysis testbed to model object identification and tracking errors caused by image-based devices and algorithms, and to analyze the impact of these errors on the spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
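The sketch below gives a toy version of the idea: it injects assumed identification and tracking errors into simulated positions and counts missed violations and false alarms against a proximity threshold. The error model and threshold are illustrative assumptions, not the testbed described in the paper.

```python
# Hedged sketch: effect of detection misses and tracking noise on proximity alarms.
import numpy as np

rng = np.random.default_rng(3)
n_frames = 1000
safety_radius = 5.0            # metres, assumed alarm threshold

worker = rng.uniform(0, 50, size=(n_frames, 2))
excavator = rng.uniform(0, 50, size=(n_frames, 2))
true_violation = np.linalg.norm(worker - excavator, axis=1) < safety_radius

def with_errors(positions, miss_rate=0.1, noise_std=1.5):
    """Apply missed detections and positional tracking noise."""
    noisy = positions + rng.normal(scale=noise_std, size=positions.shape)
    detected = rng.random(len(positions)) > miss_rate
    return noisy, detected

w_noisy, w_det = with_errors(worker)
e_noisy, e_det = with_errors(excavator)
observed = (np.linalg.norm(w_noisy - e_noisy, axis=1) < safety_radius) & w_det & e_det

missed = np.sum(true_violation & ~observed)
false_alarms = np.sum(~true_violation & observed)
print(f"missed violations: {missed}, false alarms: {false_alarms}")
```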
Abstract:
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise assessment of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model shows a better fit than a state space model with linear and Gaussian assumptions.
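To make the filtering idea concrete, the sketch below runs a bootstrap particle filter that tracks a hidden monotonic state (a stand-in for crack depth) from a noisy indirect indicator; the Gamma transition, Gaussian observation noise and all parameter values are assumptions rather than the models fitted in the paper.

```python
# Hedged sketch: bootstrap particle filter for a hidden monotonic degradation state.
import numpy as np

rng = np.random.default_rng(4)
T, n_particles = 50, 2000
shape, scale, obs_std = 1.0, 0.02, 0.05     # assumed model parameters

# Simulate a "true" degradation path and its indirect measurements
true_state = np.cumsum(rng.gamma(shape, scale, size=T))
obs = true_state + rng.normal(scale=obs_std, size=T)

particles = np.zeros(n_particles)
estimates = []
for t in range(T):
    # Propagate with Gamma increments (monotone, non-Gaussian transition)
    particles = particles + rng.gamma(shape, scale, size=n_particles)
    # Weight by the Gaussian observation likelihood, then resample
    w = np.exp(-0.5 * ((obs[t] - particles) / obs_std) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    estimates.append(particles.mean())

print(f"final estimate {estimates[-1]:.3f} vs true {true_state[-1]:.3f}")
```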
Abstract:
One of the more significant conveyancing decisions of 2005 was MNM Developments Pty Ltd v Gerrard [2005] QCA 230 (‘Gerrard’). Real estate agents, in particular, became concerned when the Court of Appeal raised grave doubts concerning the validity of a contract for the sale of residential property formed by the use of fax. As a result, the government acted quickly to introduce amendments to the Property Agents and Motor Dealers Act 2000 (Qld) (‘PAMDA’) and the Body Corporate and Community Management Act 1997 (Qld) (‘BCCMA’). The relevant Act is the Liquor and Other Acts Amendment Act 2005 (Qld). These amendments commenced on 1 December 2005. In the second reading speech, the Minister stated that these amendments would provide certainty for sellers of residential properties or their agents when transmitting pre-contractual documents by facsimile and other electronic means. The accuracy of this prediction must be assessed in light of the errors that may occur.
Abstract:
Facial expression is an important channel for human communication, and facial expression recognition can be applied in many real-world applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches to FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent the static and dynamic, as well as geometric and appearance, characteristics of facial expressions. This paper proposes an approach to address this limitation using ‘salient’ distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the ‘salient’ patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. The comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
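A simplified sketch of patch-based Gabor feature extraction is given below; it uses 2D Gabor filters on a stock image and arbitrary patch centres, and does not reproduce the paper's 3D Gabor features, salient-patch selection or patch matching.

```python
# Hedged sketch: mean Gabor magnitude within image patches as simple features.
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.filters import gabor

image = rgb2gray(data.astronaut())          # stand-in for a registered face image

def gabor_magnitude_maps(img, frequencies=(0.15, 0.25, 0.35)):
    """Gabor magnitude responses over the whole image at several freqs/orientations."""
    maps = []
    for f in frequencies:
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(img, frequency=f, theta=theta)
            maps.append(np.hypot(real, imag))
    return maps

def patch_features(maps, centre, size=24):
    """Mean magnitude of each Gabor response within a square patch."""
    r, c = centre
    return np.array([m[r - size // 2:r + size // 2,
                       c - size // 2:c + size // 2].mean() for m in maps])

maps = gabor_magnitude_maps(image)
# Hypothetical patch centres (e.g. around eyes and mouth in a registered face)
for centre in [(180, 200), (180, 280), (300, 240)]:
    print(centre, patch_features(maps, centre)[:4].round(3))
```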
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site, following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms. A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is more than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
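For reference, the baseline lateral guidance law of Park, Deyst and How (2007), which the ENG algorithm extends with wind information, can be sketched as follows; the wind compensation and turn-direction logic developed in the thesis are not reproduced here.

```python
# Hedged sketch of the baseline nonlinear lateral guidance law (Park et al., 2007).
import numpy as np

def lateral_accel_command(position, velocity, ref_point):
    """a_cmd = 2*V^2/L1 * sin(eta), with eta the angle from the velocity
    vector to the line of sight to a reference point on the desired path."""
    los = np.asarray(ref_point) - np.asarray(position)   # line of sight
    L1 = np.linalg.norm(los)
    V = np.linalg.norm(velocity)
    # Signed angle between velocity and line of sight (2-D)
    eta = np.arctan2(velocity[0] * los[1] - velocity[1] * los[0],
                     velocity[0] * los[0] + velocity[1] * los[1])
    return 2.0 * V**2 / L1 * np.sin(eta)

# Example: aircraft at the origin flying east at 20 m/s, reference point ahead-left
print(f"{lateral_accel_command([0, 0], [20, 0], [80, 30]):.2f} m/s^2")
```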
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
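In generic notation (a sketch of the standard setup rather than the paper's exact statement), the penalized selection rule and the form of oracle inequality described above can be written as

```latex
\hat{k} \;=\; \operatorname*{arg\,min}_{k}\Big\{ \hat{R}_n(\hat{f}_k) + \operatorname{pen}_n(k) \Big\},
\qquad
R(\hat{f}_{\hat{k}}) \;\le\; \inf_{k}\Big\{ \inf_{f \in \mathcal{F}_k} R(f) + C \operatorname{pen}_n(k) \Big\},
```

where \hat{R}_n and R are the empirical and true risks, \hat{f}_k is the empirical risk minimizer over model \mathcal{F}_k, and pen_n(k) is a tight upper bound on the estimation error within \mathcal{F}_k; the precise conditions (models ordered by inclusion, convex classes, strictly convex losses) and the constant C are those given in the paper.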
Abstract:
Understanding the relationship between diet, physical activity and health in humans requires accurate measurement of body composition and daily energy expenditure. Stable isotopes provide a means of measuring total body water (TBW) and daily energy expenditure under free-living conditions. While the use of isotope ratio mass spectrometry (IRMS) for the analysis of 2H (deuterium) and 18O (oxygen-18) is well established in the field of human energy metabolism research, numerous questions remain regarding the factors which influence analytical and measurement error using this methodology. This thesis comprised four studies with the following emphases. The aim of Study 1 was to determine the analytical and measurement error of the IRMS with regard to sample handling under certain conditions. Study 2 involved the comparison of TEE (total daily energy expenditure) values calculated using two commonly employed equations; further, saliva and urine samples, collected at different times, were used to determine if clinically significant differences would occur. Study 3 was undertaken to determine the appropriate collection times for TBW estimates and derived body composition values. Finally, Study 4 was a single case study investigating whether TEE measures are affected when the human condition changes due to altered exercise and water intake. The aim of Study 1 was to validate laboratory approaches to measuring isotopic enrichment to ensure accurate (to international standards), precise (reproducibility of three replicate samples) and linear (isotope ratio constant over the expected concentration range) results. This established the machine variability of the IRMS equipment in use at Queensland University for both TBW and TEE. Using either 0.4 mL or 0.5 mL sample volumes for both oxygen-18 and deuterium was statistically acceptable (p>0.05) and showed a within-analytical variance of 5.8 Delta VSMOW units for deuterium and 0.41 Delta VSMOW units for oxygen-18. This variance was used as "within analytical noise" to determine sample deviations. It was also found that there was no influence of equilibration time on oxygen-18 or deuterium values when comparing the minimum (oxygen-18: 24 hr; deuterium: 3 days) and maximum (oxygen-18 and deuterium: 14 days) equilibration times. With regard to preparation using the vacuum line, any order of preparation is suitable, as the TEE values fall within 8% of each other regardless of preparation order; an 8% variation is acceptable for TEE values due to biological and technical errors (Schoeller, 1988). However, for the automated line, deuterium must be assessed first, followed by oxygen-18, as the automated line does not evacuate tubes but merely refills them with an injection of gas for a predetermined time. Any fractionation (which may occur for both isotopes) would cause a slight elevation in the values and hence a lower TEE. The purpose of the second and third studies was to investigate the use of IRMS to measure TEE and TBW and to validate the current IRMS practices with regard to the sample collection times of urine and saliva, the use of two TEE equations from different research centers, and the body composition values derived from these TEE and TBW values. Following the collection of a fasting baseline urine and saliva sample, 10 people (8 women, 2 men) were dosed with a doubly labeled water dose comprising 1.25 g of 10% oxygen-18 and 0.1 g of 100% deuterium per kg body weight.
The samples were collected hourly for 12 hrs on the first day, and then morning, midday, and evening samples were collected for the next 14 days. The samples were analyzed using an isotope ratio mass spectrometer. For TBW, the time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibration was reached in 90% of the sample by hour 6, and in 100% of the sample by hour 7. With regard to the TBW estimations, the optimal time for urine collection was found to be between hours 4 and 10, as there was no significant difference between values within this window. In contrast, statistically significant differences in TBW estimations were found between hours 1-3 and hours 11-12 when compared with hours 4-10. Most of the individuals in this study were in equilibrium after 7 hours. The TEE equations of Prof Dale Schoeller (Chicago, USA, IAEA) and Prof K. Westerterp were compared with that of Prof Andrew Coward (Dunn Nutrition Centre). When comparing values derived from samples collected in the morning and evening, there was no effect of time or equation on the resulting TEE values. The fourth study was a pilot study (n=1) to test the variability in TEE as a result of manipulations in fluid consumption and level of physical activity, of a magnitude of change that may be expected in a sedentary adult. Physical activity levels were manipulated by increasing the number of steps per day to mimic the increases that may result when a sedentary individual commences an activity program. The study comprised three sub-studies completed on the same individual over a period of 8 months. There were no significant changes in TBW across all studies, even though the elimination rates changed with the supplemented water intake and additional physical activity. The extra activity may not have been sufficiently strenuous, nor the water intake high enough, to cause a significant change in the TBW and hence in the CO2 production and TEE values. The measured TEE values show good agreement with the estimated values calculated from an RMR of 1455 kcal/day, a DIT of 10% of TEE, and activity based on measured steps. The covariance values tracked when plotting the residuals were found to be representative of “well-behaved” data and are indicative of the analytical accuracy. The ratio and product plots were found to reflect water turnover and CO2 production and thus could, with further investigation, be employed to identify changes in physical activity.
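One intermediate calculation behind such TBW and TEE estimates can be sketched as follows: fitting isotope elimination rate constants by log-linear regression of post-dose enrichment against time. The values, equations and constants used by the laboratory are not reproduced; everything below is synthetic and illustrative.

```python
# Hedged sketch: estimating elimination rate constants (k_O, k_D) from
# mono-exponential washout of enrichment above baseline (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(1, 15, dtype=float)                 # sampling over 14 days

def simulate_enrichment(k, e0=150.0, noise=2.0):
    """Mono-exponential washout of enrichment above baseline (delta units)."""
    return e0 * np.exp(-k * days) + rng.normal(scale=noise, size=days.size)

def elimination_rate(enrichment):
    """Slope of ln(enrichment) vs time gives the rate constant."""
    slope, _ = np.polyfit(days, np.log(enrichment), 1)
    return -slope

k_O = elimination_rate(simulate_enrichment(0.12))    # oxygen-18 turns over faster
k_D = elimination_rate(simulate_enrichment(0.09))    # deuterium follows water only
print(f"k_O ~ {k_O:.3f} /day, k_D ~ {k_D:.3f} /day, difference ~ {k_O - k_D:.3f}")
# The k_O - k_D difference reflects CO2 production and hence energy expenditure.
```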
Abstract:
Driving is a vigilance task, requiring sustained attention to maintain performance and avoid crashes. Hypovigilance (i.e., marked reduction in vigilance) while driving manifests as poor driving performance and is commonly attributed to fatigue (Dinges, 1995). However, poor driving performance has been found to be more frequent when driving in monotonous road environments, suggesting that monotony plays a role in generating hypovigilance (Thiffault & Bergeron, 2003b). Research to date has tended to conceptualise monotony as a uni-dimensional task characteristic, typically used over a prolonged period of time to facilitate other factors under investigation, most notably fatigue. However, more often than not, more than one exogenous factor relating to the task or operating environment is manipulated to vary or generate monotony (Mascord & Heath, 1992). Here we aimed to explore whether monotony is a multi-dimensional construct that is determined by characteristics of both the task proper and the task environment. The general assumption that monotony is a task characteristic used solely to elicit hypovigilance or poor performance related to fatigue appears to have led to there being little rigorous investigation into the exact nature of the relationship. While the two concepts are undoubtedly linked, the independent effect of monotony on hypovigilance remains largely ignored. Notwithstanding, there is evidence that monotony effects can emerge very early in vigilance tasks and are not necessarily accompanied by fatigue (see Meuter, Rakotonirainy, Johns, & Wagner, 2005). This phenomenon raises a largely untested, empirical question explored in two studies: Can hypovigilance emerge as a consequence of task and/or environmental monotony, independent of time on task and fatigue? In Study 1, using a short computerised vigilance task requiring responses to be withheld to infrequent targets, we explored the differential impacts of stimulus and task demand manipulations on the development of a monotonous context and the associated effects on vigilance performance (as indexed by response errors and response times), independent of fatigue and time on task. The role of individual differences (sensation seeking, extroversion and cognitive failures) in moderating monotony effects was also considered. The results indicate that monotony affects sustained attention, with hypovigilance and associated performance worse in monotonous than in non-monotonous contexts. Critically, performance decrements emerged early in the task (within 4.3 minutes) and remained consistent over the course of the experiment (21.5 minutes), suggesting that monotony effects can operate independent of time on task and fatigue. A combination of low task demands and low stimulus variability forms a monotonous context characterised by hypovigilance and poor task performance. Variations to task demand and stimulus variability were also found to independently affect performance, suggesting that monotony is a multi-dimensional construct relating to both task monotony (associated with the task itself) and environmental monotony (related to characteristics of the stimulus). Consequently, it can be concluded that monotony is multi-dimensional and is characterised by low variability in stimuli and/or task demands. The proposition that individual differences emerge under conditions of varying monotony, with high sensation seekers and/or extroverts performing worse in monotonous contexts, was only partially supported.
Using a driving simulator, the findings of Study 1 were extended to a driving context to identify the behavioural and psychophysiological indices of monotony-related hypovigilance associated with variations to road design and roadside scenery (Study 2). Supporting the proposition that monotony is a multi-dimensional construct, road design variability emerged as a key moderating characteristic of environmental monotony, with low variability resulting in poor driving performance indexed by decrements in steering wheel measures (mean lateral position). Sensation seeking also emerged as a moderating factor, with participants high in sensation seeking tendencies displaying worse driving behaviour in monotonous conditions. Importantly, impaired driving performance was observed within 8 minutes of commencing the driving task characterised by environmental monotony (low variability in road design) and was not accompanied by a decline in psychophysiological arousal. In addition, no subjective declines in alertness were reported. With fatigue effects associated with prolonged driving (van der Hulst, Meijman, & Rothengatter, 2001) and indexed by drowsiness, this pattern of results indicates that monotony can affect driver vigilance, independent of time on task and fatigue. Perceptual load theory (Lavie, 1995, 2005) and mindlessness theory (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997) provide useful theoretical frameworks for explaining and predicting monotony effects by positing that the low load (of task and/or stimuli) associated with a monotonous task results in spare attentional capacity which spills over involuntarily, resulting in the processing of task-irrelevant stimuli or task-unrelated thoughts. That is, individuals, even when not fatigued, become easily distracted when performing a highly monotonous task, resulting in hypovigilance and impaired performance. The implications for road safety, including the likely effectiveness of fatigue countermeasures in mitigating monotony-related driver hypovigilance, are discussed.
Abstract:
Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, which was proposed in the compressed sensing literature. The reconstruction errors are then computed, and based on these we detect the abnormal items. Our approach can be used to detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it is shown to outperform current state-of-the-art approaches, and we also obtain promising results for rapid escape detection using the PETS2009 dataset.
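A minimal sketch of the reconstruction-error idea is shown below, with two stand-ins: orthogonal matching pursuit in place of the Dantzig Selector, and random synthetic vectors in place of LBP-TOP descriptors.

```python
# Hedged sketch: anomaly scoring by sparse reconstruction error over a basis
# learnt from normal data only.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
d, n_atoms = 64, 100                          # descriptor length, basis size

# "Normal" training descriptors: sparse combinations of a few hidden prototypes
prototypes = rng.normal(size=(8, d))
codes = rng.random((400, 8)) * (rng.random((400, 8)) < 0.3)
normal_train = codes @ prototypes + 0.05 * rng.normal(size=(400, d))

dico = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=50, random_state=0)
D = dico.fit(normal_train).components_        # overcomplete basis (100 x 64)

def recon_error(x, basis, sparsity=5):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(basis.T, x)                       # sparse coefficients over the basis
    return np.linalg.norm(x - basis.T @ omp.coef_)

normal_x = codes[0] @ prototypes + 0.05 * rng.normal(size=d)
unusual_x = rng.normal(size=d)                # does not fit the learnt basis
print("normal :", round(recon_error(normal_x, D), 3))
print("unusual:", round(recon_error(unusual_x, D), 3))
```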
Abstract:
Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 for synchronising sampling in a digital process bus is evaluated, with preliminary results indicating that the steady state performance of low cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure that any corrections are sufficiently small that time synchronising performance is not degraded.
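For context, the basic two-way time-transfer arithmetic that PTPv2 builds on is shown below; it omits correction fields, transparent-clock residence times and servo filtering, which is where the grandmaster-correction transients noted above come into play.

```python
# Sketch of the standard PTP offset/delay calculation from the four timestamps
# t1..t4 (Sync sent/received, Delay_Req sent/received):
#   offset = ((t2 - t1) - (t4 - t3)) / 2
#   delay  = ((t2 - t1) + (t4 - t3)) / 2

def ptp_offset_and_delay(t1, t2, t3, t4):
    """All timestamps in seconds; returns (slave clock offset, mean path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example with a 250 ns one-way delay and a slave clock running 300 ns fast
t1 = 0.0
t2 = t1 + 250e-9 + 300e-9      # master->slave, measured on the fast slave clock
t3 = t2 + 1e-3                 # slave sends Delay_Req 1 ms later
t4 = t3 + 250e-9 - 300e-9      # slave->master, measured on the master clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset = {offset*1e9:.0f} ns, mean path delay = {delay*1e9:.0f} ns")
```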