582 results for Identification problem
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, enabling the use of a higher-dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Despite these advantages, the method is rather strict in its input requirements, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setup for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to benchmark structural data from the field.
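The novelty-detection core this abstract refers to can be sketched in a few lines. The sketch below is a generic illustration on synthetic multivariate-normal features; the function name, feature dimension, and data are assumptions, not the article's controlled data generation scheme.

```python
import numpy as np

def mahalanobis_sq(X_train, x):
    """Squared Mahalanobis distance of observation x from the
    distribution of the (undamaged) training features X_train."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    d = x - mu
    return float(d @ np.linalg.inv(cov) @ d)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))     # baseline damage-sensitive features
typical = rng.normal(size=3)      # observation from the same state
novel = typical + 10.0            # observation shifted by "damage"
```

Under the multivariate-normality assumption the abstract mentions, the squared distance of a healthy observation follows a chi-squared distribution with as many degrees of freedom as there are features, which is what makes a principled novelty threshold possible.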
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear-time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and also in initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search of approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy in force-to-motion models, which are usually the ultimate modelling objective. The illustration examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
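The basic frequency-domain estimation step behind such toolboxes can be illustrated with a linearised least-squares fit (Levy's method) of a rational transfer function to frequency-response data. The first-order model structure, frequency grid, and synthetic response below are illustrative assumptions; the algorithms in the paper add the constraints on model structure and parameters that this bare sketch omits.

```python
import numpy as np

# Fit H(s) ~ b0 / (a1*s + 1) to frequency-response data.  Levy's
# linearisation rewrites H*(a1*s + 1) = b0 as the linear relation
# b0 - a1*(s*H) = H, solved by least squares in (b0, a1).
w = np.linspace(0.1, 10.0, 50)           # frequency grid [rad/s]
s = 1j * w
H = 1.0 / (s + 2.0)                      # synthetic "measured" response

A = np.column_stack([np.ones_like(s), -s * H])
A_ri = np.vstack([A.real, A.imag])       # stack real and imaginary parts
b_ri = np.concatenate([H.real, H.imag])
b0, a1 = np.linalg.lstsq(A_ri, b_ri, rcond=None)[0]
# 1/(s+2) = 0.5/(0.5*s + 1), so the fit recovers b0 = a1 = 0.5
```

Because the true system lies in the assumed model class, the linearised fit is exact here; on real hydrodynamic data the weighting and constraint handling discussed in the paper become essential.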
Abstract:
This article describes a Matlab toolbox for parametric identification of fluid-memory models associated with the radiation forces of ships and offshore structures. Radiation forces are a key component of force-to-motion models used in simulators, motion control designs, and also for initial performance evaluation of wave-energy converters. The software described provides tools for preparing non-parametric data and for identification with automatic model-order detection. The identification problem is considered in the frequency domain.
Abstract:
This chapter looks at issues of non-stationarity in determining when a transient has occurred and when it is possible to fit a linear model to a non-linear response. The first issue is associated with the detection of loss of damping of power system modes. When a control device such as an SVC fails, the operator needs to know whether the damping of key power system oscillation modes has deteriorated significantly. This question is posed here as an alarm detection problem rather than an identification problem, in order to get fast detection of a change. The second issue concerns when a significant disturbance has occurred and the operator is seeking to characterize the system oscillation. The disturbance is initially large, giving a nonlinear response; it then decays and can become smaller than the noise level of normal customer load changes. The difficulty is one of determining when a linear response can be reliably identified between the non-linear phase and the large-noise phase of the signal. The solution proposed in this chapter uses "Time-Frequency" analysis tools to assist the extraction of the linear model.
Abstract:
Time-domain models of marine structures based on frequency domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation which involves convolution terms. These convolution terms are not convenient for analysis and design of motion control systems. In addition, these models are not efficient with respect to simulation time, and ease of implementation in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because the convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either transfer function or state-space models. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question whether one method is better than the others. This paper therefore has three objectives. The first objective is to revisit some of the methods for replacing the convolutions, which have been reported in different areas of analysis of marine systems: hydrodynamics, wave energy conversion, and motion control systems. The second objective is to compare the different methods in terms of complexity and performance. For this purpose, a model for the response in the vertical plane of a modern containership is considered. The third objective is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
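The equivalence these methods exploit, namely that the radiation convolution is the output of a linear time-invariant system, can be checked numerically. The scalar kernel K(t) = e^(-t) below is an assumption for illustration; a real radiation kernel would be identified from hydrodynamic-code data as described in the paper.

```python
import numpy as np

# The convolution term mu(t) = integral of K(t - tau) v(tau) dtau in the
# Cummins equation, with K(t) = C e^{A t} B, equals the output of the
# LTI system  x' = A x + B v,  mu = C x,  which is cheap to simulate.
# Scalar check with K(t) = e^{-t} (A = -1, B = C = 1).
dt, n = 1e-3, 5000
t = np.arange(n) * dt
v = np.sin(2 * np.pi * 0.5 * t)                 # arbitrary velocity input

mu_conv = np.convolve(np.exp(-t), v)[:n] * dt   # direct convolution

x, mu_ss = 0.0, np.empty(n)                     # forward-Euler simulation
for k in range(n):
    mu_ss[k] = x
    x += dt * (-x + v[k])
```

Replacing the convolution by the state-space model removes the need to store and re-integrate the whole velocity history at every time step, which is the practical motivation the abstract describes.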
Abstract:
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
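A minimal example of the kind of texture feature such an evaluation covers is the contrast of a grey-level co-occurrence matrix (GLCM), one of the classic Haralick features. The quantisation level and the synthetic images below are assumed parameters for illustration; the paper's evaluation uses a broader feature set on real document images.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the horizontal grey-level co-occurrence matrix:
    sum of (i - j)^2 weighted by the co-occurrence probability."""
    q = (img * levels / (img.max() + 1e-9)).astype(int)
    q = q.clip(0, levels - 1)                  # quantise to grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                        # count horizontal neighbours
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())

rng = np.random.default_rng(0)
flat = np.full((32, 32), 0.5)                  # texture-free region
noisy = rng.random((32, 32))                   # high-frequency texture
```

A text region, with its fine alternation of strokes and background, scores very differently from blank or halftone regions on such features, which is what makes texture usable for script discrimination.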
Abstract:
Excessive consumption of alcohol is a serious public health problem. While intensive treatments are suitable for those who are physically dependent on alcohol, they are not cost-effective options for the vast majority of problem drinkers who are not dependent. There is good evidence that brief interventions are effective in reducing overall alcohol consumption, alcohol-related problems, and health-care utilisation among nondependent problem drinkers. Psychologists are in an ideal position to opportunistically detect people who drink excessively and to offer them brief advice to reduce their drinking. In this paper we outline the process involved in providing brief opportunistic screening and intervention for problem drinkers. We also discuss methods that psychologists can employ if a client is not ready to reduce drinking, or is ambivalent about change. Depending on the client's level of motivation to change, psychologists can engage in either an education-clarification approach, a commitment-enhancement approach, or a skills-training approach. Routine engagement in opportunistic intervention is an important public-health approach to reducing alcohol-related harm in the community.
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: There were 6,594 patients (4,854 men, 1,740 women) who had been referred for alcohol-use disorders to a hospital alcohol and drug service who voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages of employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage with noise-polluted data, which is unavoidable in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate methods that perform better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures, able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the web site of the IASC-ASCE (International Association for Structural Control – American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs, calculation of damage index values using the proposed algorithm, development of artificial neural networks, introduction of the damage indices as input parameters, and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, a common occurrence when dealing with industrial structures.
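The flavour of a PCA-based damage index on FRF data can be shown on a toy single-mode FRF. The resonance model, the three-component subspace, and the index definition below are illustrative assumptions, not the paper's algorithm or the IASC-ASCE datagen data.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.linspace(0.0, 20.0, 200)

def frf(f0, noise=1e-3):
    """Toy single-mode FRF magnitude with a resonance at f0 plus noise."""
    return (1.0 / np.abs(f0**2 - f**2 + 0.5j * f)
            + noise * rng.normal(size=f.size))

base = np.array([frf(10.0) for _ in range(50)])   # healthy training FRFs
mean = base.mean(axis=0)
_, _, Vt = np.linalg.svd(base - mean, full_matrices=False)
P = Vt[:3]                                        # baseline principal subspace

def damage_index(x):
    """Norm of the FRF component outside the baseline principal subspace."""
    r = x - mean
    return float(np.linalg.norm(r - P.T @ (P @ r)))

healthy = frf(10.0)            # new measurement, undamaged
damaged = frf(9.0)             # resonance shifted by simulated damage
```

A shifted resonance leaves a large residual outside the baseline subspace, so the index separates damage from measurement noise; the paper then feeds such indices to artificial neural networks for detection across the benchmark cases.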
Abstract:
In recent years, some models have been proposed for fault section estimation and state identification of unobserved protective relays (FSE-SIUPR) under the condition of incomplete state information from protective relays. In these models, the temporal alarm information from a faulted power system is not well explored, although it is very helpful in compensating for the incomplete state information of protective relays, quickly achieving definite fault-diagnosis results, and evaluating the operating status of protective relays and circuit breakers in complicated fault scenarios. In order to solve this problem, an integrated optimization model for FSE-SIUPR, which takes full advantage of the temporal characteristics of alarm messages, is developed in the framework of the well-established temporal constraint network. With this model, the fault evolution procedure can be explained and some states of unobserved protective relays identified. The model is solved by means of Tabu search (TS) and verified with test results from fault scenarios in a practical power system.
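The final step, solving the model with Tabu search, can be illustrated on a toy objective. The bit-vector encoding, the hidden target state, and all parameters below are assumptions for illustration; the real objective would be the paper's temporal-constraint model over relay and breaker states.

```python
import random

def tabu_search(cost, n_bits, iters=200, tenure=5):
    """Minimise cost(x) over bit vectors x by single-bit-flip Tabu search."""
    random.seed(0)
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_c = x[:], cost(x)
    tabu = {}                        # bit index -> iteration it is tabu until
    for it in range(iters):
        cand = None
        for i in range(n_bits):      # scan all single-bit-flip neighbours
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            if tabu.get(i, -1) >= it and c >= best_c:
                continue             # tabu move with no aspiration
            if cand is None or c < cand[1]:
                cand = (i, c, y)
        if cand is None:
            continue
        i, c, x = cand
        tabu[i] = it + tenure        # briefly forbid flipping this bit back
        if c < best_c:
            best, best_c = x[:], c
    return best, best_c

# Toy objective: distance of a hypothetical relay/breaker state vector
# from the state that best explains the alarms (here a fixed target).
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
cost = lambda x: sum(a != b for a, b in zip(x, target))
state, c = tabu_search(cost, len(target))
```

The tabu list is what lets the search keep moving after a local optimum instead of cycling, which matters for the multi-modal objectives that arise in complicated fault scenarios.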
Abstract:
This report presents a snapshot from work which was funded by the Queensland Injury Prevention Council in 2010-11 titled “Feasibility of Using Health Data Sources to Inform Product Safety Surveillance in Queensland children”. The project provided an evaluation of the current available evidence-base for identification and surveillance of product-related injuries in children in Queensland and Australia. A comprehensive 300 page report was produced (available at: http://eprints.qut.edu.au/46518/) and a series of recommendations were made which proposed: improvements in the product safety data system, increased utilisation of health data for proactive and reactive surveillance, enhanced collaboration between the health sector and the product safety sector, and improved ability of health data to meet the needs of product safety surveillance. At the conclusion of the project, a Consumer Product Injury Research Advisory group (CPIRAG) was established as a working party to the Queensland Injury Prevention Council (QIPC), to prioritise and advance these recommendations and to work collaboratively with key stakeholders to promote the role of injury data to support product safety policy decisions at the Queensland and national level. This group continues to meet monthly and is comprised of the organisations represented on the second page of this report. One of the key priorities of the CPIRAG group for 2012 was to produce a snapshot report to highlight problem areas for potential action arising out of the larger report. Subsequent funding to write this snapshot report was provided by the Institute for Health and Biomedical Innovation, Injury Prevention and Rehabilitation Domain at QUT in 2012. This work was undertaken by Dr Kirsten McKenzie and researchers from QUT's Centre for Accident Research and Road Safety - Queensland. 
This snapshot report provides an evidence base for potential further action on a range of children’s products that are significantly represented in injury data. Further information regarding injury hazards, safety advice and regulatory responses are available on the Office of Fair Trading (OFT) Queensland website and the Product Safety Australia websites. Links to these resources are provided for each product reviewed.
Abstract:
A recent comment in the Journal of Sports Sciences (MacNamara & Collins, 2011) highlighted some major concerns with the current structure of talent identification and development (TID) programmes of Olympic athletes (e.g. Gulbin, 2008; Vaeyens, Gullich, Warr, & Philippaerts, 2009). In a cogent commentary, MacNamara and Collins (2011) provided a short review of the extant literature, which was both timely and insightful. Specifically, they criticised the ubiquitous one-dimensional ‘physically-biased’ attempts to produce world class performers, emphasising the need to consider a number of key environmental variables in a more multi-disciplinary perspective. They also lamented the wastage of talent, and alluded to the operational and opportunistic nature of current talent transfer programmes. A particularly compelling aspect of the comment was their allusion to high profile athletes who had ‘failed’ performance evaluation tests and then proceeded to succeed in that sport. This issue identifies a problem with current protocols for evaluating performance and is a line of research that is sorely needed in the area of talent development. To understand the nature of talent wastage that might be occurring in high performance programmes in sport, future empirical work should seek to follow the career paths of ‘successful’ and ‘unsuccessful’ products of TID programmes, in comparative analyses. Pertinent to the insights of MacNamara and Collins (2011), it remains clear that a number of questions have not received enough attention from sport scientists interested in talent development, including: (i) why is there so much wastage of talent in such programmes? And (ii), why are there so few reported examples of successful talent transfer programmes? These questions highlight critical areas for future investigation. 
The aim of this short correspondence is to discuss these and other issues researchers and practitioners might consider, and to propose how an ecological dynamics underpinning to such investigations may help the development of existing protocols...
Abstract:
This study examined primary school teachers’ knowledge of anxiety and excessive anxiety symptoms in children. Three hundred and fifteen primary school teachers completed a questionnaire exploring their definitions of anxiety and the indications they associated with excessive anxiety in primary school children. Results showed that teachers had an understanding of what anxiety was in general but did not consistently distinguish normal anxiety from excessive anxiety, often defining all anxiety as a negative experience. Teachers were able to identify symptoms of excessive anxiety in children by recognizing anxiety-specific and general problem indications. The results provided preliminary evidence that teachers’ knowledge of anxiety and anxiety disorders does not appear to be a barrier in preventing children’s referrals for mental health treatment. Implications for practice and directions for future research are discussed.
Abstract:
Acoustic sensing is a promising approach to scaling faunal biodiversity monitoring. Scaling the analysis of audio collected by acoustic sensors is a big data problem. Standard approaches for dealing with big acoustic data include automated recognition and crowd-based analysis. Automatic methods are fast at processing but hard to rigorously design, whilst manual methods are accurate but slow. In particular, manual methods of acoustic data analysis are constrained by a 1:1 time relationship between the data and its analysts, an inherent consequence of the need to listen to the audio. This paper demonstrates how the efficiency of crowd-sourced sound analysis can be increased by an order of magnitude through visual inspection of audio rendered as spectrograms. Experimental data suggest that an analysis speedup of 12× is obtainable for suitable types of acoustic analysis, given that only spectrograms are shown.
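The visualisation at the heart of this speedup is a short-time Fourier transform rendered as an image. A minimal sketch follows; the frame length, hop size, and test tone are assumed parameters, not those used in the paper's experiments.

```python
import numpy as np

def spectrogram(x, n_fft=512, hop=256):
    """Magnitude spectrogram (frequency bins x time frames) from a
    Hann-windowed short-time Fourier transform."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

fs = 8000
t = np.arange(fs) / fs                     # one second of audio
x = np.sin(2 * np.pi * 1000 * t)           # 1 kHz test tone
S = spectrogram(x)
peak_hz = S.mean(axis=1).argmax() * fs / 512
```

An analyst scanning such images sees the 1 kHz tone as a single bright horizontal line, so a whole recording can be assessed in a glance rather than in real time, which is how visual inspection escapes the 1:1 listening constraint the paper identifies.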
Abstract:
The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem. The difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-Complete and W[1]-Hard. This latter notion relates to fixed parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
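The object at the centre of the protocol can be stated concretely. In one common description of the Hopper–Blum scheme, the shared secret is k pairs of positions in a digit challenge, and the human's response is the sum of the pairwise minima mod 10; the parameters below (n = 20 positions, k = 4 pairs) are illustrative assumptions, not the paper's recommended security parameters.

```python
import random

def respond(secret_pairs, challenge):
    """Sum-of-k-mins response: sum of min(challenge[i], challenge[j])
    over the secret position pairs, reduced mod 10."""
    return sum(min(challenge[i], challenge[j])
               for i, j in secret_pairs) % 10

random.seed(7)
n, k = 20, 4
positions = random.sample(range(n), 2 * k)
secret = list(zip(positions[::2], positions[1::2]))   # k secret pairs
challenge = [random.randrange(10) for _ in range(n)]
response = respond(secret, challenge)
```

Recovering the secret pairs from observed (challenge, response) transcripts is the sum of k mins problem; the hardness results in the paper concern this recovery task, which is why its difficulty determines the security of the protocol.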