973 results for Identification parameters
Abstract:
Estimates of maintenance and rehabilitation costs are subject to variation due to uncertainty in the input parameters. This paper presents the results of an analysis to identify the input parameters that affect the variation in predicted road deterioration. Road data from 1688 km of a national highway in the tropical northeast of Queensland, Australia, were used in the analysis. The data were analysed using a probability-based method, the Monte Carlo simulation technique, together with HDM-4's roughness prediction model. The results indicated that, among the input parameters, the variability of pavement strength, rut depth, annual equivalent axle load and initial roughness affected the variability of the predicted roughness. The second part of the paper presents an analysis assessing the variation in cost estimates due to the variability of the identified critical input parameters.
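The probability-based approach can be sketched in a few lines: sample the input parameters from assumed distributions, propagate them through a roughness model, and rank inputs by their influence on the output. The toy progression model and all distributions below are illustrative assumptions, not HDM-4 or the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Assumed input distributions (illustrative, not calibrated values).
strength  = rng.normal(5.0, 1.0, n)      # pavement strength (e.g. structural number)
rut_depth = rng.normal(8.0, 2.0, n)      # mm
axle_load = rng.lognormal(0.0, 0.3, n)   # annual equivalent axle load, MESA
rough_0   = rng.normal(3.0, 0.5, n)      # initial roughness, IRI (m/km)

# Deliberately simplified stand-in for HDM-4's roughness progression model.
roughness = rough_0 + 0.1 * axle_load / strength**2 + 0.02 * rut_depth

# Rank inputs by the strength of their association with predicted roughness.
inputs = {"strength": strength, "rut_depth": rut_depth,
          "axle_load": axle_load, "initial_roughness": rough_0}
for name, x in sorted(inputs.items(),
                      key=lambda kv: -abs(np.corrcoef(kv[1], roughness)[0, 1])):
    print(f"{name:18s} |r| = {abs(np.corrcoef(x, roughness)[0, 1]):.2f}")
```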
Abstract:
Dehydration has been associated with increased morbidity and mortality. Dehydration risk increases with advancing age and will become a progressively greater issue as the population ages. Worldwide, those aged 60 years and over are the fastest growing segment of the population. The study aimed to develop a clinically practical means of identifying dehydration amongst older people in the clinical care setting. Older people aged 60 years or over admitted to the Geriatric and Rehabilitation Unit (GARU) of two tertiary teaching hospitals were eligible for participation in the study. Ninety potential screening questions and 38 clinical parameters were initially tested on a single sample (n=33), with the 11 most promising parameters selected to undergo further testing in an independent group (n=86). Of the almost 130 variables explored, tongue dryness was most strongly associated with poor hydration status, demonstrating 64% sensitivity and 62% specificity within the study participants. The result was not confounded by age, gender or body mass index. With minimal training, inter-rater repeatability was over 90%. This study identified tongue dryness as a potentially practical tool to identify dehydration risk amongst older people in the clinical care setting. Further studies to validate the potential screen in larger and varied populations of older people are required.
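For reference, the reported 64% sensitivity and 62% specificity follow from an ordinary 2x2 confusion matrix. The counts below are invented purely to show the arithmetic; they are not the study's data.

```python
# Hypothetical 2x2 confusion matrix for the tongue-dryness screen.
tp, fn = 16, 9    # dehydrated participants flagged / missed (invented counts)
tn, fp = 38, 23   # well-hydrated participants cleared / flagged (invented counts)

sensitivity = tp / (tp + fn)   # proportion of dehydrated cases detected
specificity = tn / (tn + fp)   # proportion of hydrated cases correctly cleared
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")  # 64%, 62%
```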
Abstract:
This thesis details a methodology to estimate urban stormwater quality based on a set of easy-to-measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring, giving scientists added capability to generate large amounts of data for more rigorous analysis of key urban stormwater quality processes, namely pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies. The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified these impervious surfaces as the primary pollutant sources to urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used to collect the pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate techniques. Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher than those on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 µm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of wash-off on urban impervious surfaces. The observed pollutant concentrations in wash-off from road surfaces differed from those in wash-off from roof surfaces. Therefore, the identification of surrogate parameters was first undertaken separately for road and roof surfaces; a common set of surrogate parameter relationships was then identified for both surfaces together to evaluate urban stormwater quality. Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy-to-measure parameters. Surrogate parameters for nitrogen and phosphorus were identified from this set for both road and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be measured directly in the field. The regression relationships developed between surrogate parameters and the key parameters of interest took a similar form for road and roof surfaces, namely simple linear regression equations. The relationships identified for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The relationships identified for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a higher confidence interval, whilst others had a relatively low confidence interval.
The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices focus on providing treatment measures for stormwater runoff at catchment outlets, where road and roof runoff are not separated. In this context, it is important to find a common set of surrogate parameter relationships for road and roof surfaces to evaluate urban stormwater quality. Consequently, the DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships capable of providing measurements of DTN and TS irrespective of the surface type.
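Since the surrogate relationships are simple linear regressions, any one of them (say TS from EC) reduces to fitting a slope and an intercept. A minimal sketch on synthetic data; the coefficients here are not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)
ec = rng.uniform(50, 400, 30)                 # electrical conductivity (synthetic)
ts = 0.8 * ec + 20 + rng.normal(0, 15, 30)    # total solids (synthetic)

slope, intercept = np.polyfit(ec, ts, 1)      # TS ~ slope*EC + intercept
r = np.corrcoef(ec, ts)[0, 1]
print(f"TS = {slope:.2f}*EC + {intercept:.1f}  (r^2 = {r**2:.2f})")
```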
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
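The core VQ idea can be sketched briefly: train a codebook on blocks of samples, then transmit only nearest-codeword indices. The sketch below uses plain k-means on random 10-dimensional vectors; PCVQ additionally splits each vector into sub-vectors with independent codebooks, and the lossless index-compression stage is omitted.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 10))   # stand-ins for spectral parameter vectors

# A 256-entry codebook codes each 10-dimensional vector with 8 bits.
codebook, _ = kmeans2(train, 256, minit="++", seed=0)

test = rng.normal(size=(100, 10))
indices, dist = vq(test, codebook)    # encoder: nearest-codeword indices
decoded = codebook[indices]           # decoder: table lookup
print("mean quantisation error:", dist.mean())
```

The statistical-model stage described above would then entropy-code the `indices` stream, exploiting any non-uniformity in codeword usage for further bit-rate reduction.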
Abstract:
Food microstructure represents the way a food's elements are arranged and how they interact. Researchers in this field benefit from new methods for examining microstructure and analysing the resulting images. Experiments were undertaken to study micro-structural changes of food material during drying. Micro-structural images were obtained, using scanning electron microscopy, for cubical potato samples at different moisture contents during drying. Physical parameters such as cell wall perimeter and area were calculated using an image identification algorithm based on edge detection and morphological operators. The algorithm was developed in MATLAB.
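The thesis implemented its algorithm in MATLAB; the sketch below is an analogous Python pipeline with scikit-image, using edge detection and morphological operators to segment cell regions and measure perimeter and area. The input image is a synthetic stand-in for an SEM micrograph.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, measure, morphology

# Synthetic stand-in for an SEM image: a bright cell wall around a darker interior.
img = np.zeros((200, 200))
img[40:160, 40:160] = 1.0
img[60:140, 60:140] = 0.2
img += np.random.default_rng(3).normal(0, 0.05, img.shape)

edges = filters.sobel(img)                          # edge detection
binary = edges > filters.threshold_otsu(edges)      # binarise the edge map
walls = morphology.binary_closing(binary, morphology.disk(2))
cells = ndimage.binary_fill_holes(walls) & ~walls   # interior cell regions

labels = measure.label(cells)
for region in measure.regionprops(labels):
    if region.area > 50:                            # ignore noise specks
        print(f"cell area {region.area} px, perimeter {region.perimeter:.1f} px")
```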
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage using noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results from structural damage identification methods. It is therefore important to investigate methods that perform better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures, able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the website of the IASC-ASCE (International Association for Structural Control–American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: (1) calculation of the FRFs; (2) calculation of damage index values using the proposed algorithm; (3) development of the artificial neural networks; (4) introduction of the damage indices as input parameters; and (5) damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are of common occurrence when dealing with industrial structures.
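The paper's exact damage-index formula is not reproduced in the abstract, so the sketch below shows a generic PCA-based index of the same flavour: fit a principal subspace to healthy-state FRF magnitudes and take the residual norm of a measured FRF outside that subspace as the index. The FRF data here are synthetic; in the actual scheme the indices then feed the artificial neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are |FRF| measurements, columns are frequency lines (synthetic data).
healthy = rng.normal(size=(40, 256)) + 5.0
damaged = healthy[:10] + rng.normal(0.8, 0.2, size=(10, 256))

mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
P = Vt[:5]                                   # top-5 principal directions

def damage_index(frf):
    """Norm of the FRF residual outside the healthy principal subspace."""
    r = frf - mean
    return np.linalg.norm(r - P.T @ (P @ r))

print("healthy mean index:", np.mean([damage_index(f) for f in healthy]))
print("damaged mean index:", np.mean([damage_index(f) for f in damaged]))
```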
Abstract:
House dust is a heterogeneous matrix which contains a number of biological materials and particulate matter gathered from several sources. It is an accumulation of semi-volatile and non-volatile contaminants, which are trapped and preserved, so house dust can be viewed as an archive of both indoor and outdoor air pollution. There is evidence that, on average, people tend to stay indoors most of the time, and this increases exposure to house dust. The aims of this investigation were to: (1) assess the levels of Polycyclic Aromatic Hydrocarbons (PAHs), elements and pesticides in the indoor environment of the Brisbane area; (2) identify and characterise the possible sources of elemental constituents (inorganic elements), PAHs and pesticides by means of Positive Matrix Factorisation (PMF); and (3) establish the correlations between the levels of indoor air pollutants (PAHs, elements and pesticides) and the external and internal characteristics or attributes of the buildings and indoor activities, by means of multivariate data analysis techniques. The dust samples were collected during the period 2005-2007 from homes located in different suburbs of Brisbane, Ipswich and Toowoomba, in South East Queensland, Australia. A vacuum cleaner fitted with a paper bag was used as the sampler for collecting the house dust. A survey questionnaire completed by the house residents recorded information about the indoor and outdoor characteristics of their residences. House dust samples were analysed for three different classes of pollutant: pesticides, elements and PAHs. The analyses were carried out on samples of particle size less than 250 µm. The chemical analyses for both pesticides and PAHs were performed using Gas Chromatography-Mass Spectrometry (GC-MS), while elemental analysis was carried out using Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The data were subjected to multivariate data analysis techniques such as multi-criteria decision-making procedures, the Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE) coupled with Geometrical Analysis for Interactive Aid (GAIA), in order to rank the samples and examine the data display. This study showed that, compared with results from previous work carried out in Australia and overseas, the concentrations of pollutants in house dust in Brisbane and the surrounding areas were comparatively very high. The results also showed significant correlations between some of the physical parameters (type of building material, floor level, distance from industrial areas and major roads, and smoking) and the concentrations of pollutants. Type of building material and age of house were found to be two of the primary factors affecting the concentrations of pesticides and elements in house dust. The concentrations of these two types of pollutant appear to be higher in old (timber) houses than in brick ones. In contrast, the concentrations of PAHs were higher in brick houses than in timber ones. Other factors, such as floor level and distance from the main street and industrial areas, also affected the concentrations of pollutants in the house dust samples. To apportion the sources and understand the mechanisms of the pollutants, the Positive Matrix Factorisation (PMF) receptor model was applied.
The results showed that there were significant correlations between the concentrations of contaminants in house dust and the physical characteristics of the houses, such as the age and type of the house, the distance from main roads and industrial areas, and smoking. Sources of the pollutants were also identified. For PAHs, the sources were cooking activities, vehicle emissions, smoking, oil fumes, natural gas combustion and traces of diesel exhaust emissions. For pesticides, the sources were the application of pesticides for controlling termites in buildings and fences, for treating indoor furniture, and in gardens for controlling pests attacking horticultural and ornamental plants. For elements, the sources were soil, cooking, smoking, paints, pesticides, combustion of motor fuels, residual fuel oil, motor vehicle emissions, wear of brake linings and industrial activities.
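PMF is closely related to non-negative matrix factorisation: the samples-by-species concentration matrix is decomposed into non-negative source contributions and source profiles. True PMF also weights each entry by its measurement uncertainty, which the plain-NMF sketch below omits; the data are random stand-ins for the dust measurements.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
X = rng.gamma(2.0, 1.0, size=(60, 20))   # 60 dust samples x 20 species (synthetic)

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # source contributions per sample (60 x 4)
F = model.components_        # source profiles: species fingerprints (4 x 20)

# Each profile is matched to a physical source (vehicle emissions, smoking,
# soil, ...) by inspecting its dominant species.
print(F.argmax(axis=1))
```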
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
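The HU-to-density step described above is a piecewise-linear lookup, with each voxel's electron density the product of the interpolated mass density and the intrinsic electron density of the assigned material. The ramp points, HU thresholds and material values below are illustrative assumptions, not the calibrated curve for the scanner discussed.

```python
import numpy as np

# Illustrative CT-density ramp points (HU, g/cm^3); NOT a calibrated curve.
ramp_hu      = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
ramp_density = np.array([0.001, 0.93, 1.00, 1.09, 1.60, 2.80])

# Intrinsic electron densities (10^23 electrons/gram), per assigned material.
electrons_per_gram = {"air": 3.01, "soft_tissue": 3.34, "bone": 3.10}

def voxel_electron_density(hu):
    rho = np.interp(hu, ramp_hu, ramp_density)          # mass density, g/cm^3
    material = ("air" if hu < -900 else
                "bone" if hu > 300 else "soft_tissue")  # crude HU-range assignment
    return rho * electrons_per_gram[material]           # 10^23 electrons/cm^3

print(voxel_electron_density(40.0))   # a soft-tissue voxel
```

Coarsening `ramp_hu`/`ramp_density` to fewer points reproduces the kind of simplification whose dosimetric effect the study quantifies.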
Abstract:
Recently, a convex hull-based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming a convex hull of secret icons in a set of graphical icons and then clicking randomly within this convex hull. While some rudimentary security issues of this protocol have been discussed, a comprehensive security analysis has been lacking. In this paper, we analyze the security of this convex hull-based protocol. In particular, we show two probabilistic attacks that reveal the user's secret after the observation of only a handful of authentication sessions. These attacks can be efficiently implemented, as their time and space complexities are considerably less than those of a brute-force attack. We show that while the first attack can be mitigated through appropriately chosen values of system parameters, the second attack succeeds with a non-negligible probability even with large system parameter values that cross the threshold of usability.
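The user's response step reduces to a point-in-convex-hull test, which is easy to express with scipy; the icon layout and secret set below are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(11)
icons = rng.uniform(0.0, 1.0, size=(100, 2))   # screen positions of 100 icons
secret = icons[[3, 17, 42, 77, 90]]            # the user's secret icons (hypothetical)

hull = Delaunay(secret)   # triangulating the secret icons covers their convex hull

def valid_click(p):
    """True iff the click lies inside the convex hull of the secret icons."""
    return hull.find_simplex(p) >= 0

print(valid_click(secret.mean(axis=0)))        # centroid: always inside
print(valid_click(np.array([2.0, 2.0])))       # far outside the unit square
```

The probabilistic attacks exploit exactly this structure: every observed click is a point inside the secret hull, so each authentication session leaks information about which icons are secret.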
Abstract:
Recently, a convex hull-based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming a convex hull of secret icons in a set of graphical icons and then clicking randomly within this convex hull. In this paper we show two efficient probabilistic attacks on this protocol which reveal the user's secret after the observation of only a handful of authentication sessions. We show that while the first attack can be mitigated through appropriately chosen values of system parameters, the second attack succeeds with a non-negligible probability even with large system parameter values which cross the threshold of usability.
Abstract:
We propose a new protocol providing cryptographically secure authentication for unaided humans against passive adversaries. We also propose a new generic passive attack on human identification protocols. The attack is an application of Coppersmith's baby-step giant-step algorithm to human identification protocols. Under this attack, the achievable security of some of the best candidates for human identification protocols in the literature is further reduced. We show that our protocol preserves similar usability while achieving better security than these protocols. A comprehensive security analysis is provided which suggests parameters guaranteeing desired levels of security.
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning-control testing, and initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation-model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
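In the frequency domain, the estimation problem is fitting a rational transfer function to frequency-response data. The authors' toolbox adds the model-structure and parameter constraints mentioned above; the sketch below is only the unconstrained core, Levy's linearised least-squares fit, applied to a synthetic damped-oscillator response.

```python
import numpy as np

def levy_fit(w, H, nb, na):
    """Fit H(jw) ~ B(jw)/A(jw), with A monic of degree na, by Levy's
    linearised least squares: minimise |B(jw) - H(jw) A(jw)| over the data."""
    s = 1j * w[:, None]
    cols_b = s ** np.arange(nb + 1)               # numerator coeffs b_0..b_nb
    cols_a = -H[:, None] * s ** np.arange(na)     # denominator coeffs a_0..a_{na-1}
    M = np.hstack([cols_b, cols_a])
    rhs = H * (1j * w) ** na                      # monic leading term moved right
    theta, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                                np.concatenate([rhs.real, rhs.imag]), rcond=None)
    b = theta[:nb + 1]
    a = np.append(theta[nb + 1:], 1.0)            # ascending powers, monic
    return b, a

# Synthetic FRF of a damped oscillator: H(s) = 1 / (s^2 + 0.4 s + 4).
w = np.linspace(0.1, 10, 200)
H = 1.0 / ((1j * w) ** 2 + 0.4 * (1j * w) + 4.0)
print(levy_fit(w, H, nb=0, na=2))   # recovers roughly b=[1], a=[4, 0.4, 1]
```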
Abstract:
Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
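The Stein divergence mentioned above has a simple closed form for symmetric positive definite matrices X and Y: S(X, Y) = log det((X+Y)/2) - (1/2) log det(XY). A sketch of the similarity-vector construction, with random covariance matrices standing in for the image-derived descriptors:

```python
import numpy as np

def stein_divergence(X, Y):
    """Stein divergence between symmetric positive definite matrices."""
    logdet = lambda A: np.linalg.slogdet(A)[1]
    return logdet(0.5 * (X + Y)) - 0.5 * (logdet(X) + logdet(Y))

def similarity_vector(query_cov, representers):
    """Represent a manifold point by its divergences to class representers."""
    return np.array([stein_divergence(query_cov, R) for R in representers])

rng = np.random.default_rng(2)
def random_spd(d=8):                      # stand-in covariance descriptor
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)

representers = [random_spd() for _ in range(5)]
print(similarity_vector(random_spd(), representers))
```

In the approach described above, these similarity vectors are then passed to a discriminative mapping for the final classification.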
Abstract:
A novel gray-box neural network model (GBNNM), comprising a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and the dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits the system dynamics and separately models the system nonlinearities. This model corresponds well with the object system and is easy to build. The GBNNM is embedded online as a reference model of the normal system, to obtain the quantitative residual between the object system output and the GBNNM output. This residual can accurately indicate the fault offset value, so it is suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer (ESO) using the same NNs from the GBNNM (IESONN) is proposed, avoiding the need for knowledge of the ESO nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RW) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss-of-effect (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority.
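The gray-box structure can be sketched as an integrator wrapped around the known linear dynamics, with an MLP modelling only the nonlinearity. The scalar plant, untrained weights and injected fault below are illustrative assumptions, not the satellite reaction-wheel model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny MLP standing in for the trained nonlinearity approximator f(x, u).
W1, b1 = rng.normal(0, 0.3, (16, 2)), np.zeros(16)
W2, b2 = rng.normal(0, 0.3, (1, 16)), np.zeros(1)

def f_nn(x, u):
    return float(W2 @ np.tanh(W1 @ np.array([x, u]) + b1) + b2)

def gbnnm_step(x, u, gain=1.0, dt=0.01, a=-0.5):
    """One integrator step: inherited linear dynamics a*x plus the separately
    modelled nonlinearity; gain < 1 injects a loss-of-effect fault."""
    return x + dt * (a * x + f_nn(x, gain * u))

x_model = x_plant = 0.0
for k in range(200):
    u = np.sin(0.05 * k)
    x_model = gbnnm_step(x_model, u)             # reference GBNNM
    x_plant = gbnnm_step(x_plant, u, gain=0.7)   # plant with 30% LOE fault

print("residual:", x_plant - x_model)   # residual indicates the fault offset
```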
Abstract:
Summary 1. Acoustic methods are used increasingly to survey and monitor bat populations. However, the use of acoustic methods at continental scales can be hampered by the lack of standardized and objective methods to identify all species recorded. This makes comparable continent-wide monitoring difficult, impeding progress towards developing biodiversity indicators, transboundary conservation programmes and monitoring species distribution changes. 2. Here we developed a continental-scale classifier for acoustic identification of bats, which can be used throughout Europe to ensure objective, consistent and comparable species identifications. We selected 1350 full-spectrum reference calls from a set of 15,858 calls of 34 European species, from EchoBank, a global echolocation call library. We assessed 24 call parameters to evaluate how well they distinguish between species and used the 12 most useful to train a hierarchy of ensembles of artificial neural networks to distinguish the echolocation calls of these bat species. 3. Calls are first classified to one of five call-type groups, with a median accuracy of 97.6%. The median species-level classification accuracy is 83.7%, providing robust classification for most European species, and an estimate of classification error for each species. 4. These classifiers were packaged into an online tool, iBatsID, which is freely available, enabling anyone to classify European calls in an objective and consistent way, allowing standardized acoustic identification across the continent. 5. Synthesis and applications. iBatsID is the first freely available and easily accessible continental-scale bat call classifier, providing the basis for standardized, continental acoustic bat monitoring in Europe. This method can provide key information to managers and conservation planners on distribution changes and changes in bat species activity through time.
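The hierarchy described above can be sketched as a two-stage classifier: calls are first assigned to a call-type group, then to a species by a group-specific model. iBatsID uses ensembles of neural networks trained on the EchoBank reference calls; the single networks and random data below only illustrate the structure.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))      # 12 call parameters per call (random stand-ins)
g = rng.integers(0, 5, size=600)    # 5 call-type groups (random labels)
y = rng.integers(0, 34, size=600)   # 34 species (random labels)

# Stage 1: classify the call into a call-type group.
group_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, g)

# Stage 2: one species classifier per call-type group.
species_clfs = {k: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
                   .fit(X[g == k], y[g == k]) for k in range(5)}

def classify(call):
    k = group_clf.predict(call[None, :])[0]
    return species_clfs[k].predict(call[None, :])[0]

print(classify(X[0]))
```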