887 results for dry method processing
Abstract:
Unsaturated soil mechanics is receiving increasing attention from researchers as well as from practicing engineers. However, the need for sophisticated devices to measure unsaturated soil properties, and the time such measurements consume, have kept geotechnical engineers from applying unsaturated soil mechanics to practical geotechnical problems. Applying conventional laboratory devices, with some modifications, to measure unsaturated soil properties can promote the adoption of unsaturated soil mechanics in engineering practice. Therefore, in the present study, a conventional direct shear device was modified to measure unsaturated shear strength parameters at low suction. In particular, for the analysis of rain-induced slope failures, it is important to measure unsaturated shear strength parameters at the low suctions at which slopes become unstable. The modified device was used to measure the unsaturated shear strength of two silty soils at low suction values (0 ~ 50 kPa) achieved by following the drying and wetting paths of the soils' soil-water characteristic curves (SWCCs). The results revealed that the internal friction angle of the soil was not significantly affected by suction or by the drying-wetting SWCCs. The apparent cohesion of the soil increased at a decreasing rate as the suction increased. Further, the apparent cohesion obtained from soil in wetting was greater than that obtained from soil in drying. Shear stress-shear displacement curves obtained from soil specimens subjected to the same net normal stress and different suction values showed a higher initial stiffness and a greater peak stress as the suction increased. In addition, soil became more dilative with increasing suction. A soil in wetting exhibited a slightly higher peak shear stress and more contractive volume change behaviour than soil in drying at the same net normal stress and suction.
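The suction-dependent strength behaviour summarised above is conventionally written using the extended Mohr-Coulomb criterion of unsaturated soil mechanics (the abstract does not state the equation; this is the standard form):

```latex
\tau = c' + (\sigma_n - u_a)\tan\phi' + (u_a - u_w)\tan\phi^{b}
```

where $c'$ is the effective cohesion, $(\sigma_n - u_a)$ the net normal stress, $(u_a - u_w)$ the matric suction, $\phi'$ the internal friction angle, and $\phi^{b}$ the angle describing the gain in shear strength with suction. The suction-dependent "apparent cohesion" reported in the study corresponds to $c' + (u_a - u_w)\tan\phi^{b}$.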
Abstract:
Purpose – The growing debate in the literature indicates that implementing Knowledge Based Urban Development (KBUD) approaches in the urban development process is neither simple nor quick. Many research efforts have therefore been directed toward developing appropriate KBUD frameworks and practical KBUD approaches, but this has led to a fragmented and incoherent methodological landscape. This paper outlines and compares several of the most popular KBUD frameworks selected from the literature. It aims to identify key and common features in the effort to achieve a unified KBUD framework. Design/methodology/approach – This paper reviews, examines and identifies various popular KBUD frameworks discussed in the literature from an urban planner's viewpoint. It employs content analysis, i.e. a research technique used to determine the presence of certain words or concepts within texts or sets of texts. Originality/value – The paper reports on the key and common features of several of the most popular KBUD frameworks. The synthesis of the results is based on the perspective of urban planners. The findings, which encompass a new KBUD framework incorporating the key and common features, will be valuable in setting a platform to achieve a unified method of KBUD. Practical implications – The discussion and results presented in this paper should be significant to researchers and practitioners, and to any cities and countries that are aiming for KBUD. Keywords – Knowledge based urban development, Knowledge based urban development framework, Urban development and knowledge economy
Abstract:
Aim. This paper is a report of a review conducted to identify (a) best practice in information transfer from the emergency department for multi-trauma patients; (b) conduits and barriers to information transfer in trauma care and related settings; and (c) interventions that have an impact on information communication at handover and beyond. Background. Information transfer is integral to effective trauma care, and communication breakdown results in important challenges to this. However, evidence of adequacy of structures and processes to ensure transfer of patient information through the acute phase of trauma care is limited. Data sources. Papers were sourced from a search of 12 online databases and scanning references from relevant papers for 1990–2009. Review methods. The review was conducted according to the University of York’s Centre for Reviews and Dissemination guidelines. Studies were included if they concerned issues that influenced information transfer for patients in healthcare settings. Results. Forty-five research papers, four literature reviews and one policy statement were found to be relevant to parts of the topic, but not all of it. The main issues emerging concerned the impact of communication breakdown in some form, and included communication issues within trauma team processes, lack of structure and clarity during handovers including missing, irrelevant and inaccurate information, distractions and poorly documented care. Conclusion. Many factors influence information transfer but are poorly identified in relation to trauma care. The measurement of information transfer, which is integral to patient handover, has not been the focus of research to date. Nonetheless, documented patient information is considered evidence of care and a resource that affects continuing care.
Abstract:
This paper reports on an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. Aiming to classify tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that the classification performance depends on the performance metric, the characteristics of the datasets and the features used.
Abstract:
A good object representation or object descriptor is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrarily shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships between the extracted color and texture features. A maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using an SVM as the benchmark classifier and is applied to object-based vegetation species classification using high-spatial-resolution aerial imagery. Experimental results demonstrate that a great improvement can be achieved by using the proposed feature fusion method.
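As a rough illustration of the fusion pipeline described above (concatenated color and uniform-LBP histograms projected with kernel PCA), here is a minimal sketch using scikit-learn. The feature dimensions and random data are placeholders, not the paper's; the intrinsic-dimensionality estimation step is omitted and a fixed component count used instead.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Hypothetical per-object features for 100 image-objects:
color_hist = rng.random((100, 48))  # e.g. 16 bins x 3 color channels
lbp_hist = rng.random((100, 59))    # uniform LBP histogram (P=8 -> 59 bins)

# Concatenate color and texture, then let kernel PCA capture nonlinear
# relationships between the two feature groups in one fused descriptor.
fused = np.hstack([color_hist, lbp_hist])
kpca = KernelPCA(n_components=20, kernel="rbf")
fused_low = kpca.fit_transform(fused)
print(fused_low.shape)  # (100, 20)
```

The reduced descriptor `fused_low` would then feed an SVM classifier, as in the evaluation described above.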
Abstract:
Acoustic emission (AE) is the phenomenon in which high-frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface, and subsequently analysing the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. However, several challenges remain. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long-term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals, together with the use of pattern recognition algorithms, is among the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for the analysis of experimental data, with the overall aim of finding an improved method for source identification and discrimination, with particular focus on the monitoring of steel bridges.
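As a toy illustration of the frequency-content analysis mentioned above, the sketch below extracts the dominant frequency of a synthetic decaying burst, a simple feature that could help separate signal classes. All signal parameters here are invented for illustration and are not taken from the experiments.

```python
import numpy as np

# Synthetic "burst": a decaying 150 kHz tone sampled at 1 MHz, standing in
# for a recorded AE hit (frequencies chosen purely for illustration).
fs = 1_000_000
t = np.arange(2048) / fs
signal = np.exp(-t * 2e4) * np.sin(2 * np.pi * 150_000 * t)

# Frequency content via the FFT; the location of the spectral peak is one
# simple descriptor usable in a pattern recognition pipeline.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # close to 150 kHz, within the FFT bin resolution
```

In practice such spectral features would be computed per recorded hit and passed to a classifier for source discrimination.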
Abstract:
Several key issues need to be resolved before an efficient and reproducible Agrobacterium-mediated sugarcane transformation method can be developed for a wider range of sugarcane cultivars. These include loss of morphogenetic potential in sugarcane cells after Agrobacterium-mediated transformation, effect of exposure to abiotic stresses during in vitro selection, and most importantly the hypersensitive cell death response of sugarcane (and other nonhost plants) to Agrobacterium tumefaciens. Eight sugarcane cultivars (Q117, Q151, Q177, Q200, Q208, KQ228, QS94-2329, and QS94-2174) were evaluated for loss of morphogenetic potential in response to the age of the culture, exposure to Agrobacterium strains, and exposure to abiotic stresses during selection. Corresponding changes in the polyamine profiles of these cultures were also assessed. Strategies were then designed to minimize the negative effects of these factors on the cell survival and callus proliferation following Agrobacterium-mediated transformation. Some of these strategies, including the use of cell death protector genes and regulation of intracellular polyamine levels, will be discussed.
Abstract:
In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high dimensional spaces computed from a corpus of text. These high dimensional spaces are often referred to as Semantic Spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in Information Retrieval and Quantum Mechanics and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
Abstract:
For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: firstly, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS, by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing.
Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework is proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing for a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD, by improving the utility of such systems in a wide range of applications.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.
Index Terms — Model order estimation, model selection, information theoretic criteria, bootstrap
1. INTRODUCTION
Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model.
This makes the comparison between the model-order selection algorithms difficult as within the same model with a given order one could find an example for which one of the methods performs favourably well or fails [6, 8]. Our aim is to improve the performance of the model order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial model order search within the given model order. Understandably, the improvement in the performance of the model order estimation is at the expense of additional computational complexity.
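As background to the searching procedures discussed above, here is a minimal sketch of the traditional full-model order search driven by AIC for a polynomial regression. The data, noise level and candidate orders are illustrative; the paper's partial-model search would additionally explore sub-models within each candidate order.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(-1, 1, n)
# True model is a cubic (order 3) observed in Gaussian noise.
y = 1.0 + 2.0 * x - 1.5 * x**3 + 0.1 * rng.standard_normal(n)

def aic(y, y_hat, k):
    # AIC for least squares with Gaussian noise: n*log(RSS/n) + 2k.
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Full-model order search: for each candidate order, fit the model with
# ALL of its parameters and score it; pick the order with minimal AIC.
scores = []
for order in range(0, 8):
    coeffs = np.polyfit(x, y, order)
    y_hat = np.polyval(coeffs, x)
    scores.append(aic(y, y_hat, order + 1))
best_order = int(np.argmin(scores))
print(best_order)  # the cubic (or close to it) at this moderate SNR
```

A partial-model variant would, within each order, also score sub-models with some parameters removed, which is where the reported low-SNR gains come from.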
Abstract:
A survey was conducted to identify viruses affecting dry bean (Phaseolus vulgaris) and cowpea (Vigna unguiculata) in Lebanon. Three hundred and thirty-four samples exhibiting virus-like symptoms were collected from 13 different locations during the fall growing season of 1984. Samples were stored at 20°C until they were tested by the enzyme-linked immunosorbent assay (ELISA) for the presence of blackeye cowpea mosaic virus (BlCMV), bean yellow mosaic virus (BYMV), bean common mosaic virus (BCMV) and cucumber mosaic virus (CMV). In preliminary tests, the extraction buffer 0.1 M phosphate + 0.1 M EDTA, pH 7.4, was found to be far better than the standard extraction buffer and, accordingly, was used for virus extraction for all field samples. The results indicated that around 50% of the bean samples tested were infected with BlCMV. The incidences of BCMV, BYMV and CMV in the samples tested were 4, 4 and 1.7%, respectively. BlCMV was detected in 10 locations, whereas BYMV, BCMV and CMV were found in 1, 4 and 4 locations, respectively. Mixed infections such as BCMV+BlCMV, BCMV+CMV, BYMV+CMV and BlCMV+BCMV+CMV were detected. In 35% of the samples assayed, the causal virus was not identified.
Abstract:
In this paper, an enriched radial point interpolation method (e-RPIM) is developed for the determination of crack tip fields. In e-RPIM, the conventional RBF interpolation is augmented with suitable trigonometric basis functions to reflect the properties of the stresses in crack tip fields. The performance of the enriched RBF meshfree shape functions is first investigated by fitting different surfaces. The surface fitting results show that, compared with the conventional RBF shape function, the enriched RBF shape function has: (1) similar accuracy in fitting a polynomial surface; (2) much better accuracy in fitting a trigonometric surface; and (3) similar interpolation stability, without an increase in the condition number of the RBF interpolation matrix. Hence the enriched RBF shape function not only possesses all the advantages of the conventional RBF shape function, but can also accurately reflect the properties of the stresses in crack tip fields. The system of equations for crack analysis is then derived based on the enriched RBF meshfree shape function and the meshfree weak form. Several linear fracture mechanics problems are simulated using this newly developed e-RPIM method. The results demonstrate that the present e-RPIM is very accurate and stable, and has good potential to develop into a practical simulation tool for fracture mechanics problems.
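For reference, conventional (unenriched) RBF interpolation of scattered data, the baseline for the surface-fitting comparison above, can be sketched with SciPy. The trigonometric test surface below is illustrative, not one from the paper, and this sketch contains no crack-tip enrichment terms.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Scattered sample points of a smooth trigonometric surface on [-1, 1]^2.
pts = rng.uniform(-1, 1, size=(400, 2))
vals = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Conventional RBF interpolant built from the scattered samples.
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

# Evaluate away from the domain edges and check the interpolation error.
test_pts = rng.uniform(-0.9, 0.9, size=(50, 2))
truth = np.sin(np.pi * test_pts[:, 0]) * np.cos(np.pi * test_pts[:, 1])
err = np.max(np.abs(rbf(test_pts) - truth))
print(err)  # small for a smooth surface with this many samples
```

The e-RPIM idea is to add trigonometric terms to the interpolation basis so that crack-tip stress behaviour is captured exactly rather than approximated by smooth RBFs alone.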
Abstract:
The development and use of a virtual assessment tool for a signal processing unit is described. It allows students to take a test from anywhere using a web browser to connect to the university server that hosts the test. While student responses are of the multiple choice type, they have to work out problems to arrive at the answer to be entered. CGI programming is used to verify student identification information and record their scores as well as provide immediate feedback after the test is complete. The tool has been used at QUT for the past 3 years and student feedback is discussed. The virtual assessment tool is an efficient alternative to marking written assignment reports that can often take more hours than actual lecture hall contact from a lecturer or tutor. It is especially attractive for very large classes that are now the norm at many universities in the first two years.
Abstract:
Diffusion is the process that leads to the mixing of substances as a result of spontaneous and random thermal motion of individual atoms and molecules. It was first detected by the English botanist Robert Brown in 1827, and the phenomenon became known as ‘Brownian motion’. More specifically, the motion observed by Brown was translational diffusion – thermal motion resulting in random variations of the position of a molecule. This type of motion was given a correct theoretical interpretation in 1905 by Albert Einstein, who derived the relationship between temperature, the viscosity of the medium, the size of the diffusing molecule, and its diffusion coefficient. It is translational diffusion that is indirectly observed in MR diffusion-tensor imaging (DTI). The relationship obtained by Einstein provides the physical basis for using translational diffusion to probe the microscopic environment surrounding the molecule.
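The Einstein relationship referred to above is the Stokes-Einstein equation for the translational diffusion coefficient of a spherical particle of radius $r$ in a medium of viscosity $\eta$ at absolute temperature $T$:

```latex
D = \frac{k_B T}{6 \pi \eta r}
```

with $k_B$ the Boltzmann constant. The resulting one-dimensional mean-squared displacement over a time $t$ is $\langle x^2 \rangle = 2Dt$, which is the quantity that diffusion-weighted MR sequences probe indirectly.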
Abstract:
In this paper, a method has been developed for estimating pitch angle, roll angle and aircraft body rates based on horizon detection and temporal tracking using a forward-looking camera, without assistance from other sensors. Using an image processing front-end, we select several lines in an image that may or may not correspond to the true horizon. The optical flow at each candidate line is calculated, which may be used to measure the body rates of the aircraft. Using an Extended Kalman Filter (EKF), the aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To test the accuracy of the algorithm, two flights were conducted, one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other using a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively. The benefits of selecting and tracking the horizon using a motion model and optical flow, rather than naively relying on the image processing front-end, are also demonstrated.
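The statistical association step described above can be illustrated with a generic chi-square gate on the Mahalanobis distance of the innovation, a standard way to accept or reject a candidate measurement against an EKF prediction. All numbers below are invented for illustration; the paper's actual measurement model combines horizon-line parameters and optical flow.

```python
import numpy as np

def gate(z, z_pred, S, threshold=9.21):
    # Accept candidate z only if the squared Mahalanobis distance of the
    # innovation passes a chi-square gate (9.21 = 99% gate, 2 dof).
    innov = z - z_pred
    d2 = innov @ np.linalg.solve(S, innov)
    return d2 <= threshold

# Hypothetical predicted 2-D measurement and innovation covariance.
z_pred = np.array([0.10, 0.02])
S = np.diag([0.01, 0.01])

print(gate(np.array([0.12, 0.03]), z_pred, S))  # small innovation -> True
print(gate(np.array([0.60, 0.50]), z_pred, S))  # far outlier -> False
```

A candidate horizon line that passes the gate would then be fed to the EKF update; lines failing it are discarded as front-end false detections.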