530 results for extraction methods
Abstract:
This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator, of a size similar to that required for lunar excavation, was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for complete autonomous digging and dumping of material, taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.
Abstract:
This paper presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim at identifying those methods that achieve better tracking, have low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach for solving this problem, introduced herein, relies on several practical metrics. Their choice is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Apart from the quantitative assessment, we also raise aspects which are difficult to quantify, but which must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
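As a rough illustration of the kind of practical metrics such a controller comparison can rest on: the abstract does not name its specific criteria, so the metrics below (IAE, ISE, ITAE and RMSE of the tracking error) are assumptions drawn from common control practice, not the paper's own set.

```python
import numpy as np

def tracking_metrics(t, reference, response):
    """Common position-tracking metrics for comparing controllers.

    The error signal is e(t) = reference(t) - response(t); the
    integrals are approximated with the trapezoidal rule.
    """
    e = reference - response
    iae = np.trapz(np.abs(e), t)        # Integral of Absolute Error
    ise = np.trapz(e**2, t)             # Integral of Squared Error
    itae = np.trapz(t * np.abs(e), t)   # time-weighted IAE, penalises slow settling
    rmse = np.sqrt(np.mean(e**2))
    return {"IAE": iae, "ISE": ise, "ITAE": itae, "RMSE": rmse}

# Hypothetical step responses of two controllers, for illustration only
t = np.linspace(0.0, 5.0, 1000)
ref = np.ones_like(t)                              # unit step command
resp_a = 1.0 - np.exp(-3.0 * t)                    # fast, non-oscillatory
resp_b = 1.0 - np.exp(-1.5 * t) * np.cos(2.0 * t)  # slower, oscillatory
print(tracking_metrics(t, ref, resp_a))
print(tracking_metrics(t, ref, resp_b))
```

As the abstract notes, the ranking of the two controllers can change depending on which of these numbers is taken as the criterion.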
Abstract:
As a special type of novel flexible structure, tensegrity holds promise for many potential applications in fields such as materials science, biomechanics, and civil and aerospace engineering. Rhombic systems are an important class of tensegrity structures, in which each bar constitutes the longest diagonal of a rhombus of four strings. In this paper, we address design methods for rhombic structures based on the idea that many tensegrity structures can be constructed by assembling one-bar elementary cells. By analyzing the properties of rhombic cells, we first develop two novel schemes, namely, a direct enumeration scheme and a cell-substitution scheme. In addition, a facile and efficient method is presented to integrate several rhombic systems into a larger tensegrity structure. To illustrate the applications of these methods, some novel rhombic tensegrity structures are constructed.
Abstract:
Background: Up to 1% of adults will suffer from leg ulceration at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers is aimed at reducing the pressure, either by removing or repairing the veins, or by applying compression bandages or stockings to reduce the pressure in the veins. The vast majority of venous ulcers are healed using compression bandages. Once healed, they often recur, and so it is customary to continue applying compression in the form of bandages, tights, stockings or socks in order to prevent recurrence. Compression bandages or hosiery (tights, stockings, socks) are often applied for ulcer prevention.
Objectives: To assess the effects of compression hosiery (socks, stockings, tights) or bandages in preventing the recurrence of venous ulcers, and to determine whether there is an optimum pressure/type of compression to prevent recurrence of venous ulcers.
Search methods: The searches for the review were first undertaken in 2000. For this update we searched the Cochrane Wounds Group Specialised Register (October 2007), the Cochrane Central Register of Controlled Trials (CENTRAL) - The Cochrane Library 2007 Issue 3, Ovid MEDLINE (1950 to September Week 4 2007), Ovid EMBASE (1980 to 2007 Week 40) and Ovid CINAHL (1982 to October Week 1 2007).
Selection criteria: Randomised controlled trials evaluating compression bandages or hosiery for preventing venous leg ulcers.
Data collection and analysis: Data extraction and assessment of study quality were undertaken by two authors independently.
Results: No trials compared recurrence rates with and without compression. One trial (300 patients) compared high (UK Class 3) compression hosiery with moderate (UK Class 2) compression hosiery. An intention-to-treat analysis found no significant reduction in recurrence at five years' follow-up associated with high compression hosiery compared with moderate compression hosiery (relative risk of recurrence 0.82, 95% confidence interval 0.61 to 1.12). This analysis would tend to underestimate the effectiveness of high compression hosiery because a significant proportion of people changed from high compression to medium compression hosiery. Compliance rates were significantly higher with medium compression than with high compression hosiery. One trial (166 patients) found no statistically significant difference in recurrence between two types of medium (UK Class 2) compression hosiery (relative risk of recurrence with Medi was 0.74, 95% confidence interval 0.45 to 1.2). Both trials reported that not wearing compression hosiery was strongly associated with ulcer recurrence, which is circumstantial evidence that compression reduces ulcer recurrence. No trials were found which evaluated compression bandages for preventing ulcer recurrence.
Authors' conclusions: No trials compared compression versus no compression for prevention of ulcer recurrence. Not wearing compression was associated with recurrence in both studies identified in this review. This is circumstantial evidence of the benefit of compression in reducing recurrence. Recurrence rates may be lower in high compression hosiery than in medium compression hosiery, and therefore patients should be offered the strongest compression with which they can comply. Further trials are needed to determine the effectiveness of hosiery prescribed in other settings, i.e. in the UK community and in countries other than the UK.
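For readers unfamiliar with the relative risk figures quoted above, the sketch below shows how a relative risk and its 95% confidence interval are computed from a 2x2 table of recurrence counts. The counts are purely hypothetical and are not the trial's data; only the formula is standard.

```python
import math

def relative_risk(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk with a 95% Wald confidence interval on the log scale."""
    risk_t = events_t / n_t   # recurrence risk, treatment arm
    risk_c = events_c / n_c   # recurrence risk, comparison arm
    rr = risk_t / risk_c
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts, for illustration only
print(relative_risk(events_t=45, n_t=150, events_c=55, n_c=150))
```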
Abstract:
The paper compares three different methods of inclusion of current phasor measurements by phasor measurement units (PMUs) in the conventional power system state estimator. For each of the three methods, comprehensive formulation of the hybrid state estimator in the presence of conventional and PMU measurements is presented. The performance of the state estimator in the presence of conventional measurements and optimally placed PMUs is evaluated in terms of convergence characteristics and estimator accuracy. Test results on the IEEE 14-bus and IEEE 300-bus systems are analyzed to determine the best possible method of inclusion of PMU current phasor measurements.
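The paper's three inclusion methods are not reproduced here, but hybrid estimators of this kind build on the standard weighted least squares (WLS) Gauss-Newton iteration sketched below. The interface is hypothetical; the PMU phasor measurements are simply assumed to be stacked into the measurement vector z alongside the conventional ones, which is roughly the design choice the three methods differ on.

```python
import numpy as np

def wls_state_estimate(z, h, H, W, x0, tol=1e-6, max_iter=20):
    """Gauss-Newton WLS state estimation (generic sketch).

    z  : measurement vector (conventional SCADA plus PMU phasors)
    h  : function mapping state x to predicted measurements h(x)
    H  : function returning the measurement Jacobian at x
    W  : diagonal weight matrix (inverse measurement variances)
    x0 : initial state guess (e.g. a flat start)
    """
    x = x0.copy()
    for _ in range(max_iter):
        r = z - h(x)                  # measurement residual
        Hx = H(x)
        G = Hx.T @ W @ Hx             # gain matrix
        dx = np.linalg.solve(G, Hx.T @ W @ r)
        x += dx
        if np.max(np.abs(dx)) < tol:  # convergence test on the state update
            break
    return x
```

Current phasors can enter z in polar or rectangular coordinates, and that choice affects the conditioning of G, which is why convergence characteristics are a natural comparison criterion.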
Abstract:
As part of ongoing research on the development of a longer-life insulated rail joint (IRJ), this paper reports a field experiment and a simplified 2D numerical model for investigating the behaviour of the rail web in the vicinity of the endpost of an IRJ due to wheel passages. A simplified 2D plane stress finite element model is used to simulate the wheel-rail rolling contact impact at the IRJ. This model is validated using data from a strain-gauged IRJ that was installed in a heavy haul network; data in terms of the vertical and shear strains at specific positions of the IRJ during train passage were captured and compared with the results of the FE model. The comparison indicates satisfactory agreement between the FE model and the field testing. Furthermore, it demonstrates that the experimental and numerical analyses reported in this paper provide a valuable datum for developing further insight into the behaviour of IRJs under wheel impacts.
Abstract:
Home Automation (HA) has emerged as a prominent field for researchers and investors confronting the challenge of penetrating the average home user market with products and services emerging from a technology-based vision. In spite of many technology contributions, there is a latent demand for affordable and pragmatic assistive technologies for pro-active handling of complex lifestyle-related problems faced by home users. This study pioneered the development of an Initial Technology Roadmap for HA (ITRHA) that formulates a need-based vision of 10-15 years, identifying market, product and technology investment opportunities, focusing on those aspects of HA contributing to efficient management of home and personal life. The concept of the Family Life Cycle is developed to understand the temporal needs of the family. In order to formally describe a coherent set of family processes, their relationships, and their interaction with external elements, a reference model named Family System is established that identifies External Entities, 7 major Family Processes, and 7 subsystems: Finance, Meals, Health, Education, Career, Housing, and Socialisation. Analysis of these subsystems reveals Soft, Hard and Hybrid processes. Rectifying the lack of formal methods for eliciting future user requirements and reassessing evolving market needs, this study developed a novel method called Requirement Elicitation of Future Users by Systems Scenario (REFUSS), integrating process modelling and scenario techniques within the framework of roadmapping. REFUSS is used to systematically derive process automation needs, relating the process knowledge to future user characteristics identified from scenarios created to visualise different futures with richly detailed information on lifestyle trends, thus enabling learning about future requirements. Revealing an addressable market size estimated at billions of dollars per annum, this research has developed innovative ideas on software-based products, including Document Management Systems facilitating automated collection and easy retrieval of all documents, Information Management Systems automating information services, and Ubiquitous Intelligent Systems empowering highly mobile home users with ambient intelligence. Other product ideas include versatile robotic devices, a Kitchen Hand and a Cleaner Arm, that can be time saving. Materialisation of these products requires technology investment, initiating further research in the areas of data extraction and information integration, as well as manipulation and perception, sensor-actuator systems, tactile sensing, odour detection, and robotic controllers. This study recommends new policies on electronic data delivery from service providers as well as new standards on XML-based document structure and format.
Abstract:
During the past three decades, the subject of fractional calculus (that is, calculus of integrals and derivatives of arbitrary order) has gained considerable popularity and importance, mainly due to its demonstrated applications in numerous diverse and widespread fields in science and engineering. For example, fractional calculus has been successfully applied to problems in systems biology, physics, chemistry and biochemistry, hydrology, medicine, and finance. In many cases these new fractional-order models are more adequate than the previously used integer-order models, because fractional derivatives and integrals enable the description of the memory and hereditary properties inherent in various materials and processes that are governed by anomalous diffusion. Hence, there is a growing need to find the solution behaviour of these fractional differential equations. However, the analytic solutions of most fractional differential equations generally cannot be obtained. As a consequence, approximate and numerical techniques are playing an important role in identifying the solution behaviour of such fractional equations and exploring their applications. The main objective of this thesis is to develop new effective numerical methods and supporting analysis, based on the finite difference and finite element methods, for solving time, space and time-space fractional dynamical systems involving fractional derivatives in one and two spatial dimensions. A series of five published papers and one manuscript in preparation will be presented on the solution of the space fractional diffusion equation, the space fractional advection-dispersion equation, the time and space fractional diffusion equation, the time and space fractional Fokker-Planck equation with a linear or non-linear source term, and the fractional cable equation involving two time fractional derivatives, respectively. One important contribution of this thesis is the demonstration of how to choose different approximation techniques for different fractional derivatives. Special attention has been paid to the Riesz space fractional derivative, due to its important applications in the fields of groundwater flow, systems biology and finance. We present three numerical methods to approximate the Riesz space fractional derivative, namely the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). The first two methods are based on the finite difference method, while the MTM allows discretisation in space using either the finite difference or finite element methods. Furthermore, we prove the equivalence of the Riesz fractional derivative and the fractional Laplacian operator under homogeneous Dirichlet boundary conditions – a result that had not previously been established. This result justifies the aforementioned use of the MTM to approximate the Riesz fractional derivative. After spatial discretisation, the time-space fractional partial differential equation is transformed into a system of fractional-in-time differential equations. We then investigate numerical methods to handle time fractional derivatives, be they of Caputo type or Riemann-Liouville type. This leads to new methods utilising either finite difference strategies or the Laplace transform method for advancing the solution in time. The stability and convergence of our proposed numerical methods are also investigated. Numerical experiments are carried out in support of our theoretical analysis. We also emphasise that the numerical methods we develop are applicable to many other types of fractional partial differential equations.
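Of the three approximation methods named above, the shifted Grünwald scheme is the simplest to sketch. The following is a minimal, generic illustration of the shifted (p = 1) Grünwald-Letnikov approximation of a left Riemann-Liouville derivative on a uniform grid; it is a textbook construction under those assumptions, not the thesis's own code.

```python
import numpy as np
from math import gamma

def grunwald_weights(alpha, n):
    """Weights w_k = (-1)^k * C(alpha, k), computed by the standard recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def shifted_grunwald(f, alpha, h):
    """Shifted (p = 1) Grunwald approximation of the left Riemann-Liouville
    derivative of order alpha on a uniform grid, assuming f(0) = 0:

        D^alpha f(x_i) ~= h^(-alpha) * sum_{k=0}^{i+1} w_k * f(x_{i-k+1})
    """
    n = len(f)
    w = grunwald_weights(alpha, n)
    df = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):        # one-node shift in the stencil
            j = i - k + 1
            if 0 <= j < n:
                df[i] += w[k] * f[j]
    return df / h**alpha

# Sanity check on f(x) = x^2, where D^alpha x^2 = 2 x^(2-alpha) / Gamma(3-alpha)
x = np.linspace(0.0, 1.0, 201)
approx = shifted_grunwald(x**2, 0.5, x[1] - x[0])
exact = 2.0 * x**1.5 / gamma(2.5)
# Last grid point lacks its shifted neighbour, so exclude it from the check
print(np.max(np.abs(approx - exact)[:-1]))   # first-order discretisation error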
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
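The noise-shaping bit allocation step can be illustrated with the classical high-rate allocation rule, in which each subband's rate deviates from the average by half the log-ratio of its weighted variance to the geometric mean. The sketch below is a generic version with hypothetical subband variances and perceptual weights, not the thesis's exact procedure.

```python
import numpy as np

def allocate_bits(variances, weights, avg_bits):
    """Classical high-rate bit allocation across subbands:

        b_i = avg_bits + 0.5 * log2(w_i * var_i / GM(w * var))

    where w_i is a perceptual (noise-shaping) weight and GM is the
    geometric mean over all subbands.
    """
    wv = np.asarray(weights) * np.asarray(variances)
    gm = np.exp(np.mean(np.log(wv)))          # geometric mean of weighted variances
    b = avg_bits + 0.5 * np.log2(wv / gm)
    # Clip negatives; a refined scheme would redistribute the surplus
    return np.maximum(b, 0.0)

# Hypothetical subband variances and visual-sensitivity weights
var = np.array([25.0, 9.0, 4.0, 1.0])
w = np.array([1.0, 0.8, 0.5, 0.2])
print(allocate_bits(var, w, avg_bits=2.0))    # more bits to energetic, visible bands
```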
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
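The product-code idea itself is easy to sketch: the input vector is split into subvectors, each quantized against its own small codebook, so the joint search cost drops from the product of the codebook sizes to their sum. A minimal illustration follows, with hypothetical dimensions and random codebooks rather than the thesis's trained codebooks.

```python
import numpy as np

def pcvq_encode(vector, codebooks):
    """Encode with product-code VQ: split the vector into subvectors and
    pick the nearest codeword in each sub-codebook independently.

    codebooks: list of (N_i, d_i) arrays whose d_i sum to len(vector).
    Returns one index per sub-codebook.
    """
    indices, start = [], 0
    for cb in codebooks:
        sub = vector[start:start + cb.shape[1]]
        dists = np.sum((cb - sub) ** 2, axis=1)   # squared Euclidean distance
        indices.append(int(np.argmin(dists)))
        start += cb.shape[1]
    return indices

def pcvq_decode(indices, codebooks):
    """Reconstruct by concatenating the selected codewords."""
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])

# Hypothetical example: a 10-dimensional spectral parameter vector split
# into two 5-dimensional pieces with 32-entry codebooks (5 + 5 = 10 bits).
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((32, 5)) for _ in range(2)]
x = rng.standard_normal(10)
idx = pcvq_encode(x, codebooks)
print(idx, np.linalg.norm(x - pcvq_decode(idx, codebooks)))
```

The independent per-subvector search is exactly what makes the encoder tractable, and it is also why the index streams retain statistical structure that a subsequent lossless coder can exploit.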